As Artificial Intelligence expands across the globe, the proper use of the technology has remained a persistent source of concern. While numerous nations are still drafting frameworks to regulate its responsible use, big tech corporations are now moving toward a collective of their own to support responsible AI adoption.
Amid the growing calls for AI regulation, Google, Anthropic, Microsoft, and OpenAI have founded a new AI safety body. The organization, known as the Frontier Model Forum, is dedicated to the safe and responsible development of frontier AI models. With this safety body, the tech titans have signaled that they care about more than building ground-breaking innovations; they also want to remain in step with the rest of the world.
According to Google’s blog, the organization will draw on the technical and operational expertise of its member firms to benefit the entire AI ecosystem, advancing technical evaluations and benchmarks and developing a public library of solutions to support industry best practices and standards.
The organization has four main goals: advancing AI safety research, identifying best practices, collaborating with policymakers, academics, civil society organizations, and companies, and supporting efforts to develop applications that can help solve society’s most pressing problems.
By fostering conversations and activities on AI safety and responsibility, Google says, the organization intends to promote the responsible development and deployment of frontier AI models. The forum’s main objectives will be to advance AI safety research by coordinating efforts in areas such as interpretability, adversarial robustness, and oversight; to identify best practices for mitigating potential risks; and to facilitate secure information sharing among governments, companies, and other stakeholders on AI safety and risks. The forum will also support the ongoing initiatives of governments and international organizations.