The Artificial Intelligence sector has been growing tremendously, bringing immense advancement and opportunity across various industries. This development, however, also brings new kinds of security risks. To tackle these, companies pioneering AI technology, including Anthropic, Google, Microsoft, and OpenAI, have undertaken an initiative called the Frontier Model Forum.
As per Business Today, the Frontier Model Forum is an industry-led body whose mission is to focus on the safe and careful development of AI, particularly in the context of frontier models. Frontier models are large-scale machine-learning models that exceed the capabilities of today's most advanced systems and possess a broad range of abilities, making them powerful tools with potentially significant impact on society.
The Forum is looking to establish an advisory committee, adopt a charter, and secure funding. It aims to contribute significantly to ongoing research in AI safety by fostering collaboration and knowledge sharing among member organisations, and hopes that this exchange will help identify and address potential security vulnerabilities in frontier models.
The Forum will also work on guidelines and a standardised set of best practices to ensure the safe and ethical use of these powerful AI tools. Alongside this, it will engage with a range of stakeholders to create a safe environment for, and promote the responsible adoption of, AI technology.