
Coalition for Secure AI Unveiled by Google

The rapid expansion of AI calls for a security framework and standards that can keep pace with its growth. Last year, Google introduced the Secure AI Framework (SAIF) as a first step toward that goal. Putting any industry framework into practice, however, requires collaboration with other organizations and a dedicated forum to make that cooperation possible.

Today at the Aspen Security Forum, together with industry partners, Google is announcing the launch of the Coalition for Secure AI (CoSAI). The coalition has been over a year in the making and aims to advance comprehensive security measures that address the distinct risks of AI, both in the current landscape and in future scenarios.

Notable founding members of CoSAI include Amazon, Anthropic, Chainguard, Cisco, Cohere, GenLab, IBM, Intel, Microsoft, NVIDIA, OpenAI, PayPal, and Wiz. The coalition will be managed under OASIS Open, the global standards and open-source consortium.

Overview of CoSAI’s Initial Focus Areas

CoSAI is committed to supporting the adoption of common security standards and best practices by individuals, developers, and businesses investing in AI security. The coalition is announcing its first three focus areas, to be addressed in collaboration with industry and academic partners:

  1. Software supply chain security for AI systems: Google is extending SLSA Provenance to AI models to strengthen AI security by evaluating model provenance, managing third-party model risks, and assessing the provenance of full AI applications. This effort builds on existing supply chain security principles; a minimal sketch of what model provenance might look like follows this list.
  2. Enhancing the preparedness of defenders for evolving cybersecurity challenges: This workstream will develop a framework to assist defenders in identifying investments and mitigation strategies to counter cybersecurity threats arising from AI advancements.
  3. Establishing AI Security Governance: CoSAI will create resources like a risk and control taxonomy, a checklist, and a scorecard to aid in readiness assessments and monitoring of AI product security.
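
To make the first focus area concrete, the sketch below shows what a SLSA-style provenance attestation for a trained model artifact could look like, expressed in the in-toto Statement layout. This is an illustrative assumption rather than CoSAI's actual specification: the builder ID, pipeline URI, dataset parameter, and model filename are hypothetical placeholders.

```python
# Illustrative sketch: a SLSA-style provenance statement for a model artifact.
# The builder id, buildType URI, and dataset version below are hypothetical.
import hashlib
import json


def sha256_of(path: str) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def model_provenance(model_path: str) -> dict:
    """Build an in-toto Statement whose subject is the model artifact and
    whose predicate records, SLSA provenance style, how it was produced."""
    return {
        "_type": "https://in-toto.io/Statement/v1",
        "subject": [
            {"name": model_path, "digest": {"sha256": sha256_of(model_path)}}
        ],
        "predicateType": "https://slsa.dev/provenance/v1",
        "predicate": {
            "buildDefinition": {
                # Hypothetical identifiers for the training pipeline and inputs.
                "buildType": "https://example.com/train-pipeline/v1",
                "externalParameters": {"datasetVersion": "2024-07-01"},
            },
            "runDetails": {
                "builder": {"id": "https://example.com/trainer"},
            },
        },
    }


if __name__ == "__main__":
    # Write a placeholder artifact so the sketch runs end to end.
    with open("model.safetensors", "wb") as f:
        f.write(b"placeholder model weights")
    print(json.dumps(model_provenance("model.safetensors"), indent=2))
```

In practice such a statement would be signed by the training infrastructure and verified before a downstream application loads the model, which is the pattern SLSA Provenance already applies to conventional software artifacts.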

Additionally, CoSAI will collaborate with organizations such as the Frontier Model Forum, Partnership on AI, the Open Source Security Foundation, and MLCommons to promote responsible AI initiatives.

Future Outlook

As AI technologies progress, Google remains committed to evolving effective risk management strategies in parallel. The industry’s support for enhancing AI safety and security has been encouraging, with substantial contributions from developers, experts, and companies of all sizes. CoSAI marks a crucial milestone in this journey towards ensuring secure AI implementation and usage.

To learn more about supporting CoSAI or to stay updated on its progress, visit coalitionforsecureai.org. For additional insights into Google’s AI security initiatives, visit our Secure AI Framework page.
