Introducing CoSAI: Securing AI Systems Through Open Collaboration

The Coalition for Secure AI (CoSAI) was announced today at the Aspen Security Forum. Hosted by the OASIS global standards body, CoSAI is an open-source initiative designed to give all practitioners and developers the guidance and tools they need to create Secure-by Design AI systems. CoSAI will foster a collaborative ecosystem to share open-source methodologies, standardized frameworks, and tools. 

CoSAI brings together a diverse range of stakeholders, including industry leaders, academics, and other experts, to address the fragmented landscape of AI security.

- CoSAI's founding Premier Sponsors are Google, IBM, Intel, Microsoft, NVIDIA, and PayPal. Additional founding Sponsors include Amazon, Anthropic, Cisco, Chainguard, Cohere, GenLab, OpenAI, and Wiz.

- CoSAI is an initiative to enhance trust and security in AI use and deployment.

- CoSAI's scope includes securely building, integrating, deploying, and operating AI systems, focusing on mitigating risks such as model theft, data poisoning, prompt injection, scaled abuse, and inference attacks. 

- The project aims to develop comprehensive security measures that address AI systems' classical and unique risks. 

- CoSAI is an open-source community led by a Project Governing Board, which advances and manages its overall technical agenda, and a Technical Steering Committee of AI experts from academia and industry who will oversee its workstreams.

The Need for CoSAI

Artificial intelligence (AI) is rapidly transforming our world and holds immense potential to solve complex problems. To ensure trust in AI and drive responsible development, it is critical to develop and share methodologies that keep security at the forefront, identify and mitigate potential vulnerabilities in AI systems, and lead to the creation of systems that are Secure-by-Design.

Currently, securing AI and AI applications and services is a fragmented endeavor. Developers grapple with a patchwork of guidelines and standards which are often inconsistent and siloed. Assessing and mitigating AI-specific and prevalent risks without clear best practices and standardized approaches is a significant challenge for even the most experienced organizations.

With the support of industry leaders and experts, CoSAI is poised to make significant strides in establishing standardized practices that enhance AI security and build trust among stakeholders globally.

"CoSAI's establishment was rooted in the necessity of democratizing the knowledge and advancements essential for the secure integration and deployment of AI," said David LaBianca, Google, CoSAI Governing Board co-chair. "With the help of OASIS Open, we're looking forward to continuing this work and collaboration among leading companies, experts, and academia."

"We are committed to collaborating with organizations at the forefront of responsible and secure AI technology. Our goal is to eliminate redundancy and amplify our collective impact through key partnerships that focus on critical topics," said Omar Santos, Cisco, CoSAI Governing Board co-chair. "At CoSAI, we will harness our combined expertise and resources to fast-track the development of robust AI security standards and practices that will benefit the entire industry."

Initial Work

To start, CoSAI will form three workstreams, with plans to add more over time:

- Software supply chain security for AI systems: enhancing composition and provenance tracking to secure AI applications.

- Preparing defenders for a changing cybersecurity landscape: addressing investments and integration challenges in AI and classical systems.

- AI security governance: developing best practices and risk assessment frameworks for AI security.

Participation 

Everyone is welcome to contribute technically as part of the CoSAI open-source community. OASIS welcomes additional sponsorship support from companies involved in this space. Contact [email protected] for more information.

Source:

Tell Us What You Think

Do you have a review, update or anything you would like to add to this news story?

Leave your feedback
Your comment type
Submit

While we only use edited and approved content for Azthena answers, it may on occasions provide incorrect responses. Please confirm any data provided with the related suppliers or authors. We do not provide medical advice, if you search for medical information you must always consult a medical professional before acting on any information provided.

Your questions, but not your email details will be shared with OpenAI and retained for 30 days in accordance with their privacy principles.

Please do not ask questions that use sensitive or confidential information.

Read the full Terms & Conditions.