OWASP Artificial Intelligence Security Verification Standard

This work is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0).

What is AISVS?

The Artificial Intelligence Security Verification Standard (AISVS) is a community-driven catalogue of testable security requirements for AI-enabled systems. It gives developers, architects, security engineers, and auditors a structured framework to design, build, test, and verify the security of AI applications throughout their lifecycle, from data collection and model training to deployment, monitoring, and retirement.

AISVS is modeled after the OWASP Application Security Verification Standard (ASVS) and follows the same philosophy: every requirement should be verifiable, testable, and implementable.

What AISVS is NOT

  • Not a governance framework. Governance is well-covered by NIST AI RMF, ISO/IEC 42001, and EU AI Act compliance guides.

  • Not a risk management framework. AISVS provides the technical controls that risk frameworks point to, but does not define risk assessment methodology.

  • Not a tool recommendation list. AISVS is vendor-neutral and does not endorse specific products or frameworks.

How AISVS complements other standards

| Standard | Focus | AISVS relationship |
| --- | --- | --- |
| OWASP ASVS | Web application security | AISVS extends ASVS concepts to AI-specific threats |
| OWASP Top 10 for LLM Applications | Awareness of top LLM risks | AISVS provides the detailed controls to mitigate those risks |
| OWASP Agentic AI Top 10 | Awareness of top agentic AI risks | AISVS provides the detailed controls to address agentic-specific threats |
| NIST AI RMF | AI risk governance | AISVS supplies the testable technical controls that AI RMF references |
| ISO/IEC 42001 | AI management systems | AISVS complements with implementation-level security verification |

Verification Levels

Each AISVS requirement is assigned a verification level (1, 2, or 3) indicating the depth of security assurance:

| Level | Description | When to use |
| --- | --- | --- |
| 1 | Essential baseline controls that every AI system should implement. | All AI applications, including internal tools and low-risk systems. |
| 2 | Standard controls for systems handling sensitive data or making consequential decisions. | Production systems, customer-facing AI, systems processing personal data. |
| 3 | Advanced controls for high-assurance environments requiring defense against sophisticated attacks. | Critical infrastructure, safety-critical AI, high-value targets, regulated industries. |

Organizations should select a target level based on the risk profile of their AI system. Most production systems should aim for at least Level 2.
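The level-selection guidance above can be sketched as a simple decision rule. This is an illustrative mapping only, assuming a coarse risk profile expressed as boolean attributes; AISVS itself does not define these attributes or this function.

```python
def aisvs_target_level(safety_critical: bool,
                       handles_personal_data: bool,
                       customer_facing: bool,
                       in_production: bool) -> int:
    """Suggest an AISVS verification level from a coarse risk profile.

    Illustrative sketch: the attribute names and thresholds here are
    assumptions, not part of the standard.
    """
    if safety_critical:
        return 3  # high-assurance controls (Level 3)
    if handles_personal_data or customer_facing or in_production:
        return 2  # standard controls for consequential systems (Level 2)
    return 1      # essential baseline for internal, low-risk tools (Level 1)

# Example: a production system processing personal data
print(aisvs_target_level(False, True, False, True))  # → 2
```

In practice this decision would come from a broader risk assessment, but encoding the target level explicitly makes it auditable alongside the requirements themselves.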

How to use AISVS

  • During design. Use requirements as a security checklist when architecting AI systems.

  • During development. Integrate requirements into CI/CD pipelines, code reviews, and testing.

  • During security assessments. Use as a verification framework for penetration testing and audits.

  • For procurement. Reference specific requirements when evaluating AI vendors and third-party models.
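The "during development" point can be made concrete by tracking requirements as machine-checkable gates in a CI pipeline. A minimal sketch follows; the requirement IDs, check logic, and lockfile contents are hypothetical examples, not taken from the standard.

```python
# Hypothetical mapping of AISVS-style requirement IDs to automated checks.
# IDs and checks below are illustrative assumptions, not from the standard.

def check_prompt_input_limited() -> bool:
    """Verify the application enforces a maximum prompt length."""
    MAX_PROMPT_CHARS = 4096          # assumed application limit
    sample_prompt = "x" * 1000
    return len(sample_prompt) <= MAX_PROMPT_CHARS

def check_model_artifact_pinned() -> bool:
    """Verify model artifacts are pinned to cryptographic digests."""
    pinned = {"model.bin": "sha256:abc123"}  # assumed lockfile contents
    return all(d.startswith("sha256:") for d in pinned.values())

CHECKS = {
    "1.2.1": check_prompt_input_limited,   # hypothetical ID: input validation
    "8.1.4": check_model_artifact_pinned,  # hypothetical ID: supply chain
}

def run_gate() -> list[str]:
    """Return IDs of failed requirements; an empty list means the gate passes."""
    return [req_id for req_id, check in CHECKS.items() if not check()]

failures = run_gate()
print("AISVS gate:", "PASS" if not failures else f"FAIL {failures}")
```

Running `run_gate()` as a CI step and failing the build on a non-empty result turns the checklist into an enforced quality gate, and the IDs give auditors a direct trace from test to requirement.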

Requirement Chapters

Appendices

Contributing

We welcome contributions from the community. Please open an issue to report bugs or suggest improvements. We may ask you to submit a pull request based on the discussion.

Project Leaders

This project was founded by Jim Manico. The current project leaders are Jim Manico, Otto Sulin, Rico Komenda, and Russ Memisyazici.

License

All content in this project is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License.
