OWASP Artificial Intelligence Security Verification Standard
This work is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License.
What is AISVS?
The Artificial Intelligence Security Verification Standard (AISVS) is a community-driven catalogue of testable security requirements for AI-enabled systems. It gives developers, architects, security engineers, and auditors a structured framework to design, build, test, and verify the security of AI applications throughout their lifecycle, from data collection and model training to deployment, monitoring, and retirement.
AISVS is modeled after the OWASP Application Security Verification Standard (ASVS) and follows the same philosophy: every requirement should be verifiable, testable, and implementable.
What AISVS is NOT
Not a governance framework. Governance is well-covered by NIST AI RMF, ISO/IEC 42001, and EU AI Act compliance guides.
Not a risk management framework. AISVS provides the technical controls that risk frameworks point to, but does not define risk assessment methodology.
Not a tool recommendation list. AISVS is vendor-neutral and does not endorse specific products or frameworks.
How AISVS complements other standards
OWASP ASVS (web application security): AISVS extends ASVS concepts to AI-specific threats.
OWASP Top 10 for LLMs (awareness of top LLM risks): AISVS provides the detailed controls to mitigate those risks.
NIST AI RMF (AI risk governance): AISVS supplies the testable technical controls that AI RMF references.
ISO/IEC 42001 (AI management systems): AISVS complements it with implementation-level security verification.
Verification Levels
Each AISVS requirement is assigned a verification level (1, 2, or 3) indicating the depth of security assurance:
Level 1: Essential baseline controls that every AI system should implement. Applies to all AI applications, including internal tools and low-risk systems.
Level 2: Standard controls for systems handling sensitive data or making consequential decisions. Applies to production systems, customer-facing AI, and systems processing personal data.
Level 3: Advanced controls for high-assurance environments requiring defense against sophisticated attacks. Applies to critical infrastructure, safety-critical AI, high-value targets, and regulated industries.
Organizations should select a target level based on the risk profile of their AI system. Most production systems should aim for at least Level 2.
How to use AISVS
During design. Use requirements as a security checklist when architecting AI systems.
During development. Integrate requirements into CI/CD pipelines, code reviews, and testing.
During security assessments. Use as a verification framework for penetration testing and audits.
For procurement. Reference specific requirements when evaluating AI vendors and third-party models.
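As a minimal sketch of the CI/CD integration described above, the snippet below maps automated tests to AISVS requirement IDs so a pipeline can flag requirements that have no covering test. The requirement IDs, decorator name, and test names are illustrative assumptions, not part of the standard itself.

```python
# Registry mapping AISVS requirement IDs to the tests that claim to verify them.
AISVS_COVERAGE = {}

def verifies(requirement_id):
    """Decorator registering a test as covering a given AISVS requirement."""
    def wrapper(test_fn):
        AISVS_COVERAGE.setdefault(requirement_id, []).append(test_fn.__name__)
        return test_fn
    return wrapper

@verifies("2.1.1")  # hypothetical requirement ID, e.g. prompt-input filtering
def test_prompt_injection_filter():
    pass  # real assertion logic would live here

def uncovered(target_ids):
    """Return the target requirement IDs that no registered test covers."""
    return sorted(set(target_ids) - set(AISVS_COVERAGE))
```

A CI step could then call `uncovered()` with the organization's target requirement list and fail the build if the result is non-empty.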
Requirement Chapters
Appendices
Contributing
We welcome contributions from the community. Please open an issue to report bugs or suggest improvements. We may ask you to submit a pull request based on the discussion.
Project Leaders
This project was founded by Jim Manico. The current project leaders are Jim Manico, Otto Sulin, and Russ Memisyazici.
License
All content in this project is licensed under the Creative Commons Attribution-ShareAlike v4.0 license.