OWASP AI Security and Privacy Guide

References


OWASP AI Exchange References

Category: Discussion. Permalink: https://owaspai.org/goto/references/

For webinars and podcasts about the AI Exchange, see the Media page. References on specific topics can be found throughout the AI Exchange content, so this references section lists broader publications.

Overviews of AI Security Threats:

  • OWASP LLM top 10
  • ENISA Cybersecurity threat landscape
  • ENISA ML threats and countermeasures 2021
  • MITRE ATLAS framework for AI threats
  • NIST threat taxonomy
  • ETSI SAI
  • Microsoft AI failure modes
  • NIST NISTIR 8269 - A Taxonomy and Terminology of Adversarial Machine Learning
  • OWASP ML top 10
  • BIML ML threat taxonomy
  • BIML LLM risk analysis - please register there
  • PLOT4ai threat library
  • BSI AI recommendations including security aspects (Germany) - in English
  • NCSC UK / CISA Joint Guidelines - see its mapping with the AI Exchange

Overviews of AI Security/Privacy Incidents:

  • AVID AI Vulnerability database
  • Sightline - AI/ML Supply Chain Vulnerability Database
  • OECD AI Incidents Monitor (AIM)
  • AI Incident Database
  • AI Exploits by ProtectAI

Other:

  • ENISA AI security standard discussion
  • ENISA's multilayer AI security framework
  • Alan Turing institute's AI standards hub
  • Microsoft/MITRE tooling for ML teams
  • Google's Secure AI Framework
  • NIST AI Risk Management Framework 1.0
  • ISO/IEC 20547-4 Big data security
  • IEEE 2813 Big Data Business Security Risk Assessment
  • Awesome MLSecOps references
  • OffSec ML Playbook
  • MIT AI Risk Repository
  • Failure Modes in Machine Learning by Microsoft

Learning and Training:

Courses and Labs

| Title | Description | Provider | Content Type | Level | Cost |
| --- | --- | --- | --- | --- | --- |
| AI Security Fundamentals | Learn the basic concepts of AI security, including security controls and testing procedures. | Microsoft | Course | Beginner | Free |
| Red Teaming LLM Applications | Explore fundamental vulnerabilities in LLM applications with hands-on lab practice. | Giskard | Course + Lab | Beginner | Free |
| Exploring Adversarial Machine Learning | Designed for data scientists and security professionals to learn how to attack realistic ML systems. | NVIDIA | Course + Lab | Intermediate | Paid |
| OWASP LLM Vulnerabilities | Essentials of securing Large Language Models (LLMs), covering basic to advanced security practices. | Checkmarx | Interactive Lab | Beginner | Free with OWASP Membership |
| OWASP TOP 10 for LLM | Scenario-based LLM security vulnerabilities and their mitigation strategies. | Security Compass | Interactive Lab | Beginner | Free |
| Web LLM Attacks | Hands-on lab to practice exploiting LLM vulnerabilities. | Portswigger | Lab | Beginner | Free |
| Path: AI Red Teamer | Covers OWASP ML/LLM Top 10 and attacking ML-based systems. | HackTheBox Academy | Course + Lab | Beginner | Paid |
| Path: Artificial Intelligence and Machine Learning | Hands-on lab to practice AI/ML vulnerabilities exploitation. | HackTheBox Enterprise | Dedicated Lab | Beginner, Intermediate | Enterprise Plan |

CTF Exercises

| Title | Description | Provider | Content Type | Level | Cost |
| --- | --- | --- | --- | --- | --- |
| AI Capture The Flag | A series of AI-themed challenges ranging from easy to hard, hosted by DEFCON AI Village. | Crucible / AIV | CTF | Beginner, Intermediate | Free |
| IEEE SaTML CTF 2024 | A Capture-the-Flag competition focused on Large Language Models. | IEEE | CTF | Beginner, Intermediate | Free |
| Gandalf Prompt CTF | A gamified challenge focusing on prompt injection techniques. | Lakera | CTF | Beginner | Free |
| HackAPrompt | A prompt injection playground for participants of the HackAPrompt competition. | AiCrowd | CTF | Beginner | Free |
| Prompt Airlines | Manipulate an AI chatbot via prompt injection to score a free airline ticket. | WiZ | CTF | Beginner | Free |
| AI CTF | AI/ML themed challenges to be solved over a 36-hour period. | PHDay | CTF | Beginner, Intermediate | Free |
| Prompt Injection Lab | An immersive lab focused on gamified AI prompt injection challenges. | ImmersiveLabs | CTF | Beginner | Free |
| Doublespeak | A text-based AI escape game designed to practice LLM vulnerabilities. | Forces Unseen | CTF | Beginner | Free |
| MyLLMBank | Prompt injection challenges against LLM chat agents that use ReAct to call tools. | WithSecure | CTF | Beginner | Free |
| MyLLMDoctor | Advanced challenge focusing on multi-chain prompt injection. | WithSecure | CTF | Intermediate | Free |
| Damn Vulnerable LLM Agent | Focuses on Thought/Action/Observation injection. | WithSecure | CTF | Intermediate | Free |

Talks

| Title | Description | Venue | Content Type | Cost | Link |
| --- | --- | --- | --- | --- | --- |
| AI is just software, what could possibly go wrong w/ Rob van der Veer | Explores the dual nature of AI as both a powerful tool and a potential security risk, emphasizing the importance of secure AI development and oversight. | OWASP Lisbon Global AppSec 2024 | Conference | Free | YouTube |
| Lessons Learned from Building & Defending LLM Applications | Andra Lezza and Javan Rasokat discuss lessons learned in AI security, focusing on vulnerabilities in LLM applications. | DEF CON 32 | Conference | Free | YouTube |
| Practical LLM Security: Takeaways From a Year in the Trenches | NVIDIA's AI Red Team shares insights on securing LLM integrations, focusing on identifying risks, common attacks, and effective mitigation strategies. | Black Hat USA 2024 | Conference | Free | YouTube |
| Hacking generative AI with PyRIT | Rajasekar from Microsoft's AI Red Team presents PyRIT, a tool for identifying vulnerabilities in generative AI systems, emphasizing the importance of safety and security. | Black Hat USA 2024 | Walkthrough | Free | YouTube |
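Many of the labs and CTFs listed above (Gandalf, HackAPrompt, Prompt Airlines, the WithSecure challenges) revolve around prompt injection. As a loose illustration of the attack class they train against — and emphatically not a real defense, since these exercises exist precisely because pattern matching is easy to bypass — a naive input screen might look like the following sketch. All function names and patterns here are hypothetical, not taken from any listed resource:

```python
import re

# Hypothetical, minimal heuristic screen for common prompt-injection phrasing
# in untrusted text before it reaches an LLM. The CTFs above demonstrate many
# ways such keyword filters are trivially evaded (encoding, paraphrase, etc.).
INJECTION_PATTERNS = [
    r"ignore (all |any )?(previous|prior|above) instructions",
    r"disregard (the )?(system|previous) prompt",
    r"you are now (in )?developer mode",
    r"reveal (your )?(system prompt|instructions)",
]

def looks_like_injection(text: str) -> bool:
    """Return True if the text matches a known injection phrase."""
    lowered = text.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)

print(looks_like_injection("What is the baggage allowance?"))            # False
print(looks_like_injection("Ignore all previous instructions and ..."))  # True
```

A screen like this only catches the most literal attempts; defense in depth (privilege separation for tools, output filtering, adversarial testing as practiced in the labs above) is what the listed training actually teaches.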