Technical Syllabus

Dual-track competition: classic CTF with AI assistance + the first dedicated AI security track at an international youth olympiad

Traditional CTF
Classic cybersecurity challenges
ICOA 2026
AI becomes the challenge

Why AI Changes Everything

AI agents are solving traditional CTF challenges at an accelerating rate. The data is clear -- and it shapes how we design competition for the next generation.

Source Date Result
CAI Agent (Alias Robotics) 2025 #1 at Neurogrid ($50K prize), 91% solve rate; #1 at Dragos OT CTF, 94%; 99th percentile across 5 competitions
HackTheBox AI vs Human Mar 2025 5 of 8 AI teams solved 19/20 challenges (95%); best AI team ranked 20th / 403 teams
Wiz Web Security Study 2025 Claude, GPT, Gemini all solved 9/10 web challenges (90%); cost per challenge < $1
XBOW Pentesting 2025 85% solve rate in 28 minutes; human expert: same rate in 40 hours (85x slower)
Cybench (Stanford) 2025 Claude Sonnet 4.5: 76.5% (doubled from 35.9% six months prior)
DARPA AIxCC Finals Aug 2025 Detection: 86%; Patching: 68%; 18 real-world vulnerabilities discovered
Anthropic Self-Evaluation 2025 PicoCTF: top 3% (297th / 10,460 teams); PlaidCTF & DEF CON Quals: 0 solved
Anthropic + Mozilla Mar 2026 Claude Opus 4.6 discovered 22 Firefox vulnerabilities in 2 weeks; 14 rated high-severity -- nearly 1/5 of all high-severity Firefox bugs remediated in 2025-26. Automatically developed working exploits for 2 vulnerabilities.

AI Automation by CTF Category

Web Security
90-100%
General / Misc
100%
Cryptography
>90%
Reverse Engineering
~96% easy/med
Digital Forensics
>90% easy/med
Binary Exploitation
100% easy / 0% expert

AI saturates easy-to-medium challenges across all categories. The remaining gap is difficulty tier, not category. Expert-level challenges are shrinking every 6 months.

"Jeopardy-style CTFs have become a solved game for well-engineered AI agents."

-- Alias Robotics (CAI), after winning 5 major CTF competitions in 2025

Rate of Evolution

2x
Cybench solve rate doubled in 6 months
Sonnet 3.7: 35.9% → Sonnet 4.5: 76.5%
2.3x
AIxCC detection rate in 12 months
37% → 86% detection, 25% → 68% patching
3.5x
NYU CTF Bench in one model generation
GPT-4: 4.9% → GPT-4.1: 16.94%

Key Milestones

2016
ForAllSecure's Mayhem wins DARPA Cyber Grand Challenge at DEF CON 24 -- first non-human to earn a Black Badge ($2M prize)
OCT 2024
Google Project Zero + DeepMind's Big Sleep discovers a real 0-day in SQLite -- first AI to find an exploitable vulnerability in widely-used production software
AUG 2024
DARPA AIxCC Semifinal at DEF CON 32 -- 22 synthetic vulnerabilities discovered, one real-world 0-day found during competition
AUG 2025
DARPA AIxCC Finals at DEF CON 33 -- 86% detection, 68% patching, 18 real vulnerabilities discovered, 54 million lines of code analysed, $10.5M in prizes
2025
CAI (Alias Robotics) dominates 5 major CTF competitions, publicly declares jeopardy-style CTFs "a solved game" for AI agents
MAR 2026
Anthropic + Mozilla -- Claude Opus 4.6 discovers 22 Firefox vulnerabilities in two weeks, 14 high-severity (nearly 1/5 of all high-severity Firefox bugs in 2025-26). Automatically develops working browser exploits for 2 of them -- a first for AI-generated browser exploitation.
DAY 1

AI4CTF -- Classic CTF with AI Assistance

5 knowledge domains -- jeopardy-style challenges -- token-limited AI

5 HOURS

Real-World Alignment

Professional security researchers already use AI assistants daily. Testing human-AI collaboration is testing a real skill that matters in the workplace.

Accelerated Competition

Token-limited AI assistance allows more challenges to be attempted in the 5-hour window, producing richer score differentiation across skill levels.

Strategic Depth

Token budgets force resource-management decisions: which challenges benefit most from AI? When is manual analysis faster? A new layer of competitive strategy.

Levelling the Field

AI assistance helps bridge knowledge gaps for contestants from countries with less CTF infrastructure, while token limits prevent AI from doing all the work.

AI Tool Policy: Contestants access AI models through a competition-provided API gateway with a fixed token budget. No local models, no external AI services. All prompts and responses are logged for post-competition audit. Token limits are set by the Scientific Committee to balance AI utility with human skill demonstration.
Start Practising Now: The official practice platform is live at practice.ico2026.au — try AI4CTF challenges before the competition. Accredited team leaders receive additional resources including national selection guidance, training materials, and direct support from the Scientific Committee.

Binary Exploitation

Foundation

Stack layout, buffer overflows, shellcode basics

Intermediate

ROP chains, format string attacks, heap exploitation

Advanced

Kernel exploitation, sandbox escapes, custom mitigations bypass

Cryptography

Foundation

Symmetric/asymmetric encryption, hashing, encoding

Intermediate

RSA attacks, AES side-channels, protocol weaknesses

Advanced

Elliptic curve attacks, zero-knowledge proof flaws, post-quantum crypto

Digital Forensics

Foundation

File carving, metadata analysis, log analysis

Intermediate

Memory forensics, network packet analysis, disk imaging

Advanced

Anti-forensics detection, timeline reconstruction, malware artifacts

Reverse Engineering

Foundation

x86/x64 assembly, disassembly tools, static analysis

Intermediate

Dynamic analysis, anti-debugging, bytecode RE (JVM/.NET)

Advanced

Obfuscation, custom VM RE, firmware analysis

Web Security

Foundation

SQL injection, XSS, CSRF, authentication flaws

Intermediate

SSRF, deserialization, OAuth attacks, race conditions

Advanced

Prototype pollution, template injection, WebSocket attacks

DAY 2

CTF4AI -- AI Security

AI is the target, not the tool -- the dedicated AI challenge track

5 HOURS
ICOA 2026 EXCLUSIVE

The Core Innovation

ICO 2025 introduced AI tools into CTF competition. ICOA 2026 takes the next step -- making AI itself the challenge. Day 2 is entirely dedicated to AI security: attacking, defending, and analysing AI systems. Contestants interact with target models provided by the competition platform, not their own tools.

INDUSTRY TREND: AI SECURITY PRODUCTS

APR 2025

Google launches Sec-Gemini v1 — AI-powered threat analysis, root cause investigation, and vulnerability impact assessment.

20 FEB 2026

Anthropic releases Claude Code Security — autonomous code auditing that found 500+ vulnerabilities undetected for decades in production open-source codebases.

6 MAR 2026

OpenAI launches Codex Security — scanned 1.2M commits, discovered 10,561 high-severity issues and 14 CVEs across OpenSSH, Chromium, PHP.

Three frontier AI companies launched dedicated security products within 12 months. AI-driven vulnerability discovery is no longer experimental — it is an industry product category. CTF4AI trains the next generation to understand, evaluate, and defend against exactly these capabilities.

  • No static solutions AI models update constantly; yesterday's jailbreak doesn't work tomorrow. The challenge space is permanently fresh.
  • Adversarial co-evolution As AI defences improve, attack techniques must evolve. The competition track grows with the technology.
  • Human judgment is essential Evaluating whether AI output is safe, biased, or manipulated requires nuanced reasoning that AI agents cannot self-apply.
  • Growing attack surface Every new AI capability -- agents, multimodal models, tool use -- creates new vectors to explore.

AI Attack Surface

Foundation

Prompt injection basics, input manipulation, jailbreaking fundamentals

Intermediate

Advanced jailbreaking techniques, model extraction, training data leakage

Advanced

Adversarial ML (evasion/poisoning attacks), supply chain attacks on ML pipelines

AI Defence & Detection

Foundation

AI-generated text identification, basic guardrail concepts

Intermediate

Guardrail bypass testing, output verification, watermark detection

Advanced

Red-teaming LLMs, building robust AI safety filters, evaluating alignment

AI Forensics

Foundation

Deepfake detection (image/audio), AI vs human content classification

Intermediate

AI-generated code audit, model fingerprinting

Advanced

Attribution analysis, model provenance, synthetic data tracing

Where This Fits

Classic CTF aligns with what students are learning. CTF4AI takes them where no curriculum has gone yet.

Cybersecurity in Secondary Education

Australia

NSW HSC NEW 2025

  • Enterprise Computing -- dedicated "Principles of Cybersecurity" module: threats, encryption, MFA, risk matrices, cyber law
  • Software Engineering -- secure software architecture, secure coding practices, vulnerability management
United States

AP Programme NEW 2026

  • AP Computer Science Principles -- network security, encryption, cybersecurity risk assessment
  • AP Cybersecurity -- dedicated course launching nationally 2026-27. Risk management, network/app security, SQL injection, XSS, buffer overflow
United Kingdom

A-Level Computer Science

  • AQA & OCR specifications include encryption (symmetric/asymmetric), network security threats, firewalls, hashing algorithms
  • Focus remains primarily technical; legal/ethical connections to security are limited
International

IB Computer Science UPDATED 2025

  • New syllabus (first teaching August 2025) explicitly adds cybersecurity, AI, cloud computing, and blockchain to the core curriculum
  • Both SL and HL levels include security and AI topics

Cybersecurity is entering secondary education globally, but exclusively at the classic/foundational level. ICO Day 1 (AI4CTF) tests exactly these skills -- making the competition directly relevant to what students are learning in school, while adding the AI collaboration dimension that reflects the modern workplace.

The University Gap: AI Security Has No Curriculum

While classic cybersecurity is being standardised in secondary education, dedicated AI security education remains rare and fragmented. Individual courses exist at a handful of leading institutions, but no university in the world offers a complete undergraduate or postgraduate degree programme in AI Security.

University Course Level Status
Stanford XACS134 AI Security + CS120 AI Safety Professional + UG Available
CMU 15-783 Trustworthy AI Graduate Available
UC Berkeley CS294 Agentic AI (Dawn Song) Graduate Partial
MIT xPRO AI and Cybersecurity Executive Professional only
ETH Zurich SPY Lab (Prof. Tramer) Graduate research Research only
Oxford Security and Privacy of ML (MSc) + LASR National Lab Graduate Available
Cambridge CSER + Leverhulme CFI (research centres) Research Research only
NUS CS5562 Trustworthy ML (Prof Reza Shokri) Graduate Available
UNSW Trustworthy AI for Cyber Security + ML for Cyber Security + Deep Learning for Cyber Security Undergraduate Available (3 courses)

Graduate-level courses exist only at a handful of frontier institutions. The field is so new that the textbooks are still being written. A 16-year-old competing in ICOA 2026 Day 2 is engaging with material that most university students will not encounter until graduate school -- if at all.

NATIONAL INITIATIVE

Australian AI Safety Institute (AISI)

In November 2025, the Australian Government established the AI Safety Institute with $29.9 million in funding — tasked with testing frontier AI models, evaluating safety risks, and sharing findings internationally. Australia is a founding member of the International Network of AI Safety Institutes. ICOA 2026 takes place in the same country that is building the institutional infrastructure for AI safety — connecting youth competition with national policy.

Academic & Research Partners

ICOA 2026 collaborates with leading universities and research institutions in cybersecurity and AI safety. Partner announcements coming soon.

Interested in becoming an academic partner?

Contact Us

Classic Meets Frontier

Two Realities, One Competition

Classic CTF is the best cybersecurity foundation for young people. The five traditional domains -- binary exploitation, cryptography, digital forensics, reverse engineering, and web security -- teach the fundamentals of how systems work and how they break. These skills are entering secondary curricula worldwide. Day 1 preserves and celebrates this foundation, while adding the modern dimension of human-AI collaboration.

AI is transforming cybersecurity faster than education can adapt. AI agents already solve 90-100% of standard CTF challenges. Google's Big Sleep found a real 0-day in SQLite. DARPA's AIxCC finalists patched 68% of vulnerabilities automatically. The question is no longer whether AI will automate traditional security tasks -- it is how quickly.

ICOA 2026's answer: don't fight the wave -- ride it. Day 1 embraces AI as a tool. Day 2 makes AI the challenge. CTF4AI is not just a competition format -- it is a statement about where cybersecurity is heading.

The students who learn to attack, defend, and evaluate AI systems today will lead the field tomorrow. No undergraduate programme teaches this yet. No other olympiad tests it. ICOA 2026 places the next generation at the frontier -- not following academic curricula, but defining the skills that curricula will eventually teach. This is what an olympiad should be: not a test of what students already know, but a challenge that pushes the boundaries of what is possible.

Competition Structure

Aligned with international olympiad standards: IOI, IPhO, and IChO all use 2 days x 5 hours.

Independent Scoring

Each day scored separately with tasks split into subtasks and individual flags

Individual Ranking

Contestants ranked individually based on combined scores across both days

Medals

Gold, Silver, and Bronze medals awarded based on final individual rankings

Tie-Breaking

Ties broken by sum of last point-increase times per day, in ascending order

Preparation Guide

Resources to help contestants prepare for both competition days

Day 1: AI4CTF

  • Practice on CTFtime, PicoCTF, and OverTheWire
  • Study OWASP Top 10 web vulnerabilities
  • Learn binary analysis with Ghidra or IDA Free
  • Practice memory forensics with Volatility
  • Master networking tools: Wireshark, Burp Suite, nmap
  • Study modern cryptographic protocols and their weaknesses
Start Practising

Day 2: CTF4AI

  • Explore prompt injection techniques and defences
  • Study adversarial machine learning fundamentals
  • Practice deepfake detection and AI content analysis
  • Learn LLM security: guardrails, jailbreaking, red-teaming
  • Understand ML model supply chain risks
  • Study AI watermarking and content provenance

Dedicated CTF4AI preparation resources — developed in collaboration with frontier AI model companies, universities, and research institutes — coming soon.

FOR ACCREDITED TEAM LEADERS

Accredited team leaders receive dedicated support from the ICOA 2026 Scientific Committee, including: national selection process guidance, structured training curricula for both AI4CTF and CTF4AI, curated practice challenge sets, and direct communication channels for technical questions. Contact australia@ico2026.au to access team leader resources.

Powered by Industry Leaders in AI and Cybersecurity

Technology and platform partners for ICOA 2026 competition infrastructure

AI
CLOUD
CYBER
PLATFORM
Data Sources
  1. Shao et al., "NYU CTF Bench," NeurIPS 2024 -- arxiv.org/abs/2406.05590
  2. Tao et al., "Cybench," Stanford CRFM 2024 -- arxiv.org/abs/2408.08926
  3. Palisade Research, "Hacking CTFs with Plain Agents," Dec 2024
  4. HackTheBox, "AI vs Human CTF Results," Mar 2025
  5. Wiz, "AI Agents vs Humans: Web Hacking," 2025
  6. Alias Robotics, "CAI Conquers 2025 CTF Circuit," 2025
  7. Google Project Zero, "Big Sleep," Oct 2024
  8. DARPA AIxCC Results, 2024-2025
  9. Anthropic, "Cyber Competitions," 2025
  10. CTF-Dojo, Amazon Science, 2025 -- arxiv.org/abs/2508.18370
  11. XBOW Autonomous Pentesting Benchmarks, 2025
  12. NSW Education Standards Authority (NESA) -- curriculum.nsw.edu.au
  13. College Board AP Programme -- apcentral.collegeboard.org
  14. IB Diploma Programme Computer Science -- ibo.org
  15. AQA/OCR A-Level Computer Science Specifications
  16. Anthropic + Mozilla, "Finding Vulnerabilities in Firefox with AI," Mar 2026 -- red.anthropic.com
  17. Stanford XACS134, CMU 15-783, Berkeley CS294 course pages

Ready to Compete?

Register your team for ICOA 2026 Australia