AI Security Roadmap

From AppSec to AI Security

A curated 8-week roadmap for software and security engineers pivoting into AI Security. Only the resources worth your time. No tutorials, no filler. Follow the weeks in order.

0 / 48

Who this is for

You have a background in software engineering or application security and you want to work on AI systems without becoming an ML researcher. You can read English technical content and you have ~12 hours per week.

How to use it

Follow the weeks in order. Each week contains reading, videos, docs or papers, one practical exercise, and one self-evaluated checkpoint. Mark items as you go — progress is stored in your browser only.

Some resources are paid books (marked $). If you cannot buy them, the paper and blog equivalents are enough to follow along.

Week 1

LLM Foundations I — Tokens, Prompts, Determinism

Build an accurate mental model of what an LLM is and how inputs shape outputs. Without this, everything downstream becomes cargo-culting.

Resources

Exercise

Checkpoint

Week 2

LLM Foundations II — Attention, Context, Sampling

Understand the internals well enough to reason about attack surfaces. You do not need to implement a transformer — you need to know why injection works at the attention level.

Resources

Exercise

Checkpoint

Week 3

OWASP LLM Top 10 — The Attack Surface

Map the attack surface as industry has agreed on it. This is the taxonomy every other AI Security document in 2026 builds on.

Resources

Exercise

Checkpoint

Week 4

Prompt Injection — Direct and Indirect

Understand LLM01 at research-paper depth. It is the most exploited and least defended class in production.

Resources

Exercise

Checkpoint

Week 5

Sensitive Information Disclosure — LLM02

Understand where PII and secrets leak in an LLM pipeline and why generic DLP tools miss most of it.

Resources

Exercise

Checkpoint

Week 6

Supply Chain and Model Risks — LLM03, LLM05

Understand the non-input attack surface. Most AppSec engineers underweight this because classic web apps don't have it.

Resources

Exercise

Checkpoint

Week 7

Evals and Guardrails — Tooling

Move from theory to tools. You cannot call yourself an AI Security engineer without having run at least one eval harness against a model.

Resources

Exercise

Checkpoint

Week 8

Fuzzing, Red-Teaming Basics, First Public Output

Consolidate the 8 weeks and ship your first public artifact. Public output is how this plan becomes visible.

Resources

Exercise

Checkpoint

Upcoming — to unlock

These months will be published as their author completes them. The roadmap grows one bimester at a time, in sync with real field study.

What this roadmap does not cover