WHAT YOU LEARN
Four tools. Seven modules. Everything you need to build and deploy full SaaS products using AI coding agents. Self-paced video lectures, practice quizzes, and hands-on projects.
FOUR INSTRUMENTS
[ 01 ]
[ AI-NATIVE IDE ]
Cursor
- >Inline code generation and multi-file editing
- >Codebase-wide context and @-mentions
- >Composer for multi-step changes across files
- >When to accept, reject, and iterate on suggestions
- >Project setup and configuration for maximum speed
[ 02 ]
[ CLI AGENT ]
Claude Code
- >Terminal-first autonomous coding workflow
- >Full repo access and multi-step task execution
- >Tool use and file system operations
- >Prompt engineering for complex code generation
- >Reviewing and verifying agent-written code
[ 03 ]
[ OPEN SOURCE AGENT ]
OpenCode
- >Self-hosted AI coding agent setup
- >Extensibility and custom tool integration
- >Transparent operation and audit trails
- >Comparing open-source vs proprietary agents
- >Team deployment and configuration
[ 04 ]
[ METHODOLOGY ]
Workflows + Best Practices
- >When to trust AI output and when to intervene
- >Prompt engineering patterns for production code
- >Code review workflows for AI-generated code
- >Architecture decisions with AI assistance
- >Shipping full-stack apps: auth, payments, deployment
7 MODULES. ZERO FILLER.
Foundations — Why AI Coding?
The economic case for AI-assisted development: why even a 20% efficiency gain is massive, and why productivity multiples are revolutionary. Setting up your development environment from zero.
- >The economics of developer productivity
- >Why speed matters more than code elegance
- >Senior vs junior developer advantages with AI
- >Setting expectations: what AI can and cannot do
Cursor Deep Dive
Cursor's semantic indexing is its key competitive advantage. We cover every setting, every hotkey, and every workflow pattern that matters. Privacy mode, Max Mode, billing awareness, and the queue trick that chains safe commands.
- >Complete Cursor settings walkthrough
- >Semantic search and codebase indexing
- >Agent window management and parallelization
- >Queue messages for chained workflows
- >Billing and cost monitoring
Model Selection & Strategy
Not every model is good at everything. Anthropic is hyperactive and proactive — great for planning. GPT Codex is surgical and precise — best for debugging. Gemini is cheap and fast — ideal for execution. You'll learn when to use each.
- >Three-model strategy: Anthropic, GPT, Gemini
- >Cost-benefit analysis per model
- >Tokens per minute and why speed matters
- >SWE-bench: useful starting point, not gospel
- >Max Mode vs Auto: why Auto will burn you
Cursor Rules — The Guardrail System
LLM output is nondeterministic. Cursor rules put boundaries on what the AI can do so you get predictable outcomes. We give you the exact rule set refined over 20,000 hours: documentation-first, test-driven, self-healing.
- >Prime directive: always ingest docs first
- >Mandatory vs optional rule architecture
- >UltraThink protocol for deeper reasoning
- >Library discipline: stopping AI from reinventing components
- >Self-learning loop: agent commits → rules track mistakes → docs update
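To make the guardrail idea concrete: Cursor project rules are commonly stored as Markdown files with frontmatter under `.cursor/rules/`. The fragment below is a minimal illustrative sketch, not the course's actual rule set — the file name, frontmatter fields, and rule wording are assumptions for the example.

```markdown
---
description: Documentation-first guardrails
alwaysApply: true
---
- Before editing any file, read the relevant docs in /docs and the file's imports.
- Write or update a unit test before implementing a change.
- Never hand-roll a component that already exists in the project's UI library.
- When the user corrects a mistake, append the lesson learned to docs/LESSONS.md.
```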
MCPs, Context7 & External APIs
MCPs are glorified API wrappers — and that's exactly why they're powerful. We cover Context7 for library docs, Perplexity for web search, and the exact protocol for interfacing with any external API without guessing.
- >What MCPs actually are (API wrappers with tool injection)
- >Context7 and Perplexity as essential MCPs
- >External API protocol: scrape docs → test scripts → verify shapes
- >Tool search and context bloat management
- >When to build your own MCP
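The "verify shapes" step above can be sketched as a tiny test script. The helper name `verify_shape` and the sample endpoint shape are hypothetical — the point is to confirm an external API's real response shape before letting the agent write code against a guess.

```python
def verify_shape(payload, expected, path="$"):
    """Recursively compare a JSON payload against an expected shape.

    `expected` maps keys to types (or nested dicts for nested objects).
    Returns a list of human-readable mismatches; empty means the shape
    checks out.
    """
    errors = []
    for key, want in expected.items():
        if key not in payload:
            errors.append(f"{path}.{key}: missing")
        elif isinstance(want, dict):
            if isinstance(payload[key], dict):
                errors.extend(verify_shape(payload[key], want, f"{path}.{key}"))
            else:
                errors.append(f"{path}.{key}: expected object, "
                              f"got {type(payload[key]).__name__}")
        elif not isinstance(payload[key], want):
            errors.append(f"{path}.{key}: expected {want.__name__}, "
                          f"got {type(payload[key]).__name__}")
    return errors

# Shape we might expect from a hypothetical /v1/customer endpoint
EXPECTED = {"id": str, "balance": int, "plan": {"name": str, "active": bool}}
```

Run it once against a live response, paste the (empty) mismatch list into the chat, and the agent builds on verified shapes instead of hallucinated ones.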
Greenfield to Production
Plan with Opus. Build with a cheaper model. This module covers the entire lifecycle: PRD writing, plan mode, greenfield execution, compression event management, and the transition from greenfield to feature development.
- >PRD writing for AI consumption
- >Plan mode: Opus for planning, Codex/Gemini for execution
- >Greenfield execution: 90% of your app in one session
- >Compression events: when to start a new window
- >Post-greenfield: debugging patterns and feature deployment
Exam Preparation
Timed practice builds under exam conditions. Architecture planning under pressure. The exact workflow for building a deployed micro-SaaS in 3 hours using everything you've learned.
- >Mock exam walkthroughs
- >Architecture planning in the first 15 minutes
- >Deployment checklist: auth, payments, database, hosting
- >Common failure modes and how to avoid them
- >Parallelization strategies for the exam
REAL WORDS. REAL EXPERIENCE.
“AI agentic coding is about gathering the correct context at the right time to allow AI to make correct additions, edits, debugging, and greenfield builds.”
— Module 01
“Every agent is a new junior developer who just started the job today. They need to read the docs. They need to understand the imports. They need context.”
— Module 04
“Greenfield is where AI shines. All the work, all the intelligence comes in round two — debugging and adding features in a systematic and intelligent way.”
— Module 06
THREE MODELS. THREE PERSONALITIES. ONE WORKFLOW.
Not every model is good at everything. Experienced AI developers match the model to the task. This is one of the first things we teach.
VIBE CODING VS. PROFESSIONAL AI DEVELOPMENT
Anyone can ask AI to write code. That’s not what we teach. We teach the methodology that makes AI-assisted development reliable, maintainable, and production-ready.
[ VIBE ] Ask AI to build something and hope for the best
[ PRO ] Write a PRD, plan with Opus, execute with a cheaper model
[ VIBE ] No documentation. No rules. YOLO.
[ PRO ] Cursor rules as guardrails. Documentation-first. Self-healing error correction.
[ VIBE ] One model for everything
[ PRO ] Three-model strategy matched to task type
[ VIBE ] Run the code and see if it works
[ PRO ] Test-driven development. Unit tests before implementation. Regression prevention.
[ VIBE ] When it breaks, ask AI to fix it. When that breaks, start over.
[ PRO ] Rolling dev server logs. Structured debugging. The AI reads the errors, not you.
[ VIBE ] Ship once. Abandon when complexity hits.
[ PRO ] Compression-aware sessions. Context engineering. Projects that scale past greenfield.
“This is agentic coding 101, so we’re not going to be doing some of the goofy shit that you’ve seen on the internet. We’re not going to be running 25 agents simultaneously. That has to be done with great caution.”
/// FROM COURSE LECTURES
TWO TRICKS THAT SAVE HOURS EVERY DAY.
These are not in any other course. They come from 20,000 hours of hard-won experience.
[ INNOVATION 01 ]
ROLLING LOGS — THE AI READS THE ERRORS, NOT YOU
Traditionally, you’d look at your terminal, see an error, copy it, and paste it into the AI chat. That’s a human bottleneck. Instead, we write all dev server output to a rolling log file. The agent reads the file directly. No copy-paste. No human in the loop. Debugging speed goes from minutes to seconds.
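A minimal sketch of the idea, assuming a Node-style dev server (the command, file names, and `run_with_rolling_log` helper are placeholders, not the course's actual setup): mirror the server's combined output into a file that always holds only the last N lines, so the agent reads a short, current log instead of a scrolled-away terminal.

```python
import collections
import pathlib
import subprocess

def run_with_rolling_log(cmd, log_path="dev.log", max_lines=500):
    """Run a long-lived command (e.g. a dev server) and mirror its
    combined stdout/stderr into a rolling log file that never holds
    more than `max_lines` lines."""
    buf = collections.deque(maxlen=max_lines)
    log = pathlib.Path(log_path)
    proc = subprocess.Popen(
        cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True
    )
    for line in proc.stdout:
        buf.append(line)
        log.write_text("".join(buf))  # the agent reads this file, not the terminal
    return proc.wait()

# e.g. run_with_rolling_log(["npm", "run", "dev"], ".cursor/dev.log")
```

Rewriting the whole file on every line is wasteful but keeps the log self-truncating; a production version might flush on an interval instead.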
[ INNOVATION 02 ]
QUEUE COMMANDS — CHAIN SAFE OPERATIONS WHILE YOU SLEEP
Cursor lets you queue messages that send after the current task completes. Use this to chain guaranteed-safe operations. Step 1: Have the agent analyze docs vs. codebase for discrepancies. Step 2: Queue “now update the documentation.” You walk away. When you come back, both tasks are done correctly.
START FREE
All lectures and quizzes. No credit card. No expiration.