/// CURRICULUM

WHAT YOU
LEARN

Four tools. 16 modules. 25+ video lectures distilled from 5,098 hours of hands-on agentic development. Everything you need to build and deploy full-stack SaaS products using AI coding agents.

Self-paced video lectures, practice quizzes, and hands-on projects. The progression is deliberate: Why → Setup → Rules → Models → Build → Harden → Ship.

/// TOOLS COVERED

FOUR INSTRUMENTS

[ 01 ]

[ AI-NATIVE IDE ]

Cursor

  • >Semantic indexing — the most important feature in the entire IDE
  • >Hotkeys, agent windows, queue trick for chaining safe operations
  • >Max Mode vs Auto: why Auto will burn you every time
  • >Privacy mode, billing awareness, and cost monitoring
  • >SSH port forwarding and remote development

[ 02 ]

[ CLI AGENT ]

Claude Code

  • >Terminal-first autonomous coding across your entire repo
  • >CLAUDE.md and AGENTS.md — cursor rules equivalents for CLI agents
  • >Proactive and hyperactive: great for automation and planning
  • >When to unleash it and when to rein it in
  • >Cost management: $100–200/mo subscription vs per-token

[ 03 ]

[ OPEN SOURCE AGENT ]

OpenCode

  • >Self-hostable, transparent, extensible AI coding agent
  • >Custom tool integration and audit trails
  • >Full control over your tooling without vendor lock-in
  • >Team deployment and configuration strategies
  • >When open-source agents beat proprietary ones

[ 04 ]

[ METHODOLOGY ]

Workflows + Best Practices

  • >Three-model strategy matched to task type
  • >Documentation-first development and self-healing docs loop
  • >Test-driven development as the control system for AI output
  • >Context engineering: gathering the right context at the right time
  • >Compression awareness and when to start a new window
/// MODULE BREAKDOWN

16 MODULES. ZERO FILLER.

M01

Why AI Coding? — Agentic Coding 101

The economic case for AI-assisted development. A 20% efficiency gain is massive for any business. A 2–5x productivity multiple is revolutionary. Context engineering is the core skill — gathering the right context at the right time. Why senior developers are in a privileged position. Why junior developers should focus on architecture, not syntax.

  • >The economics of developer productivity at scale
  • >Context engineering as the #1 skill in AI-assisted development
  • >Parallelization: running multiple agents with great caution
  • >Why maintainability still matters when AI writes fast
  • >Setting expectations: what AI can and cannot do
M02

Cursor Setup — Environment & Settings

Cursor is your Swiss army knife. Every setting explained with rationale. Semantic indexing is the most important feature. Hotkeys: Ctrl+B, Ctrl+Backtick, Ctrl+I, Ctrl+P. Auto-approve tool use. Queue messages over stop-and-send. Disable autocomplete. Monitor billing weekly.

  • >Complete Cursor settings walkthrough with rationale
  • >Semantic indexing and grep indexing (beta)
  • >Queue trick: chain safe operations while you sleep
  • >Privacy mode, SSH port forwarding, remote development
  • >Billing monitoring: $1,200 usage shown as real example
M03

Model Selection & Cost Strategy

Three model slots needed: Anthropic (proactive and agentic), GPT Codex (surgical and precise), Gemini (cheap and fast). TPM matters — fast models keep your brain engaged. Always use Max Mode. Never use Auto. Vibe-check new models in 30 minutes.

  • >Three-model strategy matched to task type and personality
  • >TPM and why speed matters more than you think
  • >Cost: 2x billing over 200K input tokens — how to manage it
  • >SWE-bench as a starting point, not gospel
  • >Compression: context window management and when to start fresh
M04

Cursor Rules — The Guardrail System

LLMs are nondeterministic systems. Cursor rules put boundaries on what AI can do to get predictable, repeatable outcomes. Prime directive: always ingest documentation first. TDD is mandatory for AI — regressions are the #1 risk. Library discipline. UltraThink protocol.

  • >Prime directive: ingest relevant docs before implementation
  • >Mandatory vs optional rule architecture
  • >Test-driven development as the control system for AI output
  • >UltraThink protocol: suspending zero-fluff for deeper reasoning
  • >Library discipline: preventing AI from reinventing existing components
  • >Response format: normal mode vs deep reasoning mode
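The guardrails above live in a plain-text rules file the agent ingests on every run. A minimal illustrative sketch — the section names and wording here are assumptions for illustration, not the course's actual rules file:

```markdown
# Prime Directive
- Before implementing anything, read the relevant internal docs and the
  official docs for any library you are about to touch.

# Test-Driven Development (mandatory)
- Write a failing test before changing behavior. Run the suite after every
  change. Regressions are the #1 risk.

# Library Discipline
- Search the codebase for an existing component or helper before writing
  a new one.
```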
M05

Cursor Rules Continued — Self-Healing & Docs

The self-learning loop: agent commits after each run, cursor rules track recurring mistakes, docs update after every code change. External API protocol: scrape docs, write test scripts, verify real shapes, then code. Rolling dev server logs. File naming for AI discoverability.

  • >Self-learning loop: docs prevent errors → agent updates docs → fewer errors
  • >External API protocol: never guess endpoints or shapes
  • >Rolling dev server logs (2,000+ lines to file for agent reading)
  • >Mandatory vs optional rules and context bloat management
  • >File naming conventions for AI semantic search
M06

MCPs — Model Context Protocol

MCPs are glorified API wrappers — and that is exactly why they are powerful. Context7 for library docs and Perplexity for web search cover 90% of needs. More is not better — each MCP adds context bloat. Anthropic models pick up tools eagerly; GPT is reluctant. Custom MCPs for your own APIs.

  • >What MCPs actually are: API wrappers with tool description injection
  • >Context7 + Perplexity as the essential MCP pair
  • >Context bloat: why more MCPs is not better
  • >Dwarf Fortress analogy: you suggest, you do not command
  • >Escalation strategies: ALL CAPS, threatening consequences, begging
  • >When to build your own MCP for repeated agent access
M07

Final Setup — Indexing & Polish

Indexing is Cursor’s most important feature. If indexing is disabled, everything breaks. Cursor ignore for files AI shouldn’t touch. Brand name changes: use find-and-replace, not AI. Enable early access. Close unnecessary tabs.

  • >Indexing verification and troubleshooting
  • >Cursor ignore for sensitive or irrelevant files
  • >Grep indexing (beta) for faster code search
  • >Docs crawling vs Context7 for niche APIs
  • >Tab hygiene and workspace management
M08

Greenfield — Planning & First Build

Plan mode first — drives the agent to complete the full task even if it takes hours. PRD writing for AI consumption. Opus for planning (fills UX gaps no other model matches). 90% of your application done in one session. The AI shines at greenfield. Do not watch file changes during greenfield — it is a waste of time.

  • >PRD writing: tech stack, business objectives, exam requirements
  • >Plan mode with Opus: let it ask smart questions
  • >Greenfield execution: hands-off, 90% in one session
  • >Schema design: watch-progress bitfield, email system, SMTP
  • >Anti-cheat: server-side timing, not git commit timestamps
M09

Post-Greenfield — Database, Testing & Iteration

The first compression event is fine. 4+ compressions — start a new window. Docker-backed Postgres with npm scripts. Switching models mid-context causes unpredictable issues. Plans are immune to compression. Context front-loading via plan mode + test writing.

  • >Docker-backed Postgres: db:up, db:down, db:push, db:seed, db:setup
  • >Compression events and context window management
  • >Feature implementation with GPT Codex (not Opus)
  • >Plans survive compression: use them as persistent reference
  • >CLAUDE.md and AGENTS.md: cursor rules for Claude Code
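The db:* commands above map naturally onto npm scripts wrapping Docker and Prisma. A hedged sketch — the compose service name and exact Prisma invocations are assumptions about the setup, not the course's verbatim config:

```json
{
  "scripts": {
    "db:up": "docker compose up -d postgres",
    "db:down": "docker compose down",
    "db:push": "prisma db push",
    "db:seed": "prisma db seed",
    "db:setup": "npm run db:up && npm run db:push && npm run db:seed"
  }
}
```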
M10

Building & Polishing — Frontend, E2E, Marketing

Multi-agent parallelization: Claude Code for marketing, Cursor for backend. E2E testing with Playwright: browser-first investigation. If a route genuinely fails, fix the code — tests memorialize stable state. Email markdown editor with live preview. Nav consistency across routes. Above-the-fold obsession.

  • >Running multiple parallel agents safely
  • >E2E testing with Playwright: investigate routes first, then test
  • >HTML entity encoding and rendering correctness
  • >Email template editor with real-time preview and test send
  • >Navigation consistency: one shared navbar, admin-only destinations
  • >Above-the-fold CRO: line height, white space, dual CTA buttons
M11

Domain, Stripe & Webhooks

Domain selection and exact-match SEO benefits. Stripe integration with restricted secret keys — never full-access. Webhook configuration: Payment Failed, Succeeded, Session Completed/Expired. Using AI to determine required API scopes. Strip navigation during checkout (Amazon pattern).

  • >Domain setup and exact-match domain SEO advantages
  • >Stripe API with restricted keys and webhook secrets
  • >Webhook event configuration and environment variables
  • >CRO: stripping navigation during checkout flow
  • >Brutalist design principles for developer-facing UI
M12

Admin CRUD, Email Campaigns & Drip Sequences

Build admin CRUD early — AI reduces implementation cost to nearly zero. Email drip campaigns with complex sequencing and drag-and-drop ordering. Use agents like employees — know their strengths and weaknesses. Cascade delete operations for data integrity.

  • >Admin table views with full CRUD operations
  • >Email drip campaigns with sequencing and automation
  • >Model strengths and weaknesses for task assignment
  • >Cascade delete and relational cleanup strategies
  • >YouTube transcripts as documentation source material
M13

Deployment — DNS, Coolify & Production Hardening

Deploy early and often. Coolify as self-hosted alternative to Vercel. Middleware auth gating unexpectedly blocking public routes — only surfaces in production. Prisma migration bundled into deploy scripts. Environment variable management for production.

  • >DNS A record configuration and fast propagation
  • >Coolify setup for Next.js (the hard-won lessons)
  • >Middleware debugging: public route gating issues
  • >Prisma database config, seed data, and migration bundling
  • >Environment variable hierarchy: .env, .env.development, .env.production
M14

Gap Analysis, Security & Anti-Cheat Engineering

Multi-pass implementation: initial build, gap analysis, refinement. Verify .env and secrets manually — never trust AI. Bitfield video tracking, speed enforcement, server-timed exams. GitHub App integration with deploy keys. Playback anomaly detection with forgiving validation.

  • >Second-pass gap analysis after every major agent run
  • >Secret management: restricted API keys and env verification
  • >Bitfield data structure for second-by-second video tracking
  • >Playback speed enforcement and anomaly detection
  • >GitHub App permissions, deploy key lifecycle, EC2 workspaces
  • >Socratic debugging: forcing the model to test invariants
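The bitfield idea is simple: one bit per second of video, set when that second plays. A hypothetical sketch — the class and method names here are illustrative, not the course's actual API:

```typescript
// One bit per second of video; a set bit means that second was played.
class WatchBitfield {
  private bits: Uint8Array;

  constructor(private durationSeconds: number) {
    this.bits = new Uint8Array(Math.ceil(durationSeconds / 8));
  }

  // Record that a given second of the video was played.
  markSecond(s: number): void {
    if (s < 0 || s >= this.durationSeconds) return;
    this.bits[s >> 3] |= 1 << (s & 7);
  }

  hasWatched(s: number): boolean {
    return (this.bits[s >> 3] & (1 << (s & 7))) !== 0;
  }

  // Fraction of the video actually watched, for completion checks.
  watchedRatio(): number {
    let count = 0;
    for (let s = 0; s < this.durationSeconds; s++) {
      if (this.hasWatched(s)) count++;
    }
    return count / this.durationSeconds;
  }
}
```

Skipping ahead leaves holes in the bitfield, which is why this beats a single "last position" timestamp for anti-cheat.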
M15

Final Features — .edu Discounts, Certificates & Polish

Email collection before checkout — gorgeous friction. Stripe coupon auto-application for .edu domains. Student CRUD with search, sort, and filter. Window focus pause detection. Full-site browser crawling for runtime error detection. Favicon, logo, and asset management.

  • >Email + name collection early for drip campaigns and certificates
  • >.edu detection with automatic Stripe coupon application
  • >Admin debug routes for test data manipulation
  • >Window focus pause detection for video anti-cheat
  • >Automated full-site error detection via browser crawling
  • >File naming conventions: logo-square.svg, logo-horizontal.svg
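The .edu flow boils down to a domain check at email collection, then attaching a pre-created Stripe coupon when the Checkout Session is built. A hedged sketch — the helper name and the coupon ID are assumptions; consult the Stripe API reference for the exact session shape:

```typescript
// Returns true for addresses under the .edu TLD (e.g. student@mit.edu).
function isEduEmail(email: string): boolean {
  const domain = email.split("@")[1]?.toLowerCase() ?? "";
  return domain === "edu" || domain.endsWith(".edu");
}

// Server side, at checkout-session creation (illustrative, not verbatim):
// const session = await stripe.checkout.sessions.create({
//   mode: "payment",
//   customer_email: email,
//   line_items: [/* ... */],
//   discounts: isEduEmail(email) ? [{ coupon: "STUDENT_DISCOUNT" }] : [],
// });
```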
M16

Exam Preparation & Timed Practice

Timed practice builds under exam conditions. Architecture planning under pressure. The exact workflow for building a deployed SaaS in 5 hours using everything you have learned. Common failure modes and recovery strategies.

  • >Mock exam walkthroughs under realistic time pressure
  • >Architecture decisions in the first 15 minutes
  • >Deployment checklist: auth, payments, database, hosting
  • >Common failure modes and recovery strategies
  • >Multi-agent parallelization strategies for the exam
  • >Build, fix, commit, push — the iterative deployment cycle
/// FROM THE LECTURES

REAL WORDS. REAL EXPERIENCE.

AI agentic coding is entirely about context. Every single time you start a new agent window, it figures out the entire world from scratch. Context engineering is the skill.

Module 01 — Foundations

Cursor rules put boundaries on what the AI can and should do in order to get predictable outcomes from a nondeterministic system. This is the single most important artifact in any AI-assisted project.

Module 04 — Guardrails

Greenfield is where AI shines. Do not micromanage the process. Let the agent execute the plan. 90% of the application — completely done in one session.

Module 08 — Greenfield

The documentation-first approach creates a self-reinforcing loop: docs prevent errors, the agent updates docs after changes, fewer errors next time. This loop is what makes the approach superior.

Module 05 — Self-Healing

A high tokens-per-minute rate is extraordinarily valuable. You can follow along in your head during an agentic run. Fast models keep your brain engaged — slow models tempt distraction.

Module 03 — Model Strategy

Deploy early and often. Use deployment feedback as part of development, not as a final step. Next.js deployment is well optimized for exactly this workflow.

Module 13 — Deployment
/// LEARNING PATH

WHY → SETUP → RULES → MODELS → BUILD → HARDEN → SHIP

The lectures follow a deliberate progression. Each phase builds on the last. You can’t skip ahead and expect it to work — context engineering requires understanding why each layer exists.

[ PHASE 01 ]

THE CASE FOR AI CODING

1–2 LECTURES

The economic case, not the hype. Why a 20% efficiency gain is massive for any business. Why a 2–5x productivity multiple is revolutionary. Why senior developers have a privileged position — and why juniors need to focus on architecture, not syntax.

[ WHAT YOU LEARN ]

  • >The productivity math that changes careers
  • >Context engineering as the core skill
  • >Why AI coding is about delegation, not replacement
  • >Setting realistic expectations for what AI can and cannot do
/// MODEL STRATEGY

THREE MODELS. THREE PERSONALITIES. ONE WORKFLOW.

Not every model is good at everything. Experienced AI developers match the model to the task. This is one of the first things we teach.

PLAN WITH ANTHROPIC → DEBUG WITH GPT → EXECUTE WITH GEMINI
/// THE RULEBOOK

15 RULES DISTILLED FROM 5,098 HOURS.

These rules were distilled from 27 lecture transcripts. They capture repeated engineering lessons for AI-assisted development: predictable execution, safer integrations, faster debugging, and lower regression risk.

Click any category to see the rules and the reasoning behind them. Every rule is “HIGH” frequency — they were repeated across multiple lectures because they matter that much.

[ IF ONLY FIVE RULES ]

  1. Ingest docs first, then code.
  2. Re-run plan plus gap analysis after every major agent pass.
  3. Verify .env and secrets manually.
  4. Run route-level and runtime checks, not just static tests.
  5. Keep cursor rules and docs updated with each meaningful change.
/// THE DIFFERENCE

VIBE CODING VS. PROFESSIONAL AI DEVELOPMENT

Anyone can ask AI to write code. That’s not what we teach. We teach the methodology that makes AI-assisted development reliable, maintainable, and production-ready.

[ VIBE ]

Ask AI to build something and hope for the best

[ PRO ]

Write a PRD, plan with Opus, execute with a cheaper model. Let the AI ask questions. Get 90% of your app in one session.

[ VIBE ]

No documentation. No rules. YOLO.

[ PRO ]

Cursor rules as guardrails. Documentation-first development. Internal docs prevent 90% of errors. The agent updates docs after every change.

[ VIBE ]

One model for everything

[ PRO ]

Three-model strategy: Anthropic for planning, GPT Codex for surgical precision, Gemini for cheap fast execution. Match the model to the task.

[ VIBE ]

Run the code and see if it works

[ PRO ]

Test-driven development is mandatory for AI. Write failing tests first. Implement. Verify. Regressions are the #1 risk and tests are the control system.

[ VIBE ]

When it breaks, ask AI to fix it. When that breaks, start over.

[ PRO ]

Rolling dev server logs piped to file. The AI reads 2,000+ lines of errors directly. Socratic debugging prompts force deeper investigation.

[ VIBE ]

Ship once. Abandon when complexity hits.

[ PRO ]

Compression-aware sessions. Context engineering. Plans survive compaction. Projects that scale past greenfield into production.

“Anyone can ask AI to write code. We teach the methodology that turns AI from a novelty into a production tool — cursor rules, documentation-first development, test-driven AI, and the three-model strategy that makes every session predictable.”

/// FROM COURSE LECTURES

See the Full Methodology
/// DEBUGGING INNOVATIONS

TWO TRICKS THAT SAVE HOURS EVERY DAY.

These are not in any other course. They come from 5,098 hours of hard-won experience.

[ INNOVATION 01 ]

ROLLING LOGS — THE AI READS THE ERRORS, NOT YOU

Traditionally, you’d look at your terminal, see an error, copy it, and paste it into the AI chat. That’s a human bottleneck. Instead, we write all dev server output to a rolling log file. The agent reads the file directly. No copy-paste. No human in the loop. Debugging speed goes from minutes to seconds.

> npm run dev >> logs/dev.log 2>&1
> agent: reading logs/dev.log...
> agent: found TypeError at line 47 of /api/auth/route.ts
> agent: fixing... ✓ resolved
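The transcript shows the append; the "rolling" part means trimming the file so the agent always reads a bounded, recent window. A minimal sketch — the function name, paths, and the 2,000-line cap (from the lectures) are wiring assumptions:

```shell
# Keep the dev-server log bounded so the agent reads a recent window,
# not an ever-growing file.
trim_log() {
  # keep only the last 2000 lines of the file given as $1
  tail -n 2000 "$1" > "$1.tmp" && mv "$1.tmp" "$1"
}

# Typical wiring (commented; assumes an npm dev script and a logs/ dir):
# npm run dev >> logs/dev.log 2>&1 &
# trim_log logs/dev.log
```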

[ INNOVATION 02 ]

QUEUE COMMANDS — CHAIN SAFE OPERATIONS WHILE YOU SLEEP

Cursor lets you queue messages that send after the current task completes. Use this to chain guaranteed-safe operations. Step 1: Have the agent analyze docs vs. codebase for discrepancies. Step 2: Queue “now update the documentation.” You walk away. When you come back, both tasks are done correctly.

YOU:
Compare current docs to codebase. Find gaps.
AGENT:
[analyzing 12 files...] Found 3 discrepancies.
QUEUED →
"Now update the docs to match reality."
AGENT:
[updating docs/AUTH.md, docs/STRIPE.md...] ✓ Done.

/// 16 MODULES. 25+ LECTURES. ZERO COST.

START FREE. PROVE IT WHEN READY.

Every lecture, quiz, and resource is free forever. No credit card. No trial. The $150 certification is optional — pay only when you want verifiable proof you can ship.

Create Free Account