Applications Open · 12-Week Live Bootcamp

Become a Job-Ready
AI-First Product Manager
in 12 Weeks

A 12-week, mentor-led live cohort for PMs, engineers & builders who want to ship a portfolio-ready AI product — RAG, agents, evals, GTM, with dedicated interview prep baked in and a live Demo Day.

12 Weeks Live
45 Live Sessions
8h Per Week
2x Mock Interviews
Shailesh Sharma
AI Product Builder Cohort · IIT Kanpur & IIM Bangalore Alumni · YouTube · 15K Followers
RAG AI Agents Evals Demo Day
Sat + Sun · 10:30 AM–12:30 PM & 2:30–4:30 PM IST
45 live sessions
Live Demo Day + certificate
10 Interview Prep + 10 Demo Sessions
Portfolio-ready AI product capstone
Alumni community + job board
Who this is for

Built for builders who want to ship real AI

Not for passive learners. This is a doing cohort — you ship something real by Week 12.

PMs transitioning into AI PM roles — building hands-on credibility with working prototypes, not just theory

Mid-to-senior PMs (3–10 years) who want to lead AI initiatives or move to AI-first companies

Engineers, data scientists, and designers pivoting into AI Product Management roles

PMs preparing for AI PM interviews — product sense, metrics, strategy, behavioural, and technical AI questions

Builders who want a structured path through AI fundamentals, RAG, AI agents, evals, and GTM — with a capstone

Anyone who has watched scattered AI tutorials and wants a single, sequenced, mentor-led program

Reviews & Testimonials

What our students are saying

Real feedback from students who've learned with Shailesh across courses, YouTube, mentorship, and 1:1 coaching.

🎬 Video Testimonials
Harshit's Story · PM at Indeed
Aishwarya's Story · PM at Microsoft
Shikhar's Story · PM at Shipturtle
📸 Feedback & Reviews
⭐ 5/5 Avg Rating
12 wks Idea to Portfolio
100% Shipped a Capstone
12-Week Curriculum

Week-by-week breakdown

Saturdays + Sundays · 4 sessions/week (10:30 AM–12:30 PM & 2:30–4:30 PM IST) · 8 hrs/week · Weeks 1–10

Phase 1 · Weeks 1–3
AI Foundations
Week 1
AI Fundamentals & Algorithms
Lock your capstone problem statement
Build AI literacy from first principles. Understand what makes a product 'AI-powered' and lock in your capstone problem statement.
🔵 Sat AM — Concept: AI Fundamentals
  • Supervised vs Unsupervised Learning — how models learn from labelled vs unlabelled data
  • What makes a product 'AI-powered' vs just data-driven
  • The AI Flywheel: how user data compounds model quality and moat
  • AI product taxonomy: predictive, generative, agentic
🔵 Sat PM — Concept: Algorithms & Use Cases
  • Logistic Regression, Clustering, Decision Trees, SVM, Random Forest, XGBoost
  • Match each algorithm to a real PM use case
  • Identify the ML technique for your capstone + write algorithm selection rationale
📝 Sun AM — Interview Prep 1: First Principles
  • CIRCLES and STAR frameworks for AI PM interviews
  • Clarifying questions: user, metric, constraint, timeline, model constraints
  • Build your personal clarifying question bank (10 Qs)
🟣 Sun PM — Office Hours
  • Q&A on Week 1 concepts + assignment review
  • Review AI Opportunity Canvas submissions
Capstone Milestone: AI Opportunity Canvas + clarifying question bank (10 Qs) delivered.
Week 2
Generative AI & ML Systems
LLMs, transformers, production pipelines
Build a real mental model of how LLMs and ML production systems actually work — and apply it to your capstone.
🔵 Sat AM — Concept: Generative AI Deep Dive
  • Deep Learning: neurons, backpropagation, transformer architecture
  • LLMs: tokenisation, attention, pre-training vs fine-tuning vs RLHF
  • Advanced Prompting: CoT, ToT, few-shot, system prompts
🔵 Sat PM — Concept: ML Systems & Pipelines
  • Training vs inference pipelines, batch vs streaming data
  • Feature stores, model registries, experiment tracking
  • Monitoring: data drift, model degradation, latency vs accuracy tradeoffs
📝 Sun AM — Interview Prep 2: AI Product Sense
  • Evaluating AI features: accuracy, latency, trust, explainability
  • Improvement framework: current state → pain points → prioritised solutions
  • Answer 2 product sense questions live + write structured improvement pitch
🟣 Sun PM — Office Hours
  • GenAI Q&A + pipeline review + prompt engineering practice
  • Peer review of pipeline maps
Capstone Milestone: GenAI feature defined. ML pipeline map + 2 product sense answers delivered.
Week 3
GenAI Tools, AI PM Role & RAG
Ship your first working RAG prototype
Understand the modern AI stack and ship your first working RAG prototype.
🔵 Sat AM — Concept: GenAI Tools & AI PM Role
  • GenAI tool landscape: ChatGPT, Claude, Gemini, Cursor, Midjourney
  • AI product stack layers: infra → model → application → UX
  • What AI PMs actually do in 2026: scope, skills, cross-functional dynamics
🔵 Sat PM — Concept: RAG Deep Dive
  • RAG architecture: retriever + generator + knowledge base
  • Vector DBs, embeddings, chunking strategies, retrieval tuning
  • RAG vs fine-tuning — PM decision framework
📝 Sun AM — Interview Prep 3: Metrics RCA & Guesstimate
  • RCA for AI: segmentation by cohort, device, model version
  • Guesstimate frameworks: market sizing, cost estimation
  • Solve 2 RCA + 2 guesstimate questions live
🟢 Sun PM — Demo Hours: Build RAG Prototype
  • Build a simple RAG prototype using no-code tools
  • Connect knowledge base, configure retrieval, test 5 queries
  • Peer pairing: test each other's prototypes
Capstone Milestone: AI stack diagram + RAG prototype + 3 RCA / guesstimate answers.
Phase 2 · Weeks 4–7
Building AI Products
Week 4
AI Agents: Concept + Hands-on Build
Design and prototype your AI agent same day
Design and prototype an AI agent — autonomy levels, tool use, memory, HITL — same weekend as the concept.
🔵 Sat AM — Concept: AI Agents Deep Dive
  • Autonomy levels L1–L4, tool use, memory types (short/long/episodic)
  • Planning loops: ReAct, chain-of-agents, reflection patterns
  • Multi-agent systems: orchestrator-worker, MCP, A2A
  • Real examples: Claude computer use, Copilot agent mode, Devin
🟢 Sat PM — Demo: AI Agents Hands-on Build
  • Design the agentic workflow for your capstone
  • Prototype using Claude, LangChain, or no-code tools
  • Test on 3 scenarios: happy path, edge case, failure
📝 Sun AM — Interview Prep 4: Metrics NSM & Execution
  • North Star metric + counter-metrics + guardrail metrics
  • 'DAU dropped 20% — walk me through it' execution questions
  • OKR setting for AI product teams
🟢 Sun PM — Demo Hours: Agent Prototyping Continued
  • Refine agent based on Sat test results
  • Continue MCP & A2A prototyping
  • Peer review + mentor feedback on architecture decisions
Capstone Milestone: Agent architecture designed + prototype built + NSM + execution answers banked.
Week 5
Advanced Evals: Spec → Test → Launch Threshold
Write production evals, define "good enough to ship"
Write production evals, build a golden test set, and decide what 'good enough to ship' means for your product.
🔵 Sat AM — Concept: Advanced Evals
  • Eval types: automated metrics, human evaluation, LLM-as-judge
  • BLEU, ROUGE, faithfulness, groundedness, RAGAS for RAG
  • Eval pipeline tools: LangSmith, Braintrust, PromptFoo
🟢 Sat PM — Demo: Evals Hands-on Implementation
  • Write the eval spec for your capstone AI feature
  • Create 10 golden test cases (happy path + edge cases)
  • Run manual evals, score results, set launch threshold
📝 Sun AM — Interview Prep 5: AI Strategy, Growth & Pricing
  • Growth: PLG vs sales-led, AI data network effects
  • Pricing: token-based, usage-based, outcome-based, flat subscription
  • 'How would you grow Perplexity by 10x?' — answer live
🟣 Sun PM — Office Hours: Eval Spec Review
  • Review eval specs and golden test sets
  • Q&A on eval metrics + troubleshoot scoring issues
Capstone Milestone: Eval framework written + golden test set (10 cases) + launch threshold defined + growth & pricing answers banked.
Week 6
Spec-Driven Dev & Live Build
Spec → prototype → test → ship in one weekend
Master spec-driven development. Use Claude/AI tools to go from spec to working prototype — in one weekend.
🔵 Sat AM — Concept: Spec-Driven Dev & Claude Skills
  • Anatomy of a great spec: context, user stories, constraints, examples, eval criteria
  • Vibe coding: prototyping without full engineering support
  • Spec-first AI development — the modern PRD-to-shipped handoff
🟢 Sat PM — Demo: Spec → Prototype Live Build
  • Write full spec → generate prototype using Claude/AI tools
  • Iterate: spec → build → test → revise → rebuild
  • Document build process as mini case study
📝 Sun AM — Interview Prep 6: GTM & Market Entry
  • 'How would you launch a new AI coding assistant?' — end to end
  • Structuring GTM: market sizing → beachhead → channels → metrics
  • AI-specific GTM: trust, explainability, enterprise procurement
🟢 Sun PM — Demo Hours: Lovable Prototyping
  • Continue building on Lovable
  • Run eval golden set against prototype
  • Mentor office hours for stuck projects
Capstone Milestone: Full spec written + working prototype built + tested against evals + GTM answers banked.
Week 7
AI Product Design & UX Polish
Trust signals, HITL, AI-native UX patterns
Design for non-determinism. Polish your prototype with trust signals, HITL, and AI-native UX patterns.
🔵 Sat AM — Concept: AI Product Design & UX
  • Designing for non-determinism, trust calibration, streaming + skeleton screens
  • HITL patterns: approval flows, correction loops, escalation
  • Explainability UX, confidence indicators, anti-patterns
🟢 Sat PM — Demo: UX Audit & Prototype Polish
  • 8-principle UX audit of your capstone feature
  • Redesign error + loading states; write AI UX copy
  • Peer UX review, polish prototype based on audit findings
📝 Sun AM — Interview Prep 7: Behavioural Interview
  • STAR for AI PM contexts: shipping under uncertainty, data science disagreements
  • 'Why AI PM?' — authentic answers
  • Start building your 5-story bank
🟢 Sun PM — Demo Hours: N8N Agent Building
  • Hands-on N8N agent building session
  • Final prototype refinements + mini case study completion
Capstone Milestone: UX audit complete + prototype polished + mini case study finalised.
Phase 3 · Weeks 8–10
Strategy & Depth
Week 8
AI Risks, Biases & Product Metrics
Responsible AI + the full metrics stack
Add the responsible-AI and analytics layer your capstone needs to ship credibly.
🔵 Sat AM — Concept: AI Risks & Biases
  • Bias types: data, representation, algorithmic, output
  • Hallucinations: why LLMs confabulate + PM mitigation strategies
  • Regulatory landscape 2026: EU AI Act, India DPDP Act
🔵 Sat PM — Concept: Product Metrics & AI Analytics
  • AI operational metrics: accuracy, token cost, latency, uptime
  • A/B testing AI features: non-determinism + evaluation lag challenges
  • Using AI to analyse metrics: SQL + LLM combos
📝 Sun AM — Interview Prep 8: AI General Questions
  • RAG vs fine-tuning, transformers explained, LLM production risks
  • STAR bank: 5 stories + 5 AI general knowledge Qs answered
  • Record 'Why AI PM?' and review
🟣 Sun PM — Office Hours: Risk + Metrics Q&A
  • Review risk audits + metrics frameworks
  • Peer feedback on dashboards + STAR answer practice
Capstone Milestone: Risk audit + responsible AI section + full metrics stack (NSM, guardrails, A/B test plan) + STAR bank (5 stories).
Week 9
GTM, Model Selection & Mock Interview Round 1
Moats, model decisions, go-to-market + first full mock loop
Lock GTM strategy, make your model selection decision, and complete your first full 3-round mock interview loop.
🔵 Sat AM — Concept: GTM & Market Entry
  • PLG vs sales-led vs community-led for B2B AI tools
  • Competitive moats: data, distribution, UX, switching cost, speed
  • Launch playbook: beta → early access → GA
🔵 Sat PM — Concept: Model Selection, Latency & Tradeoffs
  • Model comparison: accuracy, latency, cost/token, context window
  • Cost-quality tradeoffs: GPT-4o vs Haiku vs open-source
  • Build vs buy vs fine-tune decision matrix
📝 Sun AM — Interview Prep 9: Evals, Model Selection & Tradeoffs
  • 'How would you evaluate a RAG customer support bot?'
  • 'Our AI feature takes 8 seconds — what do you do?'
  • Framework: constraints → options → criteria → recommendation
📝 Sun PM — Mock Interview Round 1
  • Full 3-round loop (45–60 min): Product Sense + Metrics + Strategy
  • Mentor as interviewer with real-time scoring
  • Structured feedback + top 2 improvement areas per round
Capstone Milestone: GTM one-pager + model selection matrix + cost estimates + latency strategy. Mock Round 1 scores recorded.
Week 10
Enterprise Case Studies & Mock Interview Round 2
Pattern recognition + final interview mastery
Analyse real enterprise AI launches, complete both mock rounds, and lock your 15-answer story bank — job-ready before Demo Day.
🔵 Sat AM — Concept: Enterprise AI Case Studies (B2C)
  • Consumer AI product: idea → scale — full post-mortem
  • Apply Weeks 1–9 frameworks to analyse each case
🔵 Sat PM — Concept: Enterprise AI Case Studies (B2B)
  • Enterprise AI deployment: procurement, pilot, rollout
  • Common success patterns + post-mortems of AI product failures
📝 Sun AM — Mock Interview Round 2
  • Full 3-round loop: Behavioural + Technical AI + Case Study
  • Stricter scoring than Round 1, peer interviewer practice
  • Compare Round 1 vs Round 2 scores + final readiness assessment
🟣 Sun PM — Office Hours: Interview Debrief + Story Bank
  • Group debrief: common mock interview mistakes
  • Rewrite weakest 3 answers + finalise 15-answer story bank
  • Practice elevator pitch for capstone
Capstone Milestone: Case study teardown + interview prep doc + 15-answer story bank finalised. Both mock rounds complete.
Phase 4 · Weeks 11–12
Demo Prep & Demo Day
Week 11
Capstone Refinement & Mock Demo
Turn your capstone into a tight 8-minute story
Turn your capstone into a tight 8-minute story. Rehearse until it's Demo-Day ready.
🔵 Sat AM — Concept: Capstone Refinement
  • Capstone structure: Problem → User → AI Solution → Stack → Evals → Metrics → Risks → GTM
  • Handling 'why not just use ChatGPT?' — peer review with scoring rubric
🎤 Sat PM — Mock Demo Presentations
  • Live mock demo: 5 min each + mentor feedback + panel Q&A practice
  • Peer feedback on 2 projects + tighten narrative
🎤 Sun AM — Final Polish + Rehearsal
  • Full timed run-through + interview framing of capstone
  • Write 2 interview answers using capstone as story
  • Last-minute troubleshooting + confidence building
Capstone Milestone: Full deck (10–12 slides) + demo video recorded + mock demo delivered. Final version ready for Demo Day.
Week 12
🎤 Demo Day
Live presentation · Portfolio published · Certificate awarded
Ship publicly. Present your AI product to a panel. Walk away job-ready with portfolio + certificate.
🎤 Demo Day Session 1
  • 8-min capstone presentation + 5-min Q&A from mentor + peer judge panel
  • Scoring: problem clarity, solution depth, technical credibility, metrics
🎤 Demo Day Session 2
  • Remaining presentations + Best Project awards
  • Speed interview round: 2 Qs per learner in front of cohort
Wrap-up & Next Steps
  • Portfolio published + LinkedIn launch post
  • Certificate awarded + alumni community access
  • Async course + job board + accountability partner assigned
✅ COURSE COMPLETE. Portfolio-ready capstone delivered. Interview answers polished. AI PM job-ready.
Capstone Milestones

Week-by-week deliverable map

Every learner builds toward the same outcome — a portfolio-ready AI product with full spec, prototype, evals, GTM, and live demo.

Wk | Milestone | What You Deliver | Format
01 | Problem Definition | AI Opportunity Canvas — problem, user segment, solution gaps, AI angle | Notion doc / 1-pager
02 | Technical Foundation | ML Pipeline Map — algorithm selected, data pipeline, model card | Diagram + rationale
03 | RAG Prototype | RAG design + working prototype: architecture, knowledge base, retrieval strategy | Architecture doc + prototype
04 | Agent Prototype | Agent design + working prototype: autonomy level, tools, memory, HITL, tested on 3 scenarios | Architecture doc + prototype
05 | Eval Framework | Eval spec + 10-case golden set + launch threshold + evals run on prototype | Eval spec + test results
06 | Full Spec + Prototype | Complete spec + polished prototype, mini case study started | Spec doc + prototype link
07 | UX Audit + Polish | 8-principle UX audit, HITL pattern, AI UX copy, prototype finalised, case study complete | UX audit + prototype
08 | Quality & Metrics | Risk audit + responsible AI section + NSM/supporting/guardrail metrics + A/B test plan | Risk doc + metrics
09 | Strategy + Model Decision | GTM one-pager + model comparison matrix + cost + latency strategy | Strategy doc + model doc
10 | Interview Ready | 15-answer story bank, 2 mock rounds scored, enterprise case study teardown | Interview doc + case analysis
11 | Presentation Ready | Full capstone deck (10–12 slides) + demo video + mock demo + 2 interview answers | Deck + demo video
12 | 🎤 Demo Day | Live 8-min presentation + portfolio published + LinkedIn post + certificate awarded | Live demo + portfolio
Everything Included

Every single thing you get

Not a recorded course. Every item below is live, hands-on, and mentor-guided.

45 live sessions across 12 weeks
10 hands-on Demo / Demo Hours sessions
8 structured Interview Prep modules
2 full Mock Interview rounds (mentor scored)
Office Hours throughout for Q&A + STAR practice
Portfolio-ready AI product capstone (all 12 weeks)
Working RAG prototype + AI Agent prototype
Eval framework with golden test set + launch threshold
Spec-driven dev workflow using Claude / AI tools
8-principle UX audit + AI UX copy
Bias audit + Responsible AI section + 3 launch guardrails
Full metrics stack: NSM + supporting + guardrails + A/B
GTM one-pager + model selection matrix + cost strategy
15-answer STAR story bank across 2 mock rounds
Final capstone deck (10–12 slides) + recorded demo video
🎤 Live Demo Day + Best Project awards + speed interviews
Certificate of completion
Async course access + alumni community + job board
All session recordings + bonus content vault
Accountability partner for post-cohort job search
Core Outcomes

What you walk away with

Every output below is built, not watched. You'll have a portfolio to show on Demo Day.

01

A portfolio-ready AI capstone: problem → spec → prototype → evals → GTM → Demo Day

02

AI fundamentals mastered — supervised/unsupervised learning, deep learning, LLMs, CoT + ToT prompting

03

Working RAG system and AI agent prototype with documented architecture and HITL checkpoints

04

Production-grade evals — golden test sets, RAGAS, LLM-as-judge, launch threshold defined

05

AI PM strategy depth — model selection, latency, cost-quality tradeoffs, GTM, competitive moats

06

8 structured Interview Prep modules + 2 full mock rounds with mentor scoring and written feedback

07

15-answer STAR story bank, polished interview prep doc, capstone framed as interview answers

08

Live Demo Day — panel Q&A, certificate awarded, portfolio published, LinkedIn launch post

Frequently Asked

Your questions, answered

Do I need to be a developer or engineer to join?
No. The cohort is designed for PMs, engineers, designers, and data scientists alike. All prototyping is done using no-code and AI tools (Claude, LangChain, no-code RAG builders). You need to be able to think through product decisions and use AI tools to build — not write production code.
How much time per week do I need to commit?
8 hours per week for Weeks 1–10 (4 live sessions of 2 hrs each on Saturday + Sunday). 6 hours per week for Weeks 11–12 (Demo Prep and Demo Day). All sessions run on Saturday and Sunday mornings and afternoons IST — designed not to conflict with weekday work.
What if I miss a session?
All sessions are recorded and shared. Office Hours run throughout the cohort so you can catch up. We strongly recommend attending live — the real value is in Q&A, peer reviews of your capstone work, and real-time mentor feedback on your specific prototype.
What does the capstone look like at the end?
A complete portfolio-ready AI product: a written spec, working prototype (RAG or agent), eval framework with a golden test set and launch threshold, GTM one-pager, risk + metrics doc, a polished 10–12 slide deck, and a 3-min recorded demo video. You present it live on Demo Day in front of a panel.
Is this heavy on theory or hands-on building?
Heavily hands-on. Every concept week is paired with a Demo/Demo Hours session where you build something the same week. By the end you will have built a working RAG prototype, a working AI agent, run your own evals, and shipped a full spec-driven prototype — all before Demo Day.
How does the application process work?
Fill in the application form. The cohort is capped — applications are reviewed on a rolling basis. If you're a fit, you'll receive the payment link. Seats are allocated in order of approved application + payment.

Ready to Build Your AI Product?

12 weeks. 45 live sessions. Working prototypes. Mock interviews. Live Demo Day. Rolling applications.

Apply Now WhatsApp