Capstone: AI News Podcast Agent
You've mastered ADK's individual components: agents, custom tools, state management, callbacks, and multi-agent delegation. Now you'll compose them into a complete production system: a News Podcast Agent that autonomously researches emerging technology topics, enriches them with financial context, and generates audio podcasts ready for distribution.
This isn't a toy project. It's a Digital FTE candidate, a system you could package, deploy, and sell as a recurring service. Anyone who needs the latest AI news could subscribe, and your agent works 24/7: gathering insights, synthesizing context, and producing listening material.
This lesson demonstrates:
- Specification-first architecture (write the system design before code)
- Root coordinator pattern (how agents orchestrate)
- End-to-end workflow from news research → financial analysis → podcast generation
- Production guardrails through callbacks
- Deployment readiness through error resilience
System Architecture
Your capstone composes five components into a single system:
┌──────────────────────────────────────────────┐
│                User Request                  │
│         "Get me the latest AI news"          │
└──────────────────────┬───────────────────────┘
                       ↓
┌──────────────────────────────────────────────┐
│           ROOT AGENT (Coordinator)           │
│                                              │
│  Responsibilities:                           │
│  - Acknowledge user request                  │
│  - Route to specialized agents               │
│  - Combine results                           │
│  - Manage process logging                    │
│  - Handle errors gracefully                  │
└──────────────────────┬───────────────────────┘
                       │
      ┌───────────┬────┴──────┬────────────┐
      ↓           ↓           ↓            ↓
 ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌───────────┐
 │  NEWS   │ │   FIN   │ │  SAVE   │ │ PODCASTER │
 │  AGENT  │ │  AGENT  │ │ REPORT  │ │   AGENT   │
 └─────────┘ └─────────┘ └─────────┘ └───────────┘
      ↓           ↓           ↓            ↓
   Search     Get Stock    Save MD     Generate
   Topics     Context      File        Audio
                       ↓
              ┌────────────────┐
              │  Final Report  │
              │  + Audio File  │
              └────────────────┘
Component Roles
Root Agent (Coordinator)
- Entry point for user requests
- Orchestrates news search, financial enrichment, report generation, podcast creation
- Maintains process log for debugging
- Handles timeouts and partial failures
- Returns complete outcome (report + audio)
News Agent
- Specialized in researching recent news
- Uses callbacks to filter out suspicious sources
- Returns structured AINewsReport (topic, summary, sources)
- Decorated with logging for transparency
Financial Agent
- Enriches news with market context (stock prices, adoption metrics)
- Integrates yfinance for real data
- Decorated with caching (avoid API rate limits)
- Returns structured financial context
Report Generator Tool
- Converts structured data to markdown
- Saves to filesystem with metadata
- Decorated with validation
- Returns file path for downstream use
Podcaster Agent
- Generates podcast script from markdown
- Delegates to Gemini TTS for audio generation
- Supports multi-speaker voices (Joe interviewer, Jane expert)
- Returns audio file path + metadata
Specification (Layer 4 Requirement)
Before implementing, you must write the specification that drives the system design:
# AI News Podcast Agent Specification
## Intent
Autonomously research emerging AI topics, enrich with financial context, and generate
audio podcasts for distribution.
## Success Criteria
- User input "topic" → System produces markdown report + MP3 audio file
- Report contains: Topic summary, 3+ sources, key insights, adoption metrics
- Audio: 5-10 minutes, natural voice, conversation-style delivery
- All operations log actions for audit trail
- System continues on partial failures (if audio fails, report still delivered)
## Constraints
- Must filter out non-technical sources (callbacks)
- Max 30 seconds per tool call (timeout)
- No hallucinated data (all numbers from real APIs or clearly marked "estimated")
- Audio generation supports multi-speaker (Joe, Jane voices)
## Composition
1. Root Agent (Coordinator)
- Receives user topic
- Orchestrates News + Financial + Report + Podcaster agents
- Returns combined result
2. News Agent
- Calls google_search for recent news
- Filters by domain (must be tech-focused source)
- Returns: AINewsReport(topic, summary, sources: List[str])
3. Financial Agent
- Enriches with market data (yfinance)
- Returns: FinancialContext(adoption_trend, stock_impact, key_metrics)
4. Report Generator (Tool)
- Converts News + Financial context to markdown
- Saves file
- Returns: file_path
5. Podcaster Agent
- Reads markdown report
- Generates podcast script
- Calls TTS with multi-speaker config
- Returns: audio_file_path
## Output Artifacts
- `reports/[topic]-[timestamp].md` - Markdown report with full context
- `podcasts/[topic]-[timestamp].mp3` - Generated audio
- `logs/process-[timestamp].log` - Complete audit trail
## Non-Goals
- Live streaming podcast
- Multi-language support (English only)
- Scheduled generation (on-demand only)
- Custom voice training
This specification describes WHAT the system does without prescribing HOW. Your implementation will follow this contract.
Complete Implementation
Step 1: Project Structure
news-podcast-agent/
├── agent.py             # Root coordinator
├── news_agent.py        # Specialized news research
├── financial_agent.py   # Stock/adoption enrichment
├── podcaster_agent.py   # Audio generation
├── tools.py             # Shared tools and schemas
├── tools_generator.py   # Report generator tool
├── callbacks.py         # Safety guardrails
├── .env                 # API keys (git-ignored)
├── reports/             # Output markdown files
├── podcasts/            # Generated audio files
├── logs/                # Process audit trails
└── requirements.txt
Create project:
adk create news-podcast-agent
cd news-podcast-agent
pip install -r requirements.txt
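The generated template won't list every dependency this capstone uses. A plausible requirements.txt to start from (unpinned package names; adjust versions to your environment):
google-adk
google-genai
pydantic
yfinance
fastapi
uvicorn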
Step 2: Shared Data Models and Tools
File: tools.py
import json
from datetime import datetime
from pathlib import Path
from typing import Dict, List

from pydantic import BaseModel, Field

# OUTPUT SCHEMAS

class NewsSource(BaseModel):
    title: str = Field(..., description="Article title")
    url: str = Field(..., description="Source URL")
    source: str = Field(..., description="Publication name")
    date: str = Field(..., description="Publication date")

class AINewsReport(BaseModel):
    topic: str = Field(..., description="Research topic")
    summary: str = Field(..., description="2-3 sentence overview")
    key_insights: List[str] = Field(..., description="3+ major findings")
    sources: List[NewsSource] = Field(..., description="Article sources")

class FinancialContext(BaseModel):
    adoption_metric: str = Field(..., description="e.g., '45% of enterprises'")
    trend: str = Field(..., description="Adoption direction: growing/stable/declining")
    market_impact: str = Field(..., description="Business implications")
    key_companies: List[str] = Field(..., description="Leading implementers")

class AudioMetadata(BaseModel):
    filename: str = Field(..., description="Output file name")
    duration_seconds: int = Field(..., description="Audio length")
    speaker_count: int = Field(..., description="Number of voices")
    format: str = Field(..., description="Audio format (mp3, wav, etc)")

# SHARED TOOLS

def ensure_directories():
    """Create output directories if they don't exist."""
    Path("reports").mkdir(exist_ok=True)
    Path("podcasts").mkdir(exist_ok=True)
    Path("logs").mkdir(exist_ok=True)

def log_process(step: str, details: Dict) -> str:
    """Log process steps for the audit trail."""
    timestamp = datetime.now().isoformat()
    log_entry = {
        "timestamp": timestamp,
        "step": step,
        "details": details,
    }
    log_file = Path("logs") / f"process-{datetime.now().strftime('%Y%m%d')}.log"
    with open(log_file, "a") as f:
        f.write(json.dumps(log_entry) + "\n")
    return f"[{timestamp}] {step}: {json.dumps(details)}"
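The per-agent helpers in later steps also need a way to execute a single request against an ADK agent. Here is a minimal single-shot runner, sketched against google-adk's Runner and InMemorySessionService (create_session is async in recent releases; verify the signatures against your installed version):
# tools.py (continued) — single-shot execution helper for ADK agents
from google.adk.runners import Runner
from google.adk.sessions import InMemorySessionService
from google.genai import types as genai_types

async def run_agent_once(agent, prompt: str, app_name: str = "news_podcast") -> str:
    """Send one prompt to an agent and return its final text response."""
    session_service = InMemorySessionService()
    runner = Runner(agent=agent, app_name=app_name, session_service=session_service)
    session = await session_service.create_session(app_name=app_name, user_id="local")
    final_text = ""
    async for event in runner.run_async(
        user_id="local",
        session_id=session.id,
        new_message=genai_types.Content(role="user", parts=[genai_types.Part(text=prompt)]),
    ):
        # The last content-bearing event is the agent's final answer
        if event.is_final_response() and event.content and event.content.parts:
            final_text = event.content.parts[0].text or ""
    return final_text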
Step 3: Callbacks for Guardrails
File: callbacks.py
import logging
from typing import Any, Dict, Optional

logger = logging.getLogger(__name__)

# Valid tech news sources
TRUSTED_SOURCES = {
    "technews.com",
    "theverge.com",
    "arstechnica.com",
    "techcrunch.com",
    "github.com",
    "arxiv.org",
    "openai.com",
    "deepmind.google",
    "anthropic.com",
    "gemini.google",
}

# Signatures follow ADK's tool-callback contract:
# before: (tool, args, tool_context); after: (tool, args, tool_context, tool_response)

def before_tool_callback(tool, args: Dict[str, Any], tool_context) -> Optional[Dict]:
    """
    Called before each tool invocation.
    Logs the call for the audit trail; returning None lets the call proceed.
    """
    logger.info(f"Tool call: {tool.name}")
    logger.debug(f"Arguments: {args}")
    return None

def after_tool_callback(tool, args: Dict[str, Any], tool_context, tool_response: Any) -> Optional[Dict]:
    """
    Called after each tool execution.
    Filters search results to trusted sources; returning None keeps the
    original response, returning a dict replaces it.
    """
    if tool.name == "google_search":
        # Filter search results to trusted sources only. The response shape
        # depends on the tool, so we defensively look for a "results" list.
        if isinstance(tool_response, dict) and "results" in tool_response:
            filtered_results = []
            for result in tool_response.get("results", []):
                # Extract domain from URL
                url = result.get("url", "")
                domain = extract_domain(url)
                if is_trusted_source(domain):
                    filtered_results.append(result)
                else:
                    logger.warning(f"Filtered untrusted source: {domain}")
            # Return filtered results
            return {
                "results": filtered_results,
                "original_count": len(tool_response.get("results", [])),
                "filtered_count": len(filtered_results),
            }
    return None

def extract_domain(url: str) -> str:
    """Extract the domain from a URL."""
    try:
        from urllib.parse import urlparse
        parsed = urlparse(url)
        return parsed.netloc.replace("www.", "")
    except Exception:
        return ""

def is_trusted_source(domain: str) -> bool:
    """Check if the domain matches an entry in the trusted sources list."""
    domain_lower = domain.lower()
    return any(trusted in domain_lower for trusted in TRUSTED_SOURCES)
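You can sanity-check the filtering helpers in isolation before wiring them to an agent (the sample URLs are illustrative):
from callbacks import extract_domain, is_trusted_source

assert extract_domain("https://www.theverge.com/ai/123") == "theverge.com"
assert is_trusted_source("theverge.com") is True
assert is_trusted_source("random-hype-blog.net") is False
Note that is_trusted_source does substring matching, so a hostile domain embedding a trusted name (for example not-theverge.com.evil.net) would pass. Exact or suffix matching against the set is the stricter choice for production.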
Step 4: News Agent (Specialized)
File: news_agent.py
from google.adk.agents import Agent
from google.adk.tools import google_search

from callbacks import after_tool_callback, before_tool_callback
from tools import log_process, run_agent_once

def create_news_agent() -> Agent:
    """
    Creates a specialized agent for researching recent AI news.
    The registered callbacks filter tool results to trusted sources.
    """
    return Agent(
        name="NewsResearchAgent",
        model="gemini-2.5-flash",
        instruction="""You are an AI news researcher. Your job is to:
1. Search for RECENT (past 7 days) AI and technology news
2. Filter to credible technical sources (no hype sites)
3. Extract key developments, company announcements, and research breakthroughs
4. Return a structured report with topic, summary, and sources

Be precise: only report facts you can verify. Avoid speculation.

Format: return your findings as JSON matching this schema:
{
  "topic": "string",
  "summary": "2-3 sentence overview",
  "key_insights": ["insight1", "insight2", "insight3"],
  "sources": [
    {"title": "...", "url": "...", "source": "...", "date": "..."}
  ]
}""",
        tools=[google_search],
        before_tool_callback=before_tool_callback,
        after_tool_callback=after_tool_callback,
    )

async def research_news(agent: Agent, topic: str) -> str:
    """
    Execute news research and return the agent's structured report.
    """
    prompt = (
        f"Research the latest AI news about: {topic}. "
        "Return JSON with topic, summary, key_insights, and sources."
    )
    response = await run_agent_once(agent, prompt)
    log_process("news_research", {"topic": topic, "status": "completed"})
    return response
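The agent returns its report as JSON text; before passing it downstream you can validate it against the AINewsReport schema from tools.py. A minimal sketch, assuming Pydantic v2 (use parse_obj on v1) and defensively stripping a markdown fence if the model added one:
import json

from tools import AINewsReport

def parse_news_report(raw: str) -> AINewsReport:
    """Validate the agent's JSON text against the AINewsReport schema."""
    text = raw.strip()
    if text.startswith("```"):
        # Strip a ```json ... ``` fence the model may have wrapped around the payload
        text = text.split("```")[1]
        if text.startswith("json"):
            text = text[len("json"):]
    return AINewsReport.model_validate(json.loads(text))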
Step 5: Financial Agent
File: financial_agent.py
from google.adk.agents import Agent
from google.adk.tools import google_search

from tools import log_process, run_agent_once

def create_financial_agent() -> Agent:
    """
    Creates the agent for financial/adoption context enrichment.
    """
    return Agent(
        name="FinancialContextAgent",
        model="gemini-2.5-flash",
        instruction="""You are a financial analyst for technology adoption.
Your job is to enrich news with market context:
1. Look up stock prices and trends (use tools if available)
2. Assess adoption rates in enterprise/consumer markets
3. Estimate business impact and opportunity size
4. Identify key companies benefiting from the trend

Return JSON:
{
  "adoption_metric": "e.g., 45% of enterprises by Q1 2025",
  "trend": "growing|stable|declining",
  "market_impact": "Business implications and opportunity size",
  "key_companies": ["Company1", "Company2"]
}""",
        tools=[google_search],
    )

async def enrich_with_financial_context(agent: Agent, topic: str, news_summary: str) -> str:
    """
    Execute financial analysis for topic enrichment.
    """
    prompt = f"""Based on this news: "{news_summary}"

Provide financial/adoption context for: {topic}
Include adoption metrics, market impact, and key players."""
    response = await run_agent_once(agent, prompt)
    log_process("financial_enrichment", {"topic": topic, "status": "completed"})
    return response
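The component description promises yfinance integration and caching, but the agent above only carries google_search. A hedged sketch of that function tool (the file and function names are ours; note that some ADK versions don't allow mixing the built-in google_search with function tools on one agent, in which case this tool belongs on a separate agent or directly on the coordinator):
# financial_tools.py — hypothetical yfinance helper with a naive cache
from functools import lru_cache

import yfinance as yf

@lru_cache(maxsize=128)  # avoids hammering the API on repeated topics
def get_stock_snapshot(symbol: str) -> dict:
    """Return a one-month price snapshot for a ticker symbol."""
    hist = yf.Ticker(symbol).history(period="1mo")
    if hist.empty:
        return {"symbol": symbol, "error": "no data returned"}
    first, last = float(hist["Close"].iloc[0]), float(hist["Close"].iloc[-1])
    return {
        "symbol": symbol,
        "last_close": round(last, 2),
        "one_month_change_pct": round((last - first) / first * 100, 2),
    }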
Step 6: Report Generator Tool
File: tools_generator.py
from datetime import datetime
from pathlib import Path
from typing import Dict

from tools import log_process

def save_news_to_markdown(
    topic: str,
    news_report: Dict,
    financial_context: Dict,
) -> str:
    """
    Convert structured news + financial context to a markdown file.

    Returns: Path to the saved markdown file
    """
    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    filename = f"reports/{topic.lower().replace(' ', '_')}-{timestamp}.md"

    # Build markdown
    md_content = f"""# AI News Report: {topic}

**Generated:** {datetime.now().isoformat()}

## Summary

{news_report.get('summary', 'No summary available')}

## Key Insights

"""
    for insight in news_report.get('key_insights', []):
        md_content += f"- {insight}\n"

    md_content += "\n## Financial Context\n\n"
    md_content += f"- **Adoption:** {financial_context.get('adoption_metric', 'Data pending')}\n"
    md_content += f"- **Trend:** {financial_context.get('trend', 'Unknown')}\n"
    md_content += f"- **Market Impact:** {financial_context.get('market_impact', 'Analyzing...')}\n"
    md_content += f"- **Key Players:** {', '.join(financial_context.get('key_companies', []))}\n"

    md_content += "\n## Sources\n\n"
    for source in news_report.get('sources', []):
        md_content += (
            f"- [{source.get('title', 'Article')}]({source.get('url', '#')}) - "
            f"{source.get('source', 'Unknown')} ({source.get('date', 'Unknown date')})\n"
        )

    # Write file
    Path(filename).parent.mkdir(parents=True, exist_ok=True)
    with open(filename, "w") as f:
        f.write(md_content)

    log_process("report_generation", {
        "topic": topic,
        "filename": filename,
        "status": "saved",
    })
    return filename
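The component list calls for the report generator to be "decorated with validation". A minimal sketch of such a guard (the decorator name is ours):
from functools import wraps

def validate_report_inputs(func):
    """Reject incomplete inputs before anything is written to disk."""
    @wraps(func)
    def wrapper(topic: str, news_report: dict, financial_context: dict) -> str:
        if not topic.strip():
            raise ValueError("topic must be non-empty")
        if not news_report.get("summary"):
            raise ValueError("news_report.summary is required")
        if not isinstance(financial_context, dict):
            raise TypeError("financial_context must be a dict")
        return func(topic, news_report, financial_context)
    return wrapper

# Usage: decorate save_news_to_markdown with @validate_report_inputs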
Step 7: Podcaster Agent with TTS
File: podcaster_agent.py
import wave
from datetime import datetime
from pathlib import Path

from google import genai
from google.genai import types
from google.adk.agents import Agent

from tools import log_process

def create_podcaster_agent() -> Agent:
    """
    Creates the agent for podcast script generation.
    """
    return Agent(
        name="PodcasterAgent",
        model="gemini-2.5-flash",
        instruction="""You are a podcast producer. Your job is to:
1. Read the markdown report provided
2. Convert it into a conversational podcast script
3. Format as dialogue between Joe (interviewer) and Jane (AI expert)
4. Keep the tone professional but engaging
5. Duration: 5-10 minutes of content

Format output as:
Joe: "Opening question about the topic..."
Jane: "Expert response..."
Joe: "Follow-up question..."
""",
    )
async def generate_podcast_audio(
    podcast_script: str,
    filename: str = "ai_podcast",
) -> str:
    """
    Generate multi-speaker audio from a podcast script using Gemini TTS.

    Args:
        podcast_script: Dialogue script (Joe and Jane speakers)
        filename: Output file name (without extension)

    Returns:
        Path to the generated audio file
    """
    client = genai.Client()  # reads GOOGLE_API_KEY from the environment

    def speaker(name: str, voice: str) -> types.SpeakerVoiceConfig:
        return types.SpeakerVoiceConfig(
            speaker=name,
            voice_config=types.VoiceConfig(
                prebuilt_voice_config=types.PrebuiltVoiceConfig(voice_name=voice)
            ),
        )

    try:
        response = client.models.generate_content(
            model="gemini-2.5-flash-preview-tts",  # dedicated TTS model
            contents=f"TTS the following podcast conversation:\n\n{podcast_script}",
            config=types.GenerateContentConfig(
                response_modalities=["AUDIO"],
                speech_config=types.SpeechConfig(
                    multi_speaker_voice_config=types.MultiSpeakerVoiceConfig(
                        speaker_voice_configs=[
                            speaker("Joe", "Puck"),   # authoritative interviewer
                            speaker("Jane", "Kore"),  # conversational expert
                        ]
                    )
                ),
            ),
        )

        # Extract raw audio bytes; Gemini TTS returns 16-bit, 24 kHz mono PCM
        audio_data = response.candidates[0].content.parts[0].inline_data.data

        timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
        output_path = f"podcasts/{filename}-{timestamp}.wav"
        Path("podcasts").mkdir(parents=True, exist_ok=True)

        # Wrap the PCM data in a WAV container so players can read it;
        # transcode afterwards if the spec's .mp3 artifact is required
        with wave.open(output_path, "wb") as wf:
            wf.setnchannels(1)
            wf.setsampwidth(2)
            wf.setframerate(24000)
            wf.writeframes(audio_data)

        log_process("podcast_generation", {
            "filename": output_path,
            "status": "generated",
        })
        return output_path
    except Exception as e:
        log_process("podcast_generation_error", {
            "error": str(e),
            "status": "failed",
        })
        raise
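The spec's audio artifact is an MP3, while the TTS call above yields PCM wrapped as WAV. A small transcoding helper closes that gap, assuming ffmpeg is installed on the host (the helper name is ours):
import subprocess

def wav_to_mp3(wav_path: str) -> str:
    """Transcode the generated WAV to MP3 via ffmpeg."""
    mp3_path = wav_path.rsplit(".", 1)[0] + ".mp3"
    subprocess.run(
        ["ffmpeg", "-y", "-i", wav_path, "-b:a", "128k", mp3_path],
        check=True, capture_output=True,
    )
    return mp3_path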
Step 8: Root Coordinator Agent
File: agent.py
import json
from typing import Any, Dict

from google.adk.agents import Agent
from google.adk.tools.agent_tool import AgentTool

from financial_agent import create_financial_agent
from news_agent import create_news_agent
from podcaster_agent import create_podcaster_agent, generate_podcast_audio
from tools import ensure_directories, log_process, run_agent_once
from tools_generator import save_news_to_markdown
def create_root_coordinator() -> Agent:
    """
    Creates the root coordinator agent that orchestrates the entire pipeline:
    news research → financial enrichment → report generation → podcast production
    """
    return Agent(
        name="NewsCoordinator",
        model="gemini-2.5-flash",
        instruction="""You are the orchestrator for an AI news podcast system.

Your workflow:
1. Acknowledge the user's topic request
2. Research recent news on the topic (NewsResearchAgent tool)
3. Gather financial/adoption context (FinancialContextAgent tool)
4. Create the markdown report (save_news_to_markdown tool)
5. Generate the podcast script (PodcasterAgent tool) and audio (generate_podcast_audio tool)
6. Return a final confirmation with file paths

For each step, log progress. If any step fails, continue with the available results.

Success response format:
{
  "status": "success",
  "topic": "topic name",
  "report_file": "path/to/report.md",
  "audio_file": "path/to/podcast.mp3",
  "summary": "Brief confirmation of what was generated"
}

Error response format (partial success):
{
  "status": "partial",
  "topic": "topic name",
  "completed": ["research", "report"],
  "failed": ["podcast"],
  "report_file": "path/to/report.md",
  "message": "Podcast generation failed, but report is ready"
}""",
        tools=[
            AgentTool(agent=create_news_agent()),
            AgentTool(agent=create_financial_agent()),
            AgentTool(agent=create_podcaster_agent()),
            save_news_to_markdown,
            generate_podcast_audio,
        ],
    )

# `adk web`, `adk run`, and `adk deploy` discover this module-level agent
root_agent = create_root_coordinator()
async def process_podcast_request(topic: str) -> Dict[str, Any]:
    """
    Main entry point: process a user request through the full pipeline.

    Specification contract:
    - Input: topic string
    - Output: markdown report + audio file paths
    - Error handling: continue on partial failures
    - Logging: complete audit trail
    """
    log_process("request_start", {"topic": topic})
    ensure_directories()
    coordinator = create_root_coordinator()

    # Full orchestration happens inside the coordinator's reasoning loop:
    # it delegates to the wrapped sub-agents and tools per its instruction
    prompt = f"""User request: Generate a comprehensive AI news podcast about "{topic}"

Complete the full pipeline:
1. Research recent news and developments
2. Add financial/adoption context
3. Create markdown report (save to reports/)
4. Generate podcast audio (save to podcasts/)
5. Confirm completion with file paths"""

    try:
        response = await run_agent_once(coordinator, prompt)
        log_process("request_complete", {
            "topic": topic,
            "status": "success",
        })
        return {
            "status": "success",
            "topic": topic,
            "response": str(response),
        }
    except Exception as e:
        log_process("request_failed", {
            "topic": topic,
            "error": str(e),
        })
        return {
            "status": "error",
            "topic": topic,
            "error": str(e),
        }
async def main():
    """
    Demonstration: generate a podcast for an AI topic
    """
    result = await process_podcast_request("OpenAI o1 Model")
    print(json.dumps(result, indent=2))

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())
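The spec caps each tool call at 30 seconds, but nothing above enforces it. One way to apply the budget per pipeline step, sketched with asyncio.wait_for (the helper name and logging keys are ours):
import asyncio

from tools import log_process

async def run_step_with_timeout(coro, step_name: str, timeout_s: float = 30.0):
    """Enforce the spec's per-step time budget; log and skip on timeout."""
    try:
        return await asyncio.wait_for(coro, timeout=timeout_s)
    except asyncio.TimeoutError:
        log_process("step_timeout", {"step": step_name, "timeout_s": timeout_s})
        return None  # the caller continues with partial results, per the spec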
Testing the Complete System
Manual Testing with adk web
# Start the ADK development UI (run from the directory that
# contains the news-podcast-agent/ package)
adk web
# Then open the visual debugger at http://localhost:8000
# Send prompt: "Create a podcast about AI agents"
# Observe: Events tab shows all tool calls, agent decisions, and outputs
Verification Checklist
def verify_system():
    """
    Validate all components working together
    """
    from pathlib import Path

    checks = {
        "Reports directory exists": Path("reports").exists(),
        "Podcasts directory exists": Path("podcasts").exists(),
        "Logs directory exists": Path("logs").exists(),
        "Process log created": any(Path("logs").glob("*.log")),
        "Report markdown generated": any(Path("reports").glob("*.md")),
        # TTS output is WAV; a transcoded MP3 also counts
        "Audio file generated": any(Path("podcasts").glob("*.wav"))
        or any(Path("podcasts").glob("*.mp3")),
    }

    for check_name, result in checks.items():
        status = "PASS" if result else "FAIL"
        print(f"[{status}] {check_name}")

    return all(checks.values())
# **Output:**
# [PASS] Reports directory exists
# [PASS] Podcasts directory exists
# [PASS] Logs directory exists
# [PASS] Process log created
# [PASS] Report markdown generated
# [PASS] Audio file generated
Deployment as Digital FTE
Your capstone is now a viable Digital FTE—a sellable AI service. Here are deployment options:
Option 1: Vertex AI Agent Engine (Recommended)
Advantages:
- Fully managed infrastructure
- Automatic scaling
- Built-in monitoring and logging
- Native Gemini integration
Deployment:
# Prepare for Vertex AI
export PROJECT_ID="your-gcp-project"
export REGION="us-central1"
# Deploy the coordinator to Agent Engine (run from the directory
# that contains the news-podcast-agent/ package)
adk deploy agent_engine \
  --project=${PROJECT_ID} \
  --region=${REGION} \
  --staging_bucket="gs://your-staging-bucket" \
  news-podcast-agent
# The deployed agent is served through the Agent Engine HTTP API;
# see the Vertex AI Agent Engine docs for the exact endpoint format
Option 2: Cloud Run (Cost-Sensitive)
Advantages:
- Lower cost for variable workloads
- Simple containerization
- Pay per request
Deployment:
Create Dockerfile:
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
# Start web server exposing API
CMD ["uvicorn", "api:app", "--host", "0.0.0.0", "--port", "8080"]
Build and deploy:
gcloud run deploy news-podcast-agent \
--source . \
--platform managed \
--region us-central1 \
--allow-unauthenticated \
--memory 2Gi \
--timeout 900
Monetization Model
Once deployed, you have 4 monetization options:
| Model | Implementation | Revenue |
|---|---|---|
| Subscription | $19.99/month for unlimited podcasts | Predictable recurring |
| Pay-Per-Request | $0.10 per topic researched | Usage-based scaling |
| Enterprise License | Annual fee for on-premise/branded version | Higher margin |
| API Marketplace | List on Marketplace for distribution | Volume play |
Production Hardening Checklist
Your capstone works as a demo. Here's what's needed for production:
class ProductionChecklist:
    """
    Elements required to move from capstone to commercial Digital FTE
    """

    SECURITY = [
        "API key rotation every 90 days",
        "Input validation (topic length, character restrictions)",
        "Rate limiting (max 100 requests/hour/user)",
        "Authentication (API keys or OAuth)",
    ]

    RELIABILITY = [
        "Retry logic with exponential backoff",
        "Timeout enforcement (max 30s per tool)",
        "Circuit breaker pattern (detect failing agents)",
        "Dead letter queue for failed requests",
    ]

    MONITORING = [
        "Prometheus metrics (request count, latency, errors)",
        "Cloud Logging integration",
        "Error alerting (PagerDuty or similar)",
        "Dashboard for podcast generation status",
    ]

    DATA_QUALITY = [
        "Fact-checking integration (Factly or similar)",
        "Source validation (whitelist trusted domains)",
        "Duplicate detection (avoid reporting same news twice)",
        "Hallucination detection (verify claims before podcast)",
    ]

    COMPLIANCE = [
        "Privacy: Encrypt API keys, secure logs",
        "Attribution: All sources cited in podcast",
        "Copyright: License music/voice talent appropriately",
        "GDPR: Track user requests, enable deletion",
    ]
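The RELIABILITY list names retry with exponential backoff; here is a minimal async sketch of that pattern (attempt counts and delays are illustrative):
import asyncio
import random
from functools import wraps

def with_retries(max_attempts: int = 3, base_delay: float = 1.0):
    """Retry an async callable with exponential backoff and jitter."""
    def decorator(func):
        @wraps(func)
        async def wrapper(*args, **kwargs):
            for attempt in range(1, max_attempts + 1):
                try:
                    return await func(*args, **kwargs)
                except Exception:
                    if attempt == max_attempts:
                        raise
                    # 1s, 2s, 4s ... plus jitter to avoid thundering herds
                    await asyncio.sleep(base_delay * 2 ** (attempt - 1) + random.random())
        return wrapper
    return decorator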
Try With AI
Prompt 1: Extend to Multiple Topics
Your Challenge:
Modify the root coordinator to handle multiple topics in parallel instead of sequentially. Users should be able to request: "Create podcasts for AI agents, ML ops, and generative video."
Setup: Start with the complete agent.py coordinator
Prompt:
"Modify the NewsCoordinator to handle multiple topics in a single request.
When user provides comma-separated topics, create parallel tasks for each.
Use ParallelAgent or concurrent execution to research all topics simultaneously.
All podcasts save to the same output directories with topic names in filenames.
Show how you'd structure the parallel workflow."
Expected: Architecture showing how parallel topic processing works without blocking
Prompt 2: Add Audio Quality Validation
Your Challenge:
The podcaster agent generates audio, but you have no validation that it meets quality standards (correct duration, no silence, proper speaker transitions). Add a validation tool that checks audio before returning to user.
Setup: Reference the podcast generation code
Prompt:
"Create an audio_validation tool that:
1. Checks audio duration is 5-10 minutes
2. Detects and flags excessive silence (>5 seconds)
3. Validates speaker transitions (Joe/Jane alternate properly)
4. Encodes quality metadata (speaker count, format, bitrate)
5. Returns Pass/Fail with remediation suggestions
Show the tool implementation and how to integrate it into the podcaster_agent workflow."
Expected: Validation tool with audio format inspection and quality gates
Prompt 3: Monetization Implementation
Your Challenge:
Your system is production-ready. Now design the subscription API that would allow customers to use it as a service. Show how you'd implement user accounts, rate limiting, billing integration, and podcast delivery.
Setup: Think about the complete user experience from signup to podcast delivery
Prompt:
"Design a subscription API for the News Podcast Agent. Include:
1. User account model (name, email, API key, subscription tier)
2. Rate limiting per tier (Basic: 5 podcasts/month, Pro: 50/month)
3. API endpoint (POST /podcasts/generate with topic parameter)
4. Webhook for async delivery (notify when podcast ready)
5. Stripe integration for billing
6. Show the FastAPI implementation and database schema
Which subscription tiers would you offer and why?"
Expected: Complete API design showing how to productize the agent as a service
Key Takeaways
What you built:
- Complete multi-agent system composing 5 specialized components
- Root coordinator pattern for orchestration
- Callback-based safety guardrails
- End-to-end pipeline from research to audio generation
- Production deployment options
Why this matters:
- This is the architecture pattern you'll use for ANY complex AI system
- You've demonstrated Layer 4 mastery: specification-first orchestration
- Your capstone is a genuine Digital FTE—deployable, monetizable, scalable
What comes next:
- Production hardening (monitoring, security, reliability)
- Customer-facing API and web UI
- Multi-language podcast generation
- Integration with podcast distribution platforms
- Revenue generation through subscriptions or licensing
Your News Podcast Agent is no longer a learning project—it's a product you could ship today.