AbortController and Timeouts
Your AI chat application streams responses beautifully. Tokens flow in, the UI updates in real-time, users see the model "thinking." Then a user clicks away mid-response, navigates to another page, or presses the stop button. What happens to that in-flight request?
Without cancellation, the request continues consuming resources. The streaming connection stays open. Tokens keep arriving that nobody will see. In a CLI, pressing Ctrl+C might kill the process entirely rather than gracefully stopping the AI generation.
AbortController solves this. It's the standard way to cancel fetch requests, streaming connections, and any async operation that accepts an AbortSignal. When a user navigates away, clicks "Stop generating," or presses Ctrl+C, you call abort() and the request terminates cleanly.
This lesson teaches the cancellation patterns that every production AI interface needs.
Why Cancellation Matters for AI
AI API calls are different from typical HTTP requests:
| Traditional API | AI Streaming API |
|---|---|
| Response in 50-200ms | Response over 5-30 seconds |
| Small payload, single response | Continuous token stream |
| User waits briefly | User might change their mind |
| Failure is obvious | Partial success is common |
A 15-second streaming response creates many opportunities for cancellation:
- User clicks "Stop generating" because the response is going off-track
- User navigates to a different page
- User presses Ctrl+C in a CLI
- Network becomes unreliable mid-stream
- You need to enforce a timeout for cost control
Without proper cancellation, these scenarios create resource leaks, confusing UX, or crashed applications.
AbortController Fundamentals
AbortController is a standard web API that provides a signal to cancel async operations.
The Core Pattern
// Create a controller and extract its signal
const controller = new AbortController();
const signal = controller.signal;
// Pass signal to fetch
const response = await fetch("https://api.openai.com/v1/chat/completions", {
method: "POST",
headers: {
"Content-Type": "application/json",
"Authorization": `Bearer ${process.env.OPENAI_API_KEY}`,
},
body: JSON.stringify({
model: "gpt-4",
messages: [{ role: "user", content: "Hello!" }],
stream: true,
}),
signal, // The key: pass the abort signal
});
// Later, to cancel:
controller.abort();
Output:
// When abort() is called, fetch throws an AbortError
// The request is cancelled, connection is closed
Three steps:
- Create the controller: new AbortController()
- Pass its signal to fetch: { signal: controller.signal }
- Call abort when needed: controller.abort()
Handling AbortError
When you call abort(), the fetch Promise rejects with an AbortError. You need to catch this and handle it differently from real errors:
async function fetchWithCancellation(
url: string,
signal: AbortSignal
): Promise<string> {
try {
const response = await fetch(url, { signal });
return await response.text();
} catch (error) {
// Check if this was a cancellation
if (error instanceof Error && error.name === "AbortError") {
console.log("Request was cancelled by user");
return ""; // Return empty or handle gracefully
}
// Real error - rethrow
throw error;
}
}
// Usage
const controller = new AbortController();
// Cancel after 2 seconds (simulating user action)
setTimeout(() => controller.abort(), 2000);
const result = await fetchWithCancellation(
"https://api.example.com/slow",
controller.signal
);
Output:
Request was cancelled by user
// result is ""
The key check: error.name === "AbortError". This distinguishes user-initiated cancellation from network failures, API errors, or other problems.
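If you check for cancellation in more than one place, a small type guard keeps the checks consistent. A minimal sketch - the helper name isAbortError is just an illustration, not a library function:
// Hypothetical helper: true only for user-initiated cancellation
function isAbortError(error: unknown): boolean {
  return error instanceof Error && error.name === "AbortError";
}
const controller = new AbortController();
setTimeout(() => controller.abort(), 2000); // simulate the user clicking "Stop"
try {
  const response = await fetch("https://api.example.com/slow", { signal: controller.signal });
  console.log(await response.text());
} catch (error) {
  if (isAbortError(error)) {
    console.log("Request was cancelled by user");
  } else {
    throw error; // real failure - let it surface
  }
}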
Why Not Just Ignore the Error?
You might think: "Just catch all errors and ignore them." But consider:
// DON'T DO THIS
try {
const response = await fetch(url, { signal });
processResponse(response);
} catch {
// Silently ignore all errors - BAD!
}
This hides real problems. Network failures, API rate limits, and authentication errors all get swallowed. AbortError is the only error you should intentionally suppress, because it represents a deliberate user action, not a failure.
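The safer shape of the same catch block suppresses only the cancellation case and lets everything else propagate. A sketch reusing the url, signal, and processResponse placeholders from the block above:
// DO THIS INSTEAD: swallow only the deliberate cancellation
try {
  const response = await fetch(url, { signal });
  processResponse(response);
} catch (error) {
  if (error instanceof Error && error.name === "AbortError") {
    console.log("Request was cancelled"); // deliberate user action - safe to stop here
  } else {
    throw error; // network failures, rate limits, auth errors stay visible
  }
}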
Practical Example: Cancellable AI Chat
Let's build a complete example with a cancel button:
interface ChatMessage {
role: "user" | "assistant";
content: string;
}
interface ChatState {
messages: ChatMessage[];
isStreaming: boolean;
controller: AbortController | null;
}
const state: ChatState = {
messages: [],
isStreaming: false,
controller: null,
};
async function sendMessage(userMessage: string): Promise<void> {
// Create new controller for this request
state.controller = new AbortController();
state.isStreaming = true;
state.messages.push({ role: "user", content: userMessage });
state.messages.push({ role: "assistant", content: "" }); // Placeholder for response
try {
const response = await fetch("https://api.openai.com/v1/chat/completions", {
method: "POST",
headers: {
"Content-Type": "application/json",
"Authorization": `Bearer ${process.env.OPENAI_API_KEY}`,
},
body: JSON.stringify({
model: "gpt-4",
messages: state.messages.slice(0, -1), // Exclude placeholder
stream: true,
}),
signal: state.controller.signal,
});
// Stream tokens (simplified - full SSE parsing in Lesson 4)
const reader = response.body!.getReader();
const decoder = new TextDecoder();
while (true) {
const { done, value } = await reader.read();
if (done) break;
const chunk = decoder.decode(value);
// Append to last message (the assistant placeholder)
const lastMessage = state.messages[state.messages.length - 1];
lastMessage.content += extractContent(chunk);
// Update UI here
console.log(lastMessage.content);
}
} catch (error) {
if (error instanceof Error && error.name === "AbortError") {
console.log("[Stopped by user]");
// Optionally append to message: " [stopped]"
const lastMessage = state.messages[state.messages.length - 1];
lastMessage.content += " [generation stopped]";
} else {
console.error("Chat error:", error);
throw error;
}
} finally {
state.isStreaming = false;
state.controller = null;
}
}
function cancelGeneration(): void {
if (state.controller) {
state.controller.abort();
}
}
// Simplified content extraction (real implementation in Lesson 4)
function extractContent(chunk: string): string {
// SSE data extraction placeholder
return chunk;
}
Output:
// User sends: "Explain quantum computing"
// Tokens stream: "Quantum" "computing" "uses"...
// User clicks cancel
[Stopped by user]
// Message shows: "Quantum computing uses... [generation stopped]"
Key patterns:
- New controller per request: Don't reuse controllers; each request gets its own (see the sketch after this list)
- Store controller in state: So cancel button can access it
- finally block cleanup: Always reset state regardless of success/cancel/error
- User feedback on cancel: Show "[generation stopped]" so users know what happened
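Why controllers can't be reused: abort() is one-way. Once a signal is aborted it stays aborted, and any fetch started with it rejects immediately. A quick sketch to illustrate:
const controller = new AbortController();
controller.abort();
console.log(controller.signal.aborted); // true - and it never goes back to false
// Any fetch started with this signal rejects immediately with an AbortError,
// so a "Stop" from the previous request would silently kill the next one too.
try {
  await fetch("https://api.example.com/slow", { signal: controller.signal });
} catch (error) {
  if (error instanceof Error && error.name === "AbortError") {
    console.log("Rejected before the request even started");
  }
}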
Timeouts with AbortSignal.timeout()
Sometimes requests hang indefinitely. API servers can freeze, networks can stall, or AI models can enter infinite loops. You need timeouts.
The Timeout Pattern
async function fetchWithTimeout(
url: string,
options: RequestInit,
timeoutMs: number
): Promise<Response> {
// AbortSignal.timeout() creates a signal that aborts after specified time
const response = await fetch(url, {
...options,
signal: AbortSignal.timeout(timeoutMs),
});
return response;
}
// Usage: 30-second timeout for AI requests
try {
const response = await fetchWithTimeout(
"https://api.openai.com/v1/chat/completions",
{
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ model: "gpt-4", messages: [...] }),
},
30000 // 30 seconds
);
// Process response...
} catch (error) {
if (error instanceof Error && error.name === "TimeoutError") {
console.error("Request timed out after 30 seconds");
}
}
Output:
// If request takes more than 30 seconds:
Request timed out after 30 seconds
Notice the error name changes: AbortSignal.timeout() throws TimeoutError, not AbortError. This distinction helps you provide specific error messages.
Timeout vs Manual Abort
When you need both user cancellation AND timeout, combine them:
async function fetchWithTimeoutAndCancel(
url: string,
options: RequestInit,
userSignal: AbortSignal,
timeoutMs: number
): Promise<Response> {
// Combine user cancellation and timeout
const combinedSignal = AbortSignal.any([
userSignal,
AbortSignal.timeout(timeoutMs),
]);
const response = await fetch(url, {
...options,
signal: combinedSignal,
});
return response;
}
// Usage
const controller = new AbortController();
// Wire up cancel button
document.getElementById("cancel")?.addEventListener("click", () => {
controller.abort();
});
try {
const response = await fetchWithTimeoutAndCancel(
"https://api.openai.com/v1/chat/completions",
{ method: "POST", body: JSON.stringify({...}) },
controller.signal,
30000
);
} catch (error) {
if (error instanceof Error) {
if (error.name === "AbortError") {
console.log("Cancelled by user");
} else if (error.name === "TimeoutError") {
console.log("Request timed out");
} else {
throw error;
}
}
}
Output:
// Depending on what happens first:
Cancelled by user
// OR
Request timed out
AbortSignal.any() aborts when ANY of the provided signals aborts. The error type tells you which signal triggered the abort.
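In runtimes that support AbortSignal.any() (recent browsers and current Node.js releases), the combined signal also carries the abort reason of whichever source signal fired first, so you can inspect it directly instead of catching the error. A small sketch:
const userController = new AbortController();
const combined = AbortSignal.any([
  userController.signal,
  AbortSignal.timeout(30_000),
]);
combined.addEventListener("abort", () => {
  // combined.reason is the reason of whichever source signal aborted first:
  // a DOMException named "AbortError" for the manual abort,
  // or one named "TimeoutError" for the timeout.
  console.log("Aborted:", combined.reason);
});
userController.abort(); // logs an AbortError reason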
CLI Cancellation: Handling Ctrl+C
In CLI applications, users expect Ctrl+C to stop operations gracefully, not crash the program. Wire SIGINT to your AbortController:
async function streamWithCancel(prompt: string, signal?: AbortSignal): Promise<void> {
const response = await fetch("https://api.openai.com/v1/chat/completions", {
method: "POST",
headers: {
"Content-Type": "application/json",
"Authorization": `Bearer ${process.env.OPENAI_API_KEY}`,
},
body: JSON.stringify({
model: "gpt-4",
messages: [{ role: "user", content: prompt }],
stream: true,
}),
signal,
});
const reader = response.body!.getReader();
const decoder = new TextDecoder();
while (true) {
const { done, value } = await reader.read();
if (done) break;
process.stdout.write(decoder.decode(value));
}
console.log(); // Final newline
}
// Main CLI entry point
async function main(): Promise<void> {
const controller = new AbortController();
// Handle Ctrl+C gracefully
process.on("SIGINT", () => {
console.log("\n[Cancelling...]");
controller.abort();
});
const prompt = process.argv[2] || "Hello!";
try {
await streamWithCancel(prompt, controller.signal);
} catch (error) {
if (error instanceof Error && error.name === "AbortError") {
console.log("[Cancelled by user]");
process.exit(0); // Clean exit
}
throw error;
}
}
main().catch(console.error);
Output:
$ npx tsx cli.ts "Explain async patterns"
Async patterns in TypeScript...
^C
[Cancelling...]
[Cancelled by user]
$
The key: process.on("SIGINT", ...) catches Ctrl+C before Node.js exits, giving you a chance to abort gracefully and clean up.
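One optional refinement, sketched below on top of streamWithCancel: register the handler as a named function and remove it once streaming finishes, so a later Ctrl+C falls back to the normal exit behavior.
async function main(): Promise<void> {
  const controller = new AbortController();
  // Named handler so it can be removed later
  const onSigint = (): void => {
    console.log("\n[Cancelling...]");
    controller.abort();
  };
  process.on("SIGINT", onSigint);
  try {
    await streamWithCancel(process.argv[2] || "Hello!", controller.signal);
  } catch (error) {
    if (error instanceof Error && error.name === "AbortError") {
      console.log("[Cancelled by user]");
    } else {
      throw error;
    }
  } finally {
    // Remove the handler so a later Ctrl+C exits the process normally
    process.off("SIGINT", onSigint);
  }
}
main().catch(console.error);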
Preventing Double Ctrl+C
Users might press Ctrl+C multiple times. Handle this gracefully:
// Assumes `controller` is the AbortController from the surrounding scope, as in main() above
let isAborting = false;
process.on("SIGINT", () => {
if (isAborting) {
// Second Ctrl+C: force exit
console.log("\n[Force quit]");
process.exit(1);
}
isAborting = true;
console.log("\n[Cancelling... press Ctrl+C again to force quit]");
controller.abort();
});
Output:
^C
[Cancelling... press Ctrl+C again to force quit]
// If pressed again:
^C
[Force quit]
This pattern is common in CLI tools: first Ctrl+C attempts graceful shutdown, second Ctrl+C forces immediate exit.
Complete Production Example
Here's a production-ready function combining all patterns:
interface StreamOptions {
prompt: string;
signal?: AbortSignal;
timeoutMs?: number;
onToken?: (token: string) => void;
onComplete?: () => void;
onCancel?: () => void;
onError?: (error: Error) => void;
}
async function streamAIResponse(options: StreamOptions): Promise<string> {
const {
prompt,
signal,
timeoutMs = 60000, // Default 60 seconds
onToken,
onComplete,
onCancel,
onError,
} = options;
// Combine user signal with timeout if both provided
const effectiveSignal = signal
? AbortSignal.any([signal, AbortSignal.timeout(timeoutMs)])
: AbortSignal.timeout(timeoutMs);
let fullResponse = "";
try {
const response = await fetch("https://api.openai.com/v1/chat/completions", {
method: "POST",
headers: {
"Content-Type": "application/json",
"Authorization": `Bearer ${process.env.OPENAI_API_KEY}`,
},
body: JSON.stringify({
model: "gpt-4",
messages: [{ role: "user", content: prompt }],
stream: true,
}),
signal: effectiveSignal,
});
if (!response.ok) {
throw new Error(`API error: ${response.status} ${response.statusText}`);
}
const reader = response.body!.getReader();
const decoder = new TextDecoder();
while (true) {
const { done, value } = await reader.read();
if (done) break;
const chunk = decoder.decode(value);
// Simplified: the raw chunk is treated as token text (full SSE parsing in Lesson 4)
fullResponse += chunk;
onToken?.(chunk);
}
onComplete?.();
return fullResponse;
} catch (error) {
if (!(error instanceof Error)) {
throw error;
}
if (error.name === "AbortError") {
onCancel?.();
return fullResponse; // Return partial response
}
if (error.name === "TimeoutError") {
const timeoutError = new Error(`Request timed out after ${timeoutMs}ms`);
onError?.(timeoutError);
return fullResponse; // Return partial response
}
onError?.(error);
throw error;
}
}
// Usage in CLI
async function main(): Promise<void> {
const controller = new AbortController();
process.on("SIGINT", () => controller.abort());
const response = await streamAIResponse({
prompt: "Explain quantum computing briefly",
signal: controller.signal,
timeoutMs: 30000,
onToken: (token) => process.stdout.write(token),
onComplete: () => console.log("\n[Done]"),
onCancel: () => console.log("\n[Cancelled]"),
onError: (err) => console.error("\n[Error]:", err.message),
});
console.log(`\nTotal response length: ${response.length} chars`);
}
main().catch(console.error);
Output:
$ npx tsx stream.ts
Quantum computing uses quantum mechanics...
[Done]
Total response length: 847 chars
$ npx tsx stream.ts
Quantum computing uses quant^C
[Cancelled]
Total response length: 28 chars
This function:
- Accepts optional user cancellation signal
- Has configurable timeout (defaults to 60 seconds)
- Provides callbacks for tokens, completion, cancellation, and errors
- Returns partial response on cancellation (useful for showing what was generated)
- Distinguishes between user cancellation and timeout
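The same function drops into a browser UI as well. A sketch with hypothetical element IDs ("send", "cancel", "output") that are not defined elsewhere in this lesson:
let activeController: AbortController | null = null;
document.getElementById("send")?.addEventListener("click", async () => {
  // Abort any generation still in flight before starting a new one
  activeController?.abort();
  activeController = new AbortController();
  const output = document.getElementById("output")!;
  output.textContent = "";
  try {
    await streamAIResponse({
      prompt: "Explain quantum computing briefly",
      signal: activeController.signal,
      timeoutMs: 30000,
      onToken: (token) => { output.textContent += token; },
      onCancel: () => { output.textContent += " [generation stopped]"; },
      onError: (err) => { output.textContent += ` [error: ${err.message}]`; },
    });
  } catch {
    // Non-timeout errors are rethrown by streamAIResponse; onError already reported them
  } finally {
    activeController = null;
  }
});
document.getElementById("cancel")?.addEventListener("click", () => {
  activeController?.abort();
});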
Common Patterns Summary
| Pattern | When to Use | Key Code |
|---|---|---|
| Manual abort | Cancel button, navigation | controller.abort() |
| Timeout | Prevent hanging requests | AbortSignal.timeout(ms) |
| Combined | Both user cancel and timeout | AbortSignal.any([...]) |
| CLI SIGINT | Ctrl+C handling | process.on("SIGINT", ...) |
| Graceful shutdown | Double Ctrl+C handling | Track abort state flag |
Try With AI
Prompt 1: Build a Cancellable Search
I'm building a search-as-you-type feature that calls an AI API for each keystroke.
When the user types a new character, I need to cancel the previous search request
before starting a new one.
Help me implement this pattern:
1. Store the current AbortController
2. Cancel the previous request when a new one starts
3. Handle the race condition where results might arrive out of order
Show me the debounced search with cancellation pattern.
What you're learning: How to manage multiple overlapping requests where only the most recent matters. This is essential for search boxes, autocomplete, and any UI where user input triggers API calls faster than responses arrive.
Prompt 2: Implement Retry with Timeout
I want to combine retry logic with timeout handling. My AI API sometimes fails
transiently (rate limits, network blips), but I also need to prevent hanging
requests.
Design a function that:
1. Retries up to 3 times on transient errors
2. Has a timeout for each individual attempt
3. Has a total timeout for all attempts combined
4. Distinguishes between "gave up after retries" and "total timeout exceeded"
What's the best way to structure this with AbortController?
What you're learning: How to compose multiple async patterns (retry + timeout) cleanly. Production AI applications need both resilience (retry) and bounded latency (timeout), and combining them correctly is non-trivial.
Prompt 3: Cleanup on Page Navigation
In a React/Next.js application, I'm streaming AI responses. When the user
navigates to a different page, the component unmounts. I need to:
1. Cancel any in-flight AI requests
2. Clean up the ReadableStream reader
3. Prevent "setState on unmounted component" warnings
4. Handle the case where navigation happens mid-token
Show me the useEffect cleanup pattern with AbortController for streaming AI.
What happens to partial responses when the user navigates away?
What you're learning: How AbortController integrates with React's component lifecycle. This is critical for any React application that streams AI responses, preventing memory leaks and console warnings.
Safety note: When implementing cancellation in production, always test edge cases: What happens if the user cancels immediately after starting? What if the server has already finished but the network is slow? AbortController handles these correctly, but your cleanup code needs to be robust to any timing.