Deploy to Production
What You Will Learn
In this lesson, you will deploy your AI Employee from your laptop to a VPS so it runs when you sleep.
James opened his laptop in the morning and found three unanswered WhatsApp messages from the night. His agent had stopped checking leads at 11:47 PM, the exact minute he closed his laptop lid.
He looked at the gateway log. Thirteen lessons of building, customizing, securing. His agent had a personality, skills, plugins, voice, multi-agent routing, and a custom approval gate. But it stopped working every time he closed his laptop lid.
"I want this running when I sleep," he said.
Emma pulled up a Hetzner pricing page. "Five dollars a month. Two vCPUs, four gigs of RAM, forty gigs of SSD." She turned the screen toward him. "Your agent runs on less hardware than your coffee maker."
"How long to set it up?"
"Budget forty-five minutes. Fifteen for the VPS. Fifteen for OpenClaw and onboarding. Fifteen for the paper cuts you will definitely hit." She paused. "After that? Your agent never sleeps."
You are doing exactly what James is doing: taking an agent that works on your laptop and moving it to infrastructure that never sleeps.
Your AI Employee runs on your laptop, which sleeps, loses Wi-Fi, and shuts down for updates. This lesson moves it to a server that runs 24/7. By the end, your agent responds from a datacenter, and you access its Control UI through an encrypted tunnel.
If you do not want to deploy right now, read through the steps and understand the process. You can deploy later when you are ready. The exercises at the end work either way.
Choose Your Deployment Path
Three paths, same endpoint: an agent that runs while you sleep. Pick the path that matches where you are as a learner right now, not the one that sounds most "production."
| Your situation | Pick | Why |
|---|---|---|
| New to servers, want the free path with the fewest gotchas | Managed | Pre-installed image, pre-wired Model Studio, no SSH, no firewall |
| Finished Lesson 2, comfortable with SSH, want to practice the CLI on a server you control | VPS Native | Same wizard as Lesson 2, matches Ch 57/Ch 58 commands exactly |
| Already run Docker Compose for other services and want OpenClaw to fit that fleet | VPS Docker | Same isolation model as your existing containers |
For beginners and the zero-cost path, pick Managed. Alibaba Cloud's Simple Application Server ships with OpenClaw pre-installed, pre-wired to Model Studio's Singapore region (1 million free tokens per model), and skips every place beginners get stuck: security groups, SSH keys, systemd, Telegram bot tokens. You get the same agent running 24/7 without touching the command line. The promotional price is $0.99/month for the VPS, and the LLM can cost $0.
Pick VPS Native if you finished Lesson 2 and want the hands-on learning experience of running the same openclaw CLI on a server. It is not harder than Managed in theory, but in practice it has six places beginners get stuck (Alibaba console dropdowns, security groups, public-IP allocation, SSH auth, Model Studio region selection, Telegram bot creation). The upside is that the commands you learn here are the exact commands you will type in Chapter 57 and Chapter 58 when you extend your agent with MCP servers and build TutorClaw.
Pick VPS Docker only if you already run Docker Compose. It works, but it adds docker compose exec openclaw-gateway to every command for the rest of Part 5.
- Managed (Easiest)
- VPS Native (Hands-on)
- VPS Docker (Advanced)
One-Click Managed Server
Alibaba Cloud's Simple Application Server comes with OpenClaw pre-installed. No Docker, no SSH, no manual configuration. Your agent is running 24/7 within minutes.
Pricing: Starting at $0.99/month (promotional). Regular price ~$8/month for a 2 GB instance. With the free token tier (see Step 6), your LLM cost can be $0.
Steps:
- Go to the OpenClaw on Alibaba Cloud setup page
- Select a Simple Application Server with the OpenClaw image (2 GB+ memory)
- Choose your region and subscription duration
- Complete payment
- In the SAS Console, open your instance and run the firewall configuration command
- Set up your API key through Model Studio:
- Open Model Studio and select the Singapore region from the region dropdown
- Generate your API key in the Singapore region
- Select a model from the Singapore region's model list (avoid Qwen Max, it is expensive)
- Enable the free quota limit option to restrict usage to the 1 million free tokens per model
- Access the dashboard URL shown in your instance details
For the complete server setup walkthrough, see the Alibaba Cloud OpenClaw guide.
Every model listed in Model Studio's Singapore region includes 1 million free tokens. If you skip this and use the default region, Alibaba charges for all token usage immediately. Default models like Qwen Max are expensive. With a zero-credit account, Alibaba sends an overdue notice and suspends your account within 24 hours, blocking all Model Studio access.
Enable the free quota limit option to cap usage at the free tier. With this setting, your only cost is the server instance ($0.99-$8/month).
Your OpenClaw gateway is now running in the cloud. The dashboard is your Control UI.
Verify your model. In the Control UI dashboard, confirm that the active model is one of the free-tier models available in the Singapore region. The default model may be different and expensive. In the instance UI, select Model Studio as your provider and pick a model from the Singapore region dropdown.
After provisioning, connect a messaging channel. For WhatsApp integration on the managed server, follow the Alibaba Cloud WhatsApp integration guide. For Telegram or Discord, SSH into your instance and configure the channel using the same flow from Lesson 2.
Send a test message. If the agent responds, you are deployed.
Alibaba also offers a 1-year free trial on ECS (Elastic Compute Service). If you want zero-cost self-managed setup instead of the one-click managed server, switch to the VPS Native tab. Alibaba ECS is fully supported there, with a dedicated provisioning walkthrough.
Set Up Your Own Server (Native Install)
This is the same flow you ran in Lesson 2, with one extra step: SSH to a server first. The OpenClaw installer handles Node.js, systemd, and the CLI. Your muscle memory from Lesson 2 transfers exactly.
This path requires comfort with SSH and the Linux command line. If typing ssh root@... is unfamiliar, use the Managed tab instead. You get the same result with zero command line.
You need: A Linux server with at least 2 vCPUs and 4 GB RAM, and basic experience with SSH and terminal commands.
| Provider | Monthly Cost | Best For |
|---|---|---|
| Hetzner CX21 | $5/mo | Cheapest paid option, 2 vCPU / 4 GB RAM / 40 GB SSD |
| Alibaba ECS | Free 1 year | Zero-cost free trial, then ~$8/mo after |
| DigitalOcean | $6/mo | Familiar if you already have an account |
| Vultr | $6/mo | Similar specs |
| Oracle Cloud | Free | Always-Free ARM, 4 vCPU / 24 GB (slower provisioning) |
Provision Your Server
Pick one of these two walkthroughs. Every step after this section is identical whichever provider you choose.
Option A — Hetzner ($5/month): the simplest paid path, up in under two minutes.
- Sign up at hetzner.com/cloud
- Create a new project, click Add Server
- Select Ubuntu 24.04, CX21
- Add your SSH key (or let Hetzner email you the root password)
- Click Create & Buy Now and note the public IP address
Option B — Alibaba ECS (free for 1 year): zero-cost if your Alibaba account is new. This path has more console dropdowns than Hetzner; budget 15-20 minutes for the console work alone.
Alibaba ECS is powerful but beginner-hostile: the console has many dropdowns, the security group is a separate resource you have to configure manually, and public IP allocation is not automatic. If you want the free path with the fewest gotchas, switch to the Managed tab — you will be up in under 10 minutes with zero command line. Option B below is for learners who want the hands-on experience and do not mind hitting 2-3 paper cuts along the way.
- Sign up at alibabacloud.com and activate your free trial. A credit card and identity verification are typically required even for the free tier.
- Open the ECS console and click Create Instance. Pick Pay-As-You-Go as the billing method so the free-trial credit applies.
- Select Ubuntu 24.04 as the image and a free-trial eligible instance type with at least 2 vCPU / 4 GB RAM (the `ecs.t6-c1m2.large` or similar burstable family usually qualifies)
- On the network tab, check "Assign Public IPv4 Address" explicitly — Alibaba does not allocate one by default, and without it SSH will never connect. Keep the default bandwidth plan for the free-trial period
- On the security group tab, open port 22 to `0.0.0.0/0` so SSH can reach the instance from your laptop. If you want to harden it later, restrict the source to your laptop's public IP
- On the authentication tab, add your SSH key pair (create one in the console if you do not have one) OR set a root password. Either works; the SSH key is cleaner
- Launch the instance. Wait ~30 seconds for it to reach the Running state, then copy the public IP from the instance detail page.
Alibaba publishes a complete end-to-end tutorial with screenshots at Deploy OpenClaw on Alibaba Cloud ECS with Telegram Integration. It covers the console clicks, Model Studio API key setup, and Telegram bot creation with visual references. Use it as a companion to this lesson if you get lost on any of the steps above.
If your goal is truly zero cost, you must use Alibaba Model Studio as your LLM provider and select the Singapore region in Model Studio. Every model in Singapore includes 1 million free tokens. If you skip this and use another provider (Claude, OpenAI, Gemini), or pick Model Studio in a non-Singapore region, you will be billed for every token. On a zero-credit trial account, Alibaba sends an overdue notice and suspends your account within 24 hours of the first charge — blocking ECS, Model Studio, and everything else. Enable Model Studio's "free quota limit" option to cap usage at the free tier. The onboarding-step tip below shows the exact Custom Provider base URL for Singapore Model Studio.
SSH In and Install OpenClaw
Log into your server. Replace YOUR_VPS_IP with the IP from the previous step:
ssh root@YOUR_VPS_IP
Run the OpenClaw installer. This is the same installer you would run on a fresh laptop: it detects your OS, installs Node.js if needed, installs the openclaw CLI, and registers the gateway as a systemd service:
apt-get update
curl -fsSL https://openclaw.ai/install.sh | bash
If your VPS logs you in as a non-root user (Alibaba sometimes creates an ecs-user, DigitalOcean creates ubuntu), prefix both commands with sudo:
sudo apt-get update
curl -fsSL https://openclaw.ai/install.sh | sudo bash
Verify the CLI is on your PATH:
openclaw --version
If you see a version number, the install succeeded. If you see command not found, the installer added openclaw to your PATH in a shell config file that your current session has not loaded yet. Either close your SSH session and reconnect, or reload your shell config in place:
source ~/.bashrc # or ~/.zshrc, depending on your shell
Run Onboarding
Run the same wizard you ran in Lesson 2:
openclaw onboard
Choose your model provider, authenticate, and select a model. When prompted for channel, pick Telegram or Discord now, or skip and add one in the next step.
If you picked the Alibaba ECS free-trial path and want the truly zero-cost setup, you must use Alibaba Model Studio in the Singapore region as your LLM provider. Every model in that region includes 1 million free tokens.
Before onboarding: open Alibaba Model Studio in a browser, switch the region dropdown to Singapore, create an API key, and enable the "free quota limit" toggle so you cannot accidentally blow past the free tier.
In the openclaw onboard wizard:
- Select Custom Provider (not Gemini, not Claude, not OpenAI)
- For the base URL, paste one of these DashScope endpoints:
  - Standard Model Studio: `https://dashscope-intl.aliyuncs.com/compatible-mode/v1`
  - Coding Plan (if you have one): `https://coding-intl.dashscope.aliyuncs.com/v1`
- Paste your Singapore-region API key
- Pick `qwen-plus` or `qwen3-coder-next` as the model
- Never pick `qwen-max` — it is the most expensive model in the catalog and will burn through the free tier in one long conversation
If you would rather use Gemini, Claude, or OpenAI as a paid provider, skip this tip and follow the standard wizard prompts — but your deployment will no longer be free.
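If you prefer scripting this step to answering wizard prompts, the same choices can in principle be set with `openclaw config set`. The `model.provider` and `model.model` keys appear elsewhere in this lesson; the `custom` provider value and the `model.baseUrl` key name are assumptions, so treat this as a sketch rather than a verified recipe:

```shell
# Sketch only: non-interactive equivalent of the wizard choices above.
# model.provider / model.model match keys used later in this lesson;
# the "custom" value and the model.baseUrl key name are assumptions.
openclaw config set model.provider custom
openclaw config set model.baseUrl "https://dashscope-intl.aliyuncs.com/compatible-mode/v1"
openclaw config set model.model qwen-plus
openclaw gateway restart
```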
If you cancel the wizard, the gateway runs but no model is configured. Nothing responds. If you need to reconfigure later, run openclaw onboard again, or set the provider manually:
openclaw config set model.provider google
openclaw config set model.model gemini-2.5-flash
Connect a Channel
If you skipped channel setup during onboarding, add one now. The Alibaba walkthrough covers Telegram end to end with screenshots; the commands are the same on any VPS.
First: Create Your Telegram Bot
Before openclaw channels add --channel telegram can run, you need a bot token from Telegram. This is a one-time setup.
- Open Telegram on your phone or desktop and search for `@BotFather` (the blue-checkmark official bot)
- Start a chat with BotFather and send `/newbot`
- Pick a display name for your bot (e.g., "James AI Employee")
- Pick a username that ends in `bot` (e.g., `james_ai_employee_bot`)
- BotFather replies with a message containing your bot token, which looks like `1234567890:ABCdefGhIJKlmNoPQRsTUVwxyz`
- Copy this token — you will paste it into the OpenClaw CLI in the next step
Anyone with this token can control your bot. Do not paste it into public chats, commit it to git, or share it in screenshots.
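If you need to park the token somewhere while you switch windows, a root-only file on the VPS is safer than a chat scratchpad. The file path here is just a suggestion, and the token shown is the placeholder from above:

```shell
# Store the BotFather token in a file only root can read,
# until you paste it into the CLI in the next step.
umask 077                          # new files get 600 permissions
printf '%s\n' '1234567890:ABCdefGhIJKlmNoPQRsTUVwxyz' > ~/telegram-bot.token
chmod 600 ~/telegram-bot.token     # explicit, in case umask was already looser
# Delete it once the channel is connected:
# rm ~/telegram-bot.token
```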
Then: Connect the Channel
Back in your SSH session:
Telegram (easiest for production):
openclaw channels add --channel telegram
Paste the bot token from BotFather when prompted. Then pair your Telegram account:
- Open your bot in Telegram (search for its username) and send `/start`
- The bot replies with your Telegram user ID and a one-time pairing code
- Return to your SSH session and approve the pairing when the CLI prompts for it (or run `openclaw pairing approve telegram <CODE>`)
- The bot confirms "OpenClaw access approved. Send a message to start chatting."
Discord:
openclaw channels add --channel discord
WhatsApp (requires a dedicated phone number):
openclaw channels add --channel whatsapp
openclaw channels login --channel whatsapp
Restart the gateway to pick up the new channel:
openclaw gateway restart
Send a test message. If the agent responds, you are deployed.
Verify Health
The native install registers a systemd service. Check it is alive:
openclaw doctor
openclaw doctor is your first stop for any problem: it checks the model provider, channel credentials, file permissions, and daemon status in one command. Run it before reading any logs.
If you want to tail the gateway log directly:
journalctl -u openclaw-gateway -f
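Because the gateway is a standard systemd service, the usual systemctl commands work as well, assuming the installer registered the unit under the same `openclaw-gateway` name the journalctl command uses:

```shell
# Standard systemd management for the gateway unit:
systemctl status openclaw-gateway    # is it active? shows a recent log tail
systemctl restart openclaw-gateway   # restart after config changes
systemctl enable openclaw-gateway    # make sure it starts again on reboot
```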
Set Up Your Own Server (Docker)
Pick the Docker tab only if you already run Docker Compose on this VPS and want OpenClaw to fit your existing fleet. For a fresh single-agent deployment, the VPS Native tab is simpler, shorter, and matches the commands used in Chapter 57 and Chapter 58.
If you want container isolation or prefer a different provider, set up a VPS manually with Docker Compose.
This path requires comfort with SSH, the Linux command line, and Docker. If terms like ssh, chown, or docker compose exec are unfamiliar, use the Managed tab instead. You get the same result with fewer steps and no command line.
You need: A Linux server with at least 2 vCPUs and 4 GB RAM, and basic experience with SSH, terminal commands, and Docker.
| Provider | Monthly Cost | Notes |
|---|---|---|
| Alibaba ECS | Free 1 year | Free trial, then ~$8/mo |
| Hetzner CX21 | $5/mo | 2 vCPU, 4 GB RAM, 40 GB SSD |
| DigitalOcean | $6/mo | Similar specs |
| Vultr | $6/mo | Similar specs |
| Oracle Cloud | Free | Always Free ARM, 4 vCPU/24 GB |
On Hetzner (example):
- Sign up at hetzner.com/cloud
- Create a new project
- Click Add Server
- Select Ubuntu 24.04, CX21
- Add your SSH key (or let Hetzner email you the root password)
- Click Create & Buy Now
- Note the IP address
WhatsApp is a single-connection protocol. You cannot load-balance it across multiple pods. The linked-device session is stateful, tied to one gateway process. Docker Compose on a single VPS is the right architecture for one AI Employee; Kubernetes is the wrong tool here.
SSH In and Install Docker
Log into your server remotely. Replace YOUR_VPS_IP with the IP address you noted in the final provisioning step above:
ssh root@YOUR_VPS_IP
If Hetzner emailed you a root password, enter it when prompted. If you added an SSH key during server creation, the login happens automatically.
Now install Docker, the tool that runs your agent in an isolated container:
apt-get update
apt-get install -y git curl ca-certificates
curl -fsSL https://get.docker.com | sh
Verify both installed correctly:
docker --version
docker compose version
If both print a version number, you are ready.
Clone and Configure
Download the OpenClaw source code:
git clone https://github.com/openclaw/openclaw.git
cd openclaw
Create the folders where your agent stores its configuration and workspace files:
mkdir -p /root/.openclaw/workspace
chown -R 1000:1000 /root/.openclaw
The chown line gives ownership to user 1000, which is the user your agent runs as inside Docker. Skip this and you get "permission denied" errors later.
Generate a gateway token (a random password for your Control UI) and create the configuration file:
GATEWAY_TOKEN=$(openssl rand -hex 32)
cat > .env << EOF
OPENCLAW_IMAGE=ghcr.io/openclaw/openclaw:latest
OPENCLAW_GATEWAY_TOKEN=$GATEWAY_TOKEN
OPENCLAW_GATEWAY_BIND=lan
OPENCLAW_GATEWAY_PORT=18789
OPENCLAW_CONFIG_DIR=/root/.openclaw
OPENCLAW_WORKSPACE_DIR=/root/.openclaw/workspace
EOF
Save your gateway token. Print it now and copy it somewhere safe. You need this to log into the Control UI:
echo $GATEWAY_TOKEN
Pull and Launch
Start your agent. The -d flag runs it in the background so it keeps running after you close the terminal:
docker compose up -d
First pull takes 1-2 minutes (Docker downloads the pre-built image). Check status:
docker compose ps
If it shows Restarting, check docker compose logs -f openclaw-gateway. If you see Gateway start blocked — gateway.mode not configured:
docker compose run --rm --no-deps --entrypoint node openclaw-gateway \
dist/index.js config set gateway.mode local
docker compose restart openclaw-gateway
Run Onboarding
Your agent is running but has no brain yet. Run the setup wizard inside the container to connect it to an LLM:
docker compose exec openclaw-gateway openclaw onboard --no-install-daemon
Same wizard from Lesson 2: choose your model provider, authenticate, select a model. The --no-install-daemon flag tells it Docker manages the process lifecycle, so no system daemon is needed.
If you cancel the wizard, the gateway looks running but no model is configured. Nothing responds. Complete the wizard, or set the provider manually:
docker compose exec openclaw-gateway openclaw config set model.provider google
docker compose exec openclaw-gateway openclaw config set model.model gemini-2.5-flash
Connect a Channel
Your agent can think now, but it has no way to receive messages. Your local WhatsApp is linked to your laptop's gateway. You need a separate channel for the VPS.
Telegram (easiest for production):
docker compose exec openclaw-gateway openclaw channels add --channel telegram
Discord:
docker compose exec openclaw-gateway openclaw channels add --channel discord
WhatsApp (requires a dedicated phone number):
docker compose exec -it openclaw-gateway openclaw channels add --channel whatsapp
docker compose exec -it openclaw-gateway openclaw channels login --channel whatsapp
Restart after adding the channel:
docker compose restart openclaw-gateway
Send a test message. If the agent responds, you are deployed.
Access the Control UI
The gateway binds to 127.0.0.1. It is not accessible from the public internet. To reach the Control UI from your laptop, open an SSH tunnel:
ssh -N -L 18789:127.0.0.1:18789 root@YOUR_VPS_IP
Open http://127.0.0.1:18789/ in your browser and paste the gateway token.
If your local gateway is already using port 18789, use a different local port:
ssh -N -L 19000:127.0.0.1:18789 root@YOUR_VPS_IP
Then open http://localhost:19000. If the page loads but shows no data, fix the allowed origins:
# Managed and VPS Native:
openclaw config set gateway.controlUi.allowedOrigins \
'["http://localhost:18789","http://127.0.0.1:18789","http://localhost:19000","http://127.0.0.1:19000"]' \
--strict-json
# VPS Docker:
docker compose exec openclaw-gateway openclaw config set \
gateway.controlUi.allowedOrigins \
'["http://localhost:18789","http://127.0.0.1:18789","http://localhost:19000","http://127.0.0.1:19000"]' \
--strict-json
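A plain `ssh -N -L` tunnel dies when your laptop sleeps or changes networks. If you want the tunnel to re-establish itself, `autossh` (a standard wrapper around ssh, not part of OpenClaw) can supervise it. This is an optional addition to the lesson, not something the gateway requires:

```shell
# Optional: self-healing SSH tunnel via autossh (install it first).
sudo apt-get install -y autossh
# -M 0 disables autossh's extra monitor port and relies on ssh keepalives:
autossh -M 0 -f -N \
  -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
  -L 18789:127.0.0.1:18789 root@YOUR_VPS_IP
```

The keepalive options make ssh itself notice a dead connection within about 90 seconds, at which point autossh restarts it.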
The Security Model
No reverse proxy. No TLS certificates. No WAF.
| Component | Role |
|---|---|
| Loopback bind | Gateway only on 127.0.0.1, nothing external can reach it |
| SSH tunnel | Encrypted point-to-point from your laptop to the VPS |
| Gateway token | Authentication for the Control UI once tunnel is open |
The SSH key IS the authentication. The tunnel IS the encryption. The loopback binding IS the access control. For a single-operator deployment, this is the correct security posture.
Production Security Hardening
Before any customer touches your agent, run the security audit:
# Managed and VPS Native:
openclaw security audit
# VPS Docker:
docker compose exec openclaw-gateway openclaw security audit
On a default installation, expect critical findings from groupPolicy set to open and warn findings for credential directory permissions.
DM Access Policies
Lesson 2 used pairing mode to onboard you: your number was auto-approved, and your first self-test worked immediately. For production, you need to make a deliberate choice about who can DM your agent. WhatsApp offers four modes, set via channels.whatsapp.dmPolicy:
| Mode | Behavior when a stranger DMs | Use case |
|---|---|---|
| `pairing` | Bot replies with a one-time code; operator approves via CLI | Personal use, small team onboarding |
| `allowlist` | Silently blocked. Only numbers in `allowFrom` can DM. | Production with a known contact list |
| `open` | Anyone can DM (requires `allowFrom: ["*"]`) | Public support or community bots |
| `disabled` | All DMs ignored. Group-only bots. | Announcement channels, group utilities |
Your own linked number is auto-allowed in every mode. That is why your Lesson 2 self-test worked without configuration.
To change modes, the easiest path is the same configure command you learned in Lesson 2:
openclaw configure --section channels
Pick WhatsApp, choose Modify settings, and select the new policy. The allowFrom list accepts E.164 numbers (for example, ["+15551234567", "+442071838750"]).
For production, pick allowlist. It is the only mode that blocks unknown senders without generating pairing codes that expire, hit caps, or require human approval. If you need to add people later, run openclaw configure --section channels again, or if you kept pairing mode for onboarding, use openclaw pairing approve whatsapp <CODE> to add them one at a time.
dmPolicy: "open" combined with tool access is how, in the installer's own words, "a bad prompt tricks the agent into doing unsafe things." Only use open for bots with no tool access, or with a minimal tool profile.
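If you prefer setting the policy non-interactively instead of walking the `configure` menu, the `config set` pattern used elsewhere in this lesson should apply. The `channels.whatsapp.dmPolicy` key is named in this section; the `allowFrom` key path is an assumption based on it, so verify against your config before relying on this:

```shell
# Sketch: allowlist mode set directly, using the key named in this section.
openclaw config set channels.whatsapp.dmPolicy allowlist
# allowFrom key path is an assumption; numbers are E.164 format.
openclaw config set channels.whatsapp.allowFrom '["+15551234567"]' --strict-json
openclaw gateway restart
```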
Two Commands to Zero Criticals
# Managed and VPS Native:
openclaw config set groupPolicy allowlist
chmod 700 ~/.openclaw/credentials/
# VPS Docker:
docker compose exec openclaw-gateway openclaw config set groupPolicy allowlist
chmod 700 /root/.openclaw/credentials/
Run the audit again. Zero criticals.
The Hardening Checklist
- `dmPolicy=allowlist` with explicit `allowFrom` numbers (not `pairing` or `open` for production)
- `groupPolicy=allowlist` (not `open`)
- Credentials directory = `700` permissions
- Tool profile = `messaging` or `minimal` (not `coding`)
- Log redaction enabled: `openclaw config set logRedaction tools`
- Backup verified: `openclaw backup` creates a portable backup; test the restore
- /commands awareness: all OpenClaw slash commands (`/think off`, `/forget`, `/sessions`) are accessible to every approved user with no role gating. Awareness mitigation only: add a note in the system prompt that these commands are operator-only
Cost Analysis
| Item | Monthly Cost |
|---|---|
| Hetzner VPS (CX21) | $5 |
| Model provider (paid) | $50-100 |
| Telnyx voice (optional) | $11 |
| Domain + DNS | ~$1 |
| Total | ~$67-117 |
If you chose the Alibaba Cloud managed server with the Singapore region free tier, your monthly cost can be as low as $0.99. The 1 million free tokens per model covers learning and light production with no LLM charges.
For heavier usage beyond the free tier, the model provider becomes the dominant cost. The real optimization is not cheaper hardware; it is fewer tokens per interaction. That is why Lesson 4 spent time on workspace file optimization and Lesson 9 covered heartbeat cost management.
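To make "tokens dominate" concrete, here is a back-of-envelope calculation. The message volume and the per-million-token price are illustrative assumptions, not quotes from any provider:

```shell
# Assumed workload: 200 messages/day, ~2,000 tokens per exchange,
# $3 per million tokens (hypothetical blended input/output rate).
msgs_per_day=200
tokens_per_msg=2000
price_per_million=3
monthly_tokens=$(( msgs_per_day * tokens_per_msg * 30 ))
model_cost=$(awk -v t="$monthly_tokens" -v p="$price_per_million" \
  'BEGIN { printf "%.2f", t / 1000000 * p }')
echo "Monthly tokens: $monthly_tokens"       # 12,000,000
echo "Model cost: \$$model_cost vs \$5 VPS"  # $36.00 vs $5
```

At these assumed numbers the model bill is roughly seven times the VPS bill, which is the point of this section: optimize tokens per interaction, not hardware.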
Try With AI
Exercise 1: Deploy or Trace
If you have a VPS, follow Steps 1-7 and deploy. If you do not, trace the deployment by reading each step and predicting what goes wrong if you skip it.
For each of the 7 deployment steps, write one sentence
describing what it does and what breaks if you skip it.
What you are learning: Production deployment is sequential. Skipping onboarding (Step 5) leaves a running gateway that never responds. Skipping channel setup (Step 6) means the VPS gateway has no way to receive messages.
Exercise 2: Map the Security Model
Draw a diagram showing: your laptop, the SSH tunnel,
the VPS, and the gateway bound to 127.0.0.1.
Label where authentication happens and where encryption happens.
Why is no TLS certificate needed?
What you are learning: The SSH tunnel replaces three components (reverse proxy, TLS termination, API gateway) with one. The security model is simple because the attack surface is small: SSH key authentication plus loopback binding.
Exercise 3: Calculate Your Costs
Calculate the monthly cost of running your AI Employee
in production. Include: VPS, model provider at your
expected message volume, and any optional services.
Compare this to the cost of a human performing the same tasks.
What you are learning: The infrastructure cost ($0.99-$15/month) is trivial. With the managed server free tier, the model cost can be $0 for light usage. Beyond the free tier, the model provider ($50-100/month) becomes the dominant cost. The economics favor AI Employees when the agent handles enough volume to justify any paid model usage.
What You Should Remember
The Right Deployment
A $5/month VPS (2 vCPU, 4 GB RAM) running a single OpenClaw gateway is the right production deployment for one AI Employee. Not Kubernetes. Not serverless. Not multi-region. One server, one gateway, one agent. If you are new, Alibaba's managed server is the fewest-gotchas free path. If you finished Lesson 2 and want the hands-on experience, the native VPS path uses the exact same openclaw commands you will use in Ch 57 and Ch 58.
SSH Tunnel Security
The gateway binds to 127.0.0.1. The SSH tunnel encrypts traffic from your laptop to the VPS. The SSH key is your authentication. No TLS certificate, no reverse proxy, no API gateway needed. Three components replaced by one.
Cost Reality
Infrastructure ($5/month VPS or $0.99-$8/month managed) is trivial. On the managed server with Singapore region free tokens, the model provider can cost $0 within the 1 million token limit per model. Beyond the free tier, the model provider ($50-100/month at moderate volume) becomes the dominant cost. Workspace optimization (shorter SOUL.md, lighter heartbeats) reduces that dominant cost.
The Pattern Repeats
The VPS setup is Lesson 2 on different hardware. Same wizard, same crash loop, same fixes. Thirteen lessons on your laptop were not just about features; they built the debugging instincts you need when the same problems appear on a server with no one else around.
When Emma came back, James had his phone in one hand and a terminal SSH session in the other. "It is responding from Germany."
"How long?"
"Forty-two minutes. openclaw was not on my PATH until I reopened the shell. Then the CORS thing when I tunneled to port 19000." He paused. "Same debugging pattern as Lesson 2, though. Run openclaw doctor, read the output, fix the config."
Emma nodded. "The CORS paper cut caught me too, first time. I expected it to just work."
She looked at the terminal. Health endpoint returning 200. Gateway uptime climbing. "Your agent runs when you sleep now. That is what separates a demo from a product."
James thought about his old job. The operations team had a saying: production is the thing that works at 3 AM when nobody is watching. His agent was that now.
"The hardest part was not the deployment," he said. "It was realizing that the setup from Lesson 2 repeats almost exactly on the VPS. Same wizard, same config, same crash loop. I already knew the fixes."
"That is the point." Emma closed her laptop. "Thirteen lessons on your laptop were not just about learning features. They were about building the instincts you need when the same problems appear on a server with no one else around to ask."