OpenClaw Setup

Install, Secure, and Run Your AI Agent (Local or VPS)

OpenClaw is easiest to adopt when you treat setup like infrastructure: install once, secure it properly, and keep it running as a reliable gateway you can access from your devices and channels.

This page walks you through the recommended installation path, the core components you’re configuring, and the infrastructure choices that matter most for stability and safety.

What “Setup” Means in OpenClaw

OpenClaw isn’t just an app—it’s an agent gateway plus your model configuration, skills library, and messaging channels.

The gateway can execute actions (shell commands, file reads/writes, network requests, and messages) depending on what you enable, so setup is also where you define your security boundaries.

If you get the foundation right—gateway bind/auth, workspace, and channel policies—everything else (skills, monetization workflows, scaling) becomes easier.

Recommended Path: The CLI Onboarding Wizard

The OpenClaw docs recommend starting with the CLI onboarding wizard (openclaw onboard) on macOS, Linux, or Windows, where running under WSL2 is strongly recommended.

One-Flow Config

Configures local/remote gateway, channels, skills, and workspace defaults in one guided flow.

QuickStart

Keeps safe defaults for immediate use with standard security settings.

Advanced Mode

Full control over mode, workspace, gateway settings, daemon install, and skills.
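As a sketch of the first steps, assuming OpenClaw is distributed as an npm package (check the official install docs for the current method on your platform), a typical first run looks like this:

```shell
# Install the CLI globally. An npm distribution is assumed here;
# verify the current install method in the official docs.
npm install -g openclaw

# Launch the guided onboarding wizard named in the docs; choose
# QuickStart for safe defaults or Advanced mode for full control.
openclaw onboard
```

The wizard walks through gateway, channel, skill, and workspace choices in one pass, so you rarely need to edit configuration files by hand on a first install.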

Key Defaults You Should Understand

These defaults affect security and debugging, so review them before moving on.

First Successful Run: Start Gateway + Open Dashboard

After onboarding, if you installed the background service, the gateway should already be running; otherwise you can run it manually and verify status.
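If you skipped the background-service install, a manual start and health check might look like the following. The subcommand names here are assumptions, not taken from the docs; confirm the exact names with the CLI's built-in help:

```shell
# Start the gateway in the foreground (assumed subcommand; check
# the CLI help output for the exact name in your version).
openclaw gateway

# In another terminal: check gateway and channel health (assumed
# subcommand), then probe the loopback dashboard port.
openclaw status
curl -i http://127.0.0.1:18789/
```

With auth enabled, the curl probe should return an authentication error rather than the dashboard, which is itself a useful sign that the gateway is up and protected.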

The docs show the local dashboard URL as http://127.0.0.1:18789/ when running on loopback.

Note: If you visit the dashboard directly without authenticating, you may see an authentication error—this is expected behavior when auth is enabled.

Local vs VPS: Choose Your Infrastructure

Local-First

(Laptop, Workstation, Mac mini)

Best when you want privacy, low latency to your files, and simple iteration while you build skills.

VPS Gateway

(For 24/7 Availability)

Better fit for agencies and productized services that need to be reachable from multiple devices.

In both cases, avoid treating the gateway like a public website; it’s a control plane for an agent that can act, so you should restrict exposure and use strong auth.

⚠️ Security Essentials (Don’t Skip This)

OpenClaw can execute arbitrary shell commands, read and write files, access network services, and message people—so the primary risk is not “fancy hacking” but someone simply being able to talk to your bot and trigger actions.

OpenClaw’s security guidance comes down to a simple rule: do not expose the gateway broadly. Keep it on loopback or behind a private network overlay, and rotate tokens and credentials if anything looks suspicious.
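One common way to follow that rule on a VPS is to leave the gateway bound to loopback and reach it over an SSH tunnel from your own machine. A sketch, with placeholder user and hostname:

```shell
# Forward local port 18789 to the gateway's loopback port on the VPS.
# While this runs, the dashboard is reachable on your laptop at
# http://127.0.0.1:18789/ without exposing it to the internet.
# "you@your-vps.example.com" is a placeholder; use your own host.
ssh -N -L 18789:127.0.0.1:18789 you@your-vps.example.com
```

This keeps the control plane private while still giving you remote access; tools like WireGuard or Tailscale achieve the same effect as a persistent overlay.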

Channels and Dependencies

During onboarding, you can configure providers/channels such as Telegram, WhatsApp, Discord, Google Chat, Mattermost (plugin), and Signal.

If you plan to use the WhatsApp or Telegram channels, the Getting Started docs note that Node is recommended and Bun is not, due to known issues with some providers.

Treat channels like production integrations: start with one channel, validate pairing/allowlists, then add more once you’ve tested the workflow end-to-end.

Local Models (Optional): Ollama Quick Setup

If you want to run models locally, Ollama provides a fast path:

ollama launch openclaw

This configures OpenClaw to use Ollama and starts the gateway. If the gateway is already running, OpenClaw can auto-reload the configuration changes. (The alias ollama launch clawdbot also works.)

Ollama’s docs also recommend using a context window of at least 64k tokens for OpenClaw use cases.
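One way to get a 64k context window in Ollama is to build a model variant with the num_ctx parameter in a Modelfile. The base model name below is a placeholder; substitute whichever local model you actually run:

```shell
# Define a variant with a 64k-token context window.
# "llama3.1" is a placeholder base model; pick your own.
cat > Modelfile <<'EOF'
FROM llama3.1
PARAMETER num_ctx 65536
EOF

# Register the variant with Ollama (requires Ollama to be installed
# and its server running).
ollama create my-openclaw-model -f Modelfile
```

Point OpenClaw's model configuration at the new variant name so every request gets the larger context window by default.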