OpenClaw is a self-hosted AI assistant that you reach through the messaging apps you already use. You run a small gateway process on your own computer (or a server) and connect it to WhatsApp, Telegram, Slack, iMessage, Discord, Signal, or any of about twenty other channels. From then on, the same assistant is reachable wherever you happen to be chatting.
It's open source under the MIT license, which means you can read the code, modify it, and run it without paying anyone. There's no hosted version — you own the whole stack.
Most AI assistants live in their own walled garden. You open the ChatGPT app, or the Claude app, or a separate window. OpenClaw flips that around. Instead of you going to the assistant, the assistant comes to where you already are. Want to ask a question while you're chatting with someone on WhatsApp? You forward the message to your assistant in the same WhatsApp interface. Want it to summarise a Slack thread? You @-mention it in the channel.
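The routing idea behind this is simple to sketch. The toy below is not OpenClaw's actual code — just the shape of the pattern: many channel adapters feed one agent, and each reply goes back out on whichever channel the message arrived on.

```typescript
// Toy sketch of the gateway idea -- not OpenClaw's real internals.
// Many channels funnel into one agent; replies return on the same channel.

type Message = { channel: string; text: string };

// Stand-in for a call to whatever model you configured
// (Claude, GPT, or a local one).
type Agent = (prompt: string) => string;

function makeGateway(agent: Agent) {
  return {
    // Every channel adapter (WhatsApp, Slack, ...) funnels
    // incoming messages through this one entry point.
    handle(msg: Message): { channel: string; reply: string } {
      const reply = agent(msg.text);
      return { channel: msg.channel, reply }; // same channel back out
    },
  };
}

// Same assistant, two different channels:
const gateway = makeGateway((prompt) => `echo: ${prompt}`);
console.log(gateway.handle({ channel: "whatsapp", text: "hi" }));
console.log(gateway.handle({ channel: "slack", text: "summarise this" }));
```

The point of the shape: the agent never knows or cares which app the message came from, which is why adding a twentieth channel doesn't change the assistant at all.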
The other thing that matters is sovereignty. Because the gateway runs on your hardware and your data stays local, you can connect it to a local model served through Ollama and have a fully private assistant that never sends anything to the cloud. Or you can point it at Claude or GPT and forward only the messages you actively want answered. Either way, the channel history, the agent memory, and the configuration live on your machine, not somebody else's.
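To make the local-versus-cloud switch concrete, here is what a provider setting could look like. The file layout and key names below are invented placeholders for illustration — not OpenClaw's actual config schema — but the idea is the same: one field decides whether prompts go to a cloud API or to a model server on localhost.

```json
{
  "model": {
    "provider": "ollama",
    "endpoint": "http://localhost:11434",
    "name": "llama3.1"
  },
  "channels": ["whatsapp", "telegram"]
}
```

Swap `"provider"` to a cloud vendor and only the completion requests leave your machine; everything else in the stack stays where it is.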
It also pairs nicely with companion apps for macOS, iOS, and Android — turning your phone or laptop into another input surface (camera, voice, canvas) the agent can pull from.
The trade-off with self-hosting is always: you own the data, but you also own the uptime. If the laptop running OpenClaw goes to sleep, your assistant is offline. For casual use that's fine. For "always available," put it on something that stays awake — a Mac mini at home, a small Hetzner box, a Raspberry Pi.
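If that "something that stays awake" is a Linux box, a user-level systemd unit is one common way to keep the gateway running and restart it if it crashes. A sketch, assuming `openclaw` is on that machine's PATH (the command and port come from the quick-start; adjust paths to your install):

```ini
[Unit]
Description=OpenClaw gateway
After=network.target

[Service]
ExecStart=/usr/bin/env openclaw gateway --port 18789
Restart=on-failure

[Install]
WantedBy=default.target
```

Drop it in ~/.config/systemd/user/openclaw.service, then enable it with systemctl --user enable --now openclaw.service.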
The other thing to be honest about: setting up channel bridges (especially WhatsApp, which fights bots actively) takes some patience. The openclaw onboard command holds your hand through it, but expect to spend an evening getting your first two or three channels happy.
If you've been frustrated that "the AI" lives in one app while your real conversations live in five others, this is the cleanest open-source way to fix that gap.
Self-hosted — Running the software on your own machine instead of using someone else's hosted service. You get full control and privacy in exchange for being responsible for keeping it running.
Gateway — The central process that talks to messaging channels on one side and to your AI model on the other. OpenClaw is essentially a smart gateway with batteries included.
Channel — A specific messaging surface — WhatsApp, Slack, iMessage, etc. OpenClaw connects to many at once, and the same assistant is reachable from all of them.
MIT license — A permissive open-source license. Anyone can use, copy, modify, or distribute the code, including for commercial purposes, with almost no restrictions.
You need Node 22.14 or newer. The whole onboarding takes about five minutes.
npm install -g openclaw@latest
openclaw onboard --install-daemon
openclaw gateway --port 18789

Yes. It's MIT-licensed open source. You install it on your own machine — there's no subscription and no hosted version to pay for. The only costs are whatever AI provider you point it at (Claude, GPT, local model, your call).
A laptop works for trying it out. For an always-on assistant you'll want it on a machine that stays awake — a Mac mini at home, a small VPS, an old desktop. Anything that runs Node and stays online.
A lot. WhatsApp, Telegram, Slack, Discord, iMessage, Signal, Matrix, Microsoft Teams, Google Chat, plus more niche ones like Mattermost, Nextcloud Talk, and IRC. Some channels need a companion app or bridge running, but the gateway handles the connection logic.
Hermes is built around an agent's memory. OpenClaw is built around channels — making the same assistant reachable wherever you already chat. The closest comparison is something like a personal Slackbot, except it works on every chat app at once and you own all the data.
Only what you send to the AI provider for completions. Messages, history, and channel state stay on the gateway you run. If you point it at a local model served through Ollama, nothing leaves your machine at all.