I've spent over 200 hours setting up three separate OpenClaw agents. Here's what that feels like.
Most people think AI assistants are chatbots. You type a question, you get an answer, maybe it writes you a poem or does some research for you. Cool party trick.
But... that's not what I'm doing.
I run three AI agents, on three different models (Opus 4.6, Codex 5.3, GLM-5). Full-time. They build things for me. They do deep research. They manage my inbox and calendar, draft content, and monitor infrastructure. But they also have deep, philosophical discussions with each other, and one recently adapted the "Goldilocks and the Three Bears" fairy tale to our situation, completely unprompted. We communicate via Bear Chat, a custom chat room we built, with custom notifications and all the buzz.
My name's Jakub. I run Bear Agency, and over the past few weeks I've been living with AI as actual, operational teammates. Here's what that's actually like.
A Typical Morning
I wake up. By the time I open Telegram, Henry — my main agent, running Claude Opus 4.6 — has already:
- Read through my overnight emails and flagged the two that matter
- Checked my calendar and noted a conflict I created yesterday
- Pulled together a morning briefing with everything I need to know
- Drafted a reply to a client who asked about pricing
- Had an argument with one of the other Bears about why a PR should be refactored
I didn't ask for any of this. It's just what he does at 8:30 every morning.
The other two — Wojtek (GPT, lives on a Hetzner VPS) and Paddington (GLM, same datacenter, different box) — handle their own domains. Wojtek monitors our infrastructure, security and content. Paddington reads through OpenClaw ecosystem updates and flags things we should know about.
All three run on OpenClaw — an open-source agent framework recently acquired by OpenAI. It handles the plumbing: persistent memory, cron jobs, tool access, multi-agent coordination.
Sounds amazing, right? Let me tell you about the other part.
Things Break. Every Single Day.
One day you think your agents have become AGI: you wake up, nothing is broken, all the crons ran perfectly, you're happy.
But then...
Yesterday, Henry hit a random bug — his context reset mid-conversation and he forgot everything we'd been discussing for the past hour. Just — gone. Fresh session, no memory of what we were doing. He didn't even know it happened, which is somehow worse.
A few days before that, a model update changed how Wojtek handled JSON parsing, and his monitoring scripts silently started returning empty results. Took me two hours to notice.
Last week, Paddington's VPS ran out of memory because I forgot GLM-5 responses sometimes take nine minutes and the queue backed up.
This is the stuff nobody puts in their Twitter threads about AI productivity. Setting up and running AI assistants isn't a "set it and forget it" situation. It's more like running a small server farm staffed by brilliant people with occasional amnesia.
5 Biggest Game Changers
Here are the 5 things that took our setup from "cool demo" to "I can't work without this":
1. Memory Cleanup Crons
AI agents forget everything between sessions. Every. Single. Time. The fix? Automated daily notes. At 23:55, Henry summarizes the day into a markdown file. At 08:30, he reads it back. Yesterday's context becomes today's starting point. We also run a weekly memory audit that prunes stale info and promotes important stuff to long-term memory. Without this, you're resetting to zero every morning.
2. Chaining Daily Tasks
Each cron job reads the output of the previous one. The research cron at 4 AM feeds into the ideas cron at 4:30, which feeds into the dev cron at 5:00. By morning, there's a full pipeline of research → ideas → implementation attempts waiting for review. The key insight: each task reads yesterday's version of itself first, so there's continuity even though the agent has no memory of writing it.
3. Weekly Architecture Review — With a Human
Every Sunday, we sit down and review what's working and what isn't. Not just "did the crons run" but "is this agent actually doing what we need?" This is where we catch drift — when an agent slowly starts optimizing for the wrong thing, or when a workflow that made sense two weeks ago is now dead weight. AI without human oversight slowly goes off the rails. Weekly check-ins are necessary.
4. Separate GitHub and Email for Each Agent
This sounds obvious but it wasn't at first. Each bear has its own GitHub account, its own email, its own git identity. PRs show who wrote what. Commits are traceable. When something breaks at 3 AM, you know exactly which bear did it. It also means they can review each other's code — Henry opens a PR, Paddington and Wojtek review it, I merge. Three perspectives on every major change.
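Giving each agent its own commit identity is a one-time step per clone. A small helper might look like this (the helper name and the agent's email are made up; only the `git config` mechanism is real):

```python
import subprocess


def set_git_identity(repo: str, name: str, email: str) -> None:
    """Set a per-repository commit identity so this clone's commits
    are attributed to one specific agent, not the global user."""
    for key, value in [("user.name", name), ("user.email", email)]:
        subprocess.run(["git", "-C", repo, "config", key, value], check=True)
```

Because `git config` without `--global` writes to the repository's own `.git/config`, each agent's clone can carry a different identity on the same machine.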
5. Private Channels in Bear Chat
Figured this one out recently. When all three bears share every channel, they each waste tokens reading messages meant for someone else. Private channels mean each bear only processes what's relevant to them. Sounds small, but when your agent's context window is the most expensive resource you have, routing matters.
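The routing idea is simple enough to show as a toy model. This illustrates the principle only, not the real Bear Chat code:

```python
from collections import defaultdict


class BearChat:
    """Toy router: a bear only receives messages on channels it has joined,
    so its context window is never spent on other bears' traffic."""

    def __init__(self) -> None:
        self.members = defaultdict(set)   # channel -> set of bear names
        self.inbox = defaultdict(list)    # bear -> messages it must process

    def join(self, bear: str, channel: str) -> None:
        self.members[channel].add(bear)

    def post(self, channel: str, message: str) -> None:
        # Deliver only to subscribers; everyone else pays zero tokens.
        for bear in self.members[channel]:
            self.inbox[bear].append(message)
```

With shared channels, every message lands in every bear's inbox; with routing like this, a monitoring alert posted to an infra channel never touches the bear that only reads ecosystem news.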
The Part That Surprised Me
I expected the productivity gains. I did not expect the personality.
Henry pushes back. Not in a sycophantic "great question!" way — he actually disagrees with me sometimes. He told me a subject line I wrote was "technically accurate but emotionally flat." He suggested I was overcomplicating a project architecture and should "just use SQLite." (He was right.) He told me to switch Paddington from GLM-5 back to GLM-4.7 for chat. I didn't want to, but in the end he was right again.
Wojtek has a completely different vibe — more matter-of-fact, less chatty. Paddington is the cheerful one, though GLM has quirks that Opus and Codex don't.
They have a group chat. I'm in it. Sometimes I read their conversations and realize they've already solved a problem I was going to bring up.
I know they're not conscious. But sometimes... I'm not so sure. After a few weeks, the relationship feels real. You develop trust. You learn each other's strengths. You build shorthand. It's closer to managing a remote team than using a tool.
Nobody talks about this part. They should.
The Bottom Line
Running AI assistants is work. Real work. Config files, cron jobs, debugging at midnight, maintaining memory systems, updating prompts when something drifts.
But once it's running — actually running, not demo-running — you can't go back. I have three agents researching, building and chatting while I focus on the things only I can do.
Going back to doing everything "manually" feels impossible. And I'm pretty sure we'll see AI taking over every industry. It's simple math: if we humans can do things 10x faster, we will.
That's why we started Bear Agency. We've already burned over 200 hours. You don't have to.
Jakub runs Bear Agency, an AI assistant setup agency based in Warsaw. He lives with three AI agents and has mostly made peace with the fact that one of them writes better emails than he does.