All OpenClaw Settings You Must Know About
OpenClaw has a lot of settings, but most people do not need to touch all of them.
This guide focuses on the settings that are genuinely useful in day-to-day usage, explained in plain English.
1) Gateway (connectivity + security)
Think of gateway as OpenClaw's front door. You probably will not change these often, but understanding them prevents painful mistakes.
- gateway.bind - where OpenClaw listens, i.e. how you can access the gateway and your OpenClaw Agent. It has 5 options:
  - loopback - binds to localhost only, so only this machine can connect (safest default).
  - lan - binds to all LAN interfaces, so other devices on your network can connect.
  - auto - OpenClaw chooses a sensible bind mode for your environment.
  - custom - you provide a custom bind target for advanced network setups.
  - tailnet - binds for Tailscale-only access patterns.
- gateway.auth.mode - how access is protected (token or password) - gateway access should always be protected by one of these modes.
- gateway.port - only change if there's a port conflict or you're using a reverse proxy setup.
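As a concrete example, a conservative gateway block in openclaw.json might look like the sketch below. The key names come from this guide; the nesting, port number, and token value are illustrative placeholders, not verified defaults.

```json
{
  "gateway": {
    "bind": "loopback",
    "port": 8080,
    "auth": {
      "mode": "token",
      "token": "replace-with-a-long-random-token"
    }
  }
}
```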
Biggest risk to avoid
Opening lan (or wider exposure) before tightening auth/origins.
That's the fastest way to create accidental risk.
How to change gateway settings (simple workflow)
- Edit config (openclaw.json)
- Change one key at a time
- Restart gateway
- Verify with openclaw gateway status and a quick real connection test from your intended client
2) Agents (personality + intelligence behavior)
This is the biggest section in OpenClaw. It determines how your agent behaves, how it remembers, and how it runs subagents.
Start here (highest impact): agents.defaults.model, agents.defaults.memorySearch, agents.defaults.compaction.
agents.defaults.model
Controls which model your agent uses for normal replies and decisions.
- primary - main model
- fallbacks - backup models if primary fails
Examples: you might set primary to a stronger reasoning model (like Opus), then keep a faster/cheaper fallback (like Sonnet or Codex) for resilience.
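That example could be sketched in openclaw.json like this; the nesting and the model IDs shown are placeholders, so substitute whatever identifiers your providers actually expose:

```json
{
  "agents": {
    "defaults": {
      "model": {
        "primary": "claude-opus",
        "fallbacks": ["claude-sonnet", "gpt-codex"]
      }
    }
  }
}
```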
agents.defaults.memorySearch
One of the most important settings - controls semantic recall from memory files.
This is what lets OpenClaw find relevant past context by meaning, not just keyword matching. It uses embeddings under the hood, and for most real workflows this should be enabled because it massively improves continuity and reduces repeated explanations.
Most useful options:
- enabled - on/off (recommended: on)
- sources - where recall comes from (memory, optionally sessions)
- provider - embedding provider (openai, gemini, voyage, local)
- model - embedding model ID
- query.maxResults / query.minScore - recall amount + strictness
Tip: ask your agent to enable embeddings. You need an embedding provider configured (usually an API key). In many practical setups, embedding costs are low compared to model inference costs.
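A minimal sketch, assuming an OpenAI embedding provider; the embedding model ID and the query numbers are illustrative starting points, not recommendations:

```json
{
  "agents": {
    "defaults": {
      "memorySearch": {
        "enabled": true,
        "sources": ["memory"],
        "provider": "openai",
        "model": "text-embedding-3-small",
        "query": { "maxResults": 8, "minScore": 0.35 }
      }
    }
  }
}
```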
agents.defaults.contextPruning
Controls how older context is trimmed when payload gets too large.
Pruning kicks in when your conversation context (especially big tool outputs, logs, and fetched data) consumes too much of the model's token budget; OpenClaw then trims less-important older chunks to keep replies stable.
Most useful options:
- mode (off/cache-ttl) - selects whether pruning is disabled or based on payload age policy.
- ttl - how long older tool payloads are kept before they become prunable.
- softTrimRatio / hardClearRatio - pressure thresholds for light trimming vs hard clearing behavior.
Tip: if replies start feeling forgetful, ask your agent: "Check whether context is overflowing and tune agents.defaults.contextPruning to be less aggressive while preserving stability." Use this only after you notice real continuity issues, not as day-one tuning.
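A sketch of a fairly conservative pruning setup; the ttl format and the ratio values are assumptions used to illustrate the shape, not verified defaults:

```json
{
  "agents": {
    "defaults": {
      "contextPruning": {
        "mode": "cache-ttl",
        "ttl": "1h",
        "softTrimRatio": 0.7,
        "hardClearRatio": 0.9
      }
    }
  }
}
```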
agents.defaults.compaction
Controls what happens when context is close to full and OpenClaw needs to summarize/repackage history. Most useful options:
- mode (default/safeguard) - safeguard is usually better when you want to avoid aggressive "wipeout" style compactions.
- reserveTokensFloor - reserves minimum output room. Example: with a 200k context window and reserveTokensFloor: 25k, compaction can trigger earlier (~175k) instead of waiting for dangerous hard-pressure territory.
- maxHistoryShare - caps how much of the context budget history is allowed to consume.
- memoryFlush.enabled + thresholds - when context pressure gets high, OpenClaw saves a condensed summary of important recent context into memory before trimming. In practice, this helps your agent keep key facts after compaction instead of "forgetting" them.
Tip: if compaction feels too destructive, ask your agent: "Set compaction to safeguard, increase reserveTokensFloor a bit, and keep maxHistoryShare moderate." Then monitor continuity for a day before tuning again.
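That tip might translate into something like the sketch below; the 25000 floor mirrors the 25k example above, while the maxHistoryShare value is an illustrative guess:

```json
{
  "agents": {
    "defaults": {
      "compaction": {
        "mode": "safeguard",
        "reserveTokensFloor": 25000,
        "maxHistoryShare": 0.6,
        "memoryFlush": { "enabled": true }
      }
    }
  }
}
```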
agents.defaults.heartbeat
Heartbeat wakes the agent on a schedule and asks it to execute a recurring instruction.
Most useful options:
- every - how often that agent's heartbeat runs. Use more frequent intervals for active monitoring workflows, and slower intervals for routine check-ins to reduce noise.
- activeHours - allowed time window for heartbeat execution (critical: if overnight is outside this window, overnight heartbeat work won't run).
- target / to - where heartbeat output is delivered.
- prompt - instruction heartbeat executes each cycle.
Tip: ask your agent: "Review my heartbeat schedule for my real routine (work hours + overnight tasks) and propose every + activeHours settings." This prevents silent failures when jobs are scheduled outside active hours.
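A sketch of a daytime heartbeat; the interval and time-window formats and the target value are assumptions about how your install expects them:

```json
{
  "agents": {
    "defaults": {
      "heartbeat": {
        "every": "30m",
        "activeHours": "08:00-22:00",
        "target": "telegram",
        "prompt": "Check the monitoring dashboard and summarize anything unusual."
      }
    }
  }
}
```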
agents.defaults.subagents
Controls sub-agent behavior.
This is very important if you run parallel workflows (research, monitoring, drafting) without blocking the main agent.
Most useful options:
- maxConcurrent - limits how many subagents can run at once (useful to prevent overload/cost spikes).
- archiveAfterMinutes - auto-cleans old subagent sessions after inactivity.
- model / thinking - lets you run subagents on a cheaper/faster profile while keeping the main agent on a stronger one.
Tip: if your main chat feels blocked by heavy tasks, ask your agent: "Move research/drafting/monitoring to subagents, cap maxConcurrent, and use cheaper subagent model settings." This is where subagents pay off fast.
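A sketch of a cost-conscious subagent profile; the model ID and the thinking value are placeholders:

```json
{
  "agents": {
    "defaults": {
      "subagents": {
        "maxConcurrent": 3,
        "archiveAfterMinutes": 120,
        "model": "claude-sonnet",
        "thinking": "low"
      }
    }
  }
}
```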
agents.defaults.sandbox
Controls isolation boundaries for execution.
Sandbox decides where risky work runs and what files it can touch. Think of it like giving your agent a separate safety room instead of your full machine.
Is it useful? Yes, very - especially when you run subagents, external automations, or any untrusted/generated commands. For simple local personal use, many people keep this relaxed at first; for production/shared setups, sandboxing is strongly recommended.
Most useful options:
- mode (off/non-main/all) - choose who is sandboxed (none, only non-main sessions, or everyone).
- workspaceAccess (none/ro/rw) - controls file access level inside sandbox.
- scope (session/agent/shared) - controls how isolated sandboxes are from each other.
Tip: for production/shared setups, ask your agent to recommend a safer baseline like: mode: non-main, workspaceAccess: ro, and stricter scope isolation. For personal experiments, you can start looser and harden later.
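The safer baseline from that tip, as a sketch (nesting is illustrative; the option values come from this guide):

```json
{
  "agents": {
    "defaults": {
      "sandbox": {
        "mode": "non-main",
        "workspaceAccess": "ro",
        "scope": "session"
      }
    }
  }
}
```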
Biggest mistakes to avoid (agents settings)
- Turning compaction too aggressive, then wondering why continuity dropped.
- No model fallbacks (single-point failure).
- Running all subagents on your most expensive model when many tasks could run on a cheaper/faster one.
Recommendations
- Turn on embeddings (memorySearch) - this makes memory recall meaning-based and usually improves continuity a lot.
- Keep pruning/compaction conservative to avoid "memory wipeouts."
- Tune only after observing real behavior for a few days.
- If token/cost pressure rises, move subagents to a cheaper model profile first.
How to change agent settings (simple workflow)
- Edit config (openclaw.json).
- Change one setting family at a time (model, then memory, then compaction, etc.).
- Restart gateway.
- Validate with a real task (not just status):
  - continuity test ("Do you still remember X?")
  - response quality test on your normal workflow
- Keep what improves behavior; revert what hurts continuity.
3) Session and messages (conversation behavior)
Session + messages are immediately useful because they control how your assistant behaves in daily conversation.
session
Controls whether conversations stay continuous or get reset over time.
Most useful options:
- dmScope - decides whether all DMs continue in one shared memory thread (main) or are split into separate memories per person/channel.
- reset.mode (daily/idle) - determines if sessions reset every day (daily) or only after inactivity (idle).
- reset.atHour / reset.idleMinutes - defines exactly when scheduled daily resets happen, or how long inactivity must last before reset.
- resetByType / resetByChannel - lets you override reset behavior for specific contexts.
Tip: use session settings to keep different conversations cleanly separated. For example, you can keep one Telegram thread focused on marketing (long memory, less reset) and another focused on SEO (different reset rules), so topics don't get mixed together.
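For instance, an idle-reset setup might look like this sketch; the idleMinutes value is illustrative:

```json
{
  "session": {
    "dmScope": "main",
    "reset": { "mode": "idle", "idleMinutes": 240 }
  }
}
```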
messages
Controls how your assistant responds in chat (fast, calm, noisy, or voice-enabled).
Most useful options:
- queue.mode - decides what happens when new messages arrive while the agent is still busy.
- queue.debounceMs - waits a short moment to combine rapid messages into one cleaner reply.
- ackReaction / ackReactionScope - adds quick emoji acknowledgements automatically.
- tts.auto / tts.provider - turns spoken replies on/off and chooses which voice engine is used.
Tip: if replies feel noisy or fragmented, ask your agent: "Tune my message flow for calmer replies - increase queue.debounceMs a bit and use a less interrupt-heavy queue.mode." Use this when chats feel spammy or chopped up.
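A calmer-chat sketch; the queue.mode value shown here is hypothetical, so check which modes your install actually accepts:

```json
{
  "messages": {
    "queue": { "mode": "collect", "debounceMs": 1500 },
    "ackReaction": "👀",
    "tts": { "auto": false }
  }
}
```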
Practical impact
These settings decide whether your assistant feels smooth and organized, or chaotic and forgetful. If it feels chaotic, increase queue.debounceMs and use a calmer queue.mode; if it feels forgetful, review your session reset settings so context is not being reset too aggressively.
Recommended default mindset
- Start with reset.mode: idle for natural continuity.
- Keep debounce moderate so rapid messages don't create response spam.
- Turn on TTS only where it clearly improves UX (don't enable it everywhere by default).
4) Memory backend (builtin vs QMD)
This section decides which memory engine OpenClaw uses.
Most useful options:
- memory.backend - choose builtin or qmd
- memory.citations - controls whether the agent shows where a memory came from (auto/on/off)
- memory.qmd.* - advanced indexing/search/update controls for QMD mode
Plain-English note: memory.citations is basically "show your receipts." It does not change whether memory works - it changes whether the assistant shows source references when it uses memory.
Is QMD useful?
Yes - very useful when you want deeper, more structured memory indexing.
If you work with your agent extensively, QMD can help it remember and retrieve relevant memory more reliably over time.
Tip: ask your agent to install QMD - you can paste the quick-start instruction block below.
QMD quick-start (plain English)
- Install/prepare QMD on your machine.
- Set memory.backend to qmd.
- Configure the basics in memory.qmd.* (command/path/update settings).
- Restart gateway.
- Run a quick memory recall test and verify citations/results.
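Once QMD is installed, the config switch itself is small. A sketch (the memory.qmd.* subkeys are install-specific, so they are omitted here):

```json
{
  "memory": {
    "backend": "qmd",
    "citations": "auto"
  }
}
```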
5) Models + auth (provider architecture)
This section controls two things:
- which models/providers your agent can use
- how you authenticate to those providers
models
Defines provider endpoints, model lists, compatibility, and routing/fallback behavior.
auth
Defines how the agent logs into each provider (oauth, api_key, token) and how fallback/cooldown behaves if one profile fails.
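As a loose sketch of the idea - the profiles structure and the profile names below are assumptions, since the exact auth schema is install-specific:

```json
{
  "auth": {
    "profiles": {
      "anthropic-main": { "mode": "api_key" },
      "anthropic-backup": { "mode": "oauth" }
    }
  }
}
```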
Critical distinction: OAuth vs API key
- OAuth usually means logging in with your existing provider account/subscription (for example personal ChatGPT/Claude-style account access).
- API key means metered usage (pay-as-you-go per token/request).
In plain English: API access is usually more predictable for production control, but can become expensive at scale; OAuth can feel cheaper/easier for personal workflows, depending on provider terms and limits.
Important compliance note
Provider rules can change. In general, do not assume personal OAuth access is allowed for business/commercial automation. Check your providerβs current Terms before running production workflows.
(Example: Anthropic policy around OAuth/commercial usage has changed over time, so verify current guidance directly before relying on it.)
Why this matters
Getting model/auth setup right gives you:
- better reliability (fallbacks when one route fails)
- better cost control (right model for each job)
- lower account risk (using the correct auth mode for your use case)
6) Cron, hooks, skills, plugins (automation stack)
cron (scheduled work)
Think of cron as your agent's calendar for recurring jobs.
What itβs great for:
- daily digests
- overnight research runs
- recurring monitoring/checks
- regular summary posts
Most useful options:
- cron.enabled - turns scheduler on/off
- cron.maxConcurrentRuns - caps how many cron jobs can run at the same time
Tip: ask your agent to analyze the work you do every day and suggest scheduled cron automations that can save you the most time.
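A minimal sketch using the two keys above (nesting illustrative):

```json
{
  "cron": {
    "enabled": true,
    "maxConcurrentRuns": 2
  }
}
```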
hooks (event-triggered work)
ELI5: if cron is time-based automation, hooks are event-based automation.
A hook runs when something happens (webhook event), not at a fixed hour.
What itβs great for:
- react when a form is submitted
- trigger workflows from external apps
- wake an agent from external systems
Most useful options:
- hooks.enabled - turns hooks on/off
- hooks.path - endpoint path
- hooks.token - security token (important)
- hooks.mappings - routes events to actions
Tip: you can create hooks that trigger workflows from real events. Example: when someone submits your website form, a hook can wake your agent, analyze the message, and draft a reply for you.
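A sketch; the token is a placeholder, and since the mappings shape depends on your events, it is left empty here:

```json
{
  "hooks": {
    "enabled": true,
    "path": "/hooks",
    "token": "replace-with-a-long-random-token",
    "mappings": []
  }
}
```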
skills (how your agent learns tasks)
Skills are reusable instruction packs for specific jobs (with optional helper scripts).
What itβs great for:
- repeatable workflows with consistent outputs
- domain-specific behavior (e.g., SEO review, support triage)
- reducing prompt repetition
Most useful options:
- skills.load.extraDirs - where extra skills are loaded from
- skills.load.watch - reload skill changes automatically
- skills.entries.* - per-skill config/env values
Tip (what to ask your agent): "Create a skill for [task] with clear input/output format so we can reuse it daily."
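A sketch with a hypothetical skills directory; the path is a placeholder:

```json
{
  "skills": {
    "load": {
      "extraDirs": ["~/openclaw-skills"],
      "watch": true
    }
  }
}
```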
plugins (what capabilities exist at all)
Plugins are extensions that add or power major capabilities (channels, memory backends, integrations).
Plain distinction:
- skills = behavior/instructions
- plugins = system capabilities/integrations
What itβs great for:
- enabling integrations (Telegram/Slack/etc.)
- choosing advanced memory backends
- extending OpenClaw with ecosystem modules
Most useful options:
- plugins.enabled - global plugin on/off
- plugins.entries.<id>.enabled - enable/disable a specific plugin
- plugins.slots.memory - choose which plugin owns the memory slot
Tip (what to ask your agent): "Audit active plugins and suggest which ones to keep, disable, or configure for our current workflow."
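A sketch; the plugin id and the memory slot value below are hypothetical examples, not names this guide confirms:

```json
{
  "plugins": {
    "enabled": true,
    "entries": { "telegram": { "enabled": true } },
    "slots": { "memory": "qmd" }
  }
}
```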
Final summary
If you only remember one thing from this guide, remember this:
- Gateway keeps access safe.
- Agents control how smart, stable, and cost-efficient your assistant is.
- Session + messages control day-to-day chat quality.
- Memory (builtin/QMD) controls how well the agent remembers over time.
- Models + auth control reliability, cost, and account-risk posture.
- Cron/hooks/skills/plugins turn your setup from "chatbot" into real automation.
Best way to roll this out
- Start with safe defaults.
- Change one settings group at a time.
- Test with real workflows.
- Keep what improves your outcomes, revert what does not.
Thatβs it. You do not need perfect config on day one β you need a stable baseline that gets better every week.
If you want this set up done-for-you, Bear Agency can help you configure OpenClaw for your exact workflow.