OpenClaw Stopped Responding? Fix "Context Limit Exceeded"

Compact history, keep momentum

It’s the classic “too much of a good thing.” Your session gets long, ideas flow, then—silence or a blunt Context Limit Exceeded error. Your model is out of short‑term memory. Here’s the fast fix.

If your LLM stops responding, it’s not moody—it’s out of context window. Use /compact to summarize the conversation, reclaim tokens, and continue without losing the thread.

Why Is This Happening?

Every LLM has a Context Window—its short‑term memory. Each message and response consumes tokens. When the total exceeds the model’s limit, the AI can’t “read” the entire thread, and responses stall or error.
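You can get a feel for how close you are to the limit with a rough estimate. The sketch below uses the common "about 4 characters per token" heuristic; the function names and the 128k limit are illustrative assumptions, not OpenClaw internals, and a real tokenizer would give exact counts.

```python
# Rough token estimate for a chat history (heuristic: ~4 characters per token).
# The 128k default limit is illustrative; check your model's actual window.

def estimate_tokens(messages):
    """Approximate total tokens across all messages."""
    return sum(len(m["content"]) // 4 for m in messages)

def fits_in_window(messages, limit=128_000):
    """True if the estimated history still fits the model's context window."""
    return estimate_tokens(messages) <= limit

history = [
    {"role": "user", "content": "Refactor the auth module."},
    {"role": "assistant", "content": "Sure. Here is a plan. " * 100},
]
print(estimate_tokens(history), fits_in_window(history))
```

Once the running total approaches the model's limit, new messages push old ones "out of view" and responses degrade or fail.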

The Solution: The /compact Command

OpenClaw includes a built‑in feature to fix this without starting a brand‑new chat. /compact summarizes the conversation into a concise state and replaces the oversized history with that summary.

How to use it

  • Identify the Stall: If you see “Context Limit” or mid‑sentence stops, focus the input box.
  • Type the Command: Enter /compact and press Enter.
  • The Magic: OpenClaw asks the LLM to generate a concise summary of key points, decisions, and context.
  • Memory Reset: The large message history is replaced with this single summary.
  • Result: You keep the “soul” of the session and regain 80–90% of the context window.
    /compact
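The steps above can be sketched in miniature. This is a conceptual model of what a /compact-style command does, not OpenClaw's actual implementation: `summarize` stands in for a real LLM summarization call and here just keeps the first sentence of each message.

```python
# Conceptual sketch of a /compact-style command: summarize, then replace
# the oversized history with that single summary message.

def summarize(messages):
    """Placeholder for an LLM summarization call (assumption, not OpenClaw's API)."""
    points = [m["content"].split(".")[0] for m in messages]
    return "Summary of session: " + "; ".join(points)

def compact(messages):
    """Replace the full history with one summary message."""
    return [{"role": "system", "content": summarize(messages)}]

history = [
    {"role": "user", "content": "Set up the database schema. Use Postgres."},
    {"role": "assistant", "content": "Done. Tables created for users and orders."},
]
history = compact(history)
print(len(history))  # the whole thread is now a single summary message
```

The key design point: the summary keeps decisions and context while discarding the verbatim back-and-forth, which is where most of the tokens live.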

Pro‑Tips for Managing Context

Be Proactive

Don’t wait for errors. If the thread is getting long, run /compact before the model slows down.
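One way to be proactive is a simple threshold check before each turn. The 4-characters-per-token estimate and the 80% threshold below are illustrative assumptions; tune them to your model.

```python
# Proactive check: compact once estimated usage crosses a threshold
# (say 80%) instead of waiting for a hard "Context Limit Exceeded" error.

CONTEXT_LIMIT = 128_000   # tokens; depends on your model (illustrative)
THRESHOLD = 0.8           # compact at 80% usage (illustrative)

def should_compact(messages, limit=CONTEXT_LIMIT, threshold=THRESHOLD):
    """True once the estimated history crosses the usage threshold."""
    used = sum(len(m["content"]) // 4 for m in messages)
    return used >= limit * threshold
```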

Selective Pruning

Manually delete redundant code blocks or “thank you” messages. Every token counts in long sessions.
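If you were scripting this kind of pruning, it might look like the sketch below. The "filler" phrases are illustrative examples of low-value messages; duplicates are dropped by exact content match.

```python
# Selective pruning: drop courtesy filler and exact duplicates before
# resorting to a full compact. The FILLER set is an illustrative assumption.

FILLER = {"thanks", "thank you", "ok", "great", "got it"}

def prune(messages):
    """Remove short courtesy messages and exact duplicate contents."""
    seen = set()
    kept = []
    for m in messages:
        text = m["content"].strip().lower().rstrip("!.")
        if text in FILLER:
            continue
        if m["content"] in seen:
            continue
        seen.add(m["content"])
        kept.append(m)
    return kept
```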

Switch Models

When possible, use a long‑context model (e.g., Claude 3.5 Sonnet, GPT‑4o) if your API credits allow.

Troubleshooting: What if /compact Fails?

If the context is so full that the model can't even process the summary request:

  • Manually delete a few of the oldest or largest messages (big pasted code blocks are the usual culprits), then run /compact again.
  • If that still fails, start a fresh session and paste in a short, hand-written summary of the decisions made and the tasks still open.
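One common fallback is to drop the oldest messages until the request fits again; a minimal sketch, using the same rough 4-characters-per-token estimate (an assumption, not OpenClaw's behavior):

```python
# Fallback when even the summary request overflows: drop the oldest
# messages until the estimated history fits under the limit.

def truncate_oldest(messages, limit_tokens):
    """Remove messages from the front until the estimated total fits."""
    msgs = list(messages)
    while msgs and sum(len(m["content"]) // 4 for m in msgs) > limit_tokens:
        msgs.pop(0)
    return msgs
```

Dropping from the front loses the oldest context first, which is usually the least relevant to the current task; anything important from early on can be restated by hand.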
