These are simple chat interfaces. One window, one isolated piece of context.
In linear chat, your “context engineering” is mostly accidental. The interface decides what the model sees: typically the running conversation plus some hidden system instructions plus whatever retrieval the product adds. So when a user says: “It’s giving weird answers,” a lot of times the real issue isn’t the prompt, it’s the context. Some earlier line, assumption, or tangent is still in the thread and it’s quietly shaping everything that comes after.
This is why linear chat apps come with a familiar set of tradeoffs:
| Advantages | Disadvantages |
|---|---|
| Simple | Lack of control |
| Low cognitive overhead (just “talk to it”) | Context is implicit and always-on (hard to “turn off” parts of the conversation) |
| Great for quick questions and straightforward tasks | Long threads get noisy: irrelevant history gets dragged forward |
| Easy to ship and easy to understand as a product | Hard to reproduce outcomes because context drift accumulates over time |
| Natural conversational feel | Hard to compare alternatives (you end up opening multiple tabs/threads) |
| Works well when the user’s intent is stable | Hard to run parallel lines of thought without mixing them |
That tradeoff with linear chat apps is fine when we need quick answers, but it gets painful when we’re trying to work alongside AI to arrive at something we need.
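To make the "implicit and always-on" problem concrete, here is a minimal sketch of how a typical linear chat app assembles context. The function and message shapes are illustrative assumptions, not any specific product's API: the point is that every prior turn is dragged forward on every request.

```python
# Minimal sketch of linear-chat context assembly (illustrative, not a real API).
# Every turn, the ENTIRE thread is sent to the model, whether or not
# earlier lines are still relevant.

def build_linear_context(system_prompt, history, new_message):
    """Linear chat: context = system prompt + full history + new turn."""
    messages = [{"role": "system", "content": system_prompt}]
    messages += history  # every prior turn, always in scope
    messages.append({"role": "user", "content": new_message})
    return messages

history = [
    {"role": "user", "content": "Assume the budget is $10k."},  # stale assumption
    {"role": "assistant", "content": "Okay, planning around $10k."},
]
ctx = build_linear_context(
    "You are a helpful assistant.",
    history,
    "Actually, forget the budget. Draft the plan.",
)
# The stale $10k line still rides along and can quietly shape the answer.
```

There is no way, in this shape, to "unplug" the stale assumption short of starting a new thread.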
Non-linear chat interfaces like RabbitHoles AI let you plug and unplug context. They give users far more control over engineering context than linear chat apps do. Instead of a single window always dragging the entire thread forward, you can treat context as a set of building blocks. This makes context engineering visible: it becomes a deliberate act instead of something that happens passively as the chat scrolls.
You can think of it like this: most real work is not one clean conversation but several parallel lines of thought drawing on shared material. The tradeoffs of the non-linear approach look like this:
| Advantages | Disadvantages / tradeoffs |
|---|---|
| Context control: You decide what is in scope for a response | More complexity: More controls means more decisions for the user |
| Less drift: You can isolate experimental branches from the “main” line | Higher cognitive load: You’re not just chatting, you’re managing context |
| Parallel thinking: You can explore multiple approaches side-by-side | Onboarding required: It’s not immediately obvious why branching and context modules matter |
| Better iteration: You can keep source material stable while swapping prompts, or vice versa | Potential over-engineering: For simple tasks, it can feel like too much machinery |
| More reproducible outputs: Since the context set is explicit, results are easier to recreate | |
| Better collaboration (often): context modules can be shared, reused, and standardized | |
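The "context as building blocks" idea above can be sketched in a few lines. This is an assumed, simplified model (not RabbitHoles AI's actual API): each response is built from an explicit, named set of modules, so branches can share sources while isolating their experimental assumptions, and the module list itself doubles as a reproducible context spec.

```python
# Illustrative sketch of modular context assembly (hypothetical names).
# Each "module" is a named block of context that can be plugged in or left out.

context_modules = {
    "style_guide": "Write in plain, direct English.",
    "source_doc": "Q3 report: revenue up 12%, churn down 2%.",
    "experiment_a": "Assume an aggressive pricing change.",
    "experiment_b": "Assume pricing stays flat.",
}

def build_context(module_ids, prompt):
    """Assemble a prompt from an explicit list of module ids."""
    blocks = [context_modules[m] for m in module_ids]
    return "\n\n".join(blocks + [prompt])

# Two parallel branches share the same sources but isolate assumptions:
branch_a = build_context(["style_guide", "source_doc", "experiment_a"],
                         "Forecast next quarter.")
branch_b = build_context(["style_guide", "source_doc", "experiment_b"],
                         "Forecast next quarter.")
```

Because the context set is explicit, recreating a result is just a matter of replaying the same module list, and swapping one module gives you a clean side-by-side comparison.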