Reasoning drift in LLM workflows
I’ve been running into something strange when working with LLMs on anything slightly non-trivial.
You start with a clear problem. The model is helping. Things feel aligned.
Then a few turns later… something shifts.
Not in a dramatic way. Just small things:
- an assumption changes
- a side path gets explored
- something gets interpreted slightly differently
And now the model is still responding, but it’s reasoning from a slightly “off” state.
What bothers me isn’t that it makes mistakes. That’s expected.
It’s that there’s no clean way to go back.
You can try to steer it back, but the earlier context is still there.
Or you start a new chat and lose all the useful parts.
It feels like once the reasoning drifts, you’re stuck either fighting it or resetting completely.
Lately I’ve been thinking about this less as a conversation problem and more as a state problem.
At certain points, capturing things like:
- what I’m trying to do
- what decisions have already been made
- what assumptions are currently in play
- what’s still unresolved
And then being able to return to one of those points and continue from there, without the later drift leaking in.
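The checkpoint idea above can be sketched as a small data structure. This is a minimal illustration, not an implementation the post proposes; all names (`ReasoningCheckpoint`, `branch`, the field names) are hypothetical, chosen to mirror the four items in the list:

```python
from dataclasses import dataclass, replace
from typing import Optional

@dataclass(frozen=True)
class ReasoningCheckpoint:
    """Snapshot of the working state at a decision point (names hypothetical)."""
    goal: str                    # what I'm trying to do
    decisions: tuple = ()        # decisions already made
    assumptions: tuple = ()      # assumptions currently in play
    open_questions: tuple = ()   # what's still unresolved
    parent: Optional["ReasoningCheckpoint"] = None  # where this state branched from

def branch(cp: ReasoningCheckpoint, **changes) -> ReasoningCheckpoint:
    """Return to an earlier checkpoint and continue from it, without
    later drift leaking in: the new state points back to its parent,
    and the original checkpoint is left untouched."""
    return replace(cp, parent=cp, **changes)

# Example: explore one path, then branch differently from the same point.
root = ReasoningCheckpoint(
    goal="design a caching layer",
    decisions=("use LRU eviction",),
    assumptions=("single-node deployment",),
    open_questions=("TTL policy?",),
)
alt = branch(root, assumptions=("multi-node deployment",))
```

Because checkpoints are immutable and carry a `parent` link, the set of them forms a tree rather than one long thread, which matches the "space of possible reasoning paths" framing below.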
The interesting part is that this starts to feel less like “chat history” and more like navigating different versions of the same thinking process.
Almost like you’re exploring a space of possible reasoning paths instead of just extending one long thread.
Still very early and I don’t fully understand where this leads yet.
But it feels like something is missing in how we manage reasoning over time, especially as these workflows get longer and more iterative.
Curious if others have run into this, or found better ways to deal with it.
This is a great post and an interesting thought, because I share this feeling. Especially the part: "It feels like once the reasoning drifts, you're stuck either fighting it or resetting completely."
Sometimes I'll be deep in a conversation and want to preserve the state of context it's in, because it feels so in sync. Then four prompts down the road it's like talking to someone else, or it's way off base.
Indeed, in this situation it's a challenge to preserve the current context state while still being able to change direction flexibly. Maybe we could explore technical means to address this, like saving "snapshots" at key points, so that even if the conversation later drifts off the expected path, you can easily roll back to an earlier state and continue the discussion. Also, do you think introducing a more advanced context-management mechanism, such as recording decision points and assumptions in a structured way, would effectively reduce this drift? #AI