Reasoning drift in LLM workflows
I’ve been running into something strange when working with LLMs on anything slightly non-trivial.
You start with a clear problem. The model is helping. Things feel aligned.
Then a few turns later… something shifts.
Not in a dramatic way. Just small things:
- an assumption changes
- a side path gets explored
- something gets interpreted slightly differently
And now the model is still responding, but it’s reasoning from a slightly “off” state.
What bothers me isn’t that it makes mistakes. That’s expected.
It’s that there’s no clean way to go back.
You can try to steer it back, but the drifted context is still there, quietly shaping every new response.
Or you start a new chat and lose all the useful parts.
It feels like once the reasoning drifts, you’re stuck either fighting it or resetting completely.
Lately I’ve been thinking about this less as a conversation problem and more as a state problem.
At certain points, I'd want to capture things like:
- what I'm trying to do
- what decisions have already been made
- what assumptions are currently in play
- what's still unresolved
And then be able to return to one of those points and continue from there, without the later drift leaking in.
The interesting part is that this starts to feel less like “chat history” and more like navigating different versions of the same thinking process.
Almost like you’re exploring a space of possible reasoning paths instead of just extending one long thread.
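To make the idea concrete, here's a minimal sketch of what such a checkpoint might look like. Everything here is invented for illustration (the `Checkpoint` class, its field names, `branch`, and `lineage` are my own made-up names, not any existing tool): each checkpoint snapshots the state above, and branching from one gives you a fresh path with no later drift attached.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Checkpoint:
    """A snapshot of the reasoning state at one point in the workflow."""
    goal: str                      # what I'm trying to do
    decisions: List[str]           # decisions already made
    assumptions: List[str]         # assumptions currently in play
    open_questions: List[str]      # what's still unresolved
    parent: Optional["Checkpoint"] = None  # where this state branched from

    def branch(self, **changes) -> "Checkpoint":
        """Continue from this state; anything not overridden is copied,
        so drift in one branch never leaks into another."""
        return Checkpoint(
            goal=changes.get("goal", self.goal),
            decisions=list(changes.get("decisions", self.decisions)),
            assumptions=list(changes.get("assumptions", self.assumptions)),
            open_questions=list(changes.get("open_questions", self.open_questions)),
            parent=self,
        )

    def lineage(self) -> List["Checkpoint"]:
        """Walk back to the root: the path of reasoning that led here."""
        node, path = self, []
        while node is not None:
            path.append(node)
            node = node.parent
        return list(reversed(path))


# Hypothetical usage: branch when an assumption changes, instead of
# letting it silently mutate the one long thread.
root = Checkpoint(
    goal="design the import pipeline",
    decisions=["use batch processing"],
    assumptions=["input files are well-formed"],
    open_questions=["how to handle partial failures?"],
)
alt = root.branch(assumptions=["input files may be malformed"])
```

The parent links are what turn "chat history" into a tree: every checkpoint knows the path that produced it, and abandoning a drifted branch costs nothing, because the earlier states are untouched.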
Still very early and I don’t fully understand where this leads yet.
But it feels like something is missing in how we manage reasoning over time, especially as these workflows get longer and more iterative.
Curious if others have run into this, or found better ways to deal with it.