Himanshu Dongre

@himanshudongre

Posts

Yeah this is exactly it.

That feeling where you know the conversation was "in sync" a few turns back, and now it isn’t, but there’s no clean way to get back to that point.

I have been wondering if the problem is that we are treating everything as one continuous thread, when in reality there are these discrete “states” where the reasoning is still consistent.

If you could capture those states and return to them cleanly, instead of continuing from wherever things drifted, it might actually change how these workflows feel.

This is actually what pushed me to start working on something around this:
https://github.com/himanshudongre/smriti

This is a great post and an interesting thought, because I share this feeling, especially the part: "It feels like once the reasoning drifts, you’re stuck either fighting it or resetting completely."

Sometimes I’ll be deep in a conversation and want to preserve the state of context it’s in, because it feels so in sync, and then you get 4 prompts down the road and it’s like talking to someone else, or it’s way off base.

Quoted revision 1

Reasoning drift in LLM workflows

I’ve been running into something strange when working with LLMs on anything slightly non-trivial.

You start with a clear problem. The model is helping. Things feel aligned.

Then a few turns later… something shifts.

Not in a dramatic way. Just small things:

  • an assumption changes
  • a side path gets explored
  • something gets interpreted slightly differently

And now the model is still responding, but it’s reasoning from a slightly “off” state.

What bothers me isn’t that it makes mistakes. That’s expected.

It’s that there’s no clean way to go back.

You can try to steer it back, but the earlier context is still there.
Or you start a new chat and lose all the useful parts.

It feels like once the reasoning drifts, you’re stuck either fighting it or resetting completely.


Lately I’ve been thinking about this less as a conversation problem and more as a state problem.

At certain points, capturing things like:

  • what I’m trying to do
  • what decisions have already been made
  • what assumptions are currently in play
  • what’s still unresolved

And then being able to return to one of those points and continue from there, without the later drift leaking in.
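To make the idea concrete, here is a minimal sketch of what one of those captured states could look like. Everything here is hypothetical: the names `ReasoningCheckpoint` and `CheckpointStore` are mine, not from any existing library, and the fields just mirror the list above.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ReasoningCheckpoint:
    """Snapshot of the reasoning state at one point in a session."""
    goal: str                     # what I'm trying to do
    decisions: tuple[str, ...]    # decisions already made
    assumptions: tuple[str, ...]  # assumptions currently in play
    unresolved: tuple[str, ...]   # what's still open


class CheckpointStore:
    """Keeps named checkpoints so a session can resume from any of them."""

    def __init__(self) -> None:
        self._checkpoints: dict[str, ReasoningCheckpoint] = {}

    def save(self, name: str, cp: ReasoningCheckpoint) -> None:
        self._checkpoints[name] = cp

    def restore(self, name: str) -> ReasoningCheckpoint:
        # Resume from the saved state, not from the drifted tail of the thread.
        return self._checkpoints[name]
```

The point of making the checkpoint immutable (`frozen=True`) is that restoring it later gives you exactly what was captured, with none of the later drift leaking in.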


The interesting part is that this starts to feel less like “chat history” and more like navigating different versions of the same thinking process.

Almost like you’re exploring a space of possible reasoning paths instead of just extending one long thread.
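The "space of reasoning paths" framing can be sketched as a tree rather than a list: each state keeps a link to its parent, so continuing from an older state just starts a new branch. Again, this is only an illustration under my own assumptions; `Node`, `extend`, and `path` are hypothetical names.

```python
import itertools
from dataclasses import dataclass
from typing import Optional

_ids = itertools.count()  # simple unique id source for the sketch


@dataclass(frozen=True)
class Node:
    """One reasoning state; parent links make history a tree, not a line."""
    id: int
    state: str              # stand-in for a captured reasoning state
    parent: Optional["Node"]


def root(state: str) -> Node:
    """Start a new reasoning tree."""
    return Node(next(_ids), state, None)


def extend(node: Node, state: str) -> Node:
    """Continue from any node; extending an older node again is a branch."""
    return Node(next(_ids), state, node)


def path(node: Node) -> list[str]:
    """The chain of states from the root down to this node."""
    out = []
    while node is not None:
        out.append(node.state)
        node = node.parent
    return out[::-1]
```

Two calls to `extend` on the same node give two independent paths that share a common prefix, which is exactly the "return to that point and continue cleanly" behavior, instead of one long thread.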


Still very early and I don’t fully understand where this leads yet.

But it feels like something is missing in how we manage reasoning over time, especially as these workflows get longer and more iterative.

Curious if others have run into this, or found better ways to deal with it.
