great_psy 56 minutes ago [-]
LLM memory (in general, any implementation) is good in theory.
In practice, as it grows it gets just as messy as not having it.
In the example you have on the front page you say "continue working on my project", but you're rarely working on just one project; you might want to have 5 or 10 in memory, each of which made sense to have at the time.
So now you still have to say "continue working on the sass project". Sure, there's some context around the details, but you pay for it by filling up your LLM context and doing extra MCP calls.
dennisy 17 minutes ago [-]
True! But this is a very naive implementation; a proper implementation could overcome these challenges.
dennisy 56 minutes ago [-]
Congratulations on the launch!
There is a lot of competition in this space; how is your tool different?
alash3al 7 hours ago [-]
Platform memory is locked to one model and one company. Stash brings the same capability to any agent — local, cloud, or custom. MCP server, 28 tools, background consolidation, Apache 2.0.