I built LocalGPT over 4 nights as a Rust reimagining of the OpenClaw assistant pattern (markdown-based persistent memory, autonomous heartbeat tasks, skills system).
It compiles to a single ~27MB binary — no Node.js, Docker, or Python required.
Key features:
- Persistent memory via plain markdown files (MEMORY.md, HEARTBEAT.md, SOUL.md), compatible with OpenClaw's format (layout sketched after this list)
- Full-text search (SQLite FTS5, sketched below) + semantic search (local embeddings, no API key needed)
- Autonomous heartbeat runner that checks tasks on a configurable interval (loop sketched further down)
- CLI + web interface + desktop GUI
- Multi-provider: Anthropic, OpenAI, Ollama, and others
- Apache 2.0
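On disk, the memory really is just markdown. A hypothetical layout, assuming a `~/.localgpt/` home directory (the actual paths follow OpenClaw's conventions and may differ; the per-file roles here are my shorthand, not official docs):

```
~/.localgpt/
├── MEMORY.md     # long-term facts accumulated across sessions
├── HEARTBEAT.md  # recurring tasks the heartbeat runner picks up
└── SOUL.md       # persona and standing instructions
```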
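To give a flavor of the full-text side, here's a minimal sketch of keyword search over memory files with SQLite FTS5 via the `rusqlite` crate (build with its `bundled` feature so FTS5 is compiled in). The `notes` table and its columns are illustrative, not LocalGPT's actual schema:

```rust
use rusqlite::{Connection, Result};

fn main() -> Result<()> {
    let conn = Connection::open_in_memory()?;

    // One row per memory file; FTS5 indexes both columns.
    conn.execute_batch(
        "CREATE VIRTUAL TABLE notes USING fts5(path, body);
         INSERT INTO notes (path, body) VALUES
           ('MEMORY.md',    'User prefers Rust. Project deadline is Friday.'),
           ('HEARTBEAT.md', 'Every hour: scan RSS feeds for new papers.');",
    )?;

    // MATCH runs the full-text query; bm25() sorts best matches first,
    // and snippet() returns each hit with some surrounding context.
    let mut stmt = conn.prepare(
        "SELECT path, snippet(notes, 1, '[', ']', '...', 8)
         FROM notes
         WHERE notes MATCH ?1
         ORDER BY bm25(notes)",
    )?;
    let mut rows = stmt.query(["rust"])?;
    while let Some(row) = rows.next()? {
        let (path, hit): (String, String) = (row.get(0)?, row.get(1)?);
        println!("{path}: {hit}");
    }
    Ok(())
}
```

FTS5 covers the exact-keyword side; the semantic half ranks by embedding similarity instead.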
Install: `cargo install localgpt`
I use it daily as a knowledge accumulator, research assistant, and autonomous task runner for my side projects. The memory compounds — every session makes the next one better.
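The heartbeat runner is conceptually just a timer loop over HEARTBEAT.md. A minimal sketch, assuming a tokio runtime; the 300-second interval, the checkbox syntax, and the println! hand-off are placeholders, not the actual implementation:

```rust
use std::time::Duration;
use tokio::time::{self, MissedTickBehavior};

#[tokio::main]
async fn main() {
    // In the real thing this interval would come from config.
    let mut tick = time::interval(Duration::from_secs(300));
    tick.set_missed_tick_behavior(MissedTickBehavior::Delay);

    loop {
        tick.tick().await;
        // Re-read HEARTBEAT.md every tick so edits take effect live.
        let tasks = tokio::fs::read_to_string("HEARTBEAT.md")
            .await
            .unwrap_or_default();
        for task in tasks.lines().filter(|l| l.trim_start().starts_with("- [ ]")) {
            // Stand-in for handing the task off to the agent loop.
            println!("due: {task}");
        }
    }
}
```

Re-reading the file on every tick keeps the markdown as the single source of truth: editing it in any editor reprograms the agent.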
GitHub: https://github.com/localgpt-app/localgpt
Website: https://localgpt.app
Would love feedback on the architecture or feature ideas.
I do think local-first will end up being the future long-term, though. I built something similar last year (unreleased), also in Rust, except it ran the model locally too (you can see how slow/fast it is here [1]; keep in mind I have a 3080 Ti and was running Mistral-Instruct).
I need to revisit that project and release it. Building in the context of the OS is pretty mind-blowing, so kudos to you. I think the paradigm of how we interact with our devices will fundamentally shift in the next 5-10 years.
[1] https://www.youtube.com/watch?v=tRrKQl0kzvQ