So my workflow, when I have a question about some part of my code, is to highlight it, hit the : key (which puts :'<,'> on the command line), then type AskAI<enter>.
All of this takes about a second, as it's already in my muscle memory.
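For reference, a command like that could be defined along these lines. This is a hedged sketch, not the author's actual command: the AskAI name comes from the comment above, but the body here, which writes the selected range to a hypothetical ask-ai shell script, is a placeholder.

```vim
" Hypothetical: write the visually selected range to an external 'ask-ai'
" script's stdin and show its output. 'ask-ai' is a placeholder; swap in
" whatever program talks to your LLM of choice.
command! -range AskAI <line1>,<line2>w !ask-ai
```

With this in your vimrc, highlighting code and typing :AskAI runs the selection through the script.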
I think (just my experience) that Copilot (the Vim plugin) uses more than just the current buffer as context. It seems to improve when I open related files, and it starts to know function/type signatures from these buffers as well.
That could be. If so, it would be interesting to know how Copilot does that.
For me, just asking LLMs "Can the following function be improved?" for a function I just wrote is already pretty useful. The LLM often comes up with a way to make it shorter or more performant.
Yes, the official plugin sends context from recently opened other buffers. It determines what context to send by computing a jaccard similarity score locally. It uses a local 14-dimensional logistic regression model as well for some decisions about when to make a completion request, and what to include.
There are some reverse-engineering teardowns that show this.
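To illustrate the locally computed score: Jaccard similarity over token sets is just intersection over union. This is a generic sketch of the metric, not Copilot's actual implementation (its exact tokenization and windowing are only known from the teardowns):

```python
def jaccard_similarity(a: str, b: str) -> float:
    """Jaccard similarity between the token sets of two snippets:
    |A ∩ B| / |A ∪ B|. Returns 0.0 when both snippets are empty."""
    ta, tb = set(a.split()), set(b.split())
    if not ta and not tb:
        return 0.0
    return len(ta & tb) / len(ta | tb)

def rank_buffers(cursor_context: str, buffers: dict) -> list:
    """Rank open buffers by similarity to the code near the cursor,
    most similar first. 'buffers' maps buffer name -> buffer text."""
    scored = [(name, jaccard_similarity(cursor_context, text))
              for name, text in buffers.items()]
    return sorted(scored, key=lambda x: x[1], reverse=True)
```

A plugin could then include snippets from the top-ranked buffers in the completion request.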
I don’t have experience with gp.nvim, but I liked David Kunz’s plugin quite a bit. I ended up forking it into a little pet project so that I could shape it a bit more into what I wanted.
I love being able to use Ollama, but wanted to be able to switch to GPT-4 if I needed to. I don’t really think automatic replacement is very useful, because of how often I need to iterate on a response. For me, a better method is to visually highlight text in the buffer and hit enter. That way you can iterate with the LLM if needed.
Also, a bit more fine-grained control over settings like the system message, temperature, etc. is nice to have.
Uh, sorry, I was going to link gen.nvim. I found gp to have more functions/modes. Gp might be able to support local models via the OpenAI API spec; at least I saw an issue in their repo about that.
Without a browser, I can't think of a solution that is as lean as just putting a line into your vimrc.
I guess you would have to settle on an LLM that provides an API and write a command-line tool that talks to it. There are probably also open-source tools that do this.
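Such a command-line tool could be sketched like this. The endpoint URL, model name, and prompt below are assumptions; only the request shape follows the OpenAI chat-completions API, which many local servers also speak:

```python
"""Minimal sketch: send code from stdin to an OpenAI-compatible chat
endpoint and print the reply."""
import json
import os
import urllib.request

# Any server speaking the OpenAI chat-completions API works here.
API_URL = "https://api.openai.com/v1/chat/completions"

def build_payload(code: str, model: str = "gpt-4") -> dict:
    """Wrap the code in a chat-completion request body."""
    return {
        "model": model,
        "messages": [
            {"role": "user",
             "content": "Can the following function be improved?\n\n" + code},
        ],
    }

def ask(code: str) -> str:
    """POST the payload and return the model's reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(code)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + os.environ["OPENAI_API_KEY"],
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Wired up as `print(ask(sys.stdin.read()))` under a `__main__` guard, it can be fed directly from Vim's `:'<,'>w !` range write.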
The command I use in my vimrc opens this page:
https://www.gnod.com/search/ai#q=Can%20this%20Python%20funct...
So I can comfortably ask different AI engines to improve it.