
For vim, I use a custom command which takes the currently selected code and opens a browser window like this:

https://www.gnod.com/search/ai#q=Can%20this%20Python%20funct...

So I can comfortably ask different AI engines to improve it.

The command I use in my vimrc:

    command! -range AskAI '<,'>y|call system('chromium gnod.com/search/ai#q='.substitute(iconv(@*, 'latin1', 'utf-8'),'[^A-Za-z0-9_.~-]','\="%".printf("%02X",char2nr(submatch(0)))','g'))
So my workflow when I have a question about some part of my code is to highlight it, hit the : key (which puts :'<,'> on the command line), then type AskAI<enter>.

All of this takes about a second, as it's already in my muscle memory.
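The substitute() chain in the command above just percent-encodes the selection so it survives as a URL fragment. An equivalent sketch in Python, for the curious:

```python
from urllib.parse import quote

def encode_for_url(text: str) -> str:
    # Escape everything except the unreserved characters A-Za-z0-9_.~-,
    # mirroring the [^A-Za-z0-9_.~-] pattern in the vim command.
    return quote(text, safe="")

# encode_for_url("Can this Python function be improved?")
# gives the kind of q= fragment seen in the gnod.com link above.
```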



I think (just my experience) that Copilot (the vim edition / plugin) uses more than just the current buffer as context? It seems to improve when I open related files and starts picking up function / type signatures from those buffers as well.


That could be. If so, it would be interesting to know how Copilot does that.

For me, just asking LLMs "Can the following function be improved" for a function I just wrote is already pretty useful. The LLM often comes up with a way to make it shorter or more performant.


Yes, the official plugin sends context from recently opened other buffers. It determines what context to send by computing a Jaccard similarity score locally. It also uses a local 14-dimensional logistic regression model for some decisions about when to make a completion request and what to include.

There are some reverse-engineering teardowns that show this.
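To illustrate the idea (this is a minimal sketch, not Copilot's actual code): Jaccard similarity compares token sets, so open buffers can be ranked by how much vocabulary they share with the one being edited.

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: |A ∩ B| / |A ∪ B|."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def rank_buffers(current_tokens: set, other_buffers: dict) -> list:
    """Rank open buffers by token overlap with the buffer being edited.

    other_buffers maps buffer name -> token set; most similar first.
    """
    return sorted(
        other_buffers,
        key=lambda name: jaccard(current_tokens, other_buffers[name]),
        reverse=True,
    )
```

The top-ranked buffers would then be the ones whose snippets get included in the completion request.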


I just tried GPT-4; without any modifications, it's impressively worse than the current chat model.


What did you try?


Running some queries in a new ChatGPT session and via the API. I tried adding the same system prompt in both.

I can run one for you, if you want :)


"some queries"?

Show them, so we can discuss?



There's also https://github.com/David-Kunz/gen.nvim which works locally with ollama and e.g. Mistral 7B.

Any experience/comparison between them?


I don’t have experience with gp.nvim, but I liked David Kunz's gen.nvim quite a bit. I ended up forking it into a little pet project so that I could change it a bit more into what I wanted.

I love being able to use ollama, but wanted to be able to switch to GPT-4 if I needed. I don’t really think automatic replacement is very useful, because of how often I need to iterate on a response. For me, a better replacement method is to visually highlight in the buffer and hit enter. That way you can iterate with the LLM if needed.

Also, a bit more fine-grained control over settings like system message, temperature, etc. is nice to have.

https://github.com/dleemiller/nopilot.nvim


Uh sorry, I meant to link gen.nvim. I found gp.nvim to have more functions / modes. Gp might be able to support local models using the OpenAI spec; at least I saw an issue in their repo about that.


That’s nice! I would like to do something similar, but my vim sessions are all remote over SSH. Can we make it work without a browser?


Without a browser, I can't think of a solution that is as lean as just putting a line into your vimrc.

I guess you have to decide on an LLM that provides an API and write a command line tool that talks to that API. There are probably also open-source tools that do this.
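A minimal sketch of such a tool, assuming the OpenAI chat completions endpoint and an OPENAI_API_KEY environment variable (the file name ask_ai.py and the prompt wording are just illustrative):

```python
# ask_ai.py -- minimal sketch: send a piece of code to an LLM API
# and print the reply. Assumes OPENAI_API_KEY is set in the environment.
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_payload(code: str, model: str = "gpt-4") -> dict:
    """Build the chat-completions request body for a code-review question."""
    return {
        "model": model,
        "messages": [
            {"role": "user",
             "content": "Can this function be improved?\n\n" + code},
        ],
    }

def ask(code: str) -> str:
    """POST the question to the API and return the model's reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(code)).encode(),
        headers={
            "Authorization": "Bearer " + os.environ["OPENAI_API_KEY"],
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)
    return reply["choices"][0]["message"]["content"]
```

From vim, piping the visual selection with something like :'<,'>w !python3 -c 'import sys, ask_ai; print(ask_ai.ask(sys.stdin.read()))' would keep everything in the terminal.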


Just call a reverse-SSH-tunneled open (macOS) or xdg-open (Linux) as your netrw browser.

I use this daily; it works well with gx, :GBrowse, etc.
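A hedged sketch of that tunnel setup (hostnames, ports, and usernames are placeholders; assumes key-based auth from the remote box back to the local one):

```shell
# On the local machine: forward remote port 2222 back to the local sshd
ssh -R 2222:localhost:22 user@remote-host

# On the remote machine: route URL opens back through the tunnel.
# Use xdg-open instead of open if the local machine runs Linux.
export BROWSER='ssh -p 2222 localuser@localhost open'
```

Pointing g:netrw_browsex_viewer at the same ssh command in the remote vimrc should cover gx as well.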



