Foxglove | Software Engineers (Rust, Typescript) | SF, Remote (US, AU) | Full Time
Foxglove is a platform for robotics and autonomy teams to collect, analyze, and learn from the vast quantities of multimodal data required to build, train, deploy, and operate reliable robots.
And which side is that? I mean, from my point of view, it seems like it’s probably the ones who are having a magic robot write a thousand lines of code that almost, but not quite, does something sensible, rather than using a bloody library.
(For whatever reason, LLM coding things seem to love to reinvent the square wheel…)
> the ones who are having a magic robot write a thousand lines of code that almost, but not quite, does something sensible
Gee, I wonder which "side" you're on?
It's not true that all AI-generated code looks like it does the right thing but doesn't, or that all human-written code does the right thing.
The code itself matters here. So given code that works, is tested, and implements the features you need, what does it matter if it was completely written by a human, an LLM, or some combination?
Do you also have a problem with LLM-driven code completion? Or with LLM code reviews? LLM assisted tests?
Oh, yeah, I make no secret of which side I’m on there.
I mean I don’t have a problem with AI driven code completion as such, but IME it is pretty much always worse than good deterministic code completion, and tends to imagine the functions which might exist rather than the functions which actually do. I’ve periodically tried it, but always ended up turning it off as more trouble than it’s worth, and going back to proper code completion.
LLM code reviews, I have not had the pleasure. Inclined to be down on them; it’s the same problem as an aircraft or ship autopilot. It will encourage reduced vigilance by the human reviewer. LLM assisted tests seem like a fairly terrible idea; again, you’ve got the vigilance issue, and also IME they produce a lot of junk tests which mostly test the mocking framework rather than anything else.
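The "tests the mocking framework" failure mode above can be made concrete. This is a hedged sketch with hypothetical names (`sendInvoice`, `Client` are invented for illustration): the test stubs out the only real work, then asserts the stub behaved exactly as it was configured to, so nothing can fail unless the fake itself misbehaves.

```typescript
// Hypothetical "system under test": all real behavior lives in client.post.
type Client = { post: (path: string) => { status: string } };

function sendInvoice(client: Client, invoiceId: number) {
  return client.post(`/invoices/${invoiceId}/send`);
}

// The "junk test": a fake returns a canned value, and the assertions
// merely check that the fake did what we just told it to do.
const calls: string[] = [];
const fakeClient: Client = {
  post: (path) => { calls.push(path); return { status: "sent" }; },
};
const result = sendInvoice(fakeClient, 42);
console.assert(calls.length === 1 && calls[0] === "/invoices/42/send");
console.assert(result.status === "sent");
```

A test like this passes forever, no matter what the real `post` endpoint does, which is the vigilance trap described above.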
LLM code reviews are completely and utterly worthless.
I do like using them for writing tests, but you really have to be careful. Still, I prefer it to doing all the testing by hand.
But for like, the actual code? I'll have it show me how to do something occasionally, or help me debug, but it really just can't create truly quality, reliable code.
I’m not sure where you’ve been the last four years, but we’ve come a long way from GPT 3.5. There is a good chance your work environment does not permit the use of helpful tools. This is normal.
I’m also not sure why programmatically generated code is inherently untrustworthy but code written by some stranger whose competence and motives are completely unknown to you is inherently trustworthy. Do we really need to talk about npm?
Dependencies aren't free. Pulling in a library for less than a thousand lines of code total is really janky. Sometimes it makes sense, as with PicoHTTPParser, but it often doesn't.
Not saying left pad is a good idea; I’m not a Javascript programmer, but my impression has always been that it desperately needs something along the lines of boost/apache commons etc.
EDIT: I do wonder if some of the enthusiastic acceptance of this stuff is down to the extreme terribleness of the javascript ecosystem, tbh. LLM output may actually beat leftpad (beyond the security issues and the absurdity of having a library specifically to left pad things, it at least used to be rather badly implemented), but a more robust library ecosystem, as exists for pretty much all other languages, not so much.
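(For what it's worth, the specific left-pad case has since been absorbed into the language itself: `String.prototype.padStart` has been standard since ES2017, so the functionality needs no dependency at all.)

```typescript
// String.prototype.padStart (standard since ES2017) covers what the
// left-pad package used to do.
const padded = "5".padStart(3, "0");
console.log(padded); // "005"
```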
So, first of all “but it’ll get better” has been the AI refrain since the 1950s. Voice recognition rapidly went from “doesn’t work at all” to “kinda works” in the 80s-90s, say, and in recent years has reached the heady heights of ‘somewhat useful’, though you still wouldn’t necessarily trust your life to it.
But also… okay, so maybe AI programming tools get good enough at some point. In which case, I suppose I’ll use them then! Why would I use a bad solution preemptively on the promise of jam tomorrow? Waiting for the jam surely makes more sense.
Web3, the Metaverse, and NFTs all failed to stand on their own two legs as technologies. It feels fair to call them products; none of them ever attained their goal of real decentralization.
Ah, yes. That’s why we all have our meetings in the metaverse, then go back home on the Segway, to watch 3d TV and order pizza from the robotic pizza-making van (an actual silly thing that SoftBank sunk a few hundred million into). And pay for the pizza in bitcoin, obviously (in fairness, notoriously, someone did do that once).
That’s just dumb things from the last 20 years. I think you may be suffering from a fairly severe case of survivorship bias.
(If you’re willing to go back _30_ years, well, then you’re getting into the previous AI bubble. We all love expert systems, right?)
NFTs lost because they didn't do anything useful for their proponents, not because people were critical of them. They would've fizzled out even without detractors for that reason.
On the other hand, normal cryptocurrencies continue to exist because their proponents find them useful, even if many others are critical of their existence.
Technology lives and dies by the value it provides, and both proponents and detractors are generally ill-prepared to determine such value.
Okay, but during the NFT period, HN was trying to convince me that they were The Future. Same with metaverses, same with Bitcoin. I mean, okay, it is Different this time, so we are told. But there’s a boy who cried wolf aspect to all this, y’know?
Baseline assumption: HN is full of people who assume that the current fad is the future. It is kind of ground zero for that. My HN account is about 20 years old and the zeitgeist has been right like once.
But what of the hubris of those who think the definition of a word in the dictionary is somehow relevant to whether or not people will be able to buy food in the future?
shh. we all make at least mid-six figures with lots of stock options here. if people are hungry they can always drink Huel. maybe we can air drop it over the tenderloin.
In this case, they were filling positions through a staffing agency, which (in some cases, at least) can mean that the staffing agency was 'hiring' them and then contracting them to Jabil.
In that case, it would mean that Jabil had no knowledge of, nor reason to have knowledge of, their immigration status. They paid the staffing agency to supply X workers, X workers showed up to do the work, and they assumed that the agency did their due diligence (i.e. followed the law).
This led Jabil to discover two things:
1. Their staffing agency was breaking the law on a large scale
2. There aren't enough people in the area to actually fill available jobs
In this circumstance (assuming I'm right about contracting out to the staffing agency), Jabil didn't do anything wrong and now they're up the creek, metaphorically speaking.
Not to be too glib, but "a bunch of things that should have never been allowed in production, compounded by non-existent monitoring, and poor understanding of how <thing> works" is the cause of most outages.
I believe the focus on LDL as 'bad' has been misguided. It's an improvement from 'all cholesterol is bad', but we just don't understand very much about how it all works, and so as we discover more, we slowly refine our understanding.
A lot of the theory about why LDL is 'bad' is based on the fact that arterial damage is repaired with the stuff, causing plaque. There's no evidence that LDL is causing the damage, just that it has a role in how it's fixed (in an ultimately detrimental way).
Private property entirely depends on violence -- that's not hyperbole, there's ultimately no other compelling force to enforce societal norms and rules (see: police).
It's only our familiarity and compliance with the system that prevents us from encountering that violence.
Flight attendants literally exist to help with the safety of passengers in any sort of abnormal situation. That they serve beverages is just kind of the thing they do when there's no critical safety function to perform.
- Sr. Software Engineer, Rust. SF or Remote (West Coast US, Australia): https://foxglove.dev/careers-single?ashby_jid=2c5bb4ed-c6d3-...
- Staff Software Engineer, Data Search and Curation. SF or Remote (US): https://foxglove.dev/careers-single?ashby_jid=10a3765a-8ffe-...
- New grads, Data Search and Curation (onsite SF): email jeff at foxglove.dev, mention HN in the subject
- other roles here: https://foxglove.dev/careers