> if they are able to hit a few gas/oil carriers with drones there, nobody is going to use that body of water
It’s a lot more feasible to escort tankers after the Strait than it is before, when American warships have to come close to shore. Iran doesn’t have the resources to deny access to the entire Indian Ocean.
> Iran doesn’t have the resources to deny access to the entire Indian Ocean.
I have what may be a scale issue in my imagination, so bear with me if this is silly.
There are reports of international drug transport via seaborne drones in the 0.5-5 tonne range, and of these crossing the Pacific, and the cost of the vehicles is estimated to be around 2-4 million USD each. If drug dealers can do that, surely Iran (and basically everyone with a GDP at least the size of something like Andorra's) should be able to make credible threats to disrupt approximately as much non-military shipping as they want to worldwide?
> if drug dealers can do that, surely Iran (and basically everyone with a GDP at least the size of something like Andorra's) should be able to make credible threats to disrupt approximately as much non-military shipping as they want to worldwide?
Sure. Do you think that means worldwide shipping would shut down?
And the point isn't to take the risk to zero, but to bring it to a level where ships under military escort can feel safe.
> Do you think that means worldwide shipping would shut down?
I think there's a danger of that, at least if countermeasures are not easily available for normal shipping.
Even 1-on-1 rather than 1-v-everyone, there are too many players (not all of them nations) with too many conflicting goals and interests. If Cuba tried to do it, could they credibly threaten to sink all sea-based trade involving the USA? If not Cuba, who would be the smallest nation that could?
And the same applies to Taiwan and China, in both directions, either of which would be fairly dramatic on the world stage, even though China also has land options. Or North Korea putting up an effective anti-shipping blockade against Japan.
> But to a level where military escorts can feel safe.
Are there enough military ships to do the escorting?
> the US and Europe would be pretty fucked since we depend on it much more.
China could still get resources from Russia and is much more self-sufficient.
America would be fine. We have the Americas and Asia to trade with, and Iran can’t restrict those oceans in any meaningful way.
Europe, the Middle East, Africa and non-China Asia would get screwed.
? There's really not much discussion of Iran being a problem outside the Gulf.
Iran can control the Gulf and therefore 20% of global hydrocarbons.
This is enough to put the world economy into recession.
America is not 'isolated' from the global economy.
US carbon producers don't give a damn about the nation generally - they will sell to the highest bidder.
If global oil prices skyrocket, you will pay that at the pump.
The US is a net carbon exporter, but there is trade - the refineries in the south are designed for heavy crude from Venezuela, Canada, etc.
Yes, some national policies could shift a bit, but only in an emergency, and the current Administration does not give a * about national issues, other than populist blowback. They will prefer their oil buddies by default, but with a lot of leeway for 'gas prices' causing voting problems.
US companies sell abroad, a global recession affects everything.
Just google the OPEC crisis - you can see what high oil prices do; they screw everything up.
There's a 100% chance of global recession if the Gulf stays closed.
Given the 'leverage' in the US market, that can come way down. US GDP is currently held up by AI spending - if that math falters, AI investment slows down, the US drops into recession, that causes flight from equities, etc., etc.
I don't think we need to speculate about anything outside of the Gulf.
It's bad, it needs to be resolved.
You see this calamity in the daily statements from the WH - they are 'in out, in out, in out'; in the same day they say 'withdraw' and then 'we must open the strait'.
It is very unlikely that Russia depends on Iran for its drone production. Iran is not producing any critical components that you could not get elsewhere. The export of Iranian drones was probably close to zero already after last year's shootout.
Russia is still selling a comparable amount of oil to before the war (7 mb/d). The price going up (Urals was 50 at the start of the year, now it's more than double at 110) is definitely a great boon for them, as selling oil is one of their most important revenue streams.
Another interesting development is the ridiculous amount of background blurring in photos. Turns out you can find a surprisingly large number of garages, warehouses, treelines, etc. based on a single photo.
And the real punchline is that the deluge of papers barely matters, as the academic field is barely moving, and the most interesting innovations are happening on the product side.
I have been in both academia and industry for years, and I don't think the model you describe is true anymore. It was definitely true 10 years ago, but the situation has flipped. Now, I see really ambitious and impactful research coming out of industry labs. Academia is often lagging behind the state of the art because they lack the resources (data, compute, and skills) to compete.
Academia is also incentivized such that everyone works on the same popular topics to secure grants and citations. This is currently LLMs, where academia needs to compete with multi-billion-dollar corporations on a technology that is notoriously expensive. In effect, many researchers work on topics that are pretty non-consequential from the get-go (such as the N+1th evaluation dataset), but it's the only way for them to stay relevant.
I recently talked with a PI from a well-known university lab, and asked why they were doing a startup, given the ML research problems they were working on.
They said a company was the only way to get access to the compute power they needed for that research.
A startup sounds like a good solution, if they get paired with the right product- and business-minded people and together find a winning collaboration. (Edit: Or if they get acquired rapidly in the AI boom and negotiate the right deal to enable their research longer-term.)
One key reason you’re wrong is that many interesting things aren’t even getting published, they’re on the DL for years and eventually make it to public spheres and products.
Academia is just a daycare at this point, and many labs shouldn't exist or get funding. The people who move the field aren't necessarily the ones with the most citations; they're usually hard at work in places that don't publish at all.
It's pure armchair psychology, but this type of project always makes me think about anxiety. Who really needs this level of self observation and control? At the same time, I really enjoy reading about it and I find the window into somebody else's world intriguing.
> Who really needs this level of self observation and control
I liked doing similar things in the past. There's no anxiety in the equation, just pure curiosity. How many times have I done a thing in a month/year? I was always curious about stuff like this, much like the OP. There's also the hacker spirit in play - designing apps to track this stuff.
I tried using Gemini for some light historical research. It could not stop using tech metaphors. Lords were the CEOs of their time, pope was the most important influencer, vassal uprisings were job interviews, etc. The metaphors were almost comically useless and imprecise, and Gemini kept using them even when I explicitly asked it to not do that.
> It could not stop using tech metaphors. Lords were the CEOs of their time, pope was the most important influencer, vassal uprisings were job interviews, etc.
That happens all the time if the previous discussion was about the subject you don't want (tech in this case): LLMs (not just Gemini) go out of their way to reconcile the two topics.
As an example, at some point I asked an LLM about the little shrooms people (the tiny people that people mostly hallucinate the same way when eating a particular mushroom), and, forgetting to start a separate discussion, then asked about the root "-trinsic" in "intrinsic" and "extrinsic" and the city of Trinsic in the Ultima games. Oh man... the LLM went wild. I had totally forgotten I asked about the little shrooms people hallucination, but the LLM didn't forget and went totally nuts.
I think you'll get better results if you launch a new discussion and specify "Context: history" or "Context: cooking". Once it goes off the rails, asking it to "not do that" doesn't really work: by that point it's just gone, solid gone.
I think that's Gemini trying to personalize the answer specifically for you. It really leans heavily into that to the point of being galling.
You can give it additional instructions in the settings, but you have to be careful with that too. I've put my tech stack and code preferences in there to get better code examples. A while later I asked it about binary executable formats and it started ending every answer with "but the JVM and v8 take care of that for you."
Which is both funny in an "I, Robot" kind of way, and irritating. So I told it to ignore my tech stack. I have a master's in CS and can handle a bit of technical detail.
Turns out, Gemini learned sarcasm. Every following answer in that thread got a paragraph that started with something like "But for your master brain, this means..."
The new memory feature in Gemini got turned on by default and every answer came out like this. It kept working in details from one particularly long thread. Everything was framed in terms of the common elements. Everything. I turned it off immediately.
This seems like a huge risk factor for users who are at risk for schizophrenia - if someone is using the LLM as an "AI companion", the model is likely to reinforce, or even suggest, illusory connections between events or experiences the user has described in their conversations.
Even Gemini 2.5 was extremely snarky. I basically disable all guardrails via prompts and instructions, and it started getting snippy at me for apparently acting like a know-it-all.