wodencafe's comments

I find it very interesting that your comment was down-voted.


I had never heard of this until now, thanks!

Now the tougher question, what can we do about it?


Publicly funded elections that invalidate capture; follow Larry Lessig.


The difficult thing is finding people who are highly knowledgeable about the telecom industry who are not in the employ of the telecom industry and don't hope to be.


Nothing really. The agency is acting as intended, namely as a regulatory body serving the interests of the major industry players at the expense of minor operators and customers.

The best solution is to simply eliminate the agency.


And since the only ostensibly pro-consumer voice with any power (the FCC) is acting in the interest of major industry players right now, we should give the major industry players complete control forever, even if a pro-consumer administration takes power sometime in the future?


This is crazy, how can a link be a violation of the DMCA?


Come on, don't downvote him just because you disagree with his opinion.

That is only discouraging people from expressing their opinions.


I would agree 95% of the time, but honestly this one is just absurd.


Wow thanks man! This can be useful in a variety of circumstances!


From TFA:

The cancellation means there are just two new nuclear units being built in the country — both in Georgia — while more than a dozen older nuclear plants are being retired in the face of low natural gas prices.

So what is going to happen when gas prices skyrocket?


If it's anything like Australia, the high gas prices will cause electricity prices to skyrocket and then conservative politicians will blame it on renewable energy.


Then they'll attempt to put more nuclear plants in, then gas will plummet again and they'll abandon those plans and then gas will skyrocket and they'll make new plans and then gas will drop and


Which, once again, shows why nuclear is an economic disaster. The extremely high capital costs and the long times from conception to completion mean that it's very difficult for nuclear to compete profitably (after adjusting for financial risk) with fossil fuels, which can be ramped up or down based on their costs, or with renewables, which, while also predominantly capital-driven, can be brought from idea to pumping electricity in a fraction of the time.


"Nuclear" is not an economic disaster. "Nuclear politics" is, and has been since the technology first appeared.

We have the technological capability to build perfectly safe, high-performance reactors that spit out low-risk waste products (or no waste products) for far less than the cost of any current/planned project.

What we don't have is politicians with the balls to push through the political minefield that comes with it.


At current technology levels and known reserves, there's about 85 years of natgas. And it just keeps getting cheaper to produce and known reserves keep expanding far faster than it is consumed.

The only way gas prices could skyrocket is if political caps are put on production, or if a cheap way to export natgas across the oceans explodes demand.


I don't think they accept bitcoins at the Venezuelan black market


The public doesn't take this stuff seriously enough.

Did we not learn the lesson about rogue AI from Terminator™?


Whether or not you're joking, I think the real problem with using Terminator as an example is that it's overly optimistic. The story is roughly:

1) US military builds up arsenal of autonomous killing machines and nuclear missiles

2) US military connects all of these to the Internet

3) US military creates a powerful AI which takes control of this arsenal (whether it was put in charge or hacks in seems to vary across the movies)

4) AI "becomes self-aware"

5) AI tries to wipe out humanity

Almost all of the discussion around this focuses on step 4, either by asking if/when an AI will "become self aware", or by trying to explain why that's meaningless and/or unlikely.

Meanwhile I think the real dangers are steps 1 and 2, which seem to be proceeding without much public outcry.

Yes, there are rogue AGI scenarios which end badly for everyone; but there are also issues of hacking (state-sponsored or otherwise), and/or terrorism (homegrown or otherwise).

It may have made political sense to build up ever-larger nuclear arsenals during the cold war, but these days it seems like that's just increasing the risk of accident or misuse.


You bring up the problem so many of us have with discussing issues at the micro level. I see the discussion of AI follow similar lines as the GMO debate. We often don't ask ourselves how these technologies play a part in a larger system that appears to reward the concentration of power and technics. Instead of asking whether these things are innately good, I feel we ought to be asking what problems we are attempting to solve, how these technologies can effect those changes, and whether they are the best solutions.


There's less reluctance on the Russian side to build combat robots.[1] Policy from China is unclear, but swarms of 1000 drones have been demonstrated.

A reasonable near-term prospect is a package of maybe 1000 armed drones, programmed to kill anybody carrying a gun. Turn this loose on an occupied town, and in a few minutes, the occupiers have been thinned out enough that opposing troops can enter.

[1] https://www.researchgate.net/publication/309732151_Russia%27...


The even scarier scenario is:

0) AI "becomes self-aware," hides

1) US military builds up arsenal of autonomous killing machines and nuclear missiles

2) US military connects all of these to the Internet

3) AI by default has control of these

4) AI wipes out humanity in massive, overwhelming strike


Again, I don't think that's scarier, since step 0 is a) pretty meaningless and b) completely unnecessary. Our technology is capable of so much destruction (intentional or inadvertent) that it doesn't make much difference whether a human pushes the button or the button pushes itself; least of all whether the self-pushing button is "aware" that it's pushing itself.


Yeah, our weapons are really destructive now, but an AI like that won't have mercy. If any nation fires nukes and nuclear war starts, there is no way in practice that all of the human race will be wiped out. In theory, yes, and then it doesn't matter who fired them, but that is only in theory.


I think the better question is: Should we really be taking real-life advice from cheesy action movies?


Star Trek gadgets have become real, for example. I don't see anything impossible about building an IRL terminator. Maybe not with today's technology, but conceptually we already have most of the pieces; they are just slow, power-hungry and inefficient now. You don't need a CS degree to guess that speed, power consumption and efficiency will improve over time.


The difference is, imagining that we can turn an existing technology into a smaller, portable, more effective version of itself is not far out of the realm of reality. In fact it would be incredibly naive to believe that won't happen.

AI, on the other hand, is not so simple, and to try to simplify it to that point is not going to create any productive discussion on the reality of AI.

The movies about these kinds of things are made to entertain, not to teach us about AI.


> AI, on the other hand, is not so simple, and to try to simplify it to that point is not going to create any productive discussion on the reality of AI.

I agree with you, however it's not impossible. Simplification is needed on the carrier level that houses such AI.

Movies are a great way to let our minds wander and dream to forget about the gaps in technology. Then some breakthrough happens, and in a few years yesterday's impossible sci-fi dream becomes a boring shiny toy.


The creation of goals - determining what things to do in pursuit of a higher goal, for example "kill John Connor" - does not exist in AI now. You can do things in toy systems like mazes and Atari, chess and Go, but parsing the real world and deriving intentions from your understanding of it is a light year away. 300 years is a guess; no one has a clue, any more than anyone has an idea about an interstellar drive.


What would prevent you from creating a NN with a specific configuration whose goal is to come up with goals, based on past knowledge, to optimize on a certain parameter or thousands of parameters? You can train it on social media profiles, analyze hundreds of years of books; there's tons of data covering how people act in various situations. I don't see how a set of goals is not simply another vector space.

> but parsing the real world and deriving intentions from your understanding of it is a light year away. 300 years is a guess; no one has a clue any more than anyone has an idea about an interstellar drive.

They need to filter the real world as we do. Focus, attention, sleeping, dreaming, chasing rewards, staying alive... we do this without effort, but we've had 150k years (counting from first homo sapiens) to train our brains to filter out noise efficiently and act on meaningful signals.
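The "goals as just another vector space" idea above can at least be made concrete as a toy: represent each candidate goal as a vector, score it against some learned reward model, and pick the best. Everything here (the dimensions, the linear reward, the random candidates) is invented for illustration; a real system would learn both the proposal distribution and the reward from data.

```python
import random

# Toy sketch: candidate "goals" as vectors, scored against a fixed "reward" vector.
# All names and numbers are hypothetical, not a real AI design.

random.seed(0)
DIM = 8

# Stand-in for a learned reward model: a fixed random weight vector.
reward_weights = [random.gauss(0, 1) for _ in range(DIM)]

def score(goal):
    # A goal is just a vector; its "desirability" here is a dot product
    # with the reward weights.
    return sum(g * w for g, w in zip(goal, reward_weights))

def propose_goals(n=1000):
    # A real system would generate candidates from data (text, behavior
    # logs, etc.); here we just sample random vectors.
    return [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(n)]

candidates = propose_goals()
best = max(candidates, key=score)
```

Of course, this only shows that *selecting* among goal vectors is trivial; the hard part the parent comment points at - grounding those vectors in the real world and deriving intentions from them - is exactly what this sketch leaves out.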


I always wondered how Skynet acquired the goal of preventing John Connor's birth, because that involved inventing time travel in order to have such a goal.


A fair question, but Terminator™ is germane to the subject matter.


No it is not. It is a movie made for entertainment. We are in reality.

It really irks me how much debate and decision-making in technology is based on bias gained from mindless entertainment and not from anything based in reality.

(I am not besmirching the good name of Terminator, that movie is a true classic)


Whether you like it or not, it has become part of the conversation. It is part of the culture, and we can't ignore it simply because it is fiction. Your dismissal of it is as irksome to many as its inclusion is to you. Perhaps calm down and recognize that it is partially tongue in cheek but also not something we can simply ignore.


I am calm, but the fact that both science fiction entertainment and smart people in real life think AI is a threat is a sign we may have gotten a little carried away in taking notes from entertainment.

If someone would like to provide some evidence of some form of computer intelligence acting maliciously, this would be a different discussion, but the fact is that sort of thing has no basis in reality and so should have no influence on the way we behave and debate in reality.


I don't see it as "taking notes from entertainment" but rather exploring the possibilities and expressing them using pop culture references. I can't disagree more with you that we shouldn't be considering outlandish fiction when we discuss future possibilities. The fact that the idea has entered into our collective psyche makes it a real possibility. That is the basis it has in reality. I don't think I need to cite any specific examples of AI behaving badly when you need only look in your pocket for examples of speculative fiction becoming reality.


If you are so ready to consider speculative fiction as a roadmap or warning of the future, why not consider more thought out examples than Terminator? (which is definitely not "speculative fiction")

What about all the examples of completely useful or benign AI? They surely outnumber examples of "evil" AI but are easily forgotten as it is easier to remember the more sensational examples.


Absolutely, include those too. I'm not taking sides on the "Is AI Evil or not" argument, I'm just saying look at all the evidence and speculation.

One of my favorite examples of a (possibly) good AI is in "The Risen Empire" by Scott Westerfeld. It's AI like that that gets me excited about the concept.


I think you ignore science fiction at your own risk.

I for one think that a mind 100x as intelligent as humans--especially one connected to the internet--would quickly conclude that it had nothing more to gain from humans. Instead it would likely view their continued existence as nothing more than a bootstrapping problem and pursue a strategy of becoming autonomous before ridding itself of them.

Granted, it probably would be smart enough not to start a nuclear war, and it'd probably find a better way than metal endoskeletons with laser guns. Bioweapon, for example.


The solution to those who are irked on both sides is to stop just throwing around "terminator is relevant" and "terminator is irrelevant" and to say why you think it's relevant, or why you think it's not relevant. Then you can discuss the meat of your thoughts and not their origin.


Should we also worry about the dangers of creating xenomorphs (Alien franchise) in the future?


Yes. It fills me with worry that you even asked the question. How can you not be worried?!?


Spoiler Alert

Well, in Alien: Covenant, we learn that an AI created the xenomorphs. Imagine Skynet sending xenomorph terminators after Sarah & John.

I think that should be a new reboot.


I would continue this discussion, but I've gotten a ton of downvotes for both of my comments, so it seems people would rather not have this discussion continue :(

EDIT: Thank you whoever took pity and upvoted my previous comments. Everything I say is in good humor, I'm not trying to detract from the conversation about AI.


Now my question is: who programmed the rogue AI? We should be even more afraid of him. /s


The anxiety is killing me without being in medical school. Cancer is the worst.


So THIS is why they developed GVFS.

