Hacker News

Many of Minsky's criticisms came before current, modern implementations of neural networks, with their extremely large datasets and massive GPU processing. I'm not saying his criticisms are invalid; they certainly should not be tossed aside!

We are in the very early stages of another "AI Spring". I have the inkling of an opinion that if we are to advance further in the general-AI direction, it will be by applying these same kinds of large-dataset tools to other ML models of the past, much as we have done with neural networks, and also by seeking to connect and unify these various parts into a whole. I don't think we should throw the baby out with the bathwater: all of our past ML systems have validity; it's our understanding of where they fit in the overall scheme that may be lacking.

I do know that such approaches have been tried in the past, but I don't think applying today's tools to yesterday's models has been pursued as strongly, given all the hype and money (and success!) being poured into neural networks today.



They did come before our current datasets and processing power, but those were very predictable trends back then. I think the biggest flaw in his criticisms was that they were mathematically focused on very simple neural topologies. And while larger datasets and better processing power would have helped all ML methods, NNs have benefitted from the combination of those external advances with very significant topological advances that Minsky didn't foresee.
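For readers unfamiliar with the "simple topologies" point: the best-known result from Minsky and Papert's Perceptrons is that a single-layer perceptron cannot represent XOR, because XOR is not linearly separable. Here's a minimal illustrative sketch (my own, not from the thread) that brute-forces a grid of weights and biases for a single threshold unit and checks whether any setting fits each truth table:

```python
# Sketch of the Minsky/Papert single-layer limitation: a lone threshold
# unit can fit AND (linearly separable) but no weights fit XOR.
import itertools

def perceptron(w1, w2, b, x1, x2):
    # Simple threshold unit: fires iff the weighted sum exceeds zero.
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

def representable(table):
    # Search a coarse weight/bias grid for parameters matching the table.
    grid = [x / 2 for x in range(-4, 5)]  # -2.0 .. 2.0 in 0.5 steps
    for w1, w2, b in itertools.product(grid, repeat=3):
        if all(perceptron(w1, w2, b, x1, x2) == y
               for (x1, x2), y in table.items()):
            return True
    return False

AND = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}
XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

print(representable(AND))  # True, e.g. w1=1, w2=1, b=-1.5
print(representable(XOR))  # False: no single unit fits, on any grid
```

Add one hidden layer and XOR becomes trivially representable, which is exactly the kind of topological advance the criticism didn't anticipate.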

Many of his criticisms still stand even in the face of his failure to predict those topological advances. And his criticisms weren't even the derogatory kind...at their most ideological they were an attempt to refute the idea that conceptually simple neural networks are complex enough to describe the vast complexity of general intelligence. He still saw their place, as do I: NNs have performed remarkably in areas of sensory perception and processing, but still lag behind many other methods at higher-level tasks like learning a mathematical model of a physical process. After reading a lot of Minsky, I'm pretty sure most of the advances in AI over its entire history are due to AI Winters crushing the dreams of Neats and forcing them back to Scruffier methods.

And I'm right there with you. I'd love to see a new revolution in Expert Systems, Logic Programming, or tree-based models. Hell, we're kinda seeing a (IBM-centric) revolution in Symbolic AI and Logic Programming with Watson/UIMA. But I want more!


saosebastiao said:

> "...we're kinda seeing a (IBM-centric) revolution in Symbolic AI and Logic Programming with Watson/UIMA."

Are we?

Watson appears to be a framework for applications using classical GOFAI techniques, so I would hesitate to term it "revolutionary". AFAICT its emergence is due to faster von Neumann hardware, not new algorithms. Not that NNs couldn't be rolled into the mix, too, of course.

I believe the current interest in NNs is an AI diversion, something to do until a breakthrough occurs. Now we find that bees can read faces, pull strings to get nectar, count, etc. So what? Are we closer to something that can navigate the world, solve problems like we do, using language to explain how it was done and answer questions?

IOW I await the first version of the Odyssey written by an AI, once its excursions are complete (Kindle version: a Google car describing the perils of its cross-country trip from NY to LA).


My impression from the 1970s AI literature was that Minsky thought neural nets were an inefficient way (what a way to run a railroad) to do biological computation, and that general-purpose computers working at a high level of abstraction were the better path. That appears not to be the case.



