
“_Specifically_ working on Artificial _General_ Intelligence” is a bit like “As a physics major I’ve decided to specialise in physics.”


Well... there's the possibility that there's a certain degree of myopia in the AI field as a whole. As in, we know there are some pretty gaping holes in our models and understanding, and yet most of the effort is spent on refining approaches that we have already validated.

Maybe a better analogy would be "specializing in physical theories that work in all environments, rather than ones that have to be adapted separately for the ocean, space, the atmosphere, the forest," etc.


There's certainly a lot to be done, and we'll likely need new approaches. But is it really productive to tackle a general problem when we don't even know how to solve specific sub-problems? Especially if solving said sub-problems would bring a lot of value in their own right.


It’s basic research. No one knows anything about which approaches will work. If a genius millionaire technologist wants to dedicate all their time and effort to any novel approach, I’d strongly endorse it. (Not that it matters; they’ll do it anyway).

I feel the same about any research effort; it's not like this will put all cancer research on hold, or more immediately practical AI research. It's just a few hundred people globally :) And it's such promising technology.


Not sure where your analogy is applicable. There are lots of people doing image recognition, voice recognition, and NLP. None of it on its own relates that much to reinforcement learning and multitask solving. In fact, in the last year I've seen only a few papers trying to do nearly all of the above with a single NN.
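
To make "multitask solving with a single NN" concrete, here's a minimal sketch of the common shared-trunk/per-task-heads pattern. All shapes, task names, and weights below are made up for illustration (random, not trained); it's the shape of the idea, not any particular paper's architecture:

  import numpy as np

  rng = np.random.default_rng(0)
  d_in, d_hidden = 32, 16

  # One shared trunk whose weights are reused by every task.
  W_trunk = rng.standard_normal((d_in, d_hidden))

  # Small task-specific heads on top of the shared representation.
  # (Hypothetical tasks and output sizes, purely for illustration.)
  heads = {
      "image_class": rng.standard_normal((d_hidden, 10)),
      "sentiment": rng.standard_normal((d_hidden, 2)),
  }

  def forward(x, task):
      h = np.maximum(x @ W_trunk, 0.0)  # shared features (ReLU)
      return h @ heads[task]            # task-specific output

  print(forward(rng.standard_normal(d_in), "sentiment").shape)  # (2,)

The point of the design is that most parameters live in the trunk, so whatever it learns for one task is available to all the others.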


And is that single NN better at any of these tasks when compared to specialized approaches? Don't get me wrong, I agree that AGI is the end goal, I am just not convinced that trying to solve the general problem (before simpler problems are solved) is the most productive way forward.

To go back to my analogy: physicists don't have a unified theory yet, but they have a good understanding of, say, quantum mechanics or planetary motion. Solving these sub-problems got them closer to the end goal, and brought a lot of value along the way. Why should we tackle AGI any differently?


I would have to find the paper, but generally, yes. If I remember correctly, one paper presented a network that was able to recognize giraffes in images despite never seeing a giraffe during training.

It was trained on embedding sentences and images into the same latent space.
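
For anyone curious how that enables zero-shot recognition, here's a rough sketch of the mechanism. The embeddings below are simulated with random vectors plus noise rather than produced by trained encoders, and the class list is invented; this is just the shape of the idea, not the paper's code:

  import numpy as np

  rng = np.random.default_rng(0)
  dim = 64

  def unit(v):
      return v / np.linalg.norm(v)

  # Pretend text-encoder outputs for class names. In the real setup, a text
  # encoder is trained jointly with an image encoder so that matching
  # image/sentence pairs land close together in one shared latent space.
  class_names = ["horse", "zebra", "giraffe"]  # "giraffe": no training images
  text_emb = {c: unit(rng.standard_normal(dim)) for c in class_names}

  # Pretend image-encoder output for a giraffe photo: it lands near the
  # "giraffe" text vector, because the language side generalizes to classes
  # the image side never saw.
  image_vec = unit(text_emb["giraffe"] + 0.3 * rng.standard_normal(dim))

  # Zero-shot classification = nearest class-name embedding (cosine similarity).
  scores = {c: float(image_vec @ v) for c, v in text_emb.items()}
  print(max(scores, key=scores.get))  # -> "giraffe"

The key point is that the label space lives in language, so recognizing a new class only requires a sentence describing it, not new labeled images.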



