
There's a lot of talk here about false positives. As a human dealing with fuzzy situations, I try to reduce false positives by focusing on recognizing when something is wrong.

Maybe Google should put more effort into training AIs to recognize when they don't know, instead of shoehorning in an answer.
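The "recognize when you don't know" idea is essentially confidence-based abstention: answer only when the model's top probability clears a threshold, otherwise decline. A minimal sketch, with hand-picked probabilities standing in for a real model's softmax output (the function name and threshold are illustrative assumptions, not any particular system's API):

```python
def predict_or_abstain(probs, labels, threshold=0.8):
    """Return the top label, or None to abstain when confidence is low.

    probs: stand-in for a model's softmax output (hypothetical values).
    """
    best = max(range(len(probs)), key=lambda i: probs[i])
    if probs[best] < threshold:
        return None  # say "I don't know" instead of shoehorning an answer
    return labels[best]

labels = ["nail", "screw"]
print(predict_or_abstain([0.95, 0.05], labels))  # confident -> "nail"
print(predict_or_abstain([0.55, 0.45], labels))  # uncertain -> None (abstain)
```

Tuning the threshold trades coverage for precision: a higher threshold means more abstentions but fewer false positives.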

An analogy: I would expect a non-stupid human who has never seen a screw before, but has been trained to hammer nails, to quickly recognize that something is wrong the first time they're presented with a screw, and not attempt to hammer it flush.
