May 27, 2017

Nope, not at all. Like I said, we have had autonomous vehicles operating in normal traffic for decades now. Autonomous vehicles do not require AGI, far from it. Waymo, Uber, Tesla, and more are all competing to bring autonomous vehicles to the mass consumer market, and indeed most estimates put autonomous trucks at around 2027: https://arxiv.org/abs/1705.08807

I hate how everyone thinks they know enough to talk about AI because it's so buzzy/trendy right now.

Modern AI is not pretending to be AGI. No one is claiming to be going for AGI, and whatever successes we have been seeing lately come from applied AI solving specific problems, not AGI.

https://en.m.wikipedia.org/wiki/History_of_autonomous_cars

BTW did you even look at the survey? Because that's the opinion of actual AI researchers across the world.

This is easily Googleable info, BTW; clearly your background is not AI.

May 27, 2017

Modern AI systems like AlphaGo are examples of applied AI. "Common sense" falls within the realm of artificial general intelligence, a line of research that's now largely abandoned in favor of applied AI. Modern AI solutions are engineered to solve very, very specific problems. You are never going to see attempts to teach "common sense".

https://arxiv.org/abs/1705.08807

With that said, the survey above is what the world's AI researchers think is possible, hopefully within my lifetime, using just applied AI and without any notion of "common sense".

Common sense is AGI. That's not the goal anymore. The goal is to do things like self-driving cars. Both Google and Tesla have put vehicles on the road that have driven literally millions of miles.

The idea is to build a bunch of classifiers and regression models and use them together in an ensemble to solve your problem. The same approach is being applied successfully across a lot of otherwise unrelated fields, deep learning included.
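To make that concrete, here's a minimal sketch of the ensemble idea using scikit-learn. The specific models (logistic regression, random forest, SVM) and the synthetic dataset are just assumptions for illustration, not any particular production system:

    # Illustrative ensemble sketch: combine several narrow classifiers
    # with soft voting so their probability estimates are averaged.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier, VotingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    # Synthetic stand-in for whatever specific problem you're actually solving.
    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    ensemble = VotingClassifier(
        estimators=[
            ("lr", LogisticRegression(max_iter=1000)),
            ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
            ("svm", SVC(probability=True, random_state=0)),
        ],
        voting="soft",  # average predicted class probabilities across models
    )
    ensemble.fit(X_train, y_train)
    print("held-out accuracy:", ensemble.score(X_test, y_test))

Each model on its own is a narrow, task-specific solver; the ensemble just pools their predictions, which is exactly the applied-AI flavor I'm describing, nothing resembling common sense.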

Also, modern AI doesn't even pretend to be biological in nature; in fact, well-known researchers like Andrew Ng make a point of saying that neural networks are only biologically inspired and that's where the commonalities end.

There are other models, like HTM (Hierarchical Temporal Memory), that are way more ambitious and want to come up with a single generalized scheme to solve a broad range of problems, AGI style. These guys think biology is important and are trying to emulate the neocortex. They ARE going for AGI, common sense, etc.