4 Reasons That Show How Far Machine Learning Still Has to Go

Every day we hear and read about how machine learning is changing the face of technology. From social media to virtual assistants like Siri and Alexa, from IoT devices to automobiles, algorithms analyze terabytes of data and make decisions faster than any human could.

Storing data and harnessing technology to make life easier have become cheaper than ever. In an article for Harvard Business Review, Andrew Ng, the founder of Google Brain, wrote, “If a typical person can do a mental task with less than one second of thought, we can probably automate it using AI either now or in the near future.”

People often think of Artificial Intelligence (AI) and Machine Learning (ML) as the same. But there is a difference. AI means machines perform “intelligent” tasks – not only repetitive ones. They adapt to different situations and present us with outcomes accordingly.

ML is a more specific subset of AI. It works on the idea that machines can “think” for themselves and learn without our constant supervision.

Deep learning is ML’s dominant technique. It’s essentially a statistical method for teaching machines to classify patterns using (artificial) neural networks. These networks learn categories from examples and apply them to similar situations, roughly reliably.
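To see how little magic is involved, here is a minimal sketch of that idea in plain NumPy: a tiny network that learns a toy pattern (XOR) purely by nudging numeric weights to shrink its errors. The architecture, learning rate, and step count are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy pattern-classification task: XOR, which no single straight line can separate.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 units, starting from random weights.
W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: map inputs to predicted probabilities.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass: compute how each weight contributed to the error.
    grad_p = p - y                               # cross-entropy + sigmoid shortcut
    grad_W2 = h.T @ grad_p
    grad_b2 = grad_p.sum(axis=0, keepdims=True)
    grad_h = (grad_p @ W2.T) * h * (1 - h)
    grad_W1 = X.T @ grad_h
    grad_b1 = grad_h.sum(axis=0, keepdims=True)

    # Nudge every weight slightly downhill.
    for param, grad in ((W1, grad_W1), (b1, grad_b1), (W2, grad_W2), (b2, grad_b2)):
        param -= 0.5 * grad

print(np.round(p, 2))  # approaches [[0], [1], [1], [0]]: the pattern has been "learned"
```

That is the whole trick, scaled up by many orders of magnitude: no rules, no understanding, just weights shaped by examples.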

Many people believe that if we add 100X more layers and 1000X more data, a neural net can do anything a human being does. But not everything is copacetic right now.

Technology virtuosos like Elon Musk and the late Stephen Hawking have raised concerns about AI, warning that it could turn out to be a menace to mankind.

Although possible, that future is still far away, because machine learning itself has a lot of evolving left to do.

Gary Marcus, professor of cognitive psychology at NYU, had a brief stint as director of Uber’s AI lab. He believes deep learning is “greedy, brittle, opaque, and shallow.”

Here’s why he thinks so, and how that currently pans out for mankind.

  1. Greedy

Deep learning systems demand tremendous sets of data for training. A computer, after all, works only on binary data and electrical voltages; to build a smart algorithm, it needs enough real-world examples of the scenario at hand (read: training data).

But all this comes at a cost. Not just the money spent acquiring data, but also the cost of hiring and training people to collect, label, and feed it into the system.

Google and Facebook can afford practically unlimited data for their systems, because Google answers over 1.2 trillion search queries each year, while Facebook has over 2.2 billion monthly active users.

But for smaller players and narrower domains, data is still an obstacle to overcome. For instance, Alexa and Siri still respond with pre-defined answers to many questions, and cannot adapt to new scenarios.
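To make the greed concrete, here is a rough sketch using scikit-learn’s toy two-moons dataset: the same small network, trained on ever larger samples, keeps improving on a held-out test set. The dataset, sizes, and scores are illustrative stand-ins, not a benchmark.

```python
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

# A fixed held-out test set to measure against.
X_test, y_test = make_moons(n_samples=2000, noise=0.3, random_state=1)

for n in (20, 200, 2000, 20000):
    # Same architecture every time; only the amount of training data changes.
    X_train, y_train = make_moons(n_samples=n, noise=0.3, random_state=0)
    net = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
    net.fit(X_train, y_train)
    print(f"{n:>6} training examples -> test accuracy {net.score(X_test, y_test):.2f}")
```

On this toy task the curve flattens quickly; on real problems like speech or vision, the appetite for labeled examples runs into the millions.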

  2. Brittle

When a neural net is subjected to a “transfer test” – situations that differ from its training data – it cannot contextualize what it sees. That’s why it breaks. Generalizing beyond the training distribution has always been a challenge for deep learning.
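Here is a small sketch of such a transfer test, again on scikit-learn’s toy two-moons data (assuming nothing about any particular production system): a network that scores well on fresh samples from its own distribution falls apart when the identical task is merely rotated.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

# Train on the standard two-moons layout.
X_train, y_train = make_moons(n_samples=2000, noise=0.2, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
net.fit(X_train, y_train)

# Fresh samples from the same distribution: no surprises.
X_test, y_test = make_moons(n_samples=2000, noise=0.2, random_state=1)
print("same distribution:   ", net.score(X_test, y_test))

# Rotate the test data by 45 degrees. The task is unchanged to a human eye,
# but the network never saw inputs like these during training.
theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print("rotated distribution:", net.score(X_test @ R.T, y_test))
```

The rotation is an arbitrary stand-in for the countless small shifts, such as new accents, new lighting, and new phrasings, that the real world throws at deployed models.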

Context matters when understanding natural language. A word can hold different meanings in different contexts; a phrase can mean different things in different legal documents. And while ML has developed breadth in reading and understanding, context is still one area where it struggles.

Technology has a long way to go before matching human translation and understanding.

  3. Opaque

Conventional programs have accessible code that can be debugged and fixed. The parameters of a deep learning system, on the other hand, can only be interpreted in terms of their weights within a mathematical geography.

In other words, the output of an ML system still cannot be explained clearly, which leads to concerns about its reliability and biases. Do you remember Ultron from Avengers 2?
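For a concrete contrast with conventional code, here is a minimal sketch (scikit-learn again, purely for brevity): after training, everything the network “knows” lives in a few hundred unlabeled numbers, with nothing a debugger could step through or a reviewer could read.

```python
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=1000, noise=0.2, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
net.fit(X, y)

# The model's entire "knowledge": lists of weight matrices and bias vectors.
n_params = sum(w.size for w in net.coefs_) + sum(b.size for b in net.intercepts_)
print(f"learned parameters: {n_params}")

# A peek at a few of them. None carries a label, a comment, or a meaning
# you could point to; they only make sense in relation to all the others.
print(net.coefs_[0][:2])
```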

  4. Shallow

Neural networks are built extensively for pattern recognition, and most are trained with an ideal environment in mind. But we don’t live in an ideal world, do we? Far from it.

In the real world, humans are an irrational species with millions of conflicting emotions, actions, and thoughts. Remember the Milgram experiment, where the majority of subjects – ordinary people who would have insisted they were kind to others – ended up administering (fake) electric shocks to a stranger who kept answering questions wrong, simply because an authority told them to?

Deep learning systems have little or no knowledge of the world or of human psychology. They do not understand cross-cultural norms and values unless they have large volumes of data to learn them from. This lack of sensitivity can prove costly when we rely on them to enter new markets, predict human behaviour, and so on.

These limitations prove that automation, as shown in movies like I, Robot and Total Recall, is still a distant dream.