By Jutta Stienen
Superintelligence [A reflection from our CEO]
Technological progress is often slower than we imagine.
This may be an unpopular opinion, but let’s have a look at the science fiction movies of the past. In “Back to the Future Part II”, for example, which was released in 1989, Robert Zemeckis imagined the year 2015 with flying cars, fully automated bars and entire meals cooked out of little pods at the push of a button. “Blade Runner”, released in 1982, imagined a 2019 with flying cars too. Moreover, we would have colonized other planets, and AI androids that look just like humans would be walking around.
Of course, not all fiction is good fiction in the sense of being a realistic prediction of the future that respects the laws of physics. Flying cars, for example, seem cool until we think about how much noise they would produce flying over our neighbourhoods all the time. But take the predicted self-driving cars instead, an innovation we have been waiting for… what feels like an eternity.
Artificial Intelligence, arguably one of the most consequential technological innovations of our time, has been the subject of countless predictions for decades now. In 1950, Alan Turing predicted that it would become very hard to tell a human and a machine apart, and invented his famous test to address that problem. James Cameron imagined in 1984 that a supercomputer called Skynet would become self-aware in 1997. Other predictions include that AI will replace most of our jobs and ultimately lead either to the enhancement of ourselves into super-intelligent cyborgs or to the extinction of the human race.
However, it all seems to take a little longer than we imagined. A thing like Skynet, the Artificial General Intelligence (AGI) from the Terminator movies, does not exist. And where are all the autonomous robots with human-level intelligence that should exist by now?
The author of this article is Tonio Meier, CEO and founder of GUURU Solutions Ltd. Before founding GUURU, Tonio was Head of Customer Service at Salt (formerly Orange) for more than 10 years. This article was first published here.
We are nowhere near human-level AI.
Today, big tech companies like Google and Facebook embrace AI in every business process and would probably fall apart without the technology. But applying AI to one task is a very different thing from having an AGI, which would be capable of learning any discipline and working across all processes.
In the last decade or so, a step forward came with machine learning and conceptual models, which essentially means that machines can start training themselves instead of being explicitly programmed. But the learning processes applied to machines are still vastly inferior to the learning processes of humans, or even of most animals.
Yann LeCun, the chief AI scientist of Facebook, famously says that even the most intelligent machine still has less common sense than a house cat, and that we are only at the very beginning of developing AGI.
We still do not even have a great autonomous chatbot today!
Despite all the predictions, decades ago, that Customer Service would be one of the first tasks made redundant by AI, it still has not happened. Why? Because AI can be trained to do one task very well, so that the machine often becomes better than humans at that given task, but as soon as we modify the task a little bit, the machine is lost.
Customer Service is essentially about an interaction with a human user. If the interaction becomes a conversation, the machine quickly falls short and the user is frustrated. That’s why at GUURU we are betting on combining AI with human experience. Not only because of the shortcomings of the machine, but also because a human-to-human conversation can create a stronger bond with the user and thus lead to higher brand loyalty. In some use cases, it is business-critical for a brand to put humans first.
A few months ago, we started using a new machine learning engine called GPT-3 to power our AI. GPT-3 is a gigantic new model released in the summer of 2020 by OpenAI. It contains 175 billion parameters, which, in other words, means it has absorbed a large part of the entire internet. It is much bigger than the previous models from Microsoft and Google that we used before.
The improvement of the new technology is impressive, but it does not replace humans.
Without the need for training, you give it a question and a little bit of context, and it will give you an answer that sounds very much like it was written by a human. It can even write an entire article about the topic. You can test this for yourself, for example by using a writing assistant service like ShortlyAI.
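To make the idea concrete, here is a minimal sketch of what such a question-plus-context prompt could look like against the OpenAI completions API of that era (the pre-1.0 openai Python package). The model name, prompt wording and parameters are illustrative assumptions, not a description of our production setup.

```python
# Minimal sketch: asking GPT-3 a customer question with a bit of context.
# Assumes the pre-1.0 "openai" Python package and an OPENAI_API_KEY env variable;
# model, prompt and parameters are illustrative, not a production configuration.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

context = (
    "You are a customer service assistant for an online shop. "
    "Shipping within Switzerland takes 2-3 business days and is free above CHF 50."
)
question = "How long does delivery take and what does it cost?"

response = openai.Completion.create(
    engine="davinci",              # GPT-3 base model available in 2020/2021
    prompt=f"{context}\n\nCustomer: {question}\nAgent:",
    max_tokens=80,
    temperature=0.3,               # keep the answer close to the given context
    stop=["Customer:"],            # stop before the model invents the next turn
)

print(response.choices[0].text.strip())
```

Even with no task-specific training, a small amount of context in the prompt is usually enough to get a fluent, human-sounding answer; the limitations described below are what keep such a setup from chatting with clients on its own.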
However, you still cannot let it chat with clients autonomously. One reason is the data drift problem, which essentially means that its knowledge is not updated in real time. For example, GPT-3 does not know what the coronavirus is, because it was trained before the outbreak happened and therefore before Covid-19 was known on the internet. This problem can be mitigated in the future by releasing new versions of the model more regularly.
Another problem is edge cases. GPT-3 does not know much about less commonly documented things, such as the specifics of a product that is only available in one country or the processes of a local company. This information is not a strong signal amongst everything else on the web, so the model will likely not have enough confidence to do anything useful with it. As a result, the answer will be wrong or inaccurate. But this problem, too, is solvable by giving the machine more data and processing capacity.
So, what keeps us from building an AI with human level intelligence or above?
When researching the topic, I personally found David Deutsch, the renowned British-Israeli physicist, very insightful. He says that the field of AGI research has made disappointingly little progress in more than six decades and that we are nowhere near. He suspects that the breakthrough will not come from adding more data or processing capacity to the machine. Rather, we would first need a breakthrough in philosophy to understand how human knowledge is created. According to Deutsch, AGI must be possible, because by the laws of physics the human brain consists of nothing but atoms. But solving that problem would require one of the best ideas ever.
*The header image is a still from the movie WALL-E by Pixar (Disney).