Machines are outsmarting humans not only in games but also in the labor market. In many fields today, pattern recognition and big-data analysis show that AI’s capacities have already exceeded those of humans. Algorithms are increasingly used in finance, human resources, and medicine to do things that humans were either unable to do, or unable to do as well or as fast. Machines today not only perform mechanical or manual tasks once performed by humans; they also perform thinking tasks, where it was long believed that human judgment was indispensable. From self-driving cars set for adoption in over 20 countries, to the rapid transition from self-flying planes towards fully autonomous aircraft, AI uptake is accelerating. With examples of a robot performing surgery on a living pig, or Google’s DeepMind AI performing better than human doctors, these innovations are beginning to look like Nollywood “juju.”
Stephen Hawking and colleagues once argued that “there are no fundamental limits to what can be achieved with AI.” Whilst this is arguable (I think AI will never be able to think faster than the speed of light), AI is certainly out-inventing human researchers, out-manipulating human leaders, and even developing weapons that we as humans are incapable of understanding. But has general AI exceeded the intellectual capacity of humans, and if it can, how long will it take for us to get there?
Predictions are difficult to make, especially about the future. Albert Einstein once stated that “nuclear energy will never be obtainable”; Thomas Watson of IBM said “there is a world market for maybe only 5 computers”; and Lord Kelvin, then President of the Royal Society, said “X-rays will prove to be a hoax.” Because the goalposts for what counts as AI keep shifting, I think predicting the arrival of general AI is like trying to spot, on the day it hatches, the chick that will grow into a sturdy cock. For all the hype about AI’s unstoppable capability, the hype does not settle the question of its superior intelligence. It is possible that predictions about AI outperforming humans in all tasks confuse the intelligence to do a task with the capability to improve one’s intelligence to do that task. Even in Nick Bostrom’s book Superintelligence, where he posits that a system can heighten its own intelligence to become a knowledge superpower, he admits that a “superintelligent AI” would have more knowledge than any human but might be “lacking in instinct, social skills, and imagination.” Whilst his position makes sense, I wonder how we would know if and when AI achieves things like imagination and consciousness.
There are claims that human intelligence is nothing special and that perhaps we are not the smartest species in this world (or beyond). But there is also a strong position in law that only humans are universally recognized as possessing intelligence, demonstrated through rational thinking and human behavior; on this view, all notions of intelligence should relate to human intelligence and never to AI. Perhaps comparing the technology with humans is a flawed exercise. We should focus on the ways humans and AI complement each other, and work to mitigate the challenges AI brings, such as the growing abstraction of our human experience for capitalist ends and the ways AI may exacerbate common threats to our humanity. We need an international normative framework for the ethical, transparent, and accountable use of AI, ensuring that, at every step of the invention, humanity is served by AI and never ruled by it.