Even though engineers working in artificial intelligence (AI) are ambitious, daily developments remain more mundane. Take Google Duplex, which, according to Google's website, can conduct natural conversations to accomplish "real world" tasks over the phone; it still does not count as progress towards meaningful AI. It is certainly impressive that Google Duplex can phone a business to make a reservation or ask about holiday hours without the other party realizing whether they are talking to a human being or a computer. The project also seems to have achieved its objective legitimately, given its limited scope of only three tasks; nevertheless, it would sound more ambitious if Google Duplex went beyond helping users schedule appointments. This is not to underestimate the work behind Google Duplex: despite its small scope, it counts as an important first step towards bigger goals.
At the dawn of AI, dreams were big: revolutionizing fields ranging from healthcare to education was on the agenda of many global tech companies. The reason underpinning the slow progress since then is that the field itself seems to have no clue how to make meaningful progress.
One way to make an AI system achieve its objective is to limit its domain, as in the case of Google Duplex, so that the domain can be explored in depth. By restricting the technology to a closed domain, it can be trained thoroughly within that domain. Google Duplex, for instance, produces human-like speech only after deep training in the related domains of speech recognition. Yet open-ended conversations are not even on the long-term agenda.
Rather than announcing AI achievements too early and with fanfare, those involved in the field should recognize that genuine developments go beyond the field's existing capabilities. As the case of Google shows, being the company with the largest computing power, huge amounts of data, and the most talented researchers in the field may not suffice.
At the heart of the problem lies AI's inability to cope with the infinite complexity of language. Just as mathematical equations are built by combining symbols according to a set of simple arithmetic rules, an infinite number of sentences can be made by combining a reasonable number of words according to the basic rules of grammar. If AI is to be genuine, it needs to manage all of these possible sentences rather than a fragment of them. The difficulty of a conversation depends on how tightly its scope is bounded: it may be easy to program a computer for a conversation with a narrow scope, but the same is not true of an open-ended conversation with a native speaker. Knowing all the phrases in a dictionary, or following a script based on a template, is not the same as being able to speak the language.
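The combinatorial arithmetic behind this claim can be sketched in a few lines. The vocabulary size and the one-in-a-billion grammaticality rate below are illustrative assumptions, not measured figures:

```python
# Illustrative sketch: even a modest vocabulary combined under simple
# rules yields an astronomically large space of possible sentences.

vocab_size = 25_000   # rough working vocabulary of an adult speaker (assumption)
max_length = 10       # consider only short sentences

# Upper bound: every ordered choice of up to max_length words.
upper_bound = sum(vocab_size ** n for n in range(1, max_length + 1))
print(f"word sequences up to length {max_length}: {upper_bound:.3e}")

# Grammar rules out most sequences, but even if only one in a billion
# were grammatical, the remainder would still be vast:
grammatical_estimate = upper_bound // 10**9
print(f"one-in-a-billion grammatical estimate: {grammatical_estimate:.3e}")
```

Even under these deliberately conservative assumptions, the count dwarfs any data set a system could be trained on, which is the point of the argument above.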
Needless to say, Google Duplex uses machine-learning techniques to extract possible phrases from a huge data set of human conversations rather than from simple book-like templates. Yet the core problem does not change: regardless of the data available and the patterns discerned, the data will never capture human creativity. Given the fluidity of life, real conversations can unfold in countless ways, and the universe of possible sentences is too complex to be estimated.
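A toy bigram model makes the limitation concrete. This is an illustration of statistical pattern extraction in general, with a made-up three-sentence corpus, not a description of how Duplex actually works:

```python
# Toy sketch: a bigram model learns which word follows which in its
# training data, so it can only ever recombine pairs it has already seen.
import random
from collections import defaultdict

corpus = [
    "i would like to book a table",
    "i would like to make a reservation",
    "what are your holiday hours",
]

# Learn the follower lists (the "patterns" the model discerns).
followers = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        followers[a].append(b)

def generate(start, max_words=8, seed=0):
    """Walk the learned bigrams from a start word."""
    random.seed(seed)
    out = [start]
    while len(out) < max_words and followers[out[-1]]:
        out.append(random.choice(followers[out[-1]]))
    return " ".join(out)

print(generate("i"))
```

Any phrasing whose word pairs are absent from the corpus is simply unreachable, no matter how much data is added; scale changes the coverage, not the character, of the problem.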
Knowledge engineering existed long before the current rise of AI. It refers, in general, to the process of encoding complex knowledge explicitly, in contrast to discovering statistical patterns in large data sets as machine learning does. Its purpose is to develop a formal set of rules that capture the main aspects of human understanding in computer programs. Yet this process is still unfinished, and it requires close collaboration with cognitive psychologists to make machines share some of the skills of human cognition.
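A minimal forward-chaining sketch shows what encoding knowledge as formal rules looks like in practice. The rules here are hypothetical examples, not drawn from any real system:

```python
# Minimal knowledge-engineering sketch: knowledge lives in explicit
# if-then rules, applied repeatedly until no new facts can be derived.

RULES = [
    # (condition on the known facts, fact to conclude)
    (lambda f: "is_restaurant" in f and "accepts_bookings" in f,
     "can_reserve_table"),
    (lambda f: "can_reserve_table" in f and "party_size_known" in f,
     "ready_to_call"),
]

def forward_chain(facts):
    """Derive all conclusions reachable from the starting facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for condition, conclusion in RULES:
            if conclusion not in facts and condition(facts):
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"is_restaurant", "accepts_bookings", "party_size_known"})
print(derived)
```

The contrast with the bigram example is the point: nothing here is estimated from data; every inference step was authored by a person, which is both the strength and the bottleneck of the approach.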
Although today’s dominant approach to AI has not worked out, its current limitations may serve as lessons. If machine learning cannot get beyond making reservations, it may be time to reconsider current strategies for building intelligence into machines. Perhaps this will remind engineers that programming skills alone may not be enough to overcome these limitations, and that it may be time to return to the basics of the philosophy of mind in order to understand the meaning of intelligence in depth.