
Thinking Differently About A.I.


The field of AI (artificial intelligence) has seen significant successes in solving well-defined problems. Yet, so far, little progress seems to have been made toward creative problem-solving.

It is often said that if a problem cannot be solved, the reason is that the wrong problem is being solved. If this is the case for AI, perhaps we should start asking what the right question to solve would be.

According to Ullman, the author of Life in Code, the concept of abstraction embedded within our notion of AI needs to be taken seriously, since AI and machine learning can be described as the main techniques of abstraction. A model’s parameters are a specific way of representing complexity as rules for a machine to follow. Even though advances in accuracy on tasks such as image recognition are highly praised, it is often forgotten that the same systems can be harmful to human beings when they give wrong answers. Humans err as well, but because machines cannot express wonder or admit to being clueless by saying “I don’t know”, some things are easily left out.
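To make the “I don’t know” point concrete, here is a minimal, hypothetical sketch (not from the article) of one way a trained classifier could be made to abstain rather than always returning its best guess; the toy data, the model choice, and the confidence threshold are all illustrative assumptions.

```python
# A minimal, hypothetical sketch: a probabilistic classifier that abstains
# ("I don't know") when its confidence is low, instead of always guessing.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Two overlapping clusters as toy training data.
X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
model = LogisticRegression().fit(X, y)

def predict_or_abstain(point, threshold=0.8):
    """Return a class label only if the model is confident; otherwise abstain."""
    probs = model.predict_proba([point])[0]
    return int(np.argmax(probs)) if probs.max() >= threshold else "I don't know"

print(predict_or_abstain([3.0, 3.0]))   # clearly inside one cluster -> confident label
print(predict_or_abstain([0.0, 0.0]))   # right between the clusters -> "I don't know"
```

The design choice here is simply to expose the model’s own uncertainty instead of hiding it behind a single answer; where the threshold should sit is a judgment call, not something the model can decide for us.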

According to Chapman, one of the main issues with AI is our lack of understanding of how to evaluate the progress made so far. AI is an interdisciplinary field encompassing not only science and engineering but also design, philosophy, and math. To these fields, Chapman adds a sixth, ‘spectacle’, which refers to providing good demos. Science refers to the development of predictive models, while engineering refers to developing pragmatic applications of those models.

So far, our abstractions of intelligence are far from serving as scientific models or useful engineering results. Without an exact definition of intelligence, we cannot fully grasp these abstractions; we can only look at demos and judge whether something looks intelligent or not. That does not suffice for making progress.

From the data point of view, big data is made up of individuals, which is itself a warning against too much focus on abstraction. The history of a particular data set also reflects a history of discrimination or bias. Forgetting that data refers to individuals would be a serious mistake, because data, as a form of abstraction, provides the ground for decision-making. In the same vein, automated abstraction engines project these same biases onto the future on the basis of these data abstractions. So, in order to develop fair systems, those systems must be able to go beyond simply accepting or rejecting the way data abstracts individuals.
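As an illustration of how an automated abstraction engine can project historical bias forward, here is a minimal, hypothetical sketch (not from the article); the synthetic data, the group variable, and the model are all assumptions made for the example.

```python
# A minimal, hypothetical sketch: a classifier trained on historically biased
# labels reproduces that bias when it "abstracts" new individuals.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Synthetic individuals: a group membership flag and a skill score.
group = rng.integers(0, 2, size=n)           # e.g. two demographic groups
skill = rng.normal(0, 1, size=n)             # the attribute that *should* matter

# Historical labels: past decisions favored group 0 regardless of skill.
past_approval = (skill + 1.5 * (group == 0) + rng.normal(0, 0.5, n)) > 0.5

X = np.column_stack([group, skill])
model = LogisticRegression().fit(X, past_approval)

# The trained abstraction projects the old bias onto new, equally skilled people.
new_people = np.array([[0, 0.0], [1, 0.0]])   # same skill, different group
print(model.predict_proba(new_people)[:, 1])  # approval probability differs by group
```

Nothing in the training step is malicious; the model simply learns the abstraction the data offers, which is exactly why accepting that abstraction at face value is not enough.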

Our ability to sort things into categories is a form of abstraction the human mind is good at. On the other hand, we can also accomplish things that could not be modeled by AI abstractions. We sometimes get bored, and sometimes find things interesting. At other times, we change our minds or make mistakes. All of these are part of being human. How would all of these other capacities be reflected in AI?

The real value of any trained system lies in its ability to make an error. Overfitting certainly points to bad performance on real-world data. Yet would it make sense to expect a system to perform well in the real world if we do not expect the same on its training data? Perhaps we need to reject all abstractions and start to think differently about AI.
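To make the overfitting point concrete, here is a minimal, hypothetical sketch (not from the article) comparing a simple model with an overly flexible one; the synthetic data and the polynomial degrees are illustrative assumptions.

```python
# A minimal, hypothetical sketch of overfitting: a high-degree polynomial fits
# the training points almost perfectly but generalizes poorly to held-out
# "real-world" points drawn from the same underlying process.
import numpy as np

rng = np.random.default_rng(42)

def noisy_line(x):
    # Underlying truth is a straight line plus noise.
    return 2.0 * x + rng.normal(0, 0.3, size=x.shape)

x_train = np.linspace(0, 1, 12)
y_train = noisy_line(x_train)
x_test = np.linspace(0, 1, 100)
y_test = noisy_line(x_test)

for degree in (1, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree={degree}: train MSE={train_err:.4f}, test MSE={test_err:.4f}")
```

The degree-9 fit chases the noise in the training set, so its training error is tiny while its test error grows; that gap between the abstraction and the world it is meant to describe is the point at issue.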

Ayse Kok
Ayse completed her master's and doctorate degrees at both the University of Oxford (UK) and the University of Cambridge (UK). She participated in various projects in partnership with international organizations such as the UN, NATO, and the EU. She also served as an adjunct faculty member at Bogazici University in her home country, Turkey. Furthermore, she is an editor of several international journals, including the IEEE Internet of Things Journal, the Journal of Network & Computer Applications (Elsevier), and the Journal of Information Hiding and Multimedia Signal Processing, among others. She has also served as guest editor for several international journals from IEEE, Springer, Wiley, and Elsevier Science. She has spoken at various international conferences and published over 100 articles in peer-reviewed journals and academic books. Moreover, she is an organizing chair of several international conferences and a member of the technical committees of several others. In addition, she is an active reviewer for many international journals, as well as for research foundations in Switzerland, the USA, Canada, Saudi Arabia, and the United Kingdom. Having published three books in the field of technology and policy, Ayse is a member of the IEEE Communications Society, the IEEE Technical Committee on Security & Privacy, the IEEE IoT Community, and the IEEE Cybersecurity Community. She also acts as a policy analyst for the Global Foundation for Cyber Studies and Research. Currently, she lives with her family in Silicon Valley and works for Google in Mountain View.
