
The Hidden Aspect of AI


Metaphysics starts from mundane objects in order to draw distinctions. The world is full of latent things: systems that remain, or are meant to remain, below the threshold of the observable. The operations of the NSA, for example, were latent until Snowden disclosed them.

According to Luciano Floridi, Professor of Ethics at the Digital Ethics Lab of the University of Oxford, what there is in the world can be categorized as follows:

  • real (system + model);
  • virtual (only the model); or
  • latent (only the system).
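Floridi's three categories can be sketched as a toy classifier. This is purely illustrative: the names `Entity`, `has_system` and `has_model` are invented here, not Floridi's own.

```python
# A toy sketch of Floridi's trichotomy. An entity is "real" when there is
# both a system (the thing in the world) and a model (our representation),
# "virtual" when there is only a model, and "latent" when there is only a
# system. All names below are hypothetical, chosen for illustration.
from dataclasses import dataclass

@dataclass
class Entity:
    has_system: bool  # does the thing exist in the world?
    has_model: bool   # do we hold a representation of it?

def categorize(e: Entity) -> str:
    if e.has_system and e.has_model:
        return "real"       # system + model
    if e.has_model:
        return "virtual"    # only the model
    if e.has_system:
        return "latent"     # only the system
    return "nonexistent"    # neither; outside the three categories

chair = Entity(has_system=True, has_model=True)
unicorn = Entity(has_system=False, has_model=True)
nsa_pre_snowden = Entity(has_system=True, has_model=False)

print(categorize(chair))            # real
print(categorize(unicorn))          # virtual
print(categorize(nsa_pre_snowden))  # latent
```

The interesting case for this essay is the third branch: a system with no model is invisible to us until a model is supplied, as the NSA example above shows.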

“System” refers to the real object of which models are made. To give a specific example, consider a chair. A model of the chair captures its design and function. When the model in one’s mind and the physical chair in the real world do not coincide, one might trip over the chair because one did not expect it to be there.

Latency can thus be defined as a system without its model: the representable remains unrepresented. The word latent means, in the original Latin, concealed or hidden. A latent object would be a chair that looks like a foreign object because it cannot be recognized as a particular thing in a particular culture.

This concept of latency is important because many things are experienced as latent. Physical suffering, for example, is often unbearable to experience, and there are no models that describe the experience. To describe suffering, we can borrow models from the sciences or draw on the accounts of those who have suffered; yet none of these will prepare an individual for the sheer physical sensation. Human beings cannot be explained completely, in detail or in depth, because they are not automatons whose every movement can be predicted.

According to Heidegger, human beings tend to conceptualize everything by treating both objects and people as resources that can be used for some further purpose. An example is the employability model in higher education, which conceives of learners as employable subjects who will yield profit to the knowledge economy after spending a fixed number of hours in educational institutions and gathering a certain set of skills. This is an example of embedding in action.

Our basic need to think in models has led us to put individuals into neat boxes and categories and to reduce them to something; in other words, to embed them. We reduce them to objects, to machines, or to a bundle of mere psychological traits. Because we can only think in models, we can never fully model the human side of existence. There is therefore always something hidden, or ‘latent’, about being human.

One of the core aims of education is to make learners understand the world through models. We are often taught oversimplified models of the world that are contested later in life. Such myths may be needed to construct models our young minds can comprehend; only in later years do we develop the capacity to understand complexity. Yet the more abstract, complex and obscure these models become, the truer we assume them to be.

Oftentimes we get the idea that the more abstract the model, or the richer its concepts, the better it must be. As human beings are drawn towards the most abstract model, which is also the most inhuman one, they become afraid to experience anything they do not have a model for.

As we continue to think in models, we will also try to model our human existence. That is what happens when we develop artificial intelligence (AI) through sophisticated algorithms. Engineers build models to explain behavior, social movements, violence, and so on. While they may know that these models do not explain everything, they refine them and keep on modeling human behavior. Occasionally, they consult other disciplines, such as sociology, psychology, economics and anthropology, to model specific types of behavior.
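The point that a model never explains everything can be made quantitative with a minimal sketch. The data, the choice of a linear model, and every name below are invented for illustration; real behavioral models are far more elaborate, but the structure is the same: a fitted model, plus a residual the model cannot account for.

```python
# A minimal model-plus-residual sketch: fit a straight line to invented
# observations and measure what the model leaves unexplained.
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Hypothetical observations (e.g. "hours of training" vs. some "outcome").
xs = [1, 2, 3, 4, 5, 6]
ys = [2.1, 3.9, 6.2, 8.1, 9.7, 12.4]

a, b = fit_line(xs, ys)
predictions = [a * x + b for x in xs]
residuals = [y - p for y, p in zip(ys, predictions)]

# R^2: the share of variance the model does explain.
ss_res = sum(r ** 2 for r in residuals)
ss_tot = sum((y - sum(ys) / len(ys)) ** 2 for y in ys)
r_squared = 1 - ss_res / ss_tot

print(f"R^2 = {r_squared:.3f}")
assert r_squared < 1.0  # some residue always stays outside the model
```

Refining the model (more features, a richer functional form) shrinks the residual but never removes it; in the essay's terms, the residual is the part of the system that stays latent.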

In light of these definitions, one question we may need to ponder is this: how can we make technology more about humanity and less about models of humanity?

Ayse Kok
Ayse completed her master’s and doctorate degrees at the University of Oxford (UK) and the University of Cambridge (UK). She has participated in various projects in partnership with international organizations such as the UN, NATO, and the EU. She also served as an adjunct faculty member at Bogazici University in her home country, Turkey. She is the editor of several international journals, including the IEEE Internet of Things Journal, the Journal of Network & Computer Applications (Elsevier), and the Journal of Information Hiding and Multimedia Signal Processing, among others, and has served as guest editor of several international journals published by IEEE, Springer, Wiley and Elsevier Science. She has spoken at various international conferences and published over 100 articles in peer-reviewed journals and academic books. She is also an organizing chair and technical committee member of several international conferences, and an active reviewer for many international journals as well as research foundations in Switzerland, the USA, Canada, Saudi Arabia, and the United Kingdom. Having published three books in the field of technology and policy, Ayse is a member of the IEEE Communications Society, the IEEE Technical Committee on Security & Privacy, the IEEE IoT Community and the IEEE Cybersecurity Community. She also acts as a policy analyst for the Global Foundation for Cyber Studies and Research. She currently lives with her family in Silicon Valley and works for Google in Mountain View.
