
How to Make AI More Intelligent?


Any question about the future is susceptible to unknown unknowns, futile speculations about things undiscovered. We need examples we can observe and interrogate now.

The crux of the AI (Artificial Intelligence) problem is prediction under uncertainty: reasoning entails acts of prediction; common sense amounts to predicting general rules; and so on. When everything, including the goal itself, is rooted in prediction, every open problem appears merely incremental.

According to a recent article in Harvard Business Review, if a typical person can complete a mental task with less than one second of thought, that task can probably be automated using AI either now or in the near future. For these one-second tasks, the only scarce resources impeding progress are data and the talent to apply the software expertly.

While predictions are usually associated with machine learning, explanations are usually associated with scientific discovery. If the system sufficiently reflects its environment, the prediction holds; if it does not, the prediction fails. This is what it means to “mirror an ever-changing world,” and the problem is endemic to the task of prediction.

An “explanation” is a human-interpretable description derived from a more complex model. Such empirical generalizations are formalized in models such as Bayesian networks and structural equations. Observations cannot simply fit the model, for a model can be created to fit any set of facts or data. A scientific theory that merely summarized what had already been observed would not deserve to be called a theory.

Explanations (or theories) consist of interpretations of how the world works and why. These explanations are expressed in formalisms as mathematical or logical models. Models provide the foundation for predictions, which in turn provide the means for testing through controlled experiments.

[Figure: explanations, models, and predictions as interlocking components]

Adapted from David Deutsch, Apart from Universes

In this schema, each component serves a functional role, and each may stand alone. Iteratively, explanations can be tested via their predictions, and the results can draw attention to explanatory gaps in need of attention. Explanations behave more like living ecosystems than static artifacts, while predictions (a part) are subsumed by explanations (the whole).

The philosopher Nancy Cartwright, in her influential book, How the Laws of Physics Lie, highlights the difference between a generalized account capable of subsuming many observations (phenomenological laws), and the specificity needed for models to actually predict the real world. “The route from theory to reality is from theory to model, and then from model to phenomenological law. The phenomenological laws are indeed true of the objects in reality — or might be; but the fundamental laws are true only of objects in the model.”

A model is less valuable if it is merely descriptive. We want predictions we can act on: to buy a stock or to treat a disease. In The Book of Why, Judea Pearl, a pioneer of modern AI and probabilistic reasoning, places associative prediction on the lowest rung of intelligence, with intervention and the more imaginative counterfactual reasoning as stronger forms above it. Pearl explains why this higher-order knowledge cannot be derived from the probabilistic associations that characterize inductive systems.
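Pearl's gap between "seeing" and "doing" can be shown in a few lines. The scenario below is a made-up illustration, not Pearl's own example verbatim: a season variable influences both a sprinkler and the rain, so observing the sprinkler on tells us something about the season (and hence the rain), while forcing it on does not.

```python
import random

# Toy structural causal model (all probabilities are assumptions):
# season -> sprinkler, season -> rain. Seeing the sprinkler on is
# evidence of the dry season; doing (forcing) it on is not.
def run(n=50_000, seed=1, do_sprinkler=None):
    rng = random.Random(seed)
    rows = []
    for _ in range(n):
        dry_season = rng.random() < 0.5
        if do_sprinkler is None:                           # observe the world
            sprinkler = rng.random() < (0.8 if dry_season else 0.1)
        else:                                              # intervene: do(...)
            sprinkler = do_sprinkler
        rain = rng.random() < (0.1 if dry_season else 0.6)
        rows.append((sprinkler, rain))
    return rows

# Rung 1, association: P(rain | sprinkler on). Conditioning on what we
# see drags in the season, so rain looks rare (about 0.16 here).
obs = [r for s, r in run() if s]
p_rain_seeing = sum(obs) / len(obs)

# Rung 2, intervention: P(rain | do(sprinkler on)). Forcing the sprinkler
# severs its link to the season; rain keeps its natural rate (about 0.35).
intv = run(do_sprinkler=True)
p_rain_doing = sum(r for _, r in intv) / len(intv)

print(round(p_rain_seeing, 2), round(p_rain_doing, 2))
```

No amount of purely associative data from the first query recovers the second number; the interventional answer needs the causal structure, which is Pearl's point about the rungs.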

Scientific conjectures are often described as leaps of imagination. The “data” of imagination, counterfactuals, are by definition not observed facts. To give a specific example, every morning the sun rises in the east. Yet the deeper explanation of the sunrise accommodates unobservable data, such as what is happening when the sun is obscured by clouds. It even admits imagined, counterfactual cases, such as observers in orbit. Quite unlike data, knowledge is composed of a rich lattice of mutually supporting explanations.

The right move today for the digital knowledge ecosystem would be to integrate deep learning, which excels at perceptual classification, with symbolic systems, which excel at inference and abstraction. Explanatory power is already deployed in the methodologies of machine learning and in the selection of data.
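A minimal sketch of the hybrid this paragraph describes might look as follows. Everything here is a made-up stand-in: `perceive` plays the role of a learned classifier emitting soft labels, and `infer` plays the role of a symbolic rule engine reasoning over them.

```python
# Hypothetical neuro-symbolic sketch: a perceptual component supplies
# label probabilities; a symbolic component forward-chains over rules.

def perceive(image):
    """Stand-in for a trained classifier: returns label probabilities."""
    # Pretend the network saw a small feathered object in the image.
    return {"has_feathers": 0.95, "has_wheels": 0.02}

RULES = [
    # (premises, conclusion): a rule fires when every premise is a fact.
    (("has_feathers",), "bird"),
    (("has_wheels",), "vehicle"),
]

def infer(percepts, threshold=0.5):
    """Symbolic step: threshold percepts into facts, then apply rules."""
    facts = {label for label, prob in percepts.items() if prob >= threshold}
    return {conclusion for premises, conclusion in RULES
            if all(p in facts for p in premises)}

print(infer(perceive("photo.jpg")))  # {'bird'}
```

The division of labor is the point: the learned component handles noisy perception, while the symbolic component contributes the abstraction and inference that pure association lacks.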

Inductive systems like deep learning are powerful tools. When heat engines were first invented at the start of the industrial revolution, the initial interest was in their practical applications. But gradually, over 100 years, this instrumental view gave way to a much deeper theoretical understanding of heat and thermodynamics. These explanations eventually found their way into almost every modern-day branch of science, including quantum theory. The tool gave us heat engines; explanations gave us the modern world. That difference is truly breathtaking.

We do not know whether AI will follow the same progression, from these first practical applications to a deep theoretical understanding of knowledge creation. If it does, strong AI will follow.

Ayse Kok
Ayse completed her masters and doctorate degrees at both University of Oxford (UK) and University of Cambridge (UK). She participated in various projects in partnership with international organizations such as the UN, NATO, and the EU. She also served as an adjunct faculty member at Bosphorus University in her home country, Turkey. Furthermore, she is the editor of several international journals, including those for Springer, Wiley and Elsevier Science. She attended various international conferences as a speaker and published over 100 articles in both peer-reviewed journals and academic books. Having published 3 books in the field of technology & policy, Ayse is a member of the IEEE Communications Society, the IEEE Technical Committee on Security & Privacy, the IEEE IoT Community and the IEEE Cybersecurity Community. She also acts as a policy analyst for Global Foundation for Cyber Studies and Research. Currently, she lives with her family in Silicon Valley, where she worked as a researcher for companies like Facebook and Google.
