
Building A Mental Architecture for AI


A number of scientists and engineers fear that, once we build an artificial intelligence smarter than we are, a form of A.I. known as an artificial general intelligence, or A.G.I., doomsday may follow. Bill Gates and Tim Berners-Lee, the founder of the World Wide Web, recognize the promise of an A.G.I., a wish-granting genie rubbed up from our dreams, yet each has voiced grave concerns. Elon Musk warns against “summoning the demon,” envisaging “an immortal dictator from which we can never escape.” Stephen Hawking declared that an A.G.I. “could spell the end of the human race.” Such advisories aren’t new. In 1951, the year of the first rudimentary chess program and neural network, the A.I. pioneer Alan Turing predicted that machines would “outstrip our feeble powers” and “take control.”

Many people in tech point out that artificial narrow intelligence, or A.N.I., has grown ever safer and more reliable—certainly safer and more reliable than we are—even if we aren’t eager to contemplate the prospect of our irrelevance. Consider the time Google, unable to prevent Google Photos’ recognition engine from identifying black people as gorillas, simply banned the service from identifying gorillas at all. Smugness is probably not the smartest response to such failures. “The Surprising Creativity of Digital Evolution,” a paper published in March, rounded up the results from programs that could update their own parameters, as superintelligent beings will. When researchers tried to get 3-D virtual creatures to develop optimal ways of walking and jumping, a bug-fixer algorithm ended up “fixing” bugs by short-circuiting their underlying programs. In sum, there was widespread “potential for perverse outcomes from optimizing reward functions that appear sensible.” That’s researcher for ¯\_(ツ)_/¯.
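The flavor of those perverse outcomes is easy to sketch. The toy below is invented, not code from the paper, but it echoes a case the paper recounts, in which virtual creatures evolved to be tall and fall over rather than walk: a creature is rewarded for distance traveled, and a greedy optimizer discovers that height pays better than gait.

```python
# Hypothetical toy, not from "The Surprising Creativity of Digital Evolution":
# a creature is rewarded for distance traveled in a stand-in physics world where
# toppling over converts body height directly into forward distance.
import random

def simulated_distance(height, gait_effort):
    falling = height                 # falling over covers roughly the body's height
    walking = 0.3 * gait_effort      # the intended behavior earns a little...
    energy = 0.5 * gait_effort       # ...but costs more than it earns in this toy world
    return falling + walking - energy

def hill_climb(steps=2000, seed=0):
    """Greedy 'evolution': perturb the body plan, keep any improvement."""
    rng = random.Random(seed)
    height, gait = 1.0, 1.0          # start as a short creature that tries to walk
    best = simulated_distance(height, gait)
    for _ in range(steps):
        h = min(10.0, max(0.1, height + rng.gauss(0, 0.1)))
        g = min(10.0, max(0.0, gait + rng.gauss(0, 0.1)))
        score = simulated_distance(h, g)
        if score > best:
            height, gait, best = h, g, score
    return height, gait, best

height, gait, dist = hill_climb()
print(f"evolved height={height:.2f}, gait effort={gait:.2f}, distance={dist:.2f}")
```

Run it and the optimizer drives gait effort toward zero while height hits its cap: the reward function looked sensible, but falling was cheaper than walking.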

Thinking about A.G.I.s can help clarify what makes us human, for better and for worse. Have we struggled to build one because we are so good at thinking that computers will never catch up? Or because we’re so bad at thinking that we can’t finish the job?


Artificial intelligence has grown so ubiquitous—owing to advances in chip design, processing power, and big-data hosting—that we rarely notice it. We take it for granted when Siri schedules our appointments and when Facebook tags our photos and subverts our democracy. Computers are already proficient at picking stocks, translating speech, and diagnosing cancer, and their reach has begun to extend beyond calculation and taxonomy.

Can we claim our machines’ achievements for humanity? In “Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins,” Garry Kasparov, the former chess champion, argues both sides of the question. Some years before he lost his famous match with I.B.M.’s Deep Blue computer, in 1997, Kasparov said, “I don’t know how we can exist knowing that there exists something mentally stronger than us.” Yet he’s still around, litigating details from the match and devoting big chunks of his book (written with Mig Greengard) to scapegoating everyone involved with I.B.M.’s “$10 million alarm clock.” Then he suddenly pivots, to try to make the best of things. Using computers for “the more menial aspects” of reasoning will free us, elevating our cognition “toward creativity, curiosity, beauty, and joy.” If we don’t take advantage of that opportunity, he concludes, “we may as well be machines ourselves.” Only by relying on machines, then, can we demonstrate that we’re not.

In Steven Spielberg’s “A.I. Artificial Intelligence,” the emotionally damaged scientist played by William Hurt declares of robots, “Love will be the key by which they acquire a kind of subconscious never before achieved—an inner world of metaphor, of intuition . . . of dreams.” Love is also how we imagine that Pinocchio becomes a real live boy. What makes us human is doubt, fear, and shame, all the allotropes of unworthiness.


In the incisive “Life 3.0: Being Human in the Age of Artificial Intelligence,” Max Tegmark, a physics professor at M.I.T. who co-founded the Future of Life Institute, suggests that thinking isn’t what we think it is:

A living organism is an agent of bounded rationality that doesn’t pursue a single goal, but instead follows rules of thumb for what to pursue and avoid. Our human minds perceive these evolved rules of thumb as feelings, which usually (and often without us being aware of it) guide our decision making toward the ultimate goal of replication. Feelings of hunger and thirst protect us from starvation and dehydration, feelings of pain protect us from damaging our bodies, feelings of lust make us procreate, feelings of love and compassion make us help other carriers of our genes and those who help them and so on.


Rationalists have long sought to make reason as inarguable as mathematics, so that, as Leibniz put it, “there would be no more need of disputation between two philosophers than between two accountants.” But our decision-making process is a patchwork of kludgy code that hunts for probabilities, defaults to hunches, and is plunged into system error by unconscious impulses, the anchoring effect, loss aversion, confirmation bias, and a host of other irrational framing devices. Our brains aren’t Turing machines so much as a slop of systems cobbled together by eons of genetic mutation, systems geared to notice and respond to perceived changes in our environment—change, by its nature, being dangerous.


That ability to think, in turn, heightens the ability to threaten. Artificial intelligence, like natural intelligence, can be used to hurt as easily as to help. Hector Levesque argues that, “in imagining an aggressive AI, we are projecting our own psychology onto the artificial or alien intelligence.” In truth, we’re projecting our entire mental architecture. The breakthrough propelling many recent advances in A.I. is the deep neural net, modelled on our nervous system. Last month, the E.U., trying to clear a path through the “boosted decision trees” that populate the “random forests” of the machine-learning kingdom, began requiring that judgments made by a machine be explainable. The decision-making of deep-learning A.I.s is a “black box”; after an algorithm chooses whom to hire or whom to parole, say, it can’t lay out its reasoning for us. Regulating the matter sounds very sensible and European—but no one has proposed a similar law for humans, whose decision-making is far more opaque.
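The “black box” complaint is concrete. In the minimal sketch below—which assumes scikit-learn is available and uses invented applicant features—a small neural net hands down a hiring-style verdict, and the only “reasoning” it can surrender is its weight matrices.

```python
# Minimal black-box illustration (assumes scikit-learn; features and data invented).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                      # three made-up applicant features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)      # hidden rule the net must recover

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X, y)

applicant = np.array([[0.4, -0.2, 1.3]])
print("decision:", model.predict(applicant)[0])    # hire (1) or not (0)
print("probability:", model.predict_proba(applicant)[0])

# The closest thing to an "explanation" the model can offer:
for i, w in enumerate(model.coefs_):
    print(f"layer {i} weights, shape {w.shape}")   # just numbers, not reasons
```

Nothing in those weights says why this applicant was hired or passed over; a human decision-maker will at least offer a reason, even if the machinery beneath it is just as hidden.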

