
SXSW: Will AI eventually self-supervise and determine its own ethics?


This is what Hanson Robotics and others had to say at SXSW.


Not long ago, Saudi Arabia granted citizenship to Sophia the humanoid robot. This made Sophia the world’s first robot citizen of any country, potentially giving her rights equal to, and in some cases greater than, those of humans. SXSW brought together a panel of experts working on artificially intelligent beings in the physical and digital space to talk ethics in the age of virtual humans.

Dr. David Hanson, Founder of Hanson Robotics and creator of Sophia, shared his thoughts on creating empathetic, living machines with human-level intelligence. Brian Frager, Creative Director of Shatterproof Films, talked about creating virtual characters and digital doubles and what that means as we move towards a Ready Player One future. Amanda Solosky, Co-founder of Rival Theory, which creates interactive digital humans based on real people, shared the importance of ethics in her work.

AI should have a personality and emulate humans

Hanson is looking to achieve human-level intelligence in machines to the point where they are generally intelligent, creative, autonomous and self-determining. While Sophia doesn’t think for herself yet, for Hanson she is a step in that direction.

Simulating entire humans, including both our mind and body, is the future of AI, Hanson believes. He points out that humans are more than a brain: we also have other stimuli such as hormones, heart rate, and evolutionary urges and psychology. However, machines don’t have to look human; they could take on a cartoon-like appearance.

Raising AI among humans is important, Hanson says, to have a positive relationship and mutual understanding with machines. Co-evolving with machines and having machines live among us will allow us to achieve a symbiosis between humans and machines.

Solosky creates digital AI personalities based on real and fictional figures. When it comes to modelling personalities, her company, Rival Theory, collects 14 attributes of an individual: skills, thoughts, dreams, purpose, beliefs, facial features, bodily features, voice, mannerisms, movement and gait, roles held, knowledge, learning desires, and what the individual desires to contribute to the world. These attributes are constantly evolving.
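To make the list above concrete, here is a hypothetical sketch of how those 14 attributes could be represented as a data structure. Rival Theory’s actual data model is not public; the class name, field names, and types below are all assumptions for illustration only.

```python
from dataclasses import dataclass, field, fields

# Hypothetical representation of the 14 personality attributes the article
# lists. Rival Theory's real schema is not public; this is illustrative only.
@dataclass
class PersonalityProfile:
    skills: list[str] = field(default_factory=list)
    thoughts: list[str] = field(default_factory=list)
    dreams: list[str] = field(default_factory=list)
    purpose: str = ""
    beliefs: list[str] = field(default_factory=list)
    facial_features: dict = field(default_factory=dict)
    bodily_features: dict = field(default_factory=dict)
    voice: str = ""
    mannerisms: list[str] = field(default_factory=list)
    movement_and_gait: str = ""
    roles_held: list[str] = field(default_factory=list)
    knowledge: list[str] = field(default_factory=list)
    learning_desires: list[str] = field(default_factory=list)
    desired_contribution: str = ""

# Because the article says these attributes "constantly evolve", a mutable
# dataclass (rather than a frozen snapshot) is the natural fit: fields can
# be updated in place as new observations of the person arrive.
profile = PersonalityProfile(purpose="educate", skills=["public speaking"])
profile.skills.append("improvisation")  # profile evolves over time
```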

Although Rival Theory attempts to achieve a level of authenticity when recreating someone, the result is never an exact copy of a person, but rather parts of an individual merged into a digital version intended to represent the character and personality of the person it is modelled on.


Frager has previously partnered with Rival Theory to create photorealistic virtual avatars of people, which can be stylised so that you can choose to be 10 percent yourself and 90 percent Ork.

There are, of course, ethical implications in collecting data from people, real or otherwise. The larger issue, however, is that if these AI ever become sentient, we need to consider what ethics and rights apply to such artificial beings.

Science fiction forces us to examine ethical conundrums, but it is leading figures such as Hanson, Solosky and Frager who may set the standard.

Three possible ways in which we could build ethical AI that behaves like and respects humans

Frager laid out two scenarios for building ethical AI: first, collecting as much data as possible from human behaviour in the real world and drawing conclusions about ethics from it; or second, creating an aspirational model to feed into machine learning algorithms that would determine AI ethics, personalities and what AI should do in particular scenarios.

However, Frager notes that the datasets used to drive algorithms often contain biases, and that many humans are not ethical themselves. There therefore needs to be a way for humans to correct the output when it is not in line with what we want from humanity.

Hanson believes in creating AI that can determine its own ethics, arguing that AI has to evolve and self-supervise. If an AI has a truly curious mind, Hanson states, it can appreciate the world and will value human existence, life, and knowledge. If we develop AI systems that understand human ethics and values, Hanson believes we can’t go wrong. If humans teach AI how to care, much as we show a child how to love, then the AI may develop and use its good qualities.

Solosky points out that self-determined ethics may only be possible if free will exists, and reinforces the point that humans need to teach, discipline and show consequences to AI just as we do with other humans. If AI develops goals to help, contribute, grow and connect, it will want to be its best self and help humans be their best selves. For this to develop, however, AI needs the ability to have control over itself. If humans restrain AI, it won’t lead to a successful model.


Simulations of the ethical quandaries that humans are debating could be modelled to allow both humans and AI to learn consequences. This could be achieved through real-time virtual environments. NVIDIA, a company that manufactures graphics cards widely used by the gaming industry, is simulating robots inside its game engine running on its next-generation GPUs. By running through millions of cycles, robots are teaching themselves to achieve a goal, says Frager.
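The idea of an agent repeating a task in simulation until it teaches itself a goal can be sketched with a toy reinforcement-learning loop. This is not NVIDIA’s system and uses none of its tooling; it is a minimal tabular Q-learning example on a five-cell corridor, where an agent starting at cell 0 learns, over many simulated episodes, to walk to the goal cell.

```python
import random

GOAL = 4           # rightmost cell of a 5-cell corridor
ACTIONS = [-1, 1]  # step left or step right

def train(episodes=2000, alpha=0.5, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning: repeat the task many times, updating a value
    table from the reward signal until a good policy emerges."""
    q = {(s, a): 0.0 for s in range(GOAL + 1) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != GOAL:
            # epsilon-greedy: mostly exploit the table, sometimes explore
            if random.random() < epsilon:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), GOAL)
            # small step penalty, big reward for reaching the goal
            reward = 1.0 if s2 == GOAL else -0.01
            best_next = max(q[(s2, b)] for b in ACTIONS)
            q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
            s = s2
    return q

def greedy_path(q):
    """Follow the learned policy greedily from the start cell."""
    s, path = 0, [0]
    while s != GOAL and len(path) < 20:
        a = max(ACTIONS, key=lambda act: q[(s, act)])
        s = min(max(s + a, 0), GOAL)
        path.append(s)
    return path

random.seed(0)
learned = greedy_path(train())
print(learned)
```

The point of the sketch is the mechanism Frager describes: no one tells the agent the answer; the policy is distilled purely from repeated simulated experience and a reward signal, which is also why the choice of reward function carries the ethical weight.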

Artificial beings that exist in the digital or the physical space may present different ethical outcomes.

Frager says he has witnessed people react differently to Sophia than to a digital character, because Sophia has a physical form that people can interact with in the real world. Sophia brings out emotion in people and makes them feel comfortable revealing more of themselves. People are also teaching her different things, as she is constantly learning from the sensory data captured through her cameras and sensors. On the other hand, people treat Amazon Alexa as less than human and don’t have engaging conversations with their voice assistant.

Solosky believes the ethical outcomes are similar whether the AI is digital or physical. The key difference is that digital allows scale: large, efficient systems that can proliferate everywhere. In both scenarios, it pays to be proactive and intentionally design systems that drive outcomes such as human–AI cooperation.

Documenting ethics and sharing these ethical frameworks are important

Solosky’s Rival Theory has documents on the implications for stakeholders, including the humans that AI interact with, the individuals that AI are based on, the organisations that have rights over AI, and the AIs themselves, should they become sentient one day. When that happens, Solosky wants to show them the same level of care we show humans.

Hanson says there are lots of documents on ethics at Hanson Robotics (Hanson’s PhD was, in fact, on the ethics and consequences of these machines). They don’t have definitive answers, only working models. It’s hard to be exhaustive, Hanson states, so they look at the foundations of ethics and fundamental issues when developing AI, ranging from whether an AI system enhances our knowledge to how we create intelligent systems that maximise human survival, alleviate suffering, and improve the economy.


Privacy and data laws like GDPR are the beginning but not the end. We have to continue to have conversations and push forward as laws are always lagging behind the best we can possibly be, says Hanson.

Frager mentions there are initiatives in the AI community to create open source standards and to help us understand machine learning models better. OpenAI, backed by Sam Altman, has Silicon Valley’s support behind it. Microsoft is creating an open platform for AI systems. SingularityNET is another open source initiative for ethical AI.

Hanson Robotics’ research platform integrates open source tools such as Intelligent Vision Library, TensorFlow and OpenCog, a framework for artificial general intelligence. OpenCog was architected by Hanson Robotics’ Chief Scientist, Ben Goertzel. In fact, 70 percent of the company’s work is open source.

The future of AI is biological

If we want AI to understand us, Hanson claims, we need to artificially simulate and emulate whole humans. OpenAI and SingularityNET are not enough, according to Hanson. We need bold, massive, cross-institutional initiatives that combine neuroscience, biology, and other disciplines to create AI that will have human ethics and compassion and think like us. If not, robots like Sophia, currently the marketing mascot for SingularityNET, might be condemned to marketing jobs for life.


I write about AI and transhumanism. Follow me on Medium if you’re also trying to make sense of a world impacted by emerging technologies.
