
Technology shapes the way we interact every day, and those interactions are increasingly shaped by algorithms, whether we recognize it or not.

The practice of intentionally guiding user behavior through design is known as “persuasive technology.” The field of behavior design began at Stanford under B.J. Fogg, widely regarded as the father of persuasive technology. He originally called the field “captology,” a term derived from the phrase “computers as persuasive technologies.” Later on, the name evolved into persuasive technology.

With the advent of the iPhone, people adopted behaviors that were not possible before. Because so much of our lives now play out in a digital world, the way we design software and technology suddenly has an impact on how we interact with each other.

There is usually little interest in who controls the narratives that influence today’s society, particularly on large digital media platforms. In the realm of persuasive technology, the public is largely unaware that it is being persuaded. That is why the news that Cambridge Analytica had used people’s psychographic data came as such a shock to most people.

In many ways we might consider this business as usual. After all, Cambridge Analytica is just one of many firms interested in persuading people to buy a product or try a new service. The traditional advertising model is what undergirds most of the digital technologies we engage with.

Persuasive technologies have focused on sustained behavior change, while nudging focuses more on momentary behavior change. The solution does not lie in teaching everyone how to design persuasive technologies; rather, it lies in ensuring that individuals understand the principles behind what these systems are doing. If people understand the principles of persuasive systems, they are able to reject unwanted influence, which means they remain in control.

How can we bring ethical concerns into the conception of such technology? Since all the relevant data is being gathered anyway, what is required from engineers is to look for unusual patterns in it (a minimal sketch of such a check follows below). In addition, there should be more diversity on design teams. Homogeneous teams carry intellectual, cultural, and social blind spots, and diversity helps de-risk a product so that it isn’t used in unintended ways.
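As an illustration only, here is a minimal sketch of the kind of “unusual pattern” check an engineering team might run over behavioral data it already collects. The metric, threshold, and numbers are hypothetical assumptions, not anything prescribed above.

```python
from statistics import mean, stdev

def flag_unusual(values, threshold=2.5):
    """Flag values that deviate more than `threshold` standard deviations
    from the mean -- a crude stand-in for 'looking for unusual patterns'."""
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [v for v in values if abs(v - mu) / sigma > threshold]

# Hypothetical daily session lengths (minutes) for one user cohort.
daily_minutes = [22, 25, 19, 24, 23, 21, 240, 20, 26, 22]
print(flag_unusual(daily_minutes))  # -> [240], a spike worth reviewing
```

A real monitoring pipeline would of course use more robust statistics and domain-specific signals; the point is simply that the data to ask these questions usually already exists.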

Moreover, it is not only institutions and companies but also users who should start asking these questions. If an algorithm is biased, how can that be verified? Is there evidence that has been validated against some standard, some certification? A simple illustration of such a check is sketched below.
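By way of illustration, and purely as a simplified sketch rather than any certified standard, one common first question about algorithmic bias is whether a system’s positive-outcome rate differs sharply across groups. The group labels, decisions, and 0.8 threshold below are assumptions for the example.

```python
from collections import defaultdict

def selection_rates(records):
    """Compute the positive-outcome rate per group from (group, outcome) pairs."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(records):
    """Ratio of the lowest to the highest group selection rate.
    Values below 0.8 are often treated as a warning sign ('four-fifths rule')."""
    rates = selection_rates(records)
    return min(rates.values()) / max(rates.values())

# Hypothetical loan-approval decisions: (group, 1 = approved, 0 = denied).
decisions = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
             ("B", 0), ("B", 1), ("B", 0), ("B", 0)]
print(disparate_impact_ratio(decisions))  # 0.25 / 0.75 -> approx. 0.33, below 0.8
```

Passing such a check does not make a system fair, but asking for this kind of evidence is something users and regulators can do without access to the model itself.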

Contrary to popular belief, pouring ever more investment into private companies to provide the backbone and infrastructure for learning and knowledge is not necessarily the right step in tackling these issues.

Someone is designing the technologies that may be persuading us, and those designs should be transparent. We should understand the ethical frameworks around those technologies. We should understand whether we have an opportunity to opt out. We should be asking whether the public can be harmed by these persuasions in both the short and the long term. Without that kind of transparency and control, a great deal is at stake in the digital realm.

We are at a crucial moment in which we might fully embrace certain kinds of projects from which we cannot easily turn back. It is a good time for us to be reflective.
