
‘Captology’: The Tech of Persuasion


Technology shapes the way we interact every day, and our interactions are increasingly shaped by algorithms as well, whether we recognize it or not.

The practice of intentionally guiding user behavior is known as “persuasive technology.” The field of behavior design began at Stanford under B.J. Fogg, widely regarded as the father of persuasive technology. He originally called the field “captology,” a term he coined from the acronym for “Computers As Persuasive Technologies.” The name later evolved into persuasive technology.

With the advent of the iPhone, individuals have adopted behaviors that weren’t possible before. Because we live so much of our lives in a digital world, the way we design software and technology suddenly has an impact on how we interact with each other.

There is usually little scrutiny of who controls the narratives that shape today’s society, particularly on large digital media platforms. In the realm of persuasive technology, the public is largely unaware that it is being persuaded. That is why the news of Cambridge Analytica using people’s psychographic data came as such a shock to most people.

In many ways, we might think this is business as usual. After all, Cambridge Analytica is just one of many firms interested in persuading people to buy a product or try a new service. The traditional advertising model is what undergirds most of the digital technologies we engage with.

Persuasive technologies focus on continuous behavior change, while nudging focuses more on momentary behavior change. The solution lies not in teaching everyone how to design persuasive technologies, but in ensuring that individuals understand the principles behind what these systems are doing. If people understand the principles of persuasive systems, they can reject unwanted influence, which means they remain in control.

How can we bring ethical concerns into the conception of such technology? Since all the related data is gathered anyway, engineers should look for unusual patterns in how their products are being used. In addition, design teams need more diversity, because homogeneous teams carry intellectual, cultural, and social blind spots. Diversity helps de-risk a product so that it is not used in unintended ways.

Moreover, not only institutions and companies but also users should start asking these questions. If there is algorithmic bias, how can it be verified? Is there evidence that has been validated against some standard or certification?

Contrary to popular belief, investing ever more in private companies to provide the backbone and infrastructure for learning and knowledge is not necessarily the right step in tackling these issues.

Someone is designing the technologies that might be persuading us, and those designs should be transparent. We should understand the ethical frameworks around these technologies, whether we have an opportunity to opt out, and whether the public can be harmed by these persuasions in both the short and long term. Without that kind of transparency and control, there is a lot at stake in the digital realm.

We are at a crucial moment where we might fully embrace certain kinds of projects that we can’t easily come back from. It is a great time for us to be reflective.

Ayse Kok
Ayse completed her master’s and doctorate degrees at the University of Oxford (UK) and the University of Cambridge (UK). She participated in various projects in partnership with international organizations such as the UN, NATO, and the EU. She also served as an adjunct faculty member at Bosphorus University in her home country, Turkey. Furthermore, she is the editor of several international journals, including those for Springer, Wiley, and Elsevier Science. She has spoken at various international conferences and published over 100 articles in peer-reviewed journals and academic books. Having published three books in the field of technology and policy, Ayse is a member of the IEEE Communications Society, the IEEE Technical Committee on Security & Privacy, the IEEE IoT Community, and the IEEE Cybersecurity Community. She also acts as a policy analyst for the Global Foundation for Cyber Studies and Research. She currently lives with her family in Silicon Valley, where she has worked as a researcher for companies like Facebook and Google.

