
Co-creating History with AI


A new division of history occurred with the advent of the digital information revolution; the way we now live is described as hyperhistory. This new way of living is characterized by certain features:

1) Our technologies are now capable of more than just mediation between us – as users – and other technologies or the world. ICTs (Information and Communication Technologies) are also able to control our other technologies.

To give a specific example, home automation software can order groceries online based on what it determines the household wants and what is available in the fridge.
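The restocking logic behind such a system can be sketched in a few lines. This is a minimal illustration, not any real product's code; the item names, stock levels, and function names are all hypothetical.

```python
# Hypothetical desired stock levels the household has configured.
DESIRED_STOCK = {"milk": 2, "eggs": 12, "butter": 1}

def restock_order(fridge_inventory: dict) -> dict:
    """Return item -> quantity to order so the fridge reaches desired stock."""
    return {
        item: wanted - fridge_inventory.get(item, 0)
        for item, wanted in DESIRED_STOCK.items()
        if fridge_inventory.get(item, 0) < wanted
    }

# The fridge reports one milk, six eggs, and one butter:
order = restock_order({"milk": 1, "eggs": 6, "butter": 1})
print(order)  # {'milk': 1, 'eggs': 6}
```

The point of the example is how little "intelligence" is needed before a technology starts acting on our behalf: a simple rule comparing sensed state against a target is already a technology controlling another technology.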

2) The narratives of our lives generate vast stores of data, which are consumed by ICTs in ways that we do not actively endorse or control. We are often not even aware of the processes involved in creating this data.

An example would be the smartphone in one’s pocket capturing one’s location and proximity to nearby landmarks, or the social graph of personal calendar events. These feed complex engines of analysis and prediction, fitting the individual into semi-anonymized profiles of aggregate behavior, which in turn generate even more data.

3) Societies that have entered hyperhistory are particularly vulnerable to attacks using ICTs. There are a multitude of vectors of attack, from infrastructure controlled by ICTs, through economic disruption, to disinformation campaigns that damage social cohesion and political institutions.

Coordinated attacks against infrastructure or strategically important programs, ransomware (such as WannaCry, which hit the NHS), and other emerging patterns of attack are well understood by cybersecurity experts.

4) In hyperhistory, individual human beings attempt to define themselves in terms of the ICTs they use. They try to fit into the imposed data models and system processes, redefining what it means to be human through their understanding of their role within these structural systems.

Social networks require us to reconsider the types of relationships we have and to structure the information we share through the filters of these systems’ data models.

Our relationship with wider society is further defined by our identities in the systems of governments, corporations, and other institutions. Whether it is government systems defining citizens, welfare beneficiaries, or taxpayers, or corporate networks defining authorized personnel, the overlapping identities imposed by these systems are internalized in how we talk about ourselves in society.

5) The distinction between being connected (“online”) and disconnected (“offline”) becomes meaningless when the human being cannot avoid being tracked and managed, just like any other component within the network. We are always connected, always mediating ourselves via digital technologies. These technologies have become transparent.

One of the major drivers of hyperhistory is AI (artificial intelligence), which has become a catch-all phrase for a wide-ranging set of technologies, most of which apply statistical learning techniques to find patterns in large sets of data and make predictions based on those patterns. From the critical, like law enforcement, healthcare, and humanitarian aid, to the mundane, like shopping, AI seems to be the answer to all our problems. Yet, while the progress of hyperhistory seems beneficial for humankind, it is also worth reflecting on who is driving the regulatory agenda and who benefits from it.
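The core mechanism described here – finding a pattern in data and extrapolating from it – can be made concrete with a toy example. The sketch below fits an ordinary least-squares line to a handful of invented data points (a hypothetical daily purchase count) and predicts the next value; most of what is marketed as AI is, at bottom, a far more elaborate version of this.

```python
def fit_line(xs, ys):
    """Least-squares fit of y = a*x + b; returns (a, b)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Invented pattern: purchases grow steadily over days 0..4.
days = [0, 1, 2, 3, 4]
purchases = [10, 12, 14, 16, 18]
a, b = fit_line(days, purchases)
print(a * 5 + b)  # predicted purchases for day 5: 20.0
```

Whoever holds the data chooses what patterns get learned and what gets predicted – which is precisely why the question of who drives the agenda matters.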

This question needs answering because letting industry needs drive the AI agenda presents real risks. With so many of the digital giants of Silicon Valley located in the US, one particular concern is AI’s potential to remake societies in the image of US culture and the preferences of large US companies, even more than is currently the case. These tech companies sit on troves of data that can be turned into the raw material for new AI-based services.


A related concern is how much influence these companies have over AI regulation. In some instances, they are invited to act as co-regulators. Much can be said in favor of such open norm-setting venues, which aim to address AI regulation by developing technical standards, ethical principles, and professional codes of conduct outside of the drag of formal regulatory processes. Yet again, the question that needs to be asked is: who benefits? The solutions presented by these initiatives are often framed in terms of ethical frameworks or narrow technical fixes that promise fair, accountable, and transparent AI. Yet they do not address questions of hard regulation or the Internet’s business model of advertising and attention.

How AI systems function and, by extension, what regulatory problems they raise is highly context-dependent. A US-based, commercially driven agenda is naturally going to be a poor fit for much of the rest of the world.

One remedy would be to ensure equitable stakeholder representation when regulating AI. Yet we are not sufficiently hearing the concerns of the Global South. Those voices are especially relevant, as their countries are often used as ‘test-beds’ for technology that will later be rolled out across the rest of the world.

Similarly, it is important to go beyond the fairness rhetoric and start to formulate what other fundamental values should be included. In focusing on narrowly defined conceptualizations of fairness, accountability, and transparency, what are we leaving behind? It should be evident that merely ensuring there is a civil society representative or academic in the room for every industry representative is not enough. Even where various stakeholders join the regulatory process in equal numbers, there is no internal equality: corporations simply have more resources to dedicate to such processes.

To ensure we all benefit from these technologies, we need to guarantee that a diverse set of concerns and values is represented – equitably – when setting the regulatory agenda for AI. As it says in the Qur’an:

“And those who harm believing men and believing women for [something] other than what they have earned have certainly born upon themselves a slander and manifest sin.” (Qur’an 33:58)

This is not only about technology; it is also about rewriting history and making technology more ethical and human.

Ayse Kok
Ayse completed her masters and doctorate degrees at both University of Oxford (UK) and University of Cambridge (UK). She participated in various projects in partnership with international organizations such as UN, NATO, and the EU. She also served as an adjunct faculty member at Bosphorus University in her home town Turkey. Furthermore, she is the editor of several international journals, including those for Springer, Wiley and Elsevier Science. She attended various international conferences as a speaker and published over 100 articles in both peer-reviewed journals and academic books. Having published 3 books in the field of technology & policy, Ayse is a member of the IEEE Communications Society, member of the IEEE Technical Committee on Security & Privacy, member of the IEEE IoT Community and member of the IEEE Cybersecurity Community. She also acts as a policy analyst for Global Foundation for Cyber Studies and Research. Currently, she lives with her family in Silicon Valley where she worked as a researcher for companies like Facebook and Google.

