
Embedding Cyber Trust Into The Black Box


One of the shared concerns in the fields of information security and artificial intelligence (AI) is the need to provide an explanation to the end user with the objective of gaining trust. Trust is often hard to establish. In our daily lives, we tend to trust individuals who explain to us why they do what they do. Trust involves explanations given by an individual that others simply accept and base their decisions upon. From this perspective, trust and explanation appear to be common partners in everyday life. The same principle also applies to our digital interactions in cyberspace: artificial agents need to explain their decisions to the user in order to gain trust, and website designers should explain to clients why they can carry out their transactions online safely.

In the field of AI, one of the most researched topics has been expert systems, which can be roughly defined as systems that offer solutions to complex problems, ranging from financial decision-making to medical diagnosis, that would otherwise require a human expert. According to Ye & Johnson (1995), the following explanation types are common in expert systems (a minimal sketch of how they might be represented follows the list):

  • Traces: A trace provides a detailed record of the reasoning steps that led to a conclusion. 
  • Justifications: A justification focuses on the logical grounds for the arguments made. 
  • Strategies: Strategies describe the higher-level approaches the expert system applies to the information at hand.
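
To make these categories concrete, the sketch below models an expert-system recommendation that carries all three explanation types. The class names, fields, and the loan-advice example are illustrative assumptions, not part of Ye & Johnson's framework or of any particular system.

from dataclasses import dataclass, field
from typing import List

# Illustrative sketch only: names and fields are assumptions for this article,
# not drawn from Ye & Johnson (1995) or a real expert-system API.

@dataclass
class Explanation:
    trace: List[str] = field(default_factory=list)            # record of reasoning steps
    justifications: List[str] = field(default_factory=list)   # logical grounds for the conclusion
    strategy: str = ""                                         # higher-level approach applied

@dataclass
class Recommendation:
    conclusion: str
    explanation: Explanation

# Hypothetical rule firings in a loan-advice expert system
rec = Recommendation(
    conclusion="Decline the loan application",
    explanation=Explanation(
        trace=[
            "rule_12 fired: debt_to_income > 0.45",
            "rule_07 fired: credit_history < 2 years",
        ],
        justifications=[
            "A high debt-to-income ratio is a strong predictor of default.",
            "A short credit history gives insufficient evidence of reliability.",
        ],
        strategy="Eliminate high-risk applicants before scoring the remainder.",
    ),
)

print(rec.conclusion)
for step in rec.explanation.trace:
    print("  trace:", step)

Depending on which explanation type the user asks for, the system would surface the trace, the justifications, or the strategy, rather than all of them at once.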

Another aspect that needs to be emphasized is the absence of visibility or observability of a system. In general, the term black box refers to a system whose outputs are produced from given inputs without any understanding of its inner workings. In a more philosophical sense, as mentioned by Latour (2005), a black box is something in which actants have become invisible. In his actor-network theory, Latour defines an actant as anything that participates in actions within a network of relations in order to be realized. According to Latour, the process of blackboxing is related to several other concepts: 

  • Translation: This term implies that possibilities and intentions for action change when actants join forces. These possibilities and intentions are what Latour calls the ‘action program’. 
  • Delegation: Part of an action program can be delegated to different actants. 
  • Composition: This term refers to the phenomenon whereby actants in a network constitute a composite actant to which actions can be attributed.

Since the security of a system is not something the user can observe directly, it is crucial to explain the system’s security to the end user, even though this is not a functional requirement of the system. The fact that a system produces plausible results does not imply that it is secure. Some insight into the measures taken to protect the system against intruders should therefore be made available.

When explaining the security measures of a system, the major goal is transparency: making users understand the choices designers made to protect them against intruders. Opinions differ on whether such explanations should be given at all. Some argue that a transparent explanation also empowers attackers, while others argue that protection mechanisms can only be improved through public scrutiny. Clarifying the procedures embedded in the system design, and the alternatives available when something goes wrong, increases the level of transparency. Keeping the security mechanisms inside the black box by withholding such transparency explanations is referred to as ‘security by obscurity.’

The explanation of a system’s security should give the user the opportunity to make an informed and reasonable decision among alternatives on whether or not to accept a specific procedure. This is hard to achieve because the goal of transparency may require further subgoals, delegated partly to the system designers and partly to the system itself, for instance in the form of help functions explaining how the program operates and how it is protected.

When it comes to security in AI, justification emerges as a subgoal if the user is not satisfied with the response to a transparency question, and transparency emerges as a subgoal if the user is not satisfied with the response to a justification question. When deeper levels of explanation are asked for, the explanation types may therefore alternate. Depending on whether the user asks a ‘why’ or a ‘how’ question (justification or transparency, respectively), the outer layer of the system determines what the explanation tree looks like.
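
As a minimal sketch of such an alternating explanation tree, the fragment below models each layer as a node that answers either a ‘how’ (transparency) or a ‘why’ (justification) question and, if the user is not satisfied, hands over to a deeper node of the other type. The node structure and the TLS example are illustrative assumptions rather than a description of any specific system.

from dataclasses import dataclass
from typing import Optional

# Minimal sketch of an alternating explanation tree; the structure and the
# example answers are assumptions made for illustration only.

@dataclass
class ExplanationNode:
    kind: str                                     # "transparency" (answers 'how') or "justification" (answers 'why')
    answer: str
    deeper: Optional["ExplanationNode"] = None    # follow-up node of the *other* kind

def follow_up(node: ExplanationNode) -> Optional[ExplanationNode]:
    """Return the next, deeper explanation when the user is not satisfied."""
    return node.deeper

# Outer layer: the user asks why the transaction is safe (justification);
# a dissatisfied user is then told how it is protected (transparency), and so on.
tree = ExplanationNode(
    kind="justification",
    answer="Your transaction is safe because it is encrypted end to end.",
    deeper=ExplanationNode(
        kind="transparency",
        answer="The site uses TLS; certificates are validated before any data is sent.",
        deeper=ExplanationNode(
            kind="justification",
            answer="Certificate validation matters because it rules out impostor servers.",
        ),
    ),
)

node: Optional[ExplanationNode] = tree
while node is not None:
    print(f"[{node.kind}] {node.answer}")
    node = follow_up(node)

Walking the tree from the outer layer inward mirrors the alternation described above: each unsatisfying answer triggers a deeper question of the other type.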

If expert systems could be built whose explanations inspire as much confidence as those given by individuals, they might become even more of a black box, since the need to understand precisely how they function would become less pressing. 

In the field of AI, if an explanation for confidence is provided, the user is able to grasp whether the system’s decision makes sense. Given that we are surrounded by technologies that gather data about us in order to make decisions for us, the question of how to design security-sensitive systems so that they provide sufficient explanations to users needs to be answered. Without the required information about what is happening inside a security-sensitive system, user consent is also missing, and users ultimately cannot be held responsible for anything.

With too little information, an explanation establishes no trust for the user and leaves the black box closed; it also fails to provide an explanation for confidence. Too much information fails just as well, since the user cannot understand the system or process all the details, and often only some indication of the complete reasoning trace is required. 

In general, pursuing the right goal (answering a ‘why’ or a ‘how’ question) with the right amount of information can result in the informed consent of the user. In other words, the level of abstraction has to be right before we can speak of user responsibilities in the context of informed consent. This should not be interpreted as meaning that designers no longer bear any responsibility once user consent is obtained. On the contrary, designers are responsible for building systems in such a way that users can access the right explanations, which in turn encourages users to behave responsibly.

While in information security the objective of an explanation is to provide transparency about the system’s black box, in AI an explanation aims to obtain the user’s confidence in the system’s decisions, which does not require opening the black box. This is because the user’s main interest lies in why a particular outcome is judged to be a good decision rather than in how the system arrived at it.

Unless awareness is raised about how to allocate user responsibilities correctly, providing the right amount and type of information to obtain informed consent for using the system and its outputs, and ultimately gaining users’ trust in cyberspace, will remain futile. 


REFERENCES

Latour, B. (2005). Reassembling the social: An introduction to actor-network-theory. Oxford: Oxford University Press.

Ye, L., & Johnson, P. (1995). The impact of explanation facilities on user acceptance of expert systems advice. MIS Quarterly, 19(2), 157–172.

Ayse Kok
