Robotic Agents and Human Principals



Developments in science and technology are anticipated to lead to the emergence of artificial cognition and robots with the technical and cognitive capabilities of human beings. Future workforces will be able to compute faster and work without emotional dependencies or physical fatigue. Human beings may also be able to upgrade themselves into cyborgs, gaining additional capabilities through high-tech tools implanted in their bodies. Such developments will change the working environment and probably the socio-economic character of our societies. Agency theory, as reviewed by Eisenhardt, has been one of the foundations on which organizational theories are shaped; so what happens to the relationship between agent and principal when the agents are robots and the principals are human beings? To answer this, we first review agency theory and then discuss how robotic development, and the replacement of human workers with robots, affects the theory and its implications for organizational theory.

Agency theory, assessed and reviewed by Kathleen Eisenhardt (1989), is one of the important organizational theories, with a great impact on contractual arrangements between any principal and any agent, whether they are individuals or legal entities working together. Agency theory not only helps us understand why organizations are formed and for what types of services full-time recruitment is suitable, but also under which conditions output-based contracts are appropriate. It shapes the whole idea of how human working relations are arranged, which in turn determines the cooperation framework along a continuum of delivery methods ranging from behavior-based (the principal determines the actions and the agent follows the lead) to completely output-based (the agent determines the required actions and takes responsibility for the output). Bureaucracies are more behavior-based in nature; organizations may, however, be formed from a collection of output-based cooperation arrangements. An example of such an organization is a project-based consultancy corporation in which freelance consultants act as agents and the organization (the principal) acts as the liaison between them and the final customers.

Kathleen Eisenhardt, author of the 1989 assessment and review of agency theory, is currently the Stanford W. Ascherman, M.D. Professor and Co-Director of the Stanford Technology Ventures Program.

One of the main questions that arises from using artificial intelligence instead of "deficient" humans (if we may use this term to capture the huge gap between the computing and cognitive capabilities of human beings and those of future artificially intelligent robots) is whether there will be a contract between the principal and the agent at all. Will human principals need to hire artificially intelligent robots, or will robots simply be bought and maintained by them? If robots have cognition, what makes it ethically justified to exploit them without compensating their efforts? If artificial cognition, which goes beyond intelligence, is to be used, who is entitled to say that free will is the right and trait of humans alone, and not of entities with artificial cognition? Organizational theory rests on the very preliminary assumption that humans have the right to choose whether or not to cooperate with each other and form organizations; but does artificial cognition have the same right or ability?

Aside from the moral aspect of exploiting artificial cognition, if we assume that artificial cognition will in the future have the ability and the right to choose not to cooperate, then agency theory remains a valid subject for robots.

Agency theory offers several propositions in which the working relation between the agent and the principal is determined by different factors. In general, the principal's ability to define the work and specify the details of the tasks that lead to the required outcome is the main factor favoring behavior-based contracts over output-based contracts. This ability is strengthened by information systems adequate to help the principal verify the agent's activities. Measurability, and the corresponding certainty of the output, favors output-based contracts. Finally, the risk aversion of the principal and/or the agent affects the choice of contract type: the more risk-averse the principal and the less risk-averse the agent, the more an output-based contract is preferred. As long as we are considering a principal-agent relationship between humans, these propositions are valid and logical; when deploying artificial cognition, however, the relation is affected by the very different natures of the principal and the agent.
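As a rough illustration only (not part of Eisenhardt's formulation), the propositions above can be sketched as a simple decision heuristic. All factor names, scales, and weightings below are my own assumptions, chosen to mirror the text:

```python
# Toy sketch of the agency-theory propositions described above.
# All names and weightings are illustrative assumptions, not Eisenhardt's model.

def preferred_contract(task_programmability: float,
                       outcome_measurability: float,
                       principal_risk_aversion: float,
                       agent_risk_aversion: float) -> str:
    """Return the contract type the propositions lean toward.

    All inputs are on a 0..1 scale:
    - task_programmability: how well the principal can define the work and
      verify the agent's activities (e.g. via information systems)
    - outcome_measurability: how certainly the output can be measured
    - *_risk_aversion: each party's unwillingness to bear risk
    """
    # High programmability (and a risk-averse agent) favors behavior-based
    # contracts; measurable outcomes plus a risk-averse principal and a
    # risk-tolerant agent favor output-based contracts.
    behavior_score = task_programmability + agent_risk_aversion
    output_score = outcome_measurability + principal_risk_aversion
    return "behavior-based" if behavior_score >= output_score else "output-based"

# A bureaucracy: tasks are well specified, outputs are hard to measure.
print(preferred_contract(0.9, 0.3, 0.4, 0.5))   # behavior-based
# A freelance consultancy: the output is the deliverable, the agent accepts risk.
print(preferred_contract(0.2, 0.9, 0.7, 0.2))   # output-based
```

The point of the sketch is only that the choice is a trade-off along a continuum, not a binary rule; real contracts mix both modes.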

For example, is it rational to discuss the risk aversion of an artificial intelligence? Risk aversion is not based solely on a party's ability to compute probabilities and measure the risk level. Psychological traits affect how risk-taking a person is, and personal experiences as well as biological differences shape people's willingness to accept or reject risk. But how is a robot with artificial cognition presumed to act in terms of risk aversion? Is it possible to imagine a robot willing to accept the risk of delivering output at a specified quality? Is it not already assumed that artificial cognition can accept higher risks because of its greater computing capacity and capabilities? What are the legal concerns in transferring risks to robots, and if a legal case cannot be attributed to a robot, what reason is there to assume free will for it?

Another important consideration is that if robotic agents are much more intelligent than their principals, it seems inappropriate to form a behavior-based cooperation framework with artificially intelligent robots. In that case, the cooperation framework will lean toward the output-based type, which leaves humans with less control as principals over their robotic agents and may (with high probability) lead to the replacement of principals by even more artificially intelligent and cognitive entities. Humans would ultimately be pushed out of the production cycle, and if that happens, new bureaucracies will form in which principal-agent theory has to be revisited in light of the specific traits of artificially cognitive robots acting as both principals and agents.

This prophecy of future organizations might be frightening. It is indeed frightening, and it seems very probable, perhaps inevitable. The reason we fear it is that we would lose control of the production cycle we have developed through millennia of our existence on Earth to provide for our needs. Losing control of production means putting our survival at risk and leaving it to artificially intelligent robots to make decisions for us; much like the animals we have tamed and control for our benefit, except that this time we are creating something that may come to control and perhaps tame us. For what reason? That is not known!

Artwork by Boris Groh

On the other hand, using robots and artificial cognition instead of humans as principals and agents might benefit us, because it releases us from the burden of the production cycle and liberates us to pursue action rather than labor. We would be freed from labor and given the opportunity to do what we really want to do; however, we would need to find a way to stay smarter than artificially cognitive entities.

If what is portrayed above happens, which is very probable, then agency theory will have to be revised and restudied based on the characteristics of the new entities in organizations built on human-robot and robot-robot working frameworks. Organizations in every aspect, from organizational morale and ethics to best-practice models of management, leadership, and motivation, will have to be redefined and revised. Yuval Noah Harari states in his book "Homo Deus" that liberalism, as the dominant religion of the modern era, is pushing humans toward the end of Homo sapiens and the end of humanity as we know it today. If that prophecy comes true, what will future organizations look like?

Hamed Qadim Hamed is a graduate of B.Sc. in engineering from Petroleum University of Technology and Master of Business Administration in Project Management from Asia Pacific International College and has more than fifteen years of experience in the oil and gas industry. He has been active in the domain of entrepreneurship during the last 7 years. His interest in studying philosophy and human sciences, besides his working experience, has shaped his focus area on philosophical aspects of entrepreneurship. He is now studying and practicing to determine a framework for Spiritual Entrepreneurship.


5 Replies to “Robotic Agents and Human Principals”

  1. The article points to an evolution towards a "social democracy", though perhaps robot-driven. I am not sanguine that agency theory is as binary as portrayed in this article. There are too many hybrids today, ranging from Mondragon in Spain to the more complex issues in the US, as we see, for example, in current union negotiations or the many government regulations that have arisen around workers, the environment, and consumers, all of which impact the simple models one finds when googling agency theory.

    This becomes even more evident when one considers that robots will not be acting autonomously in this matrix.
    It is becoming global, when one considers such movements as the "Arab Spring", the Wall Street movement, and even "Black Lives Matter", as well as the numerous protest movements worldwide.

    1. Thanks, Tom, for taking the time to provide feedback. I hope to get more replies on the subject from your side. There are some very critical questions, assuming that robots with artificial intelligence and/or artificial cognition will be in a position to replace human beings as the labor force required for the production of goods and services. If that happens, will the economic system of capitalism replace people with robots? And if so, there will probably be political impacts too, because then a human's economic value no longer protects his or her social and political rights! That is what Harari also asks in his book "Homo Deus".

      The article, however, focuses on another aspect and asks some other questions. I wonder: if we utilize artificial cognition, should there be labor contracts and rights for them, just as for human beings? And if that is applicable from the perspective of morality and ethics, what would the form of cooperation be? I agree that agency theory focuses on the two binary extremes of output-based and behavior-based contract types, though it is still valid to consider a continuum of contractual arrangements between principals and agents, with each cooperation model in the continuum affected by the propositions of agency theory. But does that apply to robotic agents too? I doubt it! That is why I suggest we need to discuss the nature of the relationship between human principals and robotic agents to be able to foresee future organizations. And I do believe that keeping robots non-autonomous might be a choice available to us as human beings; however, I believe that the efficiency and effectiveness of such autonomy will impose itself on us. This might be tragic or it might be a blessing; we do not know. But we should discuss the consequences of such autonomy.

      Once again thanks for your feedback. I am delighted to learn more from you through these discussions.

      1. The issue between humans and robots with respect to agency theory has been examined extensively in the literature; just think about the characters in Star Wars or Mr. Data in Star Trek. Agency theory, like economic theory, seeks homogeneity of actors and eliminates externalities in order to build a simple model. This, of course, collides with today's world, with its increasing mixing of cultures and its differentiation between individuals by race, religion, and multiple other factors. We really see this manifest in the increasing economic split in society, and in the rise of the "gig" economy, as seen with companies like Lyft and Uber, which claim drivers are independent contractors and not employees, a claim that has just erupted in protests at Uber.

        Whether there are synthetic "humans", robots in physical form or digital programs, or biological entities (maybe robots can be bred? Think Do Androids Dream of Electric Sheep / Blade Runner), agency theory has significance as long as those pesky externalities that differentiate humans and robots from each other, and within their own categories, exist. Of course this goes back, in ancient history, to the issue of slavery. It is a philosophical question, not a reductionist management model or any of the current economic models, whether neoclassical or behaviorist.

        At one time there were theories that insects had souls, not to mention my cat.

  2. In my work, I assume there is more than one way to describe and understand working together, which in this article is called cooperation. I suggest four: networking, coordinating, cooperating, and collaborating, along a developmental continuum in which each level includes the previous one. These strategies are best used after examining and resolving the time, trust, and turf issues that always affect the degree to which, and how well, people and organizations can work together. If anyone would like to see the details of my description of working-together strategies, I will send a brief summary; contact me at: [email protected]

  3. Hi Arthur,

    As I stated in another post, most management theories work as long as the externalities that differentiate humans are eliminated: religion, skin color, sexual orientation, wealth and power, etc. This, of course, is seen clearly in the "muckrakers" such as Upton Sinclair (The Jungle and King Coal) or Frank Norris's "The Octopus"; or in real life, such as the building of the Transcontinental Railroad (think Leland Stanford or James Hill). Then there is the biblical literature: think of the Jews in Egypt or the African slave trade within and outside of Africa. Think of the "Arab Spring", the Wall Street protest, and the multitude of US regulations that deal with workplace rights and justice, not just for line workers.
