Will AI Become The Most Loved or Hated Technology?


One can’t deny that artificial intelligence is a topic of discussion for many, partly because AI has been all over the media of late. We hear that it will make us obsolete and put us out of jobs, that it’s scary, and that we won’t know what to do when faced with an intelligent robot carrying out the tasks we used to do ourselves but can no longer manage because of age or injury. But what about the other side of the coin?

What about AI becoming our best friend, our helper, and the one who takes care of us as humans? Will AI be a positive or negative thing in our lives? The answer to that question is still a while away. But there are two things we can be sure of — it will undoubtedly change the way we live and work, and it will come to mean a lot more to us than just an abstract idea or concept. This is evident from its current popularity.

(Image Source: Statista)

AI is already impacting our lives in largely beneficial ways, and the number of intelligent systems we work with daily is growing. For example, modern technology has brought us cars that can drive themselves, smartphones that give us access to almost limitless information, and a plethora of social media and messaging platforms connecting us to friends and family worldwide. AI will continue to shape our experiences positively by bringing invaluable new solutions to sectors currently plagued with inefficient systems and processes, but it does come with its challenges.

Challenge 1: Getting more people involved in AI development

The good news is that businesses are now starting to understand how big an impact AI can have on their organisation. Spending on AI-related activities is predicted to reach £38.46 billion by 2025, a considerable increase from the £3.2 billion spent in 2017. The bad news is that there still isn’t enough development within the AI space, and a talent gap is emerging.

For example, according to Accenture Strategy’s Machine Learning study, only 7% of companies surveyed have an AI-specific role, and only 19% employ four or more people with machine-learning expertise. In other words, there are not enough people with the knowledge and skills to work with AI. As AI grows and becomes more complex, getting more people involved in its development will be crucial to ensuring it is put to good use.

Another concern surrounding the future of AI is that businesses don’t know when, why or how to start using it. While most companies are aware of AI’s benefits, many still struggle with understanding where it can be implemented effectively and how they should get their company started on the right foot.

In the same Accenture Strategy study, only 50% of surveyed companies were using AI at all, and those that were had deployed fewer than two projects. This can largely be attributed to a lack of understanding of how to get started with implementing AI solutions, although some tech giants have already cleared that hurdle. Apple, for example, has built computer vision and AI-based image detection directly into its devices, most visibly in facial-recognition unlocking.

It will be necessary for AI developers, and the companies that employ them, to communicate the value AI brings to a business and help prospective adopters understand how it can benefit their organisation. This will allow businesses everywhere to start using AI in ways that positively influence their future success.

Challenge 2: Getting people on board with ethical and safety concerns

There are also ethical and safety concerns surrounding the current state of AI. With the technology still in its infancy, we’re unsure how AI should be employed to maximise performance and minimise risk.

Ray Kurzweil, the futurist, inventor of the first digital reading machine and a widely acknowledged expert on the topic, was one of the first people to predict that autonomous systems would supersede humans within a few decades. Whether he will be proved right remains to be seen, but the direction of travel is clear. Current AI systems demand enormous computing power, yet they can already beat human players at chess, and board games like checkers already rely on AI-driven technology.

There are essentially four areas that need to be examined: transparency, fairness, privacy and human values. Transparency means being clear about how a system works and providing an explanation for the decisions it makes, including the data it was trained on. If something goes wrong with the system, it should be able to explain why, and it should also be able to account for the decisions it made in the past.
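To make that concrete, here is a minimal sketch of one simple form of transparency: a small decision tree whose reasoning can be printed and read by a person. It assumes scikit-learn is installed and uses its built-in iris dataset purely as a stand-in for real business data; it is an illustration, not a prescription for how any particular system should explain itself.

# A minimal sketch of one simple form of model transparency, assuming
# scikit-learn is available; the iris dataset stands in for real data.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(data.data, data.target)

# Which inputs the model leans on most when making a decision
for name, weight in zip(data.feature_names, model.feature_importances_):
    print(f"{name}: {weight:.2f}")

# The full set of if/then rules the model learned, readable by a human
print(export_text(model, feature_names=data.feature_names))

A simple, inspectable model like this will not suit every task, but it shows the kind of explanation users and regulators can reasonably ask for.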

Fairness involves an AI system whose learned knowledge could result in discriminatory decisions, for example in how it makes hiring decisions or who gets access to certain services and resources. The critical thing to remember here is that people, and their values and beliefs, differ. It is therefore essential that AI takes these differences into account and uses methods for making fair judgments.
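One very rough way to start checking for this is simply to compare how a system treats different groups. The sketch below uses made-up hiring outcomes, purely for illustration, and computes the selection rate for each group; a large gap between those rates is a signal that the system deserves closer scrutiny, though it is nowhere near a complete fairness audit.

from collections import defaultdict

# Hypothetical (group, was_hired) outcomes; illustrative only, not real data
decisions = [
    ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", False), ("group_b", True),
]

totals, hired = defaultdict(int), defaultdict(int)
for group, was_hired in decisions:
    totals[group] += 1
    hired[group] += was_hired

# Selection rate per group; large gaps suggest possible disparate impact
for group in totals:
    print(f"{group}: hired {hired[group] / totals[group]:.0%} of applicants")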

Privacy is one of the most significant concerns surrounding AI. Even though we trust our phone companies not to sell our data, we shouldn’t assume this will always be the case. The more data we give to a company, the more money it can potentially make from it. In that vein, the question becomes: is the security of my data worth the potential revenue it could bring to a company?

And finally, there are Human Values. As the human race becomes increasingly dependent on technology, we must come together across industries, disciplines and locations to ensure that technology is deployed responsibly for our benefit.

Yasmita Kumar: I am a writer and have been writing about various topics for many years now. I enjoy writing about my hobbies, which include technology and its impact on our everyday life. Professionally, I write about technology and have a keen interest in how it is implemented in new ways.
