
The Dark Sides of AI You Need to Be Aware Of

Just as the potential for utilising AI seems fantastic, the risk of unintended consequences and misuse is equally frightening. Companies should therefore not focus only on the bright sides of the technology; they also need a clear understanding of its dark side, because ignoring it can be costly.

AI is a disruptive technology on par with the internet and the mobile phone, and we have yet to see its full impact. It has only been a year and a half since the technology exploded into the mainstream with the launch of OpenAI’s ChatGPT, and development will likely continue at a rapid pace in the coming years.

Be Aware of Both Sides

As always with technological advancements, there is a bright side and a dark side, and AI sharpens this dichotomy. The potential of AI in terms of innovation and efficiency seems almost limitless. However, the technology also opens the door to unintended consequences and misuse.

As companies ramp up their AI efforts, it is crucial to be aware of this duality. It’s not just about the benefits. It is equally important to address the dark sides of AI. These can be divided into external and internal dimensions.

EXTERNAL DARK SIDES OF AI

Hacker AI: Frightening in Level and Scale

The external dark sides of AI involve hackers and other criminal actors. We can expect a significant increase in attacks now that many processes can be automated. With AI, hackers can also create extremely convincing content, such as phishing emails or phone calls in which AI mimics the CFO’s voice.

We have already seen many examples of deepfake content that is very difficult for the average user to detect. And it is alarming how easy such content is to produce: Devoteam conducted an experiment with the goal of imitating a NATO general’s order using AI and no prior expertise. It succeeded in half an hour.

Hackers are not idle; they are building sophisticated solutions in the form of “hacker AI”, where the technology is used to find vulnerabilities in systems. Perhaps the most ominous aspect is the emergence of business models around it. It is now extremely easy to buy “Hacking-as-a-Service”, giving users with limited technical skills access to ready-made software packages complete with instructions for various attacks. Some hacker networks even offer a hotline. AI thus elevates hacking to an unprecedented level, in both quality and quantity.

Trust Under Attack

Everyone can be affected by disinformation – individuals, companies, and entire societies. In the fall of 2023, there were numerous stories, even in established media, about France’s major problem with bedbugs. This led to a drop in tourism, school closures, and the government having to spend significant resources calming the population.

In March 2024, French authorities announced that the panic had largely been fuelled by fake news and that Russia was behind many of the negative stories. The purpose was partly to suggest a link between bedbugs and Ukrainian refugees.

This escalation with AI challenges one of the pillars of a democratic society – trust. One can trust neither the sender nor the content. In 1996, Andrew Grove, Intel’s longtime CEO, published the book “Only the Paranoid Survive”, about the threat from new technologies. The title is even more relevant today. Constant vigilance and scepticism are prerequisites for not being scammed in the age of AI. This often requires extensive training of employees, and companies should upgrade their tools to combat hacking and fraud.

INTERNAL DARK SIDES OF AI

Control AI’s Access to Data

From an internal perspective, AI’s performance depends on the data it has access to. You must therefore closely monitor what you allow the technology to access and how that data is used. Otherwise, you risk the technology using and sharing data in ways that could harm the business or violate regulations such as GDPR. There have already been several cases where generative AI surfaced sensitive data such as passwords and other confidential documents.

Consider the example of a new HR employee who gained access to a zip file containing confidential data. The file should have been deleted according to the rules, and the insufficient rights management exposed a larger problem within the company.

The key is good data governance: meaningful oversight of all data, including rights management, classification, deletion, and inspection. Control AI by controlling its foundation – data. Data governance, however, is a complex discipline that requires both attention and resources.
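To make this concrete, here is a minimal sketch of how an access gate might sit between a document store and an AI assistant, so that the assistant only ever sees data the user is entitled to. The metadata model, role names, and classification labels are illustrative assumptions, not a reference to any specific product.

```python
from dataclasses import dataclass

# Hypothetical document metadata; a real system would pull this
# from a data catalogue or rights-management service.
@dataclass
class Document:
    path: str
    classification: str     # e.g. "public", "internal", "confidential"
    allowed_roles: set[str]
    retention_expired: bool

def documents_for_ai(docs: list[Document], user_role: str) -> list[Document]:
    """Return only the documents the AI may use on behalf of this user."""
    allowed = []
    for doc in docs:
        if doc.retention_expired:
            continue  # should already have been deleted; never expose it
        if doc.classification == "confidential" and user_role not in doc.allowed_roles:
            continue  # rights management: respect the access-control rules
        allowed.append(doc)
    return allowed

# Example: the HR zip file from the case above is filtered out because
# its retention period has expired.
docs = [
    Document("handbook.pdf", "public", {"everyone"}, False),
    Document("salaries.zip", "confidential", {"hr_manager"}, True),
]
print([d.path for d in documents_for_ai(docs, user_role="hr_assistant")])
# -> ['handbook.pdf']
```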

AI Must Be Transparent

Another risk is developing AI solutions over which you lose control – that is, AI is allowed to do too much on its own, either because the solution is a black box or because human oversight of its results is inadequate.

Take an obvious example: case processing. Here, AI can make a significant difference by automating tasks such as analysing cases in a case management system and suggesting decisions. This speeds up case processing and frees up the caseworker’s time to handle more cases.

But what if the technology makes mistakes, favouring or discriminating against certain population groups because the underlying data contains bias? Or if the solution lacks transparency, making it impossible to understand how the AI arrived at a specific suggestion?
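One simple way to surface this kind of bias is to regularly compare the AI’s positive-suggestion rate across groups – a basic demographic-parity check. The sketch below assumes a log of AI suggestions where each record carries a group attribute and an approved flag; the field names and data are illustrative, not taken from any real system.

```python
from collections import defaultdict

def approval_rates(suggestions: list[dict]) -> dict[str, float]:
    """Positive-suggestion rate per group; a large gap is a red flag."""
    totals, positives = defaultdict(int), defaultdict(int)
    for s in suggestions:
        totals[s["group"]] += 1
        positives[s["group"]] += s["approved"]
    return {group: positives[group] / totals[group] for group in totals}

# Illustrative data: each record is one AI-suggested decision.
log = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 0}, {"group": "B", "approved": 1},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
]
print(approval_rates(log))  # ~0.67 for A vs ~0.33 for B -> investigate the gap
```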

Human Oversight and Traceability

The problem of insufficient transparency also arises when AI talks to AI, which is increasingly the case both externally and internally – for example, two systems negotiating contracts with each other. Who makes the decisions here, and are they monitored by humans?

It is therefore essential to have a human in the loop (HITL) and traceability, two aspects that the EU’s AI Act emphasises. This means an effective control function involving humans and the ability to review AI’s decision-making processes.
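In practice, this can be as simple as making sure every AI suggestion passes through an explicit human approval step and is written to an append-only audit trail. Below is a minimal sketch under those assumptions; the field names and the JSON-lines log format are our own choices, not something the AI Act prescribes.

```python
import json
import time
import uuid

def record_decision(suggestion: str, rationale: str, reviewer: str, approved: bool) -> dict:
    """Append-only audit entry: who reviewed which AI suggestion, and why."""
    entry = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "ai_suggestion": suggestion,
        "ai_rationale": rationale,  # traceability: keep the model's stated reasoning
        "reviewer": reviewer,       # human in the loop: a named person signs off
        "approved": approved,
    }
    with open("decision_log.jsonl", "a") as log:
        log.write(json.dumps(entry) + "\n")
    return entry

# Nothing is acted on until a human has explicitly approved it.
record_decision(
    suggestion="Grant application 1234",
    rationale="Applicant meets criteria X and Y",
    reviewer="caseworker_17",
    approved=True,
)
```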

Using AI as a Safeguard

However, AI can also be used as a security measure, which brings us back to the duality: one of the strongest weapons against hacker AI is AI itself, identifying vulnerabilities and threats. A good example is Microsoft’s Copilot for Security, which combines GPT-4 with Microsoft’s own resources, including data from the 78 trillion security signals the software giant processes daily. Copilot for Security is essentially a chatbot through which security experts can get updated information on attacks, threats, and the like, and the solution can analyse code to find potential vulnerabilities.
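The general pattern – asking a large language model to review code for weaknesses – looks roughly like the sketch below. It uses the OpenAI Python SDK purely as an example backend; this is not Copilot for Security’s interface, and the model name and prompt are assumptions.

```python
# Requires the `openai` package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Deliberately vulnerable snippet (string-concatenated SQL) for the review.
SNIPPET = """
def login(user, password):
    query = "SELECT * FROM users WHERE name = '" + user + "'"
    return db.execute(query)
"""

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name
    messages=[
        {"role": "system", "content": "You are a security reviewer. List likely vulnerabilities in the code."},
        {"role": "user", "content": SNIPPET},
    ],
)
print(response.choices[0].message.content)  # should flag the SQL injection
```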

Summary

AI is a fascinating technology that can benefit society and businesses in many areas by optimising processes, freeing up resources, and enabling creative solutions. But like other technologies, AI’s capabilities can have unintended consequences or be misused by dark forces. The success of companies with AI depends on their ability to address both the bright and the dark sides.

If you want to know more about a comprehensive and responsible approach to AI, you can contact us via the button below.