Cybersecurity has traditionally relied on manual monitoring of threats and reactive responses to cyber incidents. But the rapid spread of artificial intelligence (AI) enabled solutions across industries is changing the rules. The enhanced capabilities of AI provide new tools for cybercriminals and security professionals alike.
Firms often underestimate the risks posed by implementing AI. This technology relies on collecting massive quantities of data, raising questions about where that information is stored and who can access it.
The fact that many organisations use cloud-based AI tools and services offered by third parties makes it even more important for executives to discuss the cybersecurity implications, as there is the possibility that sensitive data is being transferred between or stored on external servers across borders.
Businesses must ensure their information remains secure from hackers and criminals looking to exploit it, and they have an ethical responsibility to safeguard customer data as well. The consequences of failure in this regard could be dire – financially, reputationally, and societally.
When Vastaamo, a Finnish psychotherapy service provider, suffered a data breach in 2020, the attacker obtained confidential therapy notes belonging to approximately 33,000 patients. The breach and subsequent blackmail attempts against patients caused many to suffer immense anxiety and stress, with reports linking the case to at least one death. The company itself collapsed into bankruptcy, with financial and legal consequences for some key stakeholders.
More than five years later, it remains a tragedy for the individuals affected and a cautionary tale for organisations that fail to review their cybersecurity procedures with the necessary rigour.
What are the new risks?
The cybersecurity risks posed by AI mostly fall into two categories.
Internal security breaches can occur when sensitive business information, intellectual property, or customer and staff data is leaked. Disclosures can be intentional or unintentional. For instance, generative AI (GenAI) chatbots can gain access to sensitive information through users’ prompts and may later divulge this information as a result of prompt injection attacks that exploit vulnerabilities of GenAI applications and their underlying models.
If these types of threats were visualised as weak spots in the company’s armour, the second type of risk – cybercriminals using AI – is like a hostile opponent.
Hackers and cyberattacks are nothing new, but the advent of AI has given them more effective tools. GenAI is already being used to create more realistic phishing emails and deepfakes in different languages, increasing the chances of tricking people into revealing sensitive information. AI–enabled code generators also make it easier for cybercriminals to develop more effective malware to target businesses.
Of course, these are not isolated risks; they can occur in combination. Cyberattacks using AI-written malware could exploit vulnerabilities arising from improper AI implementation in internal processes. Such a scenario has the potential to cause harm to and beyond the targeted firm. Damage could easily affect consumers and disrupt the wider supply chain.
Using AI in cybersecurity
The evolving nature of threats should be cause for caution, but the advanced capabilities of AI can also be used to strengthen cybersecurity. What matters most in determining whether AI is a liability or a reinforcement is how strategically it is implemented.
Security Operation Centres (SOC) and Cybersecurity Incident Response Teams (CIRT) are already harnessing the power of AI in intrusion detection systems and threat intelligence tools. This helps experts to identify, contain, and respond quickly to incidents.
GenAI’s ability to rapidly process large quantities of information can also assist in creating easy-to-comprehend summaries about emerging threats and incidents for the leadership, executives, and other groups of stakeholders.
So, the challenge for businesses is not to shun AI; it’s to ensure AI adoption is targeted at areas of the business where the technology can do the most good and creates the smallest risk. An unspecific rollout across all business functions is how oversights and weak points can emerge.
Business leaders should know where data gathered by AI services is being stored, and systems must be rigorously tested to make sure that internal vulnerabilities are eliminated.
Ensuring resilience
An information security governance and management system is essential. This provides a framework for continuous risk assessments and security audits – both internally and externally.
Collaborating with partners and authorities, as well as forming strategic alliances with other firms and organisations, is incredibly important in boosting resilience to cybersecurity threats, as businesses do not operate in a vacuum, and another organisation in the supply chain succumbing to a cyberattack could cause secondary impacts on anyone else they do business with.
This is especially the case with SMEs, which are significantly less prepared for and more susceptible to cyberattacks and business disruptions. This makes them prime targets for attackers who want to cause large-scale disruptions and data extortions in supply chains.
A similar chain of events transpired in 2019, when suspected Russian intelligence services attacked the US-based IT company SolarWinds. By compromising the firm’s software development processes, the attackers were able to distribute malware among thousands of SolarWinds’ partners and customers, affecting millions of stakeholders globally.
Establishing strong knowledge–sharing networks helps companies to stay well-informed about new risks, security requirements, and opportunities to improve their own practices.
At the same time, leaders must promote a culture within their organisations that educates and empowers employees to play an active role in cyber defence. Customised training can help individuals understand how risks apply to their professional duties, and managers should encourage reporting suspicious events or incidents to cybersecurity teams as best practice.
AI is changing the dangers firms face in cyberspace, while at the same time offering a tool to reinforce cybersecurity practices. But the most important factor in whether a firm is more or less resilient remains human. Investing in employees’ education is vital in promoting informed decision-making, creating the best possible chance of resisting or reacting quickly to any incidents that may emerge.