As artificial intelligence (AI) continues to advance and permeate various industries, it brings significant benefits and transformative capabilities. Alongside this tremendous potential, however, AI can also change an organisation's cyber risk profile by introducing risks that have not previously been considered. In this article, we will explore the growing threats associated with AI, focusing in particular on how malicious threat actors can take advantage of it. We will also look at how AI can be a double-edged sword and the potential impact if employees do not use it correctly. Lastly, we will discuss effective strategies to mitigate these risks and the importance of developing a comprehensive generative AI policy and guidelines.
Risks from External Threat Actors
AI-enhanced attack techniques present a significant challenge for organisations and cybersecurity experts. Cybercriminals can leverage AI algorithms to automate and streamline various stages of an attack, including reconnaissance, phishing, vulnerability identification, and exploitation.
One prominent area of concern is digital security, where AI-enhanced techniques such as spear phishing have become increasingly potent and challenging to detect. Attackers can leverage AI algorithms to create highly personalised and convincing phishing emails, making it more difficult for individuals to distinguish between legitimate and malicious communications. Low-skilled attackers who previously struggled to draft credible phishing emails because of spelling and grammar mistakes, or who lacked the ability to create tailored scenarios, can now use AI to build highly effective, targeted phishing campaigns. The scale and efficiency at which these attacks can be launched pose significant risks to individuals, businesses, and critical infrastructure.
Threat actors and scammers can also use AI to create bespoke and highly convincing scripts for vishing and phone-based scams, persuading employees to divulge highly sensitive information or to perform actions that could give attackers access to sensitive resources or services, such as accepting an MFA prompt initiated by the threat actor.
Deepfake technology, enhanced by AI, can be employed to create convincing fake audio or video recordings of key personnel within an organisation. These falsified materials can be used for various malicious purposes, such as spreading false information or facilitating social engineering attacks.
The future is also likely to see attackers combining AI and deepfake technology to carry out audio or video vishing calls using the voice and image of a known, trusted colleague. These attacks will be much harder to identify and defend against.
The emergence of AI has also ushered in a new era of cyber threats by enabling individuals with limited cybersecurity knowledge to leverage advanced tools such as PentestGPT for malicious purposes. These tools operate in an interactive mode, providing guidance and support to attackers throughout their operations, and empower them to perform complex penetration testing and launch targeted attacks, even without prior access or deep technical expertise. For example, PentestGPT can walk an attacker through a full compromise of a target by providing instructions and assisting with reasoning and analysis. Such AI tools have significantly lowered the barrier to entry, enabling attackers to exploit vulnerabilities and compromise systems with relative ease and underlining the growing cybersecurity risks associated with AI-driven attacks.
Traditionally, only technically advanced threat actors had the capability to write malware and malicious tools and use them in attacks. Dark web marketplaces then lowered the bar, allowing anyone with access to these marketplaces to purchase malware and tools for their own attacks. AI could now lower the bar further by allowing threat actors with no programming experience to build their own custom malware or malicious tools.
Furthermore, AI allows any threat actor to easily adapt and obfuscate existing malware, making it difficult for traditional defence mechanisms to keep pace with the evolving threat landscape. AI systems can be leveraged to automate the creation of mutating malware that evades detection by traditional endpoint detection and response (EDR) mechanisms. This new arms race is likely to put pressure on traditional signature-based anti-malware solutions and accelerate the shift to behaviour-based and machine learning solutions.
Moreover, political security faces considerable threats from AI-powered attacks. Hackers can manipulate surveillance systems, analyse mass-collected data, and target specific political groups, leading to severe consequences such as misinformation campaigns and social unrest. The integration of AI systems into the control of drones and other physical assets poses another critical risk. State-sponsored attackers could tamper with the machine learning algorithms used by these AI systems, causing them to bypass implemented controls and perform malicious actions, potentially creating threats on a global scale.
Risks from Internal Users
AI technology has proven to be a valuable tool for many organisations, helping them improve efficiency and productivity. If not managed carefully, however, it can also cause significant damage to an organisation through the accidental exposure of sensitive information or intellectual property. The last few months have seen several stories of employees accidentally leaking sensitive information into AI systems such as ChatGPT or Grammarly.
Employees do not always realise that information submitted to these AI systems may be used to train and improve them. AI-driven chatbots may also reuse submitted information to answer queries from other users. This was the case with three Samsung employees who entered sensitive source code and meeting recordings into ChatGPT.
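To make this risk more concrete, the short Python sketch below shows one way an organisation could screen prompts for obviously sensitive content before they are forwarded to an external AI service. The pattern list and the screen_prompt and submit_to_chatbot functions are illustrative assumptions rather than a reference to any particular product; a real deployment would rely on an enterprise data loss prevention tool and the organisation's own classification rules.

    import re

    # Illustrative patterns an organisation might treat as sensitive; the real
    # list would be driven by the organisation's data classification policy.
    SENSITIVE_PATTERNS = {
        "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
        "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
        "internal_marker": re.compile(r"\b(CONFIDENTIAL|INTERNAL ONLY)\b", re.IGNORECASE),
        "source_code_hint": re.compile(r"\b(def|class|import)\s+\w+"),
    }

    def screen_prompt(prompt: str) -> list[str]:
        """Return the names of any sensitive patterns found in the prompt."""
        return [name for name, pattern in SENSITIVE_PATTERNS.items()
                if pattern.search(prompt)]

    def submit_to_chatbot(prompt: str) -> None:
        findings = screen_prompt(prompt)
        if findings:
            # Block the request rather than forwarding potentially sensitive
            # material to an external AI service, and tell the user why.
            print(f"Blocked: prompt appears to contain {', '.join(findings)}.")
            return
        print("Prompt passed screening; forwarding to the approved AI service.")
        # send_to_approved_ai_service(prompt)  # placeholder for the real integration

    if __name__ == "__main__":
        submit_to_chatbot("Please summarise this meeting: CONFIDENTIAL Q3 roadmap...")

Even a simple pre-submission check like this, combined with clear guidance on what may be shared with external AI tools, can prevent the kind of accidental disclosure described above.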
Mitigating the Risks
Mitigating the cybersecurity risks associated with AI requires a comprehensive approach that encompasses both technological and policy measures. Organisations should prioritise the following strategies to mitigate the risks posed by external threat actors and internal users alike.
Firstly, implement a robust security programme built on a defence-in-depth strategy. Organisations should adopt advanced threat detection and prevention systems, deploy AI-based security solutions, and regularly update their defences to counter evolving threats. These measures should include real-time monitoring, anomaly detection, and proactive response capabilities, and should be combined with regular penetration testing or red team assessments to confirm that the security solutions are configured correctly and working as expected.
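As a simple illustration of the anomaly-detection element of such a programme, the following Python sketch uses scikit-learn's IsolationForest to flag unusual authentication activity. The feature set, sample values, and contamination threshold are assumptions chosen purely for demonstration; a production deployment would draw its features from the organisation's SIEM and tune the model against its own baseline.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Hypothetical feature vectors derived from authentication logs:
    # [login hour, failed attempts in the last hour, megabytes downloaded].
    rng = np.random.default_rng(42)
    normal_activity = np.column_stack([
        rng.normal(10, 2, 500),   # logins clustered around office hours
        rng.poisson(1, 500),      # occasional failed attempts
        rng.normal(50, 15, 500),  # typical data transfer volumes
    ])

    # Train an unsupervised model on historical activity assumed to be benign.
    model = IsolationForest(contamination=0.01, random_state=42)
    model.fit(normal_activity)

    # Score new events: -1 flags an outlier worth investigating, 1 means normal.
    new_events = np.array([
        [11, 0, 45],    # ordinary daytime login
        [3, 12, 900],   # 3 a.m. login, many failures, very large download
    ])
    print(model.predict(new_events))  # expected output along the lines of [ 1 -1 ]

Behaviour-based detection of this kind complements, rather than replaces, signature-based tooling, and its output still needs triage by analysts to avoid alert fatigue.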
Secondly, fostering employee education and awareness plays a pivotal role in mitigating risks. Employees should be trained on cybersecurity best practices, including recognising advanced phishing and vishing attempts, avoiding suspicious links or downloads, and adhering to strict data handling policies. Regular awareness campaigns and training sessions can significantly reduce the risk caused by human error or negligence.
Thirdly, organisations must develop a comprehensive generative AI policy that encompasses ethical considerations, transparency, and accountability. This policy should address potential biases in AI algorithms, ensure the responsible use of AI, and establish guidelines for data privacy and security. Collaborating with experts in AI ethics and incorporating industry standards can help shape a robust policy framework.
Lastly, fostering collaboration within the cybersecurity community is crucial. Sharing threat intelligence, best practices, and emerging techniques for detecting and mitigating AI-enabled attacks can enhance the collective defence against evolving threats. Public-private partnerships, industry collaborations, and participation in forums and conferences dedicated to cybersecurity can facilitate knowledge exchange and innovation.
Summary
As the adoption of AI continues to proliferate across industries, the cybersecurity risks associated with this technology will grow in tandem. This article has highlighted the escalating threats posed by external threat actors and employees, emphasising the increasing number and complexity of attacks facilitated by AI. By implementing robust security measures, fostering employee awareness, and promoting responsible AI usage, organisations can effectively mitigate these risks and protect their digital infrastructure while still taking advantage of the technology.