Editor’s Question: How are you approaching cybersecurity with regards to AI?

AI has brought with it new, sophisticated cyberattacks. In this month’s Editor’s Question, five experts outline effective approaches to cybersecurity in the age of AI, starting below with Maurice Uenuma, VP & GM Americas at Blancco.

AI is a transformative technology that, while still nascent, already shows great potential to empower hackers and cybersecurity professionals alike. Attackers will benefit from more realistic social engineering schemes, faster identification of exposed vulnerabilities and more efficient development of new exploits. At the same time, defenders will be able to leverage AI-enabled security platforms to more rapidly and accurately detect attacks underway, identify and mitigate vulnerabilities, develop and deploy patches more quickly and so forth. AI will be an integral part of cybersecurity going forward, and CXOs will need a working knowledge of both its offensive and defensive uses.

GenAI tools will be leveraged for a broad range of personal and business uses, so we must build security and privacy controls into these systems at the outset, while encouraging – and enforcing, when necessary – their responsible use. Take GenAI platforms (like ChatGPT) as a good example. Without sufficient guardrails, the use of GenAI by employees can pose a big risk to an organisation and to the employees themselves. Yes, these are powerful tools that can boost both creativity and productivity at work, but by using them, employees may accidentally (or even intentionally, in some cases) sidestep important security controls and safeguards. Sensitive information shared as input to GenAI platforms could ultimately be exposed to the public, and there is even the possibility of GenAI piecing together ‘clues’ that reconstruct accurate corporate data or Personally Identifiable Information (PII), which should be protected under current regulation.

Training employees on the risk of unintended data exposure through public GenAI platforms is crucial. Organisations also need to create and update internal policies around what can and cannot be shared as a prompt in a public GenAI tool, as well as policies around what data is stored or regularly erased. A disciplined approach to the data that employees collect, including the importance of regular data sanitisation to remove unnecessary ‘ROT’ data (redundant, obsolete and trivial), will help to significantly reduce an organisation’s data attack surface in the age of AI. These internal policies are particularly important given the rapid rate of change that regulators will struggle to keep up with. The EU AI Act is certainly a positive step in the right direction, but organisations need to pay close attention to new and evolving standards to ensure their AI practices are compliant.
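
To make this concrete, below is a minimal sketch of what an automated pre-submission check might look like: a hypothetical filter that redacts PII-like strings before a prompt ever reaches a public GenAI tool. The patterns and the screen_prompt function are illustrative assumptions, not a description of Blancco’s tooling or any specific product.

```python
import re

# Hypothetical patterns for common PII; a real deployment would rely on a
# vetted DLP tool and organisation-specific rules.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "uk_ni_number": re.compile(r"\b[A-Z]{2}\d{6}[A-Z]\b"),
}

def screen_prompt(prompt: str) -> str:
    """Redact PII-like strings before a prompt leaves the organisation."""
    redacted = prompt
    for label, pattern in PII_PATTERNS.items():
        redacted = pattern.sub(f"[REDACTED {label.upper()}]", redacted)
    return redacted

raw = "Summarise this complaint from jane.doe@example.com about card 4111 1111 1111 1111."
print(screen_prompt(raw))
# Summarise this complaint from [REDACTED EMAIL] about card [REDACTED CARD_NUMBER].
```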

This technology is rightly being embraced across businesses to enhance operational efficiency and improve productivity, and in turn business outcomes. Yet in terms of future resilience-building, now is the time to carefully consider how GenAI could go wrong and identify ways to mitigate risks in its design, deployment and use. Given the general uncertainty of future risks associated with a new, transformative technology, we must approach security strategy with an emphasis on resilience-building: ensuring that critical systems can continue to operate as intended even when degraded or compromised (for any reason). Our team at Blancco is therefore approaching AI with these considerations in mind and continues to emphasise data management best practice as an essential part of staying secure.

Sanjay Macwan, CIO and CISO, Vonage:

AI hugely enhances the effectiveness of cyberattacks. AI-bolstered attacks can automatically pinpoint unknown system vulnerabilities, exfiltrate massive amounts of data from systems and evade detection by mimicking normal user behaviour.

CX teams have a duty to protect customer data, most importantly their financial details, and so must keep critical systems such as CRMs under careful observation to safeguard against AI cybersecurity threats. The best way for enterprises to keep sensitive data secure and maintain an excellent customer experience is to fight back with the same technology: AI-powered threat detection.

AI-driven attacks are especially pervasive because they can analyse massive datasets too quickly and evasively for a human analyst to detect. Defensive AI systems can automate the same processes to locate patterns within datasets that suggest a cybersecurity threat, for example by establishing baselines for normal system behaviour, measuring abnormal activity against them and detecting sophisticated cyberattacks before they can cause harm.
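
As a rough illustration of the baselining idea, the sketch below learns a simple mean-and-deviation baseline from past activity and flags observations that fall far outside it. The metric (hourly login counts) and the three-sigma threshold are illustrative assumptions; production systems use far richer features and models.

```python
from statistics import mean, stdev

def build_baseline(history: list[float]) -> tuple[float, float]:
    """Learn a simple baseline (mean, standard deviation) from past activity."""
    return mean(history), stdev(history)

def is_anomalous(observation: float, baseline: tuple[float, float], z_threshold: float = 3.0) -> bool:
    """Flag observations that deviate from the baseline by more than z_threshold sigmas."""
    mu, sigma = baseline
    if sigma == 0:
        return observation != mu
    return abs(observation - mu) / sigma > z_threshold

# Hypothetical hourly login counts for one account over recent days.
history = [4, 5, 3, 6, 4, 5, 4, 6, 5, 4]
baseline = build_baseline(history)
print(is_anomalous(5, baseline))    # False: within the normal range
print(is_anomalous(120, baseline))  # True: a spike worth escalating to an analyst
```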

Organisations must maintain good data hygiene habits to ensure that their AI models are effective; they’re only as good as the data they are trained on. To start with, essential data protection practices should comply with regulatory frameworks, such as GDPR and PCI DSS. These require data encryption, firewall installation, strong access controls and regular network monitoring, alongside other practices for safeguarding sensitive information, especially payment data. Data masking techniques can also protect sensitive information when training AI models by replacing real data with fictitious but structurally similar data to avoid compromising sensitive information. 
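
As a small illustration of the masking idea, the snippet below replaces real digits with random ones while preserving the format, so a training pipeline sees realistic-looking but fictitious payment data. The mask_card_number helper is a hypothetical example, not a specific product’s method; common variants also preserve the issuer prefix or last four digits for referential integrity while still breaking the link to the real account.

```python
import random

def mask_card_number(card_number: str) -> str:
    """Replace each digit with a random one, preserving the overall format so
    downstream AI pipelines see realistic-looking but fictitious payment data."""
    return "".join(str(random.randint(0, 9)) if ch.isdigit() else ch for ch in card_number)

print(mask_card_number("4111-1111-1111-1111"))  # e.g. '7302-5849-0217-6634': same shape, no real data
```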

CX officers should ensure that there are clear governance policies in place for data protection and retention, including regular audits of customer information and data entry training for CX teams. It’s important to remember that no AI model is free from bias, so models should be regularly assessed to ensure that all customer data is treated equally.

For businesses using cloud-based unified communications tools to engage with customers, cloud-specific ransomware and fraud protection tools are key to countering threats. While APIs can provide additional security, such as authentication and rate limiting, teams should primarily be focused on implementing stringent security testing and management practices.
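
For readers less familiar with the rate limiting mentioned above, here is a minimal sliding-window sketch; the limit and window size are arbitrary illustrative values rather than a recommendation.

```python
import time
from collections import deque

class SlidingWindowRateLimiter:
    """Allow at most `limit` calls per `window_seconds` for each client."""

    def __init__(self, limit: int = 5, window_seconds: float = 60.0):
        self.limit = limit
        self.window = window_seconds
        self.calls: dict[str, deque] = {}

    def allow(self, client_id: str) -> bool:
        now = time.monotonic()
        q = self.calls.setdefault(client_id, deque())
        while q and now - q[0] > self.window:
            q.popleft()          # forget calls that have fallen out of the window
        if len(q) >= self.limit:
            return False         # over the limit: reject or throttle the request
        q.append(now)
        return True

limiter = SlidingWindowRateLimiter(limit=3, window_seconds=1.0)
print([limiter.allow("client-a") for _ in range(5)])  # [True, True, True, False, False]
```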

Muhammad Yahya Patel, Lead Security Engineer, Check Point Software:

Cybercriminals have never been better equipped to cause mass disruption and damage to organisations. Generative AI hit the mainstream in November 2022 with OpenAI’s ChatGPT. At the time of its launch, it was considered a relatively benign tool. The greatest concern was students cutting corners in essay writing. However, leading technology experts and governments across the world are actively working on regulation and legislation for the technology owing to fears it could weaponise disinformation, discrimination and impersonation.

This move towards weaponisation is something we have already seen. As early as a month after the platform was widely available, our researchers identified Large Language Models (LLMs) being used to lower the bar for code generation, helping unskilled threat actors effortlessly launch cyberattacks. In some cases, ChatGPT was creating a full infection flow, from spear-phishing to running a reverse shell. It is only a matter of time before we see automated malware campaigns launched more quickly than humans can respond to them.

Obstacles continue to emerge as the fight to protect critical services from advancing AI-generated threats develops each day. Attacks on the UK’s critical infrastructure bring cybersecurity to the forefront of conversation – which of course is a good thing – but they also highlight an urgent need for more robust security around our public services. Public awareness and understanding of AI are growing, but with that comes the question of how to address the anticipated global risks of AI, a question that is becoming increasingly difficult to answer.

In practical terms, it means fighting fire with fire – specifically, leveraging the technology that can cause destruction for defensive action, to fortify IT infrastructure and bolster the cybersecurity team’s capabilities. The same speed and automation fuelling these attacks can be used to strengthen our defences. Something else that is top of mind for cybersecurity professionals is how to protect their AI assets and the associated data lakes. AI poisoning and unintended data sharing are very much areas of concern, so building the right controls around them will enable security teams to proactively identify vulnerabilities and weaknesses in systems, applications and networks before they can be exploited.

Generative AI represents a double-edged sword in the world of cybersecurity. For attackers, it is an accelerant for criminal activity, while for defenders it could help stamp out those rapidly growing fires. For example, by using it to generate realistic, synthetic data that mirrors real-world cyberthreats, we can augment existing threat intelligence feeds, providing cybersecurity professionals with a broader and more diverse set of data to analyse. By improving our understanding of emerging threats and countermeasures, we can stay ahead of potential attackers.
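
As one narrow, hypothetical example of that augmentation idea, the snippet below generates lookalike variants of known-bad phishing domains so a detection model sees more variety than a raw threat feed alone provides. The seed domains and substitution rules are purely illustrative.

```python
# Known-bad domains from an existing threat feed (illustrative examples only).
seed_domains = ["paypa1-login.com", "micros0ft-support.net"]

# Simple character swaps attackers commonly use; real generators are far richer.
SUBS = {"o": "0", "0": "o", "l": "1", "1": "l"}

def synthetic_variants(domain: str) -> set[str]:
    """Produce lookalike variants of a known-bad domain to enrich training data."""
    variants = set()
    for i, ch in enumerate(domain):
        if ch in SUBS:
            variants.add(domain[:i] + SUBS[ch] + domain[i + 1:])
    return variants

augmented_feed = set(seed_domains)
for domain in seed_domains:
    augmented_feed |= synthetic_variants(domain)
print(sorted(augmented_feed))
```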

Tris Morgan, Managing Director of Cybersecurity, BT Group:

At BT, we’re embracing AI as a way of revolutionising how we look at security, from automating processes to alleviate pressure on our teams to spotting and responding to threats even more quickly.

For instance, security teams are only able to review around 12,000 of the 174,000 alerts they receive per week, on average. Our analysis also found that hackers scan business and personal networks at least once every 30 seconds to find potential weaknesses that act as footholds into a system. At this rate of attempted attacks, it’s impossible for teams to keep up with these threats alongside the other areas of their roles. That is where a focus on an intelligent automation (IA) strategy comes in: the practice of using Machine Learning to automate the basics and complement the intelligence of humans, so they’re free to make more meaningful contributions.
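
As a loose sketch of what ‘automating the basics’ can mean in practice, the snippet below scores alerts with simple rules and only escalates those above a threshold to human analysts. The rules, weights and threshold are illustrative assumptions, not BT’s actual triage logic.

```python
# Hypothetical alert queue; fields and values are illustrative only.
ALERTS = [
    {"id": 1, "source": "known-scanner", "severity": 2, "asset_critical": False},
    {"id": 2, "source": "unknown", "severity": 8, "asset_critical": True},
    {"id": 3, "source": "unknown", "severity": 4, "asset_critical": False},
]

def triage_score(alert: dict) -> int:
    """Crude illustrative scoring: severity, boosted for critical assets and
    discounted for traffic from already-known benign scanners."""
    score = alert["severity"]
    if alert["asset_critical"]:
        score += 3
    if alert["source"] == "known-scanner":
        score -= 5
    return score

# Only alerts above the threshold reach a human analyst; the rest are auto-closed or queued.
escalated = [alert for alert in ALERTS if triage_score(alert) >= 5]
print([alert["id"] for alert in escalated])  # [2]
```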

It’s important to remember that AI has proven to be both a benefit and a risk, as organisations and the criminals focused on attacking them find new ways to utilise this technology. There’s no doubt that AI has changed the type of attacks we’re facing. Deepfake scams, for example, are using AI to create more realistic audio and visuals to impersonate trusted individuals and trick victims into sharing sensitive data or making payments.

At the same time, organisations have developed new solutions to defend against evolving attacks. For instance, our Eagle-i platform combines the latest advances in AI and automation with our unique network insight to predict, detect and neutralise security threats before they can inflict damage. The platform also suggests what kind of policies need to be implemented in a firewall to better protect against future attacks, meaning that companies have the option to actively assess their security protocols rather than just having to trust their current tools to keep them safe. This also helps businesses to keep up with the latest updates and developments in the industry.

Integrating AI can provide teams with greater visibility, control and security, alongside crucial AI-powered insights that help companies to optimise the digital experience, reduce risk and enable better business decisions. But AI shouldn’t be viewed as a solution to all cyberthreats, and it’s important to remember the technology still has limitations. It’s also about culture and empowering a workforce that needs to be aware of the threats that AI can pose and what to do in the case of a deepfake scam or spear-phishing attack. AI should be integrated alongside a comprehensive security strategy and frameworks such as zero trust, which require all users, both inside and outside an organisation, to go through an authentication process. This means cyber teams can always keep track of who is in the network and therefore quickly identify and block potential threats.
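
As a toy illustration of the zero trust principle described above, where every request is authenticated and authorised wherever it originates, the sketch below checks both identity and policy on each call. The token store and policy table are hypothetical stand-ins for an identity provider and a policy engine.

```python
# Toy zero trust check: every request is verified for identity and policy,
# whether it originates inside or outside the network. The token store and
# policy table are hypothetical stand-ins for real identity and policy services.
VALID_TOKENS = {"token-abc": "alice", "token-xyz": "bob"}
POLICY = {("alice", "crm"): {"read"}, ("bob", "crm"): {"read", "write"}}

def authorise(token: str, resource: str, action: str) -> bool:
    user = VALID_TOKENS.get(token)
    if user is None:
        return False  # never trust an unauthenticated caller, even on the internal network
    return action in POLICY.get((user, resource), set())

print(authorise("token-abc", "crm", "write"))    # False: alice has read-only access
print(authorise("token-xyz", "crm", "write"))    # True
print(authorise("forged-token", "crm", "read"))  # False: unknown token
```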

Venkatesh Ravindran, VP Enterprise Security and Resilience, Colt Technology Services:

At Colt, we passionately believe that cybersecurity in the telecommunications sector is not just a concern but a fundamental necessity that underpins the entire industry. AI has raised the stakes in cybersecurity, providing new tools to strengthen our defences while also presenting new challenges. AI offers the potential to automate and enhance cyberdefences, making them more robust and efficient. However, it also poses risks: malicious actors can exploit AI to launch sophisticated attacks and, as AI relies on large data volumes, there is the risk of data exposure and breaches. Our approach is to harness the opportunities and benefits presented by AI while preparing for scenarios where malicious actors can misuse it.

Below are the four focus areas Colt is exploring:

  1. Proactive threat detection:

This involves leveraging the capability of AI to analyse large datasets, such as alerts, logs, attack signatures and patterns, to identify and study threats and alert the SOC (Security Operations Centre) analyst with actionable insights before the threat can cause harm, ensuring a proactive rather than reactive approach to cybersecurity.

  2. Prioritised vulnerability management:

This focuses on using AI to analyse large sets of data, such as vulnerability data, asset data and exposure data, to prioritise vulnerabilities, allowing us to address the most critical issues first and allocate resources more effectively (see the sketch after this list).

  3. Automated security orchestration & response:

Once AI has completed the threat assessment and presented it to human security analysts to act on for more complex use cases, we can, in parallel, use AI-driven automation to orchestrate and respond to security incidents quickly and efficiently where human intervention is not required, minimising the impact of potential threats.

  4. Risk quantification and compliance monitoring:

The last of the key focus areas involves leveraging AI’s ability to process large volumes of data, such as historical security incident data, vulnerability data, gap assessments and risk reports, to identify patterns that may indicate potential risks and then provide accurate risk scoring. Similarly, it involves tracking changes in regulation to ensure controls remain up to date.
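
As referenced in the second focus area above, here is a minimal sketch of the prioritisation idea: combining vulnerability severity with asset criticality and internet exposure to decide what gets fixed first. The weighting is an illustrative assumption rather than Colt’s actual model.

```python
# Hypothetical findings: severity (e.g. a CVSS base score), how critical the
# asset is to the business, and whether the service is exposed to the internet.
findings = [
    {"cve": "CVE-A", "severity": 9.8, "asset_criticality": 0.4, "internet_exposed": False},
    {"cve": "CVE-B", "severity": 7.5, "asset_criticality": 0.8, "internet_exposed": True},
    {"cve": "CVE-C", "severity": 5.0, "asset_criticality": 0.2, "internet_exposed": False},
]

def priority(finding: dict) -> float:
    """Illustrative weighting: exposure and asset criticality can outrank raw severity."""
    exposure_factor = 1.5 if finding["internet_exposed"] else 1.0
    return finding["severity"] * finding["asset_criticality"] * exposure_factor

for finding in sorted(findings, key=priority, reverse=True):
    print(finding["cve"], round(priority(finding), 2))
# CVE-B ranks first despite its lower raw severity, because it sits on a
# critical, internet-facing asset.
```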

In addition to these AI-driven initiatives, we are continually updating our security protocols to stay ahead of the evolving threat landscape. This includes:

  1. AI Forum:

We hold forums to focus on the adoption of responsible AI use and the development of security practices to protect AI systems, as well as raising awareness about AI-related risks and how to mitigate them.

  2. Cyber controls:

Implementing guardrails, such as DLP (data loss prevention) and access reviews, to protect against exposure and breaches of the data collected and processed by AI.

Our approach to cybersecurity in the age of AI is evolving. By leveraging AI to enhance our defences, preparing for AI-powered attacks and cultivating a culture of security within our organisation, we aim to protect our network and our customers from the ever-evolving threats in the digital world.
