Artificial intelligence in cybersecurity and privacy: A blessing or a curse?

“In a dark place we find ourselves and a little more knowledge lights our way.” —Yoda, Star Wars Episode III: Revenge of the Sith

A long time ago in a galaxy far, far away, the Rebel Alliance, or simply the Rebellion, was at war with the oppressive Galactic Empire (or just the Empire). That, in brief, is the plot of the “Star Wars” saga (in case you don’t know it).

The sci-fi franchise is a classic story of good versus evil, and it can be seen as the perfect metaphor for cybersecurity. Imagine cybersecurity and privacy professionals as the Rebellion (the good guys) who keep fighting cybercriminals (the bad guys, or the Empire). 

Just like in “Star Wars” where each side has different “weapons” to fight with—the Force, lightsabers or blasters—in cybersecurity, various tools and technologies can be used for good or bad purposes. For example, encryption, firewalls and penetration testing tools can improve security and protect systems, while malware, ransomware and social engineering can exploit weaknesses and cause damage. 

Artificial intelligence (AI) is now another tool shaping the cybersecurity landscape. But here’s the paradox: Its abilities can be wielded for both good and evil. The “Star Wars” movies show how ethics and decisions matter for individuals and societies. Similarly, we now need to choose whether advanced technologies like AI will ultimately help or hurt humanity.

This paradox of AI was the main theme of the 2024 edition of the PwC Cybersecurity and Privacy Day event, and the speakers agreed on it. One of them, Stan Scharnigg, Co-founder of Chunk Works, said: “AI has a dual nature: We can use it for good and for bad.” How do we walk this fine line?

In this blog, we examine the impact of AI on the cybersecurity and privacy realms, drawing on the knowledge exchanged during the event’s sessions and workshops. We also delve into how AI can be a powerful ally in defending against cyber threats, while recognising that it can also serve harmful purposes.

The impact of AI on cybersecurity 

Cybersecurity is a critical area of business. The field has evolved alongside technology and information systems to become an essential part of the modern world. We use the word essential because that world is already facing many problems, such as geopolitical fragmentation, conflicts, political division, economic difficulties, ageing populations and climate change.

This is something that Grant Waterfall, Partner and Cybersecurity & Privacy Leader at PwC Germany and PwC EMEA (Europe, Middle East and Africa), and Mika Lauhde, a Senior Fellow at the Maastricht University Faculty of Law’s Centre of Data Protection and Cybersecurity and a Fellow at the United Kingdom’s Institution of Engineering and Technology, pointed out. 

As both mentioned, the world today is more vulnerable to espionage and cyber-attacks, such as ransomware, malware, data theft and other cybercrimes. That’s why we need a “Rebellion” of our own, made up of cybersecurity and privacy experts, to protect society and businesses by strengthening the security and resilience of our networks and preventing privacy violations, financial losses and data breaches.

Artificial intelligence can play a significant role in cybersecurity and help address some of the difficulties it faces by offering new ways to detect, examine, and prevent online risks. However, it’s a double-edged sword (or lightsaber). As Grant said, the advancement of AI creates new opportunities to solve cybersecurity problems, but it also introduces new ones. 

Investigative journalist, speaker and author Geoff White shared a similar view: “AI systems can stop hackers, but they can also help them deal with big amounts of data,” simplifying their work. However, before exploring the negative side of AI, let’s first examine how AI can be a powerful force for good.

AI, a blessing in the fight against the bad guys 

AI helps cybersecurity and privacy professionals in many ways, enhancing their ability to protect systems, data, and users from various threats. For instance, it can analyse large volumes of data, spot anomalies, and identify suspicious patterns for threat detection, which helps to find unknown or sophisticated attacks. AI can also defend against cyber-attacks by analysing and classifying network data, detecting malware, and predicting vulnerabilities.  

At the PwC Cybersecurity and Privacy Day event, Dr Donia Elkateb, Senior IT Security Engineer at the European Investment Bank (EIB), emphasised that security breaches are still a reality today, and that AI can enhance development, security and operations in domains such as threat intelligence, vulnerability management and incident response.

According to Nico Sienaert, Senior GTM Lead Security at Microsoft, AI can strengthen an organisation’s security posture: it gives defenders real-time insight and context to continuously evaluate and improve that posture, and to investigate and respond to threats quickly and expertly.

Here are some ways that AI can enhance cybersecurity and privacy: 

  • Threat detection and response: AI systems can learn normal network behaviour and detect anomalies that may indicate cyber threats, such as unusual traffic patterns or unauthorised access attempts (see the sketch after this list). Moreover, AI-powered tools continuously check networks and systems, providing real-time alerts and responses to potential security incidents.
  • Predictive analytics: AI can analyse historical data to predict potential threats and vulnerabilities, allowing organisations to proactively address risks before they become critical. AI algorithms can also assess the risk levels of various assets and prioritise security measures based on potential impact.
  • Automated security processes: AI can automate routine security tasks, such as isolating compromised systems, applying patches, and updating firewalls, freeing up human resources for more complex tasks. Moreover, it can help in threat hunting by automatically sifting through vast amounts of data to find indicators of compromise and potential threats.
  • Enhanced data privacy: AI can help in dynamically masking or encrypting sensitive data, ensuring privacy while allowing data analysis and processing. It can also improve access control mechanisms by analysing user behaviour and adapting access permissions based on contextual factors.
  • Fraud detection: AI can analyse user behaviour to detect fraudulent activities, such as unusual login locations or transaction patterns, and trigger alerts or preventative actions.
  • Phishing and malware detection: AI can analyse email content and metadata to detect phishing attempts and malicious attachments, significantly reducing the risk of successful phishing attacks. Additionally, it can identify and classify malware by analysing its behaviour and characteristics, even detecting previously unknown variants.
  • Security analytics and reporting: AI can aggregate and analyse security data from various sources, providing comprehensive insights and detailed reports that help in making informed security decisions.
  • Vulnerability management: AI can continuously scan systems for vulnerabilities, ensuring prompt identification and remediation of security weaknesses. It can also help prioritise and automate the deployment of security patches, reducing the window of exposure to known vulnerabilities. 
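
To make the threat-detection idea in the first bullet more tangible, here is a minimal sketch of anomaly detection on network flows using scikit-learn’s IsolationForest. The flow features, thresholds and numbers are illustrative assumptions on our part, not a production detector:

```python
# Minimal sketch of anomaly-based threat detection (first bullet above).
# Assumes scikit-learn; the flow features and numbers are purely illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical baseline of normal flows: [bytes transferred, duration (s), failed logins]
normal_flows = np.column_stack([
    rng.normal(5_000, 1_500, 1_000),  # typical payload sizes
    rng.normal(30, 10, 1_000),        # typical session durations
    rng.poisson(0.2, 1_000),          # failed logins are rare
])

# Learn what "normal" looks like; outliers will then score as anomalies.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_flows)

# Score new traffic: an exfiltration-like flow next to an ordinary one.
new_flows = np.array([
    [900_000.0, 600.0, 0.0],  # huge transfer, very long session -> suspicious
    [5_200.0, 28.0, 0.0],     # looks like baseline traffic
])
for flow, verdict in zip(new_flows, detector.predict(new_flows)):
    print(flow, "ANOMALY -> alert" if verdict == -1 else "normal")
```

In a real deployment, the model would be retrained as traffic patterns drift, and alerts would feed a SIEM or an analyst queue rather than a print statement.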

Without any doubt, AI can be a potent weapon for cybersecurity and privacy professionals, empowering them by augmenting their skills and knowledge, enabling them to focus on more strategic and creative tasks. However, as Mr Lauhde noted at the end of his speech, AI can help humanity, but we’re not fully there yet. It’s time to flip the coin and look at how AI can be used by cybercriminals for harm. 

The dark side of AI: How hackers use it for evil 

AI has many advantages for cybersecurity and privacy, but we also need to be aware of how the bad guys use it. As Dr Elkateb said, attackers can use ChatGPT and other tools to make and manipulate malicious code. Here are some other examples: 

  • Adversarial machine learning: Attackers can manipulate AI systems by feeding them untrustworthy data, leading to malfunctions such as misdirection or incorrect decision-making (illustrated in the sketch after this list).
  • Brute force attacks: AI can be used to perform advanced brute force attacks by quickly generating and testing many possible passwords or encryption keys.
  • Social engineering: AI-powered systems can create persuasive messages or content to trick individuals into revealing sensitive information or performing actions that compromise security.
  • Automated exploit generation: Cybercriminals may use AI to automatically generate computer viruses or other types of malware, increasing the scale and efficiency of cyberattacks. 
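
To illustrate the adversarial machine learning bullet above, here is a deliberately toy sketch of training-data poisoning: a batch of mislabelled samples injected into the training set can shift a simple classifier’s verdict on clearly malicious input. This is our own illustrative assumption of the technique; real attacks and defences are far more sophisticated.

```python
# Toy sketch of training-data poisoning (adversarial machine learning).
# Purely illustrative: real-world attacks and defences are far more involved.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Clean training data: class 0 (benign) clusters near 0, class 1 (malicious) near 4.
X_clean = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
y_clean = np.array([0] * 100 + [1] * 100)

# Poison: inject malicious-looking points deliberately labelled "benign".
X_poison = rng.normal(4, 0.5, (150, 2))
y_poison = np.zeros(150, dtype=int)

clean_model = LogisticRegression().fit(X_clean, y_clean)
poisoned_model = LogisticRegression().fit(
    np.vstack([X_clean, X_poison]),
    np.concatenate([y_clean, y_poison]),
)

# The same clearly malicious sample is judged differently after poisoning.
sample = np.array([[4.0, 4.0]])
print("clean model:   ", clean_model.predict(sample)[0])     # expected: 1 (malicious)
print("poisoned model:", poisoned_model.predict(sample)[0])  # typically: 0 (benign)
```

Defences against this kind of manipulation include validating the provenance of training data and monitoring deployed models for drift.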

The malicious uses of AI may be fewer than the beneficial ones, but their impact on the organisations that suffer them can be severe. Clearly, as AI technology advances, so do the strategies for both protecting and compromising digital systems. Security professionals should not ignore the risks of AI, but rather prepare for them by using AI to enhance their capabilities and reduce their vulnerabilities.

Getting the “Rebellion” ready for AI 

As Dr Elkateb said during her talk, “embracing artificial intelligence in application security is no longer optional. As attackers are increasingly leveraging AI, integrating AI defences is crucial to stay ahead in the cybersecurity game. Without it, we risk falling behind.”

Consequently, cybersecurity and privacy professionals, and their organisations, should prepare for AI-driven cyber threats by adopting a multi-faceted approach to enhance their defences while minimising risks and ensuring ethical use of technology. Such an approach involves three main pillars—people, processes and governance—and includes the following best practices: 

  • Evaluate and select the right AI tools: Identify organisational needs where AI can help the most. Compare AI tools based on factors like accuracy, scalability, integration, and vendor support. Pick tools that have strong threat detection, predictive analytics, and automated response features.
  • Achieve a seamless integration: Ensure AI tools can work smoothly with existing security systems and infrastructure. This includes compatibility with SIEM (Security Information and Event Management) systems, firewalls, and endpoint protection solutions. Moreover, use Application Programming Interfaces (APIs) provided by AI tools to enhance data sharing and interoperability with other security systems.
  • Maintain high standards for data quality and privacy: Use high-quality, clean and relevant data for AI training and operations; poor data quality can lead to inaccurate predictions and ineffective security measures. Also, apply strong data privacy measures to safeguard confidential information used by AI tools (see the pseudonymisation sketch after this list), in line with regulations such as the General Data Protection Regulation (GDPR).
  • Foster cross-disciplinary cooperation: Encourage collaboration between cybersecurity professionals, data scientists, and AI experts to effectively implement and manage AI tools.
  • Develop robust AI governance and ethics: Develop a governance framework that outlines the ethical use of AI, data handling practices and accountability measures. Ensure AI implementations adhere to ethical standards, including transparency, fairness and the avoidance of bias in AI algorithms. Lastly, make sure management is aware of, and on board with, your security strategy.
  • Implement robust security frameworks: Adopt a zero-trust approach to minimise risks associated with AI-driven threats. Conduct regular security audits and risk assessments to identify and mitigate vulnerabilities.
  • Develop incident response plans: Create incident response plans that specifically address AI-driven attacks. Conduct regular drills and simulations to test the effectiveness of response plans.
  • Continuously monitor and assess AI tools: Check the performance of AI tools to ensure they are effectively identifying and mitigating threats. Regularly evaluate and update AI models to adapt to emerging threats and changes in the threat landscape.
  • Automate: Employ AI to handle routine and repetitive security tasks such as log analysis, threat detection, and incident response. This will free up human resources to focus on more strategic activities such as threat intelligence, policy development, and advanced threat hunting.
  • Follow the rules: Keep AI deployment in line with regulations and compliance requirements. This can be challenging because, as Grant noted, we are in the new era of cybersecurity transparency: Regulators are developing new regulations and becoming stricter, and we are moving towards mandatory disclosure.  

Herwig Hofmann, Professor of European and Transnational Public Law and Head of the Department of Law at the University of Luxembourg, confirmed this trend during his speech. He talked about the many rules and processes organisations have to follow today, and the new entities and agencies that have been, or are being, set up to manage these efforts. In his opinion, we are heading towards more immediate information flows and faster regulatory reactions.

  • Encourage continuous learning and skill development: Keep up with the latest advancements in AI and machine learning as they apply to cybersecurity. Regularly read industry publications, attend conferences, and participate in webinars. Engage in specialised training programmes and certifications focused on AI and its applications in cybersecurity.
  • Enhance user awareness and training: “Security has to be the responsibility of everyone,” said Dr Elkateb during the event. She also underlined the importance of building a community of security champions and a culture around application security in order to decentralise it. Accordingly, use AI to create personalised training programmes that address specific user behaviours and knowledge gaps. You can also conduct AI-driven simulated attacks and drills to test and improve user readiness and response to real threats.
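
As a minimal illustration of the data-privacy practice above, the following sketch pseudonymises identifying fields before records are fed to an analytics or AI pipeline. The field names and key handling are assumptions for the example; in practice, the key would live in a vault or hardware security module:

```python
# Minimal sketch of deterministic pseudonymisation before data reaches an
# AI/analytics pipeline. Field names and key handling are illustrative;
# in production the key would come from a vault or HSM, not source code.
import hmac
import hashlib

SECRET_KEY = b"replace-with-a-vault-managed-key"  # assumption: managed secret

def pseudonymise(value: str) -> str:
    """Stable, non-reversible token: the same input always yields the same
    token, so joins and frequency analysis still work on masked data."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

record = {"user": "jane.doe@example.com", "src_ip": "10.0.0.12", "bytes": 48213}

masked = {
    "user": pseudonymise(record["user"]),
    "src_ip": pseudonymise(record["src_ip"]),
    "bytes": record["bytes"],  # non-identifying field kept as-is
}
print(masked)
```

Note that under the GDPR, pseudonymised data is still personal data, so this complements rather than replaces access controls and data minimisation.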

By adopting these best practices, cybersecurity and privacy professionals can better prepare for AI-driven cyber threats and effectively use AI to enhance their defences. This dual approach ensures that they stay ahead of emerging threats while maximising the benefits of AI in their security operations.  

Conclusion 

One of the key takeaways from the PwC Cybersecurity and Privacy Day event was the balanced and realistic perspective that all speakers had on AI and its implications for cybersecurity. They didn’t overestimate or underestimate the capabilities and limitations of AI, nor did they succumb to the hype or fear that often surrounds this topic.  

Instead, they focused on the practical and concrete aspects of how AI can be used for good or evil, and how cybersecurity professionals need to navigate the balance between risk and opportunity. We tried to do the same with this blog. 

We showed you how, on one hand, cybersecurity professionals can use AI as a tool to improve their threat detection, prevention and response capabilities, secure organisations’ digital assets and sensitive information, and comply with the relevant regulations and standards.

But to harness its potential and minimise its risks, they need to adopt some best practices such as continuous monitoring, learning, and training. They also need to be agile and resilient, adjusting to the changing and uncertain future. To quote Mr Scharnigg, they need to act like the Bruce Lee of cybersecurity and “be like water”. 

On the other hand, cybercriminals can take advantage of AI to automate and enhance their attacks. AI can create convincing phishing schemes, adaptive malware, and automated vulnerability discovery. These pose serious challenges for traditional security measures. 

The ever-evolving world of cybersecurity is, and will continue to be, a constant battle between security professionals and cybercriminals. AI has become part of this world, and it won’t go away, making it more complex and creating a paradox: Is it a tool or a weapon? An ally or a foe? A blessing or a curse? It all depends on which side you are on. And just like in “Star Wars”, it’s up to you to decide whether to follow the light or the dark side of the Force.

We hope you choose the light because, as Grant remarked, we need to work together to build a secure digital society. 

What we think 
Maxime Pallez, Cybersecurity Manager at PwC Luxembourg

Artificial intelligence holds immense potential in bolstering cybersecurity defences by swiftly identifying and neutralising threats. However, it must be wielded with caution, as the same tools that protect us can also be exploited by adversaries, requiring a balanced approach that emphasises both innovation and ethical vigilance.

Simon Petitjean, Cybersecurity Director at PwC Luxembourg

AI is a double-edged sword in cybersecurity: Attackers have a significant advantage as they can adopt new technologies extremely quickly, bypassing the lengthy validation and testing processes that organisations must undergo. To counter this, defenders need to urgently harness AI’s potential to keep pace with the ever-evolving threats.

