Hacking minds, not systems: how AI is flipping cybersecurity on its head 

Written in collaboration with Adam Walder, a member of The Blog team. 


Every year, our PwC Cybersecurity and Privacy Day brings together the sharpest minds to explore the latest developments and challenges in a world where cyber threats evolve faster than the plot twists in your favourite TV series.  

Through thought-provoking insights and dynamic discussions, the event offers a clear view of the ever-changing cybersecurity and privacy landscape. It’s a platform for industry professionals to connect and share fresh strategies and collaborative solutions that help businesses and individuals confidently navigate the evolving digital world—because, let’s be honest, hackers aren’t waiting around for an invite. 

If you’re curious about how cybersecurity went from basic firewalls to an AI-fuelled chess match against cybercriminal masterminds, then this blog is for you. It recaps our PwC Cybersecurity and Privacy Day events, which focused on the impact of artificial intelligence (AI) on cybersecurity and privacy. We uncover everything from hacker attacks and regulation to the investment hype around AI, plus what Gen Z really wants from a career in cyber. 

Welcome to the age of AI-driven cybersecurity 

Over the past decade, cybersecurity has gone from ‘install a good antivirus and you’re good to go’ to a full-blown AI-powered battlefield. The old rules simply don’t cut it anymore. The crossroads of AI and cybersecurity have ushered in a new era where traditional security measures are being redefined, restructured, and sometimes completely dismantled.  

Today, the challenge goes beyond keeping systems secure. It’s also predicting cybercriminals’ moves before they happen. Think of it as a chess game where your opponent has an AI-powered supercomputer and you’re still reading the rulebook. 

In this evolving landscape, businesses need to adapt swiftly yet strategically to stay resilient, using AI not just as a defence, but as a powerful ally in an increasingly complex cyber arms race. After all, cybercriminals are also turning to AI to enhance their attacks. 

At the same time, organisations need to avoid deploying AI blindly. Its impact on cybersecurity and privacy, particularly the associated risks, needs to be carefully assessed. Reactive security is a thing of the past. Today, success hinges on predictive, AI-driven strategies and, above all, smart collaboration between humans and machines. 

Cybersecurity’s evolution: a decade of change 

During the event, Frédéric Vonner, Advisory Partner, and Simon Petitjean, Cybersecurity Director at PwC Luxembourg, reflected on how cybersecurity has evolved over the past ten years. ‘What started as a discussion purely about digital security now includes privacy concerns and the game-changing influence of AI,’ they explained. In other words, AI didn’t simply join the cybersecurity party. It also took over the DJ booth, rewrote the playlist, and started setting the dance floor on fire. 

One of the more alarming aspects of AI is that cybercriminals are now using it as part of their arsenal—pushing its role far beyond defence. AI-powered attacks can automate hacking attempts, create deepfake content that’s almost impossible to detect, and craft phishing emails so convincing they could trick your bank manager. As a result, businesses need to adopt AI-driven security strategies capable of countering these emerging threats.  

AI: Cybersecurity’s greatest ally… and biggest threat 

Peter Avamale, Director of Cybersecurity, Resilience, and Privacy at PwC Netherlands, painted a vivid picture of AI’s impact on cybersecurity. ‘AI-generated code is advancing at lightning speed, startups are rushing to integrate AI, and CEOs are torn between excitement and panic—kind of like parents handing car keys to a teenager,’ he joked. 

AI is making cybersecurity both more efficient and more terrifying. In a survey of 4,000 CEOs, more than half expressed concern about security risks, even as they acknowledged AI’s potential to significantly boost productivity. And for good reason—how can you be sure the AI tool you are using is secure and respects privacy? Spoiler: if you’re not checking, cybercriminals almost certainly are. 

AI-powered attacks may have seemed like a distant, theoretical threat, but they’re now a reality. The Samsung data breach in 2023 serves as a cautionary tale. The key to staying protected is to implement strong governance, secure data lifecycles, and AI-powered security systems that can detect threats before they escalate.  

The clear message Peter conveyed is that AI isn’t something you control; it’s something you collaborate with strategically. 

AI: The investment boom vs. the hype 

As AI reshapes cybersecurity, it’s also making waves in the venture capital world. According to Nazo Moosa, Managing Director at Paladin Capital, ‘Half of all venture capital investment last year was in AI’. That’s right—half. However, she was quick to point out that not everything labelled as AI actually qualifies as such. ‘Plenty of things that aren’t AI are calling themselves AI,’ she quipped. (Yes, we are looking at you, ‘AI-powered’ toasters that still burn the bread.) 

Nazo stressed that, with so much money flowing into AI, the cybersecurity industry has to be extra cautious. Not all AI tools are created equal, and some are more marketing fluff than actual innovation. She also underlined the need for AI-specific security solutions, especially in areas like AI posture management and runtime protection. Because if AI is going to take over the world, cybersecurity better figure out how to keep it in check. 

This was a sentiment mirrored by Nicolas Remarck, Chief Information Security Officer and Head of ICT Risk at Banque Internationale à Luxembourg (BIL), who is also a podcaster. His phrase, ‘AI is eating venture capital for lunch,’ encapsulates the current trend of significant investments pouring into AI-related startups and projects. The evidence backs him up.  

In an article ‘10 signs AI is eating the world (of venture capital),’ the author writes: ‘The effect could be dramatic since the entire US-based tech startup ecosystem faces numerous headwinds at present. Not least is an already crowded $1 trillion backlog of non-AI unicorns that are primed for exits, due to investors and traditional acquirers narrowing focus on AI.’ 

Nicolas pointed out that, in today’s technological tidal wave, AI has become a buzzword, captivating both investors and tech enthusiasts. However, AI’s complex nature often leaves many people bewildered. That’s why he is on a mission to make topics like cybersecurity and AI accessible to everyone.  

This mission is reflected in his podcast La cybersécurité expliquée à ma grand-mère (Cybersecurity explained to my grandmother), where he simplifies intricate concepts into terms that even his grandmother can understand. This includes demystifying what AI is and what it isn’t. 

What’s keeping experts up at night? 

Maxime Clementz, Ethical Hacker and Cybersecurity Senior Manager at PwC Luxembourg, is known for his dedication to monitoring the dark net for emerging threats and trends, often sacrificing sleep for the greater good.  

During his talk at the event, he explored the latest trends in cybersecurity but reminded everyone not to overlook the usual suspects. From phishing scams and AI hacks to Endpoint Detection and Response (EDR) evasion techniques, cloud vulnerabilities, and social engineering, the classics still do plenty of damage. He also highlighted emerging risks, including prompt injections, quantum computing threats to cryptography, and the urgent need for quantum-safe solutions. 

Regulation: The necessary evil or cybersecurity and privacy’s best friend? 

With great power comes… a whole lot of regulations. Alain Herrmann, Data Protection Commissioner at Luxembourg’s National Commission for Data Protection (CNPD), posed the ultimate question: ‘Does regulation kill innovation?’ The answer, as always, is complicated. 

Deploying AI requires compliance with the General Data Protection Regulation (GDPR), and now, with the introduction of the EU Artificial Intelligence Act, even more legal complexities are emerging.  

Transparency, automated decision-making, and data retention are just a few of the challenges businesses need to navigate. Regulatory sandboxes—controlled environments for testing AI—are becoming crucial to ensuring AI systems don’t go rogue before they are fully deployed. 

Meanwhile, Michael Segall, Associate General Counsel for European Privacy Strategy at Amazon, emphasised that regulations aren’t here to kill innovation; they’re here to make sure AI evolves responsibly. ‘GDPR supports innovation. But it must be done ethically and responsibly,’ he said.  

One of the biggest challenges is anonymisation. AI systems don’t forget, and making sure they don’t leak personal data is no small task. Here are several pointers to help with this: 

  • Avoid assuming models are anonymous by default: just because personal data isn’t visibly present in outputs doesn’t mean it isn’t embedded in model parameters. 
  • Test for re-identification risks: regularly audit models to see if any personal data could be indirectly retrieved. 
  • Design for privacy upfront: build models with privacy-by-design principles. Don’t feed in sensitive or unnecessary personal data from the start. 
  • Use differential privacy: when training on personal data is unavoidable, use techniques that obscure individual data points to prevent leakage.

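To make the last pointer concrete, here is a minimal, illustrative sketch of the Laplace mechanism, the textbook differential-privacy technique for releasing an aggregate statistic. The function names, the age data, and the chosen epsilon are our own illustrative assumptions, not part of any product or standard mentioned at the event:

```python
import math
import random

def laplace_noise(scale):
    """Sample from a Laplace(0, scale) distribution via the inverse-CDF transform."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_mean(values, lower, upper, epsilon):
    """Differentially private mean of a list of numbers.

    Each value is clamped to [lower, upper] so the query's sensitivity is
    bounded; Laplace noise calibrated to epsilon is then added to the result.
    Smaller epsilon means stronger privacy but a noisier answer.
    """
    clamped = [min(max(v, lower), upper) for v in values]
    true_mean = sum(clamped) / len(clamped)
    # Changing one record shifts the clamped mean by at most this much.
    sensitivity = (upper - lower) / len(clamped)
    return true_mean + laplace_noise(sensitivity / epsilon)

ages = [34, 29, 41, 52, 38, 27, 45, 31]
# Prints a noisy estimate near the true mean (about 37), different on each run.
print(round(dp_mean(ages, lower=0, upper=100, epsilon=1.0), 1))
```

The design choice worth noting is the clamping step: without a known bound on each record, the noise cannot be calibrated, which is why production libraries for differential privacy require explicit value ranges.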
Cracking the code for Gen Z  

While cybersecurity often brings to mind technology and regulations, its true backbone is people. The challenge the industry faces now is how to draw in and hold on to the best minds and talent. Enter Kevin Bouchareb, an HR innovations expert and former Future of Work Director at Ubisoft. He’s got his eye on a big issue—how to make cybersecurity exciting and sustainable for Gen Z.  

Here’s the deal: Gen Z grew up in a world of instant everything. ‘When you’ve had the internet, social media, and same-day delivery your whole life, waiting just isn’t an option,’ Kevin pointed out. And that mindset carries over to work. This generation expects fast, seamless digital experiences, remote-friendly flexibility, and nonstop opportunities to learn and grow.  

Bringing in cybersecurity talent is one thing; keeping it engaged is the real challenge. Companies need to rethink how they engage with a generation that values identity through action, speed, and purpose beyond work. Instead of framing cybersecurity as a rigid job title, it should be positioned as a dynamic space to ‘do’ and ‘create’, like building systems that protect communities, fight digital threats, or solve real-time crises.

The future of cybersecurity and privacy: where do we go from here? 

The AI revolution is upon us, and cybersecurity stands at a crossroads. The future isn’t just about new technologies, but also how businesses and governments adapt to this rapidly changing landscape. Four key takeaways emerged from the discussions at Cybersecurity and Privacy Day: 

  • Proactive AI security: businesses need to use AI to predict and prevent cyber threats before they occur. 
  • Regulatory alignment: AI compliance is essential for building trust in the digital world. 
  • Human-AI collaboration: far from replacing human intelligence, AI is a partner in cybersecurity. 
  • Talent development: the cybersecurity industry needs to attract the best minds to keep up with AI-driven threats. 

Cybersecurity has evolved beyond simple system defence. It now requires a deep understanding of the motivations, tactics, and technologies shaping the future of digital security. As AI continues to rewrite the rules, the smartest move businesses can make is to embrace its potential while remaining vigilant against its risks. This means that in this high-stakes game of cybersecurity, the only way to win is to stay ahead. 

And if you ever need a reminder to stay cyber-aware: did you hear about the cybercriminal who got away? They ransomware. 

What we think
Simon Petitjean, Cybersecurity Director at PwC Luxembourg

For the second year in a row, AI didn’t just join the conversation at our PwC Cybersecurity and Privacy Day—it steered it. As we celebrated ten years of this flagship event, the message was clear: cybersecurity and privacy are no longer separate lanes; they’re converging fast, with AI likely at the wheel of this transformation. The discussions around investment, regulation, and talent reflected a new era, one where trust, technology, and leadership must evolve together.

As businesses embark on their AI journey, much like explorers in uncharted territory, they also need to remember the importance of having brakes in a speeding car. Security and privacy professionals have a duty to clearly identify both the benefits and risks of AI and implement safeguards to protect their organisation and people. Unfortunately, cybercriminals are continually enhancing their attacks with AI, making them more potent. The cat-and-mouse game between organisations and hackers is becoming increasingly complex.

Maxime Pallez, Cybersecurity Director at PwC Luxembourg
