From Risk to Confidence: Pioneering the responsible use of generative AI

Since their emergence in early 2021, generative artificial intelligences (GenAIs) have advanced at an incredible pace, boosted by the general public’s stratospheric adoption rates and the promise of free, unlimited creativity right at everyone’s fingertips.

Past the initial discovery phase, with its amusing procession of reworked-picture contests, the business world didn’t take long to consider the potential profit that could be derived from a more serious use of GenAI’s power.

After all, if your teenager finally managed to write that long-overdue birthday card to Aunt Myrtle with a stellar cost/time/effort/result ratio thanks to an AI, why couldn’t that efficiency be replicated to tackle the tedious, repetitive or time-consuming tasks your company has to deal with on a regular basis? What if the inspiration and funny angles found in ChatGPT-generated, rap-style colleague-depiction battles1 could be injected into market reports or strategic reviews to explore new and original insights?

Those are very valid questions, and GenAI-based solutions keep appearing on the market, aiming to deliver on the high hopes for productivity gains placed in this technology. But have we witnessed a “gold rush” toward artificial intelligence?

Well, not exactly: the results of the 2023 edition of our Use of Data Analytics and Artificial Intelligence in Luxembourg survey, for example, offer some early insights into the adoption of generative AI. At this initial stage, 27% of participating companies expressed interest in using GenAI for text generation, while a mere 4% showed interest in image generation or other applications of GenAI.

Even though interest in AI in general, and GenAI in particular, has risen spectacularly over the past two years, most companies remain cautious and prefer to investigate GenAI further before investing massively in it. At the same time, they are waiting for the authorities to clarify the regulatory landscape, and probably also for GenAI technology to pass the “peak of inflated expectations” and reach the “plateau of productivity” of Gartner’s hype cycle.

Indeed, the potential gains and advantages GenAI provides in terms of productivity, cost-effectiveness and value creation should be weighed against the very specific risks associated with it. In this blog entry, we aim to provide you with an overview of these issues, and a possible approach to dealing with them so you can make the most of a promising, yet challenging technology.

Generative AI in business: hurdles on the road

The first of these risks derives from the very nature of GenAI: trained on massive data sets and corpora mostly drawn from the internet, GenAI models logically reflect the biases and flaws inherent in any kind of online content.

Lessons have been learnt since the unfortunate experiment of Microsoft’s Tay chatbot, which ended up spouting nonsensical, hateful rants involving feminists, Ricky Gervais, burning in hell and Hitler after just a few hours of conversing online2.

Still, any content created by GenAI needs to be considered for what it actually is: a synthesis of whatever the internet holds on any given topic, an unfiltered mix of the “good”, relevant and accurate data on one hand, and the “bad”, mistaken, offensive or purposely misleading content on the other.

What’s more, most GenAI tools function as an additional layer applied to a limited number of original GenAIs created or supported by large technology companies like OpenAI, Google or, more recently, Amazon. These so-called “foundation models” run on proprietary algorithms, making it very difficult, if not impossible, to trace the data’s origin or the reasoning underlying a particular result.
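To picture how thin that additional layer can be, here is a minimal, hypothetical sketch (using OpenAI’s Python client; the function name, model and prompt are purely illustrative, and do not depict any specific product): a “tool” that is, under the hood, a single call to a third-party foundation model.

```python
# Minimal sketch of a "thin layer" GenAI tool: all the heavy lifting happens
# inside the provider's foundation model; the tool merely wraps the API call.
# The wrapper name, model choice and prompt are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

def summarise_report(text: str) -> str:
    """Hypothetical 'product feature' that is really one foundation-model call."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "Summarise business reports in three bullet points."},
            {"role": "user", "content": text},
        ],
    )
    # The wrapper has no visibility into the model's training data or reasoning:
    # it only receives the final text, which is exactly the traceability problem.
    return response.choices[0].message.content
```

Everything the tool returns comes out of the provider’s proprietary model as-is; the wrapper can neither inspect the training data nor explain why this particular answer was produced.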

This uncertainty in terms of data quality and integrity is a serious problem in itself for most businesses, but it becomes even more acute once you throw in the intellectual property (IP) issue. What if, by using GenAI, you unintentionally breach someone else’s IP? How can the originator of the data on which the AI based its answer get a fair reward for that use? And wouldn’t you be putting your company at risk by feeding the AI your own confidential data to generate that strategic report?

Another potential issue with GenAIs is their tendency to get… well, creative. Granted, they were built precisely for that, and their capacity to offer new perspectives on old problems accounts for a big part of their value.

But when a researcher asks ChatGPT for references for his work on the effect of urbanisation in the tropical world on bamboo housing, and gets a list of very enticing academic articles that are entirely made up, that’s a concern. When it leads to a USD100bn drop in market value, as happened to Google a few months ago, that’s a problem.

Data scientists call this kind of convincing but wrong answer a “hallucination”. It’s the probable result of the AIs’ very conception: they are designed to generate content that, if not necessarily accurate, will at least appear reasonable. Who would pay for a hallucinating machine?

Finally, the main risk associated with GenAI stems not from the tools themselves, but from the purposes they may be used for. Generative AIs can easily be used by pretty much anyone equipped with a laptop and basic computer skills to create text, images, videos or human-like voices with a stunning degree of realism.

This deepfake content can be fun (like the adventures of prominent political figures set in a Dungeons & Dragons-like world) or no fun at all when it’s about spreading disinformation, smearing a competitor’s reputation, or supporting cyberattacks and other criminal activities. In that regard, wrongly used GenAI poses a major trust issue within the business world as well as in society at large.

Is that to say that companies should stay away from generative AI and its promises, at least for the time being?

Responsible AI: Taming the ghost in the shell

It isn’t so much a matter of “if” companies should invest in GenAI (because ultimately they will) as of “how” they will implement this technology. After all, the prospect of economic growth derived from using GenAI is just too great for any business to pass on, and there’s too much to lose in missing the wave and falling behind. More likely than not, we will see a steady increase in its adoption on the market, as can be expected with a technological shift of this scale.

What companies should ask themselves is how they can harness the power of GenAI in a way that provides effective gains, while avoiding the creation of a new range of problems, for themselves or for society at large. According to a 2022 World Economic Forum report, “the vision of AI ethics needs to be of a good life for individuals and societies with AI, in terms of quality of life, preservation of the planet, human autonomy and the freedom necessary for a democratic and thriving society.”

The answer could be Responsible AI: a way to build trust in and with AI by embedding ethical principles, governance rules and efficient controls at the core of a company’s AI strategy. 

European AI Act: the regulatory framework

The first pillar of Responsible AI, still in the making, is the progressive emergence of a regulatory framework, aimed at bringing order to the very unclear legal context in which artificial intelligence evolves today, and at establishing ethical principles that will guide the technology’s future developments.

Several countries, including major actors such as Israel, Japan and China, have already published draft policies reflecting their own, sometimes very political, approach to AI. When it comes to Luxembourg, the most important development is the upcoming European AI Act.

This new regulation, unlikely to come into force before late 2023 or even 2024 (and potentially subject to changes during the legislative process), will set rules depending on the level of risk AI systems pose, from “limited” (users should simply be made aware that they are interacting with an AI) to “unacceptable” (systems considered a threat to people, which will be banned).

These rules, whose purpose is to “make sure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly”, will shape the market, and users and providers alike will need to consider them carefully.

Creating trust by design

The second pillar of Responsible AI relies on the business players themselves: since trust is at the foundation of any sound business relationship, the majority of legitimate companies have an interest in a trustful and trusted AI ecosystem. And it’s certainly more cost-effective to create trust by design, embedding ethical considerations right from the start, than to address shortcomings after systems are already operational.

World-famous news agency Reuters, for example, has created a pioneering programme with camera manufacturer Canon and Stanford University’s research lab Starling, in which every alteration to a picture is registered in a blockchain, ensuring the photograph’s integrity and traceability. Here, a technological solution answers a technological threat to restore confidence.
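To make the idea concrete, here is a minimal, hypothetical sketch of such a provenance log: plain Python with a simple hash chain standing in for the blockchain, not the actual Reuters/Canon/Starling protocol. Each alteration is appended as an entry that commits to both the image and the entire history, so tampering anywhere breaks the chain.

```python
# Toy, hash-chained provenance log in the spirit of "register every alteration".
# Illustrative assumption only; real systems use a distributed ledger and
# cryptographic signatures on top of this basic chaining idea.
import hashlib
import json
from datetime import datetime, timezone

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class ProvenanceLog:
    """Each entry commits to the image bytes AND to the previous entry,
    so altering any step of the history breaks the chain of hashes."""

    def __init__(self, original_image: bytes):
        self.entries = [self._entry("captured", original_image, prev_hash="0" * 64)]

    def _entry(self, action: str, image: bytes, prev_hash: str) -> dict:
        record = {
            "action": action,
            "image_hash": sha256(image),
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "prev_hash": prev_hash,
        }
        record["entry_hash"] = sha256(json.dumps(record, sort_keys=True).encode())
        return record

    def register(self, action: str, image: bytes) -> None:
        # Append an alteration (crop, colour correction, etc.) to the chain.
        self.entries.append(self._entry(action, image, self.entries[-1]["entry_hash"]))

    def verify(self) -> bool:
        """Recompute every hash; returns False if any entry was altered."""
        prev = "0" * 64
        for e in self.entries:
            record = {k: e[k] for k in ("action", "image_hash", "timestamp", "prev_hash")}
            if e["prev_hash"] != prev or e["entry_hash"] != sha256(
                json.dumps(record, sort_keys=True).encode()
            ):
                return False
            prev = e["entry_hash"]
        return True
```

A verifier who receives the photograph and its log can recompute every hash, confirm the chain is intact, and check that the image in hand matches the latest registered entry.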

Trust by design is achieved through a sound understanding of the risks associated with the use of GenAI, and by setting up the right governance and control system to monitor and mitigate those risks. This involves updating cybersecurity systems, making sure GenAI doesn’t pose a threat to data privacy and integrity, and addressing the black-box issue: AI outputs that can’t be reliably explained or overseen.

The human-chatbot duo

Finally, a particular effort should be made to train the people who will be using GenAI in their daily work. It’s a matter of knowing what GenAI actually is, what one can expect from it and what its limitations are, and how best to interact with it. For people who aren’t data scientists, or who have only common computer literacy (meaning most of us), working with AI can very quickly turn into a frustrating, time-consuming and ultimately inefficient experience when the prompts aren’t right or the results don’t go the expected way.

What’s more, natural human biases sometimes add to the machine’s inherent flaws, making it even more difficult to come up with quality results. For example, a recent HEC Paris study showed that participating students received significantly lower grades (28% lower on average) when performing an academic assignment with the help of ChatGPT, compared to working alone and providing answers based on their own reflection.

One explanation of this result could be the students’ tendency to give the AI more credit than they should, even though they know it isn’t totally reliable, and to amend the machine’s work only marginally where more drastic corrections were needed. This example demonstrates that there are indeed situations where it is better to use one’s own knowledge, reasoning and analytical mind, and not rely too heavily on the AI.

Another challenge pointed out by this study is the overall performance of the human-chatbot duo. This question pertains to computer science, but also, more broadly, to academic fields dealing with human cognition and decision-making, such as behavioural economics or cognitive psychology. It’s the perfect argument for sound employee training, but also for the early learning of GenAI best practices, along with critical thinking, in the classroom.

Last thoughts

Artificial intelligence has existed and been in use for a long time. In recent years, it has moved beyond the purely technical field of specialised professions to take over a large part of our daily lives. Generative artificial intelligence is the form through which many of us, in the very near future, will quite literally dialogue with AI in the broad sense, both within the framework of our professional activity and in our private uses.

It’s therefore of strategic importance for companies and individuals alike to have a deep enough knowledge of, and an ethically sound approach to, GenAI to guarantee its safe and efficient use and to avoid the pitfalls associated with this technology. So we might as well make friends with the ghost in the shell; the sooner the better.

A good way to start is by visiting our “Data and AI” page and getting in touch with our team of specialists.

 
Notes:
1. Anyone telling you that never happened in their open space is officially nominated for the International Office Hypocrisy Award.
2. We could not find any profanity-free article in support of this story, other than Microsoft’s account of the incident, so we leave it to you, dear readers, to ask your favourite browser (or maybe ChatGPT?) for more accurate depictions of GenAI’s verbal creativity in this instance.

What we think
Saharnaz Dilmaghani

Generative AI is more than just a productivity booster; it’s at the forefront of deep, fundamental shifts in the way we work and think, serving as a catalyst for reimagining what’s possible. As we venture into these new landscapes, the responsibility to navigate machine-generated risks intensifies and we must remain vigilant. The true mark of Responsible AI lies not in avoiding these risks, but in navigating them with ethical foresight and societal wisdom, always bearing in mind that technology serves man—not the other way around. This is the very purpose of our PwC Responsible AI Framework.

Saharnaz Dilmaghani, PhD., Artificial Intelligence & Data Science Senior Associate at PwC Luxembourg
