When you think of artificial intelligence (AI), you might picture a gentle robot answering questions accurately. Could that friendly machine disclose information you want to keep private and under your control? For now, AI is a product of human inventiveness: we build and train it, and we influence how it performs and what we want it to do.
In reality, AI is a set of technologies – natural language processing (NLP), machine learning (ML), robotics, etc. – and in many cases it comes down to algorithms: pieces of code with tasks to accomplish. There is an AI-powered algorithm behind most, if not all, of our digital experiences.
The General Data Protection Regulation (GDPR) changes the way Europe has understood privacy over the last 20 years or so, and is particularly focused on online privacy. In this article, we explore whether AI, whose reputation is under fire after the Facebook/Cambridge Analytica data scandal, is an ally that helps businesses comply with the GDPR and helps regulators properly supervise compliance.
The GDPR in a few lines
The GDPR, adopted in April 2016 and taking effect this May, is the first change to EU privacy laws in 23 years. It replaces the Data Protection Directive 95/46/EC. According to the EU’s GDPR Portal, it “was designed to harmonise data privacy laws across Europe, to protect and empower all EU citizens’ data privacy and to reshape the way organizations across the region approach data privacy”. The GDPR’s key goal is to grant EU residents effective control over their personal data: how it is processed and for what purposes. As long as your business processes data from EU residents, it must adhere to the GDPR, even if it isn’t located in EU territory.
When AI meets the GDPR
Let’s get this right: no AI technology complies with the GDPR without human input. We need to programme the system, to add the piece of code enabling the algorithm to, for example, perform actions as part of a data management framework. What the GDPR asks of companies using AI to capture and process data from individuals includes, among other things:
– Information on the use of the AI technology;
– An assessment of the impact the use of the AI technology exerts on individuals;
– An explanation of how the technology works and what it takes into consideration when processing individuals’ data to make decisions.
The GDPR wants humans to play a prominent role in privacy surveillance, especially when AI technology manages people’s access to online services, the information they share online, and their access to benefits or compensation (loans, mortgages or even scholarships). That’s why the GDPR pays special attention to profiling and automated decision-making, which are increasingly used in a number of sectors, including financial services.
Profiling, simply put, is the classification of individuals using their personal data – characteristics such as age, personal income, gender, civil status, payment records, etc. – powered by AI. While this helps companies customise their service offering, it may also lead to the denial of goods and services, and to inaccurate predictions. Algorithms carry biases because they reflect human ones; after all, machines learn from data humans provide. To comply with the GDPR, businesses have to include a reviewing mechanism allowing a human to identify biases when a person appeals an unfavourable decision, for example a home loan rejection (for more information in this regard, please have a look at the guidance from the Article 29 Data Protection Working Party, or WP29).
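To make the idea of such a review mechanism concrete, here is a minimal sketch in Python. The field names, the sample data and the 80% (“four-fifths”) threshold are entirely illustrative assumptions, not a prescribed method – the point is simply that a reviewer can run a mechanical first check before examining an appealed decision:

```python
# Minimal sketch: compare approval rates across groups to flag a
# profiling model for human review. Field names, the sample data and
# the 0.8 ("four-fifths") threshold are illustrative assumptions.

def approval_rates(decisions, attribute):
    """Return the approval rate per group for the given attribute."""
    totals, approved = {}, {}
    for d in decisions:
        group = d[attribute]
        totals[group] = totals.get(group, 0) + 1
        if d["approved"]:
            approved[group] = approved.get(group, 0) + 1
    return {g: approved.get(g, 0) / totals[g] for g in totals}

def flag_for_review(decisions, attribute, threshold=0.8):
    """Flag groups whose approval rate falls below `threshold` times
    the best-performing group's rate."""
    rates = approval_rates(decisions, attribute)
    best = max(rates.values())
    return {g: r for g, r in rates.items() if r < threshold * best}

loans = [
    {"gender": "F", "approved": True},
    {"gender": "F", "approved": False},
    {"gender": "F", "approved": False},
    {"gender": "M", "approved": True},
    {"gender": "M", "approved": True},
    {"gender": "M", "approved": False},
]
print(flag_for_review(loans, "gender"))  # groups warranting human review
```

A check like this does not prove bias on its own; it only tells the human reviewer where to look first.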
AI isn’t the bad kid on the block
The world has blamed algorithms for being intrusive, for delivering untrustworthy information and, ultimately, for being key elements in the latest data privacy scandals. AI isn’t only about algorithms, but almost. Algorithms carry intent, and they logically answer business needs. If that intent contemplates legal requirements and, going further, corporate responsibility principles, AI becomes an ideal partner not only for complying with the GDPR, but with any other obligation the law imposes.
When we conceived this blog post, we wanted to give AI fair treatment and counterbalance the negative reputation it has gained around data privacy. Based on the data they learn from and manipulate, AI systems make decisions and can solve problems with accuracy. Compliance teams, and businesses in general, cannot miss the chance to take advantage of this technology to become GDPR compliant.
4 ways AI can support GDPR compliance
We’ve listed some interesting examples where AI is a valuable business ally in complying with the GDPR.
- AI can support data governance, from standardising processes and verifying data to supervising data management frameworks (for example, using a blockchain-powered solution). Consent management can largely benefit from AI. For instance, the GDPR calls on businesses to implement smooth, seamless processes for people to revoke consent. These processes must be tied to data lineage, which, in turn, is linked to the initial consent. The revocation may also trigger a series of actions within the data management framework. AI technology can ease the proper execution of these processes.
- AI can help understand the GDPR and any subsequent regulation, update or act stemming from it. NLP solutions can read and interpret regulations, determining what a document actually says, what changes to the law affect the business and how, who the actors involved are, etc. Furthermore, NLP teamed up with ML technologies can upgrade the game, because the latter also offers machine-executable functions; thanks to it, manual reporting and compliance processes will become easier, for instance.
- AI can help detect data breaches and fraud. AI-powered security analytics are advancing quickly and represent an unmissable opportunity to improve cybersecurity. In particular, ML can detect advanced threats (for example, by detecting behavioural changes) and also eliminate a large number of manual tasks. Businesses are already using ML combined with NLP to fight fraud. In the case of the GDPR, these technologies can deliver more accurate consent management and help companies properly execute “the right to be forgotten” when an individual requests it.
- With due consent, AI can help businesses make the right decisions for creating better client experiences. The use of smart contracts is a valuable example: they are stuffed with sensitive data, and a combined solution bringing together AI and blockchain can ease the administrative burden for both clients and businesses.
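To illustrate the consent-management point above – revocation tied to data lineage, triggering follow-up actions – here is a minimal sketch in Python. The class, field names and triggered actions are hypothetical assumptions for illustration, not a reference implementation:

```python
# Minimal sketch of consent revocation tied to data lineage.
# Class names, fields and the triggered actions are illustrative
# assumptions only.

from datetime import datetime, timezone

class ConsentRegistry:
    def __init__(self):
        self.records = {}    # subject_id -> consent record
        self.audit_log = []  # actions triggered by revocations

    def grant(self, subject_id, purposes, derived_datasets):
        """Record consent together with where the data will flow."""
        self.records[subject_id] = {
            "purposes": set(purposes),
            "lineage": list(derived_datasets),
            "granted_at": datetime.now(timezone.utc),
            "active": True,
        }

    def revoke(self, subject_id):
        """Deactivate consent and trigger erasure along the lineage."""
        record = self.records[subject_id]
        record["active"] = False
        for dataset in record["lineage"]:
            self.audit_log.append(f"erase {subject_id} from {dataset}")
        return record["lineage"]

registry = ConsentRegistry()
registry.grant("user-42", ["marketing"], ["crm_db", "analytics_warehouse"])
affected = registry.revoke("user-42")
print(affected)  # every dataset the revocation must propagate to
```

Because the lineage is stored with the consent itself, one revocation call can enumerate every downstream system that must act on it.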
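The behavioural-change detection mentioned in the breach-and-fraud point can be illustrated with a very simple statistical baseline. The metric (daily login counts), the data and the threshold below are illustrative assumptions; production security analytics would use far richer ML models, but the underlying idea – flag deviations from a learned baseline – is the same:

```python
# Minimal sketch: flag a behavioural change by comparing today's
# activity to a historical baseline using a z-score. The metric
# (daily login count) and the threshold of 3 are illustrative.

from statistics import mean, stdev

def is_anomalous(history, today, threshold=3.0):
    """Return True if `today` deviates more than `threshold`
    standard deviations from the historical mean."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > threshold

daily_logins = [4, 5, 6, 5, 4, 6, 5, 5, 4, 6]
print(is_anomalous(daily_logins, 5))   # typical day -> not flagged
print(is_anomalous(daily_logins, 40))  # sudden spike -> flagged
```

Even this toy version shows why such detection reduces manual work: the alert fires only when behaviour actually departs from the baseline.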
With the enforcement of the GDPR, and the business cases we will learn from while complying with the regulation, new, not-yet-imagined AI applications will come to light. Let’s always keep in mind that AI is a human creation, and it requires human influence to perform properly. In times when automation is starting to reign supreme, the GDPR wants to bring human influence over data privacy back. The idea is to make people more accountable for their data and the way they use it.
Where is your business in the GDPR compliance game?
What we think
AI’s potential seems to be endless, and a responsible take on it will be a key business differentiator. As in most areas, AI is also applicable to regulatory compliance, including the GDPR. Tailored algorithms, and iterations of them, are bound to help businesses meet the GDPR’s challenges, though a careful look at biases and at the data businesses use to feed the AI technology is necessary.