AI and data privacy laws: regulating the minds of machines


Date: 30 November 2021


The world is creating massive volumes of data. As of 2021, the estimate stands at 1.145 trillion MB per day. More important, the rate at which data is created is growing exponentially, and as time goes by, 5G will only fuel this data explosion.

What you need to know about AI and data privacy

Artificial intelligence will act as a catalyst for this trend. A great deal of data-intensive analytics already relies on machine learning for algorithmic decision-making. As AI matures, it will offer powerful ways to use personal information that can intrude on people's privacy.

The sensitivity of the issue is precisely why AI and data privacy laws are a top priority for many countries. There are plenty of resources online for learning about data privacy laws, such as Osano's guide to data privacy. This article, however, focuses on the laws themselves and on the challenges regulators face in balancing data privacy with AI growth.

AI and data privacy issues

Though privacy laws are in place, much remains to be figured out. Governments across the globe face a conundrum: they need to regulate AI in a way that doesn't compromise data privacy. Think about it. They must write laws that protect personal information used in AI while still ensuring they don't restrict AI development.

Data is food for AI. It's what trains machine learning algorithms and is often a key differentiator. However, personal data is increasingly regulated, and the penalties are hefty. Facebook recently paid $5 billion in fines over non-compliance with privacy laws. And it's not just Facebook that has ticked governments off; several of its AI-powered peers have earned notoriety for misusing the power of AI.

AI and data privacy scandals

People realize that their data is less safe online than ever before. However, some don't grasp how unethically AI-based systems can use their personal data. The following scandals not only highlight the need to keep data protected, but also explain why governments are losing sleep over data privacy.

1. Facebook-Cambridge Analytica

Yes, Facebook paid $5 billion, but let's walk through what happened first. For those who don't already know, this is going to be mind-boggling.

The cat was out of the bag when news outlets revealed the scandal to the world in 2016. Cambridge Analytica, a data analytics firm, had used Facebook likes to analyse users' psychological and social behaviour patterns.

It then used this data to target those users with ad campaigns for the 2016 U.S. presidential election.

Now, Facebook doesn't sell its users' data for such unethical adventures. But a sneaky developer had found a loophole. 

The developer created a Facebook quiz app that used a Facebook API to collect users' (and their friends') data. The developer then sold this data to Cambridge Analytica, an entity that played a key role in the 2016 election's outcome.

2. Deepfakes

Deepfakes use deep learning to replace a person's face in a video with someone else's. The intent, at least originally, was pure entertainment. But things went south when people began using deepfakes to spread rumours through fake news videos and pornographic content.

An app called DeepNude emerged in 2019. It let users upload images of women and generate life-like nude images from them. The AI community strongly condemned this degrading use of AI, and fortunately, the app shut down soon after. But there's no guarantee that similar apps won't re-emerge and create nudes from your publicly available pictures. Scary, right?

3. Clearview

Clearview, an AI firm, built a face recognition system for police forces to help them confirm a suspect's identity. The company says the system has helped the police put plenty of terrorists and criminals behind bars. However, it was later revealed that Clearview had blatantly violated data privacy laws by scraping billions of profile pictures from websites like Instagram, Facebook, Twitter, and YouTube.

Now think about how this could easily turn into a nightmare. If AI systems keep handing personal data over to police officers, or if those systems produce false positives, plenty of innocent individuals could face unjust and unfair interrogation over crimes they had nothing to do with.

AI and data privacy laws

The scandals need to stop at some point, right? That requires robust data regulation, and that's exactly what a few countries have delivered. The GDPR is one of the biggest initiatives towards regulating data privacy. Such laws aim to create an ecosystem where AI and data privacy can co-exist. However, many countries still don't have a dedicated data protection law.

1. General Data Protection Regulation (GDPR)

One of the most prominent policies affecting AI today is the GDPR, the law that has made the biggest impact on data regulation to date. The EU's ambition to lead the world in AI adoption has compelled its policymakers to pay serious attention to AI and ML. So how do the GDPR and AI interact?

Well, the GDPR has a broader scope, but a few of its provisions address AI specifically. Several provisions, most of them part of Article 22, deal with the impact of AI's decision-making on people. The problem, though, is that the GDPR's aim behind these provisions is a little vague, for two reasons. First, the issue is complex. Second, the GDPR was approved only after a ton of back and forth between legislators.

Still, a few general points can be drawn from these provisions. Article 22 imposes limitations on the use of data for automated decision-making and profiling. It applies only where a decision is based solely on automated processing and produces legal effects concerning the data subject or similarly significantly affects them. So Article 22 covers a very specific set of situations. However, the GDPR has other provisions that cover all automated decisions and profiling. What's more, wherever automated decision-making or profiling processes personal data, all provisions of the GDPR apply.
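To make the Article 22 test concrete, here is a minimal sketch (not legal advice; the `Decision` fields and helper name are the author's illustration, not anything from the regulation itself) of its two-part applicability condition: the decision must be based solely on automated processing, and it must produce legal or similarly significant effects for the data subject.

```python
# Illustrative sketch of the GDPR Article 22 applicability test.
# Field and function names are hypothetical, chosen for readability.
from dataclasses import dataclass


@dataclass
class Decision:
    solely_automated: bool       # no meaningful human involvement in the decision
    legal_effect: bool           # e.g. a contract or benefit is refused
    similarly_significant: bool  # e.g. a comparable impact on credit or employment


def article_22_applies(d: Decision) -> bool:
    """Return True when both parts of the Article 22 condition are met."""
    return d.solely_automated and (d.legal_effect or d.similarly_significant)


# A fully automated loan refusal triggers Article 22...
print(article_22_applies(Decision(True, True, False)))   # True
# ...while the same decision with meaningful human review does not.
print(article_22_applies(Decision(False, True, False)))  # False
```

Note how narrow the condition is: automated profiling with no significant effect, or a significant decision with a human in the loop, both fall outside Article 22, although other GDPR provisions may still apply to the processing.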

For now, the GDPR does restrict, or at least complicate, AI's processing of personal data. Eventually, though, it will likely foster the trust required for people and governments to fully accept AI. In the meantime, regulators will have time to lay the groundwork for a fully regulated data market.

2. Data privacy laws in the rest of the world

Let's talk about the US.

There's no comprehensive federal data privacy law in the US. However, a patchwork of other federal laws still protects Americans' data.

Over the past few years, several American states have passed their own data protection laws. The most celebrated is the California Consumer Privacy Act (CCPA), which gives California residents top-notch privacy protections and rights.

The law puts power in the hands of the people: residents get to choose how their personal data is collected and what the collector can use it for.
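In practice, one way sites honor such opt-out choices is via the Global Privacy Control browser signal, which California regulators have recognized as a valid do-not-sell request and which arrives as the HTTP header `Sec-GPC: 1`. The sketch below is the author's hypothetical illustration of checking that signal, not any particular site's implementation.

```python
# Hypothetical sketch: treating the Global Privacy Control header
# ("Sec-GPC: 1") as a CCPA-style do-not-sell request.
def may_sell_data(request_headers: dict) -> bool:
    """Return False when the visitor's browser has sent a GPC opt-out signal."""
    return request_headers.get("Sec-GPC", "").strip() != "1"


print(may_sell_data({"Sec-GPC": "1"}))  # False: the user opted out of data sale
print(may_sell_data({}))                # True: no opt-out signal was received
```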

Other states that also have similar laws (or are in the process of passing one) include New York, Virginia, Alabama, Connecticut, Illinois, and Florida.

Canada and the UK are actively regulating data as well. Canada's Personal Information Protection and Electronic Documents Act (PIPEDA) is similar to the EU's data protection law. In the UK, the EU GDPR ceased to apply when the Brexit transition period ended, because… well, Brexit.

However, the data of UK citizens is protected by what's known as the UK GDPR, which was introduced by the Data Protection, Privacy and Electronic Communications (DPPEC) Regulations of 2019.

Brazil began enforcing penalties under its Lei Geral de Proteção de Dados (LGPD) as recently as August 2021; companies found guilty of non-compliance can be fined up to 2% of their revenue.

South Africa, too, has a data protection law that's almost as stringent as the GDPR. The Protection of Personal Information Act (POPIA) was signed into law back in 2013 but has become stricter over time.

Bahrain leads the Middle East in data protection with its Personal Data Protection Law, which gives citizens rights over how their data can be collected, stored, and processed.

AI and data privacy can co-exist

Data protection laws aren't intended to stifle AI's growth. However, the AI community and regulators must collaborate to strike a balance between using data and staying compliant. That collaboration is key to keeping the AI community thriving and innovation flowing. 2021 has been an exciting year for privacy legislation, with several laws coming into force. But challenges lie ahead.

One of the biggest compliance challenges awaiting legislators is cross-border data transfers. But there's a good chance they'll hit the sweet spot and protect consumer data without standing in the way of AI's growth.

Copyright 2021. Article made possible by SKALE.
