The European Union's new privacy law packs a punch. Within hours of the General Data Protection Regulation (GDPR) coming into force, an Austrian privacy activist used the new rules to file legal complaints against Facebook and Google. It’s too early to say how the cases will be resolved, but companies found to be in breach of the law face fines of up to 4% of annual revenue, which means the two companies could be fined a total of 7.6 billion euros (£6.6 billion).
But even as most internet users grappled with a flood of GDPR-related emails from companies scrambling to comply with the law, it occurred to me that perhaps the most aggressive attempt yet by lawmakers to protect people’s privacy still isn’t enough. Not even close. The problem is that the law doesn’t protect the data that’s most valuable to tech companies: the data that’s inferred by algorithms and used by advertisers.
The basic premise of GDPR is that consumers must provide consent before a company like Facebook can start collecting personal data. The company must explain why the data is being collected and how it will be used. The company also cannot later use the data for a different purpose.
All of these principles naturally translated into consent boxes that “appeared online or in apps, often accompanied by a threat that the service could no longer be used if the user(s) did not consent,” noted Max Schrems, the activist who filed the complaints against this all-or-nothing approach.
Still, any new cases against Facebook and Google could follow the path of the ongoing investigations into the Cambridge Analytica scandal. Addressing EU officials at a parliamentary hearing, Mark Zuckerberg recently rehashed a now-familiar story: that he was sorry and that he “did not do enough to prevent harm.” “Whether it’s fake news, foreign interference in elections, or developers abusing people’s information, we haven’t looked broadly enough at our responsibilities,” he said.
In other words, the highly technical challenge of consumer data security and privacy has been reduced to a public spectacle of remorse and redemption. And when the solution arrives, as with GDPR, it comes in the form of email consent forms full of incomprehensible fine print and terms. The greatest danger is that the public will be blinded to what really matters.
Where social media, search engines, and large online retailers have had real success so far is in defining what constitutes the “personal data” that lawmakers believe needs to be protected. The types of data that GDPR covers include credit card numbers, travel records, religious affiliations, web search results, biometric data from wearable fitness trackers, and Internet Protocol (IP) addresses. But when it comes to targeting consumers, such personal data, while useful, isn’t the most essential thing.
For example, if HBO wants to advertise the new season of Game of Thrones to anyone who reads an article about the show on the New York Times website, all HBO needs is an algorithm that understands behavioral correlations, not demographic profiles. And these all-knowing algorithms, the hidden machine-learning tools that power everything from Facebook’s news feed to Google’s self-driving cars, remain opaque and unassailable. In fact, they enjoy their own intellectual property protections as trade secrets, much like the recipe for Coca-Cola.
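To see why demographic profiles are beside the point, here is a minimal sketch of behavioral targeting. The reading logs, user IDs, and article names are all hypothetical; the point is only that a usable ad segment, and even a way to widen it via correlated reading habits, falls out of behavior alone:

```python
from collections import Counter

# Hypothetical reading logs: user -> articles read. No demographic
# profile appears anywhere; segments are built from behavior alone.
logs = {
    "u1": {"got-recap", "politics"},
    "u2": {"got-recap", "tv-reviews"},
    "u3": {"politics", "economy"},
    "u4": {"got-recap", "tv-reviews"},
}

def readers_of(article):
    """Everyone who read the given article: a purely behavioral segment."""
    return {u for u, read in logs.items() if article in read}

# Direct segment: everyone who read the Game of Thrones article.
segment = readers_of("got-recap")

# Behavioral correlation: other articles whose readers overlap the
# segment, usable to widen targeting without knowing who anyone is.
overlap = Counter(a for u in segment for a in logs[u] if a != "got-recap")

print(sorted(segment))         # ['u1', 'u2', 'u4']
print(overlap.most_common(1))  # [('tv-reviews', 2)]
```

Nothing in this toy resembles GDPR's list of protected data types, which is exactly the gap the article describes.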
But the difference between Coca-Cola and Facebook, of course, lies in their business models. Facebook, Google, Snapchat, and YouTube generate revenue through advertising. Consumers pay for Coca-Cola, but they get these digital services “for free.” And this seemingly free service has introduced what economists call the “principal-agent” problem, meaning that tech companies may not be acting in the best interests of consumers, because consumers are the product, not the customers. That’s why Facebook COO Sheryl Sandberg said Facebook users can’t opt out of sharing their data with advertisers, because that would require Facebook to become a “paid product.”
GDPR may pave the way for a solution
But the problem is not unsolvable. Tech companies could be required to appoint independent reviewers, such as computer scientists and academic researchers, to audit their algorithms and ensure that any automated decisions are unbiased and ethical.
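One simple check such an independent auditor might run can be sketched in a few lines. The decision records and group labels below are invented for illustration, and the 80% threshold is one common heuristic (the "four-fifths rule" from US employment law), not a GDPR requirement:

```python
# Hypothetical audit: check an automated decision system for
# disparate impact across two groups. Each record is
# (group_label, decision), where 1 means an approval.
decisions = [
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 1), ("B", 0), ("B", 0), ("B", 0),
]

def approval_rate(group):
    outcomes = [d for g, d in decisions if g == group]
    return sum(outcomes) / len(outcomes)

# Four-fifths heuristic: flag the system if one group's approval
# rate falls below 80% of the other's.
ratio = approval_rate("B") / approval_rate("A")
flagged = ratio < 0.8

print(round(ratio, 2))  # 0.33
print(flagged)          # True
```

A real audit would go much further, but even this illustrates that bias checks are mechanical once auditors are granted access to decisions and outcomes.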
Data scientists working at tech companies could also be required to ensure that any smart algorithm adheres to the principle of “explainable AI”: the idea that machine-learning systems should be able to explain their decisions and actions to human users and “provide knowledge about how they will behave in the future.”
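In its simplest form, explainability means a model can say which inputs drove a decision. The sketch below uses a toy linear scoring model with invented weights and feature names; real systems are far more complex, but the principle of attributing a score to its inputs is the same:

```python
# A minimal sketch of "explainable AI": a linear scoring model that
# reports how much each input feature contributed to its decision.
# Weights and feature names here are hypothetical.
weights = {"pages_read": 0.6, "time_on_site": 0.3, "clicked_ad": 1.5}

def score_with_explanation(features):
    """Return the score plus a per-feature breakdown of why."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    return sum(contributions.values()), contributions

total, why = score_with_explanation(
    {"pages_read": 4, "time_on_site": 2, "clicked_ad": 1})

print(round(total, 2))  # 4.5
# Print contributions from largest to smallest.
for name, c in sorted(why.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {c:+.1f}")
```

The breakdown shows that the (hypothetical) `pages_read` feature dominates this score: exactly the kind of answer a human user could inspect and challenge.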
What is unique about today’s tech world, and virtually incomprehensible to those working outside the sector, is how little assurance and regulation of the basics has been expected of it. Facebook’s shares have already recovered since the Cambridge Analytica scandal.
This shows that the biggest potential benefit of GDPR is not so much the immediate protection of consumers, but the chance to open up an arena for public debate. Imagine if consumers could one day voice their dissatisfaction with unfair targeting or challenge the logic of a proprietary algorithm in a public tribunal staffed by independent computer scientists. It is this kind of built-in control that will make the internet fairer and more useful. GDPR is the first step in this direction.