Assistant Attorney General, Public Protection Division, Vermont Attorney General’s Office.
Third in a Series
Talking about privacy is like trying to hug an octopus. The concept covers a wide array of practices and technologies, including targeted advertising, surveillance, artificial intelligence and predictive analytics, biometrics and facial recognition, social media platforms and mobile apps, and online and geolocation tracking, to name just a few. Understanding all these technologies in their own right—much less their many implications for health, safety, and the future—can be overwhelming.
Making matters even more complicated is the fact that, in addition to consumers, countless stakeholders may be impacted by any policy change: Big Tech, small businesses, data brokers, retailers, hospitals, universities, banks, law enforcement and national security, and others. Some of them would benefit from keeping things exactly as they are. Others suggest policy proposals that look good on paper, but in practice would be ineffective. Some worry, not unjustifiably, that policymakers who don’t fully understand the underlying technologies will create laws that are both ineffective and overly burdensome.
Amid all this complexity, the policymaker must stay grounded in first principles:
- People don’t like being tracked or surveilled.
- They don’t want to be discriminated against, stalked, or defrauded, nor have their identities stolen.
- Personal information should be kept personal.
- Sensitive information should be treated with sensitivity.
- If information is collected for a purpose, it should be used for that purpose only.
- People want control over the information being collected about them, either to prevent it from being shared or to make sure it’s accurate.
All stakeholders – consumers and businesses, public and private sector – benefit from trust in the marketplace. Economies with high trust grow faster. When I transfer money to my bank, I know it’s almost certainly going to get there. If it doesn’t, I know there are protections and consequences that will make me whole. All that trust lets me and millions of others make use of the financial system without much thought.
Unfortunately, at present, the same cannot be said about my data. If I want to buy a new device or app, but I can’t figure out which one is going to sell my data, and I know a company will face few consequences for misrepresenting its policies, I may decide I can simply do without. That’s a loss for everyone – I don’t get the product I want, and a business loses a sale. (Although some have argued that consumers don’t really care about privacy, polling indicates otherwise.) Restoring trust in the technology marketplace is a key benefit of establishing sound privacy policy.
In previous articles in this series, we’ve discussed five privacy-related fallacies: the idea that people should have “nothing to hide”; the claim that our data is “out there anyway”; slippery slopes; the false convenience tradeoff; and the targeted-ads red herring. We’ve also looked at two principles: context and balance. This article will introduce a third principle and three more fallacies.
Principle Three: Scope and Magnitude Matter
In the last installment, we discussed how people have a right to keep things private even if the following are true:
- They’ve disclosed that information to one community but not another.
- They happen to have been observed engaging in a private activity.
- There is a government record in a filing cabinet somewhere that documents an embarrassing fact.
In other words, people have always relied on practical obscurity to maintain their privacy. Even if you didn’t hide your deepest secrets in a bank vault, businesses don’t have a right to broadcast those secrets to the world. Arguing otherwise is an example of trying to justify surveillance by ignoring how people actually operate in the real world.
A similar fallacy happens when a data collector tries to justify data collection on a massive scale by comparing it to a much more limited act. For example, when you walk into a car dealership, the salesperson might take into account how you’re dressed, what car you drove up in, and the details you share about yourself in order to estimate how much you are willing to pay. It is not improper to do so (though the potential for implicit bias complicates this judgment) – businesses collect information about their customers all the time. However, if a dealership collects thousands of data points about an individual from numerous sources, uses artificial intelligence to predict their behavior, and tracks their recent web browsing to see what other cars they’re looking at, it’s engaging in a different behavior entirely. Both could be described as “sizing up the customer,” but the latter has far greater potential repercussions.
Similarly, imagine a security guard standing at the door of a shop and surveilling the sales floor, the outside sidewalk, the customers passing in and out. It’s a small town, so the guard recognizes most of the people passing by. The guard is doing his job; security is an important part of any business. However, if you replace that guard with a surveillance camera outfitted with facial recognition, then place a similar camera in front of every shop, then network them together so that you are recording and tracking the comings and goings of everyone in the area, you have changed the nature of your endeavor completely.
In 1966, the Kaysen Committee, a federal task force appointed by Lyndon Johnson, recommended the creation of a national data center which would consolidate all of the government’s statistical data sets stored across 20 agencies, on 100 million punch cards and 30,000 computer tapes. The benefits to government efficiency and to social science researchers seemed obvious to the committee.
But they didn’t anticipate the overwhelmingly negative reaction from Congress, the media, and the public. With all that data consolidated in one place, people feared the creation of government dossiers and the looming threat of Big Brother. The would-be national data center was dead in the water, and the episode became a factor in the eventual passage of the Privacy Act of 1974. Even though all of this data was already in the control of the U.S. Government, the fact that it was not centralized, and that it was stored inefficiently, provided Americans a degree of protection and comfort.
The notion of such outrage at centralized data collection may seem quaint today, but it’s possible those naysayers were even more prescient than they realized.
Now I’ll turn to a few more fallacies that often come up when we discuss privacy.
Fallacy Six: You Agreed to Hand Over All Your Information
There is nothing wrong with agreeing to provide your personal information in exchange for goods and services—if you fully understand the terms of the bargain. The regulatory model for consumer protection relies on “notice and consent.” So long as the consumer is notified about the deal they’re getting into, and they consent to it, a transaction is generally considered fair.
But that “if” is huge. The problem is that in the technology arena, notice and consent often doesn’t work. Privacy policies and terms & conditions are notorious not only for being overly long and legalistic, but also for blatantly obfuscating the actual substance of the agreement. Carnegie Mellon researchers have estimated that it would take the average person 76 work days each year to read all the privacy policies that they “agree” to. Even if you actually spent all that time, you would probably still not understand enough to provide true informed consent.
But this argument also fails on another front. Even if all consumers devoted the time and fully understood what they were giving up, they would still have little choice. If every tech company in an industry (including the alleged monopolists) conditions participation on invading user privacy, and if this all-or-nothing deal is necessary to obtain such necessities as a telephone, are consumers really “choosing” to exchange their privacy for services?
Lastly, given the stakes involved in surrendering one’s most sensitive information, even complete transparency would not be enough. A company could disclose exactly what it collects, but there are few controls over downstream uses of that data, and the consequences of those uses to the consumer can be severe. A carte blanche handover of one’s rights to self in exchange for, say, free email can never truly be “informed.”
While notice and consent has its place, it has never worked in every circumstance. Pharmaceuticals are a good example. Even if you read all the studies about a new drug and were willing to accept the risks of taking it, you could not do so until the FDA approved it. Even then, you might need a third-party expert (a doctor) to sign off on your consent in the form of a prescription.
In the financial industry, we have recognized that even if all of the financial terms are explained, they may still be too confusing or overwhelming for the average consumer to appreciate. As a result, Congress passed the Truth in Lending Act in 1968 to clarify what an appropriate notice looks like. More recently, the Consumer Financial Protection Bureau (CFPB) was granted the power to prohibit “abusive” practices, financial transactions that take unfair advantage of a difference in sophistication between parties.
I once heard the response, “Take some personal responsibility, or just don’t go on the internet.” This is not practical. Going offline while still participating in society is no longer a reasonable option.
Fallacy Seven: But We Do Give You Control Over Your Privacy
This argument refers to privacy settings on apps, phones, and websites, and to the ability to opt out of certain types of data collection.
The biggest problem with this argument is that, as with the “nothing to hide, nothing to fear” claim, it presumes a right of businesses to surveil users and burdens the individual with the entire onus of keeping information private. This is like saying, “Sure, we positioned an employee outside your window with binoculars, but you have the power to draw the curtains.”
However, it’s not one individual with binoculars—it’s hundreds. And they aren’t just peeping through one window. They’re following you, bugging your conversations, sticking a GPS tracker on your car, and interviewing your friends.
To continue the analogy, it should not be the individual’s responsibility to seek out every spy lurking in the shadows and ask them to stop. Even this assumes that the snoop will stop when asked (and not re-start at a later time). There is no legal obligation for him to do so, and there is a lot of money to be made from continuing.
To the extent that some companies offer privacy controls, they offer only the ones they want to offer. The controls may relate only to information the company collects from you directly, not to the plentiful dossiers it gathers from third parties. Even where controls exist, they are often hard to find, worded misleadingly, and—as we have seen from numerous FTC cases—ignored or subverted when invoked.
And that’s just consumer-facing businesses. Many of the businesses that invade our privacy have no direct relationship with the consumers they spy on. Vermont enacted its first-in-nation Data Broker Registry Law for the simple but necessary purpose of identifying these businesses.
Finally, there are status quo bias and the control paradox. The former refers to the fact that consumers rarely change default settings, even when they have strong incentives to do so. The latter refers to the observation that more granular privacy controls give people an illusory sense of security, which results in their giving up more information, not less.
Fallacy Eight: You’re Going to Break the Internet
This one is just slippery-slope fearmongering. It’s also deeply cynical in that it implies that the internet and the modern economy literally cannot exist unless the business community has free rein to do whatever it wants with your data. That is categorically untrue.
Many companies have successful business models that do not depend on monetizing your data. Some, like DuckDuckGo and Mozilla, even use that fact as a marketing advantage. Their success indicates it is possible to regulate privacy without breaking the internet. Indeed, some companies that oppose regulation have begun softening their objections to federal privacy legislation, if only to preempt the state laws that have filled the void left by Congressional inaction.
One of the most effective tools that consumer advocates have is education. In the privacy arena, one of the best ways we can protect ourselves is to understand the scope of the problem and what’s at stake—to truly get our arms around that octopus. We must make our decisions in the marketplace based on a thorough understanding of privacy and all of its implications, even while social media companies try to convince us that privacy is not worth protecting. That’s no easy task. But it’s worth it.
As more people become informed, opponents of privacy will be forced to stop relying on tired and disingenuous arguments. The goal is to develop policies that protect our privacy while not seriously hampering technological innovation or commercial success. Getting this balance right requires us to learn to see through bad arguments so that we can identify the good-faith defenses of honest business practices, of which there are many. Then we can chart a course for a fair and successful data economy.