Many of us have become numb to being tracked. We know that creepy things are going on when we browse the internet, that our phones share our location, that devices in our homes are listening and watching us, and that many apps misuse our contacts, photos, and much more.
Yet we continue to browse, use our phones, buy smart devices, and download apps — because we enjoy the benefits of the technology. And because we largely don’t understand the privacy risks we face by doing these things without protecting ourselves.
The Risks from Privacy Theft
So what are the risks we face because devices, apps, services, companies, and governments are gaining access to more and more of our data? We recently spent nearly two years thinking about this as we developed the Priiv app, building a scoring system for privacy and defining the actions that one can take to regain it.
Our conclusion was a list of five categories that capture many of the risks facing people today. These categories cover broad swaths of life – personal, commercial, financial, physical, and political – and each encompasses two or more specific kinds of risks.
Together, they reflect the blanket of protection we can all enjoy and expect in terms of privacy, and define specific ways we can see and measure our privacy being harmed or even removed. Each of the five risk categories is described in more detail below:
The first category is personal risk, which is the nearest to the old-fashioned idea of privacy. When your data is shared beyond where or how you intended, you can find yourself publicly embarrassed, and you can suffer damage to your reputation, which itself may have secondary impacts on your work, income, relationships, or opportunities.
People who don’t actively seek to protect their own privacy sometimes use the excuse that they ‘have nothing to hide.’ But when these people are challenged with, “If you really don’t care about privacy, send me the password to your email account,” nobody ever does. Everyone has photos not meant to be shared publicly, private messages with personal or confidential information, and places they’ve been, people they’ve interacted with, purchases made, or search histories that are best left in the dark. These are just a few examples of private data that, if improperly or unexpectedly revealed – to the world or just to specific people for whom it was not intended – could cause personal damage.
A related but distinct risk is the “chilling effect” that sometimes causes people to modify their behavior, avoiding statements or activities, precisely because they know that they cannot count on privacy or confidentiality. When it first became widely known that search history could be exposed either via browser history or by courts or law enforcement serving Google with a warrant or subpoena, it made many people think twice about running certain searches – not because they were improper but because they could be made to seem untoward when exposed or taken out of context.
The chilling effect is a kind of self-censorship that takes many forms: not typing a web search because you don’t want to later have to explain your motivation, not visiting a website, not leaving a comment on social media because you’re afraid it could be taken out of context, not clicking an ad because you don’t want similar ads following you for weeks, avoiding a store because you don’t want to be seen on its surveillance footage, or not ‘friending’ someone because you don’t want a public record of the relationship. In each of these cases your freedom – of thought, of expression, of movement, or of association – is limited only because the risk of a privacy violation makes you decide to change your behavior. Research has shown that there is a psychological toll to even knowing that we’re being surveilled in these ways.
Next we have commercial risk, which can also be referred to as ‘surveillance capitalism.’ This is the broad range of harm caused by marketers leveraging technology to build large, detailed profiles on you, full of specific and often mistaken assumptions, and then using those profiles (when not selling access to others) to target ads and offers that manipulate you into making purchases, adopting specific beliefs, or even voting certain ways. This information is also used to decide which offers you should or should not see, and how much they can get away with charging you for a product or service.
The level of harm this causes is often underestimated; while a certain amount of advertising is reasonable and to be expected, when it ventures into intentional psychological manipulation a line has been crossed. We can all live with the fact that an advertiser knows our approximate age and gender, but we haven’t wrapped our heads around the fact that they know our medical histories, the date our last relationship ended, the number of times we’ve been to the grocery store this month, the fact that our mood was deemed ‘tense’ during over half of those visits, our psychological profile based on the music we listen to, the last 27 toll booths we’ve been through, and how many times we click on ads where the model is a bearded man.
It’s precisely this massively scaled ‘weaponization’ of our data that has turned this risk from a benign unpleasantry to a malignant cancer. The impact of ad tech is only tolerable while you remain ignorant of its scope and scale.
The third category is financial risk, because eventually it all comes down to money: your data can enable people to steal it, directly by illegally accessing your accounts or indirectly by turning you into the victim of a scam. The risk of a direct account takeover often comes down to good password and account security habits, which are a factor in many of the risks we face from data loss.
But there are far subtler risks, where public personal information enables social engineering: the name of your nephew gets used in an urgent email appeal for money to ‘get him out of jail’, the place you went on vacation is used in a scam to SIM-jack your phone number, or your mother’s maiden name can be found on social media, which means anyone can answer the challenge questions that are supposed to be protecting your brokerage account.
Identity theft is another risk that can dramatically impact your financial world and is driven by shared and misused personal data. In this case, information about you is used to become you. This is commonly done to get credit, make purchases, open fraudulent accounts, and generally wreak havoc and generate expenses and trouble. As with the fraud examples above, it is turbo-charged by the explosion of information sources that give away what used to be relatively obscure personally identifying information, because they fail to secure that information (and you’re likely to give it to them).
The next category of privacy risk is physical, in which we include all forms of harassment as well as actual physical damage or violence. This is unfortunately prevalent in the world of online dating and relationships, ranging from those who do a little too much ‘research’ on the object of their affections to those who inappropriately move communications from apps to email or phone, or even make in-person visits without invitation.
Harassment can start from within a relationship, or ignite between strangers via a social media argument. And when everything including your home and work addresses, the stores you frequent, and your relatives’ names is just a click away, it’s scarily easy for unstable or malicious people to move an online problem into the real world. From ‘doxxing’, which outsources the actual damage, to all kinds of intimidation that never comes offline but instead causes mental anguish or terror, there are too many examples where ease of access to information has enabled poor or criminal behavior.
One of the great risks of generating as much digital information as we do is failing to think about how data that seems innocent in one context can become dangerous in another.
We all remember the first time we heard that posting vacation pictures on social media alerts thieves that your home is vacant. That wasn’t a risk we associated with sharing photos before. Similarly, while many people allude to the place they work, the gym they visit, or a favorite bar or restaurant in all kinds of innocent ways, that data takes on a different meaning when someone you don’t want to meet in real life suddenly knows exactly where you’ll be at a certain time.
The final category is political, which contains all the ways that you individually – or in some cases the environment in which you operate – can be damaged or altered as a result of privacy violations.
One example would be information about your behaviors, thoughts, beliefs, or identity being used to limit or take away forms of your personal freedom. This can happen directly, by identifying you and then constraining or harming you in some way, or by limiting your access or rights, or indirectly, by targeting you for informational manipulation.
Another would be the political version of ad targeting, where personal data including psychological impressions and other derived information is used to micro-target ads that inflame or misinform, or otherwise shift your political actions (watch ‘The Great Hack’ on Netflix to see real-world examples). These applications of personal data and technology exacerbate the damage of privacy theft, and extend it from personal to societal.
Reducing Privacy Risks
These categories of privacy risk are probably not comprehensive, but as we classified hundreds and hundreds of privacy-improving actions we found that each impacted one or more of them, and we didn’t find any category that could reasonably be eliminated.
Hundreds (perhaps thousands) of factors contribute to the risks you face across these categories. Your devices, apps, and services – and the companies and governments behind them – are all working hard to compromise your privacy because it helps them earn more money or achieve some other goal. If you don’t work to protect yourself, you will not only be subject to these risks but also be damaged or harmed by their realization.
Ultimately, there are only three things you can and need to do: 1) change settings (where possible), 2) adjust behaviors (where practical), and 3) add privacy-protecting tools and services (where available). The Priiv app was built to offer a personalized and prioritized path to privacy that specifically enumerates all of these for anyone, and it measures your progress in each of the five risk categories along the way.