U.S. Attitudes Toward the 'Right to Be Forgotten'
IndustryView | 2014
When Europe’s highest court ruled in May that individuals had a “right to be forgotten”—i.e., they have the right to request that outdated or “irrelevant” information about them be removed from search results—the shockwaves were felt around the world.
Given the First Amendment and the traditionally strong emphasis on the public’s right to know in American culture, it may be difficult to imagine such a ruling happening stateside. But American culture also places a traditionally strong emphasis on protecting privacy—and in fact, in January 2015, a related law, applicable only to minors, will take effect in California. What if U.S. citizens start demanding the right to be forgotten, too?
We at Software Advice were intrigued by the possibility, so we surveyed 500 adults in the U.S. to find out how they felt about the right to be forgotten and the problems the law seeks to address. We then quizzed a panel of experts for their opinions on this complex issue.
The European Union (EU) ruling was divisive, to say the least. In one corner, we had Wikipedia founder Jimmy Wales denouncing the ruling as equivalent to “censoring history,” while other voices were raised in support of extending this right of privacy to Americans.
What nobody denied, however, was that it left many questions unanswered, such as: Is it fair to hold search engines responsible for the “irrelevant” information, rather than the website where it was published (and where the information will remain, even after the EU ruling)? Is this even a “right” at all? Is it practicable? After all, Europeans who want to find out what’s been “forgotten” need only switch to using the American version of Google, and the allegedly hidden search results will be returned.
In spite of this, the ruling cannot be appealed, and already, search results are disappearing across Europe. All these controversies aside, we wanted to know something very simple: Do Americans like this idea?
We asked respondents if they believed that search engines such as Google should be obliged to stop returning old or irrelevant results about individuals if those individuals complained. We also gave them a number of options to choose from as to what such a law might look like, if one were passed. The result was overwhelmingly clear: A solid majority—61 percent—were in favor of a “right to be forgotten” law for U.S. citizens.
Breaking the results down, we can see that 39 percent of respondents wanted an EU-style blanket law, applicable to everybody. A further 21 percent wanted the right only with qualifications: 15 percent thought it should only be extended to minors, while 6 percent believed everybody except public figures should be able to demand the right to be forgotten.
By contrast, only 18 percent were opposed on the basis of the public’s right to know, while 21 percent were concerned about the thorny issues surrounding the definition of what constitutes “relevancy.” (It’s a question that has bedeviled the European ruling, too—after all, who is to be the judge in such cases? In Europe, the answer is the search engines themselves, which has already led to still greater controversy.)
Regardless, the overall trend of the survey couldn’t be clearer: Many Americans want to be forgotten, too.
This result is made all the more interesting by the response to another question we asked. We wanted to gauge how concerned Americans are about the information that can hang around on the Internet for years, such as embarrassing social media posts or the news of past foreclosures (indeed, exactly such a case led to the European ruling).
So, we asked respondents whether they agreed that this sort of outdated information could be misleading or damaging to an individual. Here, in contrast to the strong support for the right to be forgotten, results were mixed and indecisive:
Twenty-eight percent "strongly" agreed with this proposition, and 19 percent “somewhat” agreed, giving us a total of 47 percent. Meanwhile, over one-third were ambivalent about the harm search results can cause, and a further 21 percent—who presumably don’t have teenage kids—didn’t seem to think that anything on the Internet posed a problem for anybody.
This latter result is very interesting: More individuals want the right to be forgotten (61 percent) than are concerned about the harms outdated or “no longer relevant” information can cause a person (47 percent). This suggests that, for many people, the appeal of the law is not even based on fears of the negative consequences of search results—but rather, is based on a belief in the individual’s right to privacy. So while critics of the law such as Jimmy Wales may fear for freedom of speech, many people are just as concerned about protecting their right to privacy.
Next, we turned to a panel of experts for their opinions on the right to be forgotten and how it could potentially play out in the U.S.
Joseph Steinberg, security expert, regular Forbes contributor and CEO of SecureMySocial, a service that instantly warns clients if “problematic” material appears about them online: I support the legislation—if it’s done right. The concept might be appropriate, but the details aren’t worked out. For example: Who makes the decision? Right now [in Europe], Google [and other search engines] are making the decision [of] what to block—but it should be an outside party. There are no standards; there are no criteria; so a lot of this law is very vague.
Andy Kahl, senior director of transparency at Ghostery, one of the world’s top privacy tools for Web browsers: There is a positive aspect to this debate: It’s bringing the notion of data management into greater light. But we are entering an age where trading data about individuals is the status quo. It bothers me to think about all the time and energy we put into [trying to figure out] how to Band-Aid over that—as opposed to educating consumers about how to manage the way their data is being shared with gigantic corporations.
Heather Buchta, partner, Quarles & Brady LLP, whose legal practice covers e-commerce, software and technology: I think, in certain contexts, the justification is there to impose an obligation like this. For example, as the parent of two small children, I see why laws like the one in California are developing.
Pavel Krcma, chief technology officer at password-manager Sticky Password and former head of Viruslab for AVG Technologies: The right to be forgotten is an excellent example of the huge gap between how the world is perceived by lawmakers and how current technology works.
The first problem is that there’s nothing like, “Let’s forget everything in the past.” Everything I did in the past counts. It simply happened, and I have to live with the consequences. But things are more complicated now. We have technology [that] is able to record and search in all the details of our lives, and people even actively insert sensitive data [in]to various services.
We live in the information world, and any attempt to control it leads to [one of] two possible ends: It’s just a funny attempt, or it’s the start of a dictatorship.
Krcma: My statement is about the general direction. This law is just a small step, and it doesn’t change anything itself—but it could serve as a test of what people and companies are able to accept. I don’t believe that the EU is intentionally testing the mood of its citizens, but if we accept this law without problems, the next step could hurt us much more.
Steinberg: We already have a right to be forgotten in the U.S., in certain areas. If you miss a mortgage payment, 10 years later that’s not on your credit report, even though that is pertinent [information] and it’s something that happened; by law, it’s removed. Technology has undermined our existing rights to be forgotten. The question is: How do we fix that correctly?
Of course, the devil is in the details. You don’t want everyone to remove stuff that might be in the public interest. For instance, you don’t want people to be able to randomly do it, and you don’t want information to be removed from some search results and not others.
Steinberg: You need to have certain objective criteria. For example, credit information. If [the rule is that] you are not allowed to carry a story about a foreclosure [in search results] after ten years, that is very simple to implement. Algorithmically, the criteria are very clear: If someone asks for it to be removed after six years, you say, “I’m sorry, ten years is the rule.” If they ask after 11 years, you remove it.
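The age-based criterion Steinberg describes is simple enough to express in code. The sketch below is purely illustrative—the function and variable names are hypothetical, not part of any real system—but it shows why an objective, time-based rule is easy to automate:

```python
from datetime import date

# Hypothetical sketch of the objective rule Steinberg describes:
# a foreclosure story may only be delisted once it is at least
# ten years old. All names here are illustrative.
RETENTION_YEARS = 10

def may_delist(published: date, requested: date) -> bool:
    """Return True if the story is old enough to be removed from results."""
    age_in_years = (requested - published).days / 365.25
    return age_in_years >= RETENTION_YEARS

# A request six years after publication is refused ("ten years is the
# rule"); a request after 11 years is granted.
print(may_delist(date(2000, 1, 1), date(2006, 1, 1)))  # too recent
print(may_delist(date(2000, 1, 1), date(2011, 6, 1)))
```

Because the rule depends only on dates, a search engine could apply it mechanically, with no human judgment about “relevancy” required—which is precisely the contrast Steinberg draws with the subjective cases that follow.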
If there are areas where it is not objective, there should be an independent panel, though not occupied by the [search engines] themselves—there are conflicts of interest involved.
Kahl: I think that putting the search engine on trial for access to the data is missing the point. Saying [that] your privacy is preserved because it’s difficult to find information about you, which is what this law does—I don’t count that as a reasonable definition of “privacy.”
I do not want courts saying, “Congratulations, consumers, we’ve preserved your right to be forgotten,” when all they’ve done is have Google create a button that says, “Please remove this from the search results.” It’s not the right to be forgotten at all.
There’s been no discussion in these cases of whether the information, at its source, is to be deleted—but an individual’s privacy is at risk, whether it’s easy to find [the information] or not. Journalists were enterprising at uncovering information long before Google existed.
Krcma: The law simply ignores the technological reality. The workaround is too easy. I just read statistics that approximately 0.02 percent of EU citizens have asked to have a URL deleted, and that many media houses actively report which URLs are blocked. This is, by the way, an excellent example of why this law can’t be successful.
Buchta: Probably not. When the Internet started, the concept was that all information should be equally available, regardless of what the purpose of the information was. If you go down the rabbit hole of saying that some information is more valuable than others, then where do you draw that line?
Buchta: I think that would be a step further than the regulators are willing to go. Assuming that all the previously reported information was factually correct, then the consumer still has a right to know. If I’m going to do a business deal with this man, he may have been acquitted, but the fact that he was involved in such a thing to begin with might be a relevant piece of information I want to know.
This comes up all the time in the context of review sites like Yelp and Angie’s List. Those complaints may have been made three or four years ago, and since then, everything has been stellar—but it still comes back to the consumer’s right to know who he’s doing business with.
Steinberg: It has implications for technical progress. Google is a large company, and can afford to deal with this particular requirement. But if you are a startup and want to start a search engine today, this [could] be a huge burden to deal with.
But any new thing creates business opportunities. There may be people who will create businesses around this need—say, engines that can search for [damaging] articles and flag them, or engines that can help manage compliance with this type of legislation.
Kahl: There are a lot of opportunities to create products that make it easy for consumers to responsibly manage data. Ghostery is a for-profit business based around that idea, [or, for example, I use a] Blackphone, which is encrypted by default, and has a bunch of privacy settings that are enacted by default.
For the deletion of stuff that is already out there, you’ll see a lot of people founding businesses to help enact your right to be forgotten. For a small fee [per] month, they’ll do a certain amount of monitoring, and make sure that no “irrelevant” data about you is on the Internet.
But what is Google’s liability if a request comes that does not meet the requirements of the right to be forgotten? Can an individual then ask for damages because the company did not process the request? Aren’t you incentivizing the search engine company towards censorship? If individuals can bring those claims, then isn’t it a smarter decision just to remove the information by default? It’s a real quagmire.
Steinberg: A European-style blanket right to be forgotten may not be the right way, but these problems need to be addressed. Some may get addressed because of civil lawsuits; some may get addressed by legislation; some may just remain unaddressed; and some may get addressed by search engines improving their algorithms.
Regarding the First Amendment and freedom of speech, there are already restrictions on speech. You can be sued for slander. You can’t scream “fire” in a crowded movie theater. We’ve already restricted credit bureaus’ right to provide certain information based on how old it is, even if it’s true. There should be no problem extending the same requirements to other parties.
I do think, eventually, we are going to have some legislation in this area, but my hope would be the tech industry will be able to address some of these things technologically.
Kahl: There's precedent for the upcoming California law [granting underage Web users the right to have content they have posted online removed], because juvenile criminal records are often sealed—so it’s conceivable to see that extending to general online information. A blanket law, like in Europe, would be complicated; the freedom-of-speech issue would be a big hurdle. But a lot of these penalties in Europe are being levied against U.S. companies, and I don’t think it would be crazy to consider that someone might suggest [that] practices companies are already being forced to follow in Europe should [also] be adopted in America.
Buchta: It’s incredibly difficult to do this from a legal perspective. Our existing set of laws and regulations do favor free speech; they do allow people to post opinions and allow the free exchange of information. So, basically, if there’s something posted online and it’s accurate [but] perhaps not flattering, under existing laws, you’re going to be hard-pressed to find a way to ... get it removed.
Krcma: I’m afraid that if the government decides to do it, they will be able to slowly change the rules towards this [legislative] direction. I agree that it’s improbable that the same law as exists in the EU will be implemented in the USA, but there are many ways to implement a rule. The most important thing is [whether] enough citizens are against such rules, to stop any attempt from the very start. I’m not sure if we are there.
Buchta: There are things you can do within your own information dissemination that can be used for [search engine optimization, or] SEO, to at least get your information to come up first. It’s a far more practical, business-oriented way to deal with these issues. Ultimately, though, it’s got to be left to individuals to exercise some intellect, and figure out that if [that] posting is from 25 years ago, maybe it’s not so relevant—but it does fit into an overall picture.
Krcma: I believe that we, as a society, have to reconsider where the borders for privacy protection in the information world are. Governments, businesses and non-government organizations have a role to play. We need a wider discussion. When I think more about it, a law like this can start such discussions by attracting public attention. What else is this article, after all?
No doubt the debate will continue to rage, as the consequences of the European law unfold and the rest of the world watches. Even those among our experts who disagreed with the law agreed that it was at least an attempt to grapple with serious issues of individual privacy in an age when we all lead part of our lives online. Businesses, governments and individuals will have to negotiate this tricky terrain as technology races ahead; whether the results will satisfy anyone remains to be seen.