Harm is one of the ways in which we assess standing in a legal proceeding. Who has been harmed? How grievously? And how can reparations be part of a judgment to repair or lessen those harms?
One of the problems with understanding the implications of surveillance is that the harms it produces do not fit neatly into traditional models of harm. The costs to individuals are typically small, even if the costs to society as a whole are great. Specific groups of people face greater potential costs, while fears and anxieties about the potential of surveillance change many people's behaviors (Foucault). The costs of surveillance become especially complicated in networked situations, as people's relationships, associations, and other interpersonal connections are drawn into the picture. When data mining is used as part of surveillance, people are probabilistically implicated in ways that never involve personally identifiable information but instead invoke relationality (Crawford & Schultz). Is there a way, then, to think about networked harm?
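To make that relational dynamic concrete, consider a minimal sketch (entirely hypothetical; the names, contact graph, and scoring rule are all invented for illustration) of how a data-mining system might flag someone who has shared no data of her own, purely on the basis of who she knows:

```python
# Illustrative sketch, not drawn from any real system: a toy model of
# relational inference, in which a trait is probabilistically attributed
# to a person based solely on the known traits of their contacts.

# A hypothetical contact graph: who communicates with whom.
contacts = {
    "alice": ["bob", "carol"],
    "bob": ["alice", "dave"],
    "carol": ["alice", "dave"],
    "dave": ["bob", "carol"],
}

# Traits known (or assumed) about *some* people in the network.
known_traits = {"bob": 1.0, "carol": 1.0}  # 1.0 = flagged, e.g., "high risk"

def inferred_score(person: str) -> float:
    """Estimate a trait for `person` as the average of their contacts' scores.

    The person being scored may never have shared any data themselves;
    they are implicated purely through their relationships.
    """
    neighbors = contacts.get(person, [])
    if not neighbors:
        return 0.0
    return sum(known_traits.get(n, 0.0) for n in neighbors) / len(neighbors)

# Alice has shared nothing, yet her contacts' labels implicate her.
print(inferred_score("alice"))  # 1.0 -- fully "flagged" by association
```

Nothing about Alice appears in the input; the score is produced entirely from her relationships, which is precisely why individual-centric models of harm struggle with this kind of surveillance.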
What is Harm?
Most approaches to harm focus on the cost to individuals. Even cases that are conceptually understood as being brought on behalf of a wide swath of people often focus on the harm done to specific individuals. For example, U.S. v. Windsor is the Supreme Court case that effectively declared the Defense of Marriage Act unconstitutional. Windsor refers to Edith Windsor, the widow of Thea Spyer, who had been forced to pay a substantial estate tax bill upon inheriting her wife's property because the federal government did not recognize her marriage. The ruling specifically addressed the harm done to Windsor but, in doing so, ruled on behalf of millions of gay and lesbian couples.
Not all cases are heard on behalf of individual actors. Sometimes, lawsuits are brought on behalf of a group of people. These cases, commonly referred to as class action lawsuits, help protect those who could not bring claims for practical reasons (such as cost or unawareness of their rights). However, bringing a class action suit forward is often onerous, both legally and practically, which is why many lawyers prefer to seek out an ideal case (such as Windsor) that is implicitly brought on behalf of a class of people but practically on behalf of an individual.
Many issues of harm can easily be understood in terms of individuals or classes, but sometimes the impact is messier. Certainly the entwinement of rights is at the root of jurisprudence, but most models of interconnectivity assume that the relevant actors are all part of the same suit or implicated as a part of the class. Are there other ways of understanding interconnectedness?
Networks of Genes
In Maryland v. King, the Supreme Court ruled that DNA samples are a legitimate and reasonable part of an arrest under the Fourth Amendment, equivalent to taking someone's wallet, fingerprinting them, or photographing them. These practices are considered legitimate in the context of a criminal arrest because they act as identifiers. Indeed, the use of fingerprints as identifiers dates back to antiquity, when fingerprints were used to sign and seal documents.
At first blush, one might see DNA as an extension of fingerprints because genetic material can serve as a (mostly) unique identifier. (That said, identical twins may have identical DNA sequences, and a person's DNA can change over time.) But collecting genetic information implicates more than the individual in question. By collecting DNA, police databases effectively gather information about that person's parents, siblings, and not-yet-born children and grandchildren. What rights do these people have over the data collection that is taking place?
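A toy example helps show why. The sketch below (a simplification I have constructed; real forensic matching uses many more loci and careful statistics) compares two hypothetical profiles built from short tandem repeat (STR) markers, the kind stored in forensic databases. Because a parent and child share exactly one allele at every locus, a consistently partial match reveals a relative who never gave a sample:

```python
# A simplified, hypothetical illustration of familial DNA matching.
# Each profile maps an STR locus to a pair of allele repeat counts
# (one inherited from each parent). Values here are invented.
suspect_sample = {"D8S1179": (12, 14), "D21S11": (28, 30), "TH01": (6, 9)}
database_entry = {"D8S1179": (12, 13), "D21S11": (30, 31), "TH01": (9, 9)}

def shared_allele_fraction(a: dict, b: dict) -> float:
    """Fraction of loci at which the two profiles share at least one allele."""
    shared = sum(1 for locus in a if set(a[locus]) & set(b[locus]))
    return shared / len(a)

overlap = shared_allele_fraction(suspect_sample, database_entry)
print(f"{overlap:.0%} of loci share an allele")  # 100% here

# An exact match identifies the person in the database. A consistent
# one-shared-allele-per-locus pattern, as above, is what a parent or
# child would produce -- so the database "knows" about relatives who
# never consented to inclusion.
```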
Databases composed of genetic material raise numerous legal, ethical, and social issues, and handling these dynamics is challenging. Consider, for example, the HeLa cell line described in the popular science book The Immortal Life of Henrietta Lacks. These cancer cells, harvested from a dying patient in the 1950s, have become the basis of innumerable research studies advancing medicine and scientific knowledge. Yet when researchers began publicly publishing genetic details about the cell line, they effectively published details about Lacks' family, undermining the genetic privacy of its living members.
In August, the National Institutes of Health announced that it would work with the family of Henrietta Lacks to provide a mechanism for researchers to responsibly use HeLa cells and publish their findings. In doing so, the agency gave family members some control over access to these cell lines. If the NIH recognizes the networked nature of genetic material in its ethical procedures, should other government agencies do so as well? How should they minimize the harm that the collection and use of this data might cause?
The Limitations of Contracts
Can the algorithmic processes used by Gmail to analyze email and display advertisements be considered a violation of federal wiretapping laws? This is the question before Judge Lucy Koh in California, who ruled in September that the suit could move forward. This case is particularly fascinating because it's not entirely clear who has standing in each individual instance.
One issue that has emerged in the case is whether a user agreement (a contract) is sufficient to exempt this practice from wiretapping laws. But who must sign such a contract?
If Alice signs up for Gmail, she signs a contract that includes information about how her mail will be scanned. But when non-Gmail-user Bob writes to her, he hasn't signed that contract, even though his emails are also being scanned. Who owns the emails? Does Alice own the ones she sends or the ones she receives? Do people own only the words they wrote, or should emails be considered joint property? Who can contractually consent to the use of networked data? How are other actors' rights taken into account?
Partial Erasure
In September, California enacted SB568, colloquially referred to as the "eraser button" law. The law requires companies that create websites or apps directed at minors, or that knowingly have minors using their services, to remove or anonymize publicly posted content should a minor make such a request. The law is riddled with uncertainties, but it creates the illusion of privacy controls. Although it was designed to address anxieties over social media, it neither protects youth from the more egregious reputational harms that they experience nor extends protections beyond what most sites already offer as part of their terms of service.
One of the problems in addressing reputational harms is that it's quite difficult to suss out who has the rights to what content. SB568 avoids this by giving people rights only over the content that they themselves posted. And yet, most reputational harm occurs when someone else posts an inappropriate photograph or a hurtful message. Many people now experience the challenges that were once unique to public figures.
Two frames typically dominate these conversations: freedom of speech and property rights. Neither really gets at the complexities at stake. Are there other legal models for thinking about rights? For example, in family law, we don’t treat children as property. Instead, we employ a standard that thinks about the best interests of the child. What other models can be used to imagine rights in a networked age?
From Groups to Networks
There are plenty of cases where the costs of surveillance are borne by those directly affected or by a class of people who are harmed collectively. But the costs to individuals are not evenly distributed: where people are situated within networks matters, and how networked data is used to undermine their privacy is of increasing concern. Boundaries are porous and not easily defined, and people's relationships are central to surveillance concerns.
Over the last decade, the technology sector has increasingly embraced networked models. Although early communication tools were structured around individuals and groups (e.g., mailing lists, Usenet groups), the rise of social network sites allowed people to understand who they were within—and communicate to others in—networks. This technological turn has fundamentally disrupted the notion of privacy as people struggle to model how information should flow when access-control mechanisms get too complicated to serve as a viable tool. And yet, just because a piece of content is publicly accessible does not mean that all who are involved were intending for that content to be accessed by just anyone.
As technologists struggle to work out how privacy can and should operate across networks, surveillance has become increasingly networked and algorithmic. Data-centric marketers are computationally observing relationships between people to build models of behaviors, tastes, and interests. Meanwhile, government agencies are using networks to predictively model who poses a security risk. As networks become a central component of analysis, how can and should we think about the types of harm that unfold?
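As a hedged illustration of what such analysis can look like (this is my own toy construction, not any agency's or marketer's actual method), the sketch below seeds a single "risk" label on one node of a hypothetical association graph and lets it diffuse outward, so that people acquire scores purely through their position in the network:

```python
# Iterative "guilt by association" scoring over an invented graph: an
# initial risk label on one node bleeds across the network, implicating
# people who were never themselves observed doing anything.

edges = [("a", "b"), ("b", "c"), ("c", "d")]  # a hypothetical association graph

# Build an undirected adjacency list.
graph: dict[str, list[str]] = {}
for u, v in edges:
    graph.setdefault(u, []).append(v)
    graph.setdefault(v, []).append(u)

risk = {node: 0.0 for node in graph}
risk["a"] = 1.0  # a single seed: one person flagged as a "security risk"

DAMPING = 0.5  # how strongly risk bleeds across each hop

for _ in range(10):  # iterate until scores stabilize
    updated = {}
    for node, neighbors in graph.items():
        neighbor_avg = sum(risk[n] for n in neighbors) / len(neighbors)
        # The seed keeps its score; everyone else inherits from neighbors.
        updated[node] = 1.0 if node == "a" else DAMPING * neighbor_avg
    risk = updated

print({node: round(score, 3) for node, score in sorted(risk.items())})
# b, c, and d all acquire nonzero risk purely through network position.
```

The specifics here are invented, but the structural point stands: once analysis operates over networks, being scored does not require being observed.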
In a networked society, where power resides in networks (Castells), what kinds of rights and standing do people have when they are implicated by decisions intended to address others? Can we envision a notion of networked harm?