
Washington and Lee Law Review - Roundtables

Roundtable

by Ian Huyett

In his address, Professor Calhoun used American Christian abolitionism to illustrate the beneficial role that religion can play in political debate. Surveying the past two millennia, I argue that Christian political thought has protected liberty in every era of the church’s dramatic history. Along the way, I rebut critics—from the left and right—who urge that Christianity’s political influence has been unhelpful or harmful. I also seek to show that statements like “religion has no place in politics” are best understood as expressions of arbitrary bias.

Roundtable

by David M. Smolin

Political and philosophical theorists have often advocated for the exclusion of some or all religious perspectives from full participation in politics. These approaches posit criteria—such as public accessibility, public reason, or secular rationale—to legitimate the exclusion. During the 1990s I argued, as an evangelical Christian, against such exclusionary theories, defending the right of evangelical Christians, traditionalist Roman Catholics, and any others restricted by such criteria to full and equal political participation.

Roundtable

by Wayne R. Barnes

Professor Calhoun, in the Article on which this symposium is based, asserts that it is permissible for citizens to publicly argue for laws or public policy solutions based on explicitly religious reasons. Calhoun candidly admits that he has “long grappled” with this question (as have I, though he for longer), and, in probably the biggest understatement in this entire symposium, notes that Professor Kent Greenawalt identified this as “a particularly significant, debatable, and highly complex problem.” Is it ever. I have a position that I will advance in this Article, but I wish to acknowledge at the outset that this is a difficult and complicated issue. It intersects with constitutional law, theology, political theory, jurisprudence, philosophy, and law and morality—and that’s just off the top of my head. As soon as one issue is addressed, twelve others raise their heads and confound. I respect Professor Calhoun and his thoughtful treatment of this issue immensely. Part of my trepidation in addressing this subject is that, as will be seen in this response, Professor Calhoun once held a position on this issue very similar to mine. He has since evolved beyond it, whereas I (to date) have not. The structure of this online symposium is that Professor Calhoun will have a chance to respond in writing to the points I make in this Article, and I will then have the opportunity to reflect and respond to his reply. I look forward to the exchange, and I know that I will be enriched for having participated in the dialogue.

Roundtable

by Jeremy Berkowitz, Michael Mangold & Stephen Sharon

In recent years, well-known cyber breaches have placed growing pressure on organizations to implement proper privacy and data protection standards. Attacks involving the theft of employee and customer personal information have damaged the reputations of well-known brands, resulting in significant financial costs. As a result, governments across the globe are actively examining and strengthening laws to better protect the personal data of their citizens. The General Data Protection Regulation (GDPR) updates European privacy law with an array of provisions that better protect consumers and require organizations to focus on accounting for privacy in their business processes through “privacy by design” and “privacy by default” principles. In the US, the National Privacy Research Strategy (NPRS) makes several recommendations that reinforce the need for organizations to better protect data.

In response to these rapid developments in privacy compliance, data flow mapping has emerged as a valuable tool. A data flow map depicts the flow of data through a system or process, enumerating the specific data elements handled and identifying the risks at different stages of the data lifecycle.

This Article explains the critical features of a data flow map and discusses how mapping may improve the transparency of the data lifecycle, while recognizing the limitations in building out data flow maps and the difficulty of keeping them up to date. The Article then explores how data flow mapping may support data collection, transfer, storage, and destruction practices pursuant to various privacy regulations. Finally, a hypothetical case study shows how an organization can use data flow mapping to stay compliant with privacy rules and to improve the transparency of its information flows.
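
To make the idea concrete, here is a minimal sketch, in Python, of how a data flow map might be modeled: each flow records its endpoints, the specific data elements handled, its lifecycle stage, and any known risks, so exposure can be reported stage by stage. All names and example flows are illustrative assumptions, not drawn from the Article.

    from dataclasses import dataclass, field

    STAGES = ("collection", "transfer", "storage", "destruction")

    @dataclass
    class DataFlow:
        source: str
        destination: str
        elements: list        # specific data elements handled, e.g. ["email"]
        stage: str            # one of STAGES
        risks: list = field(default_factory=list)

    # Hypothetical flows; a real map would enumerate every system and process.
    flows = [
        DataFlow("signup form", "CRM database", ["name", "email"],
                 "collection", ["over-collection"]),
        DataFlow("CRM database", "analytics vendor", ["email"],
                 "transfer", ["cross-border transfer"]),
        DataFlow("CRM database", "cold backup", ["name", "email"],
                 "storage", []),
    ]

    # Report risks stage by stage, as a data flow map surfaces them
    # across the data lifecycle.
    for stage in STAGES:
        for f in (f for f in flows if f.stage == stage):
            risks = ", ".join(f.risks) or "none identified"
            print(f"{stage}: {f.source} -> {f.destination} "
                  f"[{', '.join(f.elements)}] risks: {risks}")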

Roundtable

by Chetan Gupta

This paper examines the hypothesis that individual actors in a marketplace can drive the adoption of particular privacy and security standards. It aims to explore the diffusion of privacy and security technologies in the marketplace. Using HTTPS, Two-Factor Authentication, and End-to-End Encryption as case studies, it tries to ascertain which factors are responsible for the successful diffusion of standards that improve the privacy of large numbers of users. Lastly, it explores whether the FTC may come to view a widely diffused standard as a necessary security feature for all actors in a particular industry.

Based on the case studies chosen, the paper concludes that while single actors or groups often do drive the adoption of a standard, they tend to be significant players in the industry or otherwise well positioned to drive adoption and diffusion. The openness of a new standard can also contribute significantly to its success. When a privacy standard becomes industry-dominant on account of a major actor, the cost to other market participants appears not to affect its diffusion.

A further conclusion is that diffusion is easiest in consumer-facing products when it involves little to no inconvenience to consumers, is carried out on the back end, and yet yields tangible, visible benefits, prompting consumers to ask why other actors in that space are not implementing the standard. Actors who fail to adopt the standard may also face reputational risks and lose market share.

Roundtable

by Ivan L. Sucharski & Philip Fabinger

To prepare for the age of the intelligent, highly connected, and autonomous vehicle, a new approach to granting consent, managing privacy, and interacting quickly and meaningfully is needed. Additionally, in an environment where personal data is rapidly shared with a multitude of independent parties, there is a need to reduce the information asymmetry that currently exists between users and data-collecting entities. This Article rethinks the traditional notice-and-consent model in the context of real-time communication between vehicles, between vehicles and infrastructure, and between vehicles and their other surroundings, and proposes a re-engineering of current privacy concepts to prepare for a rapidly approaching digital future. In this future, multiple independent actors such as vehicles or other machines may seek personal information at a rate that makes the traditional informed consent model untenable.

This Article proposes a two-step approach. The first step, an attempt to meet and balance users’ need for a seamless experience with their right to privacy, is a less static consent paradigm better able to support personal data in systems that use machine-based real-time communication and automation. The second step is a more radical rethinking of the current privacy protection system: a vision of “Privacy as a Service,” an independently managed method of granular technical privacy control that can better protect individual privacy while facilitating high-frequency communication in a machine-to-machine environment.
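
As a rough illustration of what such a service might do, the Python sketch below answers machine-to-machine data requests against granular rules the user has set once, without prompting for consent on each request. The rule format, names, and requester classes are assumptions for illustration, not the authors’ design.

    from dataclasses import dataclass

    @dataclass
    class Rule:
        data_type: str        # e.g. "location"
        requester_class: str  # e.g. "traffic_infrastructure"
        max_retention_s: int  # how long the requester may keep the data

    # Granular preferences the user sets once; the service then
    # enforces them automatically on every request.
    user_policy = [
        Rule("location", "traffic_infrastructure", max_retention_s=60),
        Rule("speed", "nearby_vehicle", max_retention_s=5),
    ]

    def authorize(data_type: str, requester_class: str, retention_s: int) -> bool:
        """Answer a high-frequency machine-to-machine request without
        prompting the user for consent each time."""
        return any(
            rule.data_type == data_type
            and rule.requester_class == requester_class
            and retention_s <= rule.max_retention_s
            for rule in user_policy
        )

    # A roadside unit asks for location data it will keep for 30 seconds.
    print(authorize("location", "traffic_infrastructure", 30))  # True
    print(authorize("location", "insurance_provider", 30))      # False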

Roundtable

by Molly Jackman & Lauri Kanerva

Increasingly, companies are conducting research so that they can make informed decisions about what products to build and what features to change. These data-driven insights enable companies to make responsible decisions that will improve people’s experiences with their products. Importantly, companies must also be responsible in how they conduct research. Existing ethical guidelines for research do not always robustly address the considerations that industry researchers face. For this reason, companies should develop principles and practices around research that are appropriate to the environments in which they operate, taking into account the values set out in law and ethics. This paper describes the research review process designed and implemented at Facebook, including the training employees receive and the steps involved in evaluating proposed research. We emphasize that there is no one-size-fits-all model of research review that can be applied across companies, and that processes should be designed to fit the contexts in which the research is taking place. However, we hope that general principles can be extracted from Facebook’s process that will inform other companies as they develop frameworks for research review that serve their needs.
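
The paper’s actual process is richer than code can capture, but as a loose, hypothetical illustration of the triage step such a review pipeline might include, the Python sketch below routes proposals toward deeper review when they touch sensitive topics or vulnerable populations. The criteria and names are assumptions for illustration only, not Facebook’s actual rules.

    # Topics that trigger the most scrutiny in this hypothetical sketch.
    SENSITIVE_TOPICS = {"health", "political views", "financial status"}

    def triage(topics: set, vulnerable_population: bool,
               new_data_collection: bool) -> str:
        """Return the review path for a proposed study."""
        if topics & SENSITIVE_TOPICS or vulnerable_population:
            return "full committee review"
        if new_data_collection:
            return "privacy specialist review"
        return "standard manager review"

    print(triage({"health"}, False, True))      # full committee review
    print(triage({"ui layout"}, False, False))  # standard manager review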

Roundtable

by Effy Vayena, Urs Gasser, Alexandra Wood, David R. O'Brien, Micah Altman

Emerging large-scale data sources hold tremendous potential for new scientific research into human biology, behaviors, and relationships. At the same time, big data research presents privacy and ethical challenges that the current regulatory framework is ill-suited to address. In light of the immense value of large-scale research data, the central question moving forward is not whether such data should be made available for research, but rather how the benefits can be captured in a way that respects fundamental principles of ethics and privacy.

In response, this Essay outlines elements of a new ethical framework for big data research. It argues that oversight should aim to provide universal coverage of human subjects research, regardless of funding source, across all stages of the information lifecycle. New definitions and standards should be developed based on a modern understanding of privacy science and the expectations of research subjects. In addition, researchers and review boards should be encouraged to incorporate systematic risk-benefit assessments and new procedural and technological solutions from the wide range of interventions that are available. Finally, oversight mechanisms and the safeguards implemented should be tailored to the intended uses, benefits, threats, harms, and vulnerabilities associated with a specific research activity.

Development of a new ethical framework with these elements should be the product of a dynamic multistakeholder process that is designed to capture the latest scientific understanding of privacy, analytical methods, available safeguards, community and social norms, and best practices for research ethics as they evolve over time. Such a framework would support big data utilization and help harness the value of big data in a sustainable and trust-building manner.

Roundtable

by Dennis D. Hirsch, Jonathan H. King

Today, organizations globally wrestle with how to extract valuable insights from diverse data sets without invading privacy, causing discrimination, harming their brand, or otherwise undermining the sustainability of their big data projects. Leaders in these organizations are thus asking: What management approach should businesses employ to achieve the tremendous benefits of big data analytics sustainably, while minimizing the potential negative externalities?

This Paper argues that leaders can learn from environmental management practices developed to manage the negative externalities of the industrial revolution. First, it shows that, along with its many benefits, big data can create negative externalities that are structurally similar to environmental pollution. This suggests that management strategies to enhance environmental performance could provide a useful model for businesses seeking to develop their personal data assets sustainably. Second, this Paper chronicles environmental management’s historical progression from a back-end, siloed approach to a more proactive and collaborative “environmental management system” method. An approach modeled after environmental management systems—a Big Data Management System approach—offers an effective model for managing data analytics operations to prevent negative externalities.

Finally, this Paper shows that a Big Data Management System approach aligns with: (A) Agile software development and DevOps practices that companies use to develop and maintain big data applications, (B) best practices in Privacy by Design and Privacy Engineering, and (C) emerging trends in organizational management theory. At this critical, formative moment when organizations begin to leverage personal data to revolutionary ends, we can readily learn from environmental management systems to embrace sustainable big data management from the outset.
