A program of the Center for Inquiry
We are interested in promoting beneficial technologies, which nanotechnology certainly promises to be, and in doing so in ways that are efficient, innovative, and ethical. At the same time, we must consider how much and what type of regulation this new technology warrants. Regulation can be proactive, supporting innovation through incentives; it can also have a negative impact by imposing restrictions on researchers. Governments can spur research with funding or inhibit it through law. During the George W. Bush administration, for example, federal funding for embryonic stem-cell research was restricted by executive action to a small set of preexisting cell lines.
There are societal and individual concerns regarding the environment, safety, and security that may warrant regulating a new technology in some form, whether by governments, international bodies, or the researchers and innovators themselves. Past mistakes in developing and implementing technologies, and the resulting harms to individuals, populations, and the environment, offer lessons by which nanotechnology can be introduced more carefully into the stream of commerce and unnecessary harms avoided. We should be mindful, however, that critics’ exaggeration of risks and irrational public fears have also proved harmful to society, as worthwhile new technologies have been stifled or delayed when they were needed most.
Regulation has had its failures. Sometimes safety and security issues are poorly understood and not appropriately anticipated. Regulating authorities may focus on irrelevant features of the technologies or unlikely risks. Moreover, the costs associated with regulation, both economic and social, may be greater than those associated with the risks. Regulating too much too early may be needless, yet failing to identify real hazards or harms can be disastrous. Technology assessment is but a modern, less-than-perfect reaction to past failures. It is easy to make an ethical case for caution, even where the risks are difficult or impossible to gauge. The potential for public distrust of an entire sector of technological development based upon harms that could have been anticipated or avoided argues for at least a public, concerted effort to confront such possibilities and effectively plan for them. Worse yet, the potential for real public harms weighs heavily against blindly pursuing the technology. Yet the ethical necessity of caution does not necessarily mean adopting the so-called “precautionary principle,”1 which has led in the recent past to delayed adoption of important technologies whose risks appear to have been overblown.
Although many have briefly touched on sci-fi, dystopian visions of “gray goo” and dismissed them as neither imminent nor, perhaps, technically feasible within the next ten years, there are other potential harms that bear consideration and over which some regulation, or at least self-policing by researchers, should perhaps be exercised. Because nanotechnology involves extremely small components, research will likely become harder to track, contain, or regulate as the field matures. Now is therefore a good time to consider the reasonably anticipated risks of nanotechnology and to take measures to avoid some of the possible harms. Below, we briefly discuss each of the areas outside of intellectual property (IP) law in which regulation warrants consideration, and examine the potential benefits and harms that both under- and overregulation might pose. (Intellectual property law is excluded from this analysis because it is driven principally by commercial, rather than ethical, considerations.)
Recently, ethicists and innovators have begun to seriously apply the notion of justice intergenerationally. Borrowing from political philosopher John Rawls and from Immanuel Kant, this concept requires us to consider the impact of our intentional actions upon future generations as well as on our contemporaries. One arena in which this plays out is the environment.2 Societies have spent billions of dollars remediating environmental harms resulting from technologies that were implemented in the past with little awareness of, or concern for, their potential impact.
The budding chemical industry of the late nineteenth and early twentieth centuries produced enormous wealth and helped bring about a revolution in lifestyles. Cheaper, safer, and more effective chemicals and products were introduced in personal hygiene, pharmaceuticals, and household products. Employment was created as well, lifting tens of thousands of workers into the middle class throughout North America and Europe. The legacy, however, of the early days of the chemical industry is hundreds of highly toxic sites where industries made chemicals or disposed of their waste products.3 In the mid-1960s and early 70s, governments throughout the world began the difficult and costly task of locating and cleaning up toxic sites, often only after they had taken their toll on the health of local populations. In the 1940s and 50s, the emerging nuclear industry exacerbated and complicated the nature of environmental pollution. Nuclear wastes have been accumulating around the world ever since, primarily at nuclear power generation sites, in forms that have proven difficult to contain and that have taken a toll on human, animal, and even plant health.
Waste products have not been the only source of environmental contamination. Sometimes, products themselves are later discovered to pose health and environmental hazards. A prominent example is DDT, an effective and useful pesticide that aided the so-called green revolution, by which the agricultural industry has kept up with the contradictory demands of a growing population and shrinking amounts of arable land. Unfortunately, DDT proved to be a persistent environmental contaminant that devastated bird populations even while it was fighting pests in food crops. Moreover, it accumulated up the food chain into human systems, where it has been linked to cancers and other health problems. Asbestos had a similar story. Hailed as a miraculous flameproof mineral suitable for insulation and even capable of being woven into clothing, its toxic effects were recognized only later: because of the size and structure of asbestos particulates, they proved carcinogenic when inhaled, and they were all too easily inhaled.
Today, carbon dioxide is the focus of an ongoing dispute regarding the unintended consequences of scientific progress and industrial development. CO2 is a natural product of respiration in animals and of decomposition, and it is the raw material of photosynthesis in plants. CO2 is necessary for life on Earth as we know it. However, its concentration in the atmosphere has historically been low, well under 1 percent of the total (about 0.033 percent). Most of Earth’s carbon has been sequestered in benign forms such as living plants, minerals, and the products of plant and animal decay. The latter have long been sources of fuel: wood and then coal proved to be valuable sources of heat in northern climates and have been used as household fuels for thousands of years.
In the past century and a half, however, the growing energy demands of industrialization brought an increasing demand for “fossil” fuels. As scientists now generally agree, although fossil fuels helped drive much of the economic and technical progress of the industrial and now post-industrial era, they came with a cost. By releasing significant quantities of otherwise sequestered CO2 into the atmosphere, humans may have changed Earth’s climate for some time to come, even if we have now successfully begun to transition to other, “carbon neutral” energy sources and practices. Ironically, this unintended damage may be largely the result of the very human activities that brought us to our current level of development.
The lesson of these past failures is that new technologies carry unanticipated dangers, pose incalculable risks, and often have unintended consequences. These risks and consequences may outweigh the value of the technology, and scientists and innovators alike ought to estimate as best they can both the risks and the chances that unanticipated, unintended consequences will prove harmful, and to what degree. Of course, our ability to foresee such consequences is limited. The best anyone may be able to accomplish during the early stages of a technology is to extrapolate from what is known about the fate of similar technologies and to mitigate as far as possible the possibility of irreversible harm.
In fact, because of past miscalculations and unanticipated harms, some of the environmental damage posed by nanotechnology and its biological cousin, synthetic biology, is being anticipated and prepared for. Numerous professional, national, and international efforts have already begun, and regulations governing environmental emissions of nanoparticles have started to be promulgated. Meanwhile, a balance is being struck between mere caution and the precautionary principle, under which potential harms are avoided through extreme caution without regard to actual risk (the magnitude of a harm weighted by its likelihood). At these early stages, this balance seems likely to avoid the path that Europe took with genetically modified foods, which were all but banned. The precautionary principle’s application to GMOs set the countries that applied it back several years technologically, leaving them now trying to catch up quickly. In nanowares, this appears not to be happening. Even so, rules regarding the environmental release of nanoparticles and the application of nanotechnology to consumer products are emerging with an eye on past mistakes.
Environmental concerns do warrant special consideration, as products released into the environment often present persistent harms. Whereas exposure to mechanical technologies or even the testing of pharmaceuticals poses individual risks to users or subjects, releasing products into the environment, where exposure is geographically dispersed and often unknown to those exposed, threatens individuals who have not willingly been put at risk. If nanoparticles emitted during manufacture or through finished products enter the environment and pose threats to persons beyond those who knowingly make or consume them, then a greater ethical duty emerges. This duty was overlooked when Hooker Chemical disposed of chemical wastes at Love Canal in Niagara Falls, New York, a site that became, in the 1970s, the hallmark case of environmental toxins resulting from chemical manufacturing. People were injured who had no connection with the industry and had neither benefited from nor participated in the manufacture of chemicals or the disposal of wastes. They never consented to their exposure. As with the Belmont Principles of applied bioethics, we should consider the consent of those affected by our actions when determining the level of care owed.4 People may be free to submit themselves to risks, but environmental consequences affect others who made no such choice.
Cigarette smoke contains nanoparticles, and these particles have been found to be carcinogenic. People are free to expose themselves to hazards that are self-directed, as smoking is. But if secondhand smoke poses an elevated risk of cancer to nonsmokers who have no choice about inhaling it, then we must consider what duties are owed, and by whom, to those who are environmentally exposed through no choice of their own.
CO2 is persistent; its accumulation takes decades or longer to reverse. The same is true of radioactivity. When persistent, difficult-to-contain hazards enter the environment, affecting those who are not direct beneficiaries or end users, or who may not even be aware of their exposure, an ethical duty arises. Manufacturers of products that can have negative, persistent environmental consequences have a duty at least to inform, and more properly to mitigate, potential effects to the greatest extent possible. The duty to inform extends to users, to nonusers who might be exposed, and to future generations (in case the technology falls out of favor yet the harms persist).
These duties may imply some measure of regulation, but they mostly encourage scientists and innovators to do sufficient testing and to explore the potential for negative environmental impact before designing manufacturing processes and before releasing a product into the stream of commerce. Moreover, regulation can be self-originating. In the best cases, scientists and industrialists are properly cognizant of the long-term impact of their actions and voluntarily take proper precautions. Overall, this encourages increased public trust, responsible and unimpeded inquiry, and useful innovation with minimal negative consequences.
Beyond environmental concerns, nanotechnology (as with all technology) poses risks to those involved in the research and manufacturing, as well as to knowing end-users. How should those at the forefront of the development of the technology consider these safety concerns?
Humans engage in all sorts of risky behaviors, often voluntarily and with full knowledge of the potential harms and even of their likelihood. Every day, people choose to drive their cars, one of the riskiest routine activities in which one can engage. The chances of harm from automobiles, and the severity of those harms, are much higher than the risk of harm from any existing genetically modified organism. To what extent must those involved in the development and distribution of technologies inform and protect the researchers, laborers, consumers, and end-users directly involved with those technologies? And to what extent ought governments to regulate the duties involved?
A general principle of liberalism as developed in the Enlightenment is that governmental regulation of private activities ought to be minimal or nonexistent. Free markets ought to be encouraged, both for ideas and in economics; the invisible hand will adjust our behaviors according to the harms or goods that flow from our unimpeded activities. Although this set of propositions remains generally accepted, and the spread of free markets and prosperity have more or less correlated, there are notable exceptions, tolerated especially where some “market failure” has been perceived (even where it might not actually have occurred). Market failures have certainly occurred at the nexus of ethics and technology. We have noted above numerous cases in which technology progressed without sufficient knowledge or reflection, helping to cause or contribute to harms that could have been avoided. Sometimes, lapses of judgment in the form of simple negligence were to blame. In other cases, people did what we might consider morally wrong: introducing a technology known to cause harm without properly alerting users, mitigating damages, or accepting responsibility. And sometimes products were developed and marketed without any possibility of foreknowledge of their harms, yet when evidence of harm was later discovered, no further action was taken. Take tobacco as an example. Sometimes industry fails to self-regulate and markets similarly fail (in the case of tobacco, partly owing to the addictive nature of the product), and so external regulation may be perceived to be necessary.
Why does self-regulation sometimes fail? Because we are morally imperfect, and even the best-intentioned people can have their judgment clouded by conflicting duties, self-interest, and even self-delusion. Market failures in ethics are responsible for the foundation of modern applied bioethics. The current set of foundational principles in bioethics emerged from the Nuremberg trials of Nazi physicians who, sometimes motivated in part by genuine scientific curiosity or even by the desire to develop better medical knowledge and techniques, had conducted experiments on human subjects that were clearly immoral. The universally repugnant atrocities revealed at the Nuremberg trials inspired the development of the Nuremberg Code, a set of ethical principles to guide future research on human subjects. Among these principles are the necessity of voluntary consent, the beneficence of the research (Is it aimed toward good ends with good intentions?), the requirement to avoid unnecessary harm and unnecessary risk, and the requirement that research be performed only by qualified researchers. These principles derive from moral theories espoused by philosophers over the past several millennia. Yet because lapses by researchers have recurred more often than we would like to think, the Nuremberg Code’s principles have since been formalized in a number of laws, regulations, and professional standards and codes.
Even after the Nazi atrocities, researchers failed to abide by the Nuremberg principles in a number of noteworthy situations. Stanley Milgram’s psychological experiments at Yale, designed to discover the source of unethical behavior (to show why good people will do bad things when ordered to do so by someone in a position of authority), were arguably unethical according to the Nuremberg principles. Milgram did not obtain proper informed consent from his subjects, who were led to believe they were administering electric shocks as punishment to people the subjects did not know were actors. Milgram’s experiments started in 1961, shortly after the trial of the Nazi war criminal Adolf Eichmann began.5 It was only after the infamous Tuskegee Syphilis Study that the Nuremberg Code was formalized into legally enforceable rules and modern bioethics became a more formal, applied field. The Tuskegee Study lasted forty years, following a cohort of syphilis-infected sharecroppers in Alabama. The subjects were all African Americans, and they were given regular medical checkups as part of the study, which was designed to document the full range of symptoms over the course of the disease. Though the study began before any effective treatment for the disease was known, penicillin was found during its course to be an effective cure. Nonetheless, the study subjects were never given access to the drug; for decades they remained untreated and uninformed of the existence of an effective treatment. As a result, they deteriorated from untreated syphilis even while other victims of the disease who were not part of the study were cured. The study continued collecting data on the original cohort until 1972.
The study was terminated only after the press learned of the plight of its subjects and public outcry ensued.6 In the wake of the Tuskegee Study, numerous efforts were undertaken to prevent future breakdowns of research ethics involving human subjects. An independent commission was formed that authored the seminal Belmont Report, released in 1979. The “Belmont Principles,” which define the duties of scientists to human subjects, include respect for persons, beneficence, justice (involving vulnerable subjects and populations), fidelity (involving fairness and equality), non-maleficence, and veracity.
Given the lapses by scientists, governments, and corporations in permitting harms through the development of technology and the marketing of products outside of medicine, might we consider applying the Belmont Principles to scientific research in general? I have argued as much in the journal Science and Engineering Ethics.7 Specifically, if the Belmont Principles embody ethical duties owed to human subjects of research, shouldn’t they also apply to humanity as a whole, or at least to those directly affected by research through the development and introduction of products into the stream of commerce? For instance, workers exposed to hazardous chemicals during industrial processes who were not properly informed about the nature and consequences of that exposure could not give full informed consent to it. Consumers who purchased Ford Pintos without foreknowledge of their potential to explode in rear-end collisions, or of the fact that Ford (or even a knowledgeable customer) could have made each car safer with an inexpensive upgrade, were treated unfairly, in violation of the manufacturer’s duty of fidelity. Why should the Belmont Principles apply only to the duties of researchers toward human subjects in studies, and not to all scientists and developers (including those who move research into practice) whose work may have direct effects on consumers? As a modern example of the extent of damage that can be done by both scientists and nonscientists making decisions based upon the current state of scientific understanding, consider the BP oil spill in the Gulf of Mexico. Numerous technical fixes could have helped prevent the blowout that caused the largest accidental marine oil spill in history, killed eleven rig workers, and now threatens a generation of residents of the Gulf region.
Careful attention to the principles enunciated in the Belmont Report, including the need to treat people with dignity and to avoid unnecessary risks, might have helped put the conflicting desire for profits in its rightful place.
David Koepsell is an author, philosopher, attorney (retired), and educator whose recent research focuses on the nexus of science, technology, ethics, and public policy. He has provided commentary regarding ethics, society, religion, and technology on numerous media outlets, including MSNBC, Fox News Channel, the Guardian, the Washington Times, National Public Radio, Radio Free Europe, Air America, the Atlanta Journal-Constitution, and the Associated Press. He has been a tenured associate professor of philosophy at the Delft University of Technology, Faculty of Technology, Policy, and Management, in the Netherlands; a visiting professor at UNAM (National Autonomous University of Mexico), Instituto de Investigaciones Filosóficas and Unidad de Posgrado, Mexico; director of research and strategic initiatives at the Comisión Nacional de Bioética in Mexico; and asesor de rector at UAM Xochimilco.