By Roger Brownsword & Morag Goodwin

The authors of Law and the Technologies of the Twenty-First Century (Cambridge University Press, 2012) examine the challenges and opportunities that emerging technologies present for regulatory frameworks, looking specifically at four generic challenges: prudence, legitimacy, effectiveness, and connection.

The twenty-first century, we can be sure, will be a time of relentless technological development. Whether we look at the projections for information and communication technologies (ICTs), biotechnologies, nanotechnologies, or neurotechnologies, the story is much the same: these emerging technologies will have transformative effects on the way that we work and play, on our health and our wealth, and, no doubt, on our happiness too. It has never been more important that we get the regulatory environment right.

Wherever we look, regulators are grappling with the challenges presented by the changing technological landscape. Sometimes the response consists in tweaking existing law – for example, the Crown Prosecution Service (CPS) is currently consulting on the criteria for prosecuting offensive and/or threatening communications sent via social media; at other times, the law needs a major overhaul – data protection is an obvious example. Regulators are not short of advice on how to approach technological developments: there are numerous reports addressing particular applications of technologies (for example, reports on nanomedicine, nanofoods, enhancement in the workplace, pre-conception genetic testing, brain imaging and the criminal law, and so on) or new developments (synthetic biology, for example, has attracted a good deal of attention). The most recent report from the Nuffield Council on Bioethics, with emerging biotechnologies as its topic,1 is wide-ranging, but it still neglects significant parts of the technological array – and much the same might be said about the most recent report (on ICTs) from the European Group on Ethics in Science and New Technologies (EGE).2

Given such a burgeoning literature, and the increasing specialisation of such writings, why did we write Law and the Technologies of the Twenty-First Century?3 Quite simply, we did so with a view to setting a frame that is broad enough to capture the full range of technological development and the accompanying regulatory debates. The book is agenda-setting in the modest sense that it provides regulators with a frame within which to consider how they can respond to technological developments; likewise, the same frame might be used to structure a meeting between regulators and their publics to assess the adequacy of regulatory responses to rapid technological developments.

Our proposed agenda identifies four generic regulatory challenges: the challenges of prudence, legitimacy, effectiveness, and connection. Stated briefly, regulatory prudence relates to concerns about human health and safety, as well as the integrity of the environment. In the case of regulatory legitimacy, the concerns largely relate to whether regulators are trying to do the right kind of thing in the right kind of way. With regard to regulatory effectiveness, the question is whether regulatory interventions are fit for purpose, in the sense that they have their intended impact. Finally, the focus of regulatory connection is on getting regulation sustainably connected to rapidly changing technologies. When applied to particular technologies, some of these challenges will be more relevant than others. For example, the Nuffield Council on Bioethics’ report on biotechnologies raises questions about prudence and legitimacy rather than about effectiveness; and, in the early days of e-commerce, the priority was to get contract law connected to on-line transactions rather than to debate questions of prudence or legitimacy. That said, if regulators are to achieve their regulatory goals, they will need to consider all four challenges.

To start with the challenge of regulatory prudence: on the face of it, ICTs – unlike biotechnologies or nanotechnologies – do not present themselves as ‘dangerous’ or potentially ‘catastrophic’. However, they can enable dangerous activities. For example, the concerns that synthetic biology elicits about biosafety and bioterrorism are only amplified by the successful synthesis of a complete poliovirus genome using a viral genome map freely available on the Internet. Moreover, it is clear that our increasing reliance on ICTs in all aspects of our lives creates a vulnerability that needs to be addressed – witness, for example, the famous distributed denial-of-service (DDoS) attacks on Estonia in 2007, which severely disabled both governance and commerce.

Where such prudential concerns are elicited, the expectation is that regulators will take steps to assess and to manage the risks. Often, there will be a range of views as to the seriousness and likelihood of particular harms actually happening, as well as different judgments as to the likely benefits of the technology. Faced with such a range of opinion, regulators are unlikely to be able to satisfy all views; and questions will persist as to whether regulators have succeeded in confining such risks as there are to an ‘acceptable’ level (whatever that may mean). In many cases, however, there is an additional complication: even with the benefit of expert advice, regulators might remain uncertain about the kind of harms that might arise or about the likelihood of such harms eventuating. In its report, the Nuffield Council on Bioethics rightly identifies ‘uncertainty’ – alongside ‘ambiguity’ and ‘transformative potential’ – as one of the three defining characteristics of emerging biotechnologies. In such a context, standard risk assessment techniques will not suffice and we move into the territory of precautionary reasoning. By and large, ICTs were developed without engaging the precautionary principle; but, as these technologies spread and deepen their reach, we might expect some precautionary issues to be raised.

Turning to regulatory legitimacy: according to the EGE, the regulation of ICT needs to address such values as ‘autonomy; identity; privacy and trust; responsibility; [and,] justice and solidarity’.4 If the regulatory environment does not adequately reflect the weight and significance of these values, it will invite complaints as to its legitimacy. However, one of the complexities for regulators is that different ethical constituencies will make their own particular legitimacy demands, interpreting values in their own way – notoriously so, for example, in relation to the value of human dignity. In modern societies, the dilemma for regulators lies in knowing how to answer to a plurality of such constituencies, each with its own criterion of right action, each with its own view of what regulators should be doing if they are to do the right thing. For example, if we are utilitarians, we will expect regulators to select the option that comes out best in terms of net utility; whereas, if we demand respect for human rights or human dignity, there are likely to be some red lines that we insist should never be crossed.

One option for responding to such plurality is to rely on the integrity of the regulatory process rather than on its outcome. Such is the approach advocated by the Nuffield Council on Bioethics when it insists that public debate should be broadened beyond expert communities to be fully inclusive. According to the Council, the object of public engagement should be not so much to ‘educate’ the public as to hear their (reasonable) views. At the end of the process, differences should have been minimised and regulators should be in a position to act on reasons that are at least acceptable to all reasonable persons. Accordingly, proponents of such an approach claim that, with the right kind of process, there is a reasonable chance that even the most controversial technologies can continue to be developed in ways that are generally viewed as beneficial and that pluralistic societies can settle their differences in a civilised way. However, the difficulty for regulators in creating regulatory environments that are widely viewed as legitimate remains: so wide is the plurality of normative perspectives within contemporary societies that we cannot always agree on what counts as a reasonable view, nor on how far those who do not hold such a view can be expected to compromise. Instead, the criterion of reasonableness tends to privilege certain middle-ground views, such as those based upon liberal values that seek to balance a utilitarian approach with human rights guarantees. Focusing on the process itself thus does not provide regulators with an easy way out of the legitimacy dilemma.

There is another, immensely important, legitimacy issue that arises from the use of technologies as regulatory tools. Following Lawrence Lessig’s seminal work, it is now a commonplace that regulators have at their disposal a variety of regulatory instruments, including various technological instruments or (in an ICT context) ‘code’.5 While intelligent use of these instruments (such as CCTV, DNA profiling, GPS locating and tracking devices, DRM technologies, and so on) should improve the chances of achieving the regulatory purposes, there might nevertheless be questions about the acceptability of the particular means used. The legitimacy concerns here are rooted in the idea that, in a moral community, people try to do the right thing (meaning that they take account of the legitimate interests of others); that is, they act for the right reason. So long as technological tools are used merely to amplify the signal that it is in the self-interest of regulatees to comply – because the technology will detect non-compliers – there is already some threat to the conditions for moral community. But it is when code and design leave regulatees with no option other than compliance that the legitimacy of the means employed by regulators needs urgent consideration. The problem here is that, even if the technology (as Ian Kerr has put it) ‘automates virtue’,6 automated compliance is not the same as freely opting to do the right thing.

The shift from law (or ethics) to technological instruments changes the ‘complexion’ of the regulatory environment. Instead of guiding regulatees by prescribing what ought or ought not to be done, regulators signal what can or cannot be done. To comply or not to comply is no longer the question for regulatees; the only question is what, in practice, can be done. That said, it surely cannot be right to condemn all applications of technological management as illegitimate. For example, modern transport systems incorporate safety features that are intended to design out the possibility of human error. Nevertheless, we need to monitor the cumulative impact of regulators resorting to such instruments;7 and, of course, there is much to debate about the transparency, reversibility and location of these technological ‘fixes’ for non-compliance. Both in specialist regulatory circles and in the public square, we need to be talking about the acceptability of ‘techno-regulation’.
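
To make the contrast concrete, here is a minimal sketch (in Python; the speed-governor scenario and all names, such as SPEED_LIMIT and normative_rule, are invented for illustration). It shows the difference between a normative rule, which leaves the regulatee free to offend, and technological management, which designs the offence out altogether; it describes the general idea, not any actual system.

```python
# Illustrative only: the difference between prescribing what ought to be
# done and determining what can be done. All names are hypothetical.

SPEED_LIMIT = 70  # the regulatory standard (mph)

def normative_rule(requested_speed: int) -> int:
    """A legal rule: the driver ought not to exceed the limit, but
    remains free to comply or to offend (and risk a sanction)."""
    if requested_speed > SPEED_LIMIT:
        print("Warning: exceeding the limit is an offence.")
    return requested_speed  # non-compliance remains possible

def technological_management(requested_speed: int) -> int:
    """Design-based regulation: a speed governor makes non-compliance
    impossible; 'ought not' becomes 'cannot'."""
    return min(requested_speed, SPEED_LIMIT)  # capped by design

print(normative_rule(90))            # 90 - the driver chose to offend
print(technological_management(90))  # 70 - no choice was available
```

The second function also shows why ‘automated virtue’ troubles moralists: the capped output tells us nothing about what the regulatee would freely have chosen to do.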

One of the key challenges for regulators is to intervene in a way that is effective in responding to the concerns that we have already highlighted, but that is also supportive of beneficial innovation. Audits of regulatory performance with regard to effectiveness need to check, for example, that innovation-supportive regimes are not operating in ways that might be counter-productive (patent thickets are notorious) or being compromised by other regulatory interventions; that liability regimes are designed to protect nascent innovative enterprises; that facilitative law is properly geared for innovation; and that regulation is neutral in the sense that future innovative technologies are not unfairly disadvantaged. In relation to the support of innovation, as with all other regulatory purposes, effectiveness turns on (i) the regulators themselves, (ii) the response (including the resistance) of regulatees, and (iii) disruptive factors external to both regulators and regulatees. In cases of regulatory ineffectiveness, the problem will often lie in more than one of these loci.

First, the reasons for regulatory ineffectiveness often lie with the regulators themselves – for example, they might be corrupt or prey to more subtle forms of capture; they might lack resources; or they might simply not be as competent as they need to be. Over the last two decades, much attention has been given to whether and how regulators can make smart choices, selecting the particular mix of regulatory instruments that will most effectively and efficiently deliver the regulatory goals.8 In the case of spam, for example, some will argue that regulators should stiffen the legislative controls; others that filters and similar technological fixes are more effective; others that more international cooperation is required; and yet others that the focus should be on changing the culture of the spammers. Smart regulators will respond to these calls by seeking out the most effective combination of instruments and approaches. However, no great smartness is required to appreciate that, where law-like rules are the instrument of choice, they need to be clear, not overly complex, not subject to constant revision, and so on. Arguably, the regulatory environment for many technologies suffers precisely from over-complexity, contradiction and too-frequent change.9
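
By way of illustration only, a technological ‘fix’ for spam might be as crude as the following keyword filter, sketched here in Python (the keyword list and threshold are invented for the example; real filters are typically statistical and far more sophisticated):

```python
# A toy keyword-based spam filter - purely illustrative of a
# technological fix deployed alongside legal controls.
# The keywords and threshold below are invented for this sketch.

SPAM_KEYWORDS = {"winner", "free money", "act now", "guaranteed"}
THRESHOLD = 2  # messages matching this many phrases are filtered out

def is_spam(message: str) -> bool:
    """Score a message by counting suspicious phrases."""
    text = message.lower()
    score = sum(1 for keyword in SPAM_KEYWORDS if keyword in text)
    return score >= THRESHOLD

inbox = [
    "You are a winner! Claim your free money and act now.",
    "Minutes of the regulatory working group, Tuesday 10am.",
]
print([msg for msg in inbox if not is_spam(msg)])  # only the second survives
```

Unlike a legal prohibition, the filter does not ask the spammer to comply; it simply removes the offending message from circulation – a small-scale instance of the techno-regulation discussed above.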

Second, it is essential that regulatees respond in the right way. The habitual criminal classes aside, our regulatory intelligence needs to account for, and to anticipate, potential non-compliance. For example, it should be no surprise that regulatees act on rational economic calculations; or that those who are opposed to a particular regulatory position might actively pursue whatever avenues for challenge and review are lawfully available; or that, where regulation is out of step with widely shared societal norms, regulatees might defy the regulatory position (as in the case, say, of peer-to-peer file sharing). Unless regulators are in the fortunate position of facing wholly compliant regulatees, they must either try to minimise resistance ex ante or have a strategy for dealing with it ex post.

Third, regulation might be affected by interference from an external source. Here, the problem is that of a disruptive externality – sometimes lawful, sometimes unlawful; sometimes intended, sometimes unintended; sometimes a natural disaster, sometimes a disaster of our own making; and so on. For example, the implementation of hi-tech criminal justice was disrupted (perfectly lawfully) by the European Court of Human Rights when, in the Marper case, it ruled that the National DNA Database took inadequate account of the privacy of citizens;10 and attempts to nurture innovative biotechnologies were disrupted by the European Court of Justice when, in the Brüstle case, it ruled that processes or products incorporating materials drawn from (destroyed) human embryos are excluded from patentability.11 Certainly, one of the major lessons of the last twenty years is that global and regional trade agreements, in conjunction with the development of the on-line world, have significantly reduced the effective influence of local regulators. When members of these trade clubs agreed to open their markets, they surely could not have anticipated how far they were surrendering local regulatory control – whether over a future world of e-commerce or over the regulation of food safety and human health.

Finally, there is the challenge of regulatory connection – a challenge of getting connected, staying connected, and reconnecting in the event of disconnection. How does regulation (or, at any rate, regulation in legal form) become disconnected? Sometimes the difficulty lies in a lack of correspondence between the descriptive words found in the regulation and the form that the technology now takes; at other times, the difficulty is that the original regulatory scheme is no longer adequate for the uses to which the technology is now put. There are many examples of this phenomenon: witness the European Commission’s eventual acknowledgement that ‘rapid technological developments and globalisation have profoundly changed the world around us, and brought new challenges for the protection of personal data’.12 In the same way, copyright law and its exceptions (crafted for the off-line world) are constantly challenged by the development of on-line technologies.

Such is the challenge of connection; what is the answer? Ideally, we want regulation to bind to the technology and to evolve with it. In pursuit of this ideal, regulators face a choice between taking a traditional hard law approach and leaving matters to self-regulation and, concomitantly, a softer form of law. Where the former approach is taken, the hard edges of the law can be softened in various ways – especially by adopting a ‘technology neutral’ drafting style, by writing codes of practice alongside the law, or by encouraging a culture of purposive interpretation in the courts. Conversely, where self-regulation and softer law are preferred, the regime can be hardened by moving towards a form of co-regulatory strategy. However, no matter which approach is adopted, there is no guarantee that it will be effective, and the details of the regulatory regime will always reflect a tension between the need for flexibility (if regulation is to move with the technology) and the demand for predictability and consistency (if regulatees are to know where they stand).

Although a huge amount of effort is now being applied to understanding the interfaces between regulation and emerging technologies, there is a long way to go. In this respect we are, as Michael Kirby has aptly observed, experts without a great deal of expertise.13 We know that it is difficult for regulators to deal with technological targets that move and morph; and we know that they might be tempted to rely on new technological tools to ensure ‘clean’ compliance. We know that there are challenges as well as opportunities. We also know that there might need to be trade-offs within the regulatory environment (for example, an improvement in legitimacy might be offset by a loss of effectiveness). We know that it is all very complex; and we certainly do not think that, in Law and the Technologies of the Twenty-First Century, we have come up with all – or even any – of the answers. However, by isolating the challenges of prudence, legitimacy, effectiveness, and connection, we have laid out the terms for a structured conversation about some regulatory questions that, if they are not already of concern to us all, very soon will be.

About the Authors

Roger Brownsword, who is Professor of Law at King’s College London and a visiting professor at Singapore Management University, is one of the leading European researchers in the field of regulation and technology. He has acted as a specialist adviser to parliamentary committees dealing with stem cells and hybrid embryos. From 2004 to 2010, he was a member of the Nuffield Council on Bioethics, during which time he served on the working party on public health. Currently, he is a member of the UK NHS Screening Committee, and he is Chair of the Ethics and Governance Council of UK Biobank.

Dr. Morag Goodwin is Associate Professor of Law at Tilburg Law School, the Netherlands. Her areas of specialisation include international law, notably law and development; international and European human rights law; non-discrimination law; Roma in the European legal context; and law and technology. She currently co-ordinates an EU-funded project – EDOLAD – to establish a joint doctoral programme in the field of Law and Development (www.edolad.eu); the programme will launch in 2014.

References

1. Nuffield Council on Bioethics, Emerging Biotechnologies (London, December 2012).

2. European Group on Ethics in Science and New Technologies, Ethics of Information and Communication Technologies (Opinion No. 26), Brussels, 22 February 2012.

3. Roger Brownsword and Morag Goodwin, Law and the Technologies of the Twenty-First Century (Cambridge: Cambridge University Press, 2012).

4. EGE Opinion No. 26, note 2 above, at p. 36.

5. Lawrence Lessig, Code and Other Laws of Cyberspace (New York: Basic Books, 1999).

6. Ian Kerr, ‘Digital Locks and the Automation of Virtue’ in Michael Geist (ed), From ‘Radical Extremism’ to ‘Balanced Copyright’: Canadian Copyright and the Digital Agenda (Toronto: Irwin Law, 2010) 247.

7. Karen Yeung, ‘Can We Employ Design-Based Regulation While Avoiding Brave New World?’ (2011) 3 Law, Innovation and Technology 1.

8. See, e.g., Ian Ayres and John Braithwaite, Responsive Regulation (Oxford: Oxford University Press, 1992); and Neil Gunningham and Peter Grabosky, Smart Regulation (Oxford: Clarendon Press, 1998).

9. Chris Reed, ‘How to Make Bad Law: Lessons from Cyberspace’ (2010) 73 MLR 903, esp. at 914-916.

10. S and Marper v United Kingdom (2009) 48 EHRR 50 (Grand Chamber).

11. Oliver Brüstle v Greenpeace eV, Case C-34/10, [2011] OJ C 362, 10.12.2011.

12. European Commission, Communication, ‘A Comprehensive Approach on Personal Data Protection in the European Union’, Brussels, 4 November 2010, COM(2010) 609 final, at 2.

13. Michael Kirby, ‘New Frontier – Regulating Technology by Law and “Code”’ in Roger Brownsword and Karen Yeung (eds), Regulating Technologies (Oxford: Hart, 2008) 367.