Regulating Technologies: Legal Futures, Regulatory Frames and Technological Fixes


While it is a truism that emerging technologies present both opportunities for and challenges to their host communities, the legal community has only recently begun to consider their significance. On the one hand, emerging information, bio-, nano- and neurotechnologies challenge policy-makers who aspire to put in place a regulatory environment that is legitimate, effective and sustainable; on the other hand, these same technologies offer new opportunities as potentially powerful regulatory instruments. In this unique volume, a team of leading international scholars address many of the key difficulties surrounding the regulation of emerging technological targets, as well as the implications of adopting technology as a regulatory tool.

How should we rise to the challenge of regulating technologies? How are the regulatory lines to be drawn in the right places, and how is the public to be properly engaged? How is precaution to be accommodated, and how can the law keep pace with technologies that develop ahead of the regulatory environment? How readily should we avail ourselves of the opportunity to use technology as a regulative strategy? How are we to understand such strategies and the challenges they raise? To what extent do they give rise to policy problems similar to those accompanying more ‘traditional’ regulatory instruments, or do they generate distinctive challenges? And, as the criminal justice system relies ever more heavily on technological assistance and a ‘surveillance society’ develops, is a regulatory regime that rules by technology compatible with rule of law values?
REGULATING TECHNOLOGIES
Legal Futures, Regulatory Frames and Technological Fixes
Edited by Roger Brownsword and Karen Yeung
Oxford and Portland, Oregon
2008

Published in North America (US and Canada) by Hart Publishing
c/o International Specialized Book Services
920 NE 58th Avenue, Suite 300
Portland, OR 97213-3786
USA
Tel: +1 503 287 3093 or toll-free: (1) 800 944 6190
Fax: +1 503 280 8832
E-mail: [email protected]
Website: http://www.isbs.com

© The editors and contributors severally, 2008

The editors and contributors have asserted their right under the Copyright, Designs and Patents Act 1988 to be identified as the authors of this work. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, without the prior permission of Hart Publishing, or as expressly permitted by law or under the terms agreed with the appropriate reprographic rights organisation. Enquiries concerning reproduction which may not be covered by the above should be addressed to Hart Publishing at the address below.

Hart Publishing Ltd, 16C Worcester Place, Oxford, OX1 2JW
Telephone: +44 (0)1865 517530
Fax: +44 (0)1865 510710
E-mail: [email protected]
Website: http://www.hartpub.co.uk

British Library Cataloguing in Publication Data: Data Available
ISBN: 978-1-84113-788-9
Typeset by Compuscript Ltd, Shannon
Printed and bound in Great Britain by TJ International Ltd, Padstow, Cornwall

CONTENTS

Contributors

Introductory Reflections
1. Regulating Technologies: Tools, Targets and Thematics (Roger Brownsword and Karen Yeung)
2. So What Does the World Need Now? Reflections on Regulating Technologies (Roger Brownsword)

Part One: Technology as a Regulatory Tool
3. Crime Control Technologies: Towards an Analytical Framework and Research Agenda (Ben Bowling, Amber Marks and Cian Murphy)
4. Towards an Understanding of Regulation by Design (Karen Yeung)
5. Internet Filtering: Rhetoric, Legitimacy, Accountability and Responsibility (TJ McIntyre and Colin Scott)
6. Perfect Enforcement on Tomorrow’s Internet (Jonathan Zittrain)
7. Criteria for Normative Technology: The Acceptability of ‘Code as Law’ in Light of Democratic and Constitutional Values (Bert-Jaap Koops)
8. A Vision of Ambient Law (Mireille Hildebrandt)
9. The Trouble with Technology Regulation: Why Lessig’s ‘Optimal Mix’ Will Not Work (Serge Gutwirth, Paul De Hert and Laurent De Sutter)

Part Two: Technology as a Regulatory Target
10. Cloning Trojan Horses: Precautionary Regulation of Reproductive Technologies (Han Somsen)
11. The Transplantation of Human Fetal Brain Tissue: The Swiss Federal Law (Andrea Büchler)
12. Tools for Technology Regulation: Seeking Analytical Approaches Beyond Lessig and Hood (Charles D Raab and Paul De Hert)
13. Conceptualising the Post-Regulatory (Cyber)state (Andrew D Murray)
14. Vicissitudes of Imaging, Imprisonment and Intentionality (Judy Illes)
15. Taming Matter for the Welfare of Humanity: Regulating Nanotechnology (Hailemichael Teshome Demissie)
16. Regulating Renewable Energy Technologies: The Chinese Experience (Deng Haifeng)

Closing Reflections
17. New Frontier: Regulating Technology by Law and ‘Code’ (Michael Kirby)

Index

CONTRIBUTORS

Professor Ben Bowling, School of Law, King’s College London
Professor Roger Brownsword, Director of TELOS, King’s College London, and Honorary Professor in Law at the University of Sheffield
Professor Andrea Büchler, University of Zurich, Switzerland
Hailemichael T Demissie, TELOS, King’s College London
Dr Haifeng Deng, Centre of Environment Resource and Energy Resource Legislation, Tsinghua University, Beijing, China
Professor Serge Gutwirth, Director of the Centre for Law, Science, Technology and Society, VUB, Brussels
Professor Paul De Hert, Centre for Law, Science, Technology and Society, VUB, Brussels, and Tilburg Institute of Law, Technology and Society (TILT)
Dr Mireille Hildebrandt, Erasmus University, Rotterdam, and Senior Researcher at the Centre for Law, Science, Technology and Society, VUB, Brussels
Professor Judy Illes, Professor of Neurology and Canada Research Chair in Neuroethics, University of British Columbia, Vancouver, Canada
Justice Michael Kirby, the High Court of Australia, Canberra, Australia
Professor Bert-Jaap Koops, Tilburg Institute of Law, Technology and Society (TILT)
Amber Marks, School of Law, King’s College London
TJ McIntyre, University College Dublin
Cian Murphy, School of Law, King’s College London
Dr Andrew Murray, Law Department, London School of Economics
Professor Charles D Raab, the Institute for the Study of Science, Technology and Innovation, University of Edinburgh
Professor Colin Scott, University College Dublin
Professor Han Somsen, Tilburg Institute of Law, Technology and Society (TILT)
Dr Laurent De Sutter, the Centre for Law, Science, Technology and Society, VUB, Brussels
Professor Karen Yeung, TELOS, King’s College London
Professor Jonathan Zittrain, Professor of Internet Governance and Regulation, University of Oxford

INTRODUCTORY REFLECTIONS

1
Regulating Technologies: Tools, Targets and Thematics

ROGER BROWNSWORD AND KAREN YEUNG

I. Introduction

The papers in this collection originate in an interdisciplinary conference on ‘Regulating Technologies’ that was held in London over what was a beautiful Easter weekend in April 2007. Formally, this event, which was sponsored by the Wellcome Trust and the Modern Law Review, marked the inauguration of TELOS, a new research centre for the study of Technology, Ethics and Law in Society, based in the School of Law at King’s College London. The conference was opened by Lawrence Lessig, whose seminal work on ‘code’ as a regulatory mode (or modality) needs no introduction[1]—suffice it to say that Lessig’s ideas formed an important backdrop for much of the conference discussion, as indeed they have served to inspire and to agitate a number of contributions to this book.

There was, of course, an intentional ambivalence in the formulation of the conference topic, an ambivalence that we now carry through into the title of this collection.
What is the focus of ‘regulating technologies’? Is it the regulation (or regulability) of technology that is focal, or is the emphasis on those technologies that act as regulatory instruments (or that have a regulative effect)? Drawing on this ambivalence, the conference programme mapped out an agenda of questions prompted by reflections on the interface between regulation and technology. While some questions addressed the way in which we strive to put in place and then maintain adequate regulatory environments for the promotion and limitation of new technologies, others addressed the utilisation of new technologies within the regulatory repertoire. Broadly speaking, these two sets of questions correspond with the principal research spearheads for TELOS, and they also facilitate the placement of the papers in this collection—the papers, for the most part, announcing whether their focus is on how to regulate emerging technologies or, rather, on how we might find ourselves being regulated by such technologies.[2]

[1] See, in particular, Lawrence Lessig, Code and Other Laws of Cyberspace (New York, Basic Books, 1999), and Code Version 2.0 (New York, Basic Books, 2006). Michael Kirby’s summarising comments, which include some short remarks about Lessig’s opening presentation, can be found online (accessed 20 May 2008).

In the first part of the collection, the papers review the scope, extent and significance of new technologies being employed as regulatory tools. In England and Wales, where there are already more than four million profiles on the national DNA database and where, so it is commonly said, we are each captured on CCTV several hundred times a day, there is some urgency in exploring the implications of a technological take-over of the criminal justice system.
Last autumn, the Nuffield Council on Bioethics, in its report on The Forensic Use of Bioinformation: Ethical Issues,[3] questioned the breadth of the powers given to the state to retain DNA profiles; and, at the time of writing, we await the outcome of the European Court of Human Rights’ consideration (in Marper) of the compatibility of domestic law with the United Kingdom’s commitment to the Convention rights.[4] Clearly, the practice of entrusting the state with several million non-anonymised DNA samples elicits serious concerns about the privacy of both the individuals concerned and their close relatives. Moreover, if we place the routine taking of DNA samples together with the retention of profiles within the larger context of surveillance and data collection, our concerns—not only about privacy, but also about the security and integrity of the data—are likely to be heightened.[5] As Mark Rothstein and Meghan Talbott[6] have warned:

    The prospect of expanded use of DNA forensics needs to be placed in context. In a world in which personal privacy is difficult to maintain against an onslaught of computer file sharing, surveillance cameras, biometric imaging, thermal imaging, and other technological ‘advances’, for many people, the last ‘off limit’ area for access to personal information is law enforcement. Assume that a hypothetical country routinely required all of its residents to submit the following items to the police: a DNA sample, a yearly photograph, handwriting exemplar, voiceprint, fingerprints, hair samples, retinal scans, bank statements, credit card information, health records, and other details of their personal life. Obviously, ready access to this information by police would help solve crimes. Nevertheless, such comprehensive information submission to law enforcement would be widely viewed as hallmarks of a repressive, totalitarian state.

[2] Inevitably, some papers address issues that straddle both sets of questions. In such cases, we have simply exercised an editorial judgment as to which set is dominant in the discussion and placed the piece accordingly. In the event, and happily, we find that the papers are evenly distributed between the two principal parts of the collection.
[3] London, September 2007.
[4] In R v Chief Constable of South Yorkshire Police, ex parte LS and Marper [2004] UKHL 39, the House of Lords held that s 82 of the Criminal Justice and Police Act 2001, which authorises retention of the samples (both fingerprints and DNA samples) even where there is no prosecution or there is an acquittal, is compatible with the rights of privacy and non-discrimination as protected by the European Convention on Human Rights. However, Marper’s challenge was treated as admissible by the European Court of Human Rights and fast-tracked for hearing by the Grand Chamber.
[5] Cp, eg, Esther Addley, ‘Two Discs, 25m Names and a Lot of Questions’ The Guardian (24 November 2007), accessed 8 December 2007; Brian Brady, ‘Ministers Want to Implant Chips to Monitor Offenders’ The Independent on Sunday (13 January 2008) pp 2–3; and Owen Bowcott, ‘FBI Wants Instant Access to British Identity Data’ The Guardian (15 January 2008) p 1. More generally, see Peter Bradwell and Niamh Gallagher, FYI: the New Politics of Personal Information (London, DEMOS, 2007), and Ben Bowling, Amber Marks and Cian Murphy, ‘Crime Control Technologies’ (ch 3 in this volume).
[6] Mark A Rothstein and Meghan K Talbott, ‘The Expanding Use of DNA in Law Enforcement: What Role for Privacy?’ (2006) 34 Journal of Law, Medicine and Ethics 153 at 160–61.
The point is that the slide towards the technologically enhanced state creates a new risk of total control. And, if there is a danger of the Rule of Law being displaced by the Rule of Technology, the legal community needs to address these developments as a matter of urgency.

Turning things around, the papers in the second part of the collection deal with the regulability of new technologies, particularly the regulatory regimes that we adopt for information and communication technology, biotechnology, neurotechnology and nanotechnology. What sort of regulatory environments are fit for the purpose of controlling and facilitating the research and development of these technologies, all of which move too quickly for regulatory comfort and each of which seems to have its own distinctive characteristics? For example, the technologies of cyberspace do not fall neatly under the jurisdiction of land-based legal systems; biotechnology—most obviously, biotechnology of the green variety—invokes a distrust of experts; human genetics prompts fundamental questions about respect for human rights and human dignity; and, while neurotechnology and nanotechnology remain largely unknown quantities about which precaution tends to be advocated, both the public and regulators have yet to form a clear view of them.

By way of a prelude to this two-part discussion, a number of key thematics are highlighted in Roger Brownsword’s opening paper. In general, these are thematics that relate to the challenge of regulating new technologies, to questions of institutional design and to the opportunities for using technology as a regulatory instrument.

Starting with the challenge of regulating new technologies, Brownsword identifies three broad questions. First, are there generic lessons to be drawn from the regulation of new technologies?
Observing that a core challenge involves reconciling the traditional ideal of regulatory certainty with the fundamental generic challenge of maintaining ‘regulatory connection’, he concludes that no simple prescriptions for effective and legitimate regulation can be found. In a sense, this is a theme that holds the volume together, for it is highlighted, too, by Justice Michael Kirby in his concluding overview.

Secondly, is there something distinctive about the regulatory space inhabited by new technologies? Identifying a number of variables that might structure such an inquiry, Brownsword (foreshadowing a central point in Andrew Murray’s paper) suggests that regulatory spaces are dynamic: for example, whereas there might be an initial public concern about a technology, with growing acceptance the site of contestation might shift from matters of safety, precaution and legitimacy to matters of compliance and effectiveness. Hence, while we should try to develop stock (tried and trusted) responses to challenges that we know to be generic, simple transplantation of a particular regulatory response from one technology to another is not always appropriate.

Thirdly, there is the vexed question of how to set regulatory policy in pluralistic communities. Drawing implicitly on his previous analyses of ethical plurality,[7] and insisting that there are no ‘neutral’ ethical footholds, Brownsword contends that it is essential that the community should try to agree upon the procedures through which it will seek to resolve disagreement. Where such agreement is achieved, any conflict that is subject to these procedures will generate a resolution that is worthy of respect, even if individuals disagree with the particular outcome. But, sometimes, ethical divisions are so profound that procedural agreement is impossible, a problem that becomes even more acute when viewed in the light of people’s fear of the unknown.
A second thematic concerns regulatory design. To focus his remarks, Brownsword questions the common and comfortable assumption that the regulatory design associated with the Human Fertilisation and Embryology Act 1990 gets things right. The fact that the Act conspicuously has not stood the test of time (and is now being given a major legislative overhaul) is not so much the point; rather, Brownsword is critical of what he sees as a failure to appreciate the tensions and trade-offs that are necessarily implicated in any particular regulatory design. In this light, he doubts the meaningfulness of those provisions in the (revised) Human Fertilisation and Embryology Bill (HL, 2007–08) that place the regulatory authority under an obligation to carry out its functions ‘efficiently, effectively and economically’ and to ‘have regard to best regulatory practice (including the principles under which regulatory activities should be transparent, accountable, proportionate, consistent and targeted only at cases in which action is needed)’. According to Brownsword, such formulaic prescriptions do scant justice to the competing demands of the various desiderata (for agency constitution and operation) that inform legitimate regulatory design.

Thirdly, there is the question of technology being deployed as a regulatory instrument. Such a development might elicit a range of concerns, but Brownsword’s distinctive concern is that we should be alert to the threats that this might present to aspirant moral communities—not because regulatory practices of this kind are immoral (although they might well be judged to be so) but because they might threaten the sustainability of moral community itself.
Stated shortly, but controversially (see Han Somsen’s critique of this point), Brownsword suggests that it is plausible that, in a community of rights, there will be support for the state being entrusted with a stewardship responsibility for the moral welfare of the community.[8] According to Brownsword, this is a responsibility that is owed not only to present members of the community but also to future generations. Indeed, Brownsword claims that the most precious thing that an aspirant moral community can hand on to the next generation is an environment that is conducive to a moral way of life, to a way of life that hinges on agents trying to do the right thing, trying to respect the legitimate interests of fellow agents and being held responsible for their actions. At its most profound, the state’s stewardship responsibility is to ensure that the enthusiasm that regulators begin to display for technological instruments of control does not insidiously undermine the conditions that give moral life its meaning.

[7] See, eg, Roger Brownsword, ‘Three Bioethical Approaches: A Triangle to be Squared’, paper presented at an international conference on the patentability of biotechnology organised by the Sasakawa Peace Foundation, Tokyo, September 2004 (on file with author); ‘Stem Cells and Cloning: Where the Regulatory Consensus Fails’ (2005) 39 New England Law Review 535; ‘Ethical Pluralism and the Regulation of Modern Biotechnology’ in Francesco Francioni (ed), The Impact of Biotechnologies on Human Rights (Oxford, Hart Publishing, 2007) 45; and Deryck Beyleveld and Roger Brownsword, ‘Principle, Proceduralism and Precaution in a Community of Rights’ (2006) 19 Ratio Juris 141.
As Brownsword presents it, the fundamental challenge for a community of rights is to decide whether to settle for less effective regulation (possibly permitting a degree of non-compliance that impinges on the rights and legitimate choices of ‘victims’) or, for the sake of effectiveness, to adopt techno-regulation, seemingly abandoning the importance that we attach to the dignity of choice and, with that, much of the basis on which our thinking—legal and moral—about responsibility, as well as rights, is premised.

II. Technology as a Regulatory Tool

The seven papers that comprise the first part of the collection open with Ben Bowling, Amber Marks and Cian Murphy’s panoramic sketch and stock-taking of the range of sophisticated technological instruments already used by the state in the service of a wide variety of criminal justice purposes. While some of the mooted technological applications for regulating social behaviour presently lie in the realm of science fiction (compare some of the examples discussed by Karen Yeung in the second paper in this half of the book), this contribution serves as a striking and sobering reminder that the ‘dreams (and nightmares) of science fiction writers of the nineteenth and early to mid-twentieth centuries are now becoming realities’. Moreover, unlike many of the contributions in this part of the volume, which focus primarily on technological applications which seek to, or have the effect of, shaping behaviour, Bowling, Marks and Murphy also draw attention to the use of technology for the purposes of monitoring behaviour, detecting deviance and punishing unlawful behaviour.

With a view to mapping out a broader research agenda, Bowling, Marks and Murphy construct a typology of criminal justice and security technologies.
Within their typology, crime control technologies are classified primarily by reference to their particular functional application—whether it be communicative, defensive, surveillant, investigative, probative, coercive, or punitive—and, within each class, they identify important questions for further research which the turn to technological apparatus for crime control raises. In particular, they call for further descriptive research, observing that, while surveillance technology has attracted considerable scholarship, less attention has been devoted to other technological applications of importance. They also call for further examination of the links created between technology and crime control institutions, including research on their efficiency, effects, effectiveness and equity. Here, expressing a recurrent theme, they note that the technology has generally outpaced its legal regulation, arguing for the development of regulatory frameworks that will ensure that such technologies are adequately constrained.

While the use of technologies in aid of the prevention, detection and punishment of crime might readily be understood as one of the specific obligations arising from state stewardship, Bowling, Marks and Murphy throw into sharp relief the need for appropriate restraints on the state’s turn to such a regulatory strategy. Yet they warn that the construction of an adequate regulatory framework will not be a simple or straightforward task, given the broad range of criminal justice applications and purposes and the shifting assumptions upon which the criminal justice system rests.

[8] Cp Roger Brownsword, ‘Happy Families, Consenting Couples, and Children with Dignity: Sex Selection and Saviour Siblings’ (2005) 17 Child and Family Law Quarterly 435.
In particular, they draw attention to the need for institutional mechanisms to police the boundaries of the state’s use of crime control technologies, although, other than pointing to substantive restraints imposed on state action by the Human Rights Act 1998, they do not yet offer any concrete suggestions concerning what those mechanisms might look like. What remains to be seen is whether a rights-respecting state can, as Brownsword hopes, be relied upon to engage in effective self-regulation to safeguard against irresponsible reliance on technological tools of control.

In the following paper, Karen Yeung, excavating beneath the surface of the fears articulated by Bowling, Marks and Murphy, analyses and evaluates the principal types (or articulations) of design-based regulation. Although technology is already being employed for crime control purposes across the full spectrum of the regulatory cycle—from standard-setting, through information-gathering, to enforcement—Yeung confines her analysis to instruments that seek to shape social outcomes through technological design. In Yeung’s view, if we are to understand both the effectiveness of design-based instruments and the ways in which they implicate non-instrumental values that underlie judgments about their legitimacy, then we need a better understanding of how such instruments are intended to work and of the social contexts in which they are employed and embedded. In other words, by highlighting the different locations and mechanisms of the various kinds of regulative technologies, Yeung seeks both to tease out the complexities (ethical, legal and public policy) concerning the use of particular design-based strategies and to facilitate the development of a regulatory jurisprudence that is more consistent, nuanced and systematic.
Developing her analytical framework, Yeung identifies two ways in which design-based approaches might be classified: first, by reference to the subject in which the design is embedded (places and spaces, products and processes, and biological organisms) and, secondly, by reference to their underlying design mechanism or ‘modality of design’. Essentially, there are three possibilities: (i) the design modality will encourage behavioural change; (ii) it will cushion the impact of harm-generating behaviour; or (iii) it will altogether prevent the possibility of such harmful behaviour. Yeung suggests that this threefold classification can provide considerable assistance in evaluating the effectiveness of particular design-based interventions in relation to their designated regulatory goals, as well as in assessing their implications for values of a non-instrumental kind.

Yeung anticipates that design-based approaches—particularly those that prevent the possibility of harm being done by functioning in a way that overrides any element of human choice—will be superficially attractive to regulators owing to their promise of 100 per cent effectiveness. However, she points out that, in practice, there are several reasons why self-enforcing design-based solutions might fail, due largely to various unintended effects arising from their use. Correcting these effects is likely to be considerably more difficult for policy-makers, at least in comparison to the ‘traditional’ policy instruments, most notably attempts to regulate through legal rules.
Taking her evaluation of design-based approaches several steps further, Yeung explores some of the implications of design-based techniques for non-instrumental values which have been raised in scholarly debates, particularly their potentially corrosive effect on the constitutional values of accountability, transparency and participation, and on the conditions required for a moral community to flourish. While Yeung shares many of these concerns, she argues that whether, and to what extent, these fears apply will depend partly on the particular design modality adopted, as well as on the surrounding social, political and moral context in which it is employed. In certain circumstances, she contends, design-based instruments may actually serve to reinforce rather than undermine moral norms. Finally, she suggests that, in seeking to evaluate the legitimacy of certain kinds of design-based instruments, particularly those which seek to shape individual behaviour through direct intervention in the decision-making process, we must confront deep and highly contestable questions concerning our individual and collective identity. In this context, she suggests that the notion of authenticity, of who we are and what it means to be truly ourselves, might help to orient our critical reflections, although it is unlikely to provide much in the way of concrete guidance.

Yeung’s fears about the possible unintended effects of design-based approaches, coupled with concerns about a range of legitimacy deficits (especially deficits in accountability, transparency, participation, consent and choice), are given vivid expression in TJ McIntyre and Colin Scott’s paper on Internet filtering and blocking technologies. Stated shortly, McIntyre and Scott argue that filtering technology may encroach upon the liberty of cyberparticipants in ways that are opaque, blunt and resilient to challenge. They begin by observing that the rhetoric associated with the term ‘filtering’ raises connotations of cleanliness, purity and precision which may well be at odds with the actual deployment of the technologies, a deployment that in practice smacks more of imprecise censorship. Filtering, McIntyre and Scott remind us, is part of the broader pattern of Internet governance, which comprises a variety of institutions, actors and modalities of control. When filtering is understood as an instrument of governance, McIntyre and Scott demonstrate, it raises new problems of accountability and legitimacy, owing to its automatic yet hidden operation, the role of intermediaries in implementing filtering technology, and the capacity of filtering to deprive Internet users of the capacity for choice and thus undermine their moral freedom and responsibility. They conclude that filtering is less likely to challenge constitutional norms of transparency, legitimacy and accountability where users have a choice whether to opt in, where the system provides feedback concerning filtered content, and where alternative providers are available.

This paper is immediately followed by Jonathan Zittrain’s which, with a similar focus, demonstrates how digital technologies are increasingly employed to eliminate the enforcement gaps that arise from traditional rule-based enforcement. Zittrain’s paper is a powerful reminder that commerce, rather than the state, is likely to be the engine which drives the development of digital technologies. He considers the ramifications of the increasing turn to what he calls ‘tethered appliances’: digital devices which are remotely connected to their manufacturer or vendor, who alone installs, removes and alters the code used for the operation of the device.
While tethering may render such devices less vulnerable to user error and virus attack, Zittrain argues that they will substantially increase the regulability of user behaviour, not just by the vendor, but also by state regulators. He demonstrates how several existing technological applications employ digital technology to provide manufacturers with extensive control over how digital appliances are used by pre-empting behaviour deemed undesirable, monitoring user behaviour and imposing behavioural restrictions on specific users. These tethered appliances can readily be harnessed by states, particularly those of an authoritarian kind, to facilitate control over their citizens. Although the prospect of more thorough or ‘perfect’ enforcement through the central control of digital devices via tethering may be appealing to regulators, Zittrain identifies several reasons why we should hesitate. In this respect, he echoes many of the concerns already articulated by Yeung. Zittrain concludes that the key to maintaining creativity and generativity in the digital world is to ensure its internal security without resorting to lockdown, and to find ways to enable enough enforcement against its undesirable uses without requiring a system of perfect enforcement.9

Many of the questions (concerning the acceptability of regulation by technological means) that are raised by the authors of the previous four papers resonate at a more abstract level with the discussion in Bert-Jaap Koops’ paper. Koops poses his basic question in the following way: in the context of general acceptance of democratic and constitutional values, which criteria are relevant for the purpose of assessing the acceptability of normative technology (that is, a technology that is self-consciously used with a regulative intent)?

9 For the larger context, see Jonathan Zittrain, ‘The Generative Internet’ (2006) 119 Harvard Law Review 1974.
Making a first cut at this question, Koops seeks to identify a long-list of criteria that are material to judgments of acceptability relative to both the norm-establishing and norm-enforcing variants of normative technology, as well as to both public and private employments of such technology. Rather than resorting to a general theory of acceptable regulation (based on a particular democratic or other ideological vision), Koops constructs a list of criteria which have already been expressed by commentators who have questioned the legitimacy of design-based regulatory instruments. From this cumulative collection of criteria, Koops distils a set of primary criteria (comprising human rights, other moral values, the rule of law and democracy) and secondary criteria (which include transparency of rule-making, checking alternatives, accountability, expertise and independence, efficiency, choice and effectiveness, flexibility and transparency of rules). However, complementing Brownsword’s short remarks on the difficulties of regulatory design, Koops cautions that applying these criteria will rarely be a straightforward and uncontested exercise, and he emphasises the importance of sensitivity to context. Nevertheless, by drafting his consensus statement on the relevant criteria, Koops aims to stimulate further reflection that will not only facilitate the assessment of the acceptability of concrete cases of normative technology but might also allow more overarching conclusions to be drawn—one of which might be the unsettling thought that the democratic and constitutional values that anchor our judgments of acceptability are themselves liable to revision as normative technology insinuates itself into our daily lives.

As Brownsword observes in his introductory paper, the increasing power and sophistication of a broad array of technologies is likely to enhance their attractiveness to those responsible for implementing social policy.
Yet the use of technology to shape, constrain and promote particular behaviours is not unique to the technologies that we see emerging at the dawn of the twenty-first century. In order to demonstrate the extent to which technology is already embedded in modern law, Mireille Hildebrandt provides an illuminating sociological account of the impact of the modern printing press on the reach of law. It was through the printing press that law could be embodied in written form and widely disseminated. She describes how the transition from the oral to the written communication of law’s commands involved the externalisation of legal norms, materialising them in the form of written inscriptions, which, in turn, provided law with a durability in space and time because addressees no longer needed to occupy a face-to-face relationship with the purveyor of the law. In other words, the printing press was vital in facilitating the reach of law across large-scale polities and jurisdictions and thus contributed to the conditions for the emergence of the modern state. The ‘moral’ of Hildebrandt’s story, then, is that law cannot be separated from its technological embodiment.

Hildebrandt, rehearsing a point also made by Koops, remarks that technology is never neutral: it can be constructed in different ways and, therefore, have different normative implications. Exploring the ways in which a specific technology ‘induces/enforces’ or ‘inhibits/rules out’ certain types of behaviour,10 Hildebrandt contrasts ‘regulative’ technology, such as a smart car designed to issue a warning to its driver on detection of driver fatigue, with ‘constitutive’ technology, such as a smart car which immobilises itself on detection of driver fatigue.
Although technology, like law, can be regulative or constitutive, Hildebrandt argues that they should not be considered substitutable, for she fears, like many cyberscholars, that technology may be employed to avoid fundamental legal principles. She sets out to compare technical normativity to legal normativity with the aim of identifying the challenges for law arising from increasingly sophisticated technologies. Her account of the history of law’s embodiment in technology raises questions about its future trajectory. In particular, anticipating a point to be highlighted in the final paper in this part of the collection (by Serge Gutwirth, Paul De Hert and Laurent De Sutter), Hildebrandt cautions against viewing law merely as an instrument for achieving policy goals. Within constitutional democracies, the law plays a critical role in protecting citizens against the state and sustaining the balance of power between citizens, enterprise and the state. But, in the digital world of the future, in a setting of intelligent environments with hybrid multi-agent systems, with real-time monitoring and real-time adaptation to one’s inferred preferences, a reinvention of legal normativity will be required. She thus claims that we ‘urgently need to face the issue of digitalisation as a process that will regulate and constitute our lifeworld and for that very reason needs to be regulated and constituted by law’. Accordingly, Hildebrandt calls upon lawyers to sit down with technological engineers to discover how technological infrastructure can be designed in a way that will sustain constitutional democracy rather than destroy it, for example, by finding the right balance of opacity and transparency. Thus, in the case of Ambient Intelligence (AmI), she suggests that we may need to develop an ‘Ambient Law’ that is embodied in the algorithms and human machine interfaces that support AmI.
In the final paper in this first part of the volume, Serge Gutwirth, Paul De Hert and Laurent De Sutter offer us a bridge from our concern with technology as a regulatory tool to our interest in setting appropriate regulatory environments for new technologies. Stated summarily, they argue that regulatory theory will always misunderstand the nature of law if the latter is regarded as just another instrument of social control or policy implementation. Law is not a technology; it is a distinctive practice. Hence, to the extent that modern regulatory theory holds (i) that regulators should be clear about their regulatory purposes, (ii) that regulators should seek to achieve an optimal mix of the available regulatory instruments, and (iii) that law—along with social pressure, markets and architecture, code and the like—is simply one of those instruments, then this is to understate the distinctive nature of legal practice. To be sure, in the first instance, legal instruments (for example, in the form of European directives or domestic legislation) may be viewed as the outcome of a political process (albeit a process constrained to some extent by legal-constitutional norms). However, once the legal instrument has been made, its interpretation and application is taken over by a community of advocates and judges who operate in accordance with the canons of legal practice. Drawing on the work of, first, Isabelle Stengers and then Bruno Latour, it is argued that we should understand that the constraints and obligations of a practice set outer limits to its instantiation—that is, it is not simply a matter of there being good or bad legal practitioners; rather, those who step beyond the defining limits no longer act as lawyers.

10 Hildebrandt terms this phenomenon ‘technological normativity’. Compare Koops’ designation of the technological mode of regulation as ‘normative technology’.
Although the practice of law expresses itself in many different ways, ranging from civilian formalism to common law result-orientation, there are always limits that distance law from politics and from the particular facts; there is always a jurisprudence to be reckoned with, a demand for consistency and coherence; and, in the foreground, there is always a particular piece of substantive law that is focal. In short, lawyers, as Karl Llewellyn liked to put it, cannot simply take a scalpel to the law.11 Accordingly, to suppose that legal instruments, as elements in the regulatory mix, can be expected mechanically to serve the background (politically driven) regulatory objectives is to reckon without the role of lawyers.

If we concur with Gutwirth, De Hert and De Sutter that the practice of law creates its own distinctive expectations; if we agree that, rather than betray their practice ideals, lawyers will interpret, apply and enforce legal instruments in a characteristically detached and deliberative way; and, if we believe that legal practice so constituted represents an important counterweight to, and constraining context for, politics, then we must regard law as a key element in an acceptable regulatory environment. With this thought, we can move on to the essays in the second half of the collection.

III. Technology as a Regulatory Target

The seven papers in the second part of the volume turn away from the use of technology as a regulatory tool to the challenges associated with getting the regulatory environment right for new technologies.
Put in rather general terms, if regulators are to get the regulatory environment right, they need to set the right kind of standards (whether with a view to deterring or encouraging the development or application of a technology); they need to monitor, apply and enforce these standards in a way that conforms to ideals of due process; and they need to do this in a way that is effective (relative to the regulatory objectives).12 This underlines not only that there is many a slip between cup and lip but also that we need to be alert to the possibility of regulatory failure at any point of the regulatory enterprise. In the papers in this part of the collection, the regulatory environment is reviewed with reference to five key types of technology, namely: biotechnology, information and communication technology, neurotechnology, nanotechnology and the technology of renewable energy.

11 Karl N Llewellyn, The Common Law Tradition: Deciding Appeals (Boston, Little, Brown, 1960).
12 Cp Koops’ proposed threefold characterisation (substantive, procedural and result) of the criteria that should inform and structure judgments of regulatory acceptability.

A. Biotechnology

There has been much debate about what would make the regulatory environment for biotechnology fit for purpose; but, of course, much of the difficulty is that it is precisely the nature of the regulatory purpose that is contested.13 For example, should regulators prioritise the perceived benefits of agricultural and plant (‘green’) biotechnology, or should they adopt a precautionary stance that responds to concerns about human health and safety and the sustainability of the environment? Similarly, should regulators prioritise the perceived therapeutic benefits of developments in human genetics (‘red’ biotechnology) or should they take a more restrictive approach, acting on arguments that allege that these developments involve the compromising of human dignity?
Taking issue with Brownsword’s broad brief for state stewardship, Han Somsen argues that we should resist the creeping and colonising application of the precautionary principle. Whatever sense such a regulatory policy has in the face of environmental hazard, precaution is simply too easily appropriated by those who have their own moral or political agenda in relation to red biotechnology. Somsen begins by explaining that the precautionary principle has an ‘enabling’ nature, allowing public bodies to take preventive action to avoid threats of serious or irreversible damage in cases where, relying only on the evidence, they would not have sufficient reason to take such action. Thus, although scientific uncertainty about risk would normally preclude states from seeking to limit or restrict private activities, the precautionary principle permits regulatory action to be taken—and rightly so, Somsen would hold, in special circumstances of grave environmental risk (possibly created by certain manifestations of green biotechnology).

Somsen identifies three contexts in which the principle has been invoked. First, there is deliberative precaution, where the principle is invoked to stimulate deliberation within the community concerning the social acceptability of the technology. Because risk-management is a political process, Somsen argues that this is an appropriate use of the principle. Secondly, we have fact-finding precaution. This involves the application of the precautionary principle to the risk assessment stage, rather than merely the risk management stage. Somsen is highly critical of this invocation, arguing that in liberal democracies that foster autonomy and equality, and where technological expertise is recognised and respected, there is no obvious role for such precaution. Thirdly, there is precaution in the aforementioned enabling sense, according to which regulators should temporarily prohibit a given technology until there is new evidence suggesting no risk or acceptable risk; thus the presumption favours the status quo. Somsen is very critical of the routine use of the precautionary principle for such enabling purposes, arguing that the elasticity of the principle allows dangerously wide scope for arbitrariness in its application, allowing different interest groups to invoke the principle (disingenuously) in support of their own self-serving political agendas. To support his claims, Somsen draws upon specific instances where the precautionary principle has been invoked by ideologically opposed interests in the course of debates concerning the regulation of reproductive technologies for individual use.

13 See, eg, Roger Brownsword, WR Cornish and Margaret Llewelyn (eds), Law and Human Genetics: Regulating a Revolution (Oxford, Hart Publishing, 1998); and Han Somsen (ed), The Regulatory Challenge of Biotechnology (Cheltenham, Elgar, 2007).

Debate about the appropriate role of precaution in setting regulatory policy is one of the many issues over which disagreement may exist within plural communities. Andrea Büchler’s discussion of how Swiss policy-makers have risen to the challenge of regulating (what are still largely experimental) fetal brain tissue transplantations provides a fine example of the messy reality of pragmatic compromise that may be necessary within a plural democracy. Although the use of fetal tissue has the potential for significant therapeutic ends, it also raises a number of ethical questions, and the transplanting of fetal brain tissue affects many disparate interests. Büchler explains how Swiss law has attempted to protect a number of these interests with the introduction of the Federal Law on the Transplantation of Organs, Tissues and Cells (TPG), yet in contradictory ways.
At the tissue removal stage, for example, the protection of the physical integrity of the woman and the common public interest in avoiding increasing numbers of pregnancy terminations are accorded priority over the interest in obtaining fetal tissue suitable for transplants as easily as possible. At the tissue transplant stage, the interests of tissue recipients in the protection of their bodily and psychological integrity are weighted more heavily than medical research’s interest in gaining scientific insights. Accordingly, at the stage of both removal and transplantation, regulators have constrained freedom of research in favour of the individual interests of donors and recipients.

Such constraints notwithstanding, Büchler observes that Swiss law allows researchers considerable freedom. The TPG does not, for example, require that the aborted fetus be ascertained to be (brain) ‘dead’ before tissue is removed for transplant purposes—given that the brain tissue of the fetus must be ‘fresh’ if it is to be transplantable, the TPG acquiesces in the view that heart death will suffice. Nor does the regulatory framework appear to intend to constrain research freedom with requirements which afford complete protection to the autonomy of the tissue donor and tissue recipient. For example, the father of the aborted fetus has no say whatsoever in matters concerning the removal of fetal tissue; on the face of it, once the mother’s decision to donate fetal tissue has been made, there is no possibility of revocation; there is no requirement that the person informing a tissue recipient of sound mind about the risks attending a transplant be independent of the transplant itself; and the transplant of fetal brain tissue during clinical trials to persons of impaired judgment has not definitely been ruled out.
The result is a somewhat mixed picture, a regulatory environment that attempts to strike a balance between multiple interests rather than relentlessly defending any particular ideological or ethical stance.

B. Information Technology

It was in the context of the development of information technology that John Perry Barlow declared:

Law adapts by continuous increments and at a pace second only to geology in its stateliness. Technology advances in … lunging jerks, like the punctuation of biological evolution grotesquely accelerated. Real world conditions will continue to change at a blinding pace, and the law will get further behind, more profoundly confused. This mismatch is permanent.14

Even if there were not this temporal mismatch between regulation and information technology, David Johnson and David Post famously highlighted the spatial mismatch between local regulation and cross-border information technologies.15 With hindsight, we might think that the cyberlibertarians overstated their case: the Internet is not unregulated. Nevertheless, no one can doubt that cyberspace, with its variety of actors, institutions and practices, presents a formidable challenge to regulators, even to smart regulators.16

According to Charles Raab and Paul De Hert, the governance of cyberspace—for example, the way in which we regulate to protect informational privacy or to deal with spam—is a more complex matter than we might imagine. Even if we draw on the insights of such leading regulatory theorists as Lawrence Lessig and Christopher Hood, our appreciation and understanding of the regulatory environment might be deficient in at least two respects. First, we might not fully appreciate that there are significant questions to be posed with regard to the relationship between the various tools that are available to regulators (for instance, questions about synergy and substitutability, about complementarity and conflict).
Secondly, we might not appreciate the significance of the interactions between regulatory instruments and the various actors who populate a particular regulatory space—we might not appreciate, as Raab and De Hert put it, that regulation is ‘a social and political process and not just a question of what tools do what jobs’. With regard to the first of these matters, Raab and De Hert suggest that, if we focus on any part of the regulatory environment, we can ask two key questions, namely: ‘1. What tools pertain to what technology practices, and according to what criteria can these instruments be compared and contrasted? 2. Are the instruments substitutable for each other, or are they complementary; and if complementary, how do they combine (and how might they combine better)?’ These are tough questions because the hypothesis that sets the stage for these questions, that is, the hypothesis that we might focus on some (discrete) part of the regulatory environment, is itself problematic. For the regulatory environment, if not quite a seamless web, is an assemblage of diverse governance instruments, each of which might be exerting some influence, background or foreground, direct or indirect, on the particular technology practice. However, being aware of the possible range of relationships between a suite of regulatory instruments is not enough; this, albeit with a degree of sophistication, is still a one-dimensional view of the regulatory environment.

14 John Perry Barlow, ‘The Economy of Ideas: Selling Wine Without Bottles on the Global Net’ available at accessed 22 May 2008, and extracted in Yee Fen Lim, Cyberspace Law (Oxford, Oxford University Press, 2002) 398 at 402.
15 David R Johnson and David Post, ‘Law and Borders—The Rise of Law in Cyberspace’ (1996) 48 Stanford Law Review 1367.
16 Cp Stuart Biegel, Beyond Our Control? Confronting the Limits of Our Legal System in the Age of Cyberspace (Cambridge, MA, MIT Press, 2003).
In Raab and De Hert’s judgment, it is essential that we go beyond an appreciation of the point and counterpoint in the regulatory repertoire; the regulatory score does not play by itself; a performance requires a conductor, an orchestra (with its range of musical instruments) and an audience. So it is with regulation: regulatory instruments ‘are wielded (or not) by individual or institutional actors who participate in regulatory regimes’. The missing, and vital, dimension is that of policy actors and their various relationships. In other words, claim Raab and De Hert, ‘we have to understand tool-making, tool-using and tool-regulating as processes in which, in theory, several actors may participate in the making, using and governing of each tool; or, indeed conflict with each other in these processes, for regulation is a political process, not a cut-and-dried matter of devising and applying technology and law’.

Once we conceive of the regulatory environment as a matrix of governance tools applied in shifting real-world relationships, we are at least on the right track. Nevertheless, Raab and De Hert emphasise that there is a long and winding road ahead of us. It is one thing to begin to appreciate that the simple metaphors of regulatory tools and toolboxes, of regulatory mixes and mosaics, are apt to mislead, and to steer clear of assuming that regulatory instruments are self-executing, or operate mechanically, or simply act on inert or inactive regulatory targets; but it is quite another matter to bring our regulatory understanding up to a level that matches our naïve confidence in our regulatory intelligence.
As an example of how we might develop a more constructive, matrix-like, approach to the regulation of cyberspace, Raab and De Hert single out Andrew Murray’s recent work.17 Drawing on that very work, in his contribution to this volume, Murray emphasises the fluid and dynamic nature of cyberspace, highlighting the complex and frequently unpredictable interactions between participants. In this context, Murray observes that attempts to regulate cyberspace that are based on a static model are doomed to fail. Rather, he conceptualises cyberspace in terms of a ‘post-regulatory’ state, recognising that the behaviour of a wide range of actors beyond the state, as well as the response of those whom they seek to regulate directly, is relevant to the outcomes of ordering social and economic life, particularly given the unique man-made, flexible architecture of cyberspace.

In order to model regulatory interventions in such an environment, Murray argues that consideration must first be given to the value and importance of the different ‘layers’ of regulation upon which communication networks are typically constructed, and which form part of their complexity. Secondly, consideration must also be given to the built environment in which regulatory intervention takes place, particularly the flexibility arising from the technological environment, law and social context. Thirdly, Murray draws attention to the power of the network, and the pervasive and near-perfect channels of communication within cyberspace. As a consequence, individuals who inhabit cyberspace are not merely passive receivers of regulatory demands, but regulatees who respond and participate actively in a broader community.

17 Andrew Murray, The Regulation of Cyberspace (Abingdon, Routledge-Cavendish, 2007).
Given this dynamic regulatory matrix, Murray argues that the best regulatory model is one which harnesses existing relationships between actors through what he terms ‘symbiotic regulation’. For Murray, symbiotic regulation seeks to predict where tensions will arise within the regulatory matrix, aiming at regulatory interventions which harness natural communication flows within the matrix in order to avoid those tensions. This requires that regulators first map these communication flows. In this respect, he suggests that theories of autopoietic social systems demonstrate that communication between actors across systems and sub-systems takes place in a stable but indirect manner, given the self-referential nature of systems dynamics. Accordingly, close observation should enable regulators to anticipate where communication between nodes will take place and, in turn, anticipate nodal responses to regulatory interventions. Secondly, in order to map the effect of interventions within the regulatory matrix, he argues that regulators should measure the probable (or actual) outcome of their intervention by applying a system dynamics approach. Such an approach requires regulators to record the current information communicated by each node in the matrix and the content and method of communication employed. The feedback thereby generated should enable the regulators to evaluate and refine their intervention in a process of continual modelling via continual monitoring of system effects. According to Murray, such a model suggests that it may be possible to design successful regulatory interventions, even in highly complex regulatory environments.

C. Neurotechnology

Early debates about the regulation of neurotechnology have focused on two areas of practice: first, the use of scanning and brain-imaging technology and, secondly, the development and use of cognition-enhancing drugs.
Recently, there has been a rush of discussion with regard to the latter.18 However, in her contribution, Judy Illes takes the former as her principal focus.

18 See, eg, John Harris, Enhancing Evolution (Princeton, NJ, Princeton University Press, 2007); British Medical Association, Boosting Your Brainpower (London, November 2007); and Barbara Sahakian and Sharon Morein-Zamir, ‘Professor’s Little Helper’ 450 Nature (20 December 2007) 1157.

While the design and use of technological means for achieving the ‘perfect’ enforcement of legal standards remains largely in the speculative realm (with the exception of technologies for the protection of intellectual property in digital data), the employment of neuroimaging technology for the purposes of detecting violations of legal standards might not be so distant. Explaining that the power of functional Magnetic Resonance Imaging (fMRI) technology lies in its capacity to measure how the brain reacts to certain questions by detecting and calibrating changes in the brain’s magnetic field (which arise when oxygen from the bloodstream is drawn into the parts of the brain which respond when answering questions), Illes notes that neuroscience has developed at a staggering pace in recent decades. Today, functional imaging studies are revealing in unprecedented detail the complexity of neural networks underlying moral behaviour. Nonetheless, Illes argues that, for several reasons, the technology is still a long way off from providing the kind of social and legal applications for lie detection and information extraction that seem to be on its trajectory.19 First, the efficacy of fMRI scans is still limited because our understanding of both sensitivity (i.e., the measure of the existence of a signal) and specificity (i.e., the meaning of that signal) is relatively blunt. Secondly, human behaviour is extremely complex, and lies can come in different forms.
Illes then briefly identifies a series of challenges concerning the integration of neurotechnology into society, including variability in standards of practice and quality control, competing analytical philosophies concerning the appropriate methodological approach to neuroscientific research, and ethical and policy challenges such as the danger of media hype and the risk that the technology will be improperly used. In order to meet these challenges, Illes calls for regulation of the technology, referring to a scheme (one that she has proposed with Hank Greely) which draws upon the model of the US FDA combined with criminal law procedural requirements.

D. Nanotechnology

Many of the issues identified by Illes concerning the use and development of neurotechnology have direct parallels in debates about the development and deployment of nanotechnology. It is also the case that there are strong similarities between the shape of the regulatory debate concerning green biotechnology (with regard to environmental and public health concerns) and that concerning nanotechnological releases,20 just as there are close similarities between debates dealing with red biotechnology (with regard to respect for human rights and human dignity) and nanomedicine.21 In both instances, there is a great deal of emphasis on the regulation of risk. However, a distinctive feature of Hailemichael Demissie’s contribution to this volume is that, far from ignoring the question of regulating against the risk presented by hazardous nanotechnologies, he wants to ensure that strategies for benefit sharing are placed squarely on the regulatory agenda.

19 Cp Henry T Greely, ‘The Social Effects of Advances in Neuroscience: Legal Problems, Legal Perspectives’ in Judy Illes (ed), Neuroethics (Oxford, Oxford University Press, 2006) 245.
20 Cp Geoffrey Hunt and Michael Mehta (eds), Nanotechnology: Risk, Ethics and Law (London, Earthscan, 2006).
Having explained what nanotechnology is and why it is potentially so powerful—arguing that it is revolutionary in so far as it enables the manipulation of matter from the ‘bottom up’, rather than from the ‘top down’—Demissie observes that alongside the high level of optimism that has accompanied claims about nanotechnology’s potential benefits are fears about the ‘gray goo’, self-replicating matter that generates grave dangers to humanity. Various issues that bear upon the regulation of nanotechnology are also considered, including its unknown environmental impact, the lack of research funding to support development, its potential for abuse (including military applications), whether the appropriate regulatory approach is precautionary or ‘proactionary’,22 and the limits of self-regulation. In the final part of his discussion, Demissie raises the relatively neglected question of ‘benefit-sharing’, observing that nanotechnology might be fairly characterised as belonging to the common heritage of humanity. In closing, he warns of the dangers of a ‘nanodivide’ between an enhanced class of humans and an unenhanced underclass. Regulators might, thus, infer that failure to spread and share the benefits of new technologies might itself represent a certain sort of risk by creating the conditions for division, disenchantment and possibly disorder.

E. Renewables

It is a growing awareness of the need to preserve our common heritage for future generations that underlies recent interest in promoting the use of renewable energy technology, and which forms the backdrop to Haifeng Deng’s contribution. Deng reflects upon China’s policy to promote the wind power industry as part of its broader renewable energy policy, noting that the wind power industry in China is currently small in scale, lagging far behind other countries in size and technological sophistication.
Accordingly, he claims that additional incentives are needed to promote the development and use of wind energy technology. To this end, Deng considers two different legislative incentive systems. One is the ‘renewable portfolio standard’, which imposes a legal obligation on power suppliers to use a specified proportion of renewable energy, monitored and enforced through a system of certification of the ‘renewable’ quality of the energy utilised. This system is represented in the UK, Australia and some US states. The other is the ‘mandatory purchase system’, which imposes a legal obligation on energy suppliers to obtain their energy from qualified power producers (eg, as is the case in Germany). Under the latter system, the price for energy is prescribed by law, rather than determined by the market, although the price is set so as to ensure that the generation of renewable energy yields a profit for the supplier. Deng briefly compares the relative advantages and shortcomings of the two schemes, arguing that whilst the first is well-suited to countries with highly liberalised electricity markets and where there is competition within the renewable energy sector, the second is better suited to countries which do not have such liberalised electricity markets, and where the renewable energy sector is in its fledgling state, with more government support being needed.

21 See, eg, the European Group on Ethics in Science and New Technologies, Opinion on the Ethical Aspects of Nanomedicine (Opinion No 21, 2007); Bert Gordijn, ‘Ethical Issues in Nanomedicine’ in Henk ten Have (ed), Nanotechnologies, Ethics and Politics (Paris, UNESCO, 2007) 99; and Roger Brownsword, ‘Regulating Nanomedicine—the Smallest of Our Concerns?’ (2008) 2 Nanoethics 73.
22 Cp Han Somsen’s discussion of precautionary approaches (ch 10 in this volume).
On this basis, Deng regards China’s adoption of the mandatory purchase system as an appropriate choice to meet China’s needs.

IV. Closing Reflections

In his closing remarks, Justice Michael Kirby identifies a number of paradoxes (or tensions) prompted by the conference discussions—for example, while regulatory inaction might allow a technology to be developed or applied in ways that are later regretted, there is equally the danger that regulators might over-react in a precautionary way to risks that are only possibly presented by emerging technologies; or, again, while regulators might wish to encourage the development of technologies that promise to be emancipating and freedom-enhancing, they might find that the regulatory environment, as well as the technologies so encouraged, militates against important political and civic freedoms. Neither technology nor regulation, it seems, has a neutral setting, let alone a readily predictable application.

Following up these observations, Justice Kirby draws seven lessons for regulators. Some of these lessons point the way towards sound regulatory practice—for instance, the need for regulators to base their interventions on a solid scientific and technological understanding, the need to ensure that the public is properly engaged in regulatory debates, the need to attend to the particular, and different, characteristics of the individual technologies, and the need to appreciate that inaction might be read as an in-principle permission (even encouragement) which, in practice, might then be very difficult to withdraw. Other lessons point to the problematic context in which local regulators frequently operate. For such regulators are striving to deal not only with technologies that are, in one sense or another, global but also with a patchwork of secular and non-secular cultures and deep ethical divisions.
By way of concluding our introductory comments, we can do no better than take to heart the first of Justice Kirby’s paradoxes. He puts it thus:

[Regulating Technologies] surveys what is substantially a blank page. Increasingly the content of law, like the content of life, will be concerned with technology and with its many consequences for society. The importance of the chosen topic therefore belies the comparatively little that is written, said and thought about it. Paradoxically, then, those who first lay claim to expertise may participate in a self-fulfilling prophecy.

Accordingly, we should concede at once that, if we are experts, it is only in the most modest sense of the word. However, the fifth and final paradox identified by Justice Kirby highlights the oddity that it sometimes falls to small, inexpert, groups to stumble on something of real significance. In this light, perhaps we should be more assertive; for, as Justice Kirby also concludes, the issues raised by the topic of regulating technologies—whether understood as a question about the use of technological tools or as one about the regulation of technological targets—are ‘more important for our societies and their governance than virtually any of the other topics that legal science could offer’. It is in this spirit, therefore, of making the first tentative marks on a tabula rasa that we offer this collection of papers.

2

So What Does the World Need Now? Reflections on Regulating Technologies

ROGER BROWNSWORD*

I. Introduction

A decade ago, at the turn of the Millennium, there were reasons for thinking that this was a special time. Seemingly great leaps forward in both human and animal genetics promised to bring dramatic improvements to human health, and the Internet was transforming the cultural, commercial and, quite possibly, political worlds. Yet, it was not clear that these were unqualified goods.
Concerns (unfounded as it proved) about a Y2K computer crash raised questions about the wisdom of an increasing reliance on digital data; and the prospect of reproductive cloning, even if acceptable in Edinburgh sheep, was not one that humans would necessarily welcome in their own species.

At about this time, a cluster of ideas began to crystallise. One was that it might make sense to think about the regulatory challenges presented by new technologies generically—not just the challenges presented by information technology or biotechnology, but by new technologies simpliciter. Another was that we should monitor the way in which new technologies themselves might come to be deployed as regulatory instruments. Here already was the embryonic idea for ‘regulating technologies’: on the one hand, the challenge of regulating technologies; on the other, the opportunity to turn these technologies into instruments of regulation—the idea of regulatory targets becoming regulatory tools.

Two further ideas were implicated in these crystallising thoughts. One was that the inquiry needed to focus on regulation rather than law—at any rate, to the extent that this was an inquiry into channelling conduct; but, at the same time, we should not forget about Rule of Law values. And, the other idea was that the millennial changes that attracted our attention were taking place during a particular epoch of globalisation. Accordingly, it also seemed to be important to place any inquiry into regulating technologies in a context of governance that is both international and globalising.1

From such general thoughts, a number of particular questions were prompted. This was, as it were, the agenda for the underlying project.

* Professor of Law, King’s College London, Director of TELOS, and Honorary Professor in Law at the University of Sheffield.
That agenda now looks something like this:

— There is a sense that modern technologies are significantly different to their predecessors. But, is this right? If so, in what sense are such technologies radically different or revolutionary? And, why is this a matter of relevance to regulators?
— Is the fact that these new technologies (information and communication technologies and biotechnology, as well as neurotechnology and nanotechnology) are in some way implicated in the processes or phenomena of globalisation significant? If so, how? And, at the same time, is the development of global governance a relevant consideration?
— Spheres of regulation beyond the nation state have developed rapidly in recent years, but the nature of regulation within nation states (and the regulatory theory that accompanies it) is also undergoing major change. How does this bear on questions concerning regulation and technology?
— In pluralistic societies, how are regulators to defend the positions that they take up? The challenges of legitimation and legitimacy get more acute, do they not, as regulatory zones move beyond nation states to cover regions and even larger (world-wide) jurisdictions? How are these challenges to be met?
— How are moral caveats (such as moral exclusions against patentability, or the general exceptions for ‘public morals’ in international trade agreements) to be operationalised?
— There is scepticism about the effectiveness of traditional legal strategies of command and control. Smart regulation explores all the options. Even so, compliance cannot be taken as read. If laws are ineffective within nation states, how are they likely to fare in regional and international regulatory spaces, let alone cyberspace?
— How can we design regulatory institutions in such a way that they respond to all desiderata—flexibility with calculability, independence with accountability, expertise with detachment, speed with due deliberation, and so on?
— How are regulators to respond to a lack of trust (to a crisis of confidence) in experts? How is the public to be engaged?
— How is regulation able to stay ‘connected’ to rapidly developing technologies? Is law able to evolve with the technology or is it destined to be chasing it?
— Is there something special about the regulatory space occupied by new technologies (especially cyberspace); and is there anything distinctive about the kind of crimes (eg, cybercrime) or torts (eg, genomic torts) associated with these technologies?
— Are there generic lessons to be learnt (so that we do not keep re-inventing the regulatory wheel) or does each technology import its own regulatory groundrules?
— What happens if we join up concerns about the way that technologies might be deployed in the private sector with concerns about reliance on technology by the state as a regulatory instrument?
— When the state embraces new technologies as regulatory tools, is this a step towards the dystopian orders depicted by Orwell and Huxley?
— Should we be troubled by the thought that, with technological sophistication and a culture of prevention and risk management, the Rule of Law might be replaced by the Rule of Technology? Would this be such a bad thing?

In conjunction with these questions, I also started with a sketch2 and a general story-line for what, in due course, was to become Rights, Regulation, and the Technological Revolution.3 The frame for my sketch was Francis Fukuyama’s Our Posthuman Future,4 the fundamental thesis of which is that modern biotechnology represents an insidious threat to human dignity and that it needs to be regulated accordingly.

1 See Roger Brownsword and N Douglas Lewis (general editors), Global Governance and the Quest for Justice, vols I–IV (Oxford, Hart Publishing, 2004–08).
As for the story that I told in counterpoint to Fukuyama’s manifesto, the gist of it was that regulators, in their efforts to regulate new technologies, would learn a great deal about the strengths and weaknesses of traditional regulatory instruments, but also would spot the potential of these emerging technologies as regulatory tools, supplementing and even supplanting traditional modes of regulation. In other words, this was a short version of the story now told in my recent book.

While the book does not attempt to respond to all the questions posed by the underlying agenda, it does take a position that brings the last question in the list back to the first. In response to the first question, my position is that, as lawyers, we need not agonise about whether the technological changes that are underway are revolutionary in relation to the underlying science, or the like. What is significant is the increasing reliance on technology as a regulatory tool. Just as we might think that the employment of information technology reflects a fundamentally different way of going about the business of lawyering,5 so we might view the employment of the emergent technologies as a fundamentally different way of going about the business of regulating. A regulatory environment that is dense with these new technologies is a very different place to an environment that relies on compliance with norms that are either legally or morally expressed or simply implicit in custom and practice. If this is the regulatory environment of the future then, in response to the last question, we should certainly be worried about the breakdown in the procedural values of the Rule of Law, in the lack of transparency and accountability, and the like. However, the fundamental concern for any aspirant moral community is that a reckless adoption of a technology-led regulatory style will erode the conditions that are essential for the community to make sense of its moral mission. The technological revolution, in other words, is disruptive in relation to both the enterprise of law and the project of moral community.

With this background, and by way of a prelude to the essays in this collection, let me offer some reflections on regulating technologies. The paper is in three principal parts. In section II, I dip into the underlying agenda to speak to some issues concerning the regulation of technology; in section III, I offer a few comments on the difficult question of regulatory design; and, in section IV, I turn to the implications of relying on technology as a regulatory instrument.

2 See Roger Brownsword, ‘What the World Needs Now: Techno-Regulation, Human Rights, and Human Dignity’ in Roger Brownsword (ed), Human Rights (Oxford, Hart Publishing, 2004) 203.
3 Roger Brownsword, Rights, Regulation, and the Technological Revolution (Oxford, Oxford University Press, 2008).
4 (London, Profile Books, 2002).
5 Cp Richard Susskind, Transforming the Law (Oxford, Oxford University Press, 2003).

II. The Regulation of Technology (Regulating Technologies)

From the questions that relate to the regulation of technology—that is, from the questions that invite discussion in the second part of the collection—let me isolate the following three for short consideration: (i) Are there generic lessons to be learned about the regulation of new technologies? (ii) Is there anything distinctive about the regulatory space occupied by particular technologies? (iii) In pluralistic societies, how are regulators to achieve positions that are perceived to be legitimate?

(i) Are there generic lessons to be learned about the regulation of new technologies?

Without doubt, the outstanding generic challenge presented by new technologies is that of regulatory connection.
Indeed, if we define ‘new technologies’ by reference to the speed of their development, this might be no more than the statement of an analytic truth; but, even if we do not secure the truth of the proposition in this definitional way, the fact is that experience indicates that the technologies in which we have an interest do develop and move on in ways that create difficulties for regulators. Sometimes, the difficulty lies in a lack of correspondence between the form of words found in the regulation and the form that the technology now takes; at other times, the difficulty is that the original regulatory purposes no longer provide clear justificatory cover for the uses to which the technology is now put.6 Whatever the details of the difficulty, no one disputes that maintaining regulatory connection is a key generic challenge. The question is: what, if any, lessons do there seem to be in relation to this generic difficulty?

First, is there a clear-cut answer to the regulator’s question, ‘How are we to keep the regulation connected to the technology?’ Ideally, we want regulation to bind to the technology and to evolve with it. In pursuit of this ideal, regulators face a choice between taking a traditional hard law approach or leaving it to self-regulation and, concomitantly, a softer form of law. Where the former approach is taken, the hard edges of the law can be softened in various ways—especially by adopting a ‘technology neutral’ drafting style,7 by delegating regulatory powers to the relevant Minister, and by encouraging a culture of purposive interpretation in the courts. Conversely, where self-regulation and softer law is preferred, the regime can be hardened up by moving towards a form of co-regulatory strategy.

6 See, further, Brownsword, n 3 above, ch 6.
However, no matter which approach is adopted, there is no guarantee that it will be effective, and the details of the regulatory regime will always reflect a tension between the need for flexibility (if regulation is to move with the technology) and the demand for predictability and consistency (if regulatees are to know where they stand). To this extent, therefore, there is no straightforward generic lesson to be drawn; it is not as though, having identified the problem, we now have a template for responding. We are, as Michael Kirby aptly observes in his closing reflections in this volume, experts without a great deal of expertise.8

Secondly, where a regulatory framework becomes disconnected, there is no denying that this might be undesirable relative to considerations of regulatory effectiveness and/or regulatory economy. With regard to the former (regulatory effectiveness), the problem is that, once regulation becomes disconnected, regulatees cannot be quite sure where they stand—and this will create difficulties irrespective of whether the regulatory environment is intended to support and promote certain activities (for example, human embryonic stem cell research) or to prohibit them (for example, human reproductive cloning). Here, we might pause to note an irony: the more that regulators (in an attempt to let regulatees know where they stand) try to establish an initial set of standards that are clear, detailed, and precise, the more likely it is that the regulation will lose connection with its technological target (leaving regulatees unclear as to their position). With regard to regulatory economy, the point is that, where regulation becomes formally disconnected, it is wasteful to expend either legislative or judicial resource simply to declare, albeit expressly and for the avoidance of doubt, that the regulatory position is as it was clearly intended to be.
That said, we should not assume that (ex post) regulatory disconnection is necessarily and inevitably a bad thing and that, when it happens, every effort should be made to close the gap. Sometimes, in the interests of regulatory legitimacy and democracy, it is important to take time out to debate the developments that have taken place and to determine how the regulatory framework should be adjusted.9

Thirdly, even if there are no simple prescriptions for effective and legitimate regulatory connection, there is a growing awareness that there is a serious problem that requires attention.

7 As advocated, for instance, in relation to electronic signatures (see, eg, Pamela Samuelson, ‘Five Challenges for Regulating the Global Information Society’ in Christopher T Marsden (ed), Regulating the Global Information Society (London, Routledge, 2000) 316 at 320–21) and electronic money. For a comprehensive analysis of technological neutrality, see Bert-Jaap Koops, ‘Should ICT Regulation be Technology-Neutral?’ in Bert-Jaap Koops, Miriam Lips, Corien Prins and Maurice Schellekens (eds), Starting Points for ICT Regulation—Deconstructing Prevalent Policy One-Liners (The Hague, TMC Asser Press, 2006) 77. See, too, the excellent discussion in Chris Reed, ‘The Law of Unintended Consequences—Embedded Business Models in IT Regulation’ (2007) Journal of Information Law and Technology (online).
8 Michael Kirby, ‘New Frontier—Regulating Technology by Law and “Code”’ (ch 17 in this volume).
So, for example, it has been proposed that ‘the Chief Scientific Advisor should establish a group that brings together the representatives of a wide range of stakeholders to look at new and emerging technologies and identify at the earliest possible stage areas where potential health, safety, environmental, social, ethical and regulatory issues may arise and advise on how these might be addressed’.10 Such a group should ensure not only that regulators are forewarned but also, as experience is gathered, that regulators are forearmed. Regulators, too, are waking up to the fact that sustainability is a problem, and there are encouraging signs of imaginative solutions being sought. So, for example, in the House of Commons Science and Technology Select Committee’s report on hybrid and chimera embryos,11 it was suggested that the regulatory agency should be given a broad licensing power to authorise the use of inter-species embryos as research tools but that, if a particularly controversial use or wholly uncontemplated type of embryo were to be proposed, the regulatory framework should ‘contain a provision to enable the Secretary of State to put a stop to the procedure for a limited period while deciding whether or not to make regulations’.12 Such an idea contemplates a constructive exercise in joint regulation, with the breadth of the agency’s licensing powers being geared for flexibility and connection, and the Secretary of State’s stop and review powers designed for both clarity and legitimacy.

In the event, this particular suggestion was not taken forward. Nevertheless, the drafters of the Human Fertilisation and Embryology Bill 2007–08 endeavoured to incorporate in the regulatory framework a number of anti-disconnection measures.
Most strikingly, section 1(5) of the Bill provides as follows:

If it appears to the Secretary of State necessary or desirable to do so in the light of developments in science or medicine, regulations may provide that in this Act … ‘embryo’, ‘eggs’, ‘sperm’ or ‘gametes’ includes things specified in the regulations which would not otherwise fall within the [relevant] definition.

In addition to the limitations that are specified in the express terms of this regulation-making power, the Bill stipulates that regulations ‘may not provide for anything containing nuclear or mitochondrial DNA that is not human to be treated as an embryo or as eggs, sperm or gametes’. In other words, even if it were to appear necessary or desirable to do so, the Secretary of State’s powers do not extend to changing the relevant statutory definitions in a way that would encompass hybrid or chimera embryos.

9 See, further, Brownsword, n 3 above, ch 6.
10 The Royal Society and the Royal Academy of Engineering, Nanoscience and Nanotechnologies: Opportunities and Uncertainties, RS Policy document 19/04 (London, The Royal Society, 2004) para 9.7.
11 House of Commons Science and Technology Select Committee, Government Proposals for the Regulation of Hybrid and Chimera Embryos (Fifth Report of Session 2006–07) HC 272-I (5 April 2007).
12 Ibid at para 100. Compare, too, The Academy of Medical Sciences, Inter-Species Embryos (London, July 2007) at 39; and the House of Lords House of Commons Joint Committee on the Human Tissue and Embryos (Draft) Bill, Human Tissue and Embryos (Draft) Bill, HL Paper 169-I, HC Paper 630-I (London, The Stationery Office, 1 August 2007), where a regime of ‘devolved regulation’ is favoured.
A further example of an attempt to maintain connection is found in section 26 of the Bill, which pre-authorises the making of regulations to cover procedures for mitochondrial donation such that human embryos are created by using genetic material provided by two women. What should we make of such forward-looking measures?

On the face of it, such provisions are a welcome attempt to come to terms with one of the key facts of regulatory life, namely that there will be technological developments that legislatures simply cannot foresee. In the case of the power to broaden the statutory definitions, no attempt is made to second-guess what the nature of the developments in science or medicine might be. We know from recent experience that embryology is a rapidly developing field; but the particular way in which it might develop is less predictable—hence the absence of any particular triggering circumstances in the terms of the regulation-making powers. By contrast, the powers given by section 26 represent a response to a rather particular technological development, indeed one that has been foreshadowed for some time.

Before we embrace particular measures of this kind, I suggest that we need to be satisfied on two related matters. First, we need to be confident that the scenarios and powers in question have been fully debated and authorised at the time of enactment—otherwise the advance authorisation will fail to satisfy the criteria of legitimacy. Secondly, we need to be sure that the scenarios and the scope of the powers are sufficiently clear to enable the debate to be adequately informed—otherwise, a well-intended effort to try to be ahead of the game will prove to be a false regulatory economy. In the light of these provisos, we might have some reservations about the section 1(5) power, certainly more so than with regard to the section 26 power.
Granted, the former has been circumscribed so that hybrids and chimeras are excluded; even so, unlike the section 26 power, there is no knowing what kind of developments in science and medicine might prompt the Secretary of State to invoke the section 1(5) regulation-making power.

There is no guarantee, of course, that advance measures of this kind will be effective when they are activated. To some extent, it might make a difference whether the purpose of the regulatory intervention is to prohibit some conduct or to permit it. Consider, for example, clause 65(2) of the (subsequently abandoned) draft Human Tissue and Embryos Bill 2007, a clause that gave the Secretary of State prior authorisation to regulate against (ie, to prohibit) the selling, supplying or advertising of DIY sperm sorting kits (if and when such kits become available). While the joint parliamentary committee that scrutinised the draft Bill expressed sympathy with the intention behind this clause, it judged that the provision would be unenforceable in practice13—and the committee might well have been right in its assessment. For, had couples not accepted the legitimacy of this restriction, they might have tried to source the kits on the black market; and we can be fairly confident that they would have been assisted by overseas Internet suppliers. This does not mean that activating such regulatory powers will always be a complete waste of time; but regulators should have fairly modest expectations about the likely effectiveness of their intervention.14 By contrast, where regulation declares some activity (such as egg donation for mitochondrial replacement only) to be permitted, then there is perhaps less of an issue about effectiveness—or, at any rate, this is so unless the intention is not merely to permit but to permit and to promote.

13 HL Paper 169-I, HC Paper 630-I, n 12 above, at para 284.
Nevertheless, a permissive provision of this kind might agitate the dignitarians (ie, those who hold that we have a categorical duty not to act in any way that compromises human dignity);15 and, although the signals from the appeal courts have hardly given this constituency any encouragement,16 we should not discount the possibility that the exercise of such new-style powers might be tested through judicial review.

14 Cp Roger Brownsword, ‘Red Lights and Rogues: Regulating Human Genetics’ in Han Somsen (ed), The Regulatory Challenge of Biotechnology (Cheltenham, Edward Elgar, 2007) 39.
15 For discussions of dignitarian thinking, see, eg, Roger Brownsword, ‘Bioethics Today, Bioethics Tomorrow: Stem Cell Research and the “Dignitarian Alliance”’ (2003) 17 University of Notre Dame Journal of Law, Ethics and Public Policy 15; ‘Three Bioethical Approaches: A Triangle to be Squared’, paper presented at an international conference on the patentability of biotechnology organised by the Sasakawa Peace Foundation, Tokyo, September 2004; ‘Stem Cells and Cloning: Where the Regulatory Consensus Fails’ (2005) 39 New England Law Review 535; and Brownsword, n 3 above, esp ch 2.
16 Notably, R v Secretary of State for Health ex parte Quintavalle (on behalf of Pro-Life Alliance) [2001] EWHC 918 (Admin) (Crane J); [2002] EWCA Civ 29; [2003] UKHL 13; and R (Quintavalle on behalf of Comment on Reproductive Ethics) v Human Fertilisation and Embryology Authority [2002] EWHC 2785 (Admin); [2003] EWCA 667; [2005] UKHL 28. For commentary, see Brownsword, n 3 above, ch 6.

(ii) Is there anything distinctive about the regulatory space occupied by particular technologies?

One of the principal ideas associated with the underlying agenda is that, each time a new technology appears, or an established technology assumes a fresh significance or moves forward in some way, we should not, so to speak, have to re-invent the regulatory wheel. Moreover, this sentiment chimes in with the oft-heard view that we should not repeat the mistakes (especially the mistake of genetic reductionism) that we might have made with biotechnology. On the other hand, this idea needs to be counter-balanced by the thought that the technologies, while having some similarities as regulatory targets, are nevertheless different—the thought that each new technology has its own distinctive identity. Hence, even if we do not need to re-invent the regulatory wheel, we do need to refine our regulatory intelligence to bring it into alignment with the characteristics of each particular technology.

One way of trying to implement this sense of similarity and difference is to think about the regulatory space occupied by a particular technology.17 If we had a set of variables that enabled us to plot a regulatory space, we should be able to figure out quite quickly, and in an organised way, in which respects the technology in question was a routine regulatory target and in which respects it was distinctive and special. But, what would those variables be? Given that this is an exercise in the application of regulatory intelligence, the key variables must be those factors that we take to be of regulatory significance.

17 This is an exercise carefully undertaken in relation to information technology in Andrew Murray, The Regulation of Cyberspace (Abingdon, Routledge-Cavendish, 2007).
Accordingly, I suggest that the starting point should be to treat the main variables as those relating to legitimacy, effectiveness (including economy and efficiency), and connection; and, where the regulatory space involves more than one domestic legal system (as is invariably the case with new technologies), then there is the further variable of cosmopolitanism (that is, the challenge of doing justice to the twin ideals of universal concern and respect for legitimate (local) difference).18

What, then, is it that makes a particular technology, or the regulatory space that it occupies, different and distinctive? If, as I have suggested, connection is a generic challenge, we can assume that each technology will be developing at a rate that threatens disconnection. To be sure, some technologies will develop even more quickly than others; but, generally, this will not be where the relevant difference is to be found. At an early stage of their development, some technologies might be purely domestic in their significance such that there is not yet a cosmopolitan challenge to be addressed. If so, there will, at this juncture, be a distinction between the technologies that raise a cosmopolitan challenge and those that do not. However, where technologies are designed to deliver benefits, but where they also present risks, we can be sure that it will not be long before cosmopolitan questions arise. On the face of it, then, the particular differences are likely to relate to legitimacy and effectiveness.

One senses that while, for some technologies, legitimacy is the regulatory 'hot spot', for others it is effectiveness. Following up this thought, we might say that the regulatory space in which information technology is located differs from that occupied by, say, red biotechnology or neurotechnology because, in the former, it is effectiveness that is problematic while, in the latter, it is legitimacy that is the source of regulatory difficulty.
However, while this characterisation of difference might be on the mark at the moment, values and views might change, as might the features of the technological targets once convergence occurs. For example, we might say that, while effectiveness is likely to be the principal regulatory challenge in relation to nanotechnology (how can you regulate for safety when the hazards associated with the technology are not clear?), the regulatory difficulty might switch to legitimacy once nano-medical applications are developed; and, over a period of time, quite possibly, things might change again as the technology is accepted. Or, to take another example, because each technology emerges against an existing regulatory background, there will be a question about whether fresh or dedicated regulatory provisions need to be introduced for the emerging technology. Just as it was asked whether existing contract law provisions might suffice to regulate e-commerce, it is now being asked whether existing health and safety regulations will suffice to cover nanotechnology.19 Sooner or later, though, the answer to the question becomes clearer and what was once an issue is no longer so. In other words, the features that distinguish a particular regulatory space at a particular time are neither intrinsic to the technology nor enduring. Regulatory spaces are shifting scenes.

The lesson to be drawn from this, therefore, is that we should try to recognise common regulatory challenges for what they are. However, while, in the interests of regulatory economy, we should avoid reinventing the wheel, we should not suppose that we can mechanically transplant a particular regulatory regime from one regulatory space to another any more than we should assume that the pressure points on regulators will remain constant.

18 For discussion, see Brownsword, n 3 above, ch 7.

(iii) In pluralistic societies, how are regulators to achieve positions that are perceived to be legitimate?
Elsewhere, I have highlighted two key points that bear on the challenge of regulatory legitimacy.20 One point concerns the difficulties that regulators have in accommodating the various constituencies that make up an ethical plurality; and the other concerns the mistaken assumption that such widely relied on concepts as 'harm' and 'consent' are ethically neutral and unproblematic (even in a plurality). In both cases, plurality is the problem. However, there are different degrees of plurality and we need to understand the significance of this matter.

The easier context is that in which there is a baseline of agreement with regard to the shape and character of the community's ethic. Even in such a community, there will be scope for disagreement. For instance, where the community is committed to a rights ethic (a community of rights), there might be disagreement about the scope and application of agreed rights, or about who qualifies as a rights-bearer, or about the appropriate way of treating those who are not rights-holders. To this extent, there is a plurality within the singularity that is a community of rights. Potentially, though, even this degree of plurality could be destabilising. It is critical, therefore, that members of such a community not only agree on the general shape of their ethical commitments but also agree upon the processes that will be employed to resolve their disagreements.
In other words, the community needs to develop a politico-legal framework, orientated towards its basic ethic, that facilitates the provisional settlement of the community's differences.21 If this trick can be pulled off, the expectation is that disputants will accept that there are reasonable differences of moral opinion (within the accepted parameters) and that provisional settlement must be respected. Or, to put this another way, it means that regulators who act within the terms of the agreed politico-legal framework can respond shortly to those who challenge the legitimacy of their decisions.

19 See, eg, the Royal Society and the Royal Academy of Engineering, n 10 above, ch 8; Jean McHale, 'Nanotechnology, Small Particles, Big Issues: A New Regulatory Dawn for Health Care Law and Bioethics?', paper delivered at Twelfth Annual Interdisciplinary Colloquium on Law and Bioethics, University College London, 2 July 2007; Sonia E Miller, 'Regulating Nanotechnology: A Vicious Circle' in Nigel M de S Cameron and M Ellen Mitchell (eds), Nanoscale (Hoboken, NJ, Wiley, 2007) 155; and Trudy A Phelps, 'The European Approach to Nanoregulation' in Cameron and Mitchell, above, 189.
20 See, eg, Roger Brownsword, 'Stem Cells and Cloning: Where the Regulatory Consensus Fails' (2005) 39 New England Law Review 535, and 'Ethical Pluralism and the Regulation of Modern Biotechnology' in Francesco Francioni (ed), The Impact of Biotechnologies on Human Rights (Oxford, Hart Publishing, 2007) 45.
Essentially, the community is at one in trying to elaborate and act on the best interpretation of its commitments; where a 'best interpretation' is contested and needs provisional settlement, regulators who set the standard need not claim that the position adopted is in line with everyone's interpretation; it suffices to stand on the authority to make the decision and a good faith attempt to discharge the responsibility of taking a public position on the matter.

Once we depart from a relatively safe haven of this kind, we are in serious difficulty.22 Plurality now spells disagreement of a more fundamental nature—and especially so when it includes constituencies that categorically condemn various acts and practices as compromising human dignity. To some extent, we might be able to cover over the disagreements by drafting consensus declarations in suitably vague or abstract terms; but, as soon as a concrete issue needs to be addressed, the divisions break out. We might also find happenstance agreement in some cases; but such consensus is fragile, unpredictable, and exceptional. Moreover, because the differences go so deep, the prospects for a procedural solution are poor. In short, when regulators are dealing with this degree of plurality, there is no easy way of rising to the challenge of regulatory legitimacy.

Before we settle on a rather painful prognosis for regulatory legitimacy, we might seek comfort in the larger picture. We might think, for example, that if the renaissance of dignitarianism owes something to what Gregory Stock calls 'European sensitivities',23 then it will probably fall away as quickly as it has asserted itself.24 Yet, there is reason for thinking otherwise. In particular, neither the utilitarian nor the human rights perspective gives much support to the interests of conservatism, constancy and stability.
And, as the pace of new technology accelerates, we should not underrate the felt need to find a way of registering a concern that the world should, if not stand still, at least slow down. Alongside this concern, there is the fear of the unknown. According to Manuel Castells,

The greatest fear for people…is the oldest fear of humankind: fear of the technological monsters that we can create. This is particularly the case with genetic engineering, but given the convergence between micro-electronics and biology, and the potential development of ubiquitous sensors and nanotechnology, this primary biological fear extends to the entire realm of technological discovery.25

Admittedly, we might not think that constraint for the sake of constraint, nor for that matter irrational fear, has much to recommend it; but, as I argue in Rights, Regulation, and the Technological Revolution,26 any aspirant moral community needs to be careful that the adoption of new technologies does not have the effect of undercutting the very conditions upon which its aspirations are predicated. The lesson here, then, is not encouraging.

21 Cp Deryck Beyleveld and Roger Brownsword, Law as a Moral Judgment (London, Sweet and Maxwell, 1986; reprinted Sheffield, Sheffield Academic Press, 1994) where precisely such a framework is elaborated.
22 See the discussion of the problem of 'external authority' in Deryck Beyleveld and Roger Brownsword, 'Principle, Proceduralism and Precaution in a Community of Rights' (2006) 19 Ratio Juris 141.
23 Gregory Stock, Redesigning Humans (London, Profile Books, 2002) at 13.
24 After all, it is little more than thirty years since philosophers could write that human dignity 'seems to have suffered the fate of notions such as virtue and honor, by simply fading into the past': see Michael Pritchard, 'Human Dignity and Justice' (1972) 82 Ethics 299 at 299.
Depending upon the particular configuration of power and plurality, a failure to rise to the challenge of regulatory legitimacy might or might not be politically problematic. Be that as it may, one of the facts of regulatory life is that there is no easy way out of deep moral disagreement. It is a problem that has taxed moral and political philosophers; and it is a problem that will continue to plague the regulation of new technologies.

III. Regulatory Design

Where regulatory decisions are being made about new technologies, the form and style of the regulation and its institutional array need very careful consideration. There is a huge amount to be said about institutional design, much of it not at all specific to new technologies. Once again, I can only begin to scratch the surface.27

To earth these short remarks, consider the case of the Human Fertilisation and Embryology Act 1990, together with the Human Fertilisation and Embryology Authority that was set up by that Act. It is frequently said that this is a model regulatory scheme. Yet, what is model about it? The legislation has been outrun by developments in embryology; it is a textbook example of regulatory disconnection. More to the point for present purposes, the regulatory authority is thought by its various critics to be too slow and bureaucratic in its decision-making, unrepresentative in its membership (dignitarians are not welcome), prone to capture by its licensees from whom the authority draws its funds, and largely unaccountable.28 Whilst we might mount a defence to the criticism of regulatory disconnection (along the lines that this was a case of productive disconnection and debate),29 what should we think about the kind of criticisms that are made of the regulatory agency itself?

25 Manuel Castells, The Internet Galaxy (Oxford, Oxford University Press, 2001) 280.
26 Brownsword, n 3 above, esp chs 9 and 10.
27 For helpful regulatory 'maps', see Julia Black, 'De-centring Regulation: Understanding the Role of Regulation and Self-Regulation in a "Post-Regulatory" World' (2001) 54 Current Legal Problems 103, esp at 134–5; and Colin Scott, 'Accountability in the Regulatory State' (2000) 27 Journal of Law and Society 38.
28 See, eg, Sarah Boseley, 'MPs Hit at Fertility Watchdog over Designer Baby' The Guardian (18 July 2002).
29 See, further, Brownsword, n 3 above, ch 6.

It will be recalled that, a couple of years ago, plans were announced for the merger of the Human Fertilisation and Embryology Authority with the more recently-formed Human Tissue Authority, these two agencies being reconstituted as the Regulatory Authority for Tissue and Embryos (RATE). This plan to create one super agency was not short of critics, not least because it entrusted the new authority with a responsibility for a very wide sweep of activities running from assisted reproduction to state of the art research. Faced with overwhelming criticism from the Joint Committee on the Human Tissue and Embryos (Draft) Bill, Government abandoned its plan.30

However, the merits of the proposed merger and the eventual abortion of RATE are not the present issue. Rather, what is of interest is the regulatory requirement that was to be placed on RATE by section 10 of the draft Human Tissue and Embryos Bill 2007 and which has been carried forward in relation to the (now surviving) Human Fertilisation and Embryology Authority by section 7 of the Human Fertilisation and Embryology Bill.
Here, the Authority is required to carry out its statutory functions 'effectively, efficiently and economically' and in such a way that it has 'regard to the principles of best regulatory practice (including the principles under which regulatory activities should be transparent, accountable, proportionate, consistent and targeted only at cases in which action is needed)'31—in other words, the Authority is required to act in accordance with both the 3Es and the principles of good regulation as set out by the Better Regulation Task Force.

The idea that a regulatory Authority, whether RATE, the HTA, or the HFEA, might be judicially reviewed for a failure to comply with the requirements of effectiveness, economy, or efficiency is surely no more than a paper possibility—or, at any rate, barring quite exceptional incompetence by the agency, this must be the case. Just to take one scenario: suppose that the efficiency curve for the HFEA shows that the optimal gearing is at a point of rather low agency activity. At this level, the Authority does not carry out inspections or audits of licensed facilities. In due course, following a scandal or two, and a media campaign complaining about the agency's inadequate supervision, there is an application for judicial review alleging that the HFEA has failed to carry out its functions effectively. The HFEA's response is that, if it is to be more effective, its performance will be less efficient (not to mention being less economical). I imagine that, if the initial application for judicial review were granted, the ruling would be that it is for the HFEA, not the High Court, to accommodate these values; and that, provided this is done in a way that satisfies undemanding Wednesbury reasonableness,32 the Authority must proceed as it thinks best. Be such matters as they may, it is the second requirement that is of greater interest.

30 See HL Paper 169-I, HC Paper 630-I, n 12 above, at para 297.
31 S 7 operates by inserting new ss 8ZA(1) and (2) into the 1990 HFE Act.
32 Associated Provincial Picture Houses Ltd v Wednesbury Corporation [1948] 1 KB 223; and, for discussion of the reasonableness standard in judicial review, see John N Adams and Roger Brownsword, Understanding Law, 4th edn (London, Sweet and Maxwell, 2006) ch 8.

As is well-known, the Better Regulation Task Force advocates the five principles of good regulation as now specified in the above-mentioned section 7.33 Viewed as a 'back of an envelope' list, each of the principles, and the set as a whole, is plausible. However, each principle invites elaboration. For example, when transparency is unpacked, we find that it includes requirements for clarity of regulatory purpose and clear communication of that purpose, consultation, clear penalties for non-compliance, clear drafting of regulations, and information, support, and guidance for those subject to the regulation as well as time to comply. Similarly, when we read beyond the headings for proportionality and targeting (necessity), we find that these principles are geared to counteracting the tendency towards over-regulation in a risk averse society.34 Already we have a sense that holding a regulatory agency to account in relation to these principles will not be entirely straightforward. However, the bearing of the better regulation principles on the matter of institutional design is much more complex than this. Let me simply note four aspects of this unstated complexity.

First, to ask a naïve question, which bit of regulatory design is it that the better regulation principles have as their target? By and large, the principles are not directed at the substantive standards set by regulators.
Granted, there is an agenda here about over-regulation (for which we should read an excessive burden created by regulatory prohibition or requirement); but, in general, the principles are not about writing the regulatory script itself. Rather, for the most part, the principles seem to be about the operational practice of regulators. The principles, in other words, are less concerned with telling regulators what standards they should or should not set than with telling regulators how to go about setting standards. In their practice, regulators should act in a way that is transparent (with all that this principle implies), consistent, and so on. By contrast, the principle of accountability seems to speak to a different concern—not a concern about the standards that are actually to be set, nor about how standards are set, but about making regulators answerable for their actions. What we detect, then, is that the better regulation principles straddle matters that speak not only to the way in which an agency is constituted (particularly relating to the way that an agency is held to account) but also to the way in which an agency operates. But, once we begin to separate out these aspects of regulatory design, we might wonder whether the principles expressed by the Task Force give us the full story.

This takes us to the second point. In a paper that every regulatory architect should read, Michael Trebilcock and Edward Iacobucci identify ten design desiderata which they then marshal as five oppositional pairs.35 Although the focus

33 See Scientific Research: Innovation with Controls (Better Regulation Task Force, London, 2003) Appendix C, p 36.
34 Ibid at p 3: 'The UK has a proud history of scientific research and innovation, but in an increasingly risk averse society this is in danger of being undermined by excessive regulation.'
35 Michael Trebilcock and Edward Iacobucci, 'Designing Competition Law Institutions' Cambridge Lectures (for the Canadian Bar), Queen's College, Cambridge, July 2001.

of the paper is on regulatory design in the context of competition law, what Trebilcock and Iacobucci say is of general application. The five key pairs of opposition are between independence and accountability, expertise and detachment, transparency and confidentiality, efficiency and due process, and predictability and flexibility. Once again, these desiderata seem to straddle agency constitution (especially, independence, accountability, expertise, and detachment) and agency operation (particularly, transparency, confidentiality, efficiency, due process, predictability and flexibility); but, wherever we look, the oppositional pairs suggest tensions that are implicated in regulatory design. In this light, three of the principles proposed by the Better Regulation Task Force look one-sided: transparency needs to be balanced with confidentiality, accountability with independence, and consistency (or predictability) with flexibility.

This, however, is not yet the end of the complexity because, as Trebilcock and Iacobucci point out, many of the values 'interact with each other in polycentric, mutually reinforcing or antithetical ways. For example, accountability may be antithetical to administrative efficiency by proliferating appeal or review processes, while expertise may enhance administrative efficiency.
Confidentiality and flexibility may be antithetical to due process, but due process in turn may be in tension with expertise.'36

Thirdly, although neither the better regulation principles nor the Trebilcock and Iacobucci desiderata are directed at the substance of regulatory standards (which is where we find the ethical plurality most vociferously at odds with itself), we should not infer that questions of regulatory design are value-free or ethically neutral. Questions of regulatory legitimacy arise here too; and, inevitably, we soon run into the problems of plurality. For example, the opposition between efficiency and due process tends to be underwritten by the opposition between utilitarian ethics (for efficiency) and rights ethics (for due process); and the Task Force's agenda against 'over-regulation' of science is implicitly underwritten by a utilitarian ethic that is prioritised against the rights constituency that demands (as utilitarians see it) burdensome consent and data protection practice as well as against the dignitarian red-light ethic. Mapping the ethics that support the principles and desiderata would be a major exercise; but it would serve to draw out and underline the complexity of the matter.

Fourthly, we have, thus far, posed the question of regulatory design in relation to just a single agency. However, the HFEA, or any other agency, typically will form part of an institutional set, comprising the agency, the legislature, the executive and the courts. The significance of this is that we want the set as a whole, not simply the agency in isolation, to make regulatory sense. For example, if we were to criticise the design of the courts as conferring too much independence on judges and leaving the judicial branch insufficiently accountable to electors, the obvious response would be that we see good regulatory sense in the current design when put alongside the accountability of the political branch. Or, again, as

36 Ibid at 9.
we saw in recent debates about the licensing powers of the Authority, we need to think about the interaction between the various parties that make the regulatory environment what it is.

Finally, when we turn from regulatory design in general to regulatory design in the particular context of new technologies, we see that there is a pressing challenge. For, in some parts of the world, it would be no exaggeration to say that there is now a crisis of confidence in both the practitioners and the custodians of new technology; scientists and regulators alike are no longer trusted. How is this breakdown in trust to be repaired? How are trusted institutions to be re-built? As Onora O'Neill has astutely observed, we can introduce processes that are designed to manifest trustworthiness (processes that are geared for transparency and accountability and the like) but this does not necessarily engender trust.37 Paradoxically, procedures that are designed for trustworthiness—including procedures for public participation—might contribute even more to the breakdown of trust.38

The lesson of all this is clear: principles of ostensibly better regulation do not necessarily or straightforwardly make things better; enshrining such principles in a hard law form does not necessarily improve the quality of an agency's performance; and if regulatory institutions are to enjoy the trust and confidence of the public (where there are concerns about the technology) as well as meeting the demands of their political and technological stakeholders, there are major design challenges ahead.39

IV. Technology as a Regulatory Tool (Regulating Technologies)

The final three questions in the underlying agenda express a concern that runs through the papers in the first part of this collection: namely, what are the implications of new technologies being adopted as regulatory tools?
37 Onora O'Neill, Autonomy and Trust in Bioethics (Cambridge, Cambridge University Press, 2002) ch 6.
38 In general, for such a phenomenon, see Cass R Sunstein, 'Paradoxes of the Regulatory State' (1990) 57 University of Chicago Law Review 407.
39 Cp Michael Kirby, 'Human Freedom and the Human Genome: The Ten Rules of Valencia' (paper given at international workshop on Freedom and Risk Situations, Valencia, Spain, 25 January 1999) at 18–19: Without global institutions, talk about prohibitions, regulations and moratoriums will be just that: talk. The absence of effective inhibitions amounts to a permit for science to go where any individual researcher chooses…Ultimately, we require effective institutions of regulation and lawmaking which render the genomic scientist and the technologist, like everyone else, answerable to the law.

Already, we see a technological approach being employed within the framework of traditional 'obey or pay' forms of regulation. The technology might be designed to discourage non-compliance or to improve the chances of detection, or both; it might be pretty crude (for example, speed bumps or other traffic calming measures within restricted areas)40 or it might be more sophisticated (for example, CCTV, smart cards, tracking devices, DNA data bases, and so on). In order to tighten the technological grip, the technology of surveillance and detection has to be more sophisticated and pervasive41 and/or non-compliance must simply be designed out (whether by focusing on products, people, or places). Whether the technological initiative is for detection or design-out, the implications of this regulatory turn invite serious consideration.
And, while such developments give rise to a range of concerns, I suggest that the deepest concern relates to the threats that such regulatory strategies might present to aspirant moral communities—not because regulatory practices of this kind are immoral (although they might well be judged to be so) but because they threaten the sustainability of moral community itself.

To pick up three questions for further consideration: (i) is the particular way in which a design-based regulatory approach impacts on an agent's choice significant; (ii) is the prospect of techno-regulation really feasible (can ambient law ever be as smart and flexible as traditional law); and, last but by no means least, (iii) to what extent should we accord the regulatory State a stewardship jurisdiction?

(i) The details of regulating by design

The papers in this collection draw a number of distinctions between technologies that have a regulative effect—for example, between the intentional and unintentional use of technology as a regulatory instrument, between norm-setting and norm-enforcing technologies,42 between regulative and constitutive technologies,43 and elsewhere I have drawn a broad distinction between those (panopticon) technologies that are designed to monitor and detect non-compliance and those (exclusionary) technologies that are designed to eliminate the option of non-compliance.44 Moreover, as Karen Yeung highlights in her contribution to this volume, it needs to be appreciated that there is a broad range of design-based strategies, that each strategy impacts on moral choice in its own way, and that the nuanced nature of regulating by design bears further consideration.45

One of the biggest challenges to the freedom of humanity in the coming century will be to build more effective national and international institutions which can respond with appropriate speed and expertise to the challenges of science and technology.
See, too, the remarks made by the Joint Committee on the Human Tissue and Embryos (Draft) Bill, n 12 above, esp at paras 130–33.
40 For relatively straightforward design initiatives, see Neal Kumar Katyal, 'Architecture as Crime Control' (2002) 111 Yale Law Journal 1039.
41 See, eg, Clive Norris and Gary Armstrong, The Maximum Surveillance Society: The Rise of CCTV (Oxford, Berg, 1999) ch 10.
42 For these first two distinctions, see Bert-Jaap Koops, 'Criteria for Normative Technology' (ch 7 in this volume).
43 For this distinction, see Mireille Hildebrandt, 'A Vision of Ambient Law' (ch 8 in this volume).
44 See, eg, Brownsword, n 3 above, chs 9 and 10.
45 Karen Yeung, 'Towards an Understanding of Regulation by Design' [in this volume].

In a community of rights, agents will face more than one kind of moral dilemma. One kind of dilemma will be that in which the agent is striving to do the right thing but it is not clear what action is required; for example, this is the dilemma of an agent who is not sure whether the right thing is to tell the truth or to tell a white lie, whether to respect a confidence or to inform another of a risk, and so on. However, it is another kind of dilemma that is relevant to our thinking about the impact and import of design-based regulation. This is the dilemma of an agent who believes that the morally required action is x (say, keeping a promise) but who is inclined, for reasons of non-moral self-interest, to do not-x (say, breaking the promise in order to make a financial gain). As Kantians would put it, this is the case of an agent whose will is in conflict, the autonomous moral will being contested by the heteronomous will of inclination and desire.
More prosaically, we can identify the following four key elements in this conflicted situation: (a) the agent is aware that doing x is the morally required action; (b) however, the agent is inclined, or desires, to do not-x; (c) this conflict arises in circumstances where a choice between doing x and doing not-x presents itself to the agent as a real practical issue; and (d) the circumstances also allow, in practice, for the doing of not-x. In principle, regulators might target any one of these elements in order to design around or design out the difficulty. The question is whether, in a community of rights, anything rides on which element of the situation regulators target.

Assuming that the agent is aware that doing x is morally required, then where an agent might be tempted to defect, regulators might seek to reinforce the agent's moral resolve against defection. In most communities, social pressure together with the possibility of censure and criticism works quite well to keep agents on the moral straight and narrow. However, we are contemplating regulators who employ new technologies to reinforce the moral line. Let us suppose, then, that regulators introduce into the food or water supply a cocktail of smart drugs that has the desired effect. With this supplement, agents find it much easier to empathise and sympathise with others and to overcome their immoral inclinations; and, as a result, they do the right thing. We might recall Mustapha Mond's conversation with the Savage in Huxley's Brave New World,46 where Mond points out that, in place of all the effort associated with hard moral training, anyone can be moral by swallowing a small amount of soma. As Mond puts it, 'Anybody can be virtuous now. You can carry at least half your morality about in a bottle. Christianity without tears—that's what soma is.'47 Back in a community of rights, would such a regulatory strategy (assuming that it is known that this is what regulators are doing) be problematic?
46 Aldous Huxley, Brave New World (London, Flamingo, Harper Collins, 1994).
47 Ibid at 217.
One thought, a thought articulated by Yeung, is that this kind of approach might be judged to interfere with authentic, unaided, moral action. Other things being equal, we certainly might intuitively prefer that moral action is unaided rather than artificially assisted; but, unless the injunction that agents should do the right thing for the right reason also implies that there should be a certain degree of hardship or adversity involved in doing the right thing, it is not clear that this intuition is reliable.48 To be sure, if the regulatory intervention makes it so easy for agents to do the right thing that they experience no resistance to doing that thing, then there is no element of overcoming and there is a risk that agents lose the sense that they face a choice (between right and wrong). If, instead of boosting the moral will, regulators target their strategy at suppressing the inclination to defect, would this make any difference? Let us suppose, once again, that a regime of smart drugs will have the desired effect. On the face of it, this does not seem to be materially different from the first approach. If the suppressants are so powerful that they eliminate all desire to defect, then there might be a question mark against such an intervention; and we might also question this approach if we harbour a sense of moral virtue that involves a certain degree of overcoming (where the intervention, if not eliminating the desire to defect, suppresses it to a level that makes it simply too easy for the agent to claim any merit in doing the right thing). So, provided that agents are not given a ‘walk-over’ or such a favourable weighting with regard to the ratio between willing x and willing not-x that they can hardly fail to do the right thing, a design strategy of this kind might be judged acceptable.
Having said this, in both cases I am assuming that the intervention is general rather than agent-specific, and that it applies across a broad spectrum of acts rather than in relation to one particular kind of act. Where the intervention is agent-specific and restricted to one particular type of act (say, paedophilia), a community of rights might judge that elimination of desire (or a major ramping up of moral resolve) is acceptable provided that the agent otherwise enjoys a ‘normal’ moral life. Turning to the targeting of the circumstances rather than the agent, what should we make of a design that simply eliminates the difficulty? Consider an example suggested by Jonathan Zittrain as follows:49
One might put flagstones on a grassy quadrangle precisely where people tend to walk, rather than trying to convince people to use paths that do not track where they would like to go, to mitigate the damage to the turf.
This might not be in the longer-run interest of regulators because, where keeping off the grass really matters, regulators will need to find ways of being ‘more insistent and creative in influencing behavior.’50 However, the present question is whether such putative smart regulation is in the interest of regulatees. Where resources are in short supply, but where additional resources can be provided, is it always smart to supply to demand? For instance, do we think that it is smart parenting where, in order to avoid conflict, children are given their own televisions, their own computers, their own rooms, and so on?
48 Compare Neil Levy, Neuroethics (Cambridge, Cambridge University Press, 2007) esp chs 2 and 3; and John Harris, Enhancing Evolution (Princeton, NJ, Princeton University Press, 2007).
49 Jonathan Zittrain, ‘A History of Online Gatekeeping’ (2006) 19 Harvard Journal of Law and Technology 253 at 255.
50 Ibid.
The effect of this strategy is to reduce the opportunities that children have to learn how to share, how to cooperate, how to compromise. If they have adequate opportunities elsewhere, then why not make the home a haven for their self-regarding individualism? And, in a community of rights, we might entertain similar thoughts. If we keep on eliminating situations where we need to be other-regarding, will the community be capable of responding in the right (moral) way if and when the need arises? Unless we are other-regarding by nature (which points to a further, fourth, possibility), we need some practice at being moral; children need to be nurtured in a moral direction, and we all need the opportunity. Fourthly, there is the possibility that regulators might target the practical opportunity for defecting from the moral code. Where techno-regulation simply eliminates the possibility of deviating from the required pattern of conduct, where the only practical option is to do the right thing, then the conditions for moral community are surely compromised. If this is the default strategy for regulators, then a red line has been crossed. Even if some of the other instances of a design-based approach might be acceptable in a community of rights, systematic targeting of the practical opportunity for defection is off limits: in general, regulators should not try to exclude the possibility of doing wrong. This prompts further reflection on the distinction between design-out and design-in strategies. If it is wrong for regulators to eliminate the possibility of doing wrong, does it matter whether they target potential violators (by design-out measures that preclude the possibility of deviance) or their victims (by design-in measures that protect agents against the harm otherwise caused by acts of deviance)?
To clarify, regulators might be able to design agents so that they simply do not have the capacity or the will to deviate from the required pattern of conduct; or, lacking this technological expertise, regulators might be able, through various technological means, to immunise victims against violations. In the former case, the design-out means that agents are coded to act like saints; in the latter case, agents are still free to sin but any harm associated with sinning is neutralised. When I took a first bite at this particular cherry,51 I suggested that a community of rights might think that this is a distinction without a difference because deviants who know that they can never inflict any real harm on others might as well not have the inclination or the will to deviate in the first place. However, this is not the only plausible response. Taking a different view, a community of rights might reason that there is a significant difference between design-out and design-in because, in the former case, agents are only dimly aware (if at all) that they are doing right rather than wrong, while in the latter case agents will be aware that they are deviating. In the former case, agents make no attempt to deviate; but, in the latter case, agents not only can attempt to deviate but will be aware that they are acting against the preferred regulatory pattern of conduct. Even with this second bite at the cherry, it is not clear whether the distinction between design-out and design-in really matters for a community of rights.
51 Roger Brownsword, ‘Neither East Nor West, Is Mid-West Best?’ (2006) 3 Script-ed 3 (available at accessed 22 May 2008).
(ii) The feasibility of techno-regulation
If there were no possibility that technologies might develop in a way that enables regulators to code and design for compliance, our concerns about a regulatory revolution would be merely academic.
In this light, we should recognise that many researchers in the biosciences are sceptical that the science and technology needed to support the kind of control implicit in such a regulatory vision is on any foreseeable horizon. To master the coding and circuitry of the brain and the body, let alone our interactions with others and the environment, is a challenge of massive proportions. Perhaps if we were engaging in this kind of speculative discussion at the turn of the next Millennium, or the Millennium after that, it would have a little more practical purchase. Even if we are not deterred by such sceptical projections, we might be troubled by the objection that, regardless of the pace of technological advance, it simply is not feasible to techno-regulate in the subtle way that traditional rules (and their human interpreters) operate.52 Consider the case of a railway carriage that is set aside as a quiet zone. Although it is the recent development of the mobile phone that has prompted the practice of designating a quiet coach, the current regulatory practice is highly traditional. Typically, notices are displayed in the carriage, reminding passengers that this is the quiet coach and prescribing that ‘passengers should not use mobile phones or make any other unnecessary noise’, or some such form of words. Generally, social pressure suffices to enforce compliance with the rule. However, if it were felt that a stiffer sanction was called for, say a fixed fine, we might look to technically-minded regulators to implement this scheme. In the not too distant future, when each railway passenger is biometrically identified, when smart cards operate multi-functionally as tickets, as entry and exit tokens, and as payment instruments, and when CCTV is routinely embedded in transport systems, this might look like a reasonably straightforward brief for the designers.
Basically, if a passenger acts improperly in the quiet carriage, they will only be able to exit the carriage once the fixed penalty payment has been deducted from their payment card. However, even if this sounds perfectly feasible as a technical challenge, is it so clear that the technology can be mapped onto the background rule? The rule, as currently drafted, is open-textured.53 Even if there is not much doubt about what qualifies as a mobile phone (although would a child playing with a toy mobile phone break the rule, or would a BlackBerry count as a mobile phone?), there is some vagueness about what qualifies as ‘use’ of a mobile phone (would an adult using their mobile to text a friend or to take photographs break the rule?). More seriously, the catch-all supplementary phrase ‘or make any other unnecessary noise’ is wide open to interpretation. The intention of the rider clearly is to catch other activities that violate the spirit (if not the letter) of the rule, but we can imagine a host of activities that might or might not be caught depending upon how we interpret the phrase and especially how we interpret the word ‘unnecessary’. For example, would a passenger listening to a personal music player be in breach of the rule if the background ‘chink-a-chink’ is audible? Would it be a breach of the rule to talk with a colleague in an animated fashion, or to bring a crying child or a barking dog into the quiet coach, and so on?
52 Cp Richard Susskind, Transforming the Law (Oxford, Oxford University Press, 2000) esp chs 7 and 8. At 170, Susskind identifies five dimensions of feasibility: technical possibility, jurisprudential soundness, commercial viability, organizational suitability and strategic appropriateness. In the text, I have assumed technical possibility and then focused exclusively on the issue of jurisprudential soundness.
53 Classically, see HLA Hart, The Concept of Law (Oxford, Clarendon Press, 1961).
This is not the end of the matter, for whatever traditional legal rules might mean on paper, there is often a practice around the rule that is quite different.54 The paper rules are one thing; the real rules are something else. Expectations relative to the paper rules do not always coincide with expectations encouraged by custom and practice. So it is with quiet coaches. If we are guided by custom and practice, the paper rule seems to be disapplied in certain circumstances—for example, where the train is over-crowded and there is standing room only in the quiet coach, when a train is severely delayed and passengers want to use their mobiles to let friends and relatives know that they are running late, and (or so I assume) if there were an emergency needing a rapid 999 response. One might argue that some of these exceptions are implicitly written into the rule, that the prohibition is not on the use of mobiles but on the unnecessary use of mobiles. However, this hardly improves the position because, whichever way we look at it, what the rule means depends on a raft of conventions, linguistic and social, and the conventions are sometimes fuzzy as well as being susceptible to change. In the light of this, the question is whether the ambient regulatory environment of the high-tech quiet carriage could be modulated to reflect these various scenarios. Insofar as we are able to specify the various scenarios and the application of the rule (applied or disapplied) in those scenarios, I assume that expert systems will be sufficiently sophisticated to be able to track old-fashioned law. However, there seem to be two sources of serious difficulty: one is that we are not able to foresee or anticipate the full set of scenarios; and the other is that, over time, we change our minds about how the rule should be applied. Yet, these difficulties do not look insuperable. In response to the former, the obvious move is to equip the system with a default rule.
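The default-rule move just described can be pictured as a small dispatch table: enumerated scenarios carry an agreed ruling, and anything unanticipated falls through to a pre-agreed default. The following Python sketch is purely illustrative; the scenario names and the choice of default are my own assumptions, not drawn from the text:

```python
# Illustrative sketch of a quiet-coach rule system with a default rule.
# The scenario names and the choice of default are hypothetical.

# Anticipated scenarios and the agreed application of the rule:
# True  -> the quiet-coach rule applies (a penalty may be charged)
# False -> the rule is disapplied
KNOWN_SCENARIOS = {
    "normal_service": True,
    "overcrowded_standing_room_only": False,  # paper rule disapplied by custom
    "severe_delay": False,                    # passengers may phone ahead
    "emergency_999_call": False,              # emergency exception
}

# Whether application or disapplication is the default is itself a policy choice.
DEFAULT_APPLIES = True

def rule_applies(scenario: str) -> bool:
    """Return whether the quiet-coach rule applies in the given scenario.

    Unanticipated scenarios fall through to the default, so the system
    always has an answer even for cases the designers did not foresee.
    """
    return KNOWN_SCENARIOS.get(scenario, DEFAULT_APPLIES)
```

On this sketch, a hard case that was never specified simply receives the default ruling; if that outcome is later judged unacceptable, the adjustment is made by adding the scenario to the table rather than by changing the rule on the hoof.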
54 Cp Stewart Macaulay, ‘The Real and the Paper Deal: Empirical Pictures of Relationships, Complexity and the Urge for Transparent Simple Rules’ (2003) 66 MLR 44.
Whether the default should be for application or disapplication is a matter to be settled; but, once the default is installed, the system knows what to do even if the scenario is not specifically anticipated. In response to the latter, we could minimise the difficulty by agreeing that we will not change our minds on the hoof. If we then have an outcome that we judge to be unacceptable—arising from a classic hard case such as an elderly person, carrying a mobile phone, but failing to appreciate that there are restrictions in the quiet coach—we should make the necessary adjustments to the system and possibly compensate the passenger; but, in general, so long as the penalty for violation is a relatively minor and reversible one, this might seem to be a reasonable price to pay for submitting to the rule of the technology. Of course, it might be protested that the technology can never match the law because the beauty of the latter is that we can make it up as we go along. However, this seems like a quixotic inversion of what we usually take to be the virtue of the Rule of Law, namely that making it up as we go along is precisely what we do not do. Insofar as the Rule of Technology checks against just that temptation, some might think that regulation by technology is not only feasible but indeed desirable.55
(iii) State stewardship
Elsewhere, I have suggested that, in a community of rights, there will be support for the state being entrusted with a stewardship responsibility for the moral welfare of the community.56 Like any form of stewardship, this responsibility implies an obligation not only to present members of the community but also to future generations.
The most precious thing that an aspirant moral community can hand on to the next generation is an environment that is conducive to a moral way of life, to a way of life that hinges on agents trying to do the right thing, trying to respect the legitimate interests of fellow agents, and being held responsible for their actions. At its most profound, the state’s stewardship responsibility is to ensure that the enthusiasm that regulators begin to display for technological instruments of control does not insidiously undermine the conditions that give moral life its meaning. However, if the state is not to tilt from its liberal disposition to a more authoritarian form, it is imperative that we are clear about both the basis and the boundaries of stewardship.
55 Cp Mireille Hildebrandt, ‘A Vision of Ambient Law’ (ch 8 in this volume).
56 See, eg, Roger Brownsword, ‘Happy Families, Consenting Couples, and Children with Dignity: Sex Selection and Saviour Siblings’ (2005) 17 Child and Family Law Quarterly 435.
57 Han Somsen, ‘Cloning Trojan Horses: Precautionary Regulation of Reproductive Technologies’ (ch 10 in this volume).
To start with the basis of stewardship: in a community of rights, we can assume that the state will need special reasons for interfering with acts that are to be treated as prima facie permissible—whether because they do not obviously impinge on the rights of others or because the relevant others have consented and there is no impingement on non-consenting third parties. Moreover, we can assume that where individual agents act, alone or in concert, in ways that seem to be permissible, the state has the burden of justification if it is to intervene against such acts. As Han Somsen rightly points out in his contribution to this volume,57 to license the state to intervene on the grounds that the acts in question might be
damaging to rights-holders or might be damaging to the community is to put a considerable trust in both the sound judgment and the good faith of the state. In a community of rights, we can define away this difficulty; for it is an analytical truth that, in such a community, the state simply will not act in bad faith or in a way that is clearly incompatible with the community’s rights commitments. Once we remove this safety net, however, there is no guarantee that stewardship, like precautionary restriction, will not serve as a Trojan Horse for disreputable regulatory purposes. Lacking such a guarantee, it is an open question how far we might want to go with the idea of a stewardship responsibility. To take a step back, if it is agreed that the state needs special reasons for interfering with prima facie permitted acts, we might argue for a lower or a higher threshold for legitimate state intervention. If we argue for the higher threshold, we are, in effect, treating the state as no different to an agent. In the absence of consent, the state should not prohibit or otherwise impede an agent’s act unless this is necessary for the sake of more compelling rights. By contrast, if we argue for a lower threshold, our view is that, in addition to the reasons that are adequate relative to the higher threshold, the state may (indeed, should) exercise a stewardship responsibility. Quite possibly, those who view the state as an unwelcome extension of private relationships will tend towards the former view, while those who start with a public law perspective will tend towards the latter view. Clearly, though, whether our mind-set is private or public, we will want to see the boundaries of stewardship closely defined. What, then, are the boundaries of stewardship? I suggest that, in a community of rights, there are three circumstances in which stewardship might legitimately be invoked.
First, if we suppose (as I do) that the members of a community of rights do not regard themselves as morally omniscient, the state has some margin to cater for the fallibility of the community. Accordingly, if it is argued that an action should be prohibited because it might put at risk the interests of possible rights-holders or because it might indirectly be damaging to rights-holders, the state may intervene (if only temporarily) on stewardship grounds. Secondly, the state has a responsibility to protect and promote the conditions that are conducive to flourishing agency. Public health seems to be such a case.58 Stewardship certainly requires the state to keep citizens informed about risks to their health, and a community of rights might well judge that it is legitimate for the state to exercise stewardship by requiring participation in programmes that are intended to improve the conditions of public health. Thirdly, to return to my basic point, the state has a stewardship responsibility to protect and promote the conditions that are constitutive of a meaningful moral community—and, unless we can devise some arrangement for super-stewardship, we must leave it to the state to self-regulate (through judicial review and similar checks and balances) against irresponsible reliance on technological tools of control.
58 Cp Nuffield Council on Bioethics, Public Health: Ethical Issues (London, November 2007).
In proposing a stewardship jurisdiction for the state, it was not my intention to court controversy. However, there is no denying that, once we venture beyond the gated and secure conditions of a community of rights, stewardship might prove to be a hostage to fortune. As with so many of the matters arising from what I am calling the underlying agenda, I am conscious that this is a significant item of unfinished business.
V. Conclusion
So, what does the world need now?
Matt Ridley has argued that technical fixes have been employed to make people healthier, wealthier, and wiser; and, by and large, what improves the quality of life is invention rather than legislation.59 Regulators may well conclude, therefore, that what the world needs now is hi-tech social control. If so, where technology is deployed in support of traditional measures of prevention and enforcement, respect for human rights and human dignity continues to be relevant to the lines that we draw around the acceptable use of the technology (by the regulators). If regulators go beyond this, systematically relying on a technological strategy in place of traditional forms of social control, then whatever our moral take, whatever we make of human rights or human dignity, there is a risk that the preconditions for moral debate and discourse are corroded and compromised. The extent of an aspirant moral community’s loss is captured by Jürgen Habermas in the following terms:
Without the emotions raised by moral sentiments like obligation and guilt, reproach and forgiveness, without the liberating effect of moral respect, without the happiness felt through solidarity and without the depressing effect of moral failure, without the ‘friendliness’ of a civilized way of dealing with conflict and opposition, we would feel, or so we still think today, that the universe inhabited by men would be unbearable. Life in a moral void, in a form of life empty even of cynicism, would not be worth living.
[Our impulse is] to prefer an existence of human dignity to the coldness of a form of life not informed by moral considerations.60
59 Matt Ridley, ‘We’ve Never Had it so Good—and It’s All Thanks to Science’ Guardian Life (3 April 2003) 8.
60 Jürgen Habermas, The Future of Human Nature (Cambridge, Polity Press, 2003) at 73.
In other words, if information and biotechnologies are developed not merely to assist traditional forms of regulation but to operate as techno-regulatory solutions, then a community of rights faces a choice: namely, to settle for less effective regulation (possibly permitting a degree of non-compliance that impinges on the rights and legitimate choices of ‘victims’) or, for the sake of effectiveness, to adopt techno-regulation (seemingly abandoning the importance that we attach to the dignity of choice and, with that, much of the basis on which our thinking about responsibility, as well as rights, is premised). In a community of rights, it is not enough that a regulatory technology works (that it achieves the desired regulatory effect). Nor is it enough that the technology respects privacy and confidentiality, or has been authorised by processes that satisfy the requirements of free and informed consent. In a community of rights, the fundamental question is whether the technology threatens to change the cultural environment in a way that no aspirant moral community can live with. If there is a real concern that the technology presents such a threat, regulators, as stewards for the moral community, should go no further with that kind of strategy. When regulators trade technologically guaranteed compliance for legitimacy, we cannot even say that they have entered into a pact with the Devil; because when regulators strike this deal, in effect, they dispense with a public distinction between right and wrong.
The regulatory challenge presented by new technologies can become, and is already being seen as, an opportunity;61 but it is hard to imagine a challenge that is more fundamental than that presented by the self-same opportunity.
61 Compare Roger Brownsword, ‘Genetic Databases: One for All and All for One?’ (2007) 18 King’s Law Journal 247.
Part One
Technology as a Regulatory Tool
3
Crime Control Technologies
Towards an Analytical Framework and Research Agenda
BEN BOWLING, AMBER MARKS AND CIAN C MURPHY
The influence of the criminal upon the development of productive forces can be shown in detail. Would the locksmith’s trade have attained its present perfection if there had been no thieves? Would the manufacture of banknotes have arrived at its present excellence if there had been no counterfeiters? Would the microscope have entered ordinary commercial life had there been no forgers? Is not the development of applied chemistry as much due to the adulteration of wares, as to the attempts to discover it, as to honest productive effort? Crime by its ceaseless development of new means of attacking property calls into existence new measures of defence, and its productive effects are as great as those of strikes in stimulating the invention of machines.1
Introduction
The substantive focus of this chapter—crime control technologies—can be stated simply enough, but this simplicity is deceptive for several reasons. Firstly, technology—which we define as the application of scientific knowledge, materials, techniques, systems, methods of organisation and the use of electronic and mechanical devices—is ubiquitous in contemporary criminal justice, as it is in many other spheres of human activity.2 Therefore the range of types of technical devices that we might write about is extremely wide.
Secondly, as Marx suggests, throughout history crime control has been a motor for technological innovation in many apparently unrelated areas; therefore the boundaries of the field are fuzzy. Thirdly, the range of technological applications in the criminological field is incredibly wide and includes the management and communication of information, physical defence against crime, surveillance, public order maintenance, crime prevention and detection, criminal justice administration, and punishment. We have, it seems, carved out an impossibly wide brief and can do little more here than to provide a descriptive overview of the technological applications studied by criminologists, to describe the legal framework within which crime control technologies are developing, to raise some questions about the ways in which technology is changing the criminal justice system as a system, and to make some suggestions for inter-disciplinary research.3
1 K Marx, ‘Theories of Surplus Value, Addendum 11. Apologist Conception of the Productivity of All Professions’, reprinted in D Greenberg, Crime & Capitalism (Philadelphia, PA, Temple University Press, 1993) 53.
2 P Grabosky, ‘Technology & Crime Control’ (1998) Trends and Issues in Crime and Criminal Justice (Australian Institute of Criminology); Criminal Justice Matters special issue (58) on Crime and Technology.
The Criminological Context
Two changes in the criminal justice system (CJS) provide a backdrop to recent technological innovations in this field.
The first is the emergence of a ‘risk management’ or ‘actuarial approach’ to the regulation of crime.4 This new approach reaches beyond the boundaries of the traditional CJS and transcends the entire social system in ‘the risk society’, linking CJS agencies with other institutions such as the health service, education, housing and the insurance industry.5 The central precept of an actuarial approach to criminal justice is that the system should be less concerned with traditional punishment based on ‘downstream’ or ‘after the fact’ concepts such as retribution and rehabilitation, and should instead manage the risks presented by the disreputable, dangerous and disorderly, using ‘upstream’ or ‘pre-emptive’ techniques of disruption, control and containment. The idea of risk management is linked to the ‘precautionary logic’6 of the so-called ‘Bush doctrine’, which posits that the state should seek positively to enlarge freedom and security by intervening in ways that pre-empt wrongdoing, whether it be from hostile states, terrorists, serious organised-crime groups or ‘anti-social’ young people.
3 One limitation of our research should be clear from the outset—we are not concerned here with cyberspace, but with ‘real space’ technologies.
4 Malcolm M Feeley and Jonathan Simon, ‘The New Penology: Notes on the Emerging Strategy of Corrections and its Implications’ (1992) 30 Criminology 449–74.
A second important trend is what some critics have referred to as ‘populist punitiveness’7 or ‘authoritarian populism’.8 In its simplest terms this expresses a tendency—evident since the late 1970s and amplified since the mid-1990s—for politicians to ‘talk tough’ on ‘law and order’ in the pursuit of electoral advantage. This has direct practical implications for a more punitive approach to crime, disorder and anti-social behaviour. It entails a shift within the CJS away from
5 U Beck, Risk Society: Towards a New Modernity (London, Sage, 1992); RV Ericson and K Haggerty, Policing the Risk Society (Oxford, Oxford University Press, 1997).
6 RV Ericson, Crime in an Insecure World (2006) 38–9.
7 AE Bottoms and P Wiles, ‘Environmental Criminology’, in M Maguire, R Morgan and R Reiner (eds), The Oxford Handbook of Criminology, 2nd edn (Oxford, Clarendon Press, 1997) 305–59.
8 S Hall, Drifting into a Law and Order Society, Human Rights Day Lecture (London, The Cobden Trust, 1980).
‘due process’ and towards a ‘crime control’ ethos.9 As such, there is less concern with the due process protections in law enforcement and the administration of justice, the presumption of innocence, and the minimisation of intrusion, coercion and intervention in the lives of the ordinary citizen. Instead, the focus shifts to techniques of proactive crime control and preventive detention, the presumption of risk and the maximisation of knowledge about, and intervention in, the lives of citizens, especially those considered to pose a risk. Actuarial justice arises from a shift in thinking in which crime is no longer viewed as an aberration but rather as a normal condition of late modern society, and therefore all citizens come under suspicion.10 As a result, technologies that were once restricted to heightened security locations such as airports are now deployed throughout the social fabric. Taking the two trends together we can see changes in the governance of crime, including statistical risk assessment, defensive and pre-emptive crime prevention strategies and a sharp increase in levels of coercion, punishment and control.11 In this process, technology has been described as a ‘force enabler’12 increasing the capacity for surveillance, coercion and control.13 Although levels of crime in the United Kingdom have fallen significantly in recent years, it seems likely, for several reasons, that there will be a continued push towards increased security.
First, the emergence of a ‘new security agenda’—including combating serious organised crime and terrorism—implies that the scale and potential impact of blurred criminal/military security threats is unprecedented. Second, there is a renewed emphasis on serious violent crime, with politicians and police coming under increased pressure from tabloid newspapers and a vocal victims’ movement. Third, there is a new emphasis on sub-criminal ‘anti-social behaviour’—including children hanging around on street corners, public drunkenness and ‘neighbours from hell’—that is believed to require innovative control methods. All of these trends have caused politicians to refer to the contemporary CJS as being ‘utterly useless … to get on top of 21st century crime’14 and to conclude that a new approach is required to ‘rebalance the criminal justice system in favour of the law abiding majority’.15 Technology will be a key driver in this process. Public discussion in this area generally takes for granted that new technologies will deliver enhanced crime reduction and the safer society that justice and
9 HL Packer, The Limits of the Criminal Sanction (Stanford, CA, Stanford University Press, 1968).
10 D Garland, The Culture of Control: Crime and Social Order in Contemporary Society (Oxford, Oxford University Press, 2002).
11 J Simon, Governing Through Crime (Oxford, Oxford University Press, 2007).
12 Report of the Group of Personalities in the Field of Security Research, ‘Research for a Secure Europe’ (2004), 4; available at accessed 13 January 2007.
13 N Christie, Crime Control as Industry: Towards Gulags, Western Style, 3rd edn (London; New York, Routledge, 2000) 132–3.
14 T Blair’s speech at No 10 Downing Street to launch the ‘respect agenda’, 10 January 2006: accessed 13 January 2007.
15 Home Office, ‘Rebalancing the Criminal Justice System in Favour of the Law-Abiding Majority’ (July 2006), accessed 23 May 2008.
54 Ben Bowling, Amber Marks and Cian Murphy
security ministers promise. This may or may not be the case, and assessing the effectiveness of crime control technologies falls outside the scope of this paper. Instead, our rather different and more modest goal is to describe the range of technologies used in the crime control sphere and to raise some research questions that arise from their use. The key ethical issue is this: if human behaviour is to be managed and controlled by technology, what procedural safeguards and ethical limitations exist, or can be put in place, to regulate emerging forms of technologically driven crime control?

Applying Technology to Crime Control: Towards a Typology

This paper aims to look at the application of technology and scientific knowledge across the entire crime control 'system'. To this end, we have developed a typology (Table 3.1) that categorises the varied ways in which technology has been applied to crime control, exploring its various goals and functions, the organisations that use it and the legal framework that governs it. One way to approach this survey would be to take a specific technological device or system and examine its use in various different settings in the crime control apparatus. For example, the video camera, hooked into a closed circuit television system, is used to watch over public space or monitor people entering a building, for traffic control and congestion charging, for surveillance of suspects and to capture intelligence, to provide evidence in criminal trials, and as a means to observe those held in police or prison cells. Looking at the technologies is fruitful—and we have included a column in our typology to set these out—but we are less interested here in the technologies themselves than in their practical applications to specific spheres of crime control.
This focuses attention on specific practices of crime control, criminal justice and security—the everyday activities and tasks involving the use of scientific knowledge and technology. These include the practice of watching television screens, listening to microphones and tape recordings, collecting physical, biological and chemical samples in the field and analysing them in the laboratory, and installing and using technical equipment on the beat, on prison landings, or in the community. The central analytical distinction in our typology is the different ways in which technology has found communicative, defensive, surveillant, investigative, probative, coercive and punitive applications. We start our survey with the use of information and communication technologies (ICT) in crime control. These enable information to be shared within and between criminal justice and other institutions, thereby linking the police officer on the beat with court, prison, probation and other databases, and also expand the capacity of criminal justice agencies to communicate with the public. Of course, ICT operates across the other applications; it is not a cog in the wheel of crime control but the grease that lubricates the whole machine.

Table 3.1. Applications of Technology to Crime Control

Application | Goals and functions | Users | Specific technologies (examples) | UK statutes
Communicative | To collect, store and share information within and between criminal justice agencies and with the public | Criminal justice and security agencies, private companies | Computer networks, databases, Internet, mobile phone, PDA, crime prevention information, mass media, television | IoCA, RIPA, FoI, DPA, HRA
Defensive | To create physical barriers and architectural design to defend people and property from crime | Local authorities, private companies, individuals | Locks, bolts, alarms, gates, fences, barbed and razor wire, anti-climb paint | HRA
Surveillant | To observe with a view to providing security and preventing crime and disorder | Police, MI5, MI6, GCHQ, SOCA, local authorities, private companies; prisons | CCTV, sniffer animals, ion scan, computer scan, ID cards, loyalty cards; x-ray, biometrics, RFID, GPS tracking, ANPR | RIPA, HRA, Police Act 1997, SOCPA, DPA
Investigative | To collect, store and analyse information to detect crimes committed (reactive), prevent crimes in prospect (proactive) and secure evidence to prosecute or eliminate a suspect from an inquiry | Police, SOCA, local authorities | Surveillant technologies; interrogation techniques, DNA, fingerprints, ballistics, forensic chemistry, physics, biology, pathology, psychology and psychiatry | RIPA, PACE, HRA
Probative | To convict the guilty and acquit the innocent | Courts, CPS, police, forensic science services, defendants, victims, lawyers | All surveillant and investigative technologies | PACE, HRA
Coercive | To use force to maintain order and control and to detain suspects and those accused of crime | Police, prisons | Handcuffs, sticks, firearms, less lethal weapons (sprays, nets, plastic bullets, Tasers etc), odour and sound weapons | PACE, HRA
Punitive | To punish wrongdoers with the purposes of retribution, deterrence, incapacitation or rehabilitation | Prison, probation | Secure accommodation, corporal punishment, locks, electronic monitoring, incapacitation, execution, drug testing | CJA, Prison Act, HRA, HSWA

Key: CJA = Criminal Justice Act 2003; DPA = Data Protection Act 1998; HRA = Human Rights Act 1998; HSWA = Health and Safety at Work Act 1974; IoCA = Interception of Communications Act 1985; PACE = Police and Criminal Evidence Act 1984; RIPA = Regulation of Investigatory Powers Act 2000; SOCPA = Serious Organised Crime and Police Act 2005

We next examine defensive applications—fences, locks, bolts and other complex mechanical devices—used to secure individuals and their possessions and to protect buildings and other spaces, often by private individuals and organisations as well as local authorities. Surveillant applications involve the use of technologies to observe people and places. While in the past surveillance was targeted at specific individuals, today surveillance targets everyone.
Contemporary surveillance extends beyond the capacity of the human eye, using infra-red night vision, listening devices, chemical sensors and the like to monitor financial transactions, social networks, and the movement of people through transport networks. These technologies straddle the criminal justice system and the private sphere, being used by the police but also by businesses and local authorities. Investigative applications include collecting evidence for the purpose of identifying suspects in specific crimes (reactive investigation) and, increasingly, in 'proactive investigation' of crimes in prospect. Investigative applications are more intrusive, since they involve searching a person, their home or possessions; collecting bodily samples from the mouth or hair for DNA testing; fingerprinting; photography and the seizure of personal items; and because failure to comply can amount to a criminal offence. This brings us to probative applications, in which science and technology are used in the examination of evidence in criminal trials. Crucially, probative applications draw largely on material acquired from investigation and surveillance applications and, as suggested above, the technology in use may remain constant as the application shifts. For example, CCTV images used to identify suspects are now used in a courtroom setting as a source of evidence on which to determine guilt or innocence. Coercive applications include those undertaken by the police to control crowds, to effect an arrest and to control people in custody. Devices range from handcuffs, bodybelts, manacles and other restraints to truncheons, sticks and firearms, as well as chemical, electric and sound weapons. Punitive applications are very diverse, not least because the purposes of punishment—retribution, deterrence, incapacitation and rehabilitation—are themselves diverse.
Technologies applied to punishment range from architectural design (from the panopticon to the supermax prison designed to hold the convicted population) through electronic monitoring used to 'punish in the community', to punishment of the body and execution. Like any typology, ours has grey areas and tensions at its boundaries. We have chosen to focus on general applications rather than specific technologies on account of the tendency for the same technology to be used in a range of applications. More problematic for our typology are the tensions between different applications. For example, if coercion involves the infliction of physical pain—when a police officer uses a baton or a Taser to control an uncooperative suspect—might this be considered a form of informal 'punishment'? We therefore offer this typology not as a definitive statement of hard and fast categories of crime control applications, but as a means to generate and explore hypotheses about crime control as a whole.

The Legal Context: Civil Liberties and Human Rights

The use of technology in crime control has long given rise to concerns for individual liberty.16 In Britain, liberty has historically been encapsulated in the maxim that an individual may do anything except that which is proscribed by law. In the latter half of the twentieth century, this formula led to an erosion of civil liberties and increased police powers.17 The trend of eroding liberty was reversed, at least temporarily, when the Human Rights Act 1998 (HRA) was commenced in 2000. It provides for the incorporation into British domestic law of the European Convention on Human Rights (ECHR).
The ECHR enumerates certain rights which are considered to be central to the 'common heritage of political traditions, ideals, freedom and the rule of law'.18 Widely heralded as a milestone in British constitutional history, the HRA's effectiveness in securing respect for human rights is the subject of ongoing debate.19 The ECHR provides a basic framework for governance in accordance with human rights. Art 1 enshrines a general obligation to 'secure to everyone within [a state's] jurisdiction the rights and freedoms' defined in the Convention. Arts 2–14 set out those rights, and the limitations upon them. While certain aspects of the Convention are absolute,20 other rights are limited and, in extreme circumstances, can be derogated from entirely. These include the rights to liberty and security; respect for private and family life; freedom of thought, conscience and religion; and freedom of expression, assembly and association. The formula for limitations on these rights is clear: a limitation must serve a legitimate purpose, be prescribed by law, and be necessary in a democratic society. The content of the rights is elaborated by judgments of the European Court of Human Rights (ECtHR), and by domestic judges applying the provisions of the HRA. Despite providing a standard against which public (and private) action can be measured, the HRA and ECHR system has a number of limitations as an effective control on state action and as a source of clear guidance to states concerning what is permissible. First, the standards of the Convention are broadly drafted. Though 50 years of Strasbourg jurisprudence, and a decade of familiarity with the HRA, have improved understanding of these standards, they remain vague in the absence of case law on specific issues.
Second, where it exists, the case law is often fact-driven and it can be difficult to extrapolate broad legal norms from judgments in individual cases. Third, a declaration of incompatibility under the HRA does not always make clear what needs to be done to make law compatible and does not, of itself, change the law. This can lead to an issue ping-ponging between courts and legislature, especially where the government only grudgingly and half-heartedly addresses the matter. As a result, to properly regulate technologies that regulate, the ECHR standards must be taken as a starting, rather than finishing, point. The broad rule of thumb—that an infringement of rights must be in the pursuit of a legitimate purpose, prescribed by law, and necessary in a democratic society—needs to be fleshed out and applied to each individual technological innovation. This 'fleshing out' has occurred to a greater extent in some areas than others. For example, in respect of some forms of surveillance, the requirement of compliance with the ECHR prompted changes in the legislative framework of the intelligence and security services even before the HRA was passed.21 The place of the criminal law, as the last resort in the control of state action, should also be recognised.
16 Surveillance technologies, most famously foreseen in Orwell's Nineteen Eighty-Four, are at the forefront of popular consciousness when it comes to crime control.
17 KD Ewing and CA Gearty, Freedom Under Thatcher (1990); see also KD Ewing and CA Gearty, The Struggle for Civil Liberties (2000).
18 Council of Europe, European Convention for the Protection of Human Rights and Fundamental Freedoms (1950), Preamble.
19 Compare CA Gearty, Principles of Human Rights Adjudication (2004) and KD Ewing, 'The Futility of the Human Rights Act' (2004) PL 829.
20 The right to life, the prohibition on torture and slavery, and the principle of no penalty without law.
The ability to use force has been described as the core of the police function.22 Where force is used in grievous breach of the standards of reasonableness, legal action under the criminal law cannot be ruled out. Murder, assault, battery and criminal damage to property are all charges that might be levelled against law enforcement officers who are empowered to use violence in carrying out their duties. However, two hurdles make the use of the criminal law unlikely. First, as the de Menezes case demonstrates, even in the most shocking cases of police error or misuse of power, criminal prosecutions are unlikely for law enforcement officers acting in an official capacity. Second, and most pertinently, much of the deployment of technology is far less brutal, and much more insidious, and so does not fall within the framework of the criminal law. A related point concerns the law of evidence and probative applications of technology. Whatever oversight is provided on the use of technology before a defendant faces charges in court, once technology is used to provide evidence in the courtroom, the veracity of that information becomes key to providing a fair trial. Thus, when technology is used—intercepted communications, DNA evidence, and even psychological or neurological expertise—fundamental principles of criminal justice require that the law of evidence is developed sufficiently to allow the accused to challenge evidence and to bring contradictory evidence forward. As is made clear below, the law of evidence can lag behind technological advances, leaving those accused of crimes at a real disadvantage in presenting their case. In examining legal regulation, as with other aspects of this paper, surveillant technologies come to the fore. As Judge Pettiti put it, 'the mission of the Council of Europe and its organs is to prevent the establishment of systems and methods
21 H Fenwick, Civil Liberties and Human Rights (2007) 1010–12.
22 E Bittner, 'Florence Nightingale in Pursuit of Willie Sutton: A Theory of the Police', in H Jacob (ed), The Potential for Reform of Criminal Justice (Newbury Park, CA, Sage, 1974).
that would allow "Big Brother" to become master of the citizen's private life'.23 The invocation of Orwell highlights the prominence of the idea of a 'surveillance state' in the public consciousness. However, it is necessary to remember that 'Big Brother' is not just watching—he is also tooled up and on the beat. When examining these other technological advances, the dearth of legal regulation should cause concern. Having provided an overview of the types of technology, their applications to various spheres of crime control, and a brief examination of the general criminological and legal issues that arise, we now look in closer detail at specific applications in the field.

Information and Communication Technology in Crime Control

As in many other areas of human activity, one of the fastest growing applications of technology to criminal justice and security is in the sphere of information and communications. Information or intelligence has sometimes been referred to as the 'lifeblood' of policing, and the ability to handle information—such as the names and details of suspects, defendants and prisoners—lies at the heart of attempts to develop an effective and efficient criminal justice system. Since the mid-1980s strenuous attempts have been made to computerise case management systems, with the explicit goal of using technology as a means to create a single system out of the various criminal justice organisations. As Chief Constable Peter Neyroud, Chief Executive of the National Policing Improvement Agency, points out, new technologies are making significant changes to the functioning of the criminal justice system.24 One example is the linking up of databases.
The National Strategy for Police Information Systems (NSPIS) will create a new single case system linking police, court and prison databases.25 A unique identifying number will be allocated at the point of arrest and, without the need for re-keying data, will be carried through an offender's time in custody and linked to the Police National Computer (PNC). At the same time, police databases—which have proliferated from a single one a decade ago to 40 today—will be linked up.26 Stimulated by the failures to share information identified by the Bichard Inquiry, a Police National Database (PND) is being created that will link together such lists as fingerprints (LIVESCAN, IDENT1), facial images (FIND) and the national DNA database (NDNAD). The collection of information to be loaded onto these systems is being enabled through new legislation that allows the collection and retention of DNA from arrestees whether or not they are subsequently convicted of any offence, and through new systems of data collection such as the LANTERN hand-held fingerprint reader.27 Integrated criminal justice data will also be directly available to front-line police officers through a hand-held personal digital assistant. The arguments for this development are clearly put. The current collection of criminal justice organisations is 'the most unsystematic system anyone has ever worked in'28 and requires reform to function properly.
23 Malone v United Kingdom (1984) 7 EHRR 14, concurring opinion of Judge Pettiti.
24 P Neyroud, 'Joined Up to Face New Challenges: The Criminal Justice System and Change in the 21st Century', speech given at the Centre for Crime and Justice Studies New Developments in Criminal Justice seminar series, 24 September 2007, available at accessed 23 May 2008.
25 See accessed 7 January 2008.
26 Neyroud, above n 24.
Gaps in the collection and sharing of data provide criminals with space within which to work, and law enforcement requires re-tooling to prevent them from doing so. The new systems promise crime-fighting effectiveness, resource efficiency, speed and the reduction of administrative burdens. The government imperatives are also clear—to show significant reductions in crime (15%), to increase the number of Offenders Brought To Justice (OBTJ) by 1.25 million, and to reassure the public.29 Communicative technologies sit slightly awkwardly with the other applications, since they have no claim to be specific to crime control and cannot be seen as a purpose or function of criminal justice (unless the reader subscribes to the view that the criminal justice system serves not to control, but to know).30 There is also an overlap with other spheres. For example, communication is an important part of the process of punishment. The recent introduction of 'talking CCTV' invokes the use of softer forms of punishment such as public admonishment and the issuing of warnings. Such informal social control mechanisms based on communication have always existed; now they are mediated through technology and controlled from a distance.

Defence against Crime31

Technologies to defend people and places against crime include locks, bolts, gates, defensive walls, barbed and razor wire, 'smart water', climb-proof paint, private home alarms and personal alarms, designed to physically keep out intruders and warn of their impending arrival. In response to innovations in crime, and with the development of the field of 'crime science', a range of new 'defensive technologies' has been developed.
For Pease, this has brought about a recognition that 'the scope of science and engineering relevance is potentially much wider than as a means of intervening after the fact' and that crime control must therefore shift away from its reactive stance and adopt a more proactive approach to crime reduction.32 The emerging discipline of crime prevention took off seriously in the late 1970s, with its theoretical and practical base driven from within the Home Office. Criminological research using modified ideas of rational choice argued that the criminal event could be identified where specific conditions converged—the presence of a suitable target, a motivated offender and the lack of appropriate guardianship.33 The challenge for crime prevention technologists was therefore to engage with the offender's risk calculation through initiatives to increase the effort required to attack the target, to reduce the rewards, and to increase the risks through improved guardianship.
27 For a comprehensive account of the historical development and ethical implications of the use of bioinformation in the investigation and prosecution of crime, see Nuffield Council on Bioethics, The Forensic Use of Bioinformation: Ethical Issues (London, September 2007).
28 Neyroud, above n 24.
29 Neyroud, above n 24.
30 M Foucault, Discipline and Punish (London, Allen Lane, 1977).
31 This technological application is referred to by others as 'crime prevention' or 'crime reduction': K Pease, 'Crime Reduction', in M Maguire et al (eds), The Oxford Handbook of Criminology (Oxford, Clarendon Press, 2002); K Pease, 'Science in the Service of Crime Reduction', in N Tilley (ed), Handbook of Crime Prevention and Community Safety (Cullompton, Willan, 2005). We have chosen 'defence' as a more precise description of these applications, since other applications could also be seen as in pursuit of crime prevention or crime reduction.
A range of crime prevention techniques flowed from this, such as 'target hardening' through the development of car alarms, steering locks, property marking and LoJack tracking systems. Recent developments include the creation of a Centre for Design against Crime at Central Saint Martins College of Art and Design, which seeks to design criminal opportunities out of products—the 'crime free car', for example—and the Jill Dando Institute of Crime Science at University College London, which aims to bring together 'politicians, scientists, designers and those in the front line of fighting crime to examine patterns in crime, and to find practical methods to disrupt these patterns'.34 Most of the examples in this category aim to reduce crime through the imposition of barriers and other mechanisms of target hardening and security defences. There is now a wide range of ways in which scientific methods have been used to shape environments with security and crime prevention in mind: gated communities, the creation of 'defensible space', benches that cannot be lain upon, and concrete street furniture and paving that prevent skateboarding. Such technologies raise questions about the boundaries between defensive, coercive and punitive technologies, as well as the role of private security in policing public spaces and the need to ensure adequate regulation of technologies deployed by private persons as well as state actors.

Surveillance

The term 'surveillance' was once readily identified with specific targeted police operations against suspected criminals. We use the term 'investigative' to describe this type of operation—to which we return below—and treat surveillance as the practice of monitoring the general population. Contemporary surveillance is characterised by its lack of particularity, in that it is an intelligence-gathering tool used before the relevant law enforcement agency has any suspicion that a particular individual is involved in crime. It is therefore not investigative in the
It is therefore not investigative in the 32 Pease, ‘Science in the Service of Crime Reduction’, n 31 at 181. 33 M Felson, Crime and Everyday Life (Thousand Oaks, CA, Sage, 2002). 34 accessed 23 May 2008. 62 Ben Bowling, Amber Marks and Cian Murphy traditional sense, but concerned with general security provision by offering a deterrent to misbehaviour through an increased risk of detection. The power of surveillance using devices such as cameras, microphones, computers, automated car registration plate recognition, store loyalty cards, travel cards, phone-taps, and satellites has expanded dramatically in recent decades. It is in this area that the most research has been conducted, with technological advances and the emer- gence of a surveillance society being treated almost synonymously by some critics. A great deal of surveillance activity is conducted by the police, local authorities, providers of privately owned public space (such as shopping centres) and private companies. The United Kingdom leads the way in the roll out of CCTV with more than four million cameras watching public spaces. The term ‘new surveillance’ has been used by Gary Marx to describe technical means of extracting personal information that go ‘beyond what is offered to the unaided senses or voluntarily reported’, enabling the collection of information that might reasonably be expected or assumed to be confidential.35 A few key examples should serve to illustrate the type. First, the ‘technological strip search’: millimetre wave technology permits its operators to see though clothing with the aim of enabling its operators to detect illicit substances or firearms concealed on the person. This technology is presently being deployed in a variety of locations in the absence of legal guidelines. 
Second, the 'technological property search': a number of police forces are using portable thermal imaging cameras to create images of the inside of households, with the aim of identifying indoor cannabis cultivation. No publicly available legal guidance exists in relation to their use. Third, 'mobile trace detection': portable technologies such as ion-track itemisers and 'sniffer dogs' are used to identify prior contact with illicit substances. Mobile police patrols today have metal detectors and x-ray screening for weapon detection, ion-track detection machines for drugs and explosives, and sniffer dogs for DVDs, cash, drugs and explosives, with which to conduct surveillance on the public. Citizens are not under any obligation to co-operate with surveillant technologies, and non-compliance is not a valid ground for suspicion.36 However, outside the confines of the criminal justice system, citizens are expected to comply with increasing security measures. In an atmosphere of 'nothing to hide, nothing to fear', the boundaries between surveillant and investigative applications of technology become increasingly blurred. Take, for example, the commuter faced with a sniffer dog at a railway station. He or she is under no obligation to co-operate with these operations, but evasive conduct is being treated as grounds for suspicion justifying an investigative stop and search. It is important to determine in which way the technologies are being used (surveillant, investigative, or even coercive or punitive) in order to juxtapose the relevant regulatory frameworks and safeguard the rights and freedoms of citizens. Surveillant technologies clearly engage the right to privacy.
35 GT Marx, 'What's New about the "New Surveillance"? Classifying for Change and Continuity' (2002) 1 Surveillance & Society 9–29, at 12; available at accessed 23 May 2008.
36 Rice v Connolly [1966] 2 QB 414.
Even if this application of technology is considered benign and passive, we are drawn to human rights jurisprudence to consider whether the degree of intrusion is justified by the purpose it serves. In addition to the sensing technologies, new methods of data storage and analysis are changing the way in which 'dataveillance' is carried out.37 It is now possible, using pattern-matching techniques, to search databases for number plates, faces or gaits, and complex algorithms can be used to detect 'unusual patterns' such as a person walking in circles or sitting in the same position for long periods. As in many areas of technological development, surveillance practice has developed more rapidly than the ethical and legal frameworks, or even a practical understanding of what equipment is in use and what its implications are. According to the Home Office, until 1 March 2000 'there was no statutory basis for a systematic control of CCTV surveillance in public spaces'.38 Now, however, 'public space surveillance' is regulated by the Data Protection Act 1998 (DPA). Under the DPA, CCTV operators must have a 'legitimate basis for processing images', that is, a valid reason to have CCTV in operation (specifically the prevention and detection of crime); the system's operation must be registered with the Data Protection Commissioner (DPC); and the public must be made aware of the system by appropriate signage placed at the entrance to a 'gated' CCTV area. There is further regulation of covert systems under RIPA. The HRA requires that CCTV in public space be necessary, proportionate to the threat for which it is required, and accountable. The emergence of new law in this area has led to claims that surveillance is highly regulated. However, the current complexity of statute law in this area, and the absence of test cases, leaves the degree of real regulation unclear.
Crime Investigation

The application of science and technology to the practice of criminal investigation has a long history and is now very extensive. There is at present a huge amount of enthusiasm for the enabling qualities of forensic science—above all in the field of DNA testing—for improving the effectiveness and efficiency of the investigative process. Advances in biology (eg forensic pathology), chemistry (eg mass spectrometry) and physics (eg ballistics) all lend themselves to the detection of crime. RIPA legalised and regulates investigative practices—such as breaking and entering to plant intrusive surveillance devices, paying informers and authorising police under-cover operations—that had been part of routine practice since the birth of policing. New technologies, first used in the surveillance of the general population, can also be deployed as investigative tools. This shift in application can result in a blurring of the boundaries between surveillance and investigation. The application of technology to the traditionally investigative power to conduct a personal search is particularly interesting. Before the emergence of 'new surveillance' technologies, a police officer could only determine the contents of a person's personal property (carried in their pocket, handbag or briefcase) or the contents of their home by conducting a physical search. Unless consent was obtained from the person in question, the police would necessarily commit assault or trespass in conducting a search, and the police were therefore provided with powers of search where grounds for reasonable suspicion could be shown.
37 'Dataveillance is the systematic use of personal data systems in the investigation or monitoring of the actions or communications of one or more persons': R Clarke, Information Technology and Dataveillance, see accessed 11 January 2008.
38 See accessed 7 January 2008.
Grounds of reasonable suspicion are required before the stop and search power can be used (bar some specific legislative exceptions).39 This ensured a measure of protection for the privacy of citizens. Despite the legislative framework in place for regulating police powers of search, primarily the Police and Criminal Evidence Act 1984 (PACE), we do not have a statutory definition of what a ‘search’ is. A ‘search’ has traditionally been interpreted as involving physical intrusion that would amount to an assault or trespass in the event of the statutory conditions required for a search not being satisfied. It is not clear if we can rely on recent legislation to maintain the privacy protected under PACE in the face of the threat posed by new surveillance technologies. The extent to which new applications of technology by law enforcement blur the boundaries between surveillance and investigation was addressed in relation to property searches in the US Supreme Court case of Kyllo v United States.40 The police had aimed a thermal-imaging device at the appellant’s residence to detect heat emanations associated with high-powered marijuana-growing lamps. Based on the thermal-imaging information, police obtained a search warrant for the residence. The court held that when the police obtain by sense-enhancing technology any information regarding the interior of the home that could not otherwise have been obtained without physical intrusion into a constitutionally protected area, that constitutes a search. Kyllo used the traditional concept of trespass and its association with physical intrusion to justify expanding the concept of ‘search’ to include new surveillance techniques.
In concluding that use of the thermal-imaging device was a search, the court in Kyllo stressed that the: ‘Government [may not use] a device … to explore details of the home that would previously have been unknowable without physical intrusion.’

39 B Bowling and C Phillips, ‘Disproportionate and Discriminatory: Reviewing the Evidence on Stop and Search’ (2007) 70 MLR 936–61.
40 Danny Lee Kyllo v United States 533 US 27, 121 S Ct 2038, 150 L Ed 2D 94; 2001 US LEXIS 4487. The case can be contrasted with the finding by the Canadian Supreme Court that the use of infra-red imaging equipment to detect the growing of cannabis did not violate the right to privacy on the particular facts of the case: Tessling 2004 SCC 67, reported in CLR 2005 167–68.

Crime Control Technologies 65

An interesting application of neuroscience and psychology is the development of technologies that purportedly enable the detection of deception. Two examples can be mentioned briefly. First, Voice Risk Analysis can pick up changes in the voices of telephone callers under stress, changes which can be taken as indicators of lying. It has been used by local authorities to detect fraudulent benefit claimants.41 A second example is the use of high-technology imaging methods such as functional MRI scanners and electroencephalographs to identify attempts to lie during interviews, and drugs that ‘reduce conversational inhibitions and the urge to deceive’. Both examples have been proposed as alternatives to the use of interrogation techniques that would be characterised as torturous.42 Advocates see such methods as having moral and practical advantages over those traditional interrogation techniques, noting that the latter ‘depend overwhelmingly on coercive combinations of fear, disorientation, and pain’.
What would be the objection, they ask, to forcing suspects to take ‘a hypothetical pill, whose only side effect is slight nausea and a headache, that makes anyone who takes it tell the truth for 90 minutes’?43 Noting the human rights objection that forcing the pill on prisoners would violate their right not to incriminate themselves, they argue that these ‘minimally invasive interrogation options’ would not cross ‘a hallowed legal line’ since the state can already take blood, fingerprints and DNA for testing. Galligan considered this argument in the context of the right to silence debate:

[S]uppose that the police could find out all they need to know by plugging the suspect into a machine; the process is painless but it reveals everything about the suspect—his history, actions, thoughts and desires. That would strike at the very centre of the zone of privacy. But there is no difference in principle between using the machine and requiring the suspect to disclose the same information through speech. The means differ but the objection is the same: the police have no claim on direct access to that information and it follows that they have no claim on the suspect to lower the shield of privacy.44

41 H Mulholland, ‘Lies, Damned Lies and Lie Detectors: Introducing Lie Detector Tests for Benefit Claimants’, The Guardian (5 April 2007).
42 H Rishikof and M Schrage, ‘How Technology Will Eliminate Torture’, Slate (18 August 2004). See accessed 7 January 2008.
43 Ibid.

Science has ‘rendered visible much that was previously imperceptible’45 and even the technologies used to search persons can be used to obtain more detailed information than would have been available from a physical search; for example, prior contact with illicit substances is purportedly detectable in the personal odour of a person. Hair testing can provide information on patterns of drug use over long periods of time.46 Fingerprints can be analysed for lifestyle information such as
smoking habits.47 Biometrics and illicit substance detectors have the potential to disclose details that would previously have only been made available through questioning. Other surveillance technologies can be used to pinpoint whereabouts, obviating the need to ask suspects to disclose them. Technological developments in law enforcement raise difficult questions that are not confined to the right to silence debate. The idea that our bodies can be reduced to a means by the state—that the human body itself can be a crime control technology—offends human rights at its very roots in human dignity.

44 DJ Galligan, ‘Right to Silence Reconsidered’ (1988) Current Legal Problems 80, 88–90.
45 I Kerr and J McGill, ‘Emanations, Snoop Dogs and Reasonable Expectations of Privacy’ (2007) 52 Criminal Law Quarterly 392–432 at 393.
46 T Mieczkowski, ‘New Approaches in Drug Testing: A Review of Hair Analysis’ in ED Wish (ed), Drug Abuse: Linking Policy and Research, Annals of the American Academy of Political and Social Science, vol 521 (London, Sage, 1992) 132–50.

Technology in the Courtroom

The shift in the application of technology from surveillance and investigation to use in the courtroom is an interesting one, not least because of its far-reaching implications for justice and liberty. While being captured on CCTV or having a DNA sample taken as a consequence of being seen as suspicious may be irritating, unnerving, humiliating and invasive, the same technologies in the courtroom context take on a new significance, on which a person’s liberty (or even life) may well depend. Science is playing an increasingly important role in criminal trials.
A fundamental point is that made by Jane Hickman, a criminal defence lawyer and secretary of the Criminal Appeal Lawyers Association: ‘No one asks, “How far should we go with forensic evidence?” and it’s increasingly becoming the whole story in a trial.’48 In a recent decision, the Court of Appeal held that ‘If the outcome of the trial depends exclusively or almost exclusively on a serious disagreement between distinguished and reputable experts, it will often be unwise, and therefore unsafe, to proceed.’49 A lack of disagreement, however, is not necessarily an indication of reliability. Patrick O’Connor QC has identified a problem in the time lag between the emergence of a new science as prosecution evidence and the availability of any defence expert or of any scientific research to contest it.50 This problem may become more acute with the establishment of dedicated funding streams to focus science and technology attention on crime reduction, and with the perceived social control benefits from the aura of mystique surrounding new detection technologies:

There may be a ‘honeymoon’ period in which deterrence is greatest. Offenders will have a period of uncertainty about a new forensic evidence-based scientific detection device. This maximises offender uncertainty and hence deterrence.51

47 T Simonite, ‘Fingerprints Reveal Clues to Suspect’s Habits’, NewScientist.com news service, April 2006, accessed 12 January 2008.
48 A Wade, ‘Silence in Court: Forensic Science on Trial’, The Guardian (London, 3 October 2004) 9.
49 R v Cannings [2004] EWCA Crim 1.
50 Criminal Appeal Lawyers Association Conference Proceedings, October 2004, unpublished.
51 G Farrell, ‘Skunks, Cinnabar Moths and Smart Policing’ (1997) Police and Government Security Technology Journal 62–3.

The absence of an agreed protocol for the validation of scientific techniques prior to their being admitted in court has been described by the House of Commons
Science and Technology Committee on Forensic Science as ‘entirely unsatisfactory’.52 Problems arise when the significance or probative value of scientific findings has not been adequately researched, and courts have been criticised for admitting evidence that lacks a statistical basis for comparison. One example is drug trace evidence. In the case of Fleur, the defendant’s cars were examined for traces of drugs and traces of heroin were found amongst sweepings taken from them. There were no statistics from which any conclusion could be drawn as to how rare it is to find traces of heroin in sweepings from a car. Work has been done on the contamination of bank notes, which shows that almost every bank note in circulation picks up traces of cocaine, and a survey of seats in public transport revealed that 17 per cent tested positive for heroin. In Fleur’s appeal, the appellant argued, the prosecution conceded, and the Court of Appeal agreed, that in light of the lack of statistics with which to gauge the significance of the finding of drug traces in the defendant’s car, the evidence of it should never have been admitted. However, the Court of Appeal ruled that the admission of the evidence would not have affected the outcome of the jury’s verdict in Fleur’s case.53 The House of Commons Science and Technology Committee has identified an urgent need for research into how juries cope with forensic evidence.54 It might be that juries, like many laypersons, put too much faith in and emphasis on science, a phenomenon that has been dubbed ‘the CSI effect’55 because of the portrayal of forensic science as an infallible indicator of guilt in crime scene investigation television programmes. O’Connor points out that the principal danger with the police and prosecutors increasingly turning to scientific evidence is the false aura of mathematical certainty which surrounds it. This aura alone may override all other evidence and dictate the jury’s verdict.
In relation to facial mapping evidence, the Court of Appeal held that in the absence of a national database of facial characteristics, or an agreed formula on the probability of occurrence of particular facial characteristics, expert opinion based on facial mapping should not be admitted as identification evidence.56 While the Court of Appeal has been rightly applauded for this decision, it lends support to arguments in favour of establishing national databases of biometric information. This raises questions about privacy and the relationship between state and citizen, leading us back to Jane Hickman’s question: How far should we go with forensic evidence? In the actuarial paradigm, forensic findings are being used to punish persons outside of the criminal justice system, in the denial of state benefits or visitor rights to prisoners. What is the regulatory oversight in these fields? On account of the comparatively intrusive and coercive nature of the criminal justice system as a method of state control historically, a number of due process safeguards were sewn into the system. They include the presumption of innocence, the privilege against self-incrimination, the requirement of reasonable suspicion for the exercise of formal police powers, the principle of equality of arms, and no punishment without conviction. The principles have evolved primarily as a result of judicial initiative.

52 House of Commons Science and Technology Committee, Forensic Science on Trial, Seventh Report of Session 2004–05, p 76.
53 R v Fleur and Sinitchiyski [2004] EWCA Crim 2372.
54 House of Commons Science and Technology Committee, Forensic Science on Trial, Seventh Report of Session 2004–05, p 73.
55 KM Stephens, ‘The Changing Role of Forensic Science’ (2005) 13 Police Futurist.
56 R v Gray [2003] EWCA Crim 1001.
Coercion

The problem of making people behave—managing crowds (disorderly or otherwise) and handling arrestees—has been a perennial concern facing police services since their nineteenth-century origins. It has been argued that the capacity to use coercive force goes to the very heart of the police mission,57 and it is for this reason that from the birth of policing, officers have had access to a range of coercive technologies designed to enhance their capacity to control. Among the ‘force enablers’ available to police are those designed temporarily to hold offenders between arrest and holding cell (such as handcuffs and body-belts),58 those designed to handle unruly crowds and individuals acting dangerously, including the less than lethal weapons (sticks, electricity, water, odour, toxic chemicals, baton rounds, nets, etc) as well as deadly weapons (from cutlasses to firearms) and technological adaptations of motor vehicles. The principal rationale for the use of ‘less than lethal weapons’59 is that when police are faced with an individual judged to be dangerous, such weapons could be used as a means to protect the public, the responding police officers and even the suspect without using deadly force. Many of these weapons were used previously in closed settings such as prisons. In addition to coercive technologies that use physical force, there are numerous other, more subtle, means of getting people to do what you want. Technological developments in the fields of sound and odour have been used to control people in public spaces. One low-intensity sound weapon, The Mosquito, is useful for this discussion. This instrument, which can be placed outside a shop, sends out a high-pitched buzzing sound over a range of 15–20 metres that only teenagers can hear.60 The manufacturer’s website boasts:

57 Bittner, n 22 above.
58 Remote Control Stun Belts are used on defendants in at least 100 US jurisdictions and by the US Bureau of Prisons.
The belts inflict eight-second shocks, sending 50,000 volts of electricity through the wearer’s body, causing defecation, urination and welts.
59 The definition of ‘less than lethal’ weapons is controversial and many observers regard the term as an oxymoron. A working definition is provided in the proceedings of the 4th European Symposium on Non-Lethal Weapons: ‘a new technology which enables forces to operate in new force scenarios without the traditional kill and damage of ordinary conventional weaponry’.
60 According to the manufacturer’s website, there is a very real medical phenomenon known as presbycusis, or age-related hearing loss, which, according to The Merck Manual of Diagnosis and Therapy, ‘begins after the age of 20 but is usually significant only in persons over 65’. It first affects the highest frequencies (18 to 20 kHz), notably in those who have turned 20 years of age. It is therefore possible to generate a high-frequency sound that is audible only to teenagers. Accessed 13 January 2008.

The Mosquito™ ultrasonic teenage deterrent is the solution to the eternal problem of unwanted gatherings of youths and teenagers in shopping malls, around shops and anywhere else they are causing problems. The presence of these teenagers discourages genuine shoppers and customers from coming into your shop, affecting your turnover and profits. Anti social behaviour has become the biggest threat to private property over the last decade and there has been no effective deterrent until now.

The blurb notes that The Mosquito has been acclaimed by the police forces of many areas of the United Kingdom and has been described as ‘the most effective tool in our fight against anti social behaviour’.61 The device raises concerns over both the right to privacy (bodily integrity) and discrimination on grounds of age.
Both rights are protected by the ECHR, and their infringement by a private device operating in the public domain has given rise to concern in Parliament and amongst human rights groups. Applying the rule of thumb, it is unclear how the requirement that an interference be ‘prescribed by law’ can be applied to a private undertaking acting on its own initiative. Though the product’s manufacturer claims that ‘our preliminary searches have found nothing to suggest that the use of the device is unlawful’,62 this does not of itself address the question of a positive legal basis for the infringement of rights. The human rights organisation Liberty cites the Environmental Protection Act 1990 as requiring local authorities to investigate ‘noise emitted from a premises’ when this noise constitutes a nuisance, and to issue an abatement notice if a nuisance is found. An accepted definition of ‘public nuisance’ is offered, into which the noise emitted by The Mosquito clearly falls.63 Information provided by biometrics feeds the capabilities of less than lethal technology. The key issue in relation to current advances is specificity.64 A report in 2000 from the Applied Research Center of the College of Medicine at Pennsylvania State University documents the growing interest in the development of ‘non-lethal techniques with a high degree of specificity, selectivity, safety and reversibility that would avoid production of a lasting impairment to the subject(s) or individual(s) activating the technique’.65 In relation to ‘specificity of wounding’, two authors writing in Military Review in 2005 claim that ‘[i]f we acquire a target’s genome and proteome information, including those of ethnic groups or individuals, we could design a vulnerating agent that attacks only key enemies without doing any harm to ordinary people’.66

61 accessed 23 May 2008.
62 accessed 13 January 2008.
63 See accessed 7 January 2008.

As with the investigative technologies discussed above,
it is clear that there is much work to be done in considering the ethical implications of such developments.

64 Bradford Science and Technology Report No 8, August 2007, p 37.
65 JM Lakoski, W Bosseau Murray and JM Kenny, The Advantages and Limitations of Calmatives for Use as a Non-Lethal Technique (Applied Research Laboratory/College of Medicine, Pennsylvania State University, 3 October 2000) p 2.
66 G Ji-Wei and X Yang, ‘Ultramicro, Nonlethal, and Reversible: Looking Ahead to Military Biotechnology’ (2005) July–August Military Review 75–80.

Punishment

The use of technology for the purpose of punishment has been uniquely creative throughout history. Scientists and engineers have lent their skills to the development of excruciating methods of punishment and the invention of an extraordinary variety of tools with which to inflict it—from the rack and thumbscrews to the birth of the prison, creative technologies of confinement and imaginative methods of execution. It is quite probable that all of the techniques of coercion described in the previous section could be used as forms of punishment. In some contexts sticks, guns and torches are used by police for the purposes of summary justice. The prison, itself a triumph of technologies of bars, locks, bolts and design, is modernising, with the growth of the US ‘supermax’ as the model of the technological future of the prison.67 Punishment has traditionally been the infliction of pain, for an act deemed to be an offence by a court, backed with the legitimate use of coercive force.68 The pains inflicted by punishment have included physical punishment, financial penalties and the deprivation of liberty, typically through prison, but also more recently through orders limiting freedom of movement and conditions requiring attendance or abstinence from alcohol or drugs backed by testing regimes.
In the past two decades, punishment has taken on new technological forms, the most significant of which is the electronic tagging of convicted offenders and other mechanisms for electronic monitoring. The deployment of such technologies is frequently justified on the basis of their deterrent value, blurring the boundary between prevention and punishment. The goal of deterrence is to discourage some form of behaviour through the fear or threat of some unpleasant consequence, or by offering a positive reward for compliance. In penology, deterrence theory is couched very much in terms of its function as a rationale for punishment, emphasising the idea that people are deterred from committing anti-social or criminal acts either through the memory of prior punishments or the imagination of possible punishment for future transgression. If deterrence takes the form of being barked at by ‘talking CCTV’ or a machine that emits a low-level irritating noise, to what extent can this be seen as punishment? Certainly the children pictured on the Mosquito ‘teenage deterrent’ supplier’s website seem to be experiencing pain as they hold their hands over their ears and run from the sound.

67 A Coyle, ‘Two Visions of the Post-modern Prison: Technocorrections’, unpublished paper, King’s College London International Centre for Prison Studies.
68 B Hudson, Understanding Justice (Buckingham, Open University Press, 2003).

Towards a Research Agenda

Our goal in this paper has been to delineate the range of applications of scientific knowledge and the invention of mechanical and electronic devices for use in the apparatus of crime control, seeking some theoretical unity by linking debates in penology and criminology with those in the field of regulating technologies.
With this in hand we have looked at examples in each sphere and have started the process of mapping connections between technologies and their different applications in crime control, and of thinking about the broader implications that emerge from an analysis of the unifying power of technologies across the system of crime control. We have raised some of these issues as we progressed through the paper and, by way of conclusion, set out what we think are some of the key areas for future research, which can be grouped under the headings of descriptive, evaluative, legal-regulatory and normative.

Descriptive research

Our first requirement is a baseline description of the crime control applications of emerging technologies. While there is now extensive research on the use of surveillance devices, both for the investigation and detection of specific criminal offences and of wider public areas, there is much less research in the criminological field concerning the use of technologies in the collection and analysis of evidence or for the purpose of coercion and punishment by police and prison authorities. To take just one example, there is (to our knowledge) no proper empirical study of the use of The Mosquito ultrasonic ‘teenage deterrent’ device. While sales of this device are now in the tens of thousands, little information exists on its short- or long-term impact on young people’s health and wellbeing. Without this preliminary descriptive work, normative and regulatory questions about freedom of movement and association cannot even be posed, let alone answered. Similarly, the use of police armaments and techniques of restraint and control has largely been untouched by criminologists. As technologies of all kinds are both applied to and driven by crime control, it is also imperative that researchers examine the ways in which particular scientific developments and devices could migrate from their original uses to new settings.
Therefore we need to examine scientific developments in physics, biology and chemistry to consider their impact on crime control industries. We also need to examine the way in which technology creates links within and between institutions. The collection, distribution and analysis of timely and accurate information may be important for the purposes of crime control, and information shared within or between criminal justice agencies may be an unquestionably good thing for the efficient and effective functioning of the criminal justice system (CJS)—and indeed between the CJS and other spheres of state activity. However, research on multi-agency working within the CJS has uncovered inter- and intra-organisational tensions arising from differences in culture and history and fundamental conflicts in institutions’ roles and aims. For example, what is the likely practical outcome when systems of care for ‘at-risk’ youth are ‘joined up’ with those agencies concerned with coercion and punishment? Primacy of purpose is likely to be claimed by the CJS, which seeks to ‘control’ crime—to the likely detriment of the youth involved.

Evaluative research

Crime control technologies are often claimed to be successful in terms of their goals, their effectiveness considered self-evident either on the basis of limited scientific trials or of the claims of their manufacturers. Taking the Mosquito again as an example, the manufacturers claim that this technology is one of the greatest advances in controlling the ‘eternal problem’ of teenage misbehaviour. However, these claims are based on anecdotal evidence used for product marketing. Instead of reliance on advertising sound bites, claims to effectiveness should be based on sound empirical research. Does the Mosquito in fact contribute to crime control? Are its effects sustained over time? Does the device merely displace anti-social behaviour?
Does it have any unwanted side-effects, such as defiance?69 Similarly, to what extent have other coercive devices actually contributed to the control of crime? Have the DNA databases contributed to crime reduction? Has CCTV contributed to greater safety? An evaluation would ideally include rigorous experimental and control conditions, in addition to qualitative research to explore the broader implications of these technologies. In relation to the probative applications of these technologies, we need research on the accuracy and error rates of the whole gamut of forensic biological, physical and chemical tests, and on how far they can be relied on in the courtroom. We need to be informed about the sources and consequences of errors—both human and technological.

69 Anecdotal evidence from one inner London housing estate is that the installation of the Mosquito to prevent young people from congregating in a covered car-parking bay was met with the reaction of tearing down the device, damaging roofing tiles and defecating in the area.

In tandem with researching benefits, we must also research costs. We need to know the financial costs of using such technologies across the CJS. At the moment, even attempting an estimate is impossible, but in our view it is a crucial evaluative step. The implementation cost of a United Kingdom ID card scheme has been estimated at between £5.4 billion and £19 billion. The cost of research alone is quite staggering. Based on recent figures from the Home Office Scientific Development Branch (HOSDB), the Transport Security Network (TRANSEC), the Police Information Technology Organisation (PITO—now the National Police Improvement Agency) and the Engineering and Physical Sciences Research Council’s Crime Technology Programme, the cumulative annual budget is around £400 million.70 A more
commercial approach to forensic science can be seen through arrangements by the police to procure forensic science services through a process of competitive tendering. At the same time, police forces are also increasing their own in-force forensic science capabilities. As a result of declining sales in defence equipment at the end of the Cold War, defence contractors have turned their attentions to the criminal justice market.71 The surveillance industry is one of the fastest growing sectors in the financial world,72 and the Home Office appears to view a future global market in forensic services in which the UK provides an increasing proportion of services to other countries and foreign companies have an ever more significant role in the UK.73 The privatisation of forensic science services is discussed at length in the seventh report of session 2004–05 of the House of Commons Science and Technology Committee, Forensic Science on Trial. Broader social costs of the development of crime control science and technology must also be addressed. These include the erosion of fundamental liberties such as freedom of movement, association and assembly. Technologies that infringe these rights should be proportionate and ‘necessary in a democratic society’. Therefore we must assess whether these technologies actually provide enhanced security, as well as their costs in lost liberty. Without this primary research, the contractual calculus cannot be made. It is also essential that any evaluative research considers the questions of fairness and equity.

70 HOSDB has a budget for 2006–07 of around £22.5 million and a staff of around 200 physicists, chemists and electrical engineers working on search and surveillance equipment; fingerprint, drug, explosive and weapon detection; video enhancement; body armour testing; and the development of less-than-lethal weapons. The Transport Security Network (TRANSEC), with departmental
responsibility for transport security across all forms of transport, has a budget of £16.8 million for 2005–06. The Department of Trade and Industry has allocated a budget (2006) of £7.5 million for research and development in sensing and imaging technologies in healthcare, security, crime control and environmental applications. The Police Information Technology Organisation (PITO), which provides information and communication technology solutions, maintains the Police National Computer, delivered the new Airwave digital mobile radio system for police services and develops biometric identification databases, had a budget of £363.59 million in 2005–06. The Engineering and Physical Sciences Research Council (EPSRC) launched its Crime Technology Programme in 2002 with an initial budget of £6 million to fund research projects.
71 S Wright, An Appraisal of Technologies of Political Control, Scientific and Technological Options Assessment (Luxembourg, European Parliament, Directorate General for Research, 1998) 40.
72 DM Wood (ed), A Report on the Surveillance Society (Wilmslow, Office of the Information Commissioner, 2006) 15.
73 House of Commons Science and Technology Committee, Forensic Science on Trial, Seventh Report of Session 2004–05, at 83.

While it can be argued that utilitarian goals of maximising crime control benefit for the majority would justify intrusion into the liberties of the minority, we need to think through the broader impact of technologies on vulnerable and minority groups. For example, it is clear that the use of DNA testing in relation to arrested offenders is having a disproportionate impact on minority communities. Rather than falling uniformly on the entire population, the DNA database contains a significantly higher proportion of the black population than of the population as a whole.74 This is a by-product of the use of the powers to stop, search and
74 In evidence to the House of Commons Home Affairs Committee on Young Black People and the Criminal Justice System, Baroness Scotland reported that 75% of young black males will soon be on the national DNA database. House of Commons Home Affairs Committee, Young Black People and the Criminal Justice System. Second Report of Session 2006–07, vol 1 (London, TSO, 2007). 74 Ben Bowling, Amber Marks and Cian Murphy arrest which impact disproportionately on ethnic minority communities. If the experience of DNA testing arising from arrest were reflected in the inequitable impact of other kinds of intrusive (eg facial images) or coercive technologies (eg lethal force), it would be extremely troubling if minorities were to bear the brunt of the technological developments. The observations of ethnic inequality apply equally to fairness across age, socio-economic status, geographic and other social divides. We need to be mindful of the predictable but unintended consequences of technological development. It is clear that the implementation of any technology has observable ‘side effects’ in the form of new social and psychological develop- ments. For example, it has been suggested that the rise of the ‘hoodie’ phenom- enon is a direct consequence of the use of CCTV. Hooded sweatshirts have been fashion items for many decades, however with the growth of surveillance, young people are increasingly choosing to wear their hoods up, and further attempts to conceal their identity include baseball caps, masks and gloves. This means that not only are young people’s faces invisible to those monitoring CCTV, but also to the ordinary members of the public with whom they share physical spaces. 
It can therefore be argued that an unintended consequence of the implementation of CCTV is the emergence of a generation of young people who routinely hide their faces from public view, the long-term consequences of which are untold but certainly include the generation of anonymity, fear and anxiety as a paradoxical and perverse result of systems designed to document and monitor identity. As the quotation from Marx at the beginning of this chapter suggests, innovation in crime paralleling innovation in crime control is a game of ‘cat and mouse’ in which advances in one will quickly be matched by advances in the other. Attempts to establish identity through CCTV, biometric passports and ID cards will be matched (perhaps even thwarted) by innovative manipulators of technological systems. The idea that technology will protect us once and for all from violence and dishonesty is an appealing myth, but a myth nonetheless.

Legal and regulatory research

The pace of technological change is outstripping our capacity to keep up with it, in terms both of psychological and social adjustment and of legal regulation. This, it seems to us, is related to the phenomenon described by Alvin Toffler as ‘future shock’, the unexpected and premature arrival of the future.75 Much of the science and technological gadgetry touched on in this paper is the stuff of a previous generation’s science fiction. The dreams (and nightmares) of science fiction writers of the nineteenth and early to mid-twentieth centuries are now becoming realities. With each passing year, as technology feeds upon technology, the capacity to go further with crime control technologies grows ever more profound. It seems to us that the human capacity to understand the process, to assimilate and to readjust psychologically and sociologically to these changes lags far behind the technologies themselves.

75 A Toffler, Future Shock (London, Pan, 1970).
The key aspects of these psycho-social changes in which we are specifically interested are the norms and values concerning autonomy, privacy and the power of the state to engage with, manipulate and intrude into those zones, most specifically in the field of legal regulation. Our brief survey of the legal regulation of crime control technologies suggests that in almost every sphere the legal frameworks have only very recently started to emerge. It is interesting to note that RIPA 2000 was put in place in the wake of the HRA 1998; before this regulation, such intrusive measures as the planting of listening devices went unregulated. Yet regulation has hardly kept pace with technological change, simply because many of the technologies now being deployed by law enforcement and criminal justice agents were unimagined, or at least unavailable for use, at the time the legislation was drafted. The law therefore needs continual re-examination to ensure that it provides adequate regulation. In many instances the interpretation of the law requires that test cases be brought before the higher domestic and international courts. In the current political climate, with its shift towards a proactive security state, activities which intrude and coerce are generally considered acceptable until legal rulings hold to the contrary. Test cases in the criminal field are notoriously problematic, because it is only when intrusive or coercive practices come to light that the law is tested. The question then revolves around the rights of suspected and convicted criminals. It is perhaps not surprising that public debate and judicial consideration in relation to convicted offenders work on a different calculation of the balance between security and liberty than they would in the case of a person judged to be of unimpeachably good character. This is particularly true at a time when politicians urge the rebalancing of the criminal justice system in favour of the ‘law abiding majority’.
Discussion about the protection of the liberties of all individuals is frequently left to test cases involving convicted offenders. In arguing the case for a regulatory framework, we urge the reader to consider the different ways in which crime control technologies are applied in particular spheres. There is a particular need to be aware of the ways in which the mission of a given technology mutates as it shifts from one sphere of application to another. One example is the shift from surveillance for the identification of specific suspected offenders to more general surveillance of the public—what might be referred to as ‘mission creep’ or ‘mission shift’. The same can be observed in the shift from surveillant to probative functions, so that technologies put in place for the purposes of general surveillance (which may, therefore, have only limited safeguards on their accuracy and legality) could easily be used for the purposes of identifying specific suspects and as evidence in court. A key empirical question is how far technologies are shifting in application from one sphere to another; the normative questions are how far this is permissible and what kinds of regulation should be in place to define and enforce the boundaries of mission creep and shift. If the boundary between surveillant and investigative procedures breaks down and it becomes impossible to distinguish between the two, which regulatory paradigm should prevail? The voluntary and consensual paradigm? Or that of obligatory co-operation, with accompanying offences or evidential consequences for non-compliance? Should technologies instead be graded according to their degree of intrusiveness, so that the more intrusive the technology, the greater the import attached to the information it obtains? Or the more serious the consequences of non-compliance, the more particular and probative the justification required for its use?
How could these criteria be measured? Is public acceptability an adequate means of measuring intrusiveness? If the boundary between surveillance and investigation is to be maintained, are codes of conduct regulating police use of surveillant measures required? Should civilians be provided with leaflets outlining the consensual nature of compliance with the surveillance? How can we define and ensure consensual compliance? Police have justified the use of surveillant technologies in mobile patrols as a means of ‘deterring’ crime. What extent of technological deterrence is permissible? Police have no power to use stop and search as a deterrent; grounds of reasonable suspicion are required before the stop and search power can be used (bar some specific legislative exceptions). When surveillant technologies are used as a means of deterrence rather than investigation, what is the legal basis and limit of their use? In the field of preventive technologies, where deterrence is the primary aim, there is a proliferation of private technology employed by individuals. How are these to be regulated? They are not ‘prescribed by law’—should they be proscribed? The use of a technology as a deterrent may encourage an exaggerated portrayal by law enforcement of its accuracy and reliability. This may be justifiable if the technology in question is restricted to this role, but most technologies are also being deployed in surveillant, investigative and probative roles. There may be reluctance on the part of the police to reveal the unreliability of a technology that is being used as a deterrent in police operations. Are we content for defendants to be convicted, or to have their convictions quashed, on the basis of scientific evidence alone? If so, what is the purpose of judges, lawyers and juries with no scientific understanding? Could we instead have automated trials?
If we are not content for decisions on guilt and innocence to be determined by science alone, why not, and to what extent should its role be limited? Different regulatory issues arise depending on the purposes to which technologies are put. In our view there is a pressing need for legal and empirical research to explore these questions. Such research should not be left until the technologies are in place: the difficulties of regulating technologies once they have become entrenched in society are well documented. Whilst we need specific research to identify the particular problems raised by individual technologies, especially in relation to their accuracy, a more comprehensive and principled approach might result in more systematic regulation of these technologies as and when they appear. Perhaps we need to turn the regulatory tools on the regulators. Do we need, perhaps, to ‘risk assess’ the science of risk assessment? There are certainly grounds for exploring the tools used and the long-term consequences of their use. Perhaps we should also be using ‘futures research’ methods to anticipate the intended and unintended consequences of the introduction of new technologies, and attempt to introduce systems to check for errors and unintended consequences and to ensure independence, accountability and high professional standards in the use of technologies as they are implemented, rather than at some later point, either as an afterthought or on demand when errors or unwanted side-effects are discovered.76

Normative research

One of the basic assumptions on which the criminal justice system rests is the idea of an autonomous human subject who is capable of making a free choice to commit or desist from crime.
There has been speculation throughout the history of criminology about man’s desires or drives, couched in the language of psychological or biological predisposition and prompting the images of the ‘born criminal’ or of those ‘driven to crime’. Such imaginings challenge the idea of individual autonomy and free will, and it is only with recent advances in the biology of anti-social behaviour that these ideas have been given full rein, stimulating speculation about the capacity (perhaps even the responsibility) of the state to intervene through risk-prediction and technologies for controlling the behaviour of the risky. At the same time, technologies of risk management and pre-emption contribute to the colonisation of human autonomy from a different angle. For example, technologies can reduce the capacity for criminal conduct, such as a car that will not start if the driver is intoxicated. Questions must be raised about the role of public education in this sphere. How far is the general public aware of the extent and nature of the technologies in the institutions that will shape their lives in the future? To what extent do secondary school students have an awareness of these issues? What place should a discussion of rights, responsibilities and the responsible use of technology take in citizenship classes? The question ‘Quis custodiet ipsos custodes?’—who will control the controllers?—has preoccupied theorists of democratic governance since Roman times. In our view, this question is increasingly urgent today, as technology extends the capacity to watch and control as never before. There is a need to establish systems of accountability, scrutiny, error checking and control over these new technologies. Technological developments also threaten autonomy from outside, by blurring the boundaries between criminal justice and social control as the nature of the latter becomes increasingly coercive and intrusive. If surveillance technologies provide evidence of wrongdoing, or if behaviour thought to be ‘risky’ can justify criminal justice intervention, what happens to the boundaries between citizen and suspect, ‘guilt’ and ‘innocence’? If electronic monitoring is used for the purpose of punishment (such as Home Detention Curfew), for monitoring post-release prisoners ‘in the community’ and as a way of monitoring ‘at risk’ populations whose guilt or innocence has yet to be established, where is the boundary between punishment and control, or between liberty and captivity? If autonomy is a central feature of what it is to be human, then where are we left when our autonomy is challenged from all angles?

Conclusion

In this paper, we have deliberately eschewed exposition of the advantages for crime control of information and communications devices, surveillance equipment, forensic science and new technologies for coercion and punishment. This, we think, can comfortably be left to the many advocates within the crime control industry.

76 An analogy could be made with crime prevention. Ken Pease notes that new technologies have ‘crime driving potential’, requiring manufacturers to ‘retrofit’ crime reduction devices once a product is already on the market. Similarly, crime control technologies may have unwanted side effects that are predictable but not predicted during development, for lack of concerted thought. As Ekblom notes in the context of crime prevention, ‘remedial or retrofit solutions are never as efficient as ones designed and incorporated into the product from the start’: P Ekblom, ‘Gearing Up Against Crime: A Dynamic Framework to Help Designers Keep Up with the Adaptive Criminal in a Changing World’ (1997) 2/4 International Journal of Risk, Security and Crime Prevention 249–65.
Instead, we have focused on the normative and regulatory aspects of this rapidly developing field and have found that the application of science and technology to crime control raises more questions than it answers. In our view, we need to think more carefully about the broader social impact of ‘crime control technologies’, reinvigorate the debate about what we call ‘security’, widen the idea of justice and aim for a higher quality of liberty. It seems to us that the creation of a safer society is a worthy goal and that it is inevitable that technology will play some part in this field, as it does in other walks of life. We must also remember that technology can turn back on power. Forensic science can exonerate the innocent person accused of crime, cameras can watch how prisoners are treated in police cells and prisons, and citizens can carry out sousveillance of police misconduct. Herein lies an opportunity to establish more egalitarian and democratic access to technology. The crucial point is that the technologies of crime control—which come with inherent infliction of harms such as intrusion into privacy and liberty, the use of force and the pains of punishment—must be tightly controlled if the promise of protection is not to be broken by the tyranny of oppression.

4

Towards an Understanding of Regulation by Design

KAREN YEUNG*

[O]ur intuitions for thinking about a world regulated by architecture are undeveloped.
Lawrence Lessig1

I.
Instruments for Implementing Social Policy

In his best-selling book, Code and Other Laws of Cyberspace, Lessig reveals how architecture (or ‘code’) can be changed in order to realise a collective or social end, lamenting the poverty of existing thinking concerning the implications of employing design-based approaches to shape social outcomes.2 This paper seeks to help fill the lacunae in our thinking about a world regulated by architecture by sketching an outline framework for exploring design-based instruments for implementing social policy, one that will aid our understanding of their ethical, legal and public policy complexities. The purpose of this paper is, first, to consider in greater depth the nature and variety of design-based approaches for achieving collective goals, briefly pointing to a varied range of current and potential applications, and secondly, to consider their legitimacy. Given the enormous range of design-based instruments, the infinite number of social ends for which they might be employed, and the varied nature of the relationship between the former and the latter, my purpose is not to develop any single and simple classification scheme, or guidelines for their use. Rather, my concern is to tease out this complexity in the hope of making our thinking about design-based regulation more consistent, nuanced and systematic.

My discussion proceeds in three parts. First, I identify two ways in which design-based approaches to regulating might be classified: by reference to the subject in which the design is embedded (places and spaces, products and processes, and biological organisms), and by reference to their underlying design mechanism or ‘modality of design’. Secondly, I consider how design-based regulatory instruments might be evaluated, in terms both of their effectiveness in achieving designated regulatory goals and of their implications for values of a non-instrumental kind. Although the attractiveness of many design-based approaches lies in their promise of 100% effectiveness, I will identify a number of reasons why design-based solutions may fail, due largely to various unintended effects arising from their use. These effects are likely to be considerably more difficult for policy-makers to correct, at least in comparison with ‘traditional’ policy instruments, most notably attempts to regulate through legal rules. It is the implications of design-based techniques for non-instrumental values, however, that have raised serious concerns among scholars. They fear, amongst other things, that design-based instruments may jeopardise constitutional values and the conditions required for a moral community to flourish. While I share many of these fears, I will argue that whether, and to what extent, they apply will depend partly on the design-modality adopted, as well as on the surrounding social, political and moral context in which the instruments are employed. In certain circumstances, design-based instruments may serve to reinforce rather than undermine moral norms. Thirdly, I suggest that in seeking to evaluate the legitimacy of certain kinds of design-based instruments, particularly those which seek to shape individual behaviour through direct intervention in the decision-making process, we must confront deep and highly contestable questions concerning our individual and collective identity. My aim here is to provoke reflection rather than offer simple solutions.

* I am indebted to Anna Oldmeadow for her research assistance and to Bronwen Morgan, Justine Pila, Eloise Scotford and Simon Halliday for their insightful comments on earlier drafts. Any errors remain my own.
1 L Lessig, ‘The Law of the Horse: What Cyberlaw Might Teach’ (1999) 113 Harvard Law Review 502.
2 L Lessig, Code and Other Laws of Cyberspace (New York, Basic Books, 1999) 91–2.
In this context, I suggest that the notion of authenticity, of who we are and what it means to be truly ourselves, might help to orient our critical reflections. But even if there is widespread consensus on the value of authenticity, its notoriously elusive and slippery content and contours are unlikely to provide much in the way of concrete guidance.

A. Understanding Design-based Instruments

Regulatory literature has hitherto focused upon attempts to promote social policy goals by changing individual behaviour, primarily through the ‘traditional’ policy instruments of command, competition, communication and consensus, which seek to alter the external conditions that influence an individual’s decision to act.3 Consider the following approaches aimed at tackling the increasingly urgent social policy goal of reducing obesity in the developed world. Here, the state might:

— enact laws prohibiting the manufacture and sale of any food or beverage that exceeds a specified sugar or fat level;
— impose a tax on high fat and high sugar foods;
— undertake public education campaigns to encourage healthy eating and regular exercise, or attach obesity warning labels to high fat and high sugar foods; or
— offer specified privileges or benefits to those who agree to participate in controlled diet and exercise programmes.

However, as Lessig points out, sociologists have long observed that technological design or ‘architecture’ may be used for shaping the social world, although these instruments may be considerably less visible than traditional approaches to public policy implementation.4

B. A Taxonomy of Design-based Instruments

Although Lessig draws upon a wealth of historical examples where architecture has been used to pursue social ends, he refers to them in an undifferentiated fashion.

3 B Morgan and K Yeung, An Introduction to Law and Regulation (Cambridge, Cambridge University Press, 2007) ch 3.
In order to deepen our understanding of these instruments, one useful starting point may be to classify them according to the subject in which the design is embedded (the ‘design-subject’).5 The following broad categories are not watertight, and many instruments typically overlap. Thus, like more traditional policy instruments, any given design-based instrument might be placed in more than one category, and instruments can readily be combined.

i. Designing Places and Spaces

When we think about architecture as a means of shaping behaviour, we are typically concerned with how places, spaces and the external environment more generally may be designed to encourage certain behaviours while discouraging others. The Crime Prevention Through Environmental Design (CPTED) approach to urban planning and design begins from the fundamental (and unsurprising) premise that human behaviour is directly influenced by the environment we inhabit.6 As Lessig demonstrates, the ‘code’ which constitutes the architecture of the Internet provides a particularly effective means of shaping behaviour in cyberspace, although he also provides a long list of examples involving the use of design to shape social outcomes in real space: speed bumps on roads to reduce traffic speed; railroads and other obstacles to achieve informal segregation between black and white communities;7 the bridges on Long Island to block buses; building codes to facilitate disabled access; the wide nineteenth-century boulevards of Paris, to make it more difficult for revolutionary insurgents to take control of the city;8 the distance between the White House and the Capitol, to make it more difficult for the President and Congress to connect and thereby reduce executive influence over the legislature;9 and similar motivations for the location of constitutional courts in continental Europe.10

ii. Designing Products and Processes

But Lessig’s illustrations also include cases in which design is embedded in manufactured products or industrial processes in order to alter their social impact or the user’s behaviour. He cites several examples, including the technology of cigarettes,11 security coded car radios12 and the spraying of marijuana fields with paraquat.13 Other well-known (and much discussed) examples include digital rights management technology (also called technical protection systems), intended to prevent the unauthorised copying of copyright-protected digital material by designing out the possibility for individuals to copy or use such material without authorisation, and car ignition locking systems which prevent car engines from starting unless all occupants are wearing seatbelts, thereby reducing the risk of serious injury to passengers arising from motor vehicle collisions.

iii. Designing Biological Organisms

All the examples of design-based interventions for achieving social outcomes which Lessig draws upon involve the design of spaces, places or things.

4 In his early and well-known typology, Christopher Hood refers to the ways in which governments may use ‘organization’, a label which he applies to the government’s stock of land, buildings and equipment, and a collection of individuals with whatever skills and contacts they may have, in government’s direct possession or otherwise available to it, through ‘direct action or treatment’ to effect behavioural change. See C Hood and H Margetts, The Tools of Government in the Digital Age (Basingstoke, Palgrave Macmillan, 2007) 102.
5 Eg, Brownsword identifies three design subjects—people, products and places—in R Brownsword, ‘Code, Control, and Choice: Why East is East and West is West’ (2006) 25 Legal Studies 1, 12.
6 See, eg, NK Katyal, ‘Architecture as Crime Control’ (2002) 111 Yale Law Journal 1039.
7 Lessig, above n 2, at 98.
So, returning to my earlier example of ways in which the state might seek to tackle the problem of obesity, it might design public spaces to encourage physical exercise (elevators etc for disabled use only). Town centres might be pedestrianised, allowing vehicular access only to those with mobility impediments. Healthy and low-fat food products might be packaged more attractively, while junk food is packaged in plain, unadorned form to make it appear less appetising and/or clearly labelled with an appropriate ‘obesity warning’. But design-based means can also be extended to the manipulation of biological organisms, from the simplest bacteria through to highly sophisticated life-forms including plants, animals and, of course, human beings. So, for example, in seeking to reduce obesity, the following possibilities (some of which are still in the realm of science fiction) might be considered:

1. Genetically modified sugar cane, with all the flavour of sugar but only a tiny proportion of the calories of unmodified sugar. Similarly, livestock might be genetically modified to produce leaner meat, providing the same nutrients but with less fat and fewer calories. If these products replaced their unmodified counterparts, then we would expect a reduction in the general level of high calorific sugar-laden or fatty food consumed, thereby helping to reduce obesity levels across the population.
2. Stomach-stapling or gastric banding surgery might be provided to overweight individuals, which suppresses the appetite and dampens hunger, encouraging individuals to reduce their food intake. Alternatively, a pill might be developed which also serves to dampen feelings of hunger, diminishing the desire to eat and thereby encouraging a reduction in food intake.

8 Lessig, above n 2, at 91.
9 Lessig, above n 2, at 92.
10 Ibid.
11 Lessig, above n 2, at 87.
12 Lessig, above n 2, at 90.
13 Lessig, above n 2, at 94.
Although both approaches are designed to generate weight loss by reducing the individual’s desire to eat, the first operates on the digestive system while the second operates on the brain and nervous system to block the transmission of hunger signals.

3. Overweight individuals might be offered a replacement bionic stomach which processes calories at an accelerated rate. This should lead to weight loss without any behavioural change by the individual in either food consumption or exercise levels.

Of course, if a state proposed to implement any of these strategies, it would raise a number of serious questions, particularly in relation to individuals who did not consent to the intervention. However, although I briefly refer to some of these concerns in the following discussion, issues concerning how design-based instruments should be regulated are largely beyond the scope of this paper.

a. Designing Plants

Although the example of calorie-reduced sugar concerns the design of food crops, plants might be designed for a range of non-food applications. For example, biofuels provide a possible design-based means of reducing environmental pollution from carbon emissions. Crops such as cereals, soybean, rape seed oil, sugar cane and palm oil can be used to make the two leading biofuel products, bioethanol and biodiesel.14

b.
Designing Animals

With the exception of genetically modified fish (particularly salmon), the prospect of genetically modified animals for human consumption is currently a long way off.15 Nevertheless, several potential applications are currently under consideration, including the introduction of genes to alter meat and milk composition, to produce either leaner meat or milk with enhanced anti-microbial properties for newborn animals.16

But we can also anticipate the design of non-human creatures for non-food applications. For example, one promising technology is the genetic modification of insects that carry human disease to create strains of insect that are incapable of carrying the disease, ie that are refractory to transmission. A genetically modified (GM) strain of malaria-resistant mosquito has already been created, which carries a gene that prevents infection by the malaria parasite and is better able to survive than disease-carrying insects.

14 Agricultural products currently grown specifically for use as biofuels include corn and soybeans (primarily in the United States); flaxseed and rapeseed (primarily in Europe); sugar cane (in Brazil); and palm oil (in South-East Asia). Existing biofuel technology is relatively inefficient and is criticised for undermining sustainable development, because it may encourage deforestation for the cultivation of biofuel products and encourages monoculture. So-called ‘second-generation’ biofuels may, however, reduce this difficulty.
15 The Royal Society, The Use of Genetically Modified Animals (London, The Royal Society, 2001).
16 The Royal Society observed in 2001 that such technology was in the early stages of development, and that it was likely to be at least a decade before large animals with modified or deleted genes would have been evaluated for commercial value and approved by regulatory bodies: ibid at para 54.
Although still very much at the early stage of development, it is estimated that mosquitoes modified not to transmit malaria would, if they replaced the ‘natural’ variety, spare millions of lives a year.17 c. Designing Humans But perhaps the most well-known interventions directed at biological organisms are those technologies that seek to alter the human constitution. While the follow- ing technologies have not generally been employed by the state to implement social policy goals, it is not difficult to imagine how such technologies might be used for such purposes.18 Well-known design-based human interventions include: Surgery Perhaps the most widely available and well-known form of human surgical alteration is cosmetic surgery. For those in developed economies, individuals seeking to ‘improve’ their appearance can readily engage a plastic surgeon to alter their physical appearance, such as breast enlargements, liposuction to remove fatty tissue and skin tightening to reduce the appearance of wrinkles. Psychopharmacology The use of psychotropic drugs to alter and enhance mood has become very widespread, helping to alleviate depression and, as a consequence, reduce time taken off work by those with potentially debilitating mental conditions. Anti- depressants are also intended to inhibit certain kinds of neurological functioning, thereby reducing the risk that the individual will engage in self-harming activities; Bio-engineering (genetic manipulation) Gene therapy may potentially be used to alter behaviour through the repair or replacement of genes, or the placement of a working gene alongside another faulty gene.19 Pre-implantation genetic testing and diagnosis might also be used 17 Researchers estimate that it may be at least 10 years before the bioengineered insects could be introduced in the environment, in the hope that they would replace the wild population and thereby reduce or eliminate disease transmission. 
See ‘GM Mosquito could Fight Malaria’, BBC News, 19 March 2007.
18 One notorious use of such technologies includes attempted state-sponsored sterilisation programs associated with eugenics. For a discussion, see R Proctor, Racial Hygiene: Medicine Under the Nazis (Cambridge, MA, Harvard University Press, 1988); NH Rafter, White Trash: The Eugenic Family Studies 1877–1919 (Boston, Northeastern University Press, 1988).
19 Nuffield Council on Bioethics, Genetics and Human Behaviour: The Ethical Context (London, 2002).

Towards an Understanding of Regulation by Design 85

to select embryos which display predispositions towards specific behavioural traits.20

Bionics

In medicine, bionics involves the replacement or enhancement of organs or other body parts by mechanical versions. Bionic implants differ from mere prostheses by mimicking the original function very closely, or even surpassing it. For example, the cochlear implant is already widely used, and the rapid development of nanotechnology opens up the possibility of using extraordinarily powerful yet exceptionally small computer chips to enhance organ functions, including certain kinds of brain function.21

C. Design Modalities

Another possible approach for classifying design-based instruments, and one which cuts across the above taxonomy, is to focus on their underlying mechanics. Just as Lessig refers to four ‘modalities’ of control in classifying different instrument classes, we can also look inside each instrument class. In other words, we might explore the ‘modalities of design’.22 Like more traditional policy instruments, the mechanics underpinning design-based approaches may seek to achieve their specified objective in different ways, with varying levels of effectiveness:

a. By Encouraging Behavioural Change

Some design-based instruments alter the surrounding conditions for action to encourage the behaviour deemed desirable.
These conditions may be directed at the external environment (in cases where the design is targeted at places, spaces, products or processes), or internal to the biological organism (in cases where the design is directed at people, plants or animals). In such cases, the resulting behavioural response is intended to be a product of individual choice. Accordingly, if the individual chooses not to act in the manner desired, then the desired outcome will not be fully achieved, undermining the effectiveness of the intervention. For example, appetite-suppressants (whether by gastric banding surgery or a drug that blocks the transmission of hunger signals to the brain) are intended to encourage weight loss by weakening or eliminating the feelings of hunger that would otherwise be experienced, weakening the individual’s desire to eat and hence encouraging the individual to reduce his or her food intake. But techniques of this kind will not result in weight loss if individuals nevertheless choose to maintain their pre-intervention calorific intake, despite the fact that they do not feel hungry (which is readily conceivable, given that people eat for many reasons other than hunger).

20 Ibid.
21 KR Foster, ‘Engineering the Brain’ in J Illes (ed), Neuroethics (New York, Oxford University Press, 2006) 185–200.
22 I am indebted to Justine Pila for suggesting this term.

b. By Changing the Impact of the Harm-generating Behaviour

Other design-based approaches seek to achieve their designated aim by altering the impact of harm-generating behaviour, rather than by facilitating behavioural change. Thus, if a genetically modified sugar cane which tasted identical to ordinary sugar but had a vastly reduced calorie content replaced unmodified sugar for food consumption, then we would expect a reduction in the general level of highly calorific, sugar-laden or fatty food consumed, thereby helping to reduce obesity levels across the population.
Although we might expect such approaches to generate a very high level of effectiveness, they are not failsafe, because the individual’s behaviour might serve to undermine the effect of the intervention. For example, if individuals raise their rate of food intake so as to offset or exceed the consequences of consuming calorie-reduced sugar, then the intervention would fail to generate weight loss.

c. By Preventing the Harm-generating Behaviour

Rather than seek to alter the impact of harm-generating behaviour, design might be employed to prevent the behaviour altogether. Some techniques of this kind simply reduce the probability of such conduct occurring, but others could in theory be employed to eliminate the harm-generating behaviour entirely. An example of the former kind might include the use of pre-implantation genetic testing to identify and select embryos which do not possess the so-called ‘obesity gene’, thereby reducing the likelihood of the resulting child becoming obese in later life.23 An example of the latter kind would be a bionic stomach engineered to process food at an accelerated rate up to a designated limit, so that any food consumed by the individual in excess of the prescribed maximum would simply pass through the body unprocessed and weight loss would be the inevitable result. Unlike prevention techniques which merely reduce the likely incidence of the undesired state, prevention techniques which are designed to override human action offer the potential to ensure that the desired outcome will be fully achieved. Although the above examples used to illustrate variation in design-based modalities for regulating are primarily concerned with designing biological organisms, this classification scheme applies equally well to the design of places and spaces, products and processes.
So, for example, in order to reduce personal injuries and fatalities arising from motor vehicle accidents, a community might consider the following design-based techniques:

23 T Frayling, ‘A Common Variant in the FTO Gene is Associated with Body Mass Index and Predisposes to Childhood and Adult Obesity’ (2007) 316 Science 889–94.

(a) Encourage behavioural change: Install speed bumps in roads to encourage drivers to reduce their speed. Speed bumps alter the external conditions experienced by road users to encourage drivers to slow down, but they may be ineffective if drivers choose to maintain their speed and simply suffer the discomfort and risk of car damage that may result from driving over the bumps at speed;

(b) Change the impact of harm-generating behaviour: Install air-bags in all motor vehicles. By altering the functioning of motor vehicles, air-bags are intended to reduce the severity of personal injuries arising from motor vehicle accidents without requiring any behavioural change by the driver or occupants. However, serious injuries to passengers may nevertheless occur due to individual behaviour which effectively neutralises the effect of the air-bags (eg by driving so recklessly that the vehicle is involved in a collision whose impact exceeds the capacity of the air-bags adequately to cushion the occupants from injury); and/or

(c) Prevent the harm-generating behaviour: Install a comprehensive ‘smart’ integrated transport system.24 Cars are fully automated. Individuals no longer drive. The passenger simply enters the designated destination into the car’s computer, and the vehicle is then directed to the destination by a central computer, which simultaneously records, tracks and directs every other vehicle on the road. The system is programmed to avoid congestion, motor vehicle accidents and speeding.
A ‘dummy’ steering wheel and accelerator are provided for those who wish to retain a sense of the experience of driving, but the driver cannot override the smart system’s control of the vehicle. Such a system would directly ensure the achievement of the desired goal without the need for behavioural change, and would be unaffected by any action of the person targeted. While speed humps seek to reduce motor vehicle injuries by altering driving conditions to encourage the desired change in behaviour (ie reduction of speed), and air-bags pursue this goal without the need for behavioural change by altering the way in which the motor vehicle functions (ie by reducing the severity of injuries arising from motor vehicle accidents), the third kind of approach, which seeks to prevent motor vehicle accidents from occurring, does not allow scope for individual action. As a consequence, the achievement of the desired goal may be thwarted on either of the first two approaches, unlike prevention-oriented strategies, which ‘design out’ the opportunity for individual action, thereby potentially providing a fail-safe means for achieving the desired end.

24 I have borrowed this idea from R Brownsword, ‘Code, Control, and Choice: Why East is East and West is West’ (2006) 25 Legal Studies 1, 16.

D. A Word About Filtering

Readers with a particular interest in design-based approaches for regulating behaviour in cyberspace may be wondering why filtering technologies have not been included within the above classification scheme, given their widespread and well-known use as a means of preventing or blocking access to content deemed undesirable. The power of filtering lies, however, not primarily in its capacity to encourage behavioural change, alter the impact of harm-generating behaviour, or prevent harm, but in its ability to detect, identify and thus discriminate between units with prescribed characteristics in a large population.
Accordingly, filtering technology is not a modality of control, but a powerful tool of identification and selection. Once identified and selected, a range of actions might be taken, whether it be to scrutinise, privilege, assist, administer treatment, restrict access, exclude or extinguish. In cyberspace, filtering is typically employed as a regulatory device to restrict or exclude access to content deemed undesirable, but filtering is merely the means for identifying and selecting the content to be excluded: filters can be employed equally well to single out content deemed ‘desirable’.25 Filtering can thus be understood as an adjunct technology, one that may be employed in pursuit of each of the three modalities of design. For example, a community might employ filtering technology to reduce motor vehicle injuries in several different ways. It could be used in conjunction with driver profiling technology to identify and discriminate between drivers with a tendency to speed and those without. In this way, filtering devices might trigger the raising of speed humps in residential areas upon identification of those with a propensity to drive at excessive speed, whilst lowering them for drivers identified as ‘safe’. Alternatively, high-risk drivers could be located via global-positioning devices installed in all motor vehicles, alerting other drivers to their proximity. And where a motor vehicle is identified as being driven in an exceptionally dangerous or erratic manner, it might be automatically immobilised: here, filtering technology is linked to regulatory technology designed to override human behaviour and thereby exclude dangerous drivers from public roads. Note, however, that the exclusion of ‘high risk’ drivers from public roads would not eliminate motor vehicle accidents.
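The adjunct character of filtering can be sketched in a few lines of code: the filter is merely a predicate that identifies and selects units from a population, while the regulatory response is a separate, pluggable function. Everything in this sketch (the driver records, the speeding threshold, the ‘immobilise’ order) is an invented illustration, not any real system:

```python
# Illustrative sketch only: filtering as identification and selection,
# with the regulatory response supplied separately. All names are invented.

def filter_population(population, predicate):
    """Identify and select units displaying the prescribed characteristic."""
    return [unit for unit in population if predicate(unit)]

def apply_action(selected, action):
    """Apply a separately chosen regulatory response to the selected units."""
    return [action(unit) for unit in selected]

drivers = [
    {"id": 1, "speeding_events": 7},
    {"id": 2, "speeding_events": 0},
]

# The filter merely selects; the same selection can serve opposite ends.
high_risk = filter_population(drivers, lambda d: d["speeding_events"] > 3)

# ... to exclude: immobilise the offending vehicle ...
orders = apply_action(high_risk, lambda d: f"immobilise vehicle {d['id']}")
# ... or, with a different action supplied, to privilege or assist
# exactly the same selected units.
```

The point of the separation is that the identification step carries no regulatory commitment of its own: the same `high_risk` selection could equally trigger assistance rather than exclusion.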
Although reckless and negligent driving is likely to be the most significant cause of motor vehicle accidents, there are many other causes which these hypothetical technical fixes would not prevent.

II. Evaluating Design-based Regulatory Techniques

Classification schemes, such as the two I have outlined, can be valuable analytical devices. Hence the existence of multiple classification schemes can be a source of strength, and we need not attempt to determine whether one scheme is superior to any other. Rather, identifying which scheme is to be preferred in specific contexts will depend largely on the purpose of analysis. The aim of the following discussion is to consider several issues that arise in seeking to evaluate the legitimacy of design-based instruments, understood in terms of their effectiveness in achieving their designated policy goals, and their implications for a range of non-instrumental values. To this end, one strength of a modality-based taxonomy, which cuts across a subject-oriented focus, is that it enables common challenges associated with a wide range of technologies and subjects to be opened up for inquiry and examination, including questions of legitimacy. In addition, increasing technological convergence and the rapid development of ‘smart’ technologies suggest that subject-focused classifications may be of limited usefulness. Smart systems involve sophisticated interaction between people and their environments, transcending a subject-focused taxonomy.

25 Fears about Internet filtering for content an individual deems desirable are expressed by Cass Sunstein, who worries that the use of filters to cater to individual tastes and preferences may diminish a community’s shared experience and exposure to diversity-enhancing content. See Cass Sunstein, Republic.com (Princeton, NJ, Princeton University Press, 2002).
In contrast, we can apply the modality-based classification scheme to smart technologies in what appears to be a reasonably straightforward manner. For example, Hildebrandt contrasts two different ways in which smart cars might be designed to reduce motor vehicle accidents caused by driver fatigue: on detection of driver fatigue, a smart car which issues a warning intended to encourage the driver to take appropriate action (stop and rest) would be located in category (a), because the underlying design modality seeks to encourage behavioural change. By contrast, smart cars which automatically direct the driver to a parking lot and prohibit continuation of the journey on detection of driver fatigue fall into category (c), adopting a design modality which overrides human action to achieve the desired end.

For policy-makers and law-enforcement officials, rapid technological advancement in this so-called ‘Age of Information’26 ushers in the exciting prospect of design-based instruments capable of achieving regulatory goals with a level of precision and effectiveness impossible to attain through more traditional policy instruments. From the world of ambient intelligence which Mireille Hildebrandt envisions,27 it is but a short step to a world of ‘ambient regulation’, one in which intelligent technology is employed by well-meaning governments to rid us of the plethora of harmful by-products associated with contemporary industrialised life. But even if we focus solely on the effectiveness of design-based instruments, it is questionable whether such perfection can ever be more than a technophile’s dream. Of the three modalities of design which may be employed to shape the social world, the assurance of success arises only where design is employed to prevent specified outcomes by overriding human behaviour.
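Hildebrandt's two smart-car designs lend themselves to a minimal sketch of the contrast between modality (a) and modality (c). The function names and response strings below are hypothetical, not drawn from any real system:

```python
# Toy rendering of the two smart-car designs discussed in the text.

def respond_modality_a(fatigue_detected: bool) -> str:
    """Category (a): encourage behavioural change; the driver may ignore it."""
    return "warning: stop and rest" if fatigue_detected else "ok"

def respond_modality_c(fatigue_detected: bool, wants_to_continue: bool) -> str:
    """Category (c): override human action; the driver's wishes are irrelevant."""
    if fatigue_detected:
        return "journey prohibited: auto-routing to parking lot"
    return "driving" if wants_to_continue else "parked"
```

Under (a) the regulatory outcome still depends on a choice the driver remains free to make; under (c) detection of fatigue settles the outcome regardless of that choice, which is precisely what makes the override modality the only one that can guarantee its result.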
Where the design modality employed seeks to encourage behavioural change, to alter the impact of harm-generating behaviour, or to reduce the probability of undesired social outcomes, scope remains for human agency and thus for thwarting the achievement of the desired goal.

26 M Castells, The Information Age: Economy, Society and Culture (Oxford, Blackwell, 1996).
27 M Hildebrandt, ‘A Vision of Ambient Law’ (this volume, ch 8 p 175).

As a consequence, we should not be surprised if regulators increasingly turn their attention towards instruments which design out scope for individual decision-making and thus offer the promise of guaranteed success. But even in these circumstances, such perfection is likely to be illusory for several reasons. First of all, design-based instruments may be vulnerable to technical circumvention, the extent of which will vary considerably between contexts. So, for example, while hackers have proved themselves remarkably adept at ‘cracking’ the digital code developed to restrict access to specified web content and other digital applications, it would take exceptionally high levels of technical expertise to reverse-engineer genetically designed organisms. Secondly, no technology is fail-safe, for it is impossible to eliminate entirely the risk of technical or other operational error. Although the risk of technical failure can often be reduced to a tolerable level, one of the greatest challenges posed by new and so-called revolutionary technologies is that their risks are unknowable, and hence unquantifiable on the basis of existing scientific knowledge. In these circumstances, policy-makers often face conflicting messages. While some call for mandatory restrictions and even the prohibition of such technology, others call for government support to promote its development, pointing to an array of potentially powerful social applications that might be employed to fight disease, alleviate poverty and otherwise enhance collective welfare.
In democratic states, it is only when these hurdles have been successfully surmounted that policy-makers could realistically contemplate employing such technologies in pursuit of regulatory purposes.

A. Establishing Regulatory Standards: Rules vs Architectural Design

But one of the most significant limitations of utilising design as a regulatory policy instrument arises from the prospect of design failure. Although design-based techniques for implementing regulatory objectives need involve neither behavioural change nor reliance on legal rules, regulators contemplating their use nonetheless need to ensure that their instruments are accurately targeted. While the meaning of ‘regulation’ is notoriously inexact and highly contested, a functional, cybernetic approach to regulation is widely used and accepted, characterising a regulatory system as having the capacity to set standards, to gather information about the state of the system, and to effect change to the state of the system in order to bring it into alignment with its intended purpose.28 The nature and form of regulatory standards, and the tasks involved in standard-setting, vary with different policy instruments. For regulators who wish to rely on legal commands to implement their policy goals, standard-setting involves the drafting of legal rules that will provide guidance to those they regulate. For regulators who opt for design-based techniques, standard-setting entails the design of technical standards which can then be embedded within the architecture of the regulatory design instrument. But for each regulatory instrument, its success is typically and primarily assessed in terms of its effectiveness: the extent to which it ensures that the chosen policy goal is achieved in practice.

28 C Hood, H Rothstein and R Baldwin, The Government of Risk (Oxford, Oxford University Press, 2001) 23.
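The cybernetic characterisation of regulation (standard-setting, information-gathering and behaviour-modification) is, in effect, a feedback control loop. The following toy sketch is my own illustration of that loop, not drawn from the cited work; all names and values are invented:

```python
def regulate(system_state, standard, observe, modify, steps=3):
    """Toy cybernetic regulator: observe the system, compare what is
    observed with the set standard, and effect change until the state
    aligns with the standard (or the step budget runs out)."""
    for _ in range(steps):
        observed = observe(system_state)            # information-gathering
        if observed == standard:                    # compare with standard
            break
        system_state = modify(system_state, standard)  # behaviour-modification
    return system_state

# Hypothetical example: nudge a 'noise level' of 3 down towards a
# standard of 0, one unit per feedback cycle.
state = regulate(
    system_state=3,
    standard=0,
    observe=lambda s: s,
    modify=lambda s, std: s - 1 if s > std else s + 1,
    steps=10,
)
```

The sketch also makes the chapter's later point visible in miniature: if the loop is starved of feedback (too few steps, or an `observe` function that misreports the state), the system never converges on the standard.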
Regulators who opt for legal rules often find that, despite careful attention to the drafting of rules, they may nevertheless fail to bring about the desired policy objectives owing to the imperfect match between the rule and its purpose, and to uncertainty in applying the rule to individual circumstances. In his contribution to this volume, Roger Brownsword illustrates some of these difficulties by considering the challenges associated with drafting a suitable rule to regulate passenger behaviour in railway carriages designated as quiet zones. If we imagine the case of a very simple rule, such as ‘do not use mobile phones’, then it is readily apparent why this rule is unlikely to ensure that the carriage remains quiet. This simple rule is not well matched to its intended purpose, for the reasons outlined by Julia Black in her perceptive analysis of regulatory rules.29 First, because the operative basis of a rule (ie mobile phone use) rests on an anticipatory, generalised abstraction (ie that the use of mobile phones causes unwanted noise), it inevitably suppresses properties that may subsequently be relevant (ie noise may be generated from other sources) or includes properties that may in some cases be irrelevant to the relationship between the rule and its desired purpose (ie mobile phone use does not always generate noise, as when text messages are transmitted and received while the ring-tone is switched off). Secondly, the causal relationship between the event and the harm or regulatory goal is an approximate one which might not be borne out in every case. In this example, although using a mobile phone usually generates noise, it does not always do so, and noise can stem from other sources. Thirdly, even if a perfect causal match between the generalisation and the aim of the rule could be achieved, future events may develop in such a way that it ceases to be so.
So, for example, future-generation mobile phones might be fitted with privacy-enhancing technology enabling the user’s speech to be rendered inaudible to all but the recipient of the telephone call. Recognising these imperfections, those responsible for drafting regulatory rules might add the words ‘or other unnecessary noise’ to the basic rule prohibiting the use of mobile phones. But this enhanced rule might still fail to promote its underlying goal due to uncertainty in its application. This uncertainty arises from the indeterminacy of rules, which is, in turn, a product of the inescapable indeterminacy of language. Even when the meaning of the words used in the rule is clear, the question will always arise whether the general term used in the rule applies to a particular fact situation. So, for example, is a laptop computer which runs software that enables speech transmission via the Internet a ‘mobile phone’ for the purposes of the rule? Is the noise of a crying infant ‘unnecessary’ in this context? It is this indeterminacy in application which HLA Hart described as the ‘open texture’ of rules.30 Accordingly, even rules carefully crafted to fit their intended purpose may nevertheless fail to provide clear guidance to their addressees, suppressing some behaviour which has no bearing upon the regulatory goal, while conduct that undermines the policy goal might be interpreted as falling outside its reach. While finding ways to establish rules that are fit for purpose and can deal adequately with linguistic uncertainty might prompt regulators to look to other policy instruments, I seriously doubt whether these difficulties can be successfully avoided by resort to design-based techniques. Firstly, although regulators no longer need to rely on lawyers to draft suitable legal rules, they do not dispense with the need to establish standards to implement policy goals.

29 J Black, Rules and Regulators (Oxford, Clarendon Press, 1997).
The task of standard-setting is merely shifted from lawyers to design-engineers, who are entrusted to embed regulatory policy objectives into the design and operation of the regulating architecture. So, for example, if engineers briefed to design a railway carriage that will operate as a quiet zone devise a carriage fitted with devices that automatically block the transmission of mobile telephony signals, then (leaving aside the possibility of operational failure) this would effectively prevent passengers located in that carriage from using mobile phones, even in circumstances when they could be used silently, but it would fail to eliminate unwanted noise from other sources. Although the instrument would be completely effective in achieving the engineer’s design goal—preventing the use of mobile phones—it would fail to achieve the regulator’s underlying policy goal. In other words, standards embedded into regulating technology may under- or over-shoot their policy target, just as regulatory rules might be under- or over-inclusive.

But standards embedded into regulating architecture differ from standards embedded into legal rules in at least two respects. First, the binary logic of technical standards is not subject to the uncertainties arising from the inherent indeterminacy of language that plagues the use of rules. In order to avoid operational failure or suspension in the event of unforeseen circumstances, designers can program their instruments to issue a default response. For example, design-engineers might program the signal-blocking device installed in train carriages designated as quiet zones to treat any unrecognised digital signal as ‘permissible’, allowing it to transmit without interference, or as a ‘violation’, automatically blocking transmission.
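The default-response choice just described amounts to selecting a fallback branch for signals the designer did not foresee. A short sketch makes the point explicit; the signal taxonomy, names and example signal are invented for illustration:

```python
# Hypothetical signal taxonomy for the quiet-carriage example.
KNOWN_PHONE_SIGNALS = {"gsm", "umts"}
KNOWN_PERMITTED_SIGNALS = {"wifi", "bluetooth"}

def quiet_carriage_filter(signal: str, default_block: bool) -> str:
    """Decide whether to transmit or block a signal in the quiet carriage."""
    if signal in KNOWN_PHONE_SIGNALS:
        return "block"
    if signal in KNOWN_PERMITTED_SIGNALS:
        return "transmit"
    # Unforeseen signal: the embedded default decides, with no scope for
    # human interpretation of the regulator's underlying purpose.
    return "block" if default_block else "transmit"

# A default-block design silences an unrecognised signal of any kind,
# however benign, just as readily as an unrecognised phone protocol.
decision = quiet_carriage_filter("unrecognised-telemetry", default_block=True)
```

Because the fallback branch is fixed at design time, every unforeseen case is resolved mechanically in the same direction, which is exactly why the choice between default-permit and default-block carries the consequences discussed next.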
While the provision of a default standard avoids the need for human interpretation and thereby ensures that a regulatory response will obtain for every situation, it cannot ensure that the response will be aligned with the regulator’s underlying policy objectives. Nor is it difficult to envisage serious consequences arising from insensitive design. If the default technology in our hypothetical railway carriage is programmed to block any unrecognised signals, this might generate minor inconvenience to laptop users who discover that they cannot transmit and receive email messages, but the consequences would be very serious for an amputee whose prosthetic limbs relied upon digital signals for their operation. Secondly, unlike rules, design-based instruments may be self-executing.31 Once the standard embedded within the technological instrument is triggered, the response can be automatically administered. By contrast, rule violation cannot be sanctioned unless and until compliance with the rule is actively monitored and enforced. This not only requires human personnel to monitor and commence enforcement action against suspected violations, but also requires—at least in democratic societies—a set of enforcement institutions to oversee and administer the lawful application of sanctions. Rules rely on interpretation, enforcement and sanction through human interaction, requiring human agents to interpret and apply rules to discrete factual circumstances where a violation is alleged to have occurred. Because rule enforcement is a resource-intensive activity, a considerable number of legal violations, particularly those of a fairly minor nature, including trivial road traffic violations, frequently go unpunished.

30 HLA Hart, The Concept of Law, 2nd edn (Oxford, Clarendon Press, 1994).
Thus in practice, rule-based regulation typically relies for its success not only on well-drafted rules, but also on the fact that most people are largely law-abiding and will effectively ‘self-regulate’, without the need for comprehensive and costly enforcement activity. At first blush, design-based regulatory instruments that override human action seem to offer a unique and considerable advantage over their traditional rule-based counterparts, allowing regulators to avoid devoting the human and institutional resources necessary for monitoring and enforcing regulatory rules whilst offering consistent and immediate application. But while enforcement institutions may appear costly by comparison, socio-legal scholars have amply demonstrated that the judicious exercise of discretion by enforcement officials serves a vital role, enabling regulatory rules to be applied in a manner that conforms with their underlying ‘spirit’ or policy objective, rather than insisting on strict compliance with the letter of the law where they believe that this would be counterproductive. Within rule-based regulatory regimes, the inescapable problems of inclusiveness and determinacy that arise at the rule-setting stage can be addressed at the enforcement stage through sensitive interpretation and application. Although many of the difficulties associated with the use of rules can ultimately be traced to the vagaries and complexity of human interaction and ingenuity, it is the flexibility and adaptability of human responses that provide the means for overcoming these limitations.32

B. Feedback and Error Correction

In contrast to rule-based regulation, self-enforcing design-based regulatory instruments are intended to operate as ‘closed’ systems.
Once standards have been established, there is no opportunity for adjustment within the system itself if the standards turn out to be misaligned with their intended policy goal. As Justice Michael Kirby observes, unlike law, technological filtering mechanisms used to regulate Internet content often result in excessive censorship, particularly in the absence of rights of challenge.33 Unless there is some feedback mechanism by which failure in the design standards can be appropriately communicated to designers so that they can make adjustments, the failure will continue to repeat itself within the system. In addition, there are many situations in which the service of aims and values other than those underpinning the effective achievement of regulatory goals may have a legitimate claim to priority. One of the powerful lessons of socio-legal research is the crucial role of enforcement officials. It is through their first-hand experience, observing and interacting with those they regulate, that they can not only interpret and apply rules to promote their underlying objectives, but also mediate tensions between effective policy outcomes and the need to mitigate their harshness in particular cases. This is not to say that enforcement officials are invulnerable to error, inconsistency or even corruption, but their indispensable role means that there will be regulatory officials who stand where the rule hits the road, rubbing up against the reality of human experience, and who can thus exercise enforcement discretion to avoid unintended, unfair or otherwise unwanted outcomes.

31 For a discussion and critique of self-enforcement in the context of ‘tethered’ digital appliances, see J Zittrain, ‘Tethered Appliances, Software as Service, and Perfect Enforcement’ (ch 6 this volume).
32 B Morgan and K Yeung, An Introduction to Law and Regulation (Cambridge, Cambridge University Press, 2007) 176.
By contrast, designers of regulating technology are likely to be far removed from those whom the technology is intended to regulate.34 The same might be said, of course, of the parliamentary draftsmen who draft the legislative standards used to implement the state’s policy objectives.35 But they are constitutionally independent of, and institutionally separate from, the enforcement officials and institutions responsible for the application and execution of law, reflecting the constitutional separation of powers which serves as a safeguard against the abuse of the state’s coercive power. Seen in this light, the self-enforcing nature of design-based instruments entails a significant shift in the power relations between the regulator and the engineers and architects they employ to establish regulatory standards, on the one hand, and those they regulate, on the other. Where design-based instruments design out scope for individual decision and action, regulated persons are deprived of the opportunity to appeal to human reasoning and judgment in demonstrating why the sanction associated with rule violation ought not apply in specific cases. Accordingly, design-engineers are in a position to exert a more powerful influence than those who draft regulatory rules. As Justice Michael Kirby succinctly observes, ‘[g]iven the importance of technology to the current age, how do we render those who design, install and enforce such programmes accountable to the democratic values of our society?’.36

33 M Kirby, ‘New Frontier—Regulating Technology by Law and “Code”’ (this volume, ch 17 p 367).
34 In addition, design-engineers are more likely to be found among private-sector consultants than in direct public employment. The implications of a shift away from public-sector to private-sector standard-setting have been considered by several commentators, particularly in the cybercontext, but are beyond the scope of this paper.
35 R Baldwin, ‘Why Rules Don’t Work’ (1990) 53 MLR 321.
36 M Kirby, ‘New Frontier—Regulating Technology by Law and “Code”’ (this volume, ch 17, p 367).

Towards an Understanding of Regulation by Design 95

The enlarged power wielded by regulatory design-engineers demands that they pay careful attention to the consequences of design failure when establishing regulatory standards. In particular, when contemplating the design of default mechanisms, careful consideration should be given to the consequences of type I (false positive) and type II (false negative) errors, to determine which should be preferred. Within the design of democratic legal systems, the moral harm associated with false positives (wrongly convicting the innocent) is judged to be graver than the wrong associated with false negatives (failing to convict the guilty), and this moral evaluation of the consequences of error is institutionally designed into the legal process, at least in the burden of proof and the institutional protections accorded to those charged with criminal wrongs. Other contributors to this volume echo this concern when technology is used in pursuit of collective ends. For example, Judy Illes warns that if neuroimaging technology is to be used for lie-detection purposes and the potential for and consequences of error are high, protections should be put in place to mitigate the possibility, or at least to double-check positive findings before a person is subjected to further testing.37 Likewise, Justice Michael Kirby highlights the risk of excessive censorship arising from type I errors, which arise when Internet filters designed to prohibit access to materials considered ‘harmful to minors’ also inadvertently prevent access to lawful erotic materials, to discussion about censorship or to websites concerned with subjects of legitimate interest.38

C.
Design-based Instruments and Democratic Values

The loss of opportunity for individuals to appeal to the discretion and judgment of enforcement officials against the inappropriate or unfair application of regulatory standards reflects a broader concern that the turn to design-based regulatory instruments diminishes opportunities for democratic participation. Although concerns about the self-enforcing character of design-based instruments apply only to those which override human agency, the turn to design-based policy instruments more generally may also reduce opportunities for citizen participation in the policy process. Even if design-based instruments can deliver on their promise of enhanced effectiveness, commentators have already cast doubt on their legitimacy by highlighting how their use threatens important non-instrumental values. For example, Lessig expresses two fears about architecture as a means of regulating cyberspace. First, that state-sponsored code-based regulation of an indirect kind undermines constitutional values of transparency and accountability. By ‘indirection’ he means state attempts to harness architectural controls to achieve its aims indirectly, in an opaque manner, by enrolling non-state parties,39

37 J Illes, ‘Vicissitudes of Imaging, Imprisonment, and Intentionality’ (this volume, ch 14, p 317).
38 M Kirby, ‘New Frontier—Regulating Technology by Law and “Code”’ (this volume, pp 23–24).
39 He provides an example of the state ordering doctors not to give abortion advice with the aim of discouraging abortion, although this example is more appropriately characterised as indirect regulation via the use of command-based mechanisms directed at gate-keepers rather than directly at those whose behaviour is targeted.
rather than directly through transparent, state-sponsored advertising campaigns or direct tax breaks to those who engage in the conduct that the state wishes to encourage (for example, benefits to those who proceed with pregnancy rather than undertake an abortion).40 Secondly, he is concerned with the capacity of the private sector to employ code for private gain, overriding the legislatively authorised balance between competing values. He points out that while trusted systems give copyright owners more control over their digital rights, they displace the balance of values between authorial creativity on the one hand and the promotion of an intellectual and creative commons on the other.41 In other words, Lessig is concerned with good governance, warning that architecture may be used by the state and the private sector to achieve their own ends in ways which undermine constitutional values of transparency, public deliberation and participation while severely restricting the liberty of individual netizens.42

While Brownsword shares Lessig’s concerns about the potential for design-based instruments to violate tenets of good government, his worries are even more deeply rooted, for he fears that they may seriously undermine moral community. The remainder of this paper explores some of these concerns in greater depth. In order to isolate his concerns about the implications of techno-regulation for moral community from concerns about good governance, Brownsword imagines a scenario in which regulators, mindful of the values of transparency and accountability, present a designed response to a problem but will not introduce it unless specifically mandated to do so by the informed choice of their citizens. He employs a similar hypothetical case to the one I have described, in which the state seeks to render transportation safer through the use of a smart integrated road transport system.
Citizens fully debate the proposals and, although there are small pockets of resistance, the majority of citizens vote in favour of the proposal. But even if the state seeks to adhere to the tenets of good governance in proceeding with the design-based solution in the manner suggested, Brownsword worries that although the scheme successfully eliminates the harms associated with road traffic accidents, road traffic violations, car crime and so forth, it does so at the expense of individual moral autonomy and the conditions required for a flourishing moral community. Before exploring these concerns further, it is worth noting that the risks that design-based approaches may pose to constitutional values are unlikely to be satisfactorily resolved simply by insisting upon the majority endorsement of transparent proposals that have been publicly debated and voted upon. Even if a majority of citizens give informed consent to the proposal, it still leaves several thorny questions concerning how the scheme could be implemented and administered without unduly jeopardising constitutional values. For example, if a smart integrated transport scheme is to eliminate road traffic accidents, then it must be implemented across the board, so that even individuals who voted against the scheme would be required to participate. If participation is effectively compelled through design, rather than through coercive laws (for example, if old-style cars are incapable of functioning on new ‘smart’ roads), then the dissenting minority might claim that by rendering their old-style cars useless, the state’s implementation of the programme amounts to an unconstitutional violation of their property rights, effectively expropriating their property without just compensation.

40 Lessig, above n 2, at 98.
41 Lessig, above n 2, at 135.
42 See also L Lessig, ‘The Zones of Cyberspace’ (1996) 48 Stanford Law Review 1403 at 1408.
In other words, majority approval for a design-based solution is unlikely to be sufficient to discharge the demands of good governance in democratic states, for these extend to the implementation and administration of public programmes and not merely the decision to adopt them.43 While concerns about interfering with individual property rights may seem relatively minor, concerns about the need to respect individual rights become much more acute when humans are the subject of design-based intervention, requiring some kind of surgical procedure or other interference with an individual’s bodily integrity. In other words, state use of design-based instruments to achieve collective ends raises important questions concerning how they should be regulated. Such questions are beyond the scope of this paper; suffice it to say that much will depend on who would administer the technology and for whose benefit (would individuals self-administer for personal benefit? would individuals administer the technology for the benefit of others, such as children, the mentally infirm and other vulnerable persons or creatures?), when and in what circumstances the design would be administered, and to what end (how urgent is the social end which the state seeks to pursue via design)?

D. De-moralising Design?

But Brownsword’s concerns about design-based approaches to regulation extend beyond the erosion of principles of good governance to fears that they remove opportunities for the individual to engage in moral reasoning and judgment, removing the conditions needed for a moral community to flourish:

Techno-regulation approaches the problem of social order in a way that does not rely on building normative consensus; it is amoral; it does by-pass the realm of values; and it does not rely on moral discipline or obedience to authority.
However, this is not because techno-regulation favours non-moral reason over moral reason, but more dramatically because it by-passes practical reason altogether … far from normalising crime, techno-regulation seeks to eliminate it as an option. (Code as Control)

According to Brownsword, if a moral community is to flourish, individuals must have the capacity for genuine choice. Hence the freedom to choose to do right necessarily entails the freedom to choose to do wrong. Within Brownsword’s ideal moral community, people act in an other-regarding way for the right reasons: not through fear, nor because their environment gives them no other alternative or because they have been designed to act only in the right way.

I share Brownsword’s concern about the importance of considering the implications of design-based instruments for moral judgment.44 But in so doing, we must attend carefully to the modalities of design and surrounding social practice in considering how design-based instruments affect, or are likely to affect, moral agency. Brownsword’s fears about the demoralising effects of design-based regulation apply only to a sub-set of design-based instruments: those which rely on a design-modality which prevents harm-generating behaviour in its entirety by overriding human decision and action (which he terms ‘techno-regulation’). Design-based instruments, if thoughtfully designed, may well have a ‘moralising’, rather than a ‘de-moralising’, effect on individual decision-making and social practice more generally.

43 See K Yeung, Securing Compliance (Oxford, Hart Publishing, 2004).
This moralising potential is nicely illuminated by criminologist David Smith in his account of architecture for reducing fare evasion on the London Underground.45 He draws upon the experience of waist-high ticket barriers at the entrance to the London Underground, which serve as a symbolic barrier rather than making it physically impossible to avoid paying the correct fare. Passing through the barrier with a valid ticket is a ritual of lawful acceptance, whereas jumping over it is a flagrant transgression. Under the old system without ticket barriers, fare evasion merged imperceptibly into informal fare transactions in a succession of half tones and ambiguities. Within the new system, Smith describes the automatic gate as a symbolic barrier which ‘dramatises the choice between morality and deviance’.46 The use of electronic ticket barriers that can be readily by-passed by physically jumping over them is a clear example of architectural mechanics that fall into the first design-based modality described in the preceding section: those designed to achieve their social ends by encouraging behavioural change. Ticket-barriers are analogous to speed humps on roads: they aim to bring about behavioural change by altering the relative desirability of a given course of action. Although individuals may wish to drive at speed through residential streets, the speed humps introduce an undesired effect to accompany such action, causing physical discomfort to the passengers inside the vehicle and risking damage to the vehicle. The individual’s desire to drive at speed is thus tempered by her countervailing

44 Jonathan Zittrain expresses a similar concern in his discussion of tethered appliances, observing that ‘perfect enforcement collapses the public understanding of law with its application, eliminating a useful interface between the law’s terms and its application.
Part of what makes us human are the choices that we make every day about what counts as right and wrong, and whether to give in to temptations that we believe to be wrong. In a completely monitored and controlled environment, those choices vanish. One cannot tell whether one’s behaviour is an expression of character or is merely compelled by immediate circumstance.’ per J Zittrain, ‘Tethered Appliances, Software as Service, and Perfect Enforcement’ (ch 6 this volume, at p 125).
45 DJ Smith, ‘Changing Situations and Changing People’ in A von Hirsch, D Garland and A Wakefield (eds), Ethical and Social Perspectives on Situational Crime Prevention (Oxford, Hart Publishing, 2000).
46 Ibid at 169.

and contemporaneous desire to avoid physical discomfort and property damage. Similarly, ticket barriers make fare evasion less attractive. While individuals may be attracted by the prospect of a ‘free ride’, this desire to avoid paying the fare is tempered by the desire to avoid the social disapproval of other passengers who witness the public demonstration of fare evasion entailed by jumping over the ticket barriers. In both cases, the scope for moral agency is entirely preserved: people have a genuine choice whether to pay the fare and proceed through the barrier or to jump over the barrier and proceed, or whether to slow down and proceed over the speed humps, or continue at speed and endure the consequences. But the social context in which moral agency is exercised has been reshaped, making it more difficult for individuals to ignore the moral consequences of violation.
Unlike the waist-high electronic ticket barriers installed on the London Underground, the floor-to-ceiling barriers in the Paris metro do not provide passengers with any choice: they cannot be physically circumvented, thus relying on the third kind of design-modality to bring about the desired end of eliminating fare evasion by overriding individual choice.47 They represent the kind of ‘techno-regulation’ that forms the focus of Brownsword’s objections. In a related vein, Lianos argues that the effect of such barriers is to strip people of their personal autonomy, destabilising moral principles founded on personal choice. Smith argues that a system that forces compliance tends to weaken self-controls, since these are no longer needed. This leads, Smith suggests, to the intriguing possibility that consistent enforcement (eg through automatic ticket systems) may have a moralising or alternatively demoralising effect, depending on exactly how it is achieved. A system which delivers a strong and consistent symbolic message (that fares must always be paid before crossing the barrier) may have the effect of creating or reinforcing norms, strengthening belief in them, and making it harder for people to disengage their self-controls from these norms. In contrast, a system which removes all personal choice may tend to weaken self-controls, for a variety of reasons. If people are denied any autonomy, then they perceive that the moral responsibility lies entirely with the system, so that they no longer retain any obligations themselves. In addition, there are bound to be occasions when the outcome is unjust (I pay the fare but, due to operational failure, the machine refuses to accept my ticket).
Unless there is a possibility of discussion and redress, people will perceive the system as a ‘mindless brute’, and have no shame in outwitting it whenever the opportunity arises.48

Yet even in circumstances where scope for individual choice is preserved, the potential moralising effect of such design-based instruments might come at too high a price. The unpleasant consequences which design-based instruments attach to behaviour deemed undesirable in order to promote behavioural change may be of such severity that they are of questionable legitimacy. Consider, for example, the ‘mosquito’, discussed by Bowling, Marks and Murphy in their contribution to this volume: a device designed to deter the presence of teenagers by sending out a high-pitched buzzing sound that only teenagers can hear, causing them considerable irritation and discomfort. While there may well be circumstances in which such intrusive technology might be justified, it is doubtful whether such a device could be legitimately utilised by the state merely for reasons of social convenience.

47 I am grateful to Adrien Lantieri for pointing out that there are various relatively simple ways in which it is possible to travel on the Paris underground without a valid ticket.
48 DJ Smith, above n 45, at 170.
Perhaps more appropriate design-based approaches for discouraging the unwanted but nonetheless lawful congregation of teenagers are techniques employed by one Australian local authority, which piped Barry Manilow’s ‘Greatest Hits’, music which local youths consider unfashionable, into public car parks.49 Likewise, managers of an Australian shopping mall reported remarkable success in driving away unwanted teenagers from its premises by repeatedly playing Bing Crosby’s 1938 song, ‘My Heart is Taking Lessons’.50 In other words, the legitimacy of such techniques may depend not only on preserving scope for individual choice, but also upon proportionality between the adverse consequence generated by the design and the regulatory purpose it seeks to promote.51

And what of design-based approaches that operate directly on the individual’s decision-making process, which seek to restrict opportunities for the exercise of individual judgment but without overriding it altogether? While speed humps, ticket-barriers and the broadcasting of unfashionable music seek to encourage behavioural change by altering the external conditions for action in order to alter the individual’s decision framework, the same behavioural change might be achieved by intervening directly in the individual’s internal decision-making processes. In particular, interventions directed at the individual’s neurological functions might be employed to promote the desired behavioural response. Thus, appetite-suppressants might be used to encourage overweight individuals to reduce their food intake, and advances in neuroscience might in future lead to the development of psychotropic drugs that enable a wide range of individual desires, thoughts and emotions to be dampened or enhanced. Should internal and external design-based approaches to encouraging behavioural change be regarded as equivalent?
So, for example, I might decide against eating a chocolate bar after watching a public advertisement warning of the health risks associated with obesity. But what of gastric-banding, or an appetite-suppressing drug that prevents hunger signals from reaching my brain so that I no longer feel hungry? In both cases, I retain the capacity to make decisions about my food intake. And in both cases, it could be said that my desire to eat has been diminished. But in the first case, the dampening of my desires has been achieved via engagement with my rational decision-making capacity, whilst in the second, my desires have been dampened directly via design, rather than through my rational engagement.

In his contribution to this volume, Brownsword queries whether, in a community of rights, such a regulatory strategy would be problematic. He imagines the introduction of a cocktail of smart drugs that makes it easier for individuals to empathise and sympathise with others and to overcome their immoral inclinations so that they do the right thing. Although we might intuitively prefer that moral action is unaided rather than artificially assisted, Brownsword questions whether this intuition is reliable. Yet he concedes that if such intervention makes it so easy for agents to do the right thing that they experience no resistance to doing that thing, then there is no element of overcoming, and there is a risk that agents lose the sense that they face a choice between right and wrong. But on this logic, how are regulators to identify the point at which design-based intervention of this kind is to be regarded as unacceptable?

49 ‘Manilow to drive out “hooligans”’, BBC News (5 June 2006), accessed 26 May 2008.
50 ‘Bing Keeps Troublemakers at Bay’, BBC News (8 July 1999).
51 K Yeung, Securing Compliance (Oxford, Hart Publishing, 2004).
And quite apart from inescapable problems of line-drawing, attempts to shape social behaviour through direct intervention in neurological functioning (whether by designer drugs or some of the cruder methods depicted in Aldous Huxley’s Brave New World, where babies are fertilised in state hatcheries and conditioned from birth in state-controlled child-rearing institutions) raise acute questions concerning their legitimacy. Elsewhere I have argued that the means that we use to achieve our social goals reflect value judgments about the appropriate relationship between means and ends.52 Accordingly, while the state’s use of waist-high ticket-barriers serves as a reminder to passengers of the moral impropriety of fare evasion, the use of smart drugs to dull my materialistic desires reflects a very different social understanding of individuals, and of their relationship to the state. The first strategy reflects an understanding of individuals as vulnerable beings who experience competing desires but are nonetheless reasoning moral agents, capable of distinguishing between right and wrong and therefore capable of and willing to accept responsibility for the consequences of their choices. It emphasises the relational dimension of collective life, one reliant upon social trust, in which individuals are treated as reasoning agents capable of acknowledging their responsibility for choosing to straddle the ticket barriers in the knowledge that such flagrant fare evasion entails the prospect of social disapproval from fellow passengers, even if no formal state punishment is exacted. Although the second strategy also recognises the vulnerability of individuals to competing desires and the temptation to pursue self-interest in situations which call for other-regarding action, it reflects a radically different understanding.
On this view, individuals are understood mechanistically, and technical ‘assistance’ may be imposed by the state on its citizens in order to raise the likelihood that they will respond in the manner deemed desirable. My concern is that, in assessing the legitimacy of design-based interventions for achieving social purposes, it is important to identify the value frames which we bring to bear on that assessment, particularly when design is targeted at or embedded in human subjects. While I share many of Brownsword’s concerns about the implications of design-based regulation for a moral community, I suspect that they raise even more wide-ranging ethical and social concerns. Our value frames reflect, in large measure, our individual and collective identity, of who we truly are, of what gives meaning to our individual and collective life. In other words, they reflect what it means to be authentically ourselves.53

52 K Yeung, ‘Assessing the Legitimacy of Regulatory Instruments: Can Means be Separated from Ends?’, Discussion Draft, unpublished, May 2007.

III. Authenticity and Design

Might the notion of authenticity assist in evaluating design-based instruments which directly intervene in the individual’s decision-making process? Although this paper does not provide the occasion for further elaboration, I doubt whether this notoriously slippery and elusive concept will be capable of providing straightforward and concrete guideposts. Yet it might nevertheless provide a useful compass, helping to orient us in making our way through the thicket of deep and difficult issues which these technological possibilities will force us to grapple with.
Its potential to provide a sense of direction can be illustrated by considering the two remaining design-based regulatory strategies that restrict opportunities for the exercise of individual judgment without overriding it altogether: those which seek to prevent harm-generating conduct by reducing opportunities for the conflict that often generates harmful behaviour, and those which seek to avoid the harmful outcomes of the targeted conduct by dampening its harmful impact.

A. Design by Conflict-Avoidance

Conflict-reduction strategies have been popular with advocates of situational crime prevention techniques for tackling criminal behaviour.54 They seek to prevent harm by reducing opportunities for the conflict and temptation from which harm often springs. For example, rowdy football fans from opposing teams may be channelled through different exit channels, so that they no longer encounter each other when leaving the stadium, thereby reducing the opportunity for violence to erupt between supporters from opposing teams. In his contribution to this volume, Brownsword analogises these strategies with parents who give each of their children a television or computer to avoid the conflict (and associated tantrums) that might otherwise ensue. He fears that a community which employs design to reduce opportunities for conflict, like parents who provide extra television sets to their children, deprives citizens of opportunities to learn how to share, to co-operate, and to pursue workable compromises, so that a community which keeps eliminating situations where other-regarding conduct is needed may find itself unable to cope when the need arises.

53 For an illuminating discussion, see C Taylor, The Ethics of Authenticity (Cambridge, MA, Harvard University Press, 1991).
54 See generally A von Hirsch, D Garland and A Wakefield (eds), Ethical and Social Perspectives on Situational Crime Prevention (Oxford, Hart Publishing, 2000).
It seems to me, however, that such a community would be no less authentic: the temptation for individuals to act in self- rather than other-regarding ways remains intact, even though the opportunities for temptation might be reduced. If, however, such strategies consume considerable resources, then we might ask whether other social goals have a more urgent claim on our limited resources. While parents who succumb to their children’s demands for toys and gadgets might be criticised for failing to teach their children how to share and to compromise, I am less inclined to see this as an appropriate role for a liberal democratic state. And so long as people choose to live in communities, I seriously doubt whether opportunities for conflict can be so readily eliminated. Conflict is more likely to be displaced than eliminated, so that there will invariably be numerous occasions for citizens to choose between self- and other-regarding action.

B. Design by Target Hardening

Rather than reduce opportunities for the conflict that typically leads to undesirable social consequences, regulators might seek to reduce or avoid those consequences through design-based modalities that reduce the adverse effects typically associated with the regulated behaviour (see category (b) above). Here design might be embedded in industrial processes (for example, a shift to technology that generates energy from solar, wind and wave power rather than by burning fossil fuels would reduce carbon-dioxide emissions arising from energy consumption), in products (such as air-bags installed in motor vehicles, intended to reduce the severity of injuries arising from motor vehicle accidents) or in living organisms (such as genetically modified seeds that generate food crops that can withstand and thrive in extreme environmental conditions), including human beings (such as a vaccine that immunises the body against a specified disease).
Because the range of possible design-targets is so large, it is questionable whether meaningful generalisations are possible concerning their legitimacy assessed in terms of their implications for non-instrumental values. Nonetheless, there are two issues of a general nature that warrant further reflection. Design-based approaches to regulation of this kind include techniques which criminologists sometimes describe as ‘target-hardening’. Instead of focusing on the behaviour of agents who throw stones at glass windows, why not deal with the problem by installing shatter-proof glass? In his earlier reflections, Brownsword indicated that such strategies would be no different from strategies that design out the possibility of harmful conduct, because deviants who know that they cannot inflict real harm on others are effectively deprived of the choice between right and wrong. In this volume, however, he suggests that there might be a valid distinction: at least in the case of design-out, agents have the freedom to deviate, aware that their actions contravene the preferred regulatory pattern of conduct. In reflecting upon the legitimacy of target-hardening strategies, it may be valuable to bear in mind the distinction between so-called ‘traditional crimes’ and ‘regulatory wrongs’. Although the distinction between these two kinds of wrongdoing is notoriously unstable, it may have considerable ethical significance. For lawyers, regulation is typically understood as the sustained and focused attempt by the state to alter behaviour generally thought to be of value to the community in order to ameliorate its unwanted adverse side-effects.
While one of the principal aims of the criminal law is to censure conduct considered to be morally reprehensible, the aim of regulation is primarily to modify that behaviour, rather than to punish or censure those engaging in it.55 Accordingly, if technological solutions can be found to diminish or eliminate the unintended but harmful consequences of socially valued activity, then (assuming that they do not consume excessive resources) they should be wholeheartedly embraced, except in circumstances where the design is embedded in or targeted at living organisms, where more caution may be warranted. In these circumstances, Brownsword’s worries that target-hardening may diminish our sense of the wrongfulness of conduct do not apply, at least not with the same force. Design-based strategies for eliminating the harmful by-products of socially valued activities, or harms caused by bad social luck, leave intact the conditions which Brownsword claims are essential for a moral community to flourish.

Outside that context, however, Brownsword’s worries have considerable theoretical bite, but whether they are borne out in social practice is likely to be highly context-sensitive.
While target-hardening strategies might result in a loss of sensitivity to the wrongfulness of certain conduct and erode individual self-restraint in some circumstances, in others the reduction in criminal opportunities might make people less inclined to avail themselves of such opportunities as do arise, because they lack the knowledge, skills or social contacts to execute the offence successfully.56 I also worry that such strategies risk inappropriately shifting moral and social responsibility for harmful criminal acts from the agents who commit them to the state, which might be accused of failing to provide effective target-hardening solutions, or, even more problematically, to victims themselves, who fail adequately to protect themselves from criminal harm.57

Secondly, where design-based strategies for reducing or eliminating the harmful impact of socially valued behaviour are targeted at living organisms, rather than at products, processes, places or spaces, then we would do well to act with humility and caution. This is not to say that such technology is necessarily unwelcome. Indeed, the development of vaccines and the implementation of community-wide immunisation programmes to prevent and reduce disease fall squarely within this category of interventions. But because living organisms are self-reproducing and often highly complex, the long-term consequences of design-based intervention can be very difficult to predict, heightening the risk of design failure.

55 K Yeung, Securing Compliance (Oxford, Hart Publishing, 2004) 78–85.
56 DJ Smith, above n 45, at 160.
57 R Duff and S Marshall, ‘Benefits, Burdens and Responsibilities: Some Ethical Dimensions of Situational Crime Prevention’ in von Hirsch, Garland and Wakefield (eds), above n 54, at 17.
We also need to be mindful of how and why we understand and value individual life-forms, their relationship to each other, and the nature and value of humanity itself within the broader eco-system that we inhabit. For transhumanists, the prospect of utilising technology, not merely to overcome the limitations of the natural world which we inhabit, but also to overcome our inherent human limitations, is a cause for celebration. For others, the prospect of technological advancement towards a ‘posthuman’ state is abhorrent and should be steadfastly and vigorously resisted. My point is not to enter into this debate, but merely to identify those areas where design-based approaches, however effective they might be in achieving their policy goals, might raise deep and difficult questions concerning our individual and collective identity.
In navigating this fraught territory, the notion of authenticity might help to focus our assessment of whether technological solutions to collective problems may erode who we truly are and our sense of ourselves, and whether our collective actions help to propel us towards, or away from, who we want to be. But here we encounter highly contested understandings of what it means to be authentic. It connotes a faithfulness to some ‘essential’ nature or way of being (‘telos’). But what is that elusive essential quality, and does it exist at all? Here we encounter lively and deeply felt contestation about what it means to be ‘truly human’, about what constitutes the essence of our human capacities and why we value them.58 And even if there is a widely-shared commitment to authenticity within a community of rights which might help to illuminate our path, there will no doubt be considerable disagreement over what authenticity entails, and large questions remain over whether, and in what circumstances, a commitment to authenticity should trump other individual and collective values.
IV.
Conclusion
This paper has sought to clarify and deepen our understanding of design-based approaches for achieving collective goals. I began by developing two ways in which design-based approaches to regulating might be classified: first, by reference to the subject in which the design is embedded (places and spaces; products and processes; and biological organisms) and secondly, by reference to the design mechanism or ‘modality of design’. I suggested that three general design modalities could be identified, based on the way in which design is intended to achieve its desired end: by encouraging behavioural change, by altering the conditions of existing behaviour to alter its impact, or by seeking to prevent the outcome deemed undesirable. These design-modalities vary in their effectiveness. Those which operate by promoting behavioural change, or seek to change the impact of existing behaviour, are more vulnerable to failure than those which do not. And those which override individual behaviour are the most effective of all.
The assurance of effectiveness that accompanies design-based instruments which seek to prevent social outcomes deemed undesirable by overriding individual behaviour may appear, at first blush, to offer considerable advantages over their traditional counterparts. But I have sought to demonstrate why this assurance of effective policy outcomes is likely to be illusory. Not only is the risk of operational failure unavoidable, but the task of designing standards that will accurately and precisely hit the regulator’s desired target is likely to prove exceedingly difficult. A rich body of scholarship concerning the theory and practice of ‘traditional’ rule-based regulation bears witness to the impossibility of designing regulatory standards in the form of legal rules that will hit their target with perfect accuracy.
58 See F Fukuyama, Our Posthuman Future (London, Profile Books, 2002).
The obstacles lying in the path of accurate standard-setting cannot be avoided simply by embedding those standards into design-based instruments rather than in legal rules. Although the prospect of self-enforcement which design-based instruments offer may be attractive to regulators, socio-legal scholars have amply demonstrated the vital role which enforcement officials often play in resolving problems arising from the indeterminacy of rules, ensuring that they are applied in a manner which will promote their underlying policy goal, and mitigating unfairness in individual cases. Because rules rely on human agency for their operation, they may be vulnerable to avoidance through formalistic interpretations by regulatees or lax enforcement by regulators. But it is also the scope for human agency that provides the source of their ingenuity and flexibility, breathing life into their apparently simple frame. Thus, in the context of traditional rule-based regulation, rule failure can be overcome through the application of human communication, reason and understanding, enabling rules to be interpreted, applied and enforced in ways that can accommodate changing and unforeseen circumstances. Although insensitivity to human agency provides the basis for guaranteeing the effectiveness of design-based instruments which override human agency, it is this rigidity and consequent lack of responsiveness that will generate injustice when unforeseen circumstances arise.
Not only do design-based instruments for implementing regulatory policy goals offer varying levels of effectiveness, but they also vary in the extent to which they implicate a range of non-instrumental concerns. Several commentators have already warned that design-based instruments may erode constitutional values of transparency, accountability and democratic participation. The legitimacy of such instruments has also been questioned on the basis of their ‘de-moralising’ consequences.
I have suggested, however, that in order to evaluate the extent to which such fears apply, we need to attend to differences in the design-modality adopted and the social context and practice surrounding their application. So, for example, design instruments which seek to promote behavioural change by attaching unpleasant consequences to behaviour deemed undesirable might reinforce rather than undermine moral norms. Yet the legitimacy of such approaches will also depend on a proportionate relationship between the unpleasantness of the consequence administered and the social goal sought to be achieved. In contrast, design-based instruments which seek to promote social outcomes deemed desirable by reducing or eliminating the harm associated with behaviour otherwise deemed socially desirable are unlikely to engage these concerns, although evaluating their legitimacy will nevertheless depend upon issues of cost and the scarcity of a community’s resources relative to other social priorities. Finally, I have advocated the need for caution and humility when seeking to promote collective goals by intervening in the design of biological organisms, for the use of such technologies opens up deep and difficult questions concerning our individual and collective identity. I have suggested that the notion of authenticity might help to orient our reflections upon the implications of such approaches for our moral autonomy, social relations and our collective life. Yet authenticity is a highly contested and elusive notion, and the considerable disagreement that arises concerning what it means to be ‘truly human’, what constitutes the essence of our human capacities and why we value them, means that it is unlikely to provide clear guideposts.
Not only does technological advancement present us, as individuals, with fundamental questions about our individual identity, but the potential for employing technology as a means for achieving collective ends throws up fundamental questions about how we understand and nurture our collective identity.
5
Internet Filtering: Rhetoric, Legitimacy, Accountability and Responsibility
TJ MCINTYRE AND COLIN SCOTT
I do intend to carry out a clear exploring exercise with the private sector … on how it is possible to use technology to prevent people from using or searching dangerous words like bomb, kill, genocide or terrorism.
EU Justice and Security Commissioner Franco Frattini, 10 September 20071
I. Introduction
In the Internet context, filtering and blocking refer to technologies which provide an automatic means of preventing access to or restricting distribution of particular information. There is, of course, nothing new about seeking to control access to media and other resources. Governments have long had lists of banned books, sought to control access to newspapers, or sought cuts to films prior to their general exhibition. But we argue that qualitative differences between contemporary Internet content filtering practices and traditional censorship raise new problems of regulatory accountability and legitimacy.
Consider the following recent examples. Many states, such as Saudi Arabia and China, have deployed filtering at a national level to censor political or pornographic material, in effect creating ‘borders in cyberspace’.2 Google’s Chinese language site, at the behest of the Chinese Government, has introduced censorship of searches such as ‘Tiananmen Square’.3 The UK’s dominant incumbent
1 I Melander, ‘Web Search for Bomb Recipes Should be Blocked: EU’ Reuters (10 September 2007), available at accessed 26 May 2008.
telecommunications operator, British Telecom (in consultation with the Home Office), has put in place a ‘Cleanfeed’ system which automatically blocks customer requests for websites alleged to be hosting child pornography,4 and the Government has indicated its intention to ensure that all UK Internet service providers (ISPs) should adopt a similar system, whether by voluntary cooperation or otherwise.5 In Belgium, the courts have ordered an ISP to implement technical measures to prevent user access to file-sharing websites and to stop users from distributing certain music files.6 In Canada the ISP Telus blocked its subscribers from seeing a website supporting a strike by its employees, inadvertently blocking many unrelated sites also.7 Meanwhile, throughout the world, ISPs and users deploy spam filters and sender blacklists with varying degrees of success.8
2 N Villeneuve, ‘The Filtering Matrix: Integrated Mechanisms of Information Control and the Demarcation of Borders in Cyberspace’ (2006) 11(1) First Monday, available at accessed 26 May 2008.
3 H Bray, ‘Google China Censorship Fuels Calls for US Boycott’ The Boston Globe (28 January 2006), available at accessed 26 May 2008.
These examples differ greatly from each other. But in each case the blocking shares some common features. First, it is automatic and self-enforcing in its nature. Once the technology is developed and deployed, no further human intervention is required, unless and until users find ways to circumvent the intended controls. Secondly, it is often opaque. Some filtering mechanisms may be transparent to the affected user, as with some email filtering systems which send users summaries of email which has been blocked as spam. But in many cases filtering is, of necessity, opaque in at least some dimensions, as a condition of its effectiveness. Thirdly, filtering generally involves intermediaries. Again, this is not always the case.
A user may run a spam filter locally on their own machine. But since much filtering involves denying the end user access to certain material, it is more common for filtering to be directed to other Internet points of control.9
These three features are not unique to filtering. Lessig has pointed out the automatic and often opaque nature of code as a modality of regulation,10 while theorists such as Boyle11 and Swire12 have noted that the decentralised and international nature of the Internet will encourage regulators to focus on indirect enforcement, targeting intermediaries rather than end users, ‘elephants’ rather than ‘mice’.
4 M Bright, ‘BT Puts Block on Child Porn Sites’ The Observer (6 June 2004), available at accessed 26 May 2008. See also P Hunter, ‘BT Siteblock’ (2004) 9 Computer Fraud and Security 4.
5 W Grossman, ‘The Great Firewall of Britain’ net.wars (24 November 2006), quoting Vernon Coaker, Parliamentary Under-Secretary for the Home Department, to Parliament. Available at accessed 26 May 2008.
6 Sabam v Scarlet, Decision of the Court of First Instance in Brussels of 29 June 2007, discussed in OUT-LAW News (6 July 2007), available at accessed 26 May 2008.
7 CBC News (24 July 2005), available at accessed 26 May 2008.
8 See, eg, L Lessig, ‘The Spam Wars’ The Industry Standard (31 December 1998), available at accessed 26 May 2008.
9 See, eg, J Zittrain, ‘Internet Points of Control’ (2003) 43 Boston College Law Review 1, discussing how and why regulators target ISPs rather than users.
10 L Lessig, Code and Other Laws of Cyberspace, 2nd edn (Cambridge, MA, Basic Books, 2006).
11 J Boyle, ‘Foucault in Cyberspace: Surveillance, Sovereignty, and Hardwired Censors’ (1997) 66 University of Cincinnati Law Review 177.
12 P Swire, ‘Of Elephants, Mice, and Privacy: International Choice of Law and the Internet’ (August 1998), available at SSRN: accessed 26 May 2008.
But we will suggest that in the particular context of filtering they interact to raise some important issues.
By way of introduction we will examine the rhetoric underlying the use of the term ‘filtering’. We suggest that this term, convenient though it is as shorthand for this technology, is loaded, and that it may be preferable to talk in more neutral terms of ‘blocking’ or even of ‘censorware’. We will then explore where filtering fits into our modalities of governance and the resulting issues of legitimacy and accountability. As regards legitimacy, we argue in particular that the use of technology to exert control over Internet users frequently challenges tenets associated with the rule of law concerning both the process for and content of norms governing behaviour. These challenges emerge, in particular, where technology is linked to compliance with voluntary codes or soft law instruments by non-state actors. Whilst it may be suggested that the voluntary character of compliance with such instruments reduces or removes the requirements suggested by rule of law concerns, the consequences of compliance will often accrue to third parties who do not experience compliance as voluntary, and in situations where many of the elements of the regime of control are determined by non-state actors outside of the normal public policy process.
Following on from that, we will argue that the combination of automatic enforcement, opaque systems and rules directed at intermediaries may leave affected users unaware that their behaviour is being controlled, so that the opaque nature of filtering may result in a loss of accountability. Where, as is often the case, it is not clear what is being blocked, why, or by whom, the operation of mechanisms of accountability—whether by way of judicial review, media scrutiny, or otherwise—is greatly reduced.
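The way in which opacity defeats accountability can be illustrated with a minimal sketch of our own (the URLs, responses and data structures are purely hypothetical, not a description of any deployed system): a filter that answers blocked requests with a generic error leaves the user, and the site owner, unable to distinguish blocking from ordinary failure.

```python
# A toy model of opaque blocking: a blocked request receives the same
# generic error that a genuinely missing page would, so nothing in the
# response reveals that filtering is in operation.
# URLs and responses are hypothetical, for illustration only.
BLOCKLIST = {"http://blocked.example.com/page"}

def fetch(url: str) -> str:
    """Return a status line for a request passing through the filter."""
    if url in BLOCKLIST:
        return "404 Not Found"   # indistinguishable from a missing page
    return "200 OK"

print(fetch("http://blocked.example.com/page"))  # 404 Not Found
print(fetch("http://normal.example.org/"))       # 200 OK
```

On this model, no mechanism of accountability is triggered because no one observing the responses can tell that a decision to block was ever taken.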
Finally, we will consider the argument that, as compared with control through legal instruments, filtering may rob users of moral agency or responsibility in their use of the Internet, with the implication that they may freely do whatever it is technically possible to do, with no necessity of moral engagement in their activities. If such consequences were to follow through into wider patterns of social interaction, the consequences for responsibility, and for social ordering generally, of such low-trust mechanisms of control might be troubling.
We do not reject the use of filtering in the Internet context. Without filtering our email inboxes would rapidly become unusable. It is through the technology of filtering rather than legal controls that spam has, to a greater or lesser extent, been effectively tackled. The development of commercial websites which accredit and testify to the relevance of material, or the trustworthiness of others, has given many firms great success and is clearly meeting a demand.13 The efficiency which is promised is seductive. However, we do suggest that the legitimacy of filtering in any particular context requires close examination by reference to issues of transparency, responsibility and accountability in respect of the devising and administering of controls, the purposes for which such controls are deployed, and the consent (or absence of consent) of those whose behaviour is controlled as a result.
13 Y Benkler, The Wealth of Networks: How Social Production Transforms Markets and Freedom (New Haven, 2006) 12, 75. Blogs, online bookstores, journals, online encyclopaedias, and buying/selling intermediaries such as eBay each engage in different forms of filtering.
II.
Rhetoric
The term ‘filtering’ is widely used—even by critics—as shorthand for bundles of practices through which technology is used to exert control over users of the Internet.14 Other terms for filtering—such as the British Telecom ‘Cleanfeed’ project—have also sought to capture the rhetorical allure of cleanliness and purity. Others, however, have challenged this terminology. The term ‘filtering’, it has been argued, implies an element of choice on the part of the affected user, with ‘censorware’ being a more appropriate term for blocking which is beyond user control.15 The term carries an illusion of precision:
The word ‘filter’ is much too kind to these programs. It conjures up inaccurate, gee-whiz images of sophisticated, discerning choice … When these products are examined in detail, they usually turn out to be the crudest of blacklists, long tables of hapless material which has run afoul of a stupid computer program or person, perhaps offended by the word ‘breast’ (as in possibly ‘breast cancer’).16
We agree that the metaphorical deployment of the term filtering is loaded with meanings which imply virtue, and thereby resists challenge through rhetoric. In particular (by analogy with the filtering of drinking water) the term may reinforce a view of the Internet as something that is piped into one’s home where it is passively consumed. This view—building on the pervasiveness doctrine in broadcasting law—has already been deployed17 to justify greater regulation of the Internet, often coupled with an explicit comparison of objectionable material
14 Eg, Y Akdeniz, ‘Who Watches the Watchmen?
The role of filtering software in Internet content regulation’ in C Möller and A Amouroux (eds), The Media Freedom Internet Cookbook (Vienna, 2004); B Esler, ‘Filtering, Blocking and Rating: Chaperones or Censorship?’ in M Klang and A Murray (eds), Human Rights in the Digital Age (London, Glasshouse Books, 2005); RP Wagner, ‘Filters and the First Amendment’ (1999) 83 Minnesota Law Review 755.
15 Oral testimony before the Library of Congress Copyright Office Hearing on anti-circumvention mechanisms under the Digital Millennium Copyright Act, 11 April 2003. Transcript available at accessed 26 May 2008.
16 Congressional evidence of S Finkelstein, quoted in B Miner, ‘Internet Filtering: Beware the Cyber Censors’ 12(4) Rethinking Schools Online (Summer 1998), available at accessed 26 May 2008.
17 JD Wallace, ‘The Specter of Pervasiveness: Pacifica, New Media, and Freedom of Speech’ CATO Briefing Paper 35 (12 February 1998), available at accessed 26 May 2008.
on the Internet to sewage in the domestic water supply.18 The more interactive character of Web 2.0 technologies, such as social networking sites, removes them further from a parallel with broadcasting.
In addition, to say that we are filtering something implies that we are treating that something as an undifferentiated mass, and, as noted by Finkelstein, that we are doing so in a relatively straightforward and scientific way. This may reflect a popular conception of the Internet as a single entity, but it is at odds with the reality of the Internet as a network of networks—an architecture which links together a disparate collection of protocols, applications, sites, and users.
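The crudeness of blacklist-style ‘filtering’ criticised by Finkelstein can be made concrete in a few lines of our own devising (the wordlist and sample pages are hypothetical, and real products are more elaborate): a bare substring match has no sense of context, so it cannot distinguish a breast cancer information page from the material the list was aimed at.

```python
# A toy keyword blocklist of the kind Finkelstein criticises: a bare
# substring match, blind to context, over-blocks innocuous pages.
# The wordlist and sample pages are hypothetical.
BLOCKED_WORDS = ["breast"]

def is_blocked(page_text: str) -> bool:
    """Block any page containing a listed word, regardless of context."""
    text = page_text.lower()
    return any(word in text for word in BLOCKED_WORDS)

print(is_blocked("Breast cancer screening information"))  # True: over-blocked
print(is_blocked("Community gardening tips"))             # False
```

Nothing in such a mechanism resembles the ‘sophisticated, discerning choice’ that the filtering metaphor suggests.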
If we wish to block certain content online then we may do so in a variety of different ways, in a number of different locations, and to a number of different users—for example at national boundaries or at the organisational level, on the server side or the user side, over all protocols or merely HTTP.19 The loose use of the term Internet filtering tends to undermine this diversity and may suggest that a one-size-fits-all solution is appropriate.
Of course, alternative terms could equally be objected to. For example, to frame the discussion as one about ‘automated censorship’ or ‘censorware’ might draw the riposte that many aspects of the practice are distinct from censorship as it is traditionally practised. It might also be said that the term ‘blocking’ doesn’t adequately convey the precision and selectivity which technology may make possible. Nonetheless, we would suggest that the term be used with caution.20
III. Implications of Filtering as a Method of Governance: Legitimacy and Accountability
Control of Internet use through filtering is part of a broader pattern of governance in which technology forms only a part. The technologies and practices
18 A comparison notably made by the United States Department of Justice in its opening statement in ACLU v Reno, 23 October 1996, transcript available at accessed 26 May 2008: ‘If a water source was mixed with a sewer system, and you had a filter that screened out but 6.6 percent of it, would that be a solution to the problem? Would that cure the problem of the drinking water?’ Similarly (though speaking of viruses and other malware rather than pornography) technology site ZDNet recently editorialised: ‘But when we attach a PC to the Internet, we might as well be wading through open sewers.
Currently, many ISPs are allowing Internet traffic to flow through their systems completely unfiltered, which is akin to a water authority pumping out raw sewage to its customers to clean for themselves.’ ‘Time to filter out the Internet effluent’, ZDNet (18 August 2004), available at accessed 26 May 2008.
19 R Deibert and N Villeneuve, ‘Firewalls and Power: An Overview of Global State Censorship of the Internet’ in M Klang and A Murray (eds), Human Rights in the Digital Age (London, Glasshouse Books, 2005) 114.
20 We should also be conscious that the term ‘blocking’ can be used in a more technical manner to refer to means of denying access to particular IP addresses or services on particular port numbers. See Deibert and Villeneuve, previous n, at 112.
associated with filtering, and the associated fragmentation in both the actors and modalities engaged in the control of social and economic practices, provide a critical case of the difficulties of adapting traditional narratives of legitimacy and accountability to contemporary governance. In this section we first address the nature of governance practices associated with filtering and then address some of the normative implications.
Lawrence Lessig’s celebrated claim ‘code is law’21 dramatically highlighted the potential of software architecture to substitute for law in the control of behaviour. Elaborating on Lessig’s four-way analysis, we recognise hierarchy (or law in Lessig’s terms), competition (or markets), community (or norms) and design (or architecture) as four basic modalities of governance (or control).22 Working with these four modalities of governance, it appears mistaken to think of architecture as displacing other modalities. Design has long had a key role in controlling behaviour not separate from, but allied to, other modalities of governance, in particular the hierarchical exercise of legal power.
Famously, Jeremy Bentham’s Panopticon, a design for a prison in which a small number of guards are able to keep an eye on all the prison corridors from a central tower,23 is dependent for its success on the exercise of legal authority to detain prisoners and apply discipline to those who are observed breaching prison rules. Thus surveillance was used to support the exercise of legal power.24 More recent work on crime control has emphasised the role of architecture and design in inhibiting criminal conduct, but again against a background of legal enforcement.25
There is potential also for linking control through design to the other governance modalities. Thus competition and design may operate together in the voluntary provision by car manufacturers of control mechanisms which enhance safety, such as inhibitors to prevent driving while under the influence of alcohol. Physical controls over the use of space in parks or bars, which inhibit certain forms of behaviour, may be used to give expression to community norms rather than legal rules. With filtering we can readily see that the technology may be linked to legal authority, as where ISPs are directed to block access to certain websites. Filtering may also be an aspect of market-based control, for example where businesses market filtering software for email and the test of the take-up and success of the product lies not with compliance with legal rules, but rather with the extent of sales in the market. A third possibility is that filtering is part of community-based systems of control, for example where norms governing website access are
21 Lessig, Code and Other Laws of Cyberspace, above n 10, at 6.
22 A Murray and C Scott, ‘Controlling the New Media: Hybrid Responses to New Forms of Power’ (2002) 65 MLR 491.
23 J Bentham, Panopticon or the Inspection House (Dublin, 1791).
reflected in shared software for inhibiting access to blacklisted sites. Frequently two or more modalities may be in play.26
Observing these governance modalities in play with filtering raises some important normative issues, particularly concerning the legitimacy of certain aspects of governance. In its most general sense, legitimacy refers to that general acceptance of governance arrangements which sustains the capacity for governance, even through times when the content of what is being done may be controversial. Internet governance raises problems because of the manner in which a traditional understanding of the separation between the roles of governments, markets and communities is challenged by practices such as those deployed in respect of Internet filtering. This challenge is reflected in anxieties that standard tenets of public accountability for governance decisions and compliance with values of the rule of law may be undermined.
Both accountability and rule of law aspects are reflected in the extent to which the implementation of filtering, though it may be mandated or permitted in public legislation, moves us away from public actors and legal rules for its implementation. Thus where a government legislates to prohibit access to certain books, there is a public legislative process involving elected representatives in the making of rules for enforcement by public officials.
24 M Foucault, Discipline and Punish: The Birth of the Prison (Harmondsworth, 1977); J Scott, Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed (New Haven, 1998).
25 Eg, O Newman, Defensible Space: Crime Prevention through Urban Design (New York, 1972).
Much of the policy making and implementation in respect of Internet filtering occurs through other mechanisms involving different actors, with risks to both the transparency and accountability dimensions which, through conceptions of the rule of law, underpin legitimacy in governance.
Automatic Enforcement
A key feature (some would call it a virtue) of technological control is that it is applied automatically, without human intervention. There is no scope for argument, no exercise of discretion, and (depending on the code) all users are treated alike. At first glance this may seem to be virtuous from a rule of law perspective, since it reduces the scope for discretionary or biased enforcement, and thus for users to be treated differently without legitimate cause. But there are some troubling aspects of automatic enforcement.
First, by ruling out any element of discretion we may end up with an all or nothing approach to governance which may not comply with principles of proportionality—we may, for example, see an entire website or domain blocked due to offending material on a single page. Villeneuve has pointed out that this form of over-blocking is common27—filtering systems tend not to be sufficiently granular to restrict themselves to the targeted material.
26 The analysis of multi-modal governance, in relation to the Internet and other social and economic activities, begs the question whether design is a modality of control at all. See C Scott, ‘Spontaneous Accountability’ in M Dowdle (ed), Public Accountability: Designs, Dilemmas and Experiences (Cambridge, Cambridge University Press, 2006). The particular quarrel with design as a modality of control is that, in contrast with the other three modalities, it has no obvious ‘accountability template’ mirroring its control function.
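The granularity problem can be sketched with a toy model of IP-level blocking on shared hosting, where one address serves many unrelated sites (the hostnames and addresses here are hypothetical, chosen only to illustrate the mechanism):

```python
# Sketch of why IP-address filtering over-blocks: on shared hosting many
# unrelated sites resolve to one IP address, so a block imposed at the
# IP level catches all of them, not just the intended target.
# Hostnames and addresses are hypothetical (RFC 5737 documentation ranges).
HOSTED_SITES = {
    "offending.example.com": "203.0.113.7",
    "charity.example.org":   "203.0.113.7",   # same shared server
    "news.example.net":      "198.51.100.25",
}
BLOCKED_IPS = {"203.0.113.7"}  # the block aims only at the first site

def is_reachable(hostname: str) -> bool:
    """A site is reachable unless its server's IP address is blocked."""
    return HOSTED_SITES[hostname] not in BLOCKED_IPS

print(is_reachable("offending.example.com"))  # False: the intended target
print(is_reachable("charity.example.org"))    # False: collateral over-blocking
print(is_reachable("news.example.net"))       # True
```

The only way to spare the second site under this scheme would be to filter at a finer granularity, such as by URL, which is more costly to implement.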
Indeed, even sites which have no affiliation with the offending material may find themselves blocked if the common but crude approach of IP address filtering is used.
While in some cases over-blocking may result from clumsy or lazy technical implementations, there is a deeper problem which may not lend itself to a technical solution. Software is a very efficient mechanism for implementing rules, but not so good when it comes to standards.28 This presents a particular problem in relation to the filtering of material alleged to be distributed in breach of copyright.29 Here, filtering software may be very efficient at identifying whether excerpts of copyright material are being used—but will fall down when faced with the standards-based assessment of whether that amounts to a ‘fair use’ of or ‘fair dealing’ with the copyright work. The result may be to upset the balance struck by copyright law, resulting in hyper-efficient enforcement of copyright claims but systematic neglect of the situations where the law recognises that unauthorised use of copyright material is socially desirable. Whilst blocking may be automatic, ‘the process by which [users] protest their innocence and get the right to communicate back will be slow, bureaucratic, and manual.’30
Consider, for example, the way in which students used the Internet to reveal serious security flaws in electronic voting machines produced by Diebold Electronics.31 A particularly important aspect of that campaign was the use of internal emails of Diebold which had been leaked. Unsurprisingly, Diebold claimed copyright in the emails and threatened sites hosting them with legal action unless they were removed. The response on behalf of the students was what they described as ‘electronic civil disobedience’—disseminating the emails widely throughout the Internet while simultaneously seeking a judicial declaration that this use was privileged.
They were ultimately successful in the latter endeavour—the court accepted that portions of the email archives which were used to inform the public about concerns as to the legitimacy of elections were clearly subject to the fair use exception under US law.32 However, had a filtering system been in place restricting the distribution of that material, discussion of an important matter of public concern would have been silenced in the meantime—notwithstanding the formal position which US law takes against prior restraints.33
Secondly, the automatic nature of software may eliminate the feedback mechanisms normally associated with good governance.
27 Villeneuve, above n 2.
28 J Grimmelman, ‘Regulation by Software’ (2005) 114 Yale Law Journal 1719.
29 This issue has become topical since Google in October 2007 introduced a copyright filtering system for its video sharing site YouTube. See L Rosencrance, ‘Google Launches Video Blocking Tool for YouTube’ Computerworld (16 October 2007), available at accessed 26 May 2008.
30 C Doctorow, ‘French Law Proposal Will Force ISPs to Spy on Users and Terminate Downloaders Without Trial’ Boing Boing (25 November 2007), available at accessed 26 May 2008.
31 See, eg, Y Benkler, The Wealth of Networks: How Social Production Transforms Markets and Freedom (New Haven, 2006) at 225 ff.
32 Online Policy Group v Diebold 337 F Supp 2d 1195 (2004).
Whereas the hierarchical mechanisms of legislative implementation contain mechanisms for registering concerns about what is or is not fair or effective, and similar feedback loops arise within both market and community governance processes, the automaticity of design-based controls associated with software negates the existence of a feedback loop.34 The feedback loop associated with rule making and enforcement is illustrated by the 1960 prosecution of Penguin Books for publishing Lady Chatterley's Lover—a trial which, by drawing public attention to the book, not only made it a best seller but also resulted in a substantial relaxation of the test of obscenity in English law.35 Filtering systems, by doing away with a public enforcement process, may inhibit this evolution of norms.
We have noted the automatic nature of software-based implementation of filtering norms. A key contrast between such automatic enforcement, on the one hand, and bureaucratic enforcement of legal rules on the other, is that bureaucratic enforcers can cater for harsh or unintended effects through the exercise of discretion. Indeed, much research on regulatory enforcement suggests that the discretion not to enforce is routinely applied by most enforcement agencies for a variety of reasons, and that formal enforcement occurs only in a minority of cases, frequently linked to perceptions of motivation or persistence in respect of breaches.36 The observation of discretion in rule enforcement enables rule makers to set rules which might be considered harsh if systematically applied, but where the degree of stringency in the rule itself is considered necessary to address the objectives behind the rule. Automaticity within enforcement of filtering norms has no mechanism to deal with the problem of excess stringency and the related problem of over-inclusiveness in application of the norm.
This is a problem not only for the achievement of objectives, but also for the legitimacy of norms which, in their application, inhibit conduct beyond what was intended.
Opaque Nature of Filtering
Traditional forms of censorship generally require that if items—such as particular books, videos or periodicals—are prohibited, then a list of those items must be made publicly available. After all, without such a list how is the citizen to know that they are breaking the law by importing or possessing such an item? In addition, traditional censorship mechanisms will generally give persons affected by the designation of items an opportunity to be heard prior to designation or to challenge a designation.37 Also, in traditional censorship mechanisms we expect the publication of criteria which will be applied in determining whether particular material is objectionable. These factors can be lacking in the case of filtering.
At one level, the end-user may not be made aware that filtering is in operation,38 or that access to a particular site has been blocked by filtering. Nor will the site owner necessarily be aware unless they spot and can diagnose a fall-off in traffic.
118 TJ McIntyre and Colin Scott
33 See, eg, New York Times Co v United States 403 US 713 (1971).
34 J Grimmelman, 'Regulation by Software' (2005) 114 Yale Law Journal 1719; L Tien, 'Architectural Regulation and the Evolution of Social Norms' (2003–2004) Yale Journal of Law and Technology 1.
35 See, eg, CH Rolph, The Trial of Lady Chatterley: Regina v Penguin Books (London, 1961) for an (edited) transcript of the trial and explanation of the context in which it took place.
36 P Grabosky and J Braithwaite, Of Manners Gentle: Enforcement Strategies of Australian Business Regulatory Agencies (Melbourne, Oxford University Press, 1986).
In some states websites deemed unacceptable by governments (for example those of opposition political groupings, media and human rights organisations) are routinely blocked, with feedback to the user suggesting that the website is not available ('file not found') or that access has been inhibited by some technical problem (eg 'connection timeout').39 The more transparent and accurate message, 'access blocked by government order', is less commonly given. The use of error pages has been described as 'an attempt to deflect criticism, allowing the authorities to claim that they are not censoring Internet content'.40
Alternatively, the end user may be actively misled—Uzbekistan, for example, informs users that sites banned for political reasons are blocked for supposed pornographic content.41 This appears to neatly combine two layers of deception—simultaneously justifying the block and smearing political opposition. It has been observed that governments '[u]nable to justify the reason for blocking political content … choose to obscure or deny the fact that such content is in fact targeted'.42
Even if a user is aware of the fact of filtering, they may not know who is responsible for it: it may be any entity upstream of the user.43 We may not know, for example, whether it is the Chinese government blocking material, or some commercial entity which finds it expedient to cooperate.
There are also commercial imperatives at work. Manufacturers of filtering software guard their lists of blocked sites, seeing them as trade secrets. Those lists are generally encrypted, and the manufacturers have sued or threatened to sue those who would make them public.44
37 The process in Irish law in respect of film and video is described in K Rockett, Irish Film Censorship: A Cultural Journey from Silent Cinema to Internet Pornography (Dublin, 2004).
Consequently the lists may not be subject
to independent scrutiny or analysis. Villeneuve illustrates this with an interesting example:
Saudi Arabia was condemned by human rights organisations for blocking access to non-pornographic gay and lesbian sites. After learning about the blocked sites, the Saudi authorities promptly removed the blocking. Saudi Arabia never intended to block access to those sites. These sites were likely misclassified by the commercial filtering product, SmartFilter, that Saudi Arabia implemented at the national level. In effect, US corporations are in a position to determine what millions of citizens can and cannot view on the Internet. Even the countries implementing filtering products do not know for certain what is in fact being blocked.45
Indeed, in numerous cases, manufacturers have taken advantage of this fact to blacklist and thereby silence their critics.46
At least in some situations, it may be the case that transparency would destroy the effectiveness of filtering. For example, there is understandable concern that revealing the list of blocked child pornography sites censored by British Telecom's Cleanfeed system would simply advertise them further. The filtering of spam has also been marked by a battle of wits between spammers and filters—and some spam filters therefore keep their internal workings secret for fear that their effectiveness would be lost if spammers could tailor their offerings to circumvent the filters. This may be a general problem with any filters which engage in content analysis.
38 A point made by L Lessig, above n 10, where he refers to 'truth in blocking' as a desirable characteristic.
39 Deibert and Villeneuve, above n 19, at 119.
40 N Villeneuve, above n 2.
41 Ibid.
42 Ibid.
43 L Lessig, above n 10, at 257.
44 For an example see B Fitzgerald, 'Note: Edelman v. N2H2—At the Crossroads of Copyright and Filtering Technology' (2004) 69 Brooklyn Law Review 1471.
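This 'battle of wits' can be seen in miniature with a deliberately naive sketch. The keyword list, weights and threshold below are invented for illustration and do not reflect any real product's rules; the point is simply why operators treat such internals as secrets: a sender who knows the rules can rewrite a message to evade them while keeping its meaning.

```python
# Deliberately naive content-analysis filter (illustrative only):
# scores a message against weighted keywords and blocks it when the
# score reaches a threshold. Keywords, weights and the threshold are
# invented for this example.
RULES = {"winner": 2, "free": 1, "prize": 2}
THRESHOLD = 3

def is_blocked(message: str) -> bool:
    """Return True if the message's keyword score reaches the threshold."""
    score = sum(RULES.get(word, 0) for word in message.lower().split())
    return score >= THRESHOLD

# An ordinary spam-like message trips the filter (score 5)...
assert is_blocked("winner claim your free prize now")

# ...but a sender who knows RULES can respell every scoring word
# ("w1nner", "gratis", "pr1ze") and evade the filter entirely (score 0),
# while a human reader still understands the message.
assert not is_blocked("w1nner claim your gratis pr1ze now")
```

Once the rules are public, every blocked sender can test variants offline until one passes, which is why publication of the rules tends to destroy their effectiveness.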
On the other hand, some jurisdictions have implemented elements of transparency. In Saudi Arabia, for example, users are presented with a blockpage which
states that the requested Web site has been blocked but it also contains a link to a Web form through which users can petition to have the site unblocked … The acknowledgement of blocked content allows users to petition to have sites unblocked if there has been a mis-classification. It also requires governments to justify why a specific site is blocked.47
However, such transparency might itself give rise to concern. This blunt statement—that the requested site has been blocked—will also serve to remind the user that their online activities are of some interest to the state, thus possibly having a chilling effect on further Internet use.
Consequently, the opaque nature of many Internet filtering processes serves to challenge key requirements in both public and market governance relating to feedback on the operations of the process. From a public governance perspective the problem relates to the inability of those affected to know about and challenge decisions on filtering. From a market governance perspective such opacity removes the possibility of feedback processes through which errors can be detected and corrected.
45 Villeneuve, above n 2.
46 For examples see The Free Expression Policy Project, Internet Filters—A Public Policy Report (New York, 2006), available at accessed 26 May 2008; Electronic Frontiers Australia press release, 'Government Approved Net Filters Attempt to Silence Critics' available at accessed 26 May 2008; TIME Digital Magazine, 'Cybersitter Decides to Take a Time Out' (8 August 1997), available at accessed 26 May 2008.
47 Villeneuve, above n 2.
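The kind of error that such feedback would surface is easy to exhibit. Below is a minimal sketch, assuming IP-address blocking against shared hosting; all hostnames are hypothetical and the addresses are drawn from reserved documentation ranges. Blocking the one offending site's address silently takes down every unrelated site hosted at the same address:

```python
# Sketch of IP-address blocking against shared hosting. All hostnames
# and addresses are hypothetical (RFC 5737 documentation ranges).
DNS = {
    "offending-site.example":  "203.0.113.7",
    "community-forum.example": "203.0.113.7",   # same shared host
    "local-charity.example":   "203.0.113.7",   # same shared host
    "unrelated-news.example":  "198.51.100.9",  # different host
}

# The censor blocks the single IP address of the one offending site.
blocked_ips = {DNS["offending-site.example"]}

def reachable(host: str) -> bool:
    return DNS[host] not in blocked_ips

# Every innocent site on the shared address is blocked as well:
collateral = [h for h in DNS
              if not reachable(h) and h != "offending-site.example"]
assert collateral == ["community-forum.example", "local-charity.example"]
assert reachable("unrelated-news.example")
```

Because the affected user sees only a failed connection, neither the co-hosted site owners nor their visitors have any ready way to learn that collateral blocking has occurred, which is precisely the feedback gap described above.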
The Role of Intermediaries
Traditional forms of censorship and control of information have generally focused on either the person making available certain information (such as prohibiting the publication of certain material) or, less often, the recipient (as where a person is punished for possession of child pornography).48 Addressing regulation to intermediaries is not unprecedented (consider, for example, the liability of printers and distributors in defamation or the role of airlines in immigration control49) but has been less common. The growth of filtering, with its focus on intermediaries, is pragmatic, in the sense that it frequently enrols actors who have knowledge and/or capacities for control which government does not have. However, this pragmatic extension of the control capacities of government must be balanced with proper scrutiny of the implications of implicit or explicit delegation to businesses and other non-state actors and, relatedly, of the deployment of methods and procedures of governance which would frequently not be open to governments to use themselves.
At the outset, filtering which is implemented by intermediaries is inherently more opaque, lacking as it does any necessity that speaker or recipient be notified. We have already noted that in many existing systems site owners and users alike may not be aware either that filtering is in operation or that particular sites are blocked.
This is not a necessary characteristic of filtering—for example, libraries in the United States have been active in informing their patrons that legally required filtering systems are in place.50 However, not all intermediaries may share the ideological commitment to transparency or freedom of expression which would lead them to do this.
48 See, eg, S Kreimer, 'Censorship by Proxy: The First Amendment, Internet Intermediaries and the Problem of the Weakest Link' (2006) 155 University of Pennsylvania Law Review 11 at 13:
The archetypal actors in the First Amendment drama appear on stage in dyads: in free speech narratives, a speaker exhorts a listener; in free press accounts, a publisher distributes literature to readers. In the usual plot, the government seeks to disrupt this dyad (for legitimate or illegitimate reasons) by focusing sanctions on the source of the speech. On occasion, the government turns its efforts to the listener, seeking to punish receipt of illicit messages or possession of illicit materials preparatory to reading them, and the courts proceed to evaluate the constitutionality of those proposed sanctions.
49 J Gilboy, 'Implications of "Third Party" Involvement in Enforcement: The INS, Illegal Travellers, and International Airlines' (1997) 31 Law and Society Review 505.
50 See American Library Association, 'Access to Electronic Information, Services and Networks' (19 January 2005), available at accessed 26 May 2008. This provides that 'Users' access should not be restricted or denied for expressing or receiving constitutionally protected speech. If access is restricted or denied for behavioral or other reasons, users should be provided due process, including, but not limited to, formal notice and a means of appeal.' The ALA has been active in opposing federally required filtering systems, notably in United States v American Library Association 539 US 194 (2003).
Filtering by intermediaries also increases our concerns about the application of the rule of law. Decisions to require filtering are often made by public authorities, even though others are responsible for their implementation. Compliance with some version of the rule of law is a key part of the legitimating apparatus for public authority decision makers, but may be lacking in the case of filtering by intermediary. In some instances, such as under the Australian Interactive Gambling Act 2001, there is specific legal authority for a public body to investigate particular content, make determinations and issue notices requiring ISPs to block access to that content.51 But more problematic is the situation where government uses its inherent steering capacity, without legislation, to encourage ISPs or other intermediaries to engage in content filtering.
For example, in the UK the Government has encouraged ISPs to engage in filtering as part of self-regulation. This was initially done by way of consultation and cooperation with the incumbent and dominant operator, British Telecom, which developed its 'Cleanfeed' system to automatically block customer access to URLs alleged to host child pornography, the list of blocked URLs being maintained by the Internet Watch Foundation.52 Now, however, the Government has indicated its intention to ensure that all UK Internet service providers (ISPs) should adopt either 'Cleanfeed' or a similar system, with the threat of legislation should ISPs fail to do so 'voluntarily'.53
This presents a number of challenges for the rule of law. Even if an individual ISP's actions can be described as voluntary, the effect is to subject users without their consent to a state-mandated regime of Internet filtering of which they may be unaware.
The Internet Watch Foundation (IWF), which determines which URLs should be blocked, has a curious legal status, being a charitable incorporated body, funded by the EU and the Internet industry, but working closely with the Home Office, the Ministry of Justice, the Association of Chief Police Officers and the Crown Prosecution Service.54 There is no provision for site owners to be notified that their sites have been blocked.55
51 Interactive Gambling Act (Cwlth) 2001, s 24.
52 Bright, above n 4. See also Hunter, above n 4.
53 W Grossman, 'The Great Firewall of Britain', above n 5, quoting Vernon Coaker, Parliamentary Under-Secretary for the Home Department: 'We believe that working with the industry offers us the best way forward, but we will keep that under review if it looks likely that the targets will not be met'.
54 See, eg, the 'Memorandum of Understanding Between Crown Prosecution Service (CPS) and the Association of Chief Police Officers (ACPO) concerning Section 46 Sexual Offences Act 2003' dated 6 October 2004, available at accessed 26 May 2008, which gives special recognition to the role of the IWF. See generally Internet Watch Foundation, 'About the Internet Watch Foundation' available at accessed 26 May 2008.
55 Internet Watch Foundation, 'Child Sexual Abuse Content URL List' available at accessed 26 May 2008.
While there is an internal system
of appeal against the designation of a URL to be blocked, that mechanism does not provide for any appeal to a court—instead, the IWF will make a final determination on the legality of material in consultation with a specialist unit of the Metropolitan Police.56
Consequently the effect of the UK policy is to put in place a system of censorship of Internet content, without any legislative underpinning, which would appear (by virtue of the private nature of the actors) to be effectively insulated from judicial review.57 Though the take-up of the regime may be attributable to the steering actions of government, the way in which the regime is implemented and administered complies neither with the process nor with the transparency expectations which would attach to legal instruments.
There is also cause for concern about the incentives which delegating filtering to intermediaries might create. From the point of view of the regulator, requiring intermediaries to filter may allow the regulator to externalise the costs associated with monitoring and blocking, perhaps resulting in undesirably high levels of censorship.58 But perhaps more worrying are the incentives which filtering creates for intermediaries. Kreimer has argued that by targeting online intermediaries regulators can recruit 'proxy censors', whose 'dominant incentive is to protect themselves from sanctions, rather than to protect the target from censorship'.59 As a result, there may be little incentive for intermediaries to engage in the costly tasks of distinguishing protected speech from illegal speech, or to carefully tailor their filtering to avoid collateral damage to unrelated content. Kreimer cites the US litigation in Center for Democracy & Technology v Pappert60 to illustrate this point. In that case more than 1,190,000 innocent websites were blocked by ISPs even though they had been required to block fewer than 400 child pornography websites.
IV.
Responsibility
A central objection to technology as regulator generally is that, to the extent that otherwise available choices for human action are inhibited, there is a loss of responsibility for one's actions. We are accustomed to assuming moral responsibility for actions which are within an acceptable range of possible actions. If actions outside the acceptable range are simply impossible, then we need no longer engage in moral choice, since our actions will, of necessity, be acceptable. This effect, Brownsword has suggested, may be corrosive of our moral capacity.61 Where restrictions on unacceptable conduct are created through technology in some social domains (such as the Internet), this creates the risk that our moral capacity to act in other, less restricted, domains will be reduced, with adverse social consequences. Or, as Spinello argues, 'code should not be a surrogate for conscience'.62
Perhaps paradoxically, the converse may also be true—the fact that technology makes certain acts easier to perform may in some contexts reduce the moral or legal responsibility of users for those acts.
56 Internet Watch Foundation, 'Child Sexual Abuse Content URL Service: Complaints, Appeals and Correction Procedures' available at accessed 26 May 2008.
57 As Akdeniz puts it, 'When censorship is implemented by government threat in the background, but run by private parties, legal action is nearly impossible, accountability difficult, and the system is not open or democratic.' Y Akdeniz, 'Who Watches the Watchmen? The Role of Filtering Software in Internet Content Regulation' in C Moller and A Amouroux (eds), The Media Freedom Internet Cookbook (Vienna, 2004) at 111.
58 S Kreimer, 'Censorship by Proxy: The First Amendment, Internet Intermediaries and the Problem of the Weakest Link' (2006) 155 University of Pennsylvania Law Review 11 at 27.
59 Ibid at 28.
60 337 F Supp 2d 606 (ED Pa 2004).
If something is easy to do, it may be less clear that it is illegal. Zittrain,63 for example, has argued that:
The notion that some content is so harmful as to render its transmission, and even reception, actionable—true for certain categories of both intellectual property and pornographic material—means that certain clicks on a mouse can subject a user to intense sanctions. Consumers of information in traditional media are alerted to the potential illegality of particular content by its very rarity; if a magazine or CD is available in a retail store its contents are likely legal to possess. The Internet severs much of that signaling, and the ease with which one can execute an Internet search and encounter illegal content puts users in a vulnerable position. Perhaps the implementation of destination ISP-based filtering, if pressed, could be coupled with immunity for users for most categories of that which they can get to online in the natural course of surfing.
Taken further, Zittrain's argument suggests that where technical controls on behaviour are in place users may come to believe, or the law may come to accept, that those online actions which are not blocked by some technical means are permissible. Indeed, a similar viewpoint is already reflected in many national laws which criminalise unauthorised access to a computer system only if the user has circumvented some technical security measure protecting that system.64
In the case of filtering, these arguments may intersect to suggest that pervasive filtering may significantly reduce the moral accountability of users, both by reducing their capacity to make moral choices and by signalling to them that those actions which are not blocked are permissible.
61 R Brownsword, 'Code, Control, and Choice: Why East is East and West is West' (2005) 25 Legal Studies 1 and 'Neither East Nor West, Is Mid-West Best?' (2006) 3(1) SCRIPT-ed.
62 R Spinello, 'Code and Moral Values in Cyberspace' (2001) 3 Ethics and Information Technology 137.
63 J Zittrain, 'Internet Points of Control' (2003) 43 Boston College Law Review 1 at 36.
64 See the discussion in SM Kierkegaard, 'Here Comes the "Cybernators"!' (2006) 22(5) Computer Law & Security Report 381. O Kerr, 'Cybercrime's Scope: Interpreting "Access" and "Authorization" in Computer Misuse Statutes' (2003) 78 New York University Law Review 1596 suggests that this approach should apply to unauthorised access offences generally.
V. Conclusions
Filtering is likely to remain amongst the most important technologies mediating between users and suppliers of content. However, depending on the purpose underlying a particular system of filtering, it is also likely to present significant issues of transparency, legitimacy and accountability.
Where the purposes of the filtering are those of the user, it is not difficult to imagine systems for filtering which meet most of the normative requirements discussed in this chapter. Users may opt in where they have a need to do so, and the system may have feedback so that users can see corrections or opt out if the filtering mechanism is insufficiently accurate to meet their purposes. Many email filtering systems have these properties. A third feature of such systems, that they are designed and operated by commercial firms, raises few concerns in this context, since a user who is dissatisfied with the way the system works is able to migrate to a different provider. Such systems are, in effect, likely to be regulated through competition.
A filtering system which is applied by an intermediary (rather than a user) and which lacks transparency, because the user does not know it has been applied or cannot see which messages are filtered out, is weak in two respects—it lacks consent, and it lacks a feedback mechanism to correct for technical weaknesses in the system.
A user will, for example, be aware of false negatives, because spam email will reach their inbox, but may be unable to detect false positives where email they wanted to receive was filtered out.
Much filtering is, of course, directed not at the purposes of the user but rather at broader public purposes, such as the blocking of offensive, controversial or illegal Internet content. In some instances parents may be choosing to apply filtering to protect children. There is here an element of consent. However, many such regimes lack transparency and feedback mechanisms, such that over-inclusive control, which blocks sites which parents would not have sought to block, is not systematically addressed within the system. We have noted that some governmental regimes for blocking Internet content, while they lack consent from users, nevertheless contain elements of transparency, because users are told that sites are blocked, and elements of feedback, because users are invited to inform operators of the system if they think a site has been blocked in error.
Regimes which lack consent, transparency and feedback mechanisms are open to two basic objections: first, that they are not amenable to correction where they operate in an over- (or under-) inclusive manner; and second, that they remove responsibility from users. Even where governments maintain control over such regimes, these weaknesses are significant and difficult to justify. A fortiori, the most challenging regimes are those with these properties operated by commercial firms, either at the request or command of governments or for firms' own purposes.
6 Perfect Enforcement on Tomorrow's Internet*
JONATHAN ZITTRAIN
The PC and Internet, as wonderful as they are, are a bit flaky. The Internet runs on a 'best efforts' basis, with no guarantee of bandwidth from one end to the other, and with data packets handled by intermediaries that are not contractually bound to deliver them. (Imagine if package delivery worked this way.)
PC software can still crash for no discernible reason, disgorging an incomprehensible error code. People might argue about the merits of one platform compared to another ('Linux never needs to be rebooted'1), but the fact is that no operating system is perfect and, more importantly, any PC open to running third-party code at the user's behest can fail when poor code is adopted. The fundamental problem arises from too much functionality in the hands of users who may not exercise it wisely: even the safest Volvo can be driven into a wall. People are frustrated by PC kinks and the erratic behaviour they produce. Such unexpected shortcomings have long been smoothed out from refrigerators, televisions, mobile phones and automobiles. As for PCs, telling users that their own surfing or program installation choices are to blame understandably makes them no less frustrated, even if they understand that a more reliable system would inevitably be less functional—a trade-off seemingly not required by refrigerator improvements.
Worse, increasing reliance on the PC and Internet means that more is at risk when something goes wrong. Skype users who have abandoned their old-fashioned telephone lines may regret their decision if an emergency arises and they need to dial an emergency number like 999, only to find that they cannot get
* This chapter is drawn from the manuscript for The Future of the Internet—And How to Stop It and is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 License. The original text can be accessed through the author's Web site at http://www.jz.org. (See .)
1 See, eg, '10 Things a New Linux User Needs to Unlearn', Mostly Linux (17 June 2006), available at accessed October 2007 ('Reboots are not SOP (Standard Operating Procedure).'); Nicholas Petreley, 'Security Report: Windows vs. Linux', The Register (22 October 2004), available at accessed October 2007.
through, let alone be located automatically.2 When one's finances, contacts and appointments are managed using a PC, it is no longer merely frustrating if the computer comes down with a virus—especially an infection of the now-common sort that Microsoft concedes is dangerous enough to require a full wipe of the machine, not a mere anti-virus quarantine.3
The most likely reactions to PC and Internet failures brought on by the proliferation of bad code, if they are not forestalled, will be at least as unfortunate as the problems themselves. People now have the opportunity to respond to these problems by moving away from the PC and toward more centrally controlled—'tethered'—information appliances like mobile phones, video game consoles, TiVos, iPods, iPhones and BlackBerries. The ongoing communication between this new generation of devices and their vendors assures users that functionality and security improvements can be made as new problems are found. To further facilitate glitch-free operation, devices are built to allow no one but the vendor to change them. Users are also now able to ask for the appliancisation of their own PCs, in the process forfeiting the ability to easily install new code themselves. In a development reminiscent of the old days of AOL and CompuServe, it is increasingly possible to use a PC as a mere dumb terminal to access websites with interactivity but little room for tinkering. ('Web 2.0' is the new buzzword that celebrates this migration of applications traditionally found on the PC onto the Internet. Confusingly, the term also refers to the separate phenomenon of increased user-generated content and indices on the Web—such as relying on user-provided tags to label photographs.) New information appliances that are tethered to their makers, including PCs and websites refashioned in this mold, are tempting solutions for frustrated consumers and businesses.
None of these solutions standing alone is bad, but the aggregate loss will be enormous if their emergence represents a wholesale shift of our information ecosystem away from generativity. Some are sceptical that a shift so large can take place.4 But confidence in the generative Internet's inertia is misplaced. It discounts the power of fear should the existing system falter under the force of particularly well-written malware.
A shift to tethered appliances and locked-down PCs will have a ripple effect on longstanding cyberlaw problems, many of which are tugs-of-war between individuals with a real or perceived injury from online activity and those who wish to operate as freely as possible in cyberspace. A shift to tethered appliances also entails a sea change in the regulability of the Internet. With tethered appliances, the dangers of excess come not from rogue third-party code, but from the much more predictable interventions by regulators into the devices themselves, and in turn into the ways that people can use the appliances. The most obvious evolution of the computer and network—toward tethered appliancisation—is on balance a bad one. It invites regulatory intervention that disrupts a wise equilibrium that depends upon regulators acting with a light touch, as they traditionally have done within liberal societies.
2 See Skype, 'Can I Call Emergency Numbers in the US and Canada?', available at accessed 9 February 2007 ('Skype is not a telephone replacement service and emergency numbers cannot be called from Skype.').
3 Ryan Naraine, 'Microsoft Says Recovery from Malware Becoming Impossible', eWeek.com (4 April 2006), available at accessed October 2007.
4 See, eg, Sharon E Gillett et al, 'Do Appliances Threaten Internet Innovation?', IEEE Communications (October 2001) at 46–51.
The Long Arm of Marshall, Texas
TiVo introduced the first digital video recorder (DVR) in 1998.5 It allowed consumers to record and time-shift TV shows.
After withstanding several claims that the TiVo DVR infringed other companies' patents because it offered its users on-screen programming guides,6 the hunted became the hunter. In 2004, TiVo sued satellite TV distributor EchoStar for infringing TiVo's own patents7 by building DVR functionality into some of EchoStar's dish systems.8 A Texas jury found for TiVo, which was awarded $90 million in damages and interest. In briefs filed under seal, TiVo apparently asked for more. In August 2006, the court issued the following ruling:
Defendants are hereby … ordered to, within thirty (30) days of the issuance of this order, disable the DVR functionality (i.e., disable all storage to and playback from a hard disk drive of television data) in all but 192,708 units of the Infringing Products that have been placed with an end user or subscriber.9
That is, the court ordered EchoStar to kill the DVR functionality in products already owned by 'end users': millions of boxes which were already sitting in living rooms around the world10 with owners who might be using them at that very instant.11 Imagine sitting down to watch television on an EchoStar box, and instead finding that all your recorded shows had been zapped, along with the DVR functionality itself—killed by remote signal traceable to the stroke of a judge's quill in Marshall, Texas.
5 Jim Davis, 'TiVo Launches "Smart TV" Trial', CNET News.com (22 December 1998), available at accessed October 2007.
6 See Richard Shim, 'TiVo, Gemstar End Lawsuit, Team Up', CNET News.com (9 June 2003), available at
The judicial logic for such an order is drawn from fundamental contraband rules: under certain circumstances, if an article infringes on intellectual property rights, it can be impounded and destroyed.12 Impoundment remedies are usually encountered only in the form of Prohibition-era-style raids on warehouses and distribution centres, which seize large amounts of contraband before it is sold to consumers.13 There are no house-to-house raids to, say, seize bootleg concert recordings or reclaim knockoff Rolexes and Louis Vuitton handbags from the people who purchased the goods. TiVo saw a new opportunity in its patent case, recognising that EchoStar's dish system is one of an increasing number of modern tethered information appliances. The system periodically phones home to EchoStar, asking for updated programming for its internal software.14 This tethered functionality also means EchoStar can remotely destroy the units. To do so requires EchoStar only to load its central server with an update that kills EchoStar DVRs when they check in for new features. As of this writing, TiVo v EchoStar is pending appeal on other grounds.15 The order has been stayed, and no DVRs have yet been remotely destroyed.16 But such remote remedies are not wholly unprecedented. In 2001, a US federal court heard a claim from a company called PlayMedia that AOL had included PlayMedia's AMP MP3 playback software in version 6.0 of AOL's software in a violation of a settlement agreement between PlayMedia and a company that AOL had acquired. The court agreed with PlayMedia and ordered AOL to prevent 'any user of the AOL service from completing an online "session" … without AMP being removed from the user's copy of AOL 6.0 by means of an AOL online "live update"'.17 TiVo v EchoStar and PlayMedia v AOL broach the strange and troubling issues that arise from the curious technological hybrids that increasingly populate the digital world. These hybrids mate the simplicity and reliability of television-like appliances with the privileged power of the vendor to reprogram those appliances over a network.
Regulability and the Tethered Appliance
As legal systems experienced the first wave of suits arising from use of the Internet, scholars such as Lawrence Lessig and Joel Reidenberg emphasised that code could be law.18 In this view, the software we use shapes and channels our online behaviour as surely as—or even more surely and subtly than—law itself. Restrictions can be enforced by the way a piece of software operates. Our ways of thinking about such 'west coast code'19 are still maturing, and our instincts for when we object to such code are not well formed. Just as technology's functionality defines the universe in which people can operate, it also defines the range of regulatory options reasonably available to a sovereign. A change in technology can change the power dynamic between those who promulgate the law and those who are subject to it.20 If regulators can induce certain alterations in the nature of Internet technologies that others could not undo or widely circumvent, then many of the regulatory limitations occasioned by the Internet would evaporate. Lessig and others have worried greatly about such potential changes, fearing that blunderbuss technology regulation by overeager regulators will intrude on the creative freedom of technology makers and the civic freedoms of those who use the technology.21 So far Lessig's worries have not come to pass. A system's level of generativity can change the direction of the power flow between sovereign and subject in favour of the subject, and generative Internet technology has not been easy to alter.
11 Sharp-eyed readers of the TiVo injunction excerpt may have noticed something peculiar: the court's order spares 192,708 EchoStar units. Why? EchoStar was ordered to pay damages to TiVo for lost sales of DVRs that TiVo would have sold if EchoStar had not been a competitor, and the court found that exactly 192,708 more TiVos would have been sold. See TiVo, 2006 US Dist LEXIS 64291 at *4. Since the $90 million in damages paid by EchoStar already reimbursed TiVo for those units, it would have been double dipping to kill those units. So 192,708 lucky EchoStar subscribers will get to keep their DVRs even if the court's order is implemented. How should EchoStar choose those subscribers? The order does not specify.
12 See 17 USC § 503 (2000). Cf 35 USC § 283 (2000); 15 USC § 1116(a) (2000).
13 See, eg, Ben Barnier, 'New York Ups Ante in Counterfeit Crackdown' ABC News (2 February 2006), available at accessed October 2007; 'China Seizes 58 Million Illegal Publications in Three Months' People's Daily Online (27 November 2006) available at accessed October 2007.
14 EchoStar's customer service agreement includes what might be termed a 'tethering rights clause': [EchoStar] reserves the rights to alter software, features and/or functionality in your DISH Network receivers, provide data and content to Personal Video Recorder/Digital Video Recorder ('PVR/DVR') products, store data and content on the hard drives of PVR/DVR products, and send electronic counter-measures to your DISH Network receivers, through periodic downloads. DISH Network will use commercially reasonable efforts to schedule these downloads to minimise interference with or interruption to your Services, but shall have no liability to you for any interruptions in Services arising out of or related to such downloads. (EchoStar Satellite LLC, Residential Customer Agreement, available at accessed 1 June 2007.) Such clauses are typical.
15 On 3 October 2006, the Federal Circuit granted an indefinite stay of the injunction pending the outcome of EchoStar's appeal. 'TiVo Loses Ground on Appeals Court Ruling', BusinessWeek Online (4 October 2006) available at accessed October 2007.
16 No case has tested whether consumers would have a remedy against EchoStar for their dead DVRs. On one hand, it might breach a manufacturer's warranty of fitness to produce a device that cannot lawfully be used for the purpose specified. On the other hand, 'legal fitness' is distinct from functional fitness, and the consumer's ignorance of a patent (or of patent law) is no defense against consumer infringement. It is not clear that the seller of an infringing product owes indemnity to the user of it.
17 See PlayMedia Sys, Inc v America Online, Inc, 171 F Supp 2d 1094 (CD Cal 2001).
18 See Lawrence Lessig, 'The Limits in Open Code: Regulatory Standards and the Future of the Net' (1999) 14 Berkeley Technology Law Journal 759 at 761–2 [hereinafter 'Limits in Open Code']; see generally Lawrence Lessig, Code: Version 2.0 (2006) at 5, and its first edition, Code and Other Laws (1999). Lessig elaborated the idea that 'code is law', crediting Joel Reidenberg for the initial conception. See also Joel R Reidenberg, 'Lex Informatica: The Formulation of Information Policy Rules through Technology' (1998) 76 Texas Law Review 553.
19 'West coast code' refers to the code embedded in computer software and hardware, so dubbed because much of its development has occurred in West Coast locations such as Silicon Valley, California and Redmond, Washington. This code has been contrasted with the more traditional regulatory 'east coast code' that Congress enacts in Washington, DC. See Lawrence Lessig, Code and Other Laws (2000) 53, available at accessed October 2007.
20 Ibid at 24–5 (describing the fallacy of 'is-ism').
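The check-in mechanism that makes remedies like the EchoStar kill order practicable can be sketched in a few lines. This is a minimal illustration under invented assumptions, not EchoStar's actual protocol: the serial numbers, response fields and function name are all hypothetical.

```python
# Hypothetical sketch: a vendor's update server decides, per device, whether
# to send a normal feature update or a court-ordered kill directive when a
# tethered appliance phones home. All identifiers are invented.

ENJOINED = {"SN-1001", "SN-1002"}   # units covered by the injunction
SPARED = {"SN-2001"}                # eg, units the order excluded

def handle_checkin(serial: str) -> dict:
    """Build the response a tethered device receives when it checks in."""
    if serial in ENJOINED and serial not in SPARED:
        # The same channel that delivers new features delivers the remedy.
        return {"action": "disable_dvr", "reason": "court order"}
    return {"action": "update", "payload": "new-features-v2"}

print(handle_checkin("SN-1001"))   # enjoined unit: DVR functionality killed
print(handle_checkin("SN-2001"))   # spared unit: ordinary feature update
```

The point of the sketch is how little the remedy costs the vendor: one change to the central server reaches every deployed unit the next time it asks for new features.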
There have been private attempts to use code to build so-called trusted systems, software that outsiders can trust to limit users' behaviour—for example, by allowing a song to be played only three times before it 'expires', or by preventing an e-book from being printed.22 (Code-based enforcement mechanisms are also variously called digital rights management systems or technical protection measures.)23 Most trusted systems have failed, often because either savvy users have cracked them early on or the market has simply rejected them. The few that have achieved some measure of adoption—like Apple's iTunes, which allows purchased songs to exist on only five registered devices at once24—are either readily circumvented, or tailored so they do not prevent most users' desired behaviour. Even the governments most determined to regulate certain flows of information—such as China—have found it difficult to suppress the flow of data on the Internet.25 To be sure, with enough effort, censorship can have some effect, especially because most citizens prefer to slow down for speed bumps rather than invent ways around them.26 When a Web site fails to load, for example, users generally visit a substitute site rather than wait. Taking advantage of this reality, Chinese regulators have used their extensive control over ISPs' routing of data packets to steer users away from undesirable Web sites by simply causing the Web pages to fail to load in the course of normal surfing. But so long as the endpoints remain generative and any sort of basic Internet access remains available, subversively minded techies can make applications that offer a way around network blocks.27 Such applications can be distributed through the network, and unsavvy users can then partake simply by double-clicking on an icon. Comprehensive regulatory crackdowns require a non-generative endpoint or influence over the individual using it to ensure that the endpoint is not repurposed.
21 See, eg, Julie E Cohen, 'Some Reflections on Copyright Management Systems and Laws Designed to Protect Them' (1997) 12 Berkeley Technology Law Journal 161 at 163 (noting the possible negative effects of broad protection for copyright management systems). Cf Lawrence Lessig, 'Open Code and Open Societies: Values of Internet Governance' (1999) 74 Chicago-Kent Law Review 1405 at 1408–13 (discussing how open source software and freedom of participation were instrumental to the growth of the Internet).
22 See generally Mark Stefik, The Internet Edge: Social, Technical and Legal Challenges for a Networked World (2000) 55–78; Jonathan L Zittrain, Technological Complements to Copyright (2005).
23 For a discussion of the terminology used to describe intellectual property rights, see Peter K Yu, 'Intellectual Property and the Information Ecosystem' (2005) Michigan State Law Review 1 at 4–6 (considering possible terms such as Wendy Gordon's GOLEM—'Government-Originated Legally Enforced Monopolies'—and IMP—'Imposed Monopoly Privileges').
24 See Apple, 'Sync Both Ways', available at accessed 1 June 2007.
25 See, eg, OpenNet Initiative, 'Internet Filtering in China in 2004–2005' (2005), available at accessed October 2007. See generally Jack Linchuan Qiu, 'Virtual Censorship in China: Keeping the Gate Between the Cyberspaces' (Winter 1999/2000) 1(4) International Journal of Communications Law & Policy 1, available at (discussing efforts by the Chinese government to adapt—and censor—evolving Internet technologies).
26 See Jack Goldsmith and Tim Wu, Who Controls the Internet? (2006) 113, 120 (characterising most attempts at 'sidestepping copyright' as mere phases and noting Steve Jobs's observation that users 'would rather pay for music online than spend hours evading detection').
For example, non-generative endpoints like radios and telephones can be constrained by filtering the networks they use. Even if someone is unafraid to turn a radio tuning knob or dial a telephone number to the outside world, radio broadcasts can be jammed, and phone connections can be disabled or monitored. Because radios and telephones are not generative, such jamming cannot be circumvented. North Korea has gone even further with endpoint lockdown. There, by law the radios themselves are built so that they cannot be tuned to frequencies other than those with official broadcasts.28 With generative devices like PCs, the regulator must settle for either much leakier enforcement or much more resource-intensive measures that target the individual—such as compelling citizens to perform their Internet surfing in cyber cafés or public libraries, where they might limit their activities for fear that others are watching. The shift toward non-generative endpoint technology driven by consumer security worries changes the equation.29 The traditional appliance, or nearly any object, for that matter, once placed with an individual, belongs to that person. Tethered appliances belong to a new class of technology. They are appliances in that they are easy to use, while not easy to tinker with. They are tethered because it is easy for their vendors to change them from afar, long after the devices have left warehouses and showrooms. Consider how useful it was in 2003 that Apple could introduce the iTunes Store directly into iTunes software found on PCs running Mac OS.30 Similarly, consumers can turn on a TiVo—or EchoStar—box to find that, thanks to a remote update, it can do new things, such as share programmes with other televisions in the house.31 These tethered appliances receive remote updates from the manufacturer, but they generally are not configured to allow anyone else to tinker with them—to invent new features and distribute them to other owners who would not know how to program the boxes themselves. Updates come from only one source, with a model of product development limited to non-user innovation. Indeed, recall that some recent devices, like the iPhone, are updated in ways that actively seek out and erase any user modifications. These boxes thus resemble the early proprietary information services like CompuServe and AOL, for which only the service providers could add new features. Any user inventiveness was cabined by delays in chartering and understanding consumer focus groups, the hassles of forging deals with partners to invent and implement suggested features, and the burdens of performing technical R&D. Yet tethered appliances are much more powerful than traditional appliances. Under the old regime, a toaster, once purchased, remains a toaster. An upgraded model might offer a third slot, but no manufacturer's representative visits consumers and retrofits old toasters. Buy a record and it can be played as many times as the owner wants. If the original musician wishes to rerecord a certain track, she will have to feature it in a successive release—the older work has been released to the four winds and cannot be recalled.32 A shift to smarter appliances, ones that can be updated by—and only by—their makers, is fundamentally changing the way in which we experience our technologies. Appliances become contingent: rented instead of owned, even if one pays up front for them, since they are subject to instantaneous revision. A continuing connection to a producer paves the way for easier post-acquisition improvements: the modern equivalent of third slots for old toasters. That sounds good: more features, instantly distributed. So what is the drawback? Those who believe that markets reflect demand will rightly ask why a producer would make post hoc changes to technology that customers may not want. One answer is that they may be compelled to do so. Consider EchoStar's losing verdict in Marshall, Texas. If producers can alter their products long after the products have been bought and installed in homes and offices, it occasions a sea change in the regulability of those products and their users. With products tethered to the network, regulators—perhaps on their own initiative to advance broadly defined public policy, or perhaps acting on behalf of parties like TiVo claiming private harms—finally have a toolkit for exercising meaningful control over the famously anarchic Internet.
27 See, eg, Nart Villeneuve, Director, Citizen Lab at the University of Toronto, 'Technical Ways to Get Around Censorship' available at accessed 1 June 2007; Ethan Zuckerman, Fellow, Berkman Center for Internet & Society at Harvard Law School, 'How to Blog Anonymously', available at accessed 1 June 2007.
28 See BBC News, 'Country Profile: North Korea', available at accessed 14 February 2007, 10:24 GMT; Cathy Hong, 'Puncturing a Regime with Balloons', The Village Voice (13–19 August 2003) available at accessed October 2007.
29 For a review of the places where interventions can be made to affect user behaviour in the context of intellectual property enforcement, including through modification to endpoint devices, see Julie Cohen, 'Pervasively Distributed Copyright Enforcement' (2006) 95 Georgetown Law Journal 1.
30 Matt Richtel, 'Apple Is Said to Be Entering E-Music Fray with Pay Service', New York Times (28 April 2003) at C1, available at accessed October 2007; Peter Cohen, 'iTunes Music Store Launches with 200K+ Songs', MacWorld (28 April 2003) available at accessed October 2007.
31 See, eg, Press Release, TiVo, 'TiVo Delivers New Service Enhancements for Series2 Subscribers, Introduces New Pricing for Multiple TiVo Households' (9 June 2004) available at accessed October 2007.
32 French copyright law recognises at least a nominal right of withdrawal ('droit de retrait'). See Jean-Luc Piotraut, 'An Authors' Rights-Based Copyright Law: The Fairness and Morality of French and American Law Compared' (2006) 24 Cardozo Arts & Entertainment Law Journal 549, 608. Authors of software are not entitled to this right. Ibid.
Types of Perfect Enforcement
The law as we have known it has had flexible borders. This flexibility derives from prosecutorial and police discretion and from the artifice of the outlaw. When code is law, however, execution is exquisite, and law can be self-enforcing. The flexibility recedes. Those who control the tethered appliance can control the behaviour undertaken with the device in a number of ways: preemption, specific injunction and surveillance.
Preemption
Preemption entails anticipating and designing against undesirable conduct before it happens. Many of the examples of code as law (or, more generally, architecture as law) fit into this category. Lessig points out that speeding can be regulated quite effectively through the previously mentioned use of speed bumps.33 Put a speed bump in the road and people slow down rather than risk damaging their cars. Likewise, most DVD players have Macrovision copy protection that causes a signal to be embedded in the playback of DVDs, stymieing most attempts to record DVDs onto a VCR.34 Owners of Microsoft's Zune music player can beam music to other Zune owners, but music so transferred can be played only three times or within three days of the transfer.35 This kind of limitation arguably preempts much of the damage that might otherwise be thought to arise if music subject to copyright could be shared freely. With TiVo, a broadcaster can flag a programme as 'premium' and assign it an expiration date.36 A little red flag then appears next to it in the viewer's list of recorded programmes, and the TiVo will refuse to play the programme after its expiration date.
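Design-time limits of the kind just described, such as the Zune three-plays-or-three-days rule, amount to a small policy check baked into the device itself. The following is a hedged sketch of that idea; the function, parameters and defaults are assumptions for illustration, not Microsoft's or TiVo's actual implementation.

```python
# Illustrative sketch of preemption: the device enforces a play-count and
# time-window limit locally, with no regulator intervening per use.
from datetime import datetime, timedelta

def may_play(received: datetime, plays_so_far: int, now: datetime,
             max_plays: int = 3, max_age_days: int = 3) -> bool:
    """Allow playback only within the designed-in count and window limits."""
    within_count = plays_so_far < max_plays
    within_window = now - received <= timedelta(days=max_age_days)
    return within_count and within_window

t0 = datetime(2007, 1, 1)
print(may_play(t0, 0, t0 + timedelta(days=1)))   # True: first play, day two
print(may_play(t0, 3, t0 + timedelta(days=1)))   # False: play count exhausted
print(may_play(t0, 0, t0 + timedelta(days=4)))   # False: window expired
```

Note that nothing here requires a continuing tether: the limit is broadly defined in advance, which is exactly what distinguishes preemption from the injunction-style remedies discussed next.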
The box's makers (or regulators of the makers) could further decide to automatically reprogram the TiVo to limit its fast-forwarding functionality or to restrict its hours of operability. (In China, makers of multiplayer games have been compelled to limit the number of hours a day that subscribers can play in an effort to curb gaming addiction.)37 Preemption does not require constant updates so long as the device cannot easily be modified once it is in the user's possession; the idea is to design the product with broadly defined limits that do not require further intervention to serve the regulator's or designer's purposes.
Specific Injunction
Specific injunction takes advantage of the communication that routinely occurs between a particular tethered appliance and its manufacturer, after it is in consumer hands, to reflect changed circumstances. The TiVo v EchoStar remedy belongs in this category, as it mandates modification of the EchoStar units after they have already been designed and distributed. This remote remedy was practicable because the tethering allowed the devices to be completely reprogrammed, even though the initial design of the EchoStar had not anticipated a patent infringement judgment. Specific injunction also allows for much more tailored remedies, like the PlayMedia-specific court order discussed earlier. Such tailoring can be content-specific, user-specific, or even time-specific.
33 See Lessig, Code: Version 2.0, above n 18 at 128, 135.
34 Macrovision, 'Video Copy Protection FAQ', available at accessed 1 June 2007; see also Macrovision, 'Secure DVD Content in Today's Digital Home', available at accessed 1 June 2007.
35 Zune.net, 'Beam Your Beats', available at accessed 29 March 2007.
36 See Associated Press, 'TiVo Fans Fear Start of Recording Restrictions' MSNBC.com (21 September 2005) available at accessed October 2007.
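The unit-specific, content-specific directives that characterise specific injunction can be sketched as a filter applied through the vendor's update channel. The data structures and identifiers below are hypothetical, invented purely to illustrate the contrast with preemption's blanket rules.

```python
# Illustrative sketch of a 'specific injunction' delivered over the tether:
# the directive names a particular adjudicated item and particular units,
# rather than imposing a designed-in blanket limit. All names are invented.

device_recordings = {
    "box-A": ["news-0412", "film-0413"],
    "box-B": ["news-0412"],
}

def apply_injunction(recordings: dict, content_id: str, targets: set) -> dict:
    """Remove one adjudicated item from the named units only."""
    return {
        box: [r for r in items if not (box in targets and r == content_id)]
        for box, items in recordings.items()
    }

after = apply_injunction(device_recordings, "news-0412", {"box-A"})
print(after)   # box-A loses the enjoined recording; box-B keeps its copy
```

The precision is the point: individual adjudication can reach exactly the units and content a court names, something ex ante legislative drafting cannot do.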
These remedies can apply to some units and not others, allowing regulators to winnow out bad uses from good ones on the basis of individual adjudication, rather than rely on the generalities of ex ante legislative-style drafting. For example, suppose a particular television broadcast were found to infringe a copyright or to damage someone's reputation. In a world of old-fashioned televisions and VCRs, or PCs and peer-to-peer networks, the broadcaster or creator could be sued, but anyone who recorded the broadcast could, as a practical matter, retain a copy. Today, it is possible to require DVR makers to delete the offending broadcast from any DVRs that have recorded it or, perhaps acting with more precision, to retroactively edit out the slice of defamatory content from the recorded programme. This control extends beyond any particular content medium: as e-book devices become popular, the same excisions could be performed for print materials. Tailoring also could be user-specific, requiring, say, the prevention or elimination of prurient material from the devices of registered sex offenders, but not from others' devices.
Surveillance
Tethered appliances have the capacity to relay information about their uses back to the manufacturer. We have become accustomed to the idea that websites track our behaviour when we access them—an online bookseller, for example, knows what books we have browsed and bought at its site. Tethered appliances take this knowledge a step further, recording what we do with the appliances even in transactions that have nothing to do with the vendor. A TiVo knows whether its owner watches FOX News or PBS. It knows when someone replays some scenes and skips others.
37 See Brian Ashcraft, 'China Rolls Out Anti-Addiction Software' Kotaku (13 April 2007) available at accessed October 2007.
This information is routinely sent to the TiVo mothership;38 for example, in the case of Janet Jackson's 'wardrobe malfunction' during the 2004 US Super Bowl halftime show, TiVo was able to calculate that this moment was replayed three times more frequently than any other during the broadcast.39 TiVo promises not to release such surveillance information in personally identifiable form, but the company tempers the promise with an industry-standard exception for regulators who request it through legal process.40 Automakers General Motors and BMW offer similar privacy policies for the computer systems, such as OnStar, built into their automobiles. OnStar's uses range from providing turn-by-turn driving directions with the aid of Global Positioning System (GPS) satellites, to monitoring tire pressure, providing emergency assistance and facilitating hands-free calling with embedded microphones and speakers. The FBI realised that it could eavesdrop on conversations occurring inside an OnStar-equipped vehicle by remotely reprogramming the system to activate its microphones for use as a 'roving bug', and it has secretly ordered an anonymous carmaker to do just that on at least one occasion.41
38 TiVo Privacy Policy § 2.2 (May 2006) available at accessed October 2007 ('The collection of Personally Identifiable Viewing Information is necessary for the use of certain advanced TiVo features. ... If you expressly choose to allow TiVo to collect your Personally Identifiable Viewing Information, TiVo may use this information to provide the requested services as well as for surveys, audience measurement, and other legitimate business purposes.').
39 See Ben Charny, 'Jackson's Super Bowl Flash Grabs TiVo Users' CNet News.com (2 February 2004) available at accessed October 2007.
40 TiVo Privacy Policy, above n 38, § 3.6 (noting that TiVo 'may be legally obligated to disclose User Information to local, state or federal governmental agencies or Third Parties under certain circumstances (including in response to a subpoena)'). Other service providers, like antivirus software vendor Symantec, have been even less committal in their willingness to protect user privacy. They have stated that their products would not be updated to detect Magic Lantern, an FBI keystroke logging Trojan. See John Leyden, 'AV Vendors Split Over FBI Trojan Snoops' The Register (27 November 2001) available at accessed October 2007.
41 See 18 USC § 2518(4) (2000) (describing what orders authorising or approving of the interception of wire, oral or electronic communications must specify, and mentioning that the orders can be done ex parte). The carmaker complied under protest, and in 2004 a federal appellate court handed down an opinion titled Company v United States, with the generic caption designed to prevent identification of the carmaker or the target of the investigation (349 F3d 1132 (9th Cir 2003)). The court found that the company could theoretically be ordered to perform the surveillance, but that, in this case, the FBI's surveillance had interfered with the computer system's normal use: a car with a secret open line to the FBI could not simultaneously connect to the automaker, and therefore if the occupants used the system to solicit emergency help, it would not function. Ibid. (Presumably, the FBI would not come to the rescue in the way the automaker promised its customers who use the system.) The implication is that such secret surveillance would have been legally acceptable if the system were redesigned to simultaneously process emergency requests.
A similar dynamic is possible with nearly all mobile phones.
Mobile phones can be reprogrammed at a distance, allowing their microphones to be secretly turned on even when the phone is powered down. All ambient noise and conversation can then be continuously picked up and relayed back to law enforcement authorities, regardless of whether the phone is being used for a call.42 On modern PCs equipped with an automatic update feature, there is no technical barrier that prevents the implementation of any similar form of surveillance on the machine, whether it involves turning on the PC's microphone and video camera, or searching and sharing any documents stored on the machine. Such surveillance could be introduced through a targeted update from the OS maker or from any other provider of software running on the machine. Surveillance need not be limited to targeted eavesdropping that is part of a criminal or civil investigation. It can also be effected more generally. In 1996, law student Michael Adler offered the hypothetical of an Internet-wide search for contraband.43 He pointed out that some digital items might be illegal to possess or be indicative of other illegal activity—for example, child pornography, leaked classified documents or stores of material copied without permission of the copyright holder. A Net-wide search could be instigated that would inventory connected machines and report back when smoking guns were found. Tethering makes these approaches practicable and inexpensive for regulators. A government need only regulate certain critical private intermediaries—those who control the tethered appliances—to change the way individuals experience the world. When a doctrine's scope has been limited by prudential enforcement costs, its reach can be increased as the costs diminish.
Evaluating Perfect Enforcement
The prospect of more thorough or 'perfect' law enforcement may seem appealing. If one could wave a wand and make it impossible for people to kill each other, there might seem little reason to hesitate.
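Adler's Net-wide contraband search, discussed above, could in principle be implemented as simple hash matching on each tethered endpoint. The sketch below is illustrative only: the contraband hash list and the endpoint's file store are invented assumptions, not any real system's design.

```python
# Illustrative sketch of a Net-wide contraband search: each tethered
# endpoint inventories its local files and reports those whose hashes
# match a centrally distributed contraband list. All data is invented.
import hashlib

CONTRABAND_HASHES = {hashlib.sha256(b"leaked-document").hexdigest()}

def scan_endpoint(files: dict) -> list:
    """Return names of local files whose hashes match the contraband list."""
    return [name for name, data in files.items()
            if hashlib.sha256(data).hexdigest() in CONTRABAND_HASHES]

endpoint = {"notes.txt": b"harmless", "dump.bin": b"leaked-document"}
print(scan_endpoint(endpoint))   # ['dump.bin']
```

The simplicity of such a scan is what makes the prospect of cheap, comprehensive enforcement both practicable and, as the next section argues, worth hesitating over.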
Although the common law has only rarely sought to prohibit outright the continued distribution of defamatory materials by booksellers and newsstands, much less continued possession by purchasers, ease of enforcement through tethered appliances could make it so that all such material—wherever it might be found—can vanish into the memory hole.
42 See Brian Wheeler, 'This Goes No Further', BBC News (2 March 2004) available at accessed October 2007; see also United States v Tomero, 462 F Supp 2d 565, 569 (SDNY 2006) (holding that continuous mobile phone monitoring fits within the 'roving bug' statute). The Tomero opinion is ambiguous about whether the bug in question was physically attached to the phone or effected through a remote update.
43 See Michael Adler, 'Cyberspace, General Searches, and Digital Contraband: the Fourth Amendment and the Net-Wide Search' (1996) 105 Yale Law Journal 1093; see also Lessig, Code, above n 18, at 20–23, 25–6; Lawrence Lessig, 'Constitution and Code' (1996) 27 Columbia Law Review 1 at 6–7.
Even when it comes to waving the regulator's wand for the purpose of eradicating online evils like harassment, invasion of privacy, and copyright infringement, there are important reasons to hesitate.44
Objections to the Underlying Substantive Law
Some people are consistently diffident about the presence of law in the online space. Those with undiluted libertarian values might oppose easier enforcement of laws as a general matter, because they believe that self-defence is the best solution to harm by others, especially within a medium that carries bits, not bullets.45 By these lights, the most common online harms simply are not as harmful as those in the physical world, and therefore they call for lesser intrusions. For example, defamatory speech might be met not by a lawsuit for money damages or an injunction requiring deletion of the lies, but rather by more speech that corrects the record.
A well-configured email client can adequately block spam, making it unnecessary to resort to intervention by a public authority. Material harmful to minors can be defanged by using parental filters, or by providing better education to children about what to expect when they go online and how to deal with images of violence and hate. Such 'just deal with it' arguments are deployed less often against the online circulation of images of child abuse. The creation and distribution of child pornography is nearly universally understood as a significant harm. In this context, those arguing in favour of an anarchic environment shift to claims that the activity is not very common or that existing tools and remedies are sufficiently effective—or they rely on some of the other objections described below.
44 Dan Burk and Tarleton Gillespie have offered an autonomy-based argument against the deployment of trusted systems. See Dan Burk and Tarleton Gillespie, 'Autonomy and Morality in DRM and Anti-Circumvention Law' (2006) 4 Triple C 239 ('State sponsorship of DRM in effect treats information users as moral incompetents, incapable of deciding the proper use of information products.'). While few other scholars have analysed the downsides of perfect enforcement in the context of the Internet or elsewhere, some have warned against assuming that perfect enforcement is desirable.
See, eg, Mark A Lemley and R Anthony Reese, 'Reducing Digital Copyright Infringement Without Restricting Innovation' (2004) 56 Stanford Law Review 1345 at 1432–4; Alexandra Natapoff, 'Underenforcement' (2006) 75 Fordham Law Review 1715 at 1741; Eyal Zamir, 'The Efficiency of Paternalism' (1998) 84 Virginia Law Review 229 at 280 ('[P]erfect enforcement is rarely the optimal level of enforcement.'); Julie Cohen, 'Pervasively Distributed Copyright Enforcement', above n 29 ('The proper balance between enforcement and restraint is an age-old question in market-democratic societies, and solutions have always entailed compromise. It would be odd if the advent of digital networked technologies altered this dynamic so completely that middle-ground possibilities ceased to exist.').
45 See David R Johnson & David G Post, 'Law and Borders—The Rise of Law in Cyberspace' (1996) 48 Stanford Law Review 1367 at 1367, 1383, 1387–8 (arguing that self-governance can and should be central to cyberspace regulation); John Perry Barlow, 'A Declaration of the Independence of Cyberspace' (8 February 1996) available at accessed October 2007 ('Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.').
One can also argue against stronger enforcement regimes by objecting to the laws that will be enforced. For example, many of those who argue against increased copyright enforcement—undertaken through laws that broaden infringement penalties46 or through trusted systems that preempt infringement47—argue that copyright law itself is too expansive.48 For those who believe that intellectual property rights have gone too far, it is natural to argue against regimes that make such rights easier to enforce, independent of seeking to reform the copyright law itself.
Similarly, those who believe in lower taxes might object to a plan that makes it easier for intermediaries to collect and remit use and sales taxes for online transactions.49 Likewise, the large contingent of people who routinely engage in illegal online file sharing may naturally disfavour anything that interferes with these 46 See Eric Goldman, ‘A Road to No Warez: The No Electronic Theft Act and Criminal Copyright Infringement’ (2003) 82 Oregon Law Review 369 (discussing the history of the act and difficulties that have arisen when attempting to enforce it); Note, ‘Exploitative Publishers, Untrustworthy Systems, and the Dream of a Digital Revolution for Artists’ (2001) 114 Harvard Law Review 2438 at 2455–6 (asserting that the NET Act’s self-help mechanisms are likely to be ineffective because copy protections are ‘routinely cracked’); Declan McCullagh, ‘Perspective: The New Jailbird Jingle’ CNET News.com (27 January 2003) available at accessed October 2007 (chronicling the NET Act’s ineffectiveness). 47 For criticism of trusted system legislation, see Drew Clark, ‘How Copyright Became Controversial’ in Proceedings of the 12th Annual Conference on Computers, Freedom and Privacy (2002), sess. Future of Intellectual Property 1, available at accessed October 2007.
(criticising the DMCA); Julie E Cohen, ‘Lochner in Cyberspace: The New Economic Orthodoxy of “Rights Management”’ (1998) 97 Michigan Law Review 462 at 494–5 (characterising support for DMCA and other legislation enlarging intellectual property rights as ‘Lochner pure and simple’); Lisa J Beyer Sims, ‘Mutiny on the Net: Ridding P2P Pirates of Their Booty’ (2003) 53 Emory Law Journal 1887 at 1907, 1937–9 (describing objections to SBDTPA and DMCA § 1201 on grounds of overbreadth and interference with the fair use doctrine); Declan McCullagh, ‘New Copyright Bill Heading to DC’, Wired (7 September 2001) available at accessed October 2007 (describing responses to SSSCA); Letter from Shari Steele, Executive Dir, Electronic Frontier Foundation, to Fritz Hollings, Senator, and Ted Stevens, Senator (5 November 2001), available at accessed October 2007 (discussing the proposed SSSCA). 48 See Yochai Benkler, ‘Free as the Air to Common Use: First Amendment Constraints on Enclosure of the Public Domain’ (1999) 74 New York University Law Review 354 (asserting that expansive intellectual property rights constrain the availability of information); Yochai Benkler, ‘Through the Looking Glass: Alice and the Constitutional Foundations of the Public Domain’ (2003) 66 Law & Contemporary Problems 173 at 216–18 (criticising the NET Act and DMCA for expanding copyright protection in such a way that will chill expression); Neil Weinstock Netanel, ‘Locating Copyright Within the First Amendment Skein’ (2001) 54 Stanford Law Review 1 (arguing that the expansion of copyright law limits the incentivising effect of the regime and burdens speech); Pamela Samuelson, ‘Intellectual Property and the Digital Economy: Why the Anti-Circumvention Regulations Need to Be Revised’ (1999) 14 Berkeley Technology Law Journal 519 (criticising the DMCA as overly broad and describing some problems with expansive copyright protections).
49 See, eg, Dean F Andal, ‘Read My E-Mail, No New Taxes!’, Cal-Tax Digest (April 1997), available at accessed October 2007; see generally William V Vetter, ‘Preying on the Web: Tax Collection in the Virtual World’ (2001) 28 Florida State University Law Review 649 (focusing on constitutional and jurisdictional issues); Charles E McLure, Jr, ‘Taxation of Electronic Commerce: Economic Objectives, Technological Constraints, and Tax Laws’ (1997) 52 Tax Law Review 269. Perfect Enforcement 139 activities.50 To be sure, many of those people may download even though they believe it to be wrong—in which case they might welcome a system that better prevents them from yielding to temptation. Law professor William Stuntz notes the use of legal procedure—evolving doctrines of Fourth and Fifth Amendment protection—as a way of limiting the substantive application of unpopular laws in 18th- and 19th-century America such as those involving first heresy and sedition, and later railroad and antitrust regulation.51 In that context, he argues, judges interpreted the Fourth and Fifth Amendments in ways designed to increase the costs to law enforcement of collecting evidence from private parties. When the judiciary began defining and enforcing a right to privacy that limited the sorts of searches police could undertake, it became more difficult to successfully prosecute objectionable crimes like heresy, sedition or trade offences: ‘It is as if privacy protection were a proxy for something else, a tool with which courts or juries could limit the government’s substantive power.’52 Challenging the rise of tethered appliances helps maintain certain costs on the exercise of government power—costs that reduce the enforcement of objectionable laws. The drawback to arguing generally against perfect enforcement because one objects to the laws likely to be enforced is that it preaches to the choir.
Certainly, those who oppose copyright laws will also oppose changes to code that facilitate the law’s online enforcement. To persuade those who are more favourably disposed to enforcement of substantive laws using tethered appliances, we must look to other objections.

Portability and Enforceability without the Rule of Law

While it might be understandable that those opposed to a substantive law would also favour continued barriers to its enforcement, others might say that the price of living under the rule of law is that law ought to be respected, even if one disagrees with it. In this view, the way to protest an undesirable law is to pursue its modification or repeal, rather than to celebrate the difficulty of its enforcement.53 The rise of procedural privacy limits described by Stuntz was itself an artifact of the law—the decisions of judges with licence to interpret the Constitution. This legally sanctioned mandate is distinct from one allowing individuals to flout the 50 According to a 2005 Pew Internet & American Life Project study, 27% of adult Internet users reported engaging in file-sharing. Pew Internet & American Life Project, Internet Activities (11 January 2007) available at accessed October 2007. 51 See William J Stuntz, ‘The Substantive Origins of Criminal Procedure’ (1995) 105 Yale Law Journal 393 at 394–5. For a related discussion, which also draws on Stuntz, see Lessig, Code, above n 18, at 213. 52 Stuntz, previous n at 395. 53 Cf Lessig, Code, above n 18 at 309 (supporting exercise of free speech through democratic channels in societies observing the rule of law, rather than through ‘technological tricks’). law when they feel like it, simply because they cannot easily be prevented from engaging in the illicit act or caught. But not every society operates according to a framework of laws that are democratically promulgated and then enforced by an independent judiciary.
Governments like those of China or Saudi Arabia might particularly benefit from technological configurations that allow for inexpensive surveillance or the removal of material authored by political dissidents. In a world where tethered appliances dominate, the cat-and-mouse game tilts toward the cat. Recall that the FBI can secretly eavesdrop on any automobile with an OnStar navigation system by obtaining a judge’s order and ensuring that the surveillance does not otherwise disrupt the system’s functioning. In a place without the rule of law, the prospect of cars rolling off the assembly line surveillance-ready is particularly unsettling. China’s government has already begun experimenting with these sorts of approaches. For example, the PC telephone program Skype is not amenable to third-party changes and is tethered to Skype for its updates. Skype’s distribution partner in China has agreed to censor words like ‘Falun Gong’ and ‘Dalai Lama’ in its text messaging for the Chinese version of the program.54 Other services that are not generative at the technical layer have been similarly modified: Google.cn is censored by Google at the behest of the Chinese government, and Microsoft’s MSN Spaces Chinese blog service automatically filters out sensitive words from blog titles.55 There is an ongoing debate about the degree to which firms chartered in freer societies should assist in censorship or surveillance taking place in less free societies.56 The argument considered here is one layer deeper than that debate: if the information ecosystem at the cutting edge evolves into one that is not generative at its core, then authoritarian governments will naturally inherit an ability to enforce their wills more easily, without substantially needing to change technologies and services or to curtail the breadth of their influence. Because it is often less obvious to users and the wider world, the ability to enforce quietly using qualities of the technology itself is worrisome.
Technologies that lend themselves to an easy and tightly coupled expression of governmental power will simply be portable from one society to the next. Such portability makes irrelevant the question of how firms like Google and Skype should operate outside their home countries. This conclusion suggests that although some social gain may result from better enforcement of existing laws in free societies, the gain might be more than offset by better enforcement in societies that are less free—under repressive governments today, or anywhere in the future. If the gains and losses remain coupled, it might 54 See Marguerite Reardon, ‘Skype Bows to Chinese Censors’, CNet News.com (20 April 2006), available at accessed October 2007. 55 See Rebecca MacKinnon, ‘China’s Internet: Let a Thousand Filters Bloom’ YaleGlobal (28 June 2005) available at accessed October 2007. 56 See Jonathan L Zittrain & John G Palfrey, Jr, ‘Reluctant Gatekeepers: Corporate Ethics on a Filtered Internet’ in Ronald J Deibert et al (eds), Access Denied: The Practice and Policy of Global Internet Filtering (2008). make sense to favour retention of generative technologies to put what law professor James Boyle has called the ‘Libertarian gotcha’ to authoritarian regimes: if one wants technological progress and the associated economic benefits, one must be prepared to accept some measure of social liberalisation made possible with that technology.57 Like many regimes that want to harness the benefits of the market while forgoing political liberalisation, China is wrestling with this tension today.58 In an attempt to save money and establish independence from an overseas software vendor like Microsoft, China has encouraged the adoption of GNU/Linux,59 an operating system least amenable in its current form to appliancisation because anyone can modify it and install it on a non-locked-down endpoint PC.
China’s attempt, therefore, represents either a misunderstanding of the key role that endpoints can play in regulation or a calculated judgment that the benefits of international technology independence outweigh the costs of less regulability. If one objects to censorship in societies that have not developed the rule of law, one can support the maintenance of a generative core in information technology, minimising the opportunities for some societies that wish to exploit the information revolution to discover new tools for control.

Amplification and the Lock-in of Mistakes

When a regulator makes mistakes in the way it construes or applies a law, a stronger ability to compel compliance implies a stronger ability to compel compliance with all mandates, even those that are the results of mistaken interpretations. Gaps in translation may also arise between a legal mandate and its technological manifestation. This is especially true when technological design is used as a preemptive measure. Under US First Amendment doctrine, prior restraints on speech—preventing speech from occurring in the first place, rather than punishing it after the fact if indeed it is unlawful—are greatly disfavoured.60 Design features mandated to prevent speech-related behaviours, on the premise that such behaviours might turn out to be unlawful, could be thought to belong in just that category.61 Consider the Australian web-hosting company that automatically 57 See James Boyle, ‘Foucault in Cyberspace: Surveillance, Sovereignty, and Hardwired Censors’ (1997) 66 University of Cincinnati Law Review 177; see also Jonathan Zittrain, ‘A History of Online Gatekeeping’ (2006) 19 Harvard Journal of Law & Technology 253 at 295. Boyle believes the ‘Libertarian gotcha’ to be contingent, not inherent. In other words, because code can be changed, it is possible to take a technology and then refashion it to make it easier to regulate.
58 Fareed Zakaria, The Future of Freedom: Illiberal Democracy at Home and Abroad (reprint edn 2004) 81–5, 91–2, 156. 59 Ingrid Marson, ‘China: Local Software for Local People’, CNet News.com (14 November 2005) available at accessed October 2007. 60 See, eg, Bantam Books, Inc v Sullivan, 372 US 58, 83 (1963) (‘Any system of prior restraints of expression comes to this Court bearing a heavy presumption against its constitutional validity.’). See also Lawrence Tribe, American Constitutional Law (1999). 61 See Lyombe Eko, ‘New Medium, Old Free Speech Regimes: The Historical and Ideological Foundations of French & American Regulations of Bias-Motivated Speech and Symbolic Expression deletes all of its clients’ multimedia files every night unless it receives specific assurances up front that the files in a given directory are placed with the permission of the copyright owner or are uncopyrighted.62 Preemptive design may have a hard time tailoring the technical algorithms to the legal rules. Even with some ongoing human oversight, the blacklists of objectionable websites maintained by commercial filtering programs are consistently overbroad, erroneously placing websites into categories to which they do not belong.63 For example, when the US government sponsored a service to assist Iranians in overcoming Internet filtering imposed by the Iranian government, the US-sponsored service in turn sought to filter out pornographic sites so that Iranians would not use the circumvention service to obtain pornography.
The service filtered any site with ‘ass’ in its domain name—including usembassy.state.gov, the US Department of State’s online portal for its own overseas missions.64 In the realm of copyright, whether a particular kind of copying qualifies for a fair use defence is in many instances notoriously difficult to determine ahead of time.65 Some argue that broad attempts to embed copyright protections in technology fall short because the technology cannot easily take into account on the Internet’ (2006) 18 Loyola of Los Angeles International & Comparative Law Review 69 at 124 (noting a possible connection between US prior restraint doctrine and the US conception of the Internet as a ‘free marketplace of ideas’); John G Palfrey, Jr, & Robert Rogoyski, ‘The Move to the Middle: The Enduring Threat of “Harmful” Speech to the End-to-End Principle’ (2006) 21 Washington University Journal of Law & Policy 31 at 52 (discussing a Pennsylvania law requiring ISPs to deny access to Web sites containing child pornography and a court decision that declared the law unconstitutional, partly on prior restraint grounds) (citing Ctr for Democracy & Tech v Pappert, 337 F Supp 2d 606 (ED Pa 2004)); see also Zieper v Metzinger, 392 F Supp 2d 516 (SDNY 2005), aff’d, 474 F3d 60 (2d Cir 2007). 62 Exetel, ‘Hosting Support Facilities: Frequently Asked Questions’, available at accessed 4 July 2007. 63 See United States v Am.
Library Ass’n, 539 US 194, 215–16 (Breyer, J, concurring) (arguing that the standard should have been heightened scrutiny for a law requiring libraries to use filtering systems in order to receive public funding and noting that ‘[t]he [filtering] technology, in its current form, does not function perfectly, for to some extent it also screens out constitutionally protected materials that fall outside the scope of the statute (i.e., “overblocks”) and fails to prevent access to some materials that the statute deems harmful (i.e., “underblocks”)’); ACLU v Ashcroft, 322 F3d 240, 266–7 (3d Cir 2003), aff’d and remanded by 542 US 656 (2004) (‘We conclude that [COPA] is substantially overbroad in that it places significant burdens on Web publishers’ communication of speech that is constitutionally protected as to adults and adults’ ability to access such speech. In so doing, COPA encroaches upon a significant amount of protected speech beyond that which the Government may target constitutionally in preventing children’s exposure to material that is obscene for minors.’); Katherine A Miltner, ‘Note: Discriminatory Filtering: CIPA’s Effect on Our Nation’s Youth and Why the Supreme Court Erred in Upholding the Constitutionality of the Children’s Internet Protection Act’ (2005) 57 Federal Communications Law Journal 555 (criticising the Supreme Court’s American Library Association decision on constitutional grounds, including overbreadth). 64 OpenNet Initiative, ‘Unintended Risks and Consequences of Circumvention Technologies’ (5 May 2004), available at accessed October 2007. 65 Cf Lawrence Lessig, Free Culture: How Big Media Uses Technology and the Law to Lock Down Culture and Control Creativity (2004) 197 (observing that ‘[F]air use in America simply means the right to hire a lawyer’).
possible fair use defences.66 The law prohibiting the circumvention of trusted systems disregards possibilities for fair use—which might make sense, since such an exception could swallow the rule.67 Such judgments appear to rely on the fact that the materials within a trusted system can still be found and copied in non-trusted analog formats, thus digital prohibitions are never complete.68 The worry that a particular speech-related activity will be precluded by design is blunted when the technology merely makes the activity less convenient rather than preventing it altogether. However, if we migrate to an information ecosystem in which tethered appliances predominate, that analog safety valve will wane. For specific injunctions, the worries about mistakes may appear weaker. A specific injunction to halt an activity or destroy its fruits issues only after an adjudication. If we move to a regime in which individuals, and not just distributors, are susceptible to impoundment remedies for digital contraband, these remedies might be applied only after the status of the contraband has been officially determined.69 Indeed, one might think that an ability to easily recall infringing materials after the fact might make it possible to be more generous about allowing distribution in the first place—cases could proceed to final judgments rather than being functionally decided in earlier stages on the claim that continued distribution of the objectionable material would cause irreparable harm. If cats can easily be put back into bags, there can be less worry about letting them out to begin with. However, the ability to perfectly (in the sense of thoroughly) scrub everyone’s digital repositories of unlawful content may compromise the values that underlie the fear of prior restraints, even though the scrub would not be ‘prior’ in fact.
Preventing the copying of a work of copyrighted music stops a behaviour without removing the work from the public sphere, since presumably the work is still available through authorised channels. It is a different matter to eliminate entirely a piece of digital contraband. Such elimination can make it difficult to understand, re-evaluate, or even discuss what happened and why. In ruling against a gag order at a trial, the US Supreme Court worried that the order was an ‘immediate and 66 See Dan L Burk and Julie E Cohen, ‘Fair Use Infrastructure for Rights Management Systems’ (2001) 15 Harvard Journal of Law & Technology 41 at 50–51 (discussing how technological controls interact with fair use principles); Mark Gimbel, ‘Some Thoughts on Implications of Trusted Systems for Intellectual Property Law’ (1998) 50 Stanford Law Review 1671; see also Digital Rights Management Conference, available at accessed 29 March 2007 (containing links to articles and news about DRM and fair use). 67 See Storage Tech Corp v Custom Hardware Eng’g & Consulting, Inc, 421 F 3d 1307, 1318–21 (Fed Cir 2005); Universal City Studios, Inc v Corley, 273 F3d 429 (2d Cir 2001); 321 Studios v MGM Studios, Inc, 307 F Supp 2d 1085 (ND Cal 2004). 68 See Universal City Studios, Inc v Corley, previous n. 69 Cf Fair Use: The Story of the Letter U and the Numeral 2 (1995) (describing how copies of the band Negativland’s release U2 were impounded as part of a settlement agreement between the band and Island Records, who sued Negativland for the release’s use of U2’s name and samples of their music. The copies were not impounded until after the settlement established that the releases were contraband).
irreversible sanction’.70 ‘If it can be said that a threat of criminal or civil sanctions after publication “chills” speech, prior restraint “freezes” it at least for the time.’71 Post hoc scrubs are not immediate, but they have the prospect of being permanent and irreversible—a freezing of speech that takes place after it has been uttered, and no longer just ‘for the time’. That the speech had an initial opportunity to be broadcast may make a scrub less worrisome than if it were blocked from the start, but removing this information from the public discourse means that those who come after us will have to rely on secondary sources to make sense of its removal. To be sure, we can think of cases where complete elimination would be ideal. These are cases in which the public interest is not implicated, and for which continued harm is thought to accrue so long as the material circulates: leaked medical records, child abuse images and nuclear weapon designs.72 But the number of instances in which legal judgments effecting censorship are overturned or revised—years later—counsels that an ability to thoroughly enforce bans on content makes the law too powerful and its judgments too permanent, since the material covered by the judgment would be permanently blocked from the public view. Imagine a world in which all copies of once-censored books like Candide, The Call of the Wild and Ulysses had been permanently destroyed at the time of the censoring and could not be studied or enjoyed after subsequent decisionmakers lifted the ban.73 In a world of tethered appliances, the primary backstop against perfectly enforced mistakes would have to come from the fact that there would be different views about what to ban found among multiple sovereigns—so a particular piece of samizdat might live on in one jurisdiction even as it was made difficult to find in another.
The use of tethered appliances for surveillance may be least susceptible to an objection of mistake, since surveillance can be used to start a case rather than close it. For example, the use of cameras at red traffic lights has met with some objection because of the level of thoroughness they provide—a sense of snooping simply not possible with police alone doing the watching.74 And there are instances where the cameras report false positives.75 However, those 70 Nebraska Press Ass’n v Stuart, 427 US 539, 559 (1976). 71 Ibid. 72 In 1979, the US government blocked publication of the Progressive article ‘The H-Bomb Secret: How We Got It, Why We’re Telling It’, which included information on how nuclear weapons functioned. The case was later dropped. See United States v Progressive, Inc, 467 F Supp 990 (WD Wis 1979); see also A DeVolpi et al, Born Secret: The H-Bomb, the ‘Progressive’ Case, and National Security (1981). 73 See John M Ockerbloom, Books Banned Online, available at accessed 1 June 2007. 74 Charles Memminger, ‘Law Enforcement Inc Is Next Big Private Industry’ Honolulu Star-Bulletin (8 July 2001) available at accessed October 2007 (‘[The use of traffic light cameras] feels icky, hints at technology run amok and provides us with a glance into the future where, smile, we’re constantly on some candid camera or another and privacy will be a concept as quaint as horse-drawn carriages and Nintendo 64.’). 75 See, eg, Nicholas J Garber et al, An Evaluation of Red Light Camera (Photo-Red) Enforcement Programs in Virginia (January 2005) 108–10 available at accessed October 2007 (discussing possible malfunctions of the cameras and possibility of false positives). accused can have their day in court to explain or deny the charges inspired by the cameras’ initial reviews.
Moreover, since running a red light might cause an accident and result in physical harm, the cameras seem well-tailored to dealing with a true hazard, and thus less objectionable. And the mechanisation of identifying violators might even make the system more fair, because the occupant of the vehicle cannot earn special treatment based on individual characteristics like race, wealth or gender. The prospects for abuse are greater when the cameras in mobile phones or the microphones of OnStar can be serendipitously repurposed for surveillance. These sensors are much more invasive and general purpose.

Bulwarks against Government

There has been a simmering debate about the meaning of the Second Amendment to the US Constitution, which concerns ‘the right of the people to keep and bear Arms’.76 It is not clear whether the constitutional language refers to a collective right that has to do with militias, or an individual one that could more readily be interpreted to preclude gun control legislation. At present, most reported decisions and scholarly authority favour the former interpretation, but the momentum may be shifting.77 For our purposes, we can extract one strand from this debate without having to join it: one reason to prohibit the government’s dispossession of individual firearms is to maintain the prospect that individuals could revolt against a tyrannical regime, or provide a disincentive to a regime considering going down such a path.78 These check-on-government notions are echoed by some members of technical communities, such as those who place more faith in their own encryption to prevent secrets from being compromised than in any government guarantees of self-restraint. Such a description may unnecessarily demean the techies’ worries as a form of paranoia.
Translated into a more formal and precise claim, one might worry that the boundless but unnoticeable searches permitted by digital advances can be as disruptive to the equilibrium between citizen and law enforcement as any enforcement-thwarting tools such as encryption. The equilibrium between citizens and law enforcement has crucially relied on some measure of citizen cooperation. Abuse of surveillance has traditionally been limited not simply by the conscience of those searching or by procedural rules prohibiting the introduction of illegally obtained evidence, but also by the public’s own objections. If occasioned through tethered appliances, such surveillance can be undertaken almost entirely in secret, both as a general matter and for any specific search. Stuntz has explained the value of a renewed focus on physical 76 US Constitution amend. II. 77 See, eg, Parker v District of Columbia, 2007 WL 702084, at *9–11 (DC Cir 9 March 2007); Tony Mauro, ‘Scholar’s Shift in Thinking Angers Liberals’ USA Today (27 August 1999) available at accessed October 2007. 78 Parker v District of Columbia, previous n at *9. ‘data mining’ via group sweeps—for example, the searching of all cars near the site of a terrorist threat—and pointed out that such searches are naturally (and healthily) limited because large swathes of the public are noticeably burdened by them.79 The public, in turn, can effectively check such government action by objecting through judicial or political processes, should the sweeps become too onerous. No such check is present in the controlled digital environment; extensive searching can be done with no noticeable burden—indeed, without notice of any kind—on the parties searched.
For example, the previously mentioned FBI use of an OnStar-like system to listen in on the occupants of a car is public knowledge only because the manufacturer chose to formally object.80 The rise of tethered appliances significantly reduces the number and variety of people and institutions required to apply the state’s power on a mass scale. It removes a practical check on the use of that power. It diminishes a rule’s ability to attain legitimacy as people choose to participate in its enforcement or at least not stand in its way. A government able to pressure the provider of BlackBerries could insist on surveillance of emails sent to and from each device.81 And such surveillance would require few people doing the enforcement work. Traditionally, ongoing mass surveillance or control would require a large investment of resources and, in particular, people. Eavesdropping has required police willing to plant and monitor bugs; seizure of contraband has required agents willing to perform raids. Further, a great deal of routine law enforcement activity has required the cooperation of private parties, such as landlords, banks and employers. The potential for abuse of governmental power is limited not only by whatever procedural protections are afforded in a jurisdiction that recognises the rule of law, but also more implicitly by the decisions made by parties asked to assist. Sometimes the police refuse to fire on a crowd even if a dictator orders it, and less dramatically, whistleblowers among a group of participating enforcers can slow down, disrupt, leak or report on anything they perceive as abusive in a law enforcement action.82 79 See William J Stuntz, ‘Local Policing After the Terror’ (2002) 111 Yale Law Journal 2137 at 2163, 2165–6. 80 349 F3d 1132 (9th Cir 2003). Similar instances of burdenless yet extensive search made possible by the digital space have continued to emerge.
In at least one recent case, the FBI employed the technique of installing spyware via email for surveillance purposes. See Declan McCullagh, ‘FBI Remotely Installs Spyware to Trace Bomb Threat’ Cnet News (18 July 2007) available at accessed October 2007. Recent proposals by German officials would broadly legalise similar methods for counterterrorism efforts. See Melissa Eddy, ‘Germany Wants to Spy on Suspects via Web’ Associated Press (21 August 2007), available at accessed October 2007. 81 See Hepting v AT&T Corp, 439 F Supp 2d 974 (ND Cal 2006) (denying summary judgment motion in a class-action lawsuit where plaintiffs allege that the defendant telecommunication carrier was collaborating with the National Security Agency in a massive warrantless surveillance program). 82 Richard Posner cites whistleblowers as the reason not to worry about routine automated government data mining of citizen communications. See Richard A Posner, ‘Editorial: Our Domestic Intelligence Crisis’, Washington Post (21 December 2005) at A31. Compare a citywide smoking ban that enters into effect as each proprietor acts to enforce it—under penalty for failing to do so, to be sure—with an alternative ordinance implemented by installing highly sensitive smoke detectors in every public place, wired directly to a central enforcement office. Some in favour of the ordinance may still wish to see it implemented by people rather than mechanical fiat. The latter encourages the proliferation of simple punishment-avoiding behaviour that is anathema to open, participatory societies.
As law professor Lior Strahilevitz points out, most laws are not self-enforcing, and a measure of the law’s value and importance may be found in just how much those affected by it (including as victims) urge law enforcement to take a stand, or invoke what private rights of action they may have.83 Strahilevitz points to laws against vice and gambling, but the idea can apply to the problems arising from technology as well. Law ought to be understood not simply by its meaning as a text, but by the ways in which it is or is not internalised by the people it affects—whether as targets of the law, victims to be helped by it, or those charged with enforcing it.84
The Benefits of Tolerated Uses
A particular activity might be illegal, but those with standing to complain about it sometimes hold back on trying to stop it while they determine whether they really object. If they decide they do object, they can sue. Tim Wu calls this phenomenon ‘tolerated uses’,85 and copyright infringement shows how it can work. When Congress passed the Digital Millennium Copyright Act of 1998 (DMCA),86 it sought to enlist certain online service providers to help stop the unauthorised spread of copyrighted material. ISPs that just routed packets for others were declared not responsible for copyright infringement taking place over their communication channels.87 Intermediaries that hosted content—such as the CompuServe and Prodigy forums, or Internet hosting sites such as Geocities.com—had more responsibility. They would be unambiguously clear of liability 83 See Lior Jacob Strahilevitz, ‘“How’s My Driving?” for Everyone (and Everything?)’ (2006) 81 New York University Law Review 1699. 84 For an elaboration of objections along these lines, including rights to engage in acts of conscience, see Burk and Gillespie, above n 44. 85 See Tim Wu, ‘Does YouTube Really Have Legal Problems?’ Slate (26 October 2006) available at accessed October 2007.
See also Julie Cohen, ‘Pervasively Distributed Copyright Enforcement’, above n 29 (‘Pervasively distributed copyright enforcement portends fundamental change in these processes. The linked regimes of authorization and constraint will constrict the “breathing room” that is a critical constituent of each of them.’) and Tim Wu, ‘Tolerated Use & the Return of Notice-Based Copyright’ (forthcoming). 86 Pub L No 105–304, 112 Stat 2860 (1998) (codified in scattered sections of 17 USC). 87 17 USC § 512(a) (2000). This is true at least so long as the ISPs have a policy for ‘terminating repeat infringers’, which in practice has not affected the way they operate.
148 Jonathan Zittrain
for copyright infringement only if they acted expeditiously to take down infringing material once they were specifically notified of that infringement.88 Although many scholars have pointed out deficiencies and opportunities for abuse in this notice-and-takedown regime,89 the scheme reflects a balance. Under the DMCA safe harbours, intermediaries have been able to provide flexible platforms that allow for a broad variety of amateur expression. For example, Geocities and others have been able to host personal home pages, precursors to the blogs of today, without fear of copyright liability should any of the home page owners post infringing material—at least so long as they act after specific notification of an infringement. Had these intermediaries stopped offering these services for fear of crushing liability under a different legal configuration, people would have had far fewer options to broadcast online: they could have either hosted content through their own personal PCs, with several incumbent shortcomings,90 or forgone broadcasting altogether. Thanks to the incentives of notice-and-takedown, copyright holders gained a ready means of redress for the most egregious instances of copyright infringement, without chilling individual expression across the board in the process.
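The notice-and-takedown balance described above can be reduced to a toy model (purely illustrative — the class, method names and liability test are my assumptions, not the statute’s actual operation): hosts carry content without pre-screening, and risk losing the safe harbour only for items they fail to remove after a specific notice.

```python
from dataclasses import dataclass, field

@dataclass
class HostingIntermediary:
    """Toy model of a s 512(c)-style host: material stays up until the
    copyright holder sends a specific notice; the host keeps its safe
    harbour by taking the noticed item down expeditiously."""
    content: dict = field(default_factory=dict)   # item_id -> uploader
    noticed: set = field(default_factory=set)     # item_ids complained about

    def upload(self, item_id: str, uploader: str) -> None:
        self.content[item_id] = uploader          # no pre-screening required

    def receive_notice(self, item_id: str) -> None:
        self.noticed.add(item_id)
        self.content.pop(item_id, None)           # expeditious takedown

    def risks_liability(self, item_id: str) -> bool:
        # Exposure arises only for noticed items the host failed to remove.
        return item_id in self.noticed and item_id in self.content

host = HostingIntermediary()
host.upload("fan-video", "alice")
print(host.risks_liability("fan-video"))   # False: un-noticed material is safe-harboured
host.receive_notice("fan-video")
print(host.risks_liability("fan-video"))   # False: noticed, but removed expeditiously
```

The point of the model is the asymmetry: nothing obliges the host to hunt for infringement, so material the copyright holder has not (yet) complained about remains up — the ‘tolerated uses’ Wu describes.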
The DMCA legal regime supports the procrastination principle, allowing for experimentation of all sorts and later reining in excesses and abuses as they happen, rather than preventing them from the outset. Compelling copyright holders to specifically demand takedown may seem like an unnecessary burden, but it may be helpful to them because it allows them to tolerate some facially infringing uses without forcing a blanket choice between enforcement and no enforcement. Several media companies and publishers simply have not figured out whether YouTube’s and others’ excerpts of their material are friend or foe. Companies are not monolithic, and there can be dissenting views within a company on the matter. A company with such diverse internal voices cannot come right out and give an even temporary blessing to apparent copyright infringement. A blessing would cure the material in question of its unlawful character, because the infringement would then be authorised. Yet at the same time, a copyright holder may be loath to issue DMCA notices to try to get material removed each time it appears, because clips can serve a valuable promotional function. 88 Copyright owners subsequently launched a comprehensive campaign to use the DMCA to take down content. See, eg, ‘Chilling Effects, Chilling Effects Clearinghouse’, available at accessed 1 June 2007; Press Release, ‘Recording Indus. Ass’n of Am., Worldwide Music Industry Coordinates Its Strategy Against Piracy’ (28 October 1999) available at accessed October 2007. 89 See, eg, Lawrence Lessig, ‘The Internet Under Siege’ (2001) 127 Foreign Policy 56 available at accessed October 2007; Yochai Benkler, ‘Free as the Air to Common Use: First Amendment Constraints on Enclosure of the Public Domain’, above n 48 at 414–29; ‘Note: The Criminalization of Copyright Infringement in the Digital Era’ (1999) 112 Harvard Law Review 1705.
90 The pages would then be available only when those PCs were turned on, and when not too many other people were viewing them. Further, it would be much more difficult to publish anonymously.
The DMCA regime maintains a loose coupling between the law’s violation and its remedy, asking publishers to step forward and affirmatively declare that they want specific material wiped out as it arises and giving publishers the luxury to accede to some uses without forcing intermediaries to assume that the copyright holder would have wanted the material to be taken down. People might make videos that include copyrighted background music or television show clips and upload them to centralised video sharing services like YouTube. But YouTube does not have to seek these clips out and take them down unless it receives a specific complaint from the copyright holder. While requiring unprompted attempts at copyright enforcement by a firm like YouTube may not end up being unduly burdensome to the intermediary—it all depends on how its business model and technology are structured—requiring unprompted enforcement may end up precluding uses of copyrighted material to which the author or publisher actually does not object, or on which it has not yet come to a final view.91 Thus there may be some cases when preemptive regimes can be undesirable to the entities they are designed to help. A preemptive intervention to preclude some particular behaviour deprives the people who might complain about it of the chance to decide that they are willing, after all, to tolerate it. Few would choose to tolerate a murder, making it a good candidate for preemption through design, were that possible,92 but the intricacies of the markets and business models involved in the distribution of intellectual works mean that reasonable copyright holders could disagree on whether it would be a good thing to prevent certain unauthorised distributions of their works.
The generative history of the Internet shows that allowing openness to third-party innovation from multiple corners and through multiple business models (or no business model at all) ends up producing widely adopted, socially useful applications not readily anticipated or initiated through the standard corporate production cycle.93 For example, in retrospect, permitting the manufacture of VCRs was a great boon to the publishers who were initially opposed to it. The entire video rental industry was not anticipated by publishers, yet it became a substantial source of revenue for them.94 Had Hush-a-Phones, Carterfones and modems required preapproval, or been erasable at the touch of a button the way that an EchoStar DVR of today can be killed, the decisions to permit them might have gone the other way, and AT&T would not have benefited as people found new and varied uses for their phone lines. Some in the music, television and movie industries are embracing cheap networks and the free flow of bits, experimenting with advertising models similar to those pioneered for free television, in which the more people who watch, the more money the publishers can make. For instance, the BBC has made a deal with the technology firm Azureus, makers of a peer-to-peer BitTorrent client that has been viewed as contraband on many university campuses and corporate networks.95 Users of Azureus’s software will now be able to download BBC television programmes for free, and with authorisation, reflecting both a shift in business model for the BBC and a conversion of Azureus from devil’s tool to helpful distribution vehicle. BitTorrent software ensures that people upload to others as they download, which means that the BBC will be able to release its programmes online without incurring the costs of a big bandwidth bill because many viewers will be downloading from fellow viewers rather than the BBC. EMI is releasing music on iTunes without digital rights management—charging more for such unfettered versions.96 The tools that we now take as so central to the modern Internet, including the Internet browser, also began and often remain on uncertain legal ground. As one surfs the Internet, it is easy to peek behind the curtain of most Web sites by asking the browser to ‘view source’, thereby uncovering the code that generates the viewed pages. Users can click on nearly any text or graphic they see and promptly copy it to their own Web sites or save it permanently on their own PCs. The legal theories that make these activities possible are tenuous. Is it an implied license from the Web site owner? Perhaps, but what if the Web site owner has introductory text that demands that no copies like that be made?97 Is it fair use? Perhaps. In the United States, fair use is determined by a fuzzy four-factor test that in practice rests in part on habit and custom, on people’s expectations.98 When a technology is deployed early, those expectations are unsettled, or perhaps settled in the wrong direction, especially among judges who might be called upon to apply the law without themselves having fully experienced the technologies in question. A gap between deployment and regulatory reaction gives the economic and legal systems time to adapt, helping ensure that doctrines like fair use are applied appropriately. 91 Of course, publishers still might like to be able to designate a particular clip as infringing and see all instances of it automatically removed. That is a narrower demand than wanting any infringing clip to be identified automatically in the first instance. 92 Gun control would appear to be a policy designed to preempt violent crimes, but I have promised not to enter that debate here. 93 See Jessica Litman, Digital Copyright (2001). 94 See Clayton Collins, ‘Why Blockbuster Clings to Its DVDs and Rentals’ Christian Science Monitor (24 February 2005) available at accessed October 2007 (reporting that the US video-rental business had $8.2 billion in rental revenue in 2003 and $14 billion in VHS and DVD sales). Jack Valenti, former head of the Motion Picture Association of America, warned at a Congressional hearing that ‘the VCR is to the movie industry what the Boston Strangler was to a woman alone’. ‘Home Recording of Copyrighted Works: Hearings on HR 4783, HR 4794, HR 4808, HR 5250, HR 5488, and HR 5705 Before the Subcomm. on Courts, Civil Liberties, and the Admin. of Justice of the House Comm. on the Judiciary’, 97th Cong (1983) (statement of Jack Valenti, President, Motion Picture Association of America). (He later said that the MPAA did not want to prevent the VCR’s deployment; it simply wanted to be able, through a favorable ruling, to withhold permission for sale of the technology until manufacturers agreed to a per-unit fee on VCRs and blank videocassettes that would be remitted to the publishers.) 95 ‘BBC Moves to File-sharing Sites’ BBC News (20 December 2006) available at accessed October 2007. 96 Press Release, Apple, ‘Apple Unveils Higher Quality DRM-Free Music on the iTunes Store’ (2 April 2007) available at accessed October 2007. 97 Cf Specht v Netscape Commc’ns Corp, 150 F Supp 2d 585, 594 (SDNY 2001) (‘The few courts that have had occasion to consider click-wrap contracts have held them to be valid and enforceable.’). 98 See 17 USC § 107 (2000).
The Undesirable Collapse of Conduct and Decision Rules
Law professor Meir Dan-Cohen describes law as separately telling people how to behave and telling judges what penalties to impose should people break the law.
In more general terms, he has observed that law comprises both conduct rules and decision rules.99 There is some disconnect between the two: people may know what the law requires without fully understanding the ramifications for breaking it.100 This division—what he calls an ‘acoustic separation’—can be helpful: a law can threaten a tough penalty in order to ensure that people obey it, but then later show unadvertised mercy to those who break it.101 If the mercy is not telegraphed ahead of time, people will be more likely to follow the law, while still benefiting from a lesser penalty if they break it and have an excuse to offer, such as duress. Perfect enforcement collapses the public understanding of the law with its application, eliminating a useful interface between the law’s terms and its application. Part of what makes us human are the choices that we make every day about what counts as right and wrong, and whether to give in to temptations that we believe to be wrong. In a completely monitored and controlled environment, those choices vanish. One cannot tell whether one’s behaviour is an expression of character or is merely compelled by immediate circumstance. Of course, it may be difficult to embrace one’s right to flout the law if the flouting entails a gross violation of the rights of another. Few would uphold the freedom of someone to murder as ‘part of what makes us human’. So we might try to categorise the most common lawbreaking behaviours online and see how often they relate to ‘merely’ speech-related wrongs rather than worse transgressions. This is just the sort of calculus by which prior restraints are disfavoured, especially when they attach to speech, rather than when they are used to prevent lawbreaking behaviours such as those that lead to physical harm. If most of the 99 See Meir Dan-Cohen, ‘Decision Rules and Conduct Rules: On Acoustic Separation in Criminal Law’ (1984) 97 Harvard Law Review 625 at 626–30.
100 Consider, for example, the penalties for copyright infringement. Under the US copyright statutory damages provision, 17 USC § 504(c), a copyright plaintiff may collect between $750 and $30,000 per work infringed by a ‘regular’ infringer. Courts have wide discretion to choose a number within this range, and may consider factors such as deterrence, harm to the plaintiff’s reputation, and value of the work. Thus, if a peer-to-peer user shares one hundred works and a court chooses a mid-range figure like $10,000, a typical downloader could be held liable for $1,000,000. This may be an example of an acoustic separation opposite from Dan-Cohen’s model—penalties far harsher than what a citizen would anticipate. 101 This process appears to be at work when professors deal out harsh midterm grades, but then temper those grades by adjusting the final exam.
abuses sought to be prevented are well addressed through post hoc remedies, and if they might be adequately discovered through existing law enforcement mechanisms, one should disfavour perfect enforcement to preempt them. At the very least, the prospect of abuse of powerful, asymmetric law enforcement tools reminds us that there is a balance to be struck rather than an unmitigated good in perfect enforcement.
Web 2.0 and the End of Generativity
The situation for online copyright illustrates that for perfect enforcement to work, generative alternatives must not be widely available.102 In 2007, the movie industry and technology makers unveiled a copy protection scheme for new high-definition DVDs to correct the flaws in the technical protection measures applied to regular DVDs over a decade earlier.
The new system was compromised just as quickly; instructions quickly circulated describing how PC users could disable the copy protection on HD-DVDs.103 So long as the generative PC remains at the centre of the modern information ecosystem, the ability to deploy trusted systems with restrictions that interfere with user expectations is severely limited: tighten a screw too much, and it will become stripped. So could the generative PC ever really disappear? As David Post wrote in response to a law review article that was a precursor to this book, ‘a grid of 400 million open PCs is not less generative than a grid of 400 million open PCs and 500 million locked-down TiVos’.104 Users might shift some of their activities to tethered appliances in response to security threats, and they might even find themselves using locked-down PCs at work or in libraries and Internet cafés. But why would they abandon the generative PC at home? The prospect may be 102 Law professor Randal Picker argues in ‘Rewinding Sony: The Evolving Product, Phoning Home and the Duty of Ongoing Design’ (2005) 55 Case Western Reserve Law Review 749 at 766–8 that legal liability for PC software authors ought to be structured so that producers are encouraged to be able to update a product from afar if it turns out that the product enables infringing uses and an update would stop them. This is a strong but dangerous argument. Indeed, gatekeeping responsibilities might not stop at a software author’s own products. OS makers could be asked to become gatekeepers for applications running on their systems. Consider, for example, the technical ease with which an OS maker could disable functionality on a tethered PC of software such as DeCSS, which enables decryption of DVDs and whose distributors have been successfully sued. Any vendor of tethered software could be pressured to take such action, possibly removing the capability of noninfringing uses at the same time.
The core problem with Picker’s proposal, even for those software producers who resemble traditional gatekeepers, is that it fails to take into account the generative loss from compelling software originators to retain control. 103 See Steve Sechrist, ‘Day of Reckoning for AACS Copyright Protection’ Display Daily (20 February 2007) available at accessed October 2007. 104 David Post, ‘Comment on the Generative Internet’, available at accessed June 2008.
found in ‘Web 2.0’. As mentioned earlier, in part this label refers to generativity at the content layer, on sites like Wikipedia and Flickr, where content is driven by users.105 But it also refers to something far more technical—a way of building Web sites so that users feel less like they are looking at Web pages and more like they are using applications on their very own PCs.106 New online map services let users click to grasp a map section and move it around; new Internet mail services let users treat their online email repositories as if they were located on their PCs. Many of these technologies might be thought of as technologically generative because they provide hooks for developers from one Web site to draw upon the content and functionality of another—at least if the one lending the material consents.107 Yet the features that make tethered appliances worrisome—that they are less generative and that they can be so quickly and effectively regulated—apply with equal force to the software that migrates to become a service offered over the Internet. Consider Google’s popular map service.
It is not only highly useful to end users: it also has an open API (application programming interface) to its map data,108 which means that a third party Web site creator can start with a mere list of street addresses and immediately produce on her site a Google Map with a digital push-pin at each address.109 This allows any number of ‘mash-ups’ to be made, combining Google Maps with third party geographic datasets. Internet developers are using the Google Maps API to create Web sites that find and map the nearest Starbucks, create and measure running routes, pinpoint the locations of traffic light cameras and collate candidates on dating sites to produce instant displays of where one’s best matches can be found.110 Because it allows coders access to its map data and functionality, Google’s mapping service is generative. But it is also contingent: Google assigns each Web developer a key and reserves the right to revoke that key at any time, for any reason—or to terminate the whole Google Maps service.111 It is certainly understandable that Google, in choosing to make a generative service out of something in which it 105 See, eg, Tim O’Reilly, ‘What Is Web 2.0’ O’Reilly (30 September 2005) available at accessed October 2007. 106 See Wikipedia, ‘Web 2.0’, available at accessed 1 June 2007, 09:00 GMT. 107 See, eg, Mapquest, ‘Copyright Information’ available at accessed 1 June 2007; Hotmail, ‘Microsoft Passport Network Terms of Use’ available at accessed 1 June 2007; Gmail Terms of Use, available at accessed 1 June 2007. 108 Google Maps API, available at accessed 1 June 2007. 109 Ibid. 110 Posting of Mike Pegg to Google Maps Mania, ‘25 Things to Do with Google Maps Mashups’, available at accessed 11 November 2006, 21:20. 111 See Google Maps API ‘Terms of Use’, available at accessed 1 June 2007.
has invested heavily, would want to control it.
But this puts within the control of Google, and anyone who can regulate Google, all downstream uses of Google Maps—and maps in general, to the extent that Google Maps’ popularity means other mapping services will fail or never be built. Software built on open APIs that can be withdrawn is much more precarious than software built under the old PC model, where users with Windows could be expected to have Windows for months or years at a time, whether or not Microsoft wanted them to keep it. To the extent that we find ourselves primarily using a particular online service, whether to store our documents, photos or buddy lists, we may find switching to a new service more difficult, as the data is no longer on our PCs in a format that other software can read. This disconnect can make it more difficult for third parties to write software that interacts with other software, such as desktop search engines that can currently paw through everything on a PC in order to give us a unified search across a hard drive. Sites may also limit functionality that the user expects or assumes will be available. In 2007, for example, MySpace asked one of its most popular users to remove from her page a piece of music promotion software that was developed by an outside company. She was using it instead of MySpace’s own code.112 Google unexpectedly closed its unsuccessful Google Video purchasing service, and remotely disabled users’ access to content they had purchased; after an outcry, Google offered limited refunds instead of restoring access to the videos.113 Continuous Internet access thus is not only facilitating the rise of appliances and PCs that can phone home and be reconfigured by their vendors at any moment. It is also allowing a wholesale shift in code and activities from endpoint PCs to the Web. There are many functional advantages to this, at least so long as one’s Internet connection does not fail.
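The contingency just described — a service that is generative so long as the provider allows, and revocable in a single step — can be sketched with a hypothetical keyed API (all names and behaviour here are illustrative assumptions, not Google’s actual Maps API):

```python
import secrets

class MapService:
    """Hypothetical keyed API: open to third-party mash-ups while a
    developer's key is live, but revocable by the provider at any time."""
    def __init__(self):
        self._live_keys = set()

    def issue_key(self) -> str:
        key = secrets.token_hex(8)
        self._live_keys.add(key)
        return key

    def revoke_key(self, key: str) -> None:
        # One line at the provider's end disables every downstream mash-up.
        self._live_keys.discard(key)

    def geocode(self, key: str, address: str) -> tuple:
        if key not in self._live_keys:
            raise PermissionError("API key revoked or invalid")
        # Fake but stable coordinates derived from the text (illustration only).
        return (sum(map(ord, address)) % 180 - 90, len(address) % 360 - 180)

service = MapService()
key = service.issue_key()
pin = service.geocode(key, "1600 Amphitheatre Parkway")  # mash-up works
service.revoke_key(key)
# Any further service.geocode(key, ...) now raises PermissionError.
```

Contrast the old PC model: software already installed on a user’s machine keeps running whether or not the vendor approves, whereas everything built on the keyed service above exists only at the provider’s continuing pleasure.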
When users can read and compose email online, their inboxes and outboxes await no matter whose machines they borrow—or what operating system the machines have—so long as they have a standard browser. It is just a matter of getting to the right Web site and logging in. We are beginning to be able to use the Web to do word processing, spreadsheet analyses, indeed, nearly anything we might want to do. Once the endpoint is consigned to hosting only a browser, with new features limited to those added on the other end of the browser’s window, consumer demand for generative PCs can yield to demand for boxes that look like PCs but instead offer only that browser. Then, as with tethered appliances, when Web 2.0 services change their offerings, the user has no ability to keep using an older version, as one might do with software that stops being actively made available. 112 Brad Stone, ‘MySpace Restrictions Upset Some Users’ New York Times (20 March 2007) available at accessed October 2007. 113 See Michael Liedtke, ‘Google to Stop Web Video Rentals, Sales’, Yahoo! News (10 August 2007) available at accessed 13 August 2007; BoingBoing, ‘Google Video Robs Customers of the Videos They “Own”’, available at accessed 13 August 2007.
This is an unfortunate transformation. It is a mistake to think of the Web browser as the apex of the PC’s evolution, especially as new peer-to-peer applications show that PCs can be used to ease network traffic congestion and to allow people directly to interact in new ways.114 Just as those applications are beginning to show promise—whether as ad hoc networks that PCs can create among each other in the absence of connectivity to an ISP, or as distributed processing and storage devices that could apply wasted computing cycles to faraway computational problems115—there is less reason for those shopping for a PC to factor generative capacity into a short-term purchasing decision.
As a 2007 Wall Street Journal headline put it: ‘“Dumb Terminals” Can Be a Smart Move: Computing Devices Lack Extras but Offer Security, Cost Savings’.116
* * *
Generative networks like the Internet can be partially controlled, and there is important work to be done to enumerate the ways in which governments try to censor the Net.117 But the key move to watch is a sea change in control over the endpoint: lock down the device, and network censorship and control can be extraordinarily reinforced. The prospect of tethered appliances and software as service permits major regulatory intrusions to be implemented as minor technical adjustments to code or requests to service providers. Generative technologies ought to be given wide latitude to find a variety of uses—including ones that encroach upon other interests. These encroachments may be undesirable, but they may also create opportunities to reconceptualise the rights underlying the threatened traditional markets and business models. An information technology environment capable of recursive innovation118 in the realms of business, art and culture will best thrive with continued regulatory forbearance, recognising that the disruption occasioned by generative information technology often amounts to a long-term gain even as it causes a short-term threat to some powerful and legitimate interests. 114 One example of this would be BitTorrent, ‘a peer-assisted, digital content delivery platform’ that distributes the cost of sharing files by breaking them down into smaller pieces that are each supplied by separate peers in the network. BitTorrent, Company Overview, available at accessed 1 June 2007. 115 A variety of programs already allow users to contribute idle CPU time to far-flung projects.
See, eg, SETI@home, ‘The Science of SETI@home’, available at accessed 1 June 2007; ClimatePrediction.Net, available at accessed 1 June 2007; World Community Grid, ‘About Us’, available at accessed 1 June 2007; Rosetta@home, available at accessed 1 June 2007. 116 Christopher Lawton, ‘“Dumb Terminals” Can Be a Smart Move’, Wall Street Journal (30 January 2007) at B3, available at accessed October 2007. 117 See generally Ronald J Deibert et al (eds), Access Denied: The Practice and Policy of Global Internet Filtering (2008). 118 Recursively generative applications are capable of producing not only new works, but also new generative applications that can then be used to create new works.
The generative spirit allows for all sorts of software to be built, and all sorts of content to be exchanged, without anticipating what markets want—or what level of harm can arise. The development of much software today, and thus of the generative services facilitated at the content layer of the Internet, is undertaken by disparate groups, often not acting in concert, whose work can become greater than the sum of its parts because it is not funnelled through a single vendor’s development cycle.119 The keys to maintaining a generative system are to ensure its internal security without resorting to lockdown, and to find ways to enable enough enforcement against its undesirable uses without requiring a system of perfect enforcement. To do this, I suggest that we should explore how some enterprises that are generative at the content level have managed to remain productive without requiring extensive lockdown or external regulation, and apply those lessons to the future of the Internet. 119 See Jonathan Zittrain, ‘Normative Principles for Evaluating Free and Proprietary Software’ (2004) 71 University of Chicago Law Review 265, 272–3 (describing an open development model for software).
7
Criteria for Normative Technology
The Acceptability of ‘Code as Law’ in Light of Democratic and Constitutional Values
BERT-JAAP KOOPS
I. Introduction*
Technology has always had a certain normative element—it is never neutral. However, over the past decade or two, something has been changing. With the advent of information and communication technologies (ICT) and the Internet, technology is being used more and more intentionally as an instrument to regulate human behaviour. Notable examples are Digital Rights Management (DRM) systems (enforcing—or extending—copyright), filtering systems (which block ‘harmful’ content), Privacy Enhancing Technologies (PETs, which allow citizens the control over personal data that they are losing in the digital age), and terminator technologies (which prevent genetically modified crops from reproducing, forcing farmers to buy new seeds each year). Sometimes, as with PETs, norms are incorporated in technology to enforce existing legal rules, but in other cases, they are built in to supplement or extend legal rights, thus setting new norms. This development means that the normative role of technology is becoming more important, to such an extent that we may speak of a qualitative difference: technology, increasingly, enforces or supplements law as an important regulatory instrument. That many such efforts are not very effective so far—DRM and filtering systems are often easily circumvented, and PETs have in the past decade not yet surpassed the stage of ‘promising concept’—does not diminish the potential import of this finding: however tentatively, technology is increasingly used intentionally to enforce or establish norms.
Technology that sets new norms clearly raises questions about the acceptability of those norms, but even if technology is used ‘only’ to enforce existing legal norms, its acceptability can be questioned, * This paper was written as part of a project on law, technology, and shifting balances of power, funded by the Dutch Organisation for Scientific Research. I thank the colleagues at TILT, in particular Eva Asscher and Maurice Schellekens, Roger Brownsword, Bärbel Dorbeck-Jung, Bert van Roermund, Karen Yeung and the Tilburg Legal Research Master students for their comments on earlier versions.
158 Bert-Jaap Koops
since the reduction of ‘ought’ or ‘ought not’ to ‘can’ or ‘cannot’ threatens the flexibility and human interpretation of norms that are fundamental elements of law in practice. The topos of ‘code as law’1 has been put on the agenda by scholars like Joel Reidenberg and Lawrence Lessig. It basically refers to the notion that increasingly, technology is intentionally being used in a normative way, thus influencing people’s behaviour to an ever larger extent. Whereas ‘code as law’ is an often-used phrase to denote this phenomenon or topos, I shall use the term ‘normative technology’ to indicate this type of technology itself, ie, technology with intentionally built-in mechanisms to influence people’s behaviour.2 Normative technology is part of the fourth category of Lessig’s (1999b: 506ff) four modalities of regulation (laws, norms, markets and architecture): Code sets these features [conditions on one’s access to areas of cyberspace]; they are features selected by code writers; they constrain some behavior (for example, electronic eavesdropping) by making other behavior possible (encryption). They embed certain values, or they make the realization of certain values impossible. ‘Code as law’ is viewed sometimes from an optimistic and sometimes from a pessimistic point of view.
Joel Reidenberg (1993; 1998) was one of the first scholars to point out that in the digital age, software and hardware tend to regulate themselves—or rather, Internet users and developers tend to regulate themselves through technology. He coined the term Lex Informatica to refer to this development, thus comparing the newly emerging technology-embedded ‘law’ with the largely bottom-up-developed Lex Mercatoria of the Middle Ages. Initially, Reidenberg viewed this development positively: since traditional legislatures are—for many reasons—not fit to regulate the Internet by law, he invited public authorities to embrace the emerging Lex Informatica to fill the gap of Internet regulation (Reidenberg 1998). He later turned more pessimistic, noticing the downside of self-regulatory norms being built into technology that bypasses democratic control (cf Reidenberg 2007).

Lawrence Lessig has been most influential in deepening and disseminating the understanding of the role that normative technology nowadays plays. In Code and Other Laws of Cyberspace, Lessig argues, for example in relation to privacy, that ‘the

1 ‘Code’ here means: computer code. The topos is also sometimes phrased as ‘Code as code’, indicating that West-Coast code (Silicon Valley software) functions as East-Coast code (Washington, DC, codified law) (Lessig 1999a: 53).
2 Lessig (1999a; 1999b) often uses the term ‘code’ (without inverted commas) for the technology itself, but this does not allow distinguishing between technology (or computer code) at large and technology (or computer code) with intentionally built-in rules. The latter significantly differs from the former. Technology in itself of course has a regulatory effect on people’s behaviour (it limits and extends people’s behavioural scope). The novelty of ‘code as law’ is that technology is used intentionally as an instrument to influence the way people behave, supplementing law as a regulatory instrument.
In order to distinguish this particular type of technology from technology at large, I have not adopted Lessig’s term ‘code’ but instead use ‘normative technology’, the adjective showing that behaviour-influencing norms are part of the technology itself.

code [technology] has already upset a traditional balance. It has already changed the control that individuals have over facts about their private lives’ (1999a: 142). Privacy is thus threatened through normative technology, beyond the grasp of legislatures that try to establish a desirable level of privacy through democratically legitimated laws (cf Koops and Leenes 2005). At the same time, Lessig calls for a response to privacy-threatening technology in the form of PETs. In other words, in Lessig’s view, ‘code’ that disturbs the traditional balance between privacy and other interests should be checked by ‘code’ that incorporates privacy values (Lessig 1999a). Whether that is feasible remains to be seen: in practice, PETs are rarely used on a wide scale. In fact, technology is gradually eroding privacy, and built-in privacy threats play a role in this process (Koops and Leenes 2005). And privacy is not the only area in which normative technology threatens democratically and constitutionally established balances between conflicting interests. In the fields of freedom of expression (Lambers 2006) and intellectual property (Helberger 2006; Reidenberg 2007) as well, ‘code’ is functioning more and more as ‘law’ by intentionally regulating human behaviour. In short,

[c]ode is increasingly being sought as a regulatory mechanism in conjunction with or as an alternative to law for addressing societal concerns such as crime, privacy, intellectual property protection, and the revitalization of democratic discourse. (Kesan and Shah 2004: 279)

This is a significant difference from traditional technology, which in itself of course has a regulatory effect on people’s behaviour.
The novelty of ‘code as law’ is that technology is nowadays being used intentionally as an instrument to influence the way people behave, supplementing law as a regulatory instrument. A key difference between ‘code’ and ‘law’ is that normative technology, both in its norm-enforcing and in its norm-establishing form, influences how people can behave, while law influences how people should behave. This is why the rise of intentionally normative technology, in contrast to traditional technology, raises the democratic and constitutional issues that are central to this paper.

II. Research Question

While the topos of ‘code as law’ has been researched from different perspectives since the seminal work of Reidenberg and Lessig, no clear conclusions on the acceptability of this development have emerged so far in academic research. Scholars regularly offer opinions on this, yet a framework for assessing ‘code as law’ seems to be lacking. How should rules that are built into technology be assessed, given that technology has special characteristics when it enforces or establishes legal norms? It is safe to say that a norm that is built into technology to regulate people’s behaviour, such as an anti-copying measure on a CD that is used to enforce legal rights as laid down in copyright law, is more than just a ‘feature’ of technology: it sets its own standard by absolutely enforcing existing legal rights or by establishing new rights. This raises the question of how the criteria—both procedural and substantive—that are traditionally applied to laws apply to norms that are embedded in technology. In particular, there are concerns that fundamental safeguards of democratic and constitutional values may not apply fully, or perhaps at all, to regulation by technology, while the impact on citizens’ behaviour can be as significant as the impact of legal norms enforced by legal procedures.
This leads to my main research question: which criteria should be used to assess the acceptability, in light of democratic and constitutional values, of normative technology? This question is part of the larger question of how normative technology can or should be assessed. Since such a large question cannot be addressed in a single essay, I restrict myself to the first step in assessing ‘code as law’: presenting a set of criteria that can be used as a checklist for assessing the acceptability of normative technology. A systematic presentation of criteria for normative technology, which has so far not been undertaken in the literature, can bring the academic debate on ‘code as law’ a step forward, by providing a tool for authors who write about the acceptability of specific cases of normative technology.

I will start with some thoughts on why normative technology merits assessment and why criteria relating to democratic and constitutional values are relevant for such an assessment (section III). I will use a heuristic method to assemble a set of criteria (section IV), which I will then elaborate and structure (section V). I end with a conclusion on how the set of criteria could be further refined and tested (section VI), together with an agenda for further research (section VII).

I should stress that this essay has a hybrid character. The topic and research question are fundamental and complex, while their treatment is rather casual. I have chosen to present my thoughts in the form of an essay, as this genre allows addressing a difficult issue in article-length form without too many ifs and buts. The essay is an attempt to put thoughts on paper in order to invite debate, reflection, underpinning, elaboration and testing, which I hope subsequent scholars will embark upon.

III. Why Should Normative Technology be Assessed?
A starting point of this essay is that normative technology regulates behaviour in unprecedented ways, and that, as it is used intentionally as an instrument to influence human behaviour, it should comply with criteria that society considers important for regulation. As Lessig (1999a: 224) argues: ‘If code is a lawmaker, then it should embrace the values of a particular kind of lawmaking.’

This holds, first, for technology that is used as an enforcement instrument by traditional, legitimate law-makers. It should be noted that the acceptability of norm-enforcing normative technology is not automatically guaranteed by the norm having been promulgated by a legitimate public law-making body. The way in which a legal norm is translated and inscribed in technology is a separate activity that should be assessed in its own right, because ‘law in the books’ is not and cannot be exactly the same as ‘law in technology’ (cf Hildebrandt and Koops 2007: 22–28). In the translation process, choices and reductions take place, and these choices are not necessarily made by public authorities subject to democratic checks and balances, but by technology developers who are at best subject to EDP auditors. The rule as embedded in technology can hardly ever be the same as the rule established by the legislature. Moving beyond norm-enforcing technology, democratic and constitutional criteria are a fortiori relevant for norm-establishing technology by public bodies, because the regular checks and balances of law-making risk being circumvented by this application of normative technology.

Second, democratic and constitutional values are arguably also relevant for technology where private parties build in norms to influence users’ behaviour.
‘Code’ can be not only the long hand of the law, but also the invisible hand of the market or of society.3 In the latter case, particularly when there is a large impact on people’s lives (eg, an addictive component being added to tobacco, or ‘terminator technology’ used in genetically modified organisms to prevent multiplication), it is valid to investigate the acceptability of rules embedded in technology by non-state actors:

Some of the examples in this book show the potential of future code regulation to circumvent constitutional safeguards. … Code as self-regulation is a very strong regulator. … ‘Code’ as a product of self-regulation should, for that reason, be subjected to some of the criteria that were used to judge the validity and legitimacy of legal systems. (Asscher and Dommering 2006: 249–51)

Traditionally, the acceptability of ‘private’ regulation can be and has been interpreted separately from the acceptability of ‘public’ regulation, but a sharp distinction between public and private regulation can no longer be made as we are moving towards a world of polycentric governance. Polycentric governance is the notion that regulation is effected from various, partly overlapping, circles of power. It combines the vertical concept of multi-level governance (multiple, interacting layers of public regulation at local, national, supranational, and international levels) with the horizontal concept of regulation by non-state actors, creating a paradigm in which regulation is both vertically and horizontally a complex and interactive process. From this perspective, normative technology as developed and applied by private parties is an inextricable part

3 It is important to realise that some authors, in their work on ‘code as law’, sometimes focus on only one of these two types of ‘code’. Brownsword (2004, 2005), for example, is largely concerned with normative technology that is used as a compliance tool to mandatorily enforce legal norms.
This establishes a very different context and perspective from authors, like Lambers (2006), Helberger (2006), and Reidenberg (2007), who focus on normative technology as created in the private sector.

of the regulatory arena, and as such merits being assessed by generic criteria for regulation. As Reidenberg (1998: 583) notes: ‘[t]he technical community, willingly or not, now has become a policy community, and with policy influence comes public responsibility’.

Nevertheless, the acceptability of ‘publicly embedded’ rules in technology does differ, to a certain extent, from that of ‘privately embedded’ rules. How big the difference actually is will often depend on the context. Sometimes, normative technology developed by private parties has a distinct ‘public element’, eg, DRM systems having received a stamp of approval from law-makers that outlawed the circumvention of DRM, or search engines filtering results to hide material that the Chinese government does not want its citizens to see. It can therefore be useful to assess the acceptability of ‘private’ normative technology with the same set of criteria as that used for ‘public’ normative technology, while keeping in mind that, depending on the context, certain criteria from the set will be less relevant or should be interpreted differently.

This means that it is useful to develop a single set of criteria for both norm-enforcing and norm-establishing, public as well as private, normative technology; within this single set, however, the interpretation of criteria and the relative weight accorded to them will be context-dependent.

IV.
Finding Criteria for the Acceptability of Normative Technology

Since my starting point in this essay is that normative technology, as it influences human behaviour in unprecedented ways, should comply with criteria that society considers important for regulation, a good place to start looking for criteria for the acceptability of normative technology is to study criteria for law. This is, of course, opening Pandora’s box, for libraries are filled with books on the question ‘What is good law?’, but it is hard to find in any of these libraries the definitive volume that sums it all up nicely. Not only are there opposing views on the very core of law, eg, naturalist versus proceduralist views of what constitutes law; ‘good law’ is also a fluid concept, very much in debate under the influence of current societal developments.

Rather than trying to find my way in these labyrinths and derive acceptability criteria in a top-down manner from a theory-based interpretation of law, I will approach the matter in a pragmatic, bottom-up way, namely by looking at the criteria that are applied in practice by scholars writing about technology regulation. I shall start with the few authors who have provided criteria for normative technology in particular. Since this is only a small group, I complement them with authors who offer criteria, more or less explicitly, for assessing technology regulation in today’s world of polycentric governance. Although the latter do not address ‘code as law’ in particular, inspiration can be drawn from the criteria they apply, because they touch upon topical areas where governance notions are challenged by technology developments; using their work may therefore provide additional insight into relevant elements of today’s range of views on the importance of democratic and constitutional values for technology regulation.
I should stress that the selection of this latter group of authors is subjective, with more than the usual amount of subjectivity that is inevitable in legal scholarship: I chose these rather than others largely because I have recently been working with these texts. This need not be a methodological flaw: it is based on the assumption that the acceptability of normative technology is closely related to the acceptability of technology regulation in today’s world of polycentric governance, and on the assumption that in any sufficiently large subset of authors on technology regulation, more or less the same criteria will show up. Those who do not share these assumptions I would invite to use a different or larger collection of authors, or to use a top-down approach from legal theory, to see to what extent this leads to substantially different results.

My methodology for finding criteria for the acceptability of normative technology is thus a heuristic process. It does not lay claim to procedural justice, in which the criteria would be valid because the right procedure was followed to find them, but rather to outcome justice, in which the criteria are valid because the outcome is accepted by the reader as a reasonable one. The proof of the pudding will be in the eating. This implies that the resulting set of criteria is to be tested by legal scholars, who should try to point out errors or gaps in the criteria set, so that in a re-iterative process a firmer and better collection of criteria can be built.

A. Authors on Normative Technology

Lawrence Lessig does not provide an extensive framework for assessing normative technology in his seminal book. His starting point is by and large the body of constitutional values, both substantive and structural or procedural, with a priority for the latter: ‘structure builds substance’ (Lessig 1999a: 7–8).
Although he does not systematically specify this body of values, at certain points Lessig highlights what he considers key values, notably liberty (p 5), transparency (pp 224–5), and having a choice (pp 237–9), neatly summed up in the closing lines: ‘liberty is constructed by structures that preserve a space for individual choice, however that choice may be constrained’ (p 239).

Joel Reidenberg likewise has not systematically articulated criteria for the acceptability of the emerging Lex Informatica, but several criteria can be inferred from his work. In his 1998 paper, he tends to stress democratic supervision (‘Because technical designs and choices are made by technologists, government policymakers should play an important role as public policy advocates promoting policy objectives’; Reidenberg 1998: 580), and a need for optimally flexible rules, which can be limited when fundamental values are at stake: ‘flexibility is only undesirable when fundamental public interests are at stake and the public interest requires rules that individual participants in the network might not choose themselves’ (1998: 580, 584). Focusing on normative technology as an enforcement tool by governments, Reidenberg (2004: 229) applies two criteria: legal authority (‘[a]s a threshold matter, states must have a legal process in place to authorize the use and choice of technological enforcement tools’) and proportionality (‘the basic principle … should be that a state only use the least intrusive means to accomplish the rule enforcement’). Normative technology that is developed by private parties—technologists fighting against intellectual-property rights—threatens to bypass the most fundamental principles: the rule of law and democracy (Reidenberg 2007: 19). Thus we see Reidenberg applying, for diverse forms of normative technology, the criteria of rule of law, legal authority, democracy, fundamental values, flexibility, and proportionality.
Kesan and Shah’s work is mostly concerned with describing rather than evaluating the processes that shape normative technology, although they stress the importance of ‘societal concerns’ and social values (Kesan and Shah 2004). In a more normative article, however, they have highlighted three characteristics of code that regulators should pay attention to: transparency of rules, open standards (which can be seen as transparency of the rule-making process), and default settings. The latter are important because users tend not to change default settings, partly because defaults have a legitimating effect: apparently, the default is ‘normal’. This means that default settings ought to be made optimal for users, in light of the values that are at stake for them (Shah and Kesan 2003: 5–8).

The main systematic attempt to date to offer a set of criteria by which to judge normative technology has been made by Lodewijk Asscher in the Institute for Information Law’s ‘code as law’ project (Dommering and Asscher 2006). He puts forward a fairly rough and tentative set of criteria, presented in the form of questions, using ‘code’ to indicate ‘normative technology’ (Asscher 2006: 85).

1. Can code rules be understood? If so, are they transparent and accessible to the general public?
2. Can the rules be trusted? Are they reliable in the sense of predictability?
3. Is there an authority that makes the code rules?
4. Is there a choice?

This set can be summarised as: transparency, reliability, accountability, and choice. All of these are procedural criteria.

Brownsword, in his discussion of ‘techno-regulation’ (his term for normative technology that secures compliance, ie, norm-enforcing technology), basically presents two criteria for regulatory intervention: effectiveness and legitimacy; legitimacy comes down to respect for human rights and human dignity (2004: 210).
Brownsword seems to regard human rights and human dignity as one, integrated criterion, interpreting human rights as the major manifestation of human dignity.4 An essential consequence of his view on human rights is that human beings should have a choice: the autonomy that underpins human rights ‘implies the provision of a context offering more rather than fewer options’ (p 218). For human dignity, it is important not only that right choices are made (to comply with the rules) but also that wrong choices can be made, and that not all ‘bad’ things are simply made impossible, for human life is enriched by suffering. As Fukuyama (2002: 172–3) argues:

what we consider to be the highest and most admirable human qualities … are often related to the way that we react to, confront, overcome, and frequently succumb to pain, suffering, and death. In the absence of these human evils there would be no sympathy, compassion, courage, heroism, solidarity, or strength of character. A person who has not confronted suffering or death has no depth.

As a result, Brownsword’s key criterion for assessing compliance-proof normative technology is the existence of choice (2004: 230–32). Implicitly, he also suggests that some kind of flawedness or fallibility needs to exist: techno-regulation should not be allowed to create a perfect world without suffering, since humanity would thereby be reduced to flatness. A trade-off can be observed here between effectiveness and choice: the more effective a techno-rule is, the less choice people have to disobey it, and vice versa. In subsequent work, Brownsword has outlined as (additional) criteria the principles of good governance: transparency and accountability (2005: section III).
Particularly interesting is the remark that, even if techno-regulation is implemented in a fully transparent and accountable way, in due time transparency is lost, because for later generations the rule built into the techno-object simply becomes part of the object’s features and is no longer recognised for what it once was: a normative rule that purposefully influences people’s behaviour. Then, ‘it is only outsiders and historians who can trace the invisible hand of regulation’ (Brownsword 2005: section III(i)). Overall (with considerable interpretation on my part), Brownsword’s criteria come down to effectiveness, respect for human rights, choice, fallibility, transparency and accountability.

In passing, Brownsword (2004: 223–4) also mentions a set of principles for good regulation proposed by Trebilcock and Iacobucci in the form of oppositional pairs: independence ↔ accountability; expertise ↔ detachment; transparency ↔ confidentiality; efficiency ↔ due process; and predictability ↔ flexibility. This is also a productive set of criteria to use for technology regulation, since it emphasises the interrelatedness of requirements and the often inherent tension existing between criteria.

4 At least, of human dignity as empowerment, which Brownsword opposes to human dignity as constraint. Brownsword himself favours human dignity as empowerment (see p 232) and hence emphasises the importance of human rights as the ultimate touchstone (p 234).

B. Authors on Technology Regulation

Bärbel Dorbeck-Jung has assessed the regulation of nanotechnologies against requirements for good governance.
In her paper (Dorbeck-Jung 2007: section 2.3), she mentions as basic principles for legitimacy in the democratic constitutional state: legality (rule of law), constitutional rights, democratic decision-making and control, checks and balances of power, and judicial review, which should guarantee the underlying basic values of freedom, equality, legal certainty, democracy, and effectiveness. Moreover, in the current context of multi-actor and multi-level governance, additional measures are required to ensure legitimacy: stronger participation of citizens in the regulation process, increased transparency of regulation, and increased accountability and control of the impartiality and objectivity of regulators. With some reshuffling and combining, Dorbeck-Jung’s good-governance criteria may be summarised as: human rights (including freedom and equality); constitutional principles (including legality, legal certainty, and checks and balances); democratic decision-making (including participation of citizens); accountability (including judicial review and control); and effectiveness.

Besides these specific criteria, Dorbeck-Jung (2007: section 2.3) also refers to Scharpf’s distinction between input and output legitimacy. Input legitimacy implies legitimacy through the rules of the game and the procedure followed; output legitimacy means that the result establishes legitimacy. Although in this section I aim at output rather than input legitimacy, in the context of normative technology input legitimacy is a primary concern. Because technology is often irreversible—once it is developed and applied in society, it is hard to fundamentally remove it from society in those applications—the process of developing technology is a key focus when normativity is at stake. After all, it may well be too late, when technology simply appears in society, to ask whether it is acceptable to use this technology; quite often, the genie may then be out of the bottle, never to be put back in.
It is still relevant to look at output legitimacy, since the way in which normative technology functions in society is in itself a useful yardstick, but the difficulty of reversing technology does imply that criteria addressing the process of technology development—‘rules of the game’—should be a key part of our acceptability criteria.

In the context of ICT regulation, together with TILT colleagues, I have developed a set of criteria for assessing self-regulation (Koops et al 2006). Self-regulation differs from normative technology, but it is fairly similar in the way both differ from government regulation. This set may therefore be of interest:

1. fairness (respect for fundamental rights and social interests);
2. inclusiveness (participation of relevant actors);
3. compliance;
4. transparency (both of the rule-making process and of the rules);
5. legal certainty;
6. context;
7. efficiency.

Some criteria for regulation in general are given by Christopher Hood, as described by Raab and De Hert (2008): a tool of government must be selected after examining alternative possible tools, it must be matched to the job, it must not be ‘barbaric’, and it must be effective with the minimum possible costs. These criteria can be summarised as checking alternatives, effectiveness, moral values, and efficiency.

Another look at legitimacy, in this case of non-traditional regulatory actors, is Anton Vedder’s analysis of Non-Governmental Organisations. He outlines three dimensions of legitimacy (Vedder 2007: 7–9, 203–4):

1. a moral dimension: legitimacy in terms of conformity to moral norms and values; these include substantive values (eg, related to core human rights) but also procedural moral values like accountability, responsibility, and transparency;
2. a legal or regulatory dimension: legitimacy in terms of conformity to rules; this is related to the notion of legality or (as I interpret it) the rule of law;
3.
a social dimension: legitimacy in terms of consent or representation of those involved or affected; this is related to representativeness.

Vedder posits a hierarchy by considering the moral dimension as primary and the legal and social dimensions as secondary. Legal and social criteria should follow from or help to fulfil moral criteria. This essentially means that the values and norms embedded in the regulatory subject ‘should be acceptable in principle for all, and that those acceptable values and norms are integrated as fully as possible into the NGO’s organizational structures and activities’ (p 207). This could apply, mutatis mutandis, to the values and norms built into normative technology.

V. A Systematic Set of Criteria for Acceptability of Normative Technology

Putting all of the above in a melting pot, we can concoct several sets of criteria, depending on the desired level of abstraction (see Table 7.1). The most abstract set of criteria is: substantive criteria, procedural criteria, and criteria of result, but this is not very helpful for concretely assessing normative technology. At a lower level of abstraction, we can split substantive criteria into human rights and (other) moral values,5 procedural criteria into principles like the rule of law and democracy, transparency, and accountability, and principles of result into being flexible and allowing for choice. All of these can be further detailed into enumerations of lower-level criteria. This is summarised in the following table, in which I have tried to include as many of the criteria listed in the previous section as possible.

5 I see moral values as a broad category which encompasses human rights. This row can therefore be seen as including all those substantive moral values that are not established as specific human rights.
Table 7.1 Overview of criteria*

Primary criteria
— Substantive criteria (level 1)
  — Core substantive principles (level 2)
    — Human rights (level 3): equality and non-discrimination; freedom of expression; privacy (level 4, far from exhaustive)
    — Other moral values (level 3): autonomy; human dignity; success ↔ fallibility (level 4)
— Procedural criteria (level 1)
  — Core procedural principles (level 2)
    — Rule of law (level 3): due process; legality; legal certainty; checks and balances (level 4)
    — Democracy (level 3): democratic decision-making; all-stakeholder participation (level 4)

Secondary criteria
— Procedural criteria, continued (level 1)
  — Other procedural principles (level 2)
    — Transparency of rule-making (level 3)
    — Checking alternatives (level 3): justifying choices; subsidiarity (level 4)
    — Accountability (level 3): review; audit (level 4)
    — Expertise ↔ independence (level 3)
    — Efficiency (level 3): proportionality (level 4)
— Result criteria (level 1)
  — Principles of result (level 2)
    — Choice ↔ effectiveness (level 3): possibility of choice; optimal default settings (level 4)
    — Flexibility (level 3): context-adaptability (level 4)
    — Transparency of rules (level 3)

* There is substantial, unavoidable overlap between various categories and criteria in this table, which for the sake of clarity I have simplified away.

When discussing criteria for assessing normative technology, it is useful to bear in mind the level of abstraction at which the criteria are formulated. Level 1 through level 4 moves from abstract to concrete; this should not be confused with importance: some level 4 criteria, like autonomy and equality, are very fundamental, but they are simply more concrete than the abstract notion of ‘core substantive principles’.

I consider level 3 in this table the most apt level of abstraction for discussing the acceptability of normative technology, at least for the purposes of this essay, which looks at normative technology at large. When assessments are made of concrete instances of normative technology, a finer-grained set of criteria at level 4 will be more suited.
Within the set of criteria, it may be useful to posit a hierarchy, in that the upper-half criteria of core substantive and core procedural principles are more important than the lower-half criteria of other procedural principles and result criteria. Such a hierarchy implies that the primary criteria (above the line) should be met before the secondary criteria come into view (cf Rawls 1972: 302–3). However, there is always some (or much) tension between criteria, and with these still rather abstract and broad criteria, assessments cannot yield yes-or-no answers, but rather more-or-less answers. In certain cases, lesser scores on a primary criterion may well be considered to be outweighed by higher scores on secondary criteria. But the hierarchical order does provide a bottom line: if core principles are met only to a low extent, then the overall assessment must be negative.

In summary, the acceptability of normative technology can be assessed by using the following, hierarchically ordered, set of criteria:

1. primary criteria
— human rights
— other moral values
— rule of law
— democracy
2. secondary criteria
— transparency of rule-making
— checking alternatives
— accountability
— expertise ↔ independence
— efficiency
— choice ↔ effectiveness
— flexibility
— transparency of rules

How could this set of criteria be applied? Some authors, such as Brownsword, try to judge the acceptability of normative technology as such, ie, on the general level of ‘code as law’. Not only is it extremely complex to answer this question on a general, abstract level; it also seems not to do justice to the great variety in normative technology that is already visible today. Some instances, such as road bumps and safety belts, are widely accepted, while other forms, including DRM and Internet filtering systems, are considerably contested.
Moreover, the acceptability partly depends on the context and scale, so that it is hard to say whether or not a type of normative technology, such as filtering systems, is acceptable if one does not know the scope of its filtering, whether it takes place in the United States or in Japan, etc. Likewise, the interpretation and relative importance of criteria evolve over time, and with the many stages that technology goes through (from fundamental research and tentative experimentation through development and testing to application and marketing), conclusions drawn today about the acceptability of normative technology may differ from the conclusions drawn 10 or 20 years from now. It is therefore more fruitful, in my opinion, to ask whether or not a concrete instance of normative technology is acceptable, and to answer this question for fairly specific instances in fairly particular contexts, for example, a procedure and a computer program to block child pornography on the Internet, a privacy-friendly identity-management system, or a 'terminator' mechanism in a genetically modified rice crop. It is beyond the scope of this essay to undertake an assessment of such concrete cases. I restrict myself here to noting that a concrete assessment will never be a straightforward and uncontested exercise. For one thing, several of the criteria are culture-dependent, in their interpretation (eg, moral values, democracy) or in their importance (eg, human rights, choice). For another thing, quite diverging models can be used in applying a criterion, for example, when assessing efficiency or effectiveness. Moreover, for government-instigated normative technology as an enforcement tool (the long hand of the law), a different interpretation of criteria and weights should be chosen than for normative technology created in the private sector (the invisible hand of the market).
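A context-sensitive weighing of the kind just described could be sketched, purely as a toy heuristic, along the following lines. The score scale, the weight scheme, and the 0.3 floor and 0.5 cut-off are all invented for illustration; they are not a method proposed in this essay.

```python
def assess(scores, weights, primary, floor=0.3, cut_off=0.5):
    """Toy heuristic assessment of one instance of normative technology.

    scores:  criterion -> score in [0, 1] (more-or-less, not yes-or-no)
    weights: criterion -> context-dependent relative importance
    primary: the 'above the line' criteria (core substantive and
             core procedural principles)

    Bottom-line rule: if any primary criterion is met only to a low
    extent, the overall assessment is negative, whatever the secondary
    criteria say.
    """
    if any(scores[c] < floor for c in primary):
        return "negative"
    weighted = sum(weights[c] * scores[c] for c in scores)
    overall = weighted / sum(weights[c] for c in scores)
    return "acceptable" if overall >= cut_off else "contested"
```

For example, a filtering system scoring 0.2 on human rights fails outright under the bottom-line rule: `assess({"human rights": 0.2, "efficiency": 0.9}, {"human rights": 1.0, "efficiency": 0.5}, {"human rights"})` returns `"negative"`, however efficient the system is.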
Altogether, the context-dependent application of the criteria necessitates a context-sensitive procedure for interpreting and weighing criteria, which is a challenge to develop. However flexible and open to multiple interpretations the criteria may be, this list could provide a heuristic assessment tool in the form of a checklist for scholars and regulators with which to approach normative technology, by going through the list and interpreting and weighing the criteria in light of the particular context.

VI. Conclusion

This essay started from the notion that various instances of normative technology, in their intentional steering of people's behaviour, are not evidently acceptable. It argued that normative technology, both the norm-enforcing and the norm-establishing variants, by public as well as by private parties, needs to be assessed on its acceptability to society, and that democratic and constitutional values play an important role in this. This is in line with the literature on 'code as law', which on the whole is quite critical of, and worried about, normative technology. If we take the rise of normative technology as well as democratic and constitutional values seriously, something should be done. This essay has attempted to contribute to this by presenting a checklist of criteria for assessing instances of normative technology, consisting of substantive, procedural, and result criteria. The heuristic process to find criteria for normative technology has yielded a rather classic list of criteria, which at least in its higher levels of abstraction resembles often-used criteria for the acceptability of law. This is not altogether surprising: in a world of polycentric governance, acceptability of regulation translates into the most fundamental, overarching notions that we have, like justice and legitimacy, and such notions can be applied to 'law in the books' as well as to 'law' in the technology.
The added value of the set of criteria as presented in this essay, in Table 7.1, is that it is a systematic presentation of criteria that can be applied to normative technology, based on a distinction between levels of abstraction and on a hierarchy of primary criteria (eg, human rights and democracy) and secondary criteria (eg, transparency of rule-making and rules, accountability, and a trade-off between effectiveness and choice). At levels 1 through 3, the set aims at being comprehensive (readers are invited to point out gaps or unnecessary overlaps), and the criteria at level 3 of abstraction are proposed as providing a useful checklist for concrete assessments. Not all criteria are equally relevant in specific cases, and criteria may have to be interpreted differently depending on the context of the specific instance of normative technology. Also, as notions of acceptability co-evolve over time with social, cultural, and institutional settings (Rip 2006), the criteria will have to be regularly re-assessed themselves. Nevertheless, this set of criteria can be used by scholars and regulators as a checklist with which to approach normative technology. At the least, using such a checklist guarantees that all relevant values are taken into account in an assessment, and it makes more transparent the process of arriving at a conclusion on the acceptability of normative technology. If authors, when applying the checklist, explain their stance on particular criteria and the weight accorded to them, the academic and political debate about normative technology may be raised to a higher level.

VII.
Agenda for Further Research

As noted in my description of the heuristic process of arriving at criteria, the proof of the pudding (the set of criteria) is in the eating: the criteria set is to be tested by legal scholars, who should try to point out errors or gaps in the criteria set, by using different selections of authors who apply criteria in practice, by arriving at criteria from a theory-based interpretation of justice or legitimacy, or simply by testing the set on a concrete instance of normative technology to check its usability. Thus, in an iterative process, a firmer and better collection of criteria can be built.

A next step is to systematically assess concrete cases of normative technology, in which deficiencies in acceptability can be noted. Building on such concrete cases, perhaps more overarching conclusions about the acceptability of normative technology in general can be drawn if systematic deficiencies should appear. Recommendations can then be given, not only for concrete cases, but also for redressing generic acceptability deficits in normative technology. These recommendations could be addressed to the developers, providers, or controllers of normative technology—often in the private or semi-private sector—but also to regulators, since they can use regulatory frameworks to steer normative technology in more acceptable directions.

A particular challenge for research is Ambient Intelligence (AmI), in which smart environments continuously make instantaneous decisions about citizens and consumers based on profiles and large collections of personal data. In an AmI environment, legal mechanisms to protect the privacy and equality of weak parties—citizens, consumers, employees—will be inadequate, and hence, legal norms should be incorporated into the AmI architecture itself to establish Ambient Law (Hildebrandt and Koops 2007).
The acceptability, in light of democratic and constitutional values, of Ambient Law as embedded in the AmI infrastructure is an important field of further study and hence of application of the set of criteria as presented in this essay.

Most of the current literature follows the line of research as outlined so far. Authors consider that acceptability of normative technology follows from its compliance with acceptability criteria, and they aim to enhance acceptability by controlling the technology in some way: for example, the rules should be transparent (open source), the design process should be fair by discussing the rules-to-be-built-in with all stakeholders, and the technology should leave users a choice of non-compliance. Yet this line of research—towards controlling normative technology—is not enough. The reverse implications are much less considered, but they are at least as important to study. What does 'acceptability' mean in a society that feeds upon normative technology? Notions like good governance, democracy, and legitimacy are not set in stone, and are much debated today from the perspectives of, for example, globalisation, multi-level governance, and the increasing importance of multinational enterprises and NGOs, to name a few developments associated with polycentric governance. Normative technology should be added to the list of developments that trigger a reconsideration of what it means to live in a democratic constitutional state. To be sure, we should preserve democratic and constitutional values for lack of a better alternative, but we may well need to change our precise interpretation of these values in today's and tomorrow's society. If only because 'code' is not equivalent to 'law', the 'rule of law' cannot simply lay down all the criteria for the 'rule of code'; it will need to adapt, if only to a small extent, to the particulars—positive as well as negative—of normative technology.
Research should be done, therefore, into questions like: who is the demos that has a say in democratic rule-making in normative technology? How should transparency be defined if transparent rules in technology, by the mere passing of time, slowly turn into technological features without a normative flavour? What do autonomy and liberty mean when technology increasingly restricts making choices, but at the same time opens up new paths to explore where no-one went before, simply because the technological doors to these paths were formerly closed? What do fairness and equality mean when we live in surroundings that make the choices for us, based on individual preferences and profiles? What does 'law' mean in a world of Ambient Intelligence and Ambient Law?

Rather than only look at normative technology from the perspective of safeguarding the democratic constitutional state, we should thus also look at democratic and constitutional values from the perspective of normative technology. The interaction between these two perspectives makes the future of law fascinating, disturbing, daunting, and, more than ever, unpredictable.

References

Asscher, LF (2006) '"Code" as Law. Using Fuller to Assess Code Rules' in Dommering and Asscher (eds), Coding Regulation. Essays on the Normative Role of Information Technology (The Hague, TMC Asser Press) 61–90.
Asscher, LF and EJ Dommering (2006) 'Code: Further Research' in Dommering and Asscher (eds), Coding Regulation. Essays on the Normative Role of Information Technology (The Hague, TMC Asser Press) 249–55.
Brownsword, R (2004) 'What the World Needs Now: Techno-Regulation, Human Rights and Human Dignity' in R Brownsword (ed), Global Governance and the Quest for Justice, vol 4: Human Rights (Oxford, Hart Publishing) 203–34.
—— (2005) 'Code, Control, and Choice: Why East is East and West is West' 25 Legal Studies 1–21.
Dommering, EJ and LF Asscher (eds) (2006) Coding Regulation.
Essays on the Normative Role of Information Technology, IT & Law Series vol 12 (The Hague, TMC Asser Press).
Dorbeck-Jung, BR (2007) 'What can Prudent Public Regulators Learn from the United Kingdom Government's Nanotechnological Regulatory Activities?' 1 (3) NanoEthics 257–70.
Fukuyama, Francis (2002) Our Posthuman Future. Consequences of the Biotechnology Revolution (London, Profile Books).
Helberger, N (2006) 'Code and (Intellectual) Property' in Dommering and Asscher (eds), Coding Regulation. Essays on the Normative Role of Information Technology (The Hague, TMC Asser Press) 205–48.
Hildebrandt, M and BJ Koops (eds) (2007) D7.9: A Vision of Ambient Law, FIDIS Deliverable, October 2007, available at accessed 13 June 2008.
Kesan, Jay P and Rajiv C Shah (2003–04) 'Deconstructing Code' 6 Yale Journal of Law & Technology 277–389, available at accessed 13 June 2008.
Koops, Bert-Jaap and Ronald Leenes (2005) '"Code" and the Slow Erosion of Privacy' 12 (1) Michigan Telecommunications & Technology Law Review 115–88, available at accessed 13 June 2008.
Koops, Bert-Jaap, Miriam Lips, Corien Prins and Maurice Schellekens (eds) (2006) Starting Points for ICT Regulation. Deconstructing Prevalent Policy One-Liners, IT & Law Series vol 9 (The Hague, TMC Asser Press).
Lambers, R (2006) 'Code and Speech. Speech Control Through Network Architecture' in Dommering and Asscher (eds), Coding Regulation. Essays on the Normative Role of Information Technology (The Hague, TMC Asser Press) 91–140.
Lessig, L (1999a) Code and Other Laws of Cyberspace (New York, Basic Books).
—— (1999b) 'The Law of the Horse: What Cyberlaw Might Teach' 113 Harvard Law Review 501–46.
Raab, C and P De Hert (2008) 'Tools for Technology Regulation: Seeking Analytical Approaches beyond Lessig and Hood' in Brownsword and Yeung (eds), Regulating Technologies (Oxford, Hart Publishing) 263–86.
Rawls, John (1972) A Theory of Justice (Oxford, Oxford University Press).
Reidenberg, Joel R (1993) 'Rules of the Road for Global Electronic Highways: Merging the Trade and Technical Paradigms' 6 Harvard Journal of Law & Technology 287.
—— (1998) 'Lex Informatica: The Formulation of Information Policy Rules Through Technology' 76 (3) Texas Law Review 553–84.
—— (2004) 'States and Internet Enforcement' 1 University of Ottawa Law & Technology Journal 213–30, available at accessed 13 June 2008.
—— (2007) 'The Rule of Intellectual Property Law in the Internet Economy' 44 (4) Houston Law Review 1073–95, available at accessed 13 June 2008.
Rip, A (2006) 'A Co-evolutionary Approach to Reflexive Governance—and its Ironies' in JP Voss, D Bauknecht and R Kemp (eds), Reflexive Governance for Sustainable Development (Cheltenham, Edward Elgar).
Shah, Rajiv C and Jay P Kesan (2003) 'Manipulating the Governance Characteristics of Code' 5 (4) Info 3–9, available at accessed 13 June 2008.
Vedder, Anton (2007) 'Questioning the Legitimacy of Non-Governmental Organizations' and 'Towards a Defensible Conceptualization of the Legitimacy of NGOs' in Anton Vedder (ed), NGO Involvement in International Governance and Policy: Sources of Legitimacy (Leiden–Boston, Martinus Nijhoff Publishers) 1–20 and 197–211.

8 A Vision of Ambient Law

MIREILLE HILDEBRANDT

I. Introduction

On 1 November 1755, All Saints' Day, Lisbon was shocked by an earthquake that brought on a series of waves in European politics and architecture.
In her The Faces of Injustice Judith Shklar discusses this earthquake as a turning point for the demarcation line between natural and man-made disaster, shifting the borders of the responsibility of governments to include harm caused by natural disasters that should have been anticipated.1 The urban architecture of Lisbon, a city with a myriad of small streets that offered no shelter once the houses started coming down, was partly to blame for the excessive number of casualties.2 To get an idea of this architecture the reader—if familiar with Lisbon—could think of the famous Alfama district, the Moorish labyrinth of closely knit alleys, adorned with freshly washed laundry stretched across the narrow streets, stemming from an era when housing was considered a private enterprise not falling within the scope of public competence. After the earthquake Lisbon was reconstructed under the supervision of the marquis de Pombal, who in practice ruled Portugal as an enlightened absolutist. He planned broad avenues that would allow people to save their lives by running to the middle of the road in the case of another earthquake, and took care that earthquake-resistant buildings were constructed.3 One could paraphrase Shklar by saying that when natural disaster has public consequences, governments should intervene to the greatest extent possible to prevent harm.

It seems remarkable that in today's world, bristling with socio-technical imbroglios that have a major impact on the risks and opportunities of citizens everywhere, the development of technological infrastructures is left mainly to scientific research, technical engineers and market forces. Quoting Lawrence Lessig one could claim that 'governments should intervene … when private

1 J Shklar (1990) speaks of passive injustice when referring to a blameworthy lack of intervention by governments that could have prevented serious harm.
2 The phrasing could suggest that we can blame non-humans for harm caused.
About the issue of attributing civil or criminal liability to non-human intelligent agents, see Hildebrandt (2008a).
3 Mullin (1992: 157).

action has public consequences'.4 In fact, we can link his advocacy to Dewey's discussion of The Public and its Problems of 1927, in which Dewey claimed that democracy implies that those who suffer the indirect consequences of a decision or action have found a way to participate in the decision.5 Dewey's concern for democracy stemmed from the fact that emerging technological infrastructures had facilitated a complex societal context in which the indirect consequences of decisions taken outside the domain of national politics were massive, requiring more participatory conceptions of democracy in addition to representative democratic theory.6 In today's world one could translate his concern by arguing that citizens who suffer or enjoy the effects of new technological infrastructures, like for instance Ambient Intelligence (AmI), should be able to influence decisions regarding the funding, designing and marketing of such emerging technologies. Instead of endorsing a paralysing technological determinism (akin to a fatalist acceptance of natural disaster), civil society and its government should realise that technologies are neither good nor bad but never neutral,7 acknowledging that technologies can be constructed in different ways, with different normative implications.8

In this contribution I will introduce the concept of technological normativity and compare it to legal normativity. After establishing how the two compare, their relationship will be explored, coming to the conclusion that modern law is in fact embodied in a specific technology: the written and printed script (section II).
The idea that modern law is articulated in the script is elaborated in an analysis of oral, written and letterised traditions, including a speculative investigation of the transition from letterisation to digitisation, followed by a similar analysis of the implications for law of the transition from orality and the script to the letter-press (section III). The implications of the transition from the printing press to digital communication for the constitution of law are introduced with a discussion of the vision of Ambient Intelligence, explaining the massive normative impact the realisation of this vision would have on our everyday life. I will argue that this normative impact will change the mélange of positive and negative freedom that forms the backbone of constitutional democracy, unless we find ways to articulate the legal framework of democracy and the rule of law into the technological architecture it aims to regulate, creating what has been called 'Ambient Law' (section IV). The conclusion must be that lawyers and computer scientists should negotiate mutual transformations in the legal and technological infrastructure to sustain and reinvent democracy and the rule of law in the age of Ambient Intelligence (section V).

4 Lessig (1999: 233).
5 Dewey (1927); Hildebrandt and Gutwirth (2007).
6 An original analysis of the debate between Dewey and Lippmann on the issue of democracy and technocratic government can be found in Marres (2005: ch 2).
7 Kranzberg (1986: 544–560).
8 Cp Ihde's discussion of the multistability of technologies in Ihde (1990: 144–51), which concerns the different ways in which the same technology can be culturally embodied, leading to a measure of unpredictability of the actual use of a technology after its introduction.

II. Technological and Legal Normativity

A. Technological Normativity?
Before moving into the argument concerning the technological embodiment of legal norms, we need to establish the extent to which technologies (devices and infrastructures) have a normative impact. By a normative impact I do not refer to explicit prescriptive rules, enacted by a legislator. Many of the norms that regulate our interactions do not derive from deliberately issued decrees; they rather derive from habits that have given rise to certain expectations, mostly remaining within the scope of tacit knowledge.9 Neither do I use the term 'normative' as equivalent to 'moral',10 recalling Kranzberg's proposition that technology is neither good nor bad, but never neutral. To decide upon the moral significance of a technology we must first define what we hold to be good or bad, after which we can evaluate the normative impact in those terms. To do this we must first describe the normative impact, which is situated in the way a specific technology induces/enforces or inhibits/rules out certain types of behaviour. A smart car may for instance detect a driver's fatigue and warn the driver of the risk she is taking when continuing the journey.11 This warning may inhibit certain behaviour: the driver may think twice before starting a trip, or, if she is already on her way, she may stop the car and take a cab. Another type of smart car may simply direct the driver to a parking lot and technically prohibit the continuance of the journey: in this case the car rules out certain behaviour. In the case of inducing or inhibiting a driver's actions we must acknowledge that the technology is not determinate of the driver's behaviour: the smart car only regulates her interactions. In the case of enforcing or prohibiting the behaviour of the driver the car actually determines her actions. One can rephrase this in terms of regulative and constitutive technological normativity, regulating or determining our actions and limiting or constituting our ways of doing things.
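The contrast between the two smart cars can be made concrete in a short sketch. This is a toy model only, not a real automotive API: the class, its method names and the warn/enforce modes are all invented for illustration.

```python
class SmartCar:
    """Toy model of the two smart cars described above.

    mode="warn":    regulative normativity - the warning inhibits
                    behaviour, but the driver still decides.
    mode="enforce": constitutive normativity - continuing the journey
                    is ruled out; the car determines the action.
    """

    def __init__(self, mode: str) -> None:
        assert mode in ("warn", "enforce")
        self.mode = mode
        self.driving = True

    def on_fatigue_detected(self) -> str:
        if self.mode == "warn":
            # Regulates the interaction; the behaviour remains possible.
            return "warning issued; driver may stop or continue"
        # Rules out the behaviour altogether.
        self.driving = False
        return "directed to parking lot; journey technically prohibited"
```

In the 'warn' mode the car leaves `driving` untouched, mirroring regulation of behaviour; in the 'enforce' mode it sets `driving` to False, mirroring determination of behaviour.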
Evidently many technologies are constitutive of our interactions: without eye glasses I would not be able to read, without a telephone I could not talk with another person across a long distance, without an MRI scan a medical researcher could not analyse certain types of brain damage. At the same time these technologies may be regulative: a car is constitutive for car-driving as such, and if it warns us about not having fastened our seat belt it is regulative of (safe) driving. We can compare the regulative and constitutive normativity of technologies with regulative and constitutive legal norms:12 the legal prohibition to violate the speed limit is regulative of our driving a car, while the registration of a marriage with the civil registry is constitutive of being legally married. Neither law nor technology has a monopoly on regulating and even constituting our behaviours, and in this sense we can agree with Lawrence Lessig, who has saliently described Code's law-like implications.13 Apart from law and computer code, many other technologies, market forces and social interaction all have a normative impact. For this reason I concur with Lessig that in order to sustain fundamental legal principles like privacy, fairness and non-discrimination, lawyers need to take into account the normative impact of technological devices and infrastructures.14

9 Following Wittgenstein's discussion of rule-following, see Winch (1958: 57–62). Cf Taylor (1995).
10 Cp Verbeek (2006).
11 Jin, Park et al (2007).
12 On the difference between regulative and constitutive rules see Searle (1969); for an application in the field of law see Mittag (2006). Searle discusses the difference in the context of what he calls brute and institutional facts. Institutional facts are constituted by constitutive rules, which are socially constructed; brute facts exist independent of human existence. In other work I have relativised this distinction (Hildebrandt forthcoming a).

B.
Legal Normativity

The fact that technologies have a normative impact does not—however—imply the equivalence of legal and technological regulation. As Gutwirth, De Sutter & De Hert argue in their contribution, we should not confuse or conflate the practices of lawyers with those of technologists. Neither should we conflate the normative impact of law and of technologies on the interactions of citizens. First of all, technological regulation seems to influence our behaviour patterns via a backdoor, creating a tacit understanding of the technology that settles under the skin, allowing us to work with it effectively. Its prescriptions are not written down in the form of decrees one must obey; they are as it were inscribed in the hardware and software that we have to deal with. In a masterly description Bruno Latour narrates how a Berlin key forces its user to lock the door behind him when entering a house, because otherwise he cannot close the door at all.15 Second, in a constitutional democracy law has a specific role in sustaining the balance of power between citizens (their interaction being regulated, among other things, by social norms), business enterprise (their interaction being regulated, among other things, by the market) and the state (its interactions with citizens being regulated, among other things, by law). Law rules at a meta-level that cannot be reduced to being just one of the instruments of government policy making. It provides the framework within which business enterprise, citizens and government officials can interact. For this reason technological devices and infrastructures should be regulated to a certain extent by law, precisely because they regulate our interactions, whether they were intended as such or not. This implies that legal and technological instruments are not interchangeable tools to achieve specific policy objectives, depending on which tool is more efficient or effective.
Such a vision of law and technology would boil down to legal and technological instrumentalism (and neutralism),16 having no regard for the values incorporated into specific legal and technological devices. Legal instrumentalism cannot conceptualise the legal architecture of democracy and the rule of law that safeguards a particular set of checks and balances between citizens, state and civil society. In a constitutional democracy law is not just instrumental for achieving policy goals, as it should always be instrumental for the protection of citizens against the state as well.

C. The Relationship between Legal and Technological Normativity

What does this mean for the relationship between law and technology? If we can agree that technologies have a normative impact but should not be conflated with law, the question remains how the two relate. How could the practices of lawyers and the practices of technologists relate, taking into account that technologies do regulate our behaviours and that law aims to provide the meta-perspective? There is, however, another point, which I will elaborate here. Instead of separating Law from Technology17 or the practices of lawyers from those of technological experts, I will develop the argument that modern law is already embedded in a specific technology,18 being the written and printed script.

13 Though we evidently disagree where he states: 'Architecture is a kind of law: it determines what people can and cannot do' (Lessig 1999: 59), because neither law nor technology is per se determinate (constitutive) of human behaviour.
14 About the normative impact of profiling technologies in terms of human autonomy and non-discrimination see Zarsky (2002–03).
15 Latour (1993). Of course one can read the 'directions for use' of a technological device as a set of prescriptions, but the normative impact is inscribed in the device itself and concerns the effects of its use rather than its conditions.
16 Hildebrandt (2008a).
17 Lévy (1990: 12–15) on the dangers of using grand abstractions like Technology to describe the impact of technologies. Cf Ihde (1990: 4–10) discussing technological determinism, utopian and dystopian perspectives, and Verbeek (2005: 11), arguing against an instrumentalist (technology as a neutral tool) and a substantivist conception (technology as determinative of human action) of technology.
18 Cf Lévy (1990: 16), who remarks that we take the script and the printing press for granted, blind to the fact that they are in fact technologies (constitutive of our lifeworld and selves). This point is stressed by Eisenstein (2005); Goody and Watt (1963) and by Ong (1982).
19 Hildebrandt (forthcoming b).
20 About the advent of a class of scribes with a monopoly on administrative functions that require writing skills, see Goody and Watt (1963: 313–14). About the privileged role of the scribe in legal traditions that depend on writing, see Glenn (2004: 62–3).

Obvious as this may be, this embodiment has had major consequences for the constitution of modern law, and it raises the question whether the normative impact of emerging technologies requires us to re-embody parts of the law in technologies other than the script in order to regulate their normative impact.19 This would require a new interest of lawyers in the practices of computer engineers and perhaps even a new literacy of lawyers in terms of the relevant technologies. No doubt, in a society with an oral tradition some people must have resisted the idea of making law dependent on the written word, claiming it would confuse the practice of writing with the practice of speaking the law. Writing started its history as a monopoly of a class of scribes, and making law dependent on writing would greatly expand the monopoly of the literates.20 Speaking the law in an oral tradition was performed by a court that practised mediation, requiring the cooperation of the parties, who were basically
peers of the judge.21 Written law created new hierarchies and segmentations in society, not necessarily beneficial for the illiterate majority. The transition from an oral to a written legal tradition (and from a hand-written to a printing-press legal tradition) has transformed law. In fact, modern law cannot be separated from its embodiment in the script, and it may be unwise to resist a transition of written law into law embodied in other technologies, taking for granted that this would necessarily be the end of the rule of law. One could even fear that a failure to rearticulate legal norms in the technological infrastructure it aims to regulate would in fact threaten the rule of law. The challenge we face is to discuss how legal normativity should be embodied in which technological devices and infrastructure(s). Before initiating an answer to this question I will first describe the transition of the lifeworld induced by the shift from orality to the script and from the hand-written script to the printing press. These transitions will be complemented with a description of the ensuing shift from oral to written law to law in the age of the printing press. Becoming aware of the profound impact of law's embodiment in the script may sensitise the reader to both the possibilities and dangers of re-embodiment of legal norms in emerging technologies like Code, multi-agent systems (MAS), personal digital assistants (PDA) and other types of machine-to-machine (M2M) communications.

III. The Technological Articulation of Law

A. Transition of the Lifeworld

i. From orality to script

To grasp the transition from human societies depending on oral communication to societies based on the written word, I will follow the work of the French philosopher Paul Ricoeur, who provided a penetrating account of what happens to our lifeworld and sense of self when we move from orality to written text.22 The first point he makes is fixation.
Both written and spoken text actualise what is virtually present in language by selecting combinations of sounds, words and sentences to create meaning. Written text, however, suspends the volatility of the spoken word; it fixes meaning in a material form by inscribing it in stone, on clay tablets, on papyrus rolls or sheets of paper. Paradoxically, this attempt to petrify meaning creates a distantiation of

21 Glenn (2004: 64–5) about dispute resolution in non-state societies and (2004: 176–80) about dispute resolution in the Islamic tradition. Cf Hildebrandt (2006b).
22 Ricoeur (1986: 87–114); Goody and Watt (1963); Ong (1982) and Glenn (2004: esp ch 3 about oral traditions and the difference with written traditions). Ihde (1990: 80–84) provides a 'phenomenology of reading and writing'.

A Vision of Ambient Law 181

meaning,23 because the words are externalised and objectified in a material form outside the human body.24 The second point Ricoeur makes is that the text is liberated from the custody of the author, because the simultaneous presence of author and reader is not necessary. It is this distantiation of the author25 that creates the need for interpretation, since the author cannot explain the intended meaning to the reader. The context of the reader will co-determine her response to the text and thus generate new meaning, adapted to the differences of time and location between writer and reader. This does not mean that the meaning of the text is now determined by the reader's response, because the text will be read by many more readers, who may discuss their interpretations in writing, thus forming a chain of texts that keep the text in constant flux.
It also does not mean that the text becomes flooded with discontinuous meaning, because the very fact that fragmentation would render the text meaningless implies that readers will feel constrained by previous and anticipated meaning—thus contributing to the continuity of meaning.26 The third point made by Ricoeur concerns the shift from ostensive reference—referring to a shared Umwelt—to a non-ostensive reference that consists of the world that is created by texts that refer to each other (creating a shared context). Distantiation of ostensive reference27 is already present in spoken language, which allows one to speak of what and who are absent or elsewhere (distantiation of the here) and of events happening in the past or the future (distantiation of the now).28 Language allows one to plan ahead, based on imagined scenarios of how past and present may evolve. Written language stretches this virtualisation of the here and now, enabling one to reinvent both the past (remembering) and the future (planning) to a much further extent.29 The fourth point discussed by Ricoeur is the creation of a virtually unlimited public, which enables people to form translocal communities that need not share a local Umwelt—as long as they share a common context, as generated by texts. This move from a face-to-face community to a society of strangers was the condition of possibility for large-scale empires, even if only a literate class had access to the texts that held them together as a polity. The distantiation of the audience30

23 Cf Geisler (1985: 73).
24 Cf Goody and Watt (1963: 339).
25 Cf Geisler (1985: 73).
26 The context of the reader not only shapes the reader's response, it is also renegotiated as a result of the reader's acquaintance with the text. The context, in other words, is not a given, but in constant flux. Cf Lévy (1990: 26).
27 Cf Geisler (1985: 74) and Goody and Watt (1963: 306).
28 Cf Lévy (1998: 91–4).
29 Cf Lévy (1998: 50–51).
Cf Levinson (1999: 53) about the generalising quality of speech—the capacity that gives us a sense of things not present, the world as it is not—as abstraction. Also Goody and Watt (1963: 330), referring to Spengler's discussion of 'writing' that 'liberates' one 'from the tyranny of the present'.
30 Cf Geisler (1985: 74).

enlarged the scale of societies, providing the tools to install some type of hierarchy between those who rule and those who are ruled.

ii. From hand-written to printed script

To extend the analysis of the move from orality to script we will follow the cyber-philosopher Pierre Lévy on the transitions in human society generated by the shift from the hand-written script to the printing press.31 One of the points he makes concerns the close relationship between a master and a pupil in the medieval system of education, which comes down to the fact that one read a manuscript under the guidance of a master. This may indicate the lingering priority of orality, finally disturbed when the massive availability of printed text made individual education impractical, generating a new way of reading: both individual and in silence.32 Another point made by Lévy regards the limited number of primary texts in the age of hand-written manuscripts, which were confounded with accumulating commentaries, written in the (oral) style of question and answer. After the introduction of the printing press this scholastic way of teaching was replaced, due to the abundance of texts that needed some sort of systemisation to make sense of them. Instead of accumulated texts and commentaries, books were now ordered by means of tables of contents, indexes, matrices and graphics, and written in a more analytical style, announcing the advent of modern science.
The shift from aloud and public to silent and private reading,33 together with the shift from guided study of a limited set of primary and secondary texts to individual study of a variety of texts whose authority could no longer be taken for granted, tends—according to Lévy—towards an exchange of situational, interstitial rationality for Cartesian rationalisation, categorisation and universalism. The accumulation of text and context, initiated by the script and enhanced to an unprecedented level by the printing press, required new techniques to sort and file, classify and archive all the printed material in order to retain access to its content.34 So the alphabetic script,35 and especially letterpress printing, create an external memory, allowing the distantiation described above (of the meaning, the author,

31 Lévy (1990). Cf Eisenstein (2005) and Chappell and Bringhurst (1999).
32 Cf Goody and Watt (1963: 319). Cf Manguel (1996: 41–53), who traces instances of silent reading to the early Middle Ages and before, especially in monasteries. About the primacy of orality in manuscript (hand-writing) cultures see Ong (1982: 117), and Goody and Watt (1963: 316–17) about using the script as a mnemonic technique.
33 Cf Goody and Watt (1963: 339) about writing as an encouragement of private thought.
34 Lévy (1990: 108–12); Eisenstein (2005: 70–81). See also Chappell and Bringhurst (1999: 39–40) on the invention of the page as a major step forward in systemisation and indexing compared to the scroll. Goody and Watt (1963: 334) about the increasing inconsistency of the totality of written expression leading to social stratification.
35 Before the alphabetic—phonetic—script there were the ideographic and pictographic scripts, which provided less scope for virtualisation, as they did not constitute a set of letters to be recombined into words. See Lévy (1998: 50–51, 103–4, 111, 127); Goody and Watt (1963: 311–19); Ihde (1990: 82–3) and Ong (1982: 84–91).
the ostensive reference and the public), which can be understood as a process of deterritorialisation (as the written or printed word cannot be contained within a territory and creates communities beyond kinship and territory).36 Lévy speaks of virtualisation, by which he means a process that translates actual events or interactions back into the problems they solved, thus creating chances for a variety of new actualisations. Not only language, the script and the letterpress involve this process of virtualisation, but also, eg, money, medical technologies and the concept of the contract. They all provide the means to decontextualise actual occurrences into abstract or generic formats that provide a range of chances to experiment with novel responses. In mentioning the contract as a major tool to virtualise violence, Lévy touches upon the workings of the law in terms of virtualisation. One could rephrase his account of the contract by stating that law provides a format to creatively resolve a host of potential problems that could otherwise have given rise to violent combat.

iii. From letterisation to digitalisation

The alphabetic script and letterpress printing perform what Lévy calls letterisation (movable type printing),37 which means using the same separated individual letters in different sequences to make an array of different words and sentences, out of which stories and arguments can be composed and written down on stones, clay tablets, scrolls and in printed books. This can be contrasted with numerisation: using just the numbers 1 and 0 in different sequences to make an unlimited number of hyperlinked texts, models, images, sounds and movements, compiled on discs, in databases and floating around on the electronic highway. The turn from books, discs and databases to the electronic highway will be constitutive of a new way of life.
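Lévy's contrast between letterisation and numerisation can be made concrete: once digitalised, the letters that letterisation recombines are themselves nothing but sequences of 1s and 0s, the same substrate that also carries images, sounds and models. A minimal Python sketch, added here purely as an illustration and not part of Lévy's or the chapter's text:

```python
# 'Numerisation': a word reduces to a sequence of 1s and 0s.
# Each ASCII letter becomes an 8-bit binary string; the bitstream
# is indifferent to whether it encodes text, sound or image.
word = "law"
bits = "".join(f"{byte:08b}" for byte in word.encode("ascii"))
print(bits)  # 011011000110000101110111
```

The design point is exactly Lévy's: where the printing press recombined a fixed alphabet of letters, digital media recombine only two symbols, and everything else, including the alphabet itself, becomes one more encoding on top of them.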
Summing up, Lévy suggests we are in a transition from a linear sense of time to segments and points; from accumulation to instant access; from delay and duration to real time and immediacy; from universalisation to contextualisation; from theory to modelling; from interpretation to simulation; from semantics to syntax; from truth to effectiveness; from semantics to pragmatics; from stability to change.38 How this change in our sense of time and space will eventually affect us may be too early to spin out with any degree of precision, but that it will require

36 Cf Ong (1982: 102–7) about the distancing effected by writing.
37 Chappell and Bringhurst (1999: 5) indicate that the printing press was invented four centuries before Gutenberg in China. The fact that the Chinese script is ideographic meant that letterisation was out of the question, restricting the impact of the printing press as compared to its impact in Europe. See also ibid at 8, where they distinguish between the phases of woodblock printing, movable type letterpress and electronic 'texts'. Cf Goody and Watt (1963: 319–32) about the effects of the invention of the fully phonetic script in Greece.
38 Lévy (1990: 143). Cf Levinson (1999: ch 4) about the similarity between premodern non-literate cultures and postmodern digitalised cultures (cf Ong (1982: 133), writing about the age of 'secondary orality'); Levinson (1999: 160–164) about horizontal (simultaneous) vs vertical (historical) dissemination of information; and Goody and Watt (1963: 340) about similarities between mass media culture and oral cultures.

us to reinvent the law as the meta-language that holds together constitutional democracy seems apparent.

B. Transition of the Legal Tradition

i.
From oral to written legal tradition

Having discussed the impact of transitions from orality to script, printing press and digitalisation on the lifeworld, we now need to investigate what this means for our legal tradition. In this section I will trace the first major transition, from oral to written law, inspired by Ricoeur's analysis of the shift from orality to the script.39

First, written law externalises legal norms by materialising them in the form of inscriptions on stone, clay, scrolls and books, thus providing them with an independent existence. The law is no longer in the mouth of the judge, as it can now be found on a piece of paper that may outlive the judge, the legislator who enacted it and the first generation who fell under its jurisdiction. This entails a distantiation—or virtualisation—of the meaning of the law.

Second, the externalisation of the law enables a durability in time and space that allows a shift from local to translocal law. This entails a distantiation of the author of the law and creates the need for recurrent interpretation of the meaning of the law. Such recurrent interpretation results in accumulating comments, and commentaries on comments, generating new texts that nourish on intertextual reference. This, then, creates the need for a class of scribes (lawyers) that guards the intrasystematic coherence and the historical continuity of the law. The legal profession is born in the wake of the need for interpretation and systemisation of legal texts.

Third, by enabling a translocal jurisdiction, written law is the condition of possibility for a translocal polity, in which a law enacted by a few can regulate the life of many, since the addressees of the law need not have a face-to-face relationship. Their equality before the law consists in their equal distance to the law.

Fourth, as written law has—in principle—an unlimited public, it allows for the formation of large-scale polities and jurisdictions.
In fact, absolutist government depends on a written law, executed by an army of civil servants who can be ruled from the centre of an extended territory, all constrained by the same written law. In short, written law has facilitated the emergence of the modern state, exercising a moderate control over a vast territory by means of law, initiating the rule by law (legal instrumentalism). At the same time written law has produced a class of professional lawyers to control the intrasystematic coherence of the law, thus laying the foundations for the autonomy of the law, which initiates the rule of law (legal embodiment or moderate government).

39 Hildebrandt (2002: 90–93). Cf on oral legal traditions Glenn (2004: 8–13, 61–5).

ii. From hand-written to printed script in law

At the transition from written to printed law, the printing press has again extended the set of possible addressees of written legal rules. The audience of printed matter is not only virtually but practically unlimited.40 It has allowed a proliferation of texts, demanding ever more permanent care for intratextual coherence and continuity in time, creating a body of texts that emphasise this intrasystematic meaning of written law: legal dogmatics and legal doctrine.
The low transaction costs41 of printed law—compared to hand-written law—have evoked a process of democratisation, enabling addressees to read and interpret legal regulations, while such democratisation depends on the literacy of the addressees of legal norms (principle of publicity), which is of course facilitated by the printing press.42 We may conclude that the printing press was the condition of possibility for written law to be instrumental to the modern national state (providing the means for a detailed rule by law), democracy (providing the means to develop literacy on a full scale)43 and the rule of law (providing the need for an autonomous class of lawyers to interpret and sustain the intrasystemic coherence of law, cf the conclusion of the previous subsection).

IV. A Vision of Ambient Law

A. Law and Emerging Technologies: Mutual Transformations

The point of the analysis of legal traditions dependent on orality, writing and the printing press was to demonstrate that law cannot be separated from its technological embodiment. Facing life in a digitalised world, in intelligent environments with hybrid multi-agent systems, with real time monitoring and real time adaptation to one's inferred preferences, legal normativity will have to be reinvented. Depending on a law inscribed in printed matter may turn out to be like moving around as a dinosaur: it follows a 'logic' that does not match the 'logic' of mass data storage and intelligent data mining. One may counter that this is not a valid argument, because we should not follow whatever logic is on offer. I agree that we

40 The only 'obstacle' may be the fact that people do not speak the same language. The rise of the national state—in the era of the printing press—demonstrates an effort to establish national languages and national law to consolidate territorial borders that are inherently artificial and need continuous maintenance.
Cf Goody and Watt (1963: 332) about the world of knowledge transcending political units. The reach of the printing press is far greater in the case of multilinguistic education.
41 I am using the term here to refer to the relatively low costs of producing—and gaining access to—content in the form of printed books, compared to hand-written manuscripts.
42 Cf Goody and Watt (1963: 316) about the link between the phonemic system of the phonetic alphabet and the advent of democracy, and (1963: 332) about political democracy in Greece in relation to widespread literacy.
43 Cf Lévy (1998: 127). Cf Goody and Watt (1963: 316, 332). Cf Bawden and Robinson (2000) and Habermas (1990).

should not follow technological paradigms as a matter of course but rather use them to balance any emerging monopolies. If we turn our backs on the technological embodiment of legal norms we may be unable to discriminate information from noise and may not have access to the knowledge that makes a difference. Law in that case cannot provide any kind of countervailing power, and has no chance to effectively embody transparency rights, nor to effectively embody the opacity required to enjoy the liberties enshrined in constitutional democracy. This does not mean that written and unwritten law should be discarded. As we all know, written law depends on unwritten law, like any system depends on the lifeworld it nourishes and feeds on.44 So a digitalised law will probably depend on written and unwritten law, extending its scope and its capacity to provide effective protection against manipulation. To assess the implications of digitalisation for law is no easy task. Instead of providing answers I will at least raise a set of questions, building on Lévy's analysis of the transition from letterisation to digitalisation.
If ‘regulating technologies’ is indeed understood as the double challenge of sustaining a legal framework that regulates emerging technologies, while acknowledging that technologies themselves have a regulative (normative) impact on human society, we urgently need to face the issue of digitalisation as a process that will regulate and constitute our lifeworld and for that very reason needs to be regulated and constituted by law. In that sense ‘regulating technologies’ implies mutual transformations of law and technology. The questions raised by the digital age regard: the linear sense of time inherent in modern law, confronted with the segments and points defining its digitalised environments (compare reading a book to zapping around television programmes or surfing the Internet); the slow accumulation of legal texts like statutes, treaties, case law and doctrine that need to be studied and interconnected, confronted with instant online access to all the sources of the law (compare handbooks with selected cases to direct access to all verdicts given; compare a printed book with a hypertext);45 the delay and duration inherent in procedural safeguards that embody protection against hasty judgements,46 confronted with series of real time decisions taken by multi-agent systems in smart environments; modern law’s ambition to achieve equal application of general legal norms to equal cases (exemplifying law’s tendency to universalisation and systemisation), confronted with the refined personalisation and contextualisation made possible by advanced data-mining technologies; the care with which legal theory has constructed and

44 About the relationship between lifeworld and systems see Habermas (1996: 21–3).
45 Lévy (1990: 29–31) describes six characteristics of hypertext: the principles of metamorphosis, of heterogeneity, of multiplicity and incorporation of different levels, of exteriority, of topology and of mobile centres.
One could contrast them with principles of identity, homogeneity, unification, interiority, separateness and centrality, which seem to have more affinity with printed handbooks. Cf Levinson (1999: 30–34, 116–18).
46 One of the important characteristics of the practice of judges is the hesitation, the delay, the suspension of judgement. Cf Latour (2004: 202–3).

sustained the theoretical legitimisation and critical assessment of the positive law, confronted with a world in which models replace theory (demanding effectiveness instead of correspondence to reality); the hermeneutical practice of law (always involved in interpreting both the facts of the case and the legal norms that should apply), confronted with a world in which simulation rather than interpretation turns out to be the best way to anticipate future events; the emphasis on meaning as a reference to the world outside law (semantics), confronted with an emphasis on links and networks (syntax) and the actual consequences of doing things one way or another (pragmatics); the emphasis on legal certainty, intrasystematic coherence, continuity and stability (legal doctrine and jurisprudence), confronted with a rapidly changing, fluid world that needs permanent real time monitoring (pattern recognition) instead of the slow construction of durable knowledge that is universal and survives the ravages of time.

B. Ambient Law: A Vision of Legal Protection in the Digital Age

Coming down from the discussion of orality, script and digitalisation, and having raised a host of questions, I will now indicate in which direction we may seek the mutual transformation of law and technology in the field of Ambient Intelligence. This can serve as an indication of what is meant by the technological embodiment of law in the case of emerging technologies that have a normative impact which cannot easily be regulated by written law.
Ambient Intelligence is still a vision.47 A vision of a future stuffed with smart things that know about your habits, lifestyle, desires and preferences, about the risks you may run and about the opportunities you may nourish on. Smart cars that communicate with the road (detecting a wet surface), with other cars (preventing collisions) and with traffic monitoring systems (to adjust your speed or to change your direction), while at the same time checking your behavioural biometrics (pupil shape, eye movement frequency and yawn frequency) for signs of fatigue or stress in order to advise or force you to slow down or even stop driving. Smart dust travelling in your blood to detect the level of relevant elements in your blood, implants that check your heart rate, breathing pattern and brain states, all monitoring your health and sending out alarms when things go wrong or communicating with the environment to adapt room temperature or oxygen levels. Smart fridges that order groceries you seem to prefer when they run out of stock and communicate with other fridges to get the last update on bugs in the software or whatever else. Smart things require real time sophisticated profiling, based on data-mining processes that generate new knowledge by detecting unexpected patterns in databases. These patterns allow refined categorisation

47 ISTAG (2001); Aarts and Marzano (2003). For ethical considerations see Bohn, Coroama et al (2005).

of people and things in different contexts, providing a detailed profile that can be used to influence a person's everyday choices, credit rating, earning capacity, insurance premium, job offers and discounts. To insure an expensive car, insurance companies already demand that a black box be installed in one's car that tracks the driving behaviour from day to day, tuning the premium to your expected performance in terms of safe driving.
In the case of an accident this black box will also provide justice authorities with a new type of evidence. To be sure, the adaptation of the environment to a person's inferred preferences depends on the extent to which he generates a profit for the service provider that organises the adaptation: someone somewhere is paying for all the comfort, and we may guess it will be the consumer in the end.

Data protection legislation fails to protect citizens against the implications of this type of smartness in two ways: first, it is focused on protecting personal data, not on protecting a person against the unwarranted application of profiles one is not aware of; second, the technological tools to exercise the rights that have been attributed are not part of the technological infrastructure that is being constructed, let alone tools to exercise new rights like a right of access to the profiles that may affect the risks and opportunities one is attributed.48

To sustain constitutional democracy we need to reinvent the balance between what Gutwirth and De Hert have coined legal opacity tools and legal transparency tools.49 Opacity tools protect individual citizens from being transparent to their government (or any other large organisation that could manipulate an individual in the case of a knowledge asymmetry); they provide a kind of right to be left alone. Transparency tools provide individual citizens with rights to gain access to their personal data, to correct them if wrong and to check whether they have not been stored longer than necessary, used for other purposes or transferred (sold) to third parties without consent. In terms of Berlin's concept of liberty, opacity tools provide negative freedom (freedom from) while transparency tools provide positive freedom (freedom to).50 The problem with today's legal opacity tools is that they fail to conceptualise the legal status of profiles, while it is profiles (not data) that constitute new ways of making people transparent.
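The distinction between data and profiles can be made concrete in a few lines of code. The sketch below is purely illustrative: the driving log, the threshold and the risk label are hypothetical, invented for this example and not drawn from any actual insurer's or data miner's method. It shows the mechanism the text describes: no individual record is revealing on its own; it is the detected pattern that does the categorising.

```python
from collections import defaultdict

# Hypothetical black-box log: (hour_of_day, hard_braking_events) per trip.
trips = [(8, 1), (8, 2), (23, 7), (23, 9), (8, 1), (23, 8)]

def infer_profile(trips, risk_threshold=5):
    """Flag the hours whose average braking count exceeds the threshold."""
    by_hour = defaultdict(list)
    for hour, brakes in trips:
        by_hour[hour].append(brakes)
    return {h for h, b in by_hour.items() if sum(b) / len(b) > risk_threshold}

# The inferred category ('risky night driver', here simply the hour 23)
# attaches to the person, who may never see the profile that now tunes
# her premium -- the pattern, not any single datum, makes her transparent.
print(infer_profile(trips))  # {23}
```

Data protection rights of access and correction attach to the individual records in `trips`; the set returned by `infer_profile`, the thing that actually affects one's risks and opportunities, falls outside them, which is exactly the gap the chapter identifies.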
At the same time there is an urgent need for lawyers—whether legislator, judge, advocate, prosecutor or academic—to sit down with the technical engineers, information system specialists and computer scientists to discover how the technological infrastructure that is prepared at this very moment could be designed in a way that enables the right balance of opacity and transparency. Lawyers may have to learn from constructive technology assessment (CTA)51 to ask the right questions in order to initiate the mutual transformations necessary in a constitutional democracy.

48 Hildebrandt (2006a).
49 Gutwirth and De Hert (2005).
50 Berlin (1969).
51 Rip, Misa et al (1995).

V. Conclusions: The Blind and the Lame

In his Les technologies de l'intelligence Lévy discusses the relationship between computer engineers and sociologists as one between a blind practice and a lame practice: as long as engineers stick to the technicalities and sociologists move in afterwards to add some social aspects, the problem of the human-machine interface will not be resolved. To separate knowledge of machines from cognitive and social competence boils down to the artificial construction of a blind person (the 'pure' technologist) and a lame person (the 'pure' social science specialist) who are then forced to associate, but too late: the harm has been done.52 The point made is valid for the relationship between computer engineers and lawyers as well. The embodiment of modern law in the written and printed script cannot be taken for granted and may need extension into emerging technologies. Having studied the impact of such embodiment and having realised that technology is neither good nor bad but never neutral, the conclusion must be that it will require the active involvement of both ICT specialists and lawyers to figure out which technological developments will sustain constitutional democracy and which will destroy it.
In the case of Ambient Intelligence (AmI) we may need to develop an Ambient Law that is embodied in the algorithms and human-machine interfaces that support AmI, and for this we will have to break through our paralysis, ready to become literate in terms of a new script.

References

Aarts, E and S Marzano (eds) (2003) The New Everyday: Views on Ambient Intelligence (Rotterdam, 010).
Bawden, D and L Robinson (2000) 'A Distant Mirror?: The Internet and the Printing Press' 52 (2) Aslib Proceedings 51–8.
Berlin, I (1969) 'Two Concepts of Liberty' in his Four Essays on Liberty (Oxford and New York, Oxford University Press) 118–73; first published 1958.
Bohn, J, V Coroama et al (2005) 'Social, Economic, and Ethical Implications of Ambient Intelligence and Ubiquitous Computing' in W Weber, J Rabaey and E Aarts (eds) Ambient Intelligence (Zurich, Springer) 5–29.
Chappell, W and R Bringhurst (1999) A Short History of the Printed Word (Vancouver, Hartley & Marks); first published 1970.
Dewey, J (1927) The Public & Its Problems (Chicago, The Swallow Press).

52 Lévy (1990: 61). Some Kantian undercurrent may be suspected (sensory input without concepts is blind; concepts without sensory input are empty).

Eisenstein, E (2005) The Printing Revolution in Early Modern Europe, 2nd edn (Cambridge and New York, Cambridge University Press).
Geisler, DM (1985) 'Modern Interpretation Theory and Competitive Forensics: Understanding Hermeneutic Text' III The National Forensic Journal 71–9.
Glenn, HP (2004) Legal Traditions of the World, 2nd edn (Oxford, Oxford University Press).
Goody, J and I Watt (1963) 'The Consequences of Literacy' 5 (3) Comparative Studies in Society and History 304–45.
Gutwirth, S and P De Hert (2005) 'Privacy and Data Protection in a Democratic Constitutional State' in M Hildebrandt and S Gutwirth (eds) Profiling: Implications for Democracy and Rule of Law, FIDIS deliverable 7.4 (Brussels); available at accessed 12 June 2008.
Habermas, J (1990) Strukturwandel der Öffentlichkeit: Untersuchungen zu einer Kategorie der bürgerlichen Gesellschaft (Frankfurt am Main, Suhrkamp); first published 1962.
—— (1996) Between Facts and Norms: Contributions to a Discourse Theory of Law and Democracy, trans William Rehg (Cambridge, Polity Press).
Hildebrandt, M (2002) Straf(begrip) en procesbeginsel. Een onderzoek naar de betekenis van straf en strafbegrip en naar de waarde van het procesbeginsel (Deventer, Kluwer/Sanders Instituut).
—— (2006a) 'From Data to Knowledge: The Challenges of a Crucial Technology' 30 DuD—Datenschutz und Datensicherheit.
—— (2006b) 'Trial and "Fair Trial": From Peer to Subject to Citizen' in A Duff, L Farmer, S Marshall and V Tadros (eds) The Trial on Trial: Judgment and Calling to Account (Oxford and Portland, OR, Hart Publishing) 15–37.
—— (2008a) 'Ambient Intelligence, Criminal Liability and Democracy' 2 (2) Criminal Law and Philosophy 163–80.
—— (forthcoming a) 'Legal and Technological Normativity' 12 (2) TECHNÉ, Journal for Research in Technology and Philosophy.
—— (forthcoming b) 'Technology and the End of Law' in E Claes and B Keirsbilck (eds) Facing the Limits of the Law (Dordrecht, Springer).
Hildebrandt, M and S Gutwirth (2007) '(Re)presentation, pTA Citizens' Juries and the Jury Trial' 3 (1) Utrecht Law Review, available at accessed 12 June 2008.
Ihde, D (1990) Technology and the Lifeworld: From Garden to Earth (Bloomington and Indianapolis, Indiana University Press).
ISTAG (2001) Scenarios for Ambient Intelligence in 2010, Information Society Technology Advisory Group; available at accessed 12 June 2008.
Jin, S, S-Y Park et al (2007) 'Driver Fatigue Detection Using a Genetic Algorithm' 11 (1) Artificial Life and Robotics 87–90.
Kranzberg, M (1986) 'Technology and History: "Kranzberg's Laws"' 27 Technology and Culture 544–60.
Latour, B (1993) La Clef de Berlin et autres leçons d'un amateur de sciences (Paris, La Découverte).
—— (2004) La fabrique du droit. Une ethnographie du Conseil d'État (Paris, La Découverte).
Lessig, L (1999) Code and Other Laws of Cyberspace (New York, Basic Books).
Levinson, P (1999) Digital McLuhan: A Guide to the Information Millennium (London, Routledge).
Lévy, P (1990) Les technologies de l'intelligence. L'avenir à l'ère informatique (Paris, La Découverte).
—— (1998) Becoming Virtual: Reality in the Digital Age (New York and London, Plenum Trade).
Manguel, A (1996) A History of Reading (New York, Penguin).
Marres, N (2005) No Issue, No Public: Democratic Deficits after the Displacement of Politics (Amsterdam, self-published). Available via: accessed 12 June 2008.
Mittag, M (2006) 'A Legal Theoretical Approach to Criminal Procedure Law: The Structure of Rules in the German Code of Criminal Procedure' 7 (8) German Law Journal 637–46.
Mullin, JR (1992) 'The Reconstruction of Lisbon Following the Earthquake of 1755: A Study in Despotic Planning' 7 Planning Perspectives 157–97.
Ong, W (1982) Orality and Literacy: The Technologizing of the Word (London and New York, Methuen).
Ricoeur, P (1986) Du texte à l'action. Essais d'herméneutique II (Paris, Éditions du Seuil).
Rip, A, T Misa et al (1995) Managing Technology in Society: The Approach of Constructive Technology Assessment (London, Pinter Publishers).
Searle, JR (1969) Speech Acts: An Essay in the Philosophy of Language (Cambridge, Cambridge University Press).
Shklar, J (1990) The Faces of Injustice (New Haven, Yale University Press).
Taylor, C (1995) 'To Follow a Rule' in his Philosophical Arguments (Cambridge, MA, Harvard University Press) 165–81.
Verbeek, P-P (2005) What Things Do: Philosophical Reflections on Technology, Agency and Design, trans Robert P Crease (Pennsylvania, Pennsylvania State University Press).
—— (2006) 'Materializing Morality: Design Ethics and Technological Mediation' 31 (3) Science, Technology & Human Values 361–80.
Winch, P (1958) The Idea of a Social Science (London and Henley, Routledge & Kegan Paul).
Zarsky, TZ (2002–03) ‘“Mine Your Own Business!”: Making the Case for the Implications of the Data Mining of Personal Information in the Forum of Public Opinion’ 5 (4) Yale Journal of Law & Technology 17–47.

9
The Trouble with Technology Regulation: Why Lessig’s ‘Optimal Mix’ Will Not Work
SERGE GUTWIRTH, PAUL DE HERT AND LAURENT DE SUTTER∗

‘Le droit ne sauve pas, il n’humanise pas, il n’administre pas, il n’économise aucun tracas. Le droit ne remplace rien d’autre.’ (Latour 2002: 292)1

Introduction

There is no doubt that Lawrence Lessig’s work on the regulation of emerging technologies is seminal and imposes itself as a benchmark for any further reflection on the issue. However, the precious paths opened by Lessig might not fully satisfy the legal actors and professionals who are making and constructing the law. In a famous paper, Judge Frank Easterbrook even went so far as to state that the answers proposed by Lessig were those of an amateur (Easterbrook 1996). Notwithstanding this harsh appreciation and Lessig’s extensive reaction to it (Lessig 1999a), legal academics all around the world have continued to use and extend Lessig’s argument in their own fields of research, while refining, enhancing and/or criticising it. From the realm of cyberlaw and cybercriminality where it was first developed, the argument has since been applied to a wide range of topics, from human genetics to car safety devices. Roger Brownsword, for instance, both criticised and enriched Lessig’s construction, respectively by questioning the reduction of choice and respect implied in the appeal to ‘techno-regulation’ or regulation through code, and by refining the notion of regulation through the conceptualisation of four different dimensions of regulation, eg the regulatory ‘phasing’, ‘pitch’, ‘modes’ and ‘range’ (Brownsword 2005 and 2007).
∗ The authors would like to thank Isabelle Stengers for her suggestions and comments upon earlier drafts of this contribution, especially as regards the paragraphs on the ‘ecology of practices’, a set of concepts she developed and proposed.
1 Quotes from the work of Latour and Stengers will be in French, unless an English translation is available.

However, despite their obvious merits, none of these additions to Lawrence Lessig’s argument seems to have provided a satisfactory answer to the objections of the legal practitioner. Why? Simply because the trouble with regulation lies at its very core. To enrich and to nuance Lawrence Lessig’s concept of regulation does not wipe out the problems related to the fact that it is primarily a concept of a political nature, and not of a legal one. In the present paper we will argue that the concept of regulation, and especially Lawrence Lessig’s interpretation of it, is not relevant for the legal profession that practises the law, and this for two reasons: (1) regulation is too general a concept to recognise the specificity of legal practice, particularly in its confrontation with new technologies; (2) regulation is too powerful a concept to allow the novelty of emerging technologies to be taken into account by those who, like legal practitioners, have to deal with it at their own pace and with their own tools and responsibilities. The generality and the power of the concept of regulation, however, should not be considered as perversities invisible to those who defend it. On the contrary, we will argue that the arguments in favour of the concept of regulation are based precisely on the assumption of its generality and its power because, indeed, these features enable the completion of a well-defined political agenda.
Those who favour regulation expect it to produce a convergence of legal, political, social and technological practices, which in its turn will contribute to the effective control and channelling of emerging technologies. Our problem rests with the fact that the proposed agenda requires the different practices involved to tune themselves to a rhythm, a melody and a register dictated by an outside regulating instance. It is of the nature of the concept of regulation that law becomes the servant of politics. We will highlight that we judge this subjugation of the law in action to be contrary to the constraints that characterise law as a specific practice and régime d’énonciation. We are also confident that the political agenda defended by the proponents of regulation will not succeed, owing to the unwillingness of legal professionals to do something other than what they are expected to do, viz to speak or say or produce the law.

Lawrence Lessig’s Optimal Mix

For Lawrence Lessig, regulating new technologies is a difficult task, which demands looking at a diversity of modalities concerned with the emergence of new technologies. What are these modalities? There are four of them: the law, the market, social norms and the technology itself (also termed the architecture or the ‘code’). Hence, there is no regulation in general, but only specific modes of regulation which each ‘constrain differently’: the legal one, the economic one, the social one and the technological/architectural one (Lessig 1999b: 235–9 and Lessig 2006: 340–45). Regulating new technologies then means succeeding in building some kind of concordance or interaction between these four different modes of regulation. Such a concordance, however, is not neutral: what is looked for is a good form of concordance, ie a form of concordance which satisfies the proper goals of the different practices involved.
This good concordance or interaction of regulatory modalities is what he calls the ‘optimal mix’ (Lessig 1999a). Such an optimal mix must be constructed rather than simply described, and its optimality will depend on the object that must be controlled, as well as on the context and the flexibility of the four regulatory modes. Its construction requires the definition of a scale on which to measure its quality. The definition of a scale of that sort could even, to some extent, be considered the central operation in the construction of the optimal regulatory mix in a given case. For, in Lawrence Lessig’s view, to regulate a new technology is not a technocratic operation: it requires the active defence of a positive choice of values between those embedded in the different practices involved in the emergence of this new technology. Regulation, in his view, is a form of activism:

My suggestion is that if we … understand how the different modalities regulate and how they are subject, in an important sense, to law, then we will see how liberty is constructed not simply through the limits we place on law. Rather, liberty is constructed by structures that preserve a space for individual choice, however that choice may be constrained. We are entering a time when our power to muck about with the structures that regulate is at an all-time high. It is imperative, then, that we understand just what to do with this power. And more important, what not to do. (Lessig 1999b: 239 and Lessig 2006: 345)

Is Law an Activist Practice?

To consider the regulation of new technologies from an activist perspective is not self-evident, particularly when it comes to the expectations one can have of the law and the work of the courts. Chapters fifteen and sixteen of Code and Other Laws of Cyberspace, or chapters sixteen and seventeen of its second edition, Code Version 2.0, endorse this hesitation, distinguishing between the role of the framers (politics) and the role of the courts (law).
Judges ‘cannot be seen to be creative’ and their hesitancy and prudence should be understood (Lessig 1999b: 218, 222 and Lessig 2006: 319, 325). It is not up to the courts to make political choices when the values related to a case cannot be inferred with clarity and certainty from the legislative framework, as is often the case with conflicts created by emerging technologies. This, for Lessig, is one of the problems we face when making choices about cyberspace and how to regulate it. However, Lessig contends, in response to this problem, that in such cases judges, and especially lower court judges, should be ‘stronger’ and ‘kvetch’ about the issues and changes at stake; they should talk, whine and complain (Lessig 1999b: 222–3 and Lessig 2006: 325–6). They should then identify the competing values and resolve issues in a way most likely to induce political consideration or review of the solution. Hence there is no clear-cut ‘judicial activist’ role for courts in Lessig’s view, but a political role nevertheless:

While it will never be the job of the courts to make final choices on questions of value, by raising these questions the courts may inspire others to decide them. … I would rather err on the side of harmless activism than on the side of debilitating passivity. It is a tiny role for courts to play in the much larger conversation we need to have—but to date have not started. (Lessig 1999b: 223 and Lessig 2006: 326–7)

Hence, Lessig endorses the constraints of the practice of judges and lawyers, nonetheless promoting a form of modest activism directed towards the production of effects in the political sphere. As a means of regulation among the three others, the role of law in the composition of an optimal regulatory mix is thus both a selfish and an altruistic one.
It is a selfish role since it is indexed upon law’s own programme: the contribution of law to the composition of the optimal mix cannot be contradictory to law’s specific ends, Lessig suggests. But it is also an altruistic role since what is asked from law is to contribute to something that exceeds its own realm: the regulatory activity of law produces important echoes within the regulatory activities of other practices, and the other way round. However, as these two aspects are intrinsically incompatible, it is necessary to subject them to a third one in order to be able to make them serve together the regulatory purpose of law. And precisely, this third aspect of the regulatory role of law seems to be this regulatory purpose of law itself. That is: the assumption that law’s regulatory contribution to the composition of the optimal mix also belongs to the same realm of activism as the one to which regulation in general belongs. In order to consider the regulation of new technologies as an activist task, it is thus necessary to consider law itself, but also markets, technology and social norms, as oriented towards regulation. But this makes for a weird picture. At the end of the day, doesn’t that mean that law’s own ends begin to fade into what law is contributing to? Isn’t it then regulation in general that begins to constitute law’s own regulatory ends? Regulating new technologies indeed implies having the capacity to count on the different practices which contribute to the composition of the optimal regulatory mix. And to count on these practices then implies that their own ends were, from the very beginning, to reach this optimal mix. Law, economy, technology and social norms must be assumed to be regulatory practices. The problem with Lawrence Lessig’s view of law’s contribution to the composition of an optimal regulatory mix is twofold: (1) it reduces law to a regulation-oriented practice; (2) it makes law an activist practice.
To a certain extent, this can be considered as only one problem, ie the problem of the political dimension of law. To consider law as a regulation-oriented activist practice means that law is expected to give up something in favour of something more important. For Lessig, what is more important than law’s own ends is obvious: it is good regulation itself. But what happens if there is nothing more important than law’s own ends? To ask a legal practitioner to contribute to something the importance of which could only be judged by somebody else is rather awkward. Although prudent, and using the relativising Yiddish notion of ‘kvetching’, Lessig nevertheless ends up expecting legal practitioners to be part of an optimal mix whose optimality cannot, by definition, be measured in legal terms. The optimality of the mix can only be measured politically: in terms of the success or failure of the application of a given political agenda to a given new technology. This then means that the representative of law at the table where the optimal mix is discussed cannot remain a lawyer. It is somebody else who, when the time comes to decide upon the optimality of such a mix, speaks for the lawyers. This somebody is, of course, Lessig himself. Despite the enormous respect that one might have for his political stances, it is unlikely that lawyers will accept him as leading the definition of the legal perspective on the emergence of new technologies.2 Their indignation when confronted with their denunciation as ignorant instruments of politics, morals, sciences or whatever should be taken seriously. There is a set of constraints (settings, procedures, hesitations) that form the specific legal régime d’énonciation and that must be respected in order to make law or ‘to practise law’.
2 To try to convince them to consider law as regulation-oriented could even prove counter-productive: if they accept Lessig’s picture, lawyers could very easily become very happy with the fact that they have such a huge power to orient the development of new technologies. If lawyers are right to behave like activists, they could as well be activists of de-regulation.

Understanding the Ecology of Practices

The trouble with regulation, from a legal practitioner’s perspective, lies with the fact that regulation assumes that law should serve aims other than legal ones. When constructing and producing the law, however, a legal practitioner is much more tied to what makes her a lawyer than to a political and regulatory agenda she is deemed to fulfil and by which she is mobilised. A good lawyer is compelled by what defines the legal practice. Nevertheless, what defines this practice? What opposes it to other practices (such as politics, the practice of those who design technology or the work of scientists in laboratories)? In her seminal work in the philosophy of science, Isabelle Stengers has tried to give some generic clues that may lead us to hazard a first answer to this question (Stengers 1996). In Stengers’ concepts a practice can only be grasped by looking at its ‘requirements’ (exigences) and its ‘obligations’ (obligations), which together form the ‘constraints’ (contraintes) of the practice. For Isabelle Stengers a practice (eg a profession) can only be understood by taking seriously the constraints that one has acted upon to belong, and continue to belong, to it. A constraint, in Stengers’ view, is radically different from a ‘condition’, which is deemed to provide an ex post explanation, legitimisation or ‘grounding’ of what happened. Neither is it an external limit or imperative. Constraints, to her, do not explain, validate or legitimise the practitioner’s action.
Instead, they compel the practitioner to act. Constraints leave no alternative but to act, although they do not imply or indicate how the practitioner must act. In other words, constraints call for fulfilment ‘as a matter of life and death’, but they remain open as to the ways in which they are to be fulfilled. ‘Une contrainte impose sa prise en compte mais ne dit pas comment elle doit être prise en compte’ (Stengers 1996: 74). Hence, constraints yield their signification as they emerge, during the process of their coming into existence. Indeed, in times of stability, the accomplishment of constraints by practitioners will resemble mere compliance with a pre-existing norm, but this is only an impression: it is essential not to boil down constraints to such mere compliance, because that would close the door to any transformation of what, in fact, only looked like or was presented as a norm to comply with. The requirements of a practice address its exterior and concern issues related to the articulation of that practice with its environment and with other practices. They can be seen as claims from the practitioner towards everything her practice depends upon in a certain situation: they can be demands and statements directed toward the outside world, but they may also express needs. Requirements have to do with the sense and reach its practitioners want their practice to have for others. In this sense requirements can be coined as conventional, insofar as this convention is steadily subject to reinvention in the face of new changes, opportunities and threats in the practice’s environment. Most clearly, requirements aim at the preservation of the practice’s means of reproduction and at the recognition of what brings and holds its practitioners together, namely the practice’s ‘obligations’.
While the requirements of a practice are addressed to the outside, the obligations are turned inwards.3 If the requirements relate to the stability of the products and creations of a practice in the outside world, the obligations refer to its internal and irreducible register of creativity (Stengers 1996: 91). Obligations are what permit a practice’s internal sense of validity: they spell out the regime of success of the practitioner’s action. Similarly, Latour would speak of the ‘régime de véridiction’ of law, science or politics (below). For a practitioner, in other words, the obligations are the constraints through which she may hesitate in the face of the practice’s requirements. If, as said above, requirements evoke the conventional dimension of a practice, the obligations might call to mind its identity, but again not in a petrified or given form. Obligations do not guarantee the fixed identity of a practice; instead they define the peculiar mode of hesitation of its practitioners. These hesitations may yield changes and evolutions of the practice concerned. Obligations are what practitioners consider to be compelling in the way they interact with others (with their environment, when their practice is networked) or when the practice is at odds with mobilisations by its environment. Obligations encompass what practitioners cannot betray without losing their belonging to the practice.

3 Stengers 1996: 89: Il est clair … que “exigence” et “obligation”, comme l’indiquent leurs préfixes respectifs, dessinent une forme de topologie. “Ex” implique l’adresse à un “dehors”, une relation d’extériorité, alors que “ob” implique une forme de face à face. On exige quelque chose de quelqu’un. On est obligé par, ou on est l’obligé de, avec, le cas échéant, la dimension de gratitude que la langue portugaise met en avant.
In the context of an ‘ecology of practices’, practitioners are thus always pulled away from the network in which they are knitted but, simultaneously, as practitioners, they remain ‘obliged’. A practitioner must answer to mobilisations, but not to the point of betraying her obligations. Hence, a practice can never be reduced to a mere function or expression of its environment, or to its role in the network. Law cannot be reduced to politics or economics; science cannot be accounted for merely by ‘social explanations’. The constraints of a practice, its obligations and requirements, confront every practitioner with the question of how to change without betraying. In other words, both requirements and obligations are part of what makes a good practitioner, because their interplay guarantees both change and innovation of a practice, against its dogmatic refuge and immobilism, and consistency and continuity, against its evaporation or colonisation. A good practitioner can only innovate in her practice by taking its constraints seriously. However, neither the obligations nor the requirements of a practice can be determined entirely in advance, and the practice emerges from their interplay. The advantage of this analysis lies in its ability to designate an important kind of closed contextuality present in the idea of obligations. When producing practice, practitioners will ask themselves: will my produced practice be recognised by my peers as a legitimate part of the practice to which we belong? Requirements and obligations give us an important insight into the materiality of practices.
This brings us to a second advantage of the ecology approach, which denotes the intellectual obligation to respect the diversity of practices and the preliminary duty to investigate the specificity of each practice.4 In order to understand how practices produce their outcomes, it is not so much the intentions or motives that lead a professional to act in one way or another that count, but rather how professionals produce. This internal analysis needs to be made for the four practices involved in Lessig’s optimal mix perspective, but that would take us too far. Having said that, we note in passing that the practice of politics could turn out to be far different from the instrumental, goal-oriented image that could be derived from reading Code and Other Laws of Cyberspace.5

4 For a similar plea and a description of the (internal) method to be followed, see Latour 2002: 278–79.
5 For an account of the constraints of politicians, especially of the requirement of a ‘sense of reality’ and the necessity to understand that politics is about events, not science or policy programmes, see Ignatieff 2007.

Understanding the Legal Practice

Having introduced this general picture, and having described the set of tools that allow us to understand the twofold constraints of each practice, we shall now turn to law. What are the constraints of legal practice? What makes a lawyer a legal practitioner? Pursuing his own programme of a systematic description of the contemporary forms of véridiction, in his ethnography of the French Conseil d’Etat,6 Bruno Latour provided important clues to answering these questions (Latour 2002 and 2004a). First, there are the requirements of the legal practice, viz.
to decide on a case within a reasonable time period, to qualify facts in order to move to a legal register, to decide on the basis of the file and the claims formulated, to provide legal certainty or predictability, to build up precedents, to scrupulously respect procedures, to look at the legal past, and to question the whole of law while only declaring the law in one case. Legal practitioners are confronted with these ‘exigences’ and will respond to them by acting on the basis of their ‘obligations’: interested in what makes them lawyers, hesitating in the way legal practice demands about the decision to take and the arguments to use. Latour highlights in extenso the accumulation of micro-procedures at work in the French Conseil d’Etat that manage to produce detachment and distance from the flesh-and-blood facts of the cases and to keep doubt and hesitation at bay (rapporteur/réviseur/commissaire du gouvernement/section or court). What makes a legal practitioner, Latour states, is the way he answers the question: ‘Have I hesitated well, meaning according to the legal practice?’ Or, in his own words: ‘La justice n’écrit droit que par des voies courbes. Autrement dit, si elle refusait d’errer, si elle appliquait une règle, on ne saurait la qualifier ni de juste, ni même de juridique. Pour qu’elle parle juste, il faut qu’elle ait hésité’ (Latour 2002: 162–3). But it is indeed not enough to hesitate in a given case: everybody does so. What is important is the specific quality of this hesitation. In the case of law, Bruno Latour writes, this quality is designated by the operation about which a lawyer hesitates, namely the operation of ‘imputation’.
6 This study of the French supreme court in administrative law, the Conseil d’Etat, is based on observations pursued between 1994 and 1999. The book represents a sort of ‘laboratory life’ of judges at work and tries to compare the type of objectivity reached in science laboratories with a very different but highly specific type of objectivity in the law. It is also an effort to characterise the juridical enunciation and thus a contribution to the long-term enquiry into the comparison of regimes of enunciation.

When a lawyer considers a case, what he hesitates about is the way in which he will make this case stick to the wholeness of law, and the only way to build such a relationship between a case and the wholeness of law is to connect the individuals at stake in the case to a legal reality such as, for instance, accountability or guilt. To declare somebody legally accountable for something is not to impute to him a moral quality: it is to impute a quality which requires the wholeness of law to be applicable to him, and not only the local provision that he may have infringed. This is why the choice of a type of legal imputation is, for a lawyer, a matter of hesitation: to connect somebody to the wholeness of law cannot be realised at will. If the lawyer has not hesitated well, the legal imputation through which he made the case stick to the wholeness of law can be declared legally void: hesitation is a very delicate matter. Since it is not only a local provision which is at stake in a given case, but the wholeness of law, to hesitate is for a lawyer a trial through which he will have to show his ability to manipulate this wholeness, so that the imputation he realises can be declared compatible with it.7 Law, from this perspective, is an operation that tends to hold persons and things together within a web of relations that makes possible the imputation of acts, words and things to persons.
It holds our societies together via tiny and shallow, but crucial, bonds (Latour 2004b: 35–6). Without the law’s quiet music, Latour lyrically writes, ‘on aurait perdu la trace de ce que l’on a dit. Les énoncés flotteraient sans jamais pouvoir retrouver leurs énonciateurs. Rien ne lierait ensemble l’espace-temps en un continuum. On ne retrouverait pas la trace de nos actions. On n’imputerait pas de responsabilité’ (Latour 2002: 299). In the West, indeed, such legal bonds can truly be coined as quintessential, since they have characterised western societies since Roman civilisation and have survived in a plethora of different, and sometimes opposite, political regimes. If the law in practice can rightly be said, with Latour, to be a particular way of constructing bonds and a practice of attributing responsibilities, then it must indeed be deemed able to respect its constraints in many different political frameworks. As said above, the constraints of a practice do not make change impossible. Both the requirements and the obligations of a practice evolve during their interplay. That is why the outcome of a legal exercise might well lead to a reversal of case law which remarkably still responds to the same obligations: reversals of case law are the result of the same operations, the same hesitations and the same endeavour to preserve the integrity of the maze of legal bonds. Revolutions are unthinkable in the law; they may happen elsewhere, impacting upon the political framework of the law, the legislation, but not upon the law in its operation. In the French Conseil d’Etat, for instance, the conseillers and the other actors meticulously avoid falling back upon thoughtless certainties and easy reasoning: the procedures aim precisely at obliging all the participants in the construction of the decision to foster and produce hesitation.
So, the Conseil d’Etat provides for a subtle procedural organisation of moments of hesitation: there is a first rapporteur, who is then judged by his colleagues, before the final draft is written (often by somebody other than the rapporteur). Having one eye on the case and the other on the existing corpus of law, the rapporteur might propose some rearrangements of that corpus of (case) law. However, if accepted, these redistributions will always be presented as the outcome of a valid combination of principles already in place, and not as a radical change. The participants are aware that what they produce should be recognised by the legal community as falling within the law: this fix on the law in its totality, at stake with every new decision, is the proper characteristic of the legal practice.

7 One could then say that the wholeness of law is the ‘requirement’ that defines the identity of the legal practice (it is a practice that involves the wholeness of law in every case), while the hesitation about the imputation is the ‘obligation’ that designates the field of creativity of the legal practitioner (it is a practice that challenges the wholeness of law in every case).
In a marvellous chapter Latour convincingly describes how this precise stance sharply differentiates the legal practice from, eg, the practice of lab scientists, who have other, less ‘total’ methods of verifying their output.8 Legal certainty is one of the requirements of the legal practice, contrary to science, where the idea of scientific certainty is absent and would be considered horrifying.9 Whereas scientists can problematise, attack and impact upon the present state of their science, and even revolutionise the existing ‘paradigms’, legal practitioners have to be prudent and guarantee the continuity of a law that should always be, and be deemed to be, there (Latour 2002: 258).10 Latour’s description of the judges in the French Conseil d’Etat can be contrasted with Mitchell Lasser’s description of the judges in the French Cour de cassation (Lasser 1995, 2003 and 2004). The decisions of the latter, Lasser writes, are

8 Latour 2002: 273: Le Droit avec un grand “D” est le destinateur incontesté de tous leurs actes de langage. Tandis que la question de la méthode scientifique intervient rarement dans la discussion des chercheurs (où elle a un rôle décoratif, polémique, pédagogique), faire le droit, dire le droit, rester dans les limites du droit, apparaît comme l’une des caractéristiques de l’animal même.
9 Latour 2004a: 93: But unlike scientists, who dream of overturning a paradigm, of putting their names to a radical change, a scientific revolution, or a major discovery, conseillers du gouvernement invariably present their innovations as the expression of a principle that was already in existence, so that even when it is transformed completely the corpus of administrative law is ‘even more’ the same than it was before. This prowess is required by the essential notion of legal predictability [sécurité juridique], which would seem quite out of place to a researcher.
Just imagine the effect of a notion of scientific certainty on research: what was discovered would have to be expressed as a simpler and more coherent reformulation of an established principle, so that no one could ever be surprised by the emergence of a new fact or a new theory.
10 Latour 2004a: 91–2: In science, the role of the conseiller du gouvernement could be replicated only by entrusting a scientist with the overwhelming task of reviewing his entire discipline from the beginning, in order to test its coherence and to ensure its relation to the facts, before proposing the existence or non-existence of a given phenomenon in a formal deposition, although the final decision would not be his, and although he would have to work alone, guided only by his own knowledge and his own conscience, being content to publish his conclusions quite independently. Although something like this role can be found in the form of scientific review articles, which are commissioned from experienced scientists in mid-career, who are expected to summarise the state of the art for their peers, review articles don’t have this peculiar mixture of authority and absence of authority. Either the conseiller du gouvernement is like a scientific expert, in which case his greater authority should relieve his peers of their obligation to doubt—he knows more about the issue than they do—or he is simply not playing the role of the expert, in which case why place on his shoulders the crushing burden of having to review the whole case in order to enlighten the process of judgment? The role of the conseiller du gouvernement resembles that of a scientist only to the extent that he speaks and publishes in his own name; similarly, there is something of the conseiller du gouvernement in all scientists, who see themselves as enlightening the world.
The conseiller du gouvernement is, then, a strange and complex hybrid, which has something of the sovereignty of lex animata, law embodied in a man, but whose pronouncements bind no-one but himself, whereas in the old world sovereigns always had the last word. In that case, what does he do? What is his function? He gives the whole team the occasion to doubt properly, thereby avoiding any precipitously-reached solution, or any cheaply-bought consensus.

The Trouble with Technology Regulation 203

very short and framed as collegial and impersonal single-sentence syllogisms, without concurrences or dissents. They contain few factual presentations and references to precedents, and leave policy considerations out of scope. From this perspective, indeed, the French judge can be depicted as a passive intermediary who mechanically applies age-old legislation.

In short, the Cour de cassation decision is indeed a remarkably formalist-looking document, one that goes to great pains to convey that the French civil judge is nothing more than the passive agent of the statutory law. Needless to say, it is quite hard to imagine how any legal system could function if its judiciary actually behaved in accordance with the official French portrait of the judicial role. The list of potential problems is simply insurmountable. (Lasser 2003: 8)

Lasser’s analysis describes the co-existence of two distinct ‘discursive spheres’, of which only one, namely ‘the single-sentence syllogism premised on code-based textual grounds’, is made public through the systematic official publication of the judgments, producing ‘an image of formalist and magisterial judicial decision-making produced by syllogistically deductive means’ (Lasser 2003: 3, 5). Behind this formal mode, however, Lasser detects a second, informal mode. French judges, like their colleagues abroad, must decide cases, adapt to times and requirements, and judge well.
The institutional design allows for this in an effective way, Lasser argues. Firstly, there is the notion of the ‘sources of the law’ with its hierarchical structure, which at first sight seems to restrict law-making status and authority to the legislature, but in reality opens the door to flexibility in judicial decision-making.

In effect French civil judges are empowered to change their interpretations as needed—in the name of ‘equity’ in particular cases or in the name of ‘legal adaptation or modernization’ in classes of cases over time—precisely because these interpretations do not and cannot constitute ‘law’. (Lasser 2003: 3)

Secondly, the different professional players within the Cour de cassation develop a more informal decision-making process based on the construction and deployment of what Lasser calls a ‘socially responsive hermeneutics’, resulting in debates and discussions that produce meaningful solutions that make good sense. In this search for solutions that make good sense, the influence and inspiration of legal doctrine—the legal academic writings—is an important factor, following from its close articulation in the French legal system, especially through the particular genre of the note (a doctrinal comment published together with the judgment) (Lasser 2003: 9–11). The organisation of the deliberation process in the Cour de cassation is similar to the one Latour sees in the Conseil d’Etat. The Cour de cassation judges only take a decision after having taken into account the arguments of the conclusions of the advocate general and the rapport of the reporting judge, both of whom belong to the same category of highly skilled and rigorously trained French magistrats.
204 Serge Gutwirth, Paul De Hert and Laurent De Sutter

For every case, Lasser writes, these two institutional players have the job ‘to research the state of the law and of prior decisions; to canvass the extensive academic literature; and to lay out the social as well as the legal pros and cons of potential judicial solutions, including the one that they eventually propose to their brethren’ (Lasser 2003: 15).11

Lasser’s perspective differs from Latour’s, since the latter strongly emphasises the distinctively legal features of the deliberation process of the Conseil d’Etat. Latour observes the sayings and doings of the players in the Conseil d’Etat precisely in order to inform the particular mode of existence of the law, its peculiar régime d’énonciation, or what Stengers would call the obligations of the legal practitioners. Here, the focus is laid upon the particular mode of hesitation of the judges and the way they take distance from the facts of each case, extracting its legal substance through the operation of qualification. Lasser, by contrast, concentrates his attention upon another aspect of the legal practice of the judges in the Cour de cassation, namely the discreet ways through which they let considerations other than legal ones enter the deliberation, notwithstanding its purely legally framed outcome. Lasser, one could say, addresses the issue of the articulation of the law, on the one hand, and of moral, social, economic and other concerns, on the other. How have those concerns, invisible in the public decision of the Cour de cassation, been knitted into the production of the legal decision? How do the judges of the Cour de cassation deal with these mobilisations? The point where the two authors meet is relevant to us: they both describe a process that deliberately organises and imposes the peculiar legal regime of decision-making.
In both courts the judges are constrained to follow a slow, pre-formatted and temporising route of induced moments of hesitation about the qualification of the facts, its consequences and the need to preserve the continuity and wholeness of the law.

A Principled Legal Detachment (First Consequence)

With reference to Lessig’s concept of regulation, one can draw two conclusions from this short presentation of Isabelle Stengers’ generic concept of a practice and Bruno Latour’s descriptions of the legal practice in particular. The first conclusion concerns law’s exterior. Contrary to what Lessig suggests, legal practitioners must in some respect remain indifferent to external calls. A good legal practitioner has the ability to focus on the legal issues at stake. Latour observes that by ‘qualifying’ the facts, the judges actually get rid of the particularities of each case. The

11 Lasser 2003: 13–14: What is so distinctive about the French judicial system, however, is not only that it possesses two such radically different modes of judicial argument, but that one of them is kept more or less entirely hidden from public view. Only a tiny handful of conclusions and reports are published in any given year, despite the fact that they are produced in every French Cour de cassation case; and even on those extremely rare occasions when they do see the light of day in the court reporters, they tend to be very severely edited. In short, it turns out that the French civil judicial system maintains two radically different modes of argument at the same time: the rigidly syllogistic deductions that are published in the Court’s official judicial decisions, and the stunningly frank and wide-open equity debates over social needs that are hidden within the walls of the Court’s closed chambers.
real legal work, the legal hermeneutics, starts when the facts of the case have been ‘subsumed’ into legal concepts that can trigger the reflexive legal process. The serious questions emerge only after the case at stake has been transformed into legal matter that can be the object of the legal operation. It is not the facts as such that interest the judges, but the way they can legally apprehend or ‘catch’ them.

However, to be indifferent to what is not law does not mean that lawyers generally are contemptuous of what lies out there: it simply means that, when law is concerned, they try not to care for law’s exterior and rather build up the legal distance so essential to the legal operation. Latour observes with what ease first and subsequent drafts are rejected in the production process of the French Conseil d’Etat. The whole process is designed to distance the individual members and collaborators from the interests at stake in a given case. Whereas scientists tend to approach their object of attention as closely as possible and remain strictly bound by what it allows them to think, in law everything is done to construct a solution as far away as possible from the particularities and passions of the case. In science it is always possible to go back to the facts, as science produces robust and reliable knowledge about these facts. In contrast, law does not produce knowledge (but it spins bonds), which explains why qualifications never tell us more, or generate knowledge, about the facts at hand.12

Latour underlines that the interests at stake in the cases, such as the realities of government and the measure of injustice done to the claimant, are far from unknown to the judges. He amply describes in his third chapter how most judges of the French Conseil d’Etat have had years of experience in societal and political life (Latour 2002: 119–138).
The judges’ indifference should therefore not be considered a form of general autism, but a principled form of legal detachment. The case of constitutional lawyers, in this perspective, provides a perfectly good example of this indifference. It has often been said that constitutional lawyers, especially in the United States, should be considered a sort of interface between the realm of law and the realm of politics. However, even in their most activist period—ie the Brown period of the Warren Court—the judges in the Supreme

12 Latour 2004a: 101: Lawyers and scientists are each scandalized by the other’s forms of enunciation. They both speak truth, but each according to a quite different criterion of truth. … Scientists … don’t understand how judges can be content with what is wrapped in their files, or how they can apply the term ‘incontrovertible fact’ to a submission that has been contradicted by a counter-submission. Scientists, by contrast, measure the quality of their referential grip in terms of the mediate character of their instruments and their theories. Without making this long detour, they would have nothing to say other than whatever fell immediately before the senses, which would be of no interest, and would have no value as information. Judges, for their part, hold that the quality of their judgments is closely dependent on their ability to avoid the two hazards of ultra petita and infra petita: that is, issuing a judgment that either goes beyond or falls short of that which the parties have asked for. What seems to judges to be a major failing is considered by scientists to be their greatest strength; yes, they can only attain precision by progressively distancing themselves from direct contact. And that which scientists regard as the greatest defect of law is taken as a compliment by the conseillers: they do indeed stick to what can be elicited from the file, without addition or subtraction.
Here, we have two distinct conceptions of exactitude and talent, or of faithfulness and professionalism.

Court of the United States were well aware of what constitutes law and what constitutes politics. They were very concerned to avoid any political interference with their legal work. What made them good lawyers was not the fact that they acted in conformity with the most progressive political views of their time, but the fact that they acted as lawyers who considered their ‘obligations’ of creativity with as much respect as they did the ‘requirements’ that, as lawyers, they felt they had to obey. The decisions of the Supreme Court Justices during this period were induced not by moral or political ethics but by legal ethics. This is also why they must be considered legal geniuses. If they had merely obeyed a political or moral agenda, their legal creations would have vanished long ago. If those creations lasted, it is only because the judges who followed them were unable to change the legal stream that the former had initiated without betraying themselves as lawyers.13

The foregoing shows that Latour’s analysis of the legal practice of the French Conseil d’Etat can be brought to a larger scale of analysis. In many respects French judges from outside the Conseil d’Etat, and judges from other legal systems, will recognise themselves in the interplay between the requirements and obligations of their colleagues. All legal practitioners will recognise the peculiar processes of detachment and hesitation described, and will demonstrate some kind of legal indifference to the non-legal aspects of the case.14

The Law’s Shallowness (Second Consequence)

The second conclusion that one can draw from Isabelle Stengers’ and Bruno Latour’s work relating to the definition of the legal practice concerns the ‘interior’ of the law.
Law, as an activist practice, is assumed by Lawrence Lessig to adopt the political content of the general regulatory agenda with which it is supposed to comply. Without going so far as to state that law has no content or that judges are merely automata applying legal texts, it is nevertheless closer to reality to accept, and celebrate, that lawyers are mostly interested in the operations

13 Lessig, describing the American legal system, is in many ways sensitive to the obligations of the legal profession, as was shown above when we discussed his ‘harmless activism’. He refutes the popular vision of the Supreme Court of Chief Justice Earl Warren (the ‘Warren Court’) as a wildly activist court that made up constitutional law and imposed its own values on the political and legal system, but acknowledges the duty of courts to respect the principle of interpretive fidelity and to refrain from making, rather than finding, constitutional law (Lessig 1999b: 223 and Lessig 2006: 315).

14 This does not mean that the way a particular law is interpreted may not be a concern for legal practitioners. The way this concern is spelled out depends on the respective legal institution. For instance, the US Supreme Court, when interpreting the US Constitution—which is not a law but a commitment that binds all US citizens—calls on US citizens to recognise themselves as bound by its interpretation. It calls, but has no power to impose. Yet this call constrains them (the judges), just as the concern for the continuity of administrative law constrains the French Conseil d’Etat. These are what we would call the specific obligations characterising those institutions, producing a distinct touch in the ecology of practices.

of law. Imputation, but also qualification, distinction, definition, etc, are such operations. These operations have not much to do with content: their purpose is to articulate content from which they are as distant as possible, if not indifferent to it.
To be a good lawyer, then, is to hesitate in a way that can produce a legally relevant articulation of that kind. That is: to give a legally relevant answer to the question: ‘Does this case stick well with the wholeness of law?’ When a person working as the illustrator-reporter for a gardening journal requests a press card attesting to her status as a professional journalist, and the card is refused by the Commission supérieure de la carte d’identité des journalistes, the judges will consider whether she is, to a sufficient degree, to be considered a professional journalist in the sense of the French law on press cards. We will not learn from the judgment what a ‘journalist’ really is or what ‘sufficient’ stands for (Latour 2002: 245). In the same way, criminal judges qualify facts as crimes (and by this act of qualification ‘catch’ the real facts in the paper reality of the Criminal Code) and then sanction the culprit with reference to a scale of sanctions that hardly has any internal coherence.15 As said before, the law does not produce any information or novelty in the sense of scientific knowledge; rather, it arranges things so as to ensure that the particular facts are just the external occasion for a change which alters only the law itself, and not the facts, about which eventually one can learn nothing more than the name of the claimant (Latour 2002: 248).

A second way of accounting for this characteristic of the legal practice – superficiality, indifference towards facts and disregard of content – is to understand that law is constructivist and performative (Latour 2002: 253). Granted that the majority of cases brought before the administrative judge concern disagreement about facts, and granted that judges, contrary to scientists in a laboratory, must decide a case, judges are endowed with the power to have the last say on the facts, to freeze them, to call the dispute to an end (by an ‘arrêt’) and to decide what they mean or imply under the law.
From Latour’s perspective, the law’s importance, its particular mode of existence, has to do not with any ‘essence of law’, its ‘fundamental values’ or its ‘underlying foundations’, but with its operation and the way it is performed. This is why law is a fabrique, in the two French senses of the word, a ‘fabric’ and a ‘factory’: a delicately woven fabric that binds us together, and a production of those bonds (Latour 2002: 280). If the law is, as it should be, able to intermingle everywhere, it can only be shallow or superficial: it can only connect everything—persons, things, acts, words—because it barely touches what it

15 Dayez 2007 [translated from the French]: In criminal law, as we know, the operation takes place in two stages: the qualification of a fact makes it possible to apprehend it by reducing it to its legal definition (by reducing the originality of the fact to its category). This then makes it possible to sanction it by reference to a scale of penalties whose coherence is purely internal to the system (the penalties being justified only in relation to one another and never, of course, in themselves). … The Code thus creates a sort of world parallel to the real world, and able to stand in for it, so that its habitués move continually within an abstraction without even noticing any longer that it does not coincide with the real.

binds. The law’s shallowness is thus one of its peculiar features, one that adds to its grandeur. No great call or mission, no transcendent discourse, but a multitude of small works: that is the law the anthropologist describes. The law, Latour writes, is not a saviour; it does not humanise, it does not administer, it does not make things easier; no, the law just does not replace anything else (Latour 2002: 292, Latour’s italics).
While producing bonds between humans, between humans and things, between the past and the future, between statutes and a case, the law does not execute directives; it constructs and reconstructs itself steadily in relation to the legislative framework, the cases at stake and the prescribed procedures. Next to its superficiality, the law has its own temporality: judges and lawyers can only proceed slowly, meticulously, along repetitive and coded processes. How to change without betrayal? How to incorporate without contamination? The legal practice always temporises, slows down. Against political urgency it installs its own particular slow, compelling and capillary procedures.

Latour’s Jurists and the Dworkinean Legal Author Writing Successive Chapters

Latour’s description of legal practice has been compared by one author to Dworkin’s famous characterisation of the legal practice as a ‘chain enterprise’ (Weller 2007). In this image of law, interpretation is seen as an extension of an institution and history made up of ‘innumerable decisions, structures, conventions, and practices’ (Dworkin 1982: 193). For Dworkin, judges can be compared to authors consecutively writing the different chapters of a collective novel, each of them obliged both to take into account what the others have already written and to pursue their effort, seeking the highest possible quality of the collective product. From this perspective, judges are subject to a double bind: they must take into account the pre-existing law, but they must also see to its continuity, advance and creativity. Dworkin’s metaphor indeed expresses that the legal practice is neither completely free, unbound and merely dependent upon the judges’ preferences, nor already determined or set into the ‘already given’ or existing law. For Dworkin, judges are constrained by the law ‘already there’, but not to the point that they may not be creative.
In hard cases, they are like a new novelist picking up the thread of what has already been collectively written, but obliged individually to pursue the novel as a collective endeavour, making it as beautiful or challenging as possible. While this metaphor of authors collectively writing successive chapters of a book grasps some of the richness of the analyses of Stengers and Latour, we think one has to be careful in using the Dworkinean apparatus to understand the descriptive work of Latour.

Firstly, we see a problem with Dworkin’s contention that the first author has complete freedom to create, a freedom that indeed appears to be more limited for the successive authors of the legal novel. This contention strongly suggests that law has content, whereas for Latour law is to be understood as an autonomous, shallow system that stretches out all over society and time, and that is capable of linking people, events and rules. Law, then, rather than being formal, creates formality and form (‘met en forme’) (Latour 2002: 288). Perhaps Stanley Fish is closer to Latour when he stresses that all legal practitioners are tied (or not tied) by the same constraints, including the first author, who is responding to some prefixed idea of what law demands of him (Fish 2002). Latour criticises the ‘iusnaturalistic’ and ‘positivistic’ schools of legal thought when he speaks of law’s ‘autochthony’. There is, Latour holds, no outside foundation of the law. Law is and must always be ‘already there’ (be it, respectively, in nature or in the positive legal norms). If the law always brings a case into relation with a whole web of legal relations, it must always be deemed to be already born (Latour 2002: 274–5). This is why the very idea of a first author lacks pertinence from the point of view of any legal practitioner.
Secondly, Dworkin’s right answer thesis matches neither the Stengersian ecology of practices approach nor Latour’s observation of the law’s particular mode of existence. With Stengers we saw that constraints compel practitioners to act, but do not indicate how they must do so. That is why practitioners who are grappling with the tension between their constraints and the mobilisations from the outside (especially in non-routine situations) are materially faced with the question of how to act without betraying. Indeed, there is no underlying, implied or superior right answer to be ‘found’, as all answers need to be invented and constructed. In Latour’s terminology, a right answer would be the answer that results from a production process in which the legal practitioners have hesitated well. Here too, the importance of the materiality of the legal practice is quintessential: the singularity of the legal profession derives from the way legal truths are produced. They are not, as Dworkin suggests, a question of legitimate interpretative methods used in law.16

A Quick ‘Comparative’ Excursion

Undoubtedly, the description of a judicial detachment from facts and contents feeds the idea of the shallowness of the law, and it will further feed the already horrified American comparative fascination with French judicial practice. In recent years, however, authors such as Mitchell Lasser and Michel

16 The same can in many respects be concluded about Neil MacCormick’s work on legal reasoning, which holds that a convincing legal judgement must be consistent and coherent with existing legal rules and principles and have acceptable legal consequences (MacCormick 2005). Although MacCormick rejects the right answer thesis, holding that in every case several good answers are possible, he nevertheless, like Dworkin, sees it as the task of the courts to identify the underlying values and principles of law through an interpretative activity.
Rosenfeld have successfully challenged this traditional portrait in their work on the European Court of Justice, the Cour de cassation and the United States Supreme Court, taking seriously differences in institutional design, style and rhetoric. We have already discussed Lasser’s analysis of the French Cour de cassation and his identification of what he calls its two ‘distinct discursive spheres’. In the American model of judicial discourse, the two modes of argument seem integrated in the same public space, namely in the judicial decision itself, with its long and often very expressive, political, literary and even speculative digressions and, of course, its individually signed opinions, be they majority, dissenting or concurring opinions (Lasser 2003, 2004; Rosenfeld 2006). Both Lasser and Rosenfeld use the French courts and the American courts as extreme ends of a scale that successfully allows for the comparison of other courts.17 On the one hand there is the Cartesian, deductive, syllogistic French style; on the other hand there is the much more dialogical, conversational, analogical and argumentative style of the Supreme Court. At one extreme, French judges ‘speak with one institutional voice and no dissents, whereas the Supreme Court speaks with a multiplicity of individual voices, dissenting opinions, concurring opinions, and, at times, in important constitutional cases, with only a plurality agreeing on the reasons why the winning party is entitled to judgment in her favour’ (Rosenfeld 2006: 635). Indeed, the authors invoke political and historical factors that account for differences in the style, rhetoric and institutional settings of the different courts (including the European Court of Justice, which we have not discussed here).
Naturally, the identified differences can be relativised and contextualised to the point of significantly blurring the stated contrasts between the courts in France and in the United States. Both Lasser and Rosenfeld brilliantly do so. It is, however, not our aim to engage in a discussion about the importance of policy and ethical considerations in American, European and French courts. Nor is it our goal to demonstrate that the traditional portrait of French automata as opposed to American adepts of principle-based theory is probably in need of reconsideration. The question we are exploring is different: why is it so self-evident to accept that what ‘happens’ in all these courts is ‘law’? Why can we say that what happens there belongs to the register of law? What is so peculiar to it that we spontaneously recognise it as law?

Of course, the law participates in other enunciative regimes and/or practices such as politics, science and religion. Therefore, it is certainly legitimate to wonder and inquire how this happens and to what extent. But the mere fact that law is intertwined with, eg, politics cannot contribute to the description of the law’s own regime, because such interlacement is a characteristic of all the régimes

17 On this scale, the European Court of Justice can be understood as an instance of the bifurcated French discursive model, but one that softens it by adopting a systemic, ‘meta’ teleological form of argumentation deployed publicly both in its judicial decisions and in its AG Opinions (Lasser 2003, 2004; Rosenfeld 2006).

d’énonciation or practices in an ecology of practices.18 Hence, yes, the law is caught in a web of interactions in which all kinds of other practices and practitioners must find mutually fertile forms of articulation.
It belongs to Latour’s and Stengers’ explicit endeavours to explore how these different ‘modes of existence’ (as Latour calls them in a still unpublished work) enter into ‘diplomacy’ so as to construct a common world, in a newly conceptualised—unKantian—‘cosmopolitical proposal’ (see also Latour 2003; Stengers 1997 and 2005; Gutwirth 2004). Their attempt is to revive who we—Westerners—are (beyond generalist categorisations and easy dualisms) by rediscovering the distinct modes of existence and practices that lie at the heart of our societies, such as the sciences, the market, politics, religion and, of course, law (Latour 2002: 265; Latour 2005: 232–41). This will undoubtedly slow down the pace of our reasoning and ‘create an opportunity to arouse a slightly different awareness of the situations and problems mobilizing us’ (Stengers 2005: 994).

One can still raise the question whether it is possible to extract the distinctive mode of existence (or the constraints) of the law from an ethnographic study of the French Conseil d’Etat or, in other words, whether Latour’s descriptions can be considered generically valid. Is the mode of legal hesitation relevant to ‘Western law’ as a whole? We believe it is, and we hope we have already started to show why. The idea might be hard to accept, given the many differences in form and substance that comparative lawyers have identified among the many Western legal systems. But the fact is that there is not really a debate or controversy about what must be studied under the denominator of ‘law’ in comparative law, just as, as we have already noted, no one questions that it is law that happens in the courts of Germany, France, Belgium, England and the United States.
Indeed, if we look at the peculiar traits of the law as described by Latour, it is hard to contest their existence on a more generic scale.19 Undoubtedly, Latour has been one of the first to try to extract the law’s irreducible spinal marrow through an ethnographic study, next to his parallel undertakings to identify the other irreducible modes of existence, such as the sciences, politics and religion. Taking these irreducible characteristics of the law seriously makes it difficult to mobilise the law in a vast programme of regulation.

18 Latour (2004b: 35) [translated from the French]: That institutions such as Science, Religion and Law are endlessly intermingled, like the veined marbles of San Marco in which no figure is clearly recognisable, is granted … But the question of their truth and of their felicity conditions is not thereby resolved, for there is always one particular regime that plays the dominant role and that allows me to say that in the Conseil d’Etat (the example I had chosen), the true and the false are decided juridically in a way that is clearly not religious or scientific or technical or political.

19 Latour 2004b: 36 [translated from the French]: The very notion of procedure; the summons; the signature and its so particular ‘tremor’, since it jumps precisely across the division between planes of enunciation; imputation; the link between text and case (‘journalist within the meaning of article 123 of the code’); and even very classical elements of law such as responsibility (‘this person is the author of this act’), authority (‘this figure is indeed empowered to sign the acts’) and property (‘this person indeed has title to hold this land’).
The Modesty of the Legal Profession in the Microsoft Case

Let us return to Latour’s observations about the role of the facts and the particularities in the legal practice: the facts of a case are what the judges want to get rid of in order to pass to the legal work and its operations (qualification, distinction, definition, imputation). Legal practitioners are humble, in a certain sense: if they are not particularly drawn to content issues, it is because their interest lies, and must lie, in the legal operations, which in turn is a consequence of their responsibility for maintaining the continuity of law, or ‘legal certainty’. And to produce legal certainty one does not need a definition of law, or a content of law; one only needs the means to ensure that the show will go on. Indeed, there is something of a theatre technician in every good lawyer. To criticise or to praise the show is not his business; neither is the task of producing it or directing it.

However, the lawyers’ lack of interest in what is not law does not imply that they express contempt for it. It only means that when they are practising law they do not have to care for anything other than their practice. Of course, as everybody knows, lawyers are often called upon when political or economic problems are at stake. But in such cases they are not called upon to solve these political or economic problems as such. What they are called upon for is to give their opinion, as legal specialists, concerning the way political or economic solutions to political or economic problems can fit into the picture of law. The question that lawyers are asked to answer is always the same: ‘Will it fit?’ And if the answer is ‘No’, it is not their job to make it fit. However, of course, they may answer: ‘We can try’, and then use their legal creativity to make it fit.
In this case, it is necessary for them to have the intuition that such a solution could fit: this solution should have whetted their legal appetite; that is, it should have made them begin to hesitate. When Lessig evokes the regulatory power of law, what his argument lacks is precisely this whetting of the lawyers’ appetite. How could a lawyer hesitate if somebody has already decided which answer he will give to questions that he doesn’t yet know about? How could lawyers contribute to the construction of an optimal mix when they don’t have any case at hand? For lawyers to hesitate, it would first be necessary to define a regulatory problem—and then to check whether this problem has legal consequences. How could lawyers know about it in advance? Isn’t it clear that, on the contrary, lawyers only arrive when it is too late? One cannot hesitate in advance: in advance one can only hazard. It is already difficult enough to defend a hazardous political programme; imagine how difficult it must be for people who do not have any programme to defend, but only a practice to cherish! For them, to regulate is something far too heavy for their shoulders. A good example of the modesty of lawyers can be found in the recent court trials that opposed Microsoft to the United States, and then to Europe. In both cases, the reason why Microsoft was prosecuted was the same: the attempt to build upon the dominant position of its operating system in the computer world to also impose Windows Media Player. In the United States, however, Microsoft did not get into serious trouble. After a harsh decision taken at first instance by a District Court, the Court of Appeals rejected most of the accusations directed towards Microsoft, with the exception of minor points. In Europe, the situation of Microsoft seems to be more delicate.
When the European Commission decided to take action against Microsoft, this action was presented as grounded on the infringement of Article 82 of the EC Treaty, that is, on the fact that Microsoft was building a problematic monopoly, namely by rendering competition concerning media players in the computer market almost impossible. For analysts, this case featured the same problem as the American one: the problem of interoperability between computer systems (First 2006). This problem, according to them, was at the core of the possibility of implementing the information society. Without interoperability, information will not be able to travel as easily as it should in order to reach a true information society. Indeed, this is a political problem. But for the lawyers in the different US courts, as for the lawyers in the European Commission and the judges of the Grand Chamber of the European Court of First Instance (17 September 2007, Microsoft v Commission, Case T–201/04), interoperability was not the major problem, nor did it seem a particular issue for the judges. What interested the lawyers and judges in this case, what triggered their legal work, were the points of law concerning, respectively, US anti-trust legislation, European competition law and other requirements such as coherence (Bertea 2005).20 In other words, although the decision of the European Court has important political and economic repercussions, the judges actually did not do anything other than apply their legal skills to a number of articles of the EC Treaty dealing with competition issues and the abuse of a dominant position. The hot economic and political debate about interoperability was not present in the judgment: their question was ‘How do we qualify the case in terms of the pre-existing law on competition?’ The Microsoft case can be presented as a good illustration of the distance of legal practice from technology policy.
Moreover, it can also be presented as a good illustration of the trouble we have with Lawrence Lessig’s argument concerning the regulatory role of law. For the lawyers in the various US courts and the European Court of First Instance, the Microsoft case was business as usual. The question then is: why would we need to regulate something that seems to be ‘regulated’ very well by already existing legal provisions? If the political problem of interoperability can be legally tackled by very traditional legal operations, how then might legal practitioners find themselves constrained to be creative as a result of external objectives of governance (or ‘regulation’)? The only way we see such a ‘turn’ occurring is through a change to the legislative framework, which indeed takes place in the political mode, far from the particular constraints of legal practitioners. This is nicely illustrated by the following story. 20 ‘The ECJ’s interpretive technique is therefore oriented primarily towards developing a proper legal order, namely, one that would be sufficiently certain, uniform and effective’ (Lasser, 2003: 54). For an account of the specific obligations and requirements of the European Court of Justice, the systematic and purposive character of its interpretative techniques, and its use in particular of the principle of effectiveness, see also Jacobs (2004). For an account of the specific obligations and requirements of EU policymaking regarding anti-terrorism, constraints that cloud every straightforward instrumental analysis, see Levi and Wall (2004: 217).
In 1998, a report from the French Conseil d’Etat regarding the challenge posed to the law by ‘Internet and Digital Networks’ concluded that no change should be made to the existing legislation in order to deal with this challenge: ‘The whole legislation is applicable to the Internet […] There is no need for a specific regulation for the Internet and digital networks’ (Coudert, Debet and De Hert 2007). Is this conclusion not significant? When lawyers are directly asked to answer a political question—ie the necessity of changing the legislation—they will never reply with a political answer; they will reply with a legal answer. ‘No’, they will say, ‘it is not necessary to change the legislation; if a problem occurs, we will apply what already exists. If you, politicians, want to transform the legislation, this is your problem’. And indeed it was a bit stupid to ask lawyers for their opinion about the emergence of a new technology that had caused no problem yet. How could they imagine any problems happening? The only problems that lawyers can foresee concern law: they can foresee legal contradictions or legal inconsistencies. But they cannot foresee facts. When the professionals of the law are invited to sit down at the table of the regulators, one may be sure either that they will be very recalcitrant and of little help, or that they will turn into something other than legal practitioners in order to be helpful. Why Insist on the Need for Regulation? Why then does Lessig insist on the regulatory role of law? When he asks lawyers to behave like activists, would he perhaps like them to act as spokesmen for his own activism? In other words, is not the concept of regulation essentially a political concept, in the sense that it is a tool used in order to achieve a certain end?
Surely, when Lessig presents a picture of regulation involving an optimal mix of law, technology, economy and social norms, he intends to embed this regulation in a more solid and encompassing foundation, making it the product of a kind of ‘holding’ of practices and modes of existence. Unfortunately, and at least from the point of view of the law, such a foundational exercise is problematic, if not impossible, because it demands that the law come and help in the realisation of the political ends fixed to the idea of regulation, which cannot but clash with the constraints and thus the resistance of legal professionals. If, which is unlikely, Lessig’s proposal did not imply the former, but only consisted in saying that technology, economy, law and social norms should produce some effects rather than others, it would indeed be a rather banal proposal. Lessig expects some specific effects from the concept of regulation itself—and hence not from law, economy, technology and social norms as such. He expects his regulation to organise the resistance against both technological libertarianism and technological totalitarianism. As such his argument is a very noble one: on the one hand he wants to defend new technologies against their outer enemies; on the other hand he wants to defend them against their inner enemies. What he wants is a balanced world of technology. The problem, however, is that in trying to help build a balanced world of technology, Lessig attaches too great an importance to the control—both inner and outer—of his world. He does not give a chance to the unexpected possibilities that can emerge from the development of the new technologies that he wants to regulate. Neither does he give a chance to the unexpected creativity of the other practices that will come to grips with these technologies. Conclusions By definition, regulation comes from above, or at least from somewhere else.
It imposes itself from the outside. It aims at conducting and constraining behaviour: in Lessig’s perspective, the behaviour of the actors that make cyberspace exist. As a consequence, the notion of regulation makes it impossible to think about the relationships between the regulatory system and what it regulates in terms other than those of compliance or ‘application’. Conducting and guiding behaviour through regulation also implies that there is an end or an objective to realise, regulation of behaviour without an aim being pointless. Lessig sees four tools or modalities of regulation: law, social norms, markets and technology (architecture or code). To him, optimal regulation can be obtained by an optimal articulation of these modalities or by their optimal tuning to realise the ends to be reached. From that perspective the four named modalities are to be considered as instruments of the regulation, and they have to accept that they are instrumental to it and its objectives. For sure, this will not be self-evident, but Lessig, seeing some possible problems, also sees possible responses. Nevertheless, to him the problems lie with the modalities and not with the regulation itself. In this contribution we have tried to show that those problems are important and persistent as regards the law, especially if we make the effort to take seriously the constraints of legal practitioners (Stengers) and the particular régime d’énonciation or mode of existence of the law (Latour). When mobilised or appealed to by the ‘outside world’, judges and other legal practitioners are not free to do what they want if they take their job seriously. Neither will they be disposed to betray what makes them legal practitioners. Their ‘internal’ constraints heavily affect the way they can deal with ‘external’ mobilisations.
This is not to say that change and innovation are unthinkable in the law, but rather that innovation and change within the law are only thinkable if the constraints of the legal practice are fulfilled in the eyes of the legal practitioners. The process of renewal in the law is by definition slow and temporising because, in our societies, the law must pursue its meticulous and precious task of weaving legal 216 Serge Gutwirth, Paul De Hert and Laurent De Sutter bonds between the past and the future, between people, things and words, and between the case at hand and the totality of the existing law. Constrained by their procedures, the processes of hesitation and the generation of ‘objectivity’ through distance, legal practitioners must remain indifferent to the outside storms and urgencies. They must construct the law in the interplay between their internal obligations and requirements and the external mobilisations they are confronted with. The law has its own pace, and that is why the West has celebrated it since its earliest times, regardless of the many different political regimes it has gone through. Expecting legal practitioners to behave merely like tools or modalities of an external regulation can be insulting to them. Regulatory aims and regulation can only be proposed to them, not imposed. One must take into account the law’s own dynamics, the law’s own devenir. The trouble with regulation is certainly not only legal. As a matter of fact, the mere idea of regulation implies a form of top-down government, which raises the question of who is sitting at the top. Obviously, governments and legislators are—and thus politics. Although this was not the main point of our contribution, we are convinced that the difficulties with regulation extend much further than to law.
We are convinced that regulation does not give more chances to technology, social norms or markets than it does to law, because in Lessig’s argument all four of them are considered as rather passive forces at the service of politics and governance. As everybody knows, the road to hell is paved with good intentions. Regulation is one of those good intentions. However, by criticising Lessig’s concept of regulation, we would not want to give the impression that his argument is a threat that we should all fight against. Instead, what we wanted to express was a disappointment with a position that now dominates the legal discussions around new technologies—while at the same time rendering it impossible to go further. We do not consider regulation a terminus. On the contrary, we rather see it as a point from which to start in order to build a more interesting legal appreciation of the emergence of new technologies. At the end of the present paper, it is no mystery that we would see this legal appreciation formulated in the terms of the legal practice itself, rather than in the terms of what, for lack of better words, we are forced to qualify as political science. We trust that asking the lawyers themselves how they deal with new technologies will always be more interesting and more enlightening than defining some very sophisticated programme, however balanced and nuanced it might be, in order to prevent their escape. Should it not have been obvious from the start that lawyers do not like programmes, but prefer cases? To concentrate on cases rather than on programmes is, in our opinion, the only way to recall that it is only if we let new technologies develop to the point where they become actually problematic that lawyers can intervene and add their own appreciation to the picture. Is it more risky to wait than to regulate? Of course! But a risk is always worth taking.
To be afraid is never a solution: it can only lead to defiance, tension or contempt. We believe that this is not what Lawrence Lessig really wants. References Bertea, Stefano (2005) ‘Looking for Coherence within the European Community’ 11 (2) European Law Journal 154–72. Brownsword, Roger (2005) ‘Code, Control and Choice: Why East is East and West is West’ 25 (1) Legal Studies 1–21. —— (2007) ‘Red Lights and Rogues: Regulating Human Genetics’ in Han Somsen (ed), The Regulatory Challenge of Biotechnology: Human Genetics, Food and Patents (Cheltenham, Edward Elgar Publishing) 39–62. Coudert, F, Debet, A and De Hert, P (2007) ‘Constitutional Rights and New Technologies in France’ in R Leenes, EJ Koops and P De Hert (eds), Constitutional Rights and New Technologies: A Comparative Study, Information Technology and Law Series vol 15 (The Hague, TMC Asser Press) 95–136. Dayez, Bruno (2007) ‘Le mythe de la justice juste’ 6265 Journal des Tribunaux 320–21. Dworkin, Ronald (1982) ‘Law as Interpretation’ 9 (1) Critical Inquiry: The Politics of Interpretation 179–200. Easterbrook, Frank H (1996) ‘Cyberspace and the Law of the Horse’ University of Chicago Legal Forum 201. First, Harry (2006) ‘Microsoft and the Evolution of the Intellectual Property Concept’ Wisconsin Law Review 1369–432, available as New York University Law and Economics Working Papers, Paper 74, accessed 10 June 2008. Fish, Stanley (1982) ‘Working on the Chain Gang: Interpretation in the Law and in Literary Criticism’ 9 (1) Critical Inquiry: The Politics of Interpretation 201–16. Gutwirth, Serge (2004) ‘Le cosmopolitique, le droit et les choses’ in Frédéric Audren and Laurent De Sutter (eds), Pratiques cosmopolitiques du droit, Cosmopolitiques: Cahiers théoriques pour l’écologie politique no 8 (Paris, L’Aube) 77–88. Ignatieff, Michael ‘I got it wrong on Iraq’, Guardian online (13 August 2007), available at accessed 10 June 2008.
Jacobs, Francis G (2004) ‘The Evolution of the European Legal Order’ 41 Common Market Law Review 303–16. Lasser, Mitchell (1995) ‘Judicial (Self-)Portraits: Judicial Discourse in the French Legal System’ 104 Yale Law Journal 1325 ff. —— (2003) ‘Anticipating Three Models of Judicial Control, Debate and Legitimacy: The European Court of Justice, the Cour de cassation and the United States Supreme Court’ Jean Monnet Working Paper no 1/03, available at accessed 10 June 2008. —— (2004) Judicial Deliberations: A Comparative Analysis of Judicial Transparency and Legitimacy (Oxford, Oxford University Press). Latour, Bruno (2002) La fabrique du droit: Une ethnographie du Conseil d’Etat (Paris, La Découverte). —— (2003) Un monde pluriel mais commun: Entretiens avec François Ewald (Paris, Editions de l’Aube). —— (2004a) ‘Scientific Objects and Legal Objectivity’ in Alain Pottage and Martha Mundy (eds), Law, Anthropology and the Constitution of the Social: Making Persons and Things (Cambridge, Cambridge University Press) 73–114 (translation of the fifth chapter of Latour 2002, pp 207–59). —— (2004b) ‘Note brève sur l’écologie du droit saisie comme énonciation’ in Frédéric Audren and Laurent De Sutter (eds), Pratiques cosmopolitiques du droit, Cosmopolitiques: Cahiers théoriques pour l’écologie politique no 8 (Paris, L’Aube) 34–40. —— (2005) Reassembling the Social: An Introduction to Actor-Network Theory (Oxford, Oxford University Press). Lessig, Lawrence (1999a) ‘The Law of the Horse: What Cyberlaw Might Teach’ 113 Harvard Law Review 501–46. —— (1999b) Code and Other Laws of Cyberspace (New York, Basic Books). —— (2006) Code version 2.0 (New York, Basic Books), available at accessed 10 June 2008. Levi, Michael and Wall, David S (2004) ‘Technologies, Security, and Privacy in the Post-9/11 European Information Society’ 31 (2) Journal of Law and Society 194–220. MacCormick, Neil (2005) Rhetoric and the Rule of Law:
A Theory of Legal Reasoning (Oxford, Oxford University Press). Reidenberg, Joel (1997) ‘Governing Networks and Rule-Making in Cyberspace’ in B Kahin and C Nesson (eds), Borders in Cyberspace: Information Policy and the Global Information Infrastructure (Cambridge, MA, The MIT Press). —— (1998) ‘Lex Informatica: The Formulation of Information Policy Rules Through Technology’ 76 Texas Law Review 552–93. Rosenfeld, Michel (2006) ‘Comparing Constitutional Review by the European Court of Justice and the US Supreme Court’ 4 (4) International Journal of Constitutional Law 618–51. Rotenberg, Boris (2005) ‘The Legal Regulation of Software Interoperability in the EU’ Jean Monnet Working Paper no 7/05, available at accessed 10 June 2008. Stengers, Isabelle (1996) Cosmopolitiques, Tome 1: La guerre des sciences (Paris, La Découverte/Les Empêcheurs de penser en rond). —— (1997) Cosmopolitiques, Tome 7: Pour en finir avec la tolérance (Paris, La Découverte/Les Empêcheurs de penser en rond). —— (2005) ‘The Cosmopolitical Proposal’ in Bruno Latour and Peter Weibel (eds), Making Things Public: Atmospheres of Democracy (Karlsruhe, ZKM-Zentrum für Kunst und Medientechnologie; Cambridge, MA, The MIT Press) 994–1003. Weller, Jean-Marc (2007) ‘La disparition des bœufs du Père Verdon: Travail administratif ordinaire et statut de la signification’ 67 Droit et Société 713–40. Part Two Technology as a Regulatory Target 10 Cloning Trojan Horses: Precautionary Regulation of Reproductive Technologies HAN SOMSEN* I. Introduction Biotechnology offers modern societies tools to reprogramme existing life, or even to create entirely new life forms. Combined with the exponential growth of knowledge about the human genome, advances in biotechnology now raise the plausible prospect of a future in which humans live (much) longer and, above all, healthier and happier lives.
In that future world, happiness is most likely to manifest itself in all the familiar ways: the birth of a healthy child, a mother’s sense of wonderment after having been outwitted by her 10-year-old at a game of chess, the enjoyment of an inspired musical performance, or simply the faint smell of freshly cut grass after a summer’s day. Technological constraints apart, whether or not this happy future will materialise logically depends on, first, present choices about the pursuit of such a future and, second, regulators’ effectiveness in translating those choices into regulatory regimes. The focus of this paper is ultimately directed towards the first of these contingencies; it concerns the fundamental question of whether we should allow the use of human genetics in the pursuit of individual happiness and, more particularly, whether the so-called ‘precautionary principle’ has a role to play in pointing us in the direction of answers to that question.1 This question will prove rather a lot more difficult to answer than the promise of hope painted in the introductory paragraph would suggest. Difficulties first of all stem from the fact that all the mundane examples of happiness cited may be the result of interventions that constitute ‘human * The author wishes to express his gratitude to his colleagues at TILT and Floor Fleurke (University of Amsterdam) for taking time to comment on this paper. Responsibility for mistakes remains exclusively with the author. 1 It is presumed that this question arises in the context of a liberal democracy, where notions of the good life are uniquely individual. In respect of reproductive technologies, ie technologies that allow couples the choice to procreate, that presumption is not hard to sustain.
enhancements’ in one way or another: preimplantation genetic diagnosis (PGD) for couples with genetic disorders, cognitive enhancement for the mentally disadvantaged, or improved physical capabilities through genetic manipulation. Against the prospect of widespread or even small-scale use of human enhancement, many ethical objections are routinely voiced which, as Harris eloquently and persuasively shows, upon closer scrutiny mostly lack robustness and persuasiveness.2 Objections against human enhancement that are based on ‘risk’, in contrast, enjoy prima facie credibility: as a matter of both public and private policy we should evidently shy away from anything that is certain to harm ourselves or others. Yet the specialised and highly complex field of risk regulation opens up a muddle of a different kind. This is because in contemporary, highly complex scientific and social settings ‘certainty’ about the impacts of action x, y or z is increasingly exceptional, so that more and more often decisions need to be taken under circumstances of scientific uncertainty or even ignorance.3 Funtowicz refers to this state as ‘post-normal science’, when ‘typically facts are uncertain, values in dispute, stakes high, and decisions urgent’.4 Indeed, restrictive environmental regulation has been and will continue to be adopted at short notice to protect the environment against, for instance, new chemical compounds, notwithstanding ambiguous evidence about their impacts.5 Such regulatory regimes often are the product of applications of the precautionary principle. In its ideal form, the precautionary principle offers a rational and scientifically sound model for managing risk in circumstances of scientific uncertainty or ignorance.
Despite decades of intense institutional and academic debate, what this ideal model exactly amounts to remains obscure, and I do not sufficiently lack in modesty to believe that this paper could change that unsatisfactory state of affairs. What I do believe is that a persuasive argument can be constructed that, save in a limited number of exceptional cases, precaution must as a general rule not become part of the tools that help us decide whether individuals should be allowed to pursue their individual notion of the good life through human enhancement. 2 J Harris, Enhancing Evolution (Princeton, NJ, and Oxford, Princeton University Press, 2007). 3 ‘Uncertainty’ refers to a situation under which it is possible to define all possible outcomes, but where there is no basis for the confident assigning of probabilities. ‘Ignorance’ refers to a situation under which it is possible neither to assign probabilities nor even to define all possible outcomes. From the Glossary in A Stirling, On Science and Precaution in the Management of Technological Risk, Final Report of a project for the EC Forward Studies Unit under the auspices of the ESTO Network. 4 SO Funtowicz, ‘Post-Normal Science: Science and Governance under Conditions of Complexity’ in M Tallacchini and R Doubleday (eds), Science Policy and the Law: Relationships Among Institutions, Experts, and the Public, Notizie di Politea, vol XVII, 62, pp 77–85. 5 Reg (EC) No 1907/2006 concerning the Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH), [2006] OJ L396/1. See Art 1(3): ‘[t]his Regulation is based on the principle that it is for manufacturers, importers and downstream users to ensure that they manufacture, place on the market or use such substances that do not adversely affect human health or the environment. Its provisions are underpinned by the precautionary principle.’
That argument will take the form of a dual challenge to precautionary regulation of reproductive technologies. The first challenge is conceptual and theoretical, and is articulated in section II. In that section, I first dissect the different roles of precaution,6 a distinction that builds upon the classical division of risk analysis7 into risk assessment8 and subsequent risk management.9 For the purposes of this paper, it will not be necessary to discuss issues pertaining to risk communication.10 I will show that precaution may, procedurally, serve deliberation (in risk management) and fact-finding (in the preceding risk assessment), or, substantively, enable regulators to channel regulatory tilt towards constraints on new technologies (ie in risk management). Applying these insights to the regulatory space of reproductive technologies, I will suggest that most justifications for embracing use of the precautionary principle in this novel field of application are tenuous at best. The second charge against application of the precautionary principle to reproductive technologies is predominantly empirical (section III). As a minimum baseline, I will argue that for precaution to be a morally and legally acceptable principle, its application should give rise to similar regulatory impacts (ie direct us towards permissiveness or rather towards constraints) in similar circumstances. Two case studies that offer such similar circumstances will provide additional persuasive evidence that, besides the conceptual difficulties with its application articulated in section II, precaution also falls well short of that baseline. Finally, building upon these findings, section IV contains a warning against the elevation of precaution to a general principle pertaining to the regulation of human genetics.
6 The idea that precaution fulfils such different roles was recognised early by the Commission of the European Communities in its Communication on Precaution, where it observes: An analysis of the precautionary principle reveals two quite distinct aspects: (i) the political decision to act or not to act as such, which is linked to the factors triggering recourse to the precautionary principle; (ii) in the affirmative, how to act, ie the measures resulting from application of the precautionary principle. See COM (2000) 1, 13. 7 For definitions of these terms, I draw on the work of the Codex Alimentarius Commission. Risk analysis is defined as ‘A process consisting of three components: risk assessment, risk management and risk communication’. 8 Risk assessment is a scientifically based process consisting of the following steps: (i) hazard identification, (ii) hazard characterisation, (iii) exposure assessment, and (iv) risk characterisation. Ibid. 9 Risk management is: The process, distinct from risk assessment, of weighing policy alternatives, in consultation with all interested parties, considering risk assessment and other factors relevant for the health protection of consumers and for the promotion of fair trade practices, and, if needed, selecting appropriate prevention and control options. (Ibid). 10 Risk communication is: The interactive exchange of information and opinions throughout the risk analysis process concerning hazards and risks, risk-related factors and risk perceptions, among risk assessors, risk managers, consumers, industry, the academic community and other interested parties, including the explanation of risk assessment findings and the basis of risk management decisions. (Ibid). II. Origins and Rise of the Precautionary State A.
Context-Specific Justifications for Environmental Precaution Although we should avoid waxing dogmatic on the proper interpretation of the precautionary principle, some illumination of its core is inescapable if we are to understand its development from a uniquely environmental regulatory tool into a principle that is increasingly invoked in many different contexts of uncertainty, which most definitely include reproductive technologies.11 Numerous articulations of the precautionary principle are in simultaneous circulation, but Principle 15 of the Rio Declaration is generally deemed to offer an important and fairly representative example of its original meaning:12 In order to protect the environment, the precautionary approach shall be widely used by States according to their capabilities. Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation.13 The essence of the precautionary principle (in Principle 15 interestingly termed a ‘precautionary approach’)14 in this formulation appears to reside in its enabling nature: it allows public bodies to take preventive action to avoid threats of serious or irreversible damage in cases where they otherwise could not. Thus, where lack of full scientific certainty about risk would normally prevent states from limiting or outlawing certain activities, precaution helps lower this threshold to the point that proportional regulatory action can be taken. Despite having received more than its fair share of academic attention, many vital questions pertaining to the scope of precaution remain unanswered, and there is little prospect that this situation is about to change.
Crucial among these unresolved issues are the minimum degree of seriousness or irreversibility of the damage, the nature of the scientific uncertainty that is required before precaution is triggered to take ‘cost-effective’ action, whether risks associated with the status quo must be taken into the equation, and whether precaution implies an obligation to conduct follow-up research aimed at ending the state of scientific uncertainty. 11 Within the EU, the Court of First Instance (CFI) has suggested that precaution applies to all EU policies, referring to it as a ‘central plank’ of Community policy; Case T–70/99 Alpharma Inc v Council [2002] ECR II–3495, para 135. 12 Legal articulations of precaution were first found in German environmental legislation. See S Marr and A Schwermer, ‘The Precautionary Principle in German Environmental Law’ (2004) 3 Yearbook of European Environmental Law 125–48. 13 Rio Declaration on Environment and Development (Rio de Janeiro, 3–14 June 1992) 31 International Legal Materials 874. 14 Some scholars insist on the importance of distinguishing the precautionary approach from the precautionary principle. The drafting of Principle 15, universally regarded as one of the first global manifestations of the precautionary principle, does not bear out this distinction. Be that as it may, in this paper I will not insist on explicit invocation of the precautionary principle as evidence for its use. This is consistent with the way in which the principle has been applied in, for example, Australia, where courts have not shied away from its application even in cases where it is not explicitly mentioned in legislation. In such cases, precaution has been referred to as a ‘common sense’ principle, a principle of ‘common law’ or a mixture of these. See E Fisher, ‘Is the Precautionary Principle Justiciable?’ (2001) 13 Journal of Environmental Law 315–34, esp n 91.
Undisputedly, therefore, the arrival of precaution in the arena of risk regulation has introduced a strong potential for arbitrariness, and this potential has materialised all too often. Where arbitrary precautionary measures fall within the sphere of EU or WTO law, the CFI and the European Court of Justice (ECJ) and, respectively, WTO dispute settlement panels and the Appellate Body enjoy jurisdiction to discipline the principle. Given the potential seriousness of the implications of frivolous precautionary regulation, judicial review is its indispensable and crucial counterpart. Leaving the principles emanating from this case law for what they are,15 what matters for now is that judicial review is not only necessary for the protection of individual rights, but also a prerequisite for the evolution of precaution towards its ideal form. Provided that those judicial checks and balances are in place, there are numerous persuasive reasons for embracing precautionary environmental regulation. This is because, despite its obvious imperfections, as long as precaution is used for environmental regulation the principle helps redress some very clear and serious imbalances that ultimately undermine mankind’s chances of survival. First, there exists a worrisome imbalance between what we can do and what at present we can understand. Environmental precaution is a principle appropriate for a civilization whose technological and economic capabilities are out of sync with its scientific grasp of the ecological impacts of those capabilities. Crucial amongst the known unknowns that justify environmental precaution is the nature of the ecological thresholds that trigger sudden global environmental catastrophes, eg the increasingly rapid disappearance of polar ice that in turn may reverse the Gulf Stream.
Second, we must acknowledge the disequilibrium between those who have a legal interest and voice in pursuing some industrial activity and the environment, which does not possess legal personality and is not easily represented in the law, even if the law provides for access to justice for environmental non-governmental organisations (NGOs). Third, there are obvious conflicts of interest between present and future generations. The costs of irreversible harm to the environment are by definition borne by future generations, who are not represented in political and legal processes. Precaution gives future generations a voice in those processes. Some, but not all, of the above imbalances exist in other policy fields where precaution is now surfacing. For human genetics, the first imbalance between human capabilities to act and to understand clearly applies, but it is not self-evident that critical thresholds exist in the way exemplified above. Neither, in the main, is the

15 For an overview of case law regarding the precautionary principle, see N de Sadeleer, ‘The Precautionary Principle in EC Health and Environmental Law’ (2006) 12 European Law Journal 139–72.

226 Han Somsen

survival of future generations at stake. As for the protection of future generations: parents routinely decide unilaterally what is good for their children, often before birth, and there is no obvious reason to impose additional constraints on that parental prerogative when it comes to the use of modern reproductive technologies. Finally, informed consent serves to redress any imbalance between those having a legal interest in pursuing the activity (doctors, researchers, patent applicants, etc) and the targets of those activities (patients and research subjects). For all these reasons, calls for precautionary constraints on genetic technologies are much less persuasive, and the consequences of inevitable, even if only occasional, arbitrariness much harder to justify.
In view of these reservations against precautionary creep into more and more policy fields besides environmental policy, it becomes important to understand the rise of the precautionary state. Because risk is a sociological and psychological construct,16 and hence can easily be manipulated, explanations may range from the sociological, the psychological and the cultural to the political. This is a fact well understood by van den Daele.17 The precautionary principle, he compellingly argues, is not a legal principle aimed at risk prevention, but has become a political tool allowing states to exercise control over the direction of technological innovation. There is an abundance of empirical evidence to support this thesis, including the regulatory history of Novartis’ Bt-176 maize and the EU moratorium on genetically modified organisms (GMOs) more generally.18 Scientific evidence regarding the risks of GMOs resulting in proposals to allow the cultivation of GM crops or the marketing of GM produce has routinely been disregarded by the executive, clearly for reasons that have little to do with risk and everything to do with (manipulated) public opinion. Precaution in those cases serves as an alibi to duck otherwise manifest and inescapable charges of breaches of constitutional law.19 Indeed, whereas globalization irreversibly eats away at states’ regulatory powers, risk regulation invariably requires strong state involvement.20 By emphasising risk and embracing precaution, states have hence mobilised forces that partly offset those losses of regulatory control. After all, the essence of precaution is to enable government to take action where it otherwise could not. It makes sense, therefore, that the principle should have become an attractive proposition for legislators who would otherwise lack justification to exercise their regulatory powers.
This applies to the European Commission, for which the precautionary principle has been a vehicle allowing it gradually to occupy a number of new key policy fields, but also

16 P Slovic, The Perception of Risk (London, Earthscan, 2000).
17 W van den Daele, ‘Legal Framework and Political Strategy in Dealing with the Risks of New Technology: The Two Faces of the Precautionary Principle’ in H Somsen (ed), The Regulatory Challenge of Biotechnology (Cheltenham, Edward Elgar, 2007) 118–38.
18 Ibid 126.
19 Ibid 128. According to the German Basic Law, Art 20(3), the executive is bound by law and justice. Similar provisions feature in other European constitutions, as well as in the EC Treaty.
20 N Gunningham, ‘Regulating Biotechnology: Lessons from Environmental Policy’ in Somsen, above n 17, at 3–18.

to national regulatory agencies that traditionally have been excluded from clinical arenas in which they might have a (political, religious or ethical) interest. This takes us to the next part of this paper, which distinguishes the ways in which the precautionary principle can conceivably operate in the arena of human genetics, and human reproductive technologies in particular.

B. Fact-Finding Precaution

Conventionally, precaution is predominantly associated with risk management in cases where preceding risk assessments have to be conducted under circumstances of scientific uncertainty. In such cases, where ‘science’ alone cannot give conclusive evidence as to the risks that should be associated with any given technology, there are no good reasons to exclude other sources of discourse, eg ethics, religion, etc.
There are some increasingly strong voices, however, that advocate extending precaution to the risk assessment phase of risk analysis which, as is evidenced by legal articulations of risk assessment procedures, traditionally has been the preserve of the science designed to expose the facts about adverse effects, their likelihood and their impacts.21 Yet, to say that precaution rationally stimulates deliberation in risk management under circumstances of scientific uncertainty is evidently quite different from implying that precaution in risk assessment results in more accurate decisions (ie less uncertainty, for example, in respect of the question whether ‘saviour siblings’ in later life will more often be lacking in self-respect than naturally conceived donors). When it comes to arriving at accurate decisions (finding the factual basis of risk, or of anything else for that matter), which is what risk assessments are about, the pedigree of deliberation is rather less compelling. In fact, as Sunstein argues, not only is there no evidence that deliberation between opposing perspectives more often than not results in establishing the truth, it often does not even lead to accurate use of the available information, nor can it be shown to establish truth more often than non-deliberative models of decision-making do.22 This means that precautionary deliberation to reveal truths, what I will term ‘fact-finding precaution’, is a risky idea.
In line with European and international case law, I therefore feel that precaution should not be employed as a tool to

21 J Tickner and D Kriebel, ‘The Role of Science and Precaution in Environmental and Public Health Policy’ in E Fisher, J Jones and R von Schomberg (eds), Implementing the Precautionary Principle (Cheltenham, Edward Elgar, 2006) 42–3; A Stirling, O Renn and P van Zwanenberg, ‘A Framework for the Precautionary Governance of Food Safety: Integrating Science and Participation in the Social Appraisal of Risk’ in Fisher, Jones and von Schomberg, previous n, at 284–16. See the Commission of the European Communities in its Communication on the Precautionary Principle, where it argues that ‘measures applying the precautionary principle belong in the general framework of risk analysis, and in particular risk management’ (COM (2000) 1, 13).
22 CR Sunstein, Infotopia (Oxford, Oxford University Press, 2006) 57 ff.

ascertain scientific truth, and for that reason it should also play no role in the risk assessment phase of risk regulation.23

C. Deliberative Precaution

Decisions about the acceptability of particular applications of biotechnology, Julia Black argues, are often informed by a single predominant paradigm. Indeed, it is true that decisions about, for instance, genetically modified organisms to a large degree continue to be outcomes of quantitative risk assessments, with mere lip service being paid to qualitative considerations such as ethics, the aesthetics of the countryside, the socio-agricultural impacts of GMOs, etc.24 Similarly, decisions about the introduction of new reproductive technologies, such as PGD, are usually framed in terms of ethics.
Black, in the company of many eminent scholars, is of the opinion that the predominance of any given paradigm that unduly discriminates against relevant societal actors hinders the articulation of regulatory goals that enjoy broad support, and ultimately undermines effective regulation of the biotechnologies more generally.25 Presuming that this is true, then there surely is something to be said for the proposition that biotechnology regulation should stimulate the dissemination of the important insights that representatives of different paradigms may offer on one and the same problem in circumstances where uncertainty prevails.26 Precaution serves this role and as such accords with our notion of common sense (although it is prudent to presume that common sense may not be very common). The precautionary principle requires deliberation at the interface between science and policy in circumstances where science is unable to speak with a single and unequivocal voice. Von Schomberg therefore believes that the precautionary principle is first and foremost a deliberative principle.27 This implies that the precautionary principle is not so much a principle that urges regulators to err on the side of caution (informing us what to choose), as a procedural

23 For a comprehensive review of case law, see n 15 above.
24 See for instance Dir 2001/18/EC on the Deliberate Release into the Environment of Genetically Modified Organisms [2001] OJ L106/1. Recital 9 provides: ‘Respect for ethical principles recognised in a Member State is particularly important. Member States may take into consideration ethical aspects when GMOs are deliberately released or placed on the market as or in products.’ See also Art 9 (consultation of ethical committee(s)).
25 J Black, ‘Regulation as Facilitation: Negotiating the Genetic Revolution’ (1998) 61 MLR 621–60.
26 With Black, Aristotle believed that by organizing deliberation in this way all come together … they may surpass—collectively and as a body, although not individually—the quality of the few best … When there are many who contribute to the process of deliberation, each can bring his share of goodness and moral prudence … some appreciate one part, some another, and all together appreciate all. (Quoted in CR Sunstein, Infotopia (Oxford, Oxford University Press, 2006) at 49.)
27 R von Schomberg, ‘The Precautionary Principle and its Normative Challenges’ in Fisher, Jones and von Schomberg, n 21 above, ch 2.

principle that instructs us to take account of all relevant knowledge in circumstances of scientific uncertainty and ignorance (directing us how to choose). Because risk management, as opposed to risk assessment, is in any event a political process, deliberative precaution can plausibly be accommodated in that part of risk regulation, and indeed has been widely employed in that way in environmental risk regulation in Europe since the early 1990s. In this guise, the value of precaution resides simply in the fact that it stimulates deliberation as a response to situations of scientific uncertainty. Precaution in those circumstances is attractive for the same reasons for which we pursue deliberation in liberal democracies generally. Scientific uncertainty exists in respect of, for example, the clinical risks of xenotransplantation, germ-line therapy or reproductive cloning, and there is much to be said for deliberative precaution in such instances. In European institutional practice and case law there exists a (slim) majority opinion that, even though precaution operates in the twilight of science and policy, its role ought to be confined to the risk management phase of risk regulation.
The European Commission Communication of February 2000 presents the precautionary principle as a risk management tool, and in the process emphasises the importance of keeping it out of risk assessment:

The precautionary principle is particularly relevant to the management of risk. The principle, which is essentially used by decision-makers in the management of risk, should not be confused with the element of caution that scientists apply in their assessment of scientific data.28

The precautionary principle is likewise perceived by the ECJ as constituting ‘an integral part of the decision-making processes leading to the adoption of any measure for the protection of human health’.29 The difference between ‘deliberative precaution’, in favour of which all the usual justifications of political deliberation speak, and ‘fact-finding precaution’, for which there exists no such obvious justification, is important for another self-evident reason. In liberal democracies that foster autonomy and equality, the distinction implies that when we need to decide about technologies that interfere in the public sphere, as is the case with, for example, nuclear power or GMOs that are released into the environment, there is in any event a good case for deliberation and hence deliberative precaution which, as I suggested, can and probably should take place in the risk management phase. When the impacts of those technological interventions remain in the private sphere, however, and where the only interest can possibly be to establish truths that direct autonomous individuals in the pursuit of the good life (eg about the clinical risks

28 COM (2000) 1, summary, para 4, emphasis added. Other public institutions also consider the precautionary principle relevant only in the risk management phase.
See eg the Scientific Steering Committee’s Working Group on Harmonisation of Risk Assessment Procedures in the Scientific Committees advising the European Commission in the area of Human and Environmental Health (First Report on the Harmonisation of Risk Assessment Procedures, 2000).
29 eg Case C–236/01 Monsanto [2003] ECR I–8105 at para 133.

associated with PGD), there is no obvious role for deliberative precaution, unless the sum total of those individual choices will give rise to interferences in the public sphere that are manifestly undesirable.30

D. Enabling Precaution

Typically, at least in the all-important political arena, the precautionary principle is advocated mainly as an enabling principle. As an enabling principle, precaution differs from both deliberative precaution and fact-finding precaution in that its effect is neither to stimulate and structure deliberation in the risk management phase, nor to establish scientific facts in the risk assessment phase. Rather, ‘enabling precaution’ directs regulators in their decisions if, due to scientific uncertainty, no risk assessment is possible, or if notwithstanding a risk assessment a state of uncertainty remains. When under such circumstances uncertainty about the impact of a technology persists, enabling precaution posits that regulators should temporarily prohibit or constrain that technology until there is new evidence suggesting no risk or an acceptable risk. The underlying sentiment that appears to inform enabling precaution is that the road to hell is paved with dangerous precedent for change.31 It is also often implicit that precaution should apply to proposed change, but not to the status quo.
However, as I have observed elsewhere,32 the presumption that maintaining the status quo is worthy of priority over conscious change is without empirical foundation (we have as little notion of how dangerous a future without change will be as we understand the risks of introducing new technologies), and against the background of the unstoppable evolution of life it is even illogical.33 We would also do well to remember that the impact of overly liberal use of environmental precaution comes in the shape of lost commercial product development that could have answered real societal needs. This is a problem serious enough to justify some strong reservations about precaution. Importantly, however, in other cases benefits often do accrue to an environment that structurally

30 Eg, this could be the case if individual parental preferences for boys over girls, in combination with technologies that allow for sex selection, give rise to social pressure and demographic problems. The focus of regulators in such circumstances should be to mitigate such problems, which in our example might take the form of allowing sex selection only for the purposes of family balancing.
31 FM Cornford, Microcosmographia Academica (London, Bowes and Bowes, 1908; reprinted 1966), as quoted in Harris, n 2 above, at 34: ‘The Principle of the Dangerous Precedent is that you should not now do an admittedly right action for fear you, or your equally timid successors, should not have the courage to do right in some future case … Every public action which is not customary, either is wrong, or, if it is right, is a dangerous precedent. It follows that nothing should ever be done for the first time.’
32 H Somsen, Regulering van humane genetica in het neo-eugenetische tijdperk (Nijmegen, Wolf Legal Publishers, 2006) 7.
33 Harris, n 2 above, at 34.

receives a rough deal from both society and the law.
Occasional misuse of precaution thereby becomes somewhat easier to swallow, although not thereby acceptable. Also, within the sphere of environmental regulation, the impact of precaution on regulatory tilt is crystal clear: where precaution kicks in, activities with environmental impacts will be prohibited, or in some way made subject to restraint. Finally, and probably most importantly, precautionary regulation is always temporary, ie it lasts only for as long as scientific uncertainty prevails, and it implies an obligation actively to engage in research to end that uncertainty.34 Contrast this with the use of precaution in the sphere of reproductive cloning. Here the effect of precaution is to justify a permanent erosion of the rights of present right-holders, whilst the benefits exist only in the realm of our imagination. After all, as I will explore in more detail below, it is uncertainty about the rights of, for instance, future clones that triggers precautionary bans on reproductive cloning, so that those future right-holders (clones) will remain imaginary and the impacts on their rights forever uncertain. Precautionary regulatory powers thereby become self-perpetuating. Put differently, unlike scientific uncertainty, uncertainty about, for instance, the impact of reproductive cloning on individual autonomy will be eternal, presuming at least that bans on reproductive cloning prove effective. What we should acknowledge too is that, as subsequent examples will show, for the sake of ‘potential rights’ and ‘potential right-holders’, this kind of reproductive precaution has an all too real potential to result in unnecessary actual death and actual appalling illness of present and real right-holders (children and adults). In short, the case for enabling precaution in the sphere of reproductive technologies is weak on many counts.
If we wish to invoke enabling precaution (eg in respect of reproductive cloning), it ought to be accompanied by guarantees ensuring that precautionary restrictions do not become de facto self-perpetuating. For GMOs, such guarantees have taken the form of a step-by-step approach in respect of field trials. This requires small-scale experience to be gained with risks concerning, for example, cross-pollination, before a gradual increase in the acreage of GM crops is sanctioned. I see no reason why such a step-by-step approach could not also allow small-scale reproductive cloning.

34 Art 5.7 of the all-important SPS Agreement reads accordingly: ‘in cases where relevant scientific evidence is insufficient, a Member may provisionally adopt sanitary or phytosanitary measures on the basis of available pertinent information, including that from the relevant international organizations as well as from sanitary or phytosanitary measures applied by other Members. In such circumstances, Members shall seek to obtain the additional information necessary for a more objective assessment of risk and review the sanitary or phytosanitary measure accordingly within a reasonable period of time’ (emphasis added). The Panel in EC Biotech ruled: ‘If procedural delay could be used, directly or indirectly, as an instrument to manage or control risks, then Members could evade the obligations to be observed in respect of operational SPS measures, such as Art 5.1, which requires that SPS measures be based on a risk assessment.’ The ruling is published on the Internet, accessed 15 June 2008.

III. ‘Enabling Precaution’ in Reproductive Technologies

In the previous sections, after having identified the precautionary principle as a principle enabling public regulatory intervention, I argued that precaution fulfils different roles that should be clearly distinguished.
‘Fact-finding precaution’ would be justified only if that mode of deliberative fact-finding could be shown to be more reliable than competing methods, in particular the kind of specialist independent scientific expertise that currently forms the basis of risk assessments. Couples could then use that more accurate information to arrive at the best possible informed autonomous decision about whether or not to avail themselves of a reproductive technology. In reality, however, there is no evidence that fact-finding precaution yields more accurate information than independent scientific expertise, which is of course why international and domestic courts insist that the role of precaution should be confined to deliberation in the political risk management phase of risk analyses. ‘Deliberative precaution’ may serve useful purposes in all those cases in which deliberation traditionally features, or should feature, which, in liberal democracies like our own, may include risk management. However, in so far as we do not normally deliberate about the reproductive rights of couples, there is also no compelling case for introducing deliberative precaution in respect of new reproductive technologies. ‘Enabling precaution’, finally, is often based on the flawed assumption that it should apply only to change, and not equally to the status quo, and in any event requires rigorous judicial control to address the real prospect of unbridled state power. In sum, there are few arguments that direct us to embrace any of the manifestations of precaution that could empower public authorities to hinder individuals in the pursuit of their uniquely individual visions of the good life. At best, a difficult argument may be constructed that some technologies impact on the common heritage of mankind in ways that are reminiscent of those we are now witnessing in the shape of environmental decline.
Indeed, UNESCO’s International Bioethics Committee has taken the first vital step that is necessary for the construction of such an argument by arguing that ‘the human genome must be preserved as the common heritage of mankind’.35 How the IBC thinks the

35 See Art 1 of the Universal Declaration on the Human Genome and Human Rights, 3 December 1997, published on the Internet, accessed 15 June 2008. UNESCO of course understands that the human genome is subject to evolutionary change. See Art 3 of the same Declaration: ‘The human genome, which by its nature evolves, is subject to mutations. It contains potentialities that are expressed differently according to each individual’s natural and social environment, including the individual’s state of health, living conditions, nutrition and education.’

human genome can be preserved without artificially intervening in the continuous process of natural evolution must of necessity forever remain a mystery. Be that as it may, such an argument, if indeed it could be constructed, would at best justify recourse to deliberative precaution in a very limited number of cases, eg to decide about reproductive macro-cloning.36 Enabling precaution would have to be made subject to some regulatory device, such as a step-by-step approach, aimed at ending states of scientific uncertainty in the future. If these arguments by themselves still fail to sway proponents of reproductive precaution (such as, most eminently, Roger Brownsword who, as we will see, relies on enabling precaution in an attempt to protect future right-holders and future rights), then an empirical assessment of the impact of precaution on the regulation of reproductive technologies in practice perhaps has more persuasive force.
Thus, what I am sure everybody agrees enabling precaution must not mean is that, through its exercise, the state can restrict any activity at will, for the sake of preventing any unknown risk, to any category of rights or interests, for any given period of time. Clearly, in that guise, the precautionary principle becomes a tool for arbitrary government, which is precisely why enabling precaution must in any event be judicially disciplined.37 Disciplining precaution involves circumscribing the types of activities and unknown risks that may trigger precaution, as well as the types of rights and interests to be protected by precautionary intervention. Precaution may be further contained by general principles of sound administration, including proportionality, use of the best available expert advice, an obligation of proper reasoning (especially if expert advice is not followed), and of course the availability of administrative or judicial review. The disciplining of precaution hence boils down to de-politicizing precaution so that it can become subject to judicial review. The latter, in turn, inevitably requires proceduralizing precaution. Yet the scope of this part of the paper is less ambitious; I simply aim to show that the precautionary principle, at its present stage of development, cannot be said to be sufficiently contained to be able to perform a constructive role in regulating reproductive technologies. As a minimum baseline for accepting the opposite proposition, ie that enabling precaution does have a constructive role to play, I will demand that enabling precaution gives rise to similar outcomes in similar settings. By this I do not mean that in all cases where enabling precaution is exercised regulatory prohibitions in some shape or form should result.
Rather, I will merely expect that in respect of one and the same activity precaution affects regulatory tilt in a comparable manner.38 If we were to discard this simple

36 Roger Brownsword distinguishes macro-cloning (on a large scale, as in Brave New World) from micro-cloning (on a small scale). See R Brownsword, ‘Stem Cells and Cloning: Where the Regulatory Consensus Fails’ (2005) 39 New England Law Review 555–62.
37 See the Commission’s Communication on the Precautionary Principle, n 6 above, in which considerable emphasis is put on the fact that application of the precautionary principle should never be arbitrary or discriminatory.
38 The term ‘regulatory tilt’ I derive from Brownsword in R Brownsword, ‘Red Lights and Rogues’ in Somsen, n 17 above.

requirement, the precautionary principle could turn into an instrument in aid of arbitrary government and, in a worst-case scenario, attribute unbridled powers to regulatory authorities. Before doing so, however, I wish to afford some attention to the reasoning that leads Brownsword (cautiously) to embrace precaution in the realm of human genetics:

On this analysis, the State is authorised to act in a precautionary way for the sake of the integrity of the Community—for example, where an act that is permitted under a procedural justification has a negative impingement on possible rights-holders, on arguable rights or on the viability of the community of rights itself. In such cases, the State’s case for intervention does not rest on the need to intervene for the sake of a clear and settled overriding right, but for reasons of risk-avoidance relative to the regime of rights and the community of right-holders.39

Thus, precaution may justify state intervention in the sphere of individual human rights for the sake of protecting future generations of rights-holders against uncertain risks that could plausibly undermine future rights or arguable rights.
There can be absolutely no doubt that Brownsword’s defence of precaution stems from a deeply held commitment to human rights, for whose sake he might reluctantly be prepared to sacrifice some of those rights now for the benefit of future right-holders. Yet, if the road to hell is paved at all (which somehow sounds improbable), it is more likely to be paved with good intentions than with dangerous precedent for change, and I will try to show that this is a case in point. Advisory committees and political institutions have regularly resorted to precaution in debates on reproductive technologies. In those instances, it is invariably enabling precaution that is at work: it is invoked to justify the substance of decisions regarding reproductive technologies, sometimes in the face of conflicting independent specialised advice. Compared to Brownsword’s concern for future right-holders, the good intentions of these advisory committees and executive bodies are likely to be more dubious. In the next part of the essay, I will try to provide empirical evidence that enabling precaution does not perform the role that Brownsword has in mind, and in fact invites its systematic abuse. For this, we first need to understand more precisely what human rights proponents hope to gain from enabling precaution and how, if precaution falls into the wrong hands, things may turn sour.

A. The Case for Precautionary Constraints on Individual Reproductive Autonomy

In his many scholarly articles and books, Brownsword explores three bioethical approaches that together form a bioethical triangle: the first is utilitarian, the second reflects a human rights paradigm, and the third amounts to a manifestation

39 Brownsword, n 42 below, at 451, my italics.

of communitarianism championing the restrictive virtues of human dignity (‘dignitarianism’).
Human rights approaches, which Brownsword favours, foster individual autonomy, and thus are sympathetic to unrestricted access to reproductive technologies, subject to a requirement of prior informed consent. Paradoxically, however, exercise of the right to have access to these technologies, such as reproductive human cloning and pgd, may conceivably come to undermine individual autonomy. This is because newborns may to some extent be said to have been pre-programmed, undermining their sense of autonomy, or self. Since it is respect for individual autonomy on which all human rights rest and which preconditions any community of rights, this is of the gravest possible concern to human rights lawyers. Nobody can be sure if human reproductive cloning will have this effect, but it might. Nor can we be sure that human cloning would involve any breaches of human rights, but then again it might. Precaution, as an enabling principle that allows government to take action under circumstances of uncertainty where it otherwise could not, presents itself as an attractive solution to this dilemma. For as long as we are not sure about the implications of human cloning for individual autonomy, we should embrace enabling precaution, which will have the effect of ruling out cloning.40

To be sure, precaution has its appeal for utilitarians and dignitarians too. Utilitarians are guided by the overall consequences of any regulatory choice. If those on the whole are positive, utilitarians will give the green light. However, as observed, it is difficult to be sure about the ultimate consequences of a nascent technology such as reproductive cloning, and in those circumstances utilitarians might wish to embrace precaution to further the utilitarian agenda. Utilitarian use of precaution most probably takes a form that very closely resembles its classical environmental form.
That is, for as long as scientific knowledge is insufficient to rule out serious harm to present and future generations, disciplined recourse to enabling precaution may be justified. Dignitarians, on the other hand, are driven by constraining notions of human dignity that allow for no compromise. Any commodification or instrumentalisation of human life is a no-go, and whereas deliberative precaution is therefore certainly not what dignitarians are interested in, enabling precaution amounts to an attractive additional arrow in the quiver of dignitarians operating in the political arena.

Experience with environmental precaution suggests that the principle is sufficiently plastic to accommodate uncompromising approaches to our ecology. Thus, the Position of the European Environmental Bureau (EEB) on the precautionary principle lowers the threshold of ‘scientific uncertainty’ to include ‘that which is not known; this also includes that which we are not aware we don’t know—this is unimaginable and potentially limitless’.41 In the sphere of reproductive medicine, this kind of precautionary fundamentalism was a point of particular irritation for the House of Commons Science and Technology Committee:

The HFEA has drawn on theoretical psychosocial harms in formulating policy on sex selection, invoking the precautionary principle. It concerns us that the potential for harm is often quoted without recourse to a growing body of evidence of its absence.

40 In order to avoid having to explore the two different notions of human dignity, a first which empowers and forms the justification for the existence of human rights, and a second which constrains and justifies limitations of human rights, I am simplifying the argument somewhat by ignoring the fact that Brownsword is concerned about erosions of the first notion of human dignity as a result of reproductive cloning.
Ms Philippa Taylor, a contributor to our online consultation, told us that while she would not oppose sex selection for social reasons if evidence of lack of harm could be found, she remained confident that “you will not find there is no psychological impact on children from sex selection”.

It is not difficult to imagine how the concept of ‘irreversibility’ may turn out to be similarly accommodating to dignitarian vigour.

In sum: precaution has the troubling potential to play a role in each of the three bioethical approaches, but each time in pursuit of entirely different and often conflicting ends. That, at the same time, is the most serious problem with precaution in the arena of the bioethical triangle: it has the potential to evolve into a Trojan horse undermining the very cause it was meant to serve. Brownsword clearly recognises this potential:

Some states may be more risk-averse (more precautionary in their approach) than others and we must be careful that this stewardship jurisdiction is not a hostage to dignitarian fortune.42

Given the state of conceptual and definitional anarchy that still marks the precautionary principle, this concern should come as no surprise. Neither is it predictable how reproductive precaution impacts on regulatory tilt. Thus, political use of precaution in the sense observed by Van den Daele may tip the regulatory balance towards prohibiting access to reproductive technologies, or have the opposite effect: an almost complete liberalisation of access to reproductive technologies. To illustrate this latter point, I will explore two recent cases in which enabling precaution has been used to decide about access to reproductive technologies. I examine these policy outcomes because they are based on, or at least argued to be consistent with, enabling precaution.
If a simple comparison of those outcomes reveals that they have little or nothing in common, then we can conclude that regulatory practice falls short of my baseline, which I believe means that enabling precaution has no role to play in reproductive technologies.

41 EEB Position on the Precautionary Principle, December 1999, published on the Internet at accessed 15 June 2008.
42 R Brownsword, ‘Happy families, consenting couples, and children with dignity: sex selection and saviour siblings’ (2005) 17 Child and Family Law Quarterly 4, 437–73 at 472.

B. Precautionary Regulation of Reproductive Technologies in Action

i. House of Commons Science and Technology Committee: Utilitarian and Human Rights Precaution

The House of Commons Science and Technology Committee, in its important report Human Reproductive Technologies and the Law, at times deals with the precautionary principle as an irrational impediment to medical progress,43 and at times embraces the principle to lend support to its agenda for reproductive technologies.44 This agenda appears to be mostly situated in the utilitarian corner of Brownsword’s bioethical triangle, but a human rights approach can at times also be discerned. The Committee explicitly acknowledges the reality that precaution has become a ‘fashionable’ political tool, criticises its ‘excessive use’ in assisted reproduction, but all the same does not shy away from defining precaution so as to suit its own agenda:

Many of the decisions about what to regulate or to legislate about depend on the approach taken with regard to the balance of harm and benefit or potential harm and potential benefit. It has become fashionable to specify that authorities (whether that be Governments, agencies, industry, watchdogs etc) should take a “precautionary approach” or adopt the “precautionary principle”. This means different things to different pressure groups, and to different sides of the argument.
In respect of medical advances it has never meant “proceed only where there is evidence of no harm”. If it did many of the advances would never be made. In medical research practice it means proceeding through carefully regulated and tightly overseen research stages, requiring—among other things—vigilance and peer review. In clinical practice it means proceed cautiously and in a manner amenable to ethical oversight and clinical audit while there is no evidence of sufficiently serious harm or potential harm to outweigh benefit or potential benefit, while being vigilant in looking for unintended and otherwise adverse outcomes. We do not see why the area of human reproductive technologies should do anything other than proceed under a precautionary principle currently prevalent in scientific, research and clinical practice. This means—as specified in paragraph 46 above—that alleged harms to society or to patients need to be demonstrated before forward progress is unduly impeded.

43 See House of Commons Science and Technology Committee, Human Reproductive Technologies and the Law, Fifth Report of Session 2004–05, vol 1, at para 70: ‘However, it is disappointing that both the BMA and the Royal Society of London seem unwilling to countenance the idea that applications of cloning such as this could have a future. In doing so, they have become unlikely advocates of a prohibitively restrictive application of the precautionary principle. We question whether their stance owes more to the protection of the public image of doctors and scientists; that they fear that a more pragmatic approach to reproductive cloning would leave them open to criticism.’
44 Ibid at para 47. As for the utilitarian approach adopted, this is palpable throughout the document and expressed in general terms in para 46: ‘Reproductive and research freedoms must be balanced against the interests of society but alleged harms to society, too, should be based on evidence.’
The irony that the Committee’s view on precaution is not all that different from the proactionary principle developed by transhumanists will not be lost on the attentive reader:

People’s freedom to innovate technologically is highly valuable, even critical, to humanity. This implies several imperatives when restrictive measures are proposed: Assess risks and opportunities according to available science, not popular perception. Account for both the costs of the restrictions themselves, and those of opportunities foregone. Favor measures that are proportionate to the probability and magnitude of impacts, and that have a high expectation value.45

Although this certainly is an unexpected twist, we should applaud the Committee for at least explicitly acknowledging the political nature of precaution, as the worst cases of abuse are those where the principle is advanced as an unshakeable legal principle directing reproductive regulatory choices in one particular and inevitable direction. We do not need to establish whether the Committee’s understanding of precaution amounts to the most faithful representation of the principle. What matters is that enabling precaution as a legal principle is at a stage of development in which the Committee’s interpretation is not much less plausible than any of the many possible alternatives. In this particular guise, precaution becomes a tool for advancing a utilitarian agenda, or at times a human rights approach. Needless to say, on the basis of this reading of precaution, the Committee finds no objections to various applications of pgd, including those that result in sex selection or saviour siblings. Moreover, in respect of reproductive cloning, which involves much greater risks than pgd, the Committee’s reasoning gives rise to a utilitarian outcome.

45 Published on the Internet at accessed 15 June 2008.
It concludes:

If there is to be a total prohibition of any form of reproductive cloning, it is important that it is supported by principled arguments why such a technique should be banned even if it were shown to be safe, effective and reliable. Without such arguments, an indefinite absolute ban could not be considered rational. The Minister’s refusal to enter into any discussion of reproductive cloning is not an encouraging starting point for an open-minded review of the adequacy of existing legislation.

As for gamete intrafallopian transfer (GIFT) and intrauterine insemination (IUI), the emphasis on consent may suggest a human rights approach:

However, given our acceptance of the position that the state should intervene only in carefully defined and justified circumstances, where there are specific harms, in reproductive decisions, the common law rules of consent are sufficient to protect patients in the face of these risks. It is consistent with our ethical approach that, rather than adding to the list of regulated fertility treatments, we should be decreasing the level of state intervention. We accept that GIFT (gamete intrafallopian transfer) and IUI (intrauterine insemination) pose similar risks to IVF, but we have already concluded that these risks lie within accepted legal boundaries on what people can consent to. We have not been persuaded, therefore, that regulation should demand anything more than that the highest technical standards are observed.

Whether the Committee in these cases ultimately pursues utilitarian or autonomy precaution is interesting, but not the point that I wish to underline. What I do wish to emphasise, instead, is that enabling precaution in all cases comes in aid of a vision that the Committee probably entertained well before precaution ever entered the equation.
Precaution can be, and indeed has been, used as a further argument to propose what the Committee was always going to propose.

ii. The Dutch Health Minister: Dignitarian Precaution

Dignitarian (ChristianUnion) participation in a Dutch coalition government alongside Christian democrats (Christian Democrat Appeal) and social democrats (Labour Party) offers an interesting glimpse of another guise of enabling precaution. In time, the current government will perhaps be remembered for how it used dignitarian precaution to transform a country with permissive regulatory regimes on reproductive technologies. The United Kingdom (UK) is such a country, and so is the Netherlands, but the question is for how much longer. The answer in part hinges on how Dutch dignitarians will play the precautionary card. A foretaste comes from the Dutch Christian Democrat Minister for Health. The Minister in 2006 decided against adapting the Act on Reproductive Technologies to technological progress so as to reflect new clinical uses of pgd. These new uses include tissue-typing for the sake of the conception of saviour siblings, pgd to serve carrier-parents, pgd in response to diseases with variable expression, pgd to exclude late-onset diseases, and pgd for sex selection for social reasons (in practice, family balancing).

Unlike utilitarians and human rights proponents, dignitarians place the sanctity of the embryo at the centre of their reproductive ethics.46 Precaution could be directed towards the embryo (or newborns, or parents; the choice at present is unfettered) by protecting it from any potential harms, including psychological harms, even where there is no (prospect of) scientific proof of any harm.
The Minister’s precautionary reasoning directed at the protection of newborns illustrates this point:

Although children born after pgd do not appear to have a higher chance of any abnormalities than children born after ‘normal’ ivf, the Advice underlines the significance of long-term follow-up studies to obtain greater certainty. For me, this is an additional reason to be particularly careful in respect of any extension in allowing pgd.47

Enabling precaution was also used to reject specific proposals made by the Health Council, an independent advisory body, to extend the use of pgd treatment. In respect of pgd to exclude embryos with genetic disorders with variable expression or incomplete penetrance, the Minister argued:

At this stage, use of pgd to select embryos for the sake of their future health to some extent appears to remain a matter of chance. Even embryos that are affected may not develop a disease, or—in the case of variable expression—only be mildly affected. Pgd treatment in those cases appears to offer a degree of certainty, but in light of the knowledge available about these kinds of diseases this amounts to a false sense of security. Again, for me this is a reason to exercise caution.48

Precaution in this case is used to protect embryos that might be needlessly created in such instances. To be sure, they also might not be needlessly created; we simply do not know for sure if certain genetic predispositions in any given embryo will or will not give rise to disease.

46 That an ethic based on the sanctity of embryos is an ultimately unsatisfactory way to resolve ethical issues is argued by Harris, n 2 above, ch 10 (The Irredeemable Paradox of the Embryo). While I tend to agree, an exploration of his arguments cannot be undertaken in this essay.
47 My translation and emphasis.
The point of course is that many prospective parents may not want to participate in this genetic lottery, and pgd means that they do not have to. As we have seen, the House of Commons Committee in identical cases, on the basis of precaution, argued that properly informed parents ought to be able to consent to the use of pgd.

A similar line of precautionary reasoning helps the Minister in rejecting the Health Council’s proposal that pgd ought to be available to parents in need of a ‘saviour sibling’. In these cases, pgd and tissue-typing are used to conceive a child with the (additional) aim of offering an existing child a chance of treatment for a genetic disease. Amongst other arguments that focus on instrumentalisation, the Minister again argues:

In other words, should a variety of technical interventions be allowed—of which as yet it is uncertain whether they do not cause damage in the long term—at the start of life, for the sake of donorship benefiting a third person?49

In brief, in all specific cases precaution can be applied in aid of a dignitarian agenda simply by making the embryo the object of uncertainty. Add to this the fact that this may concern any kind of uncertainty (psychological, health, social, etc), and again there appears to be no temporal limit to the possibility of invoking precaution to limit patient autonomy.

We should not shy away from articulating the impact of reproductive precaution in these cases frankly and honestly: children that could be cured thanks to a tissue transplant (obtained non-invasively from the umbilical cord of a saviour sibling) are now sure to die, and adult carriers of genetic diseases who could have had healthy babies will instead raise children with debilitating diseases.50

48 My translation and emphasis. Ibid.
49 My translation and emphasis. Ibid.
50 Harris, n 2 above, at 177, notes that those who place the sanctity of the embryo at the centre of their reproductive ethics imply that ‘killing’ (an embryo) is invariably less moral than ‘allowing to die’ (newborns or adults affected by genetic disease). For various reasons, this is a thesis that is very difficult to maintain.

IV. Conclusions

Precaution is giving rise to tremendous conflict and confusion in the arena of environmental regulation. The aim of precaution in that regulatory field is to redress a number of imbalances between man and the environment that threaten the very existence of our species. Despite the difficulty in ensuring that precaution does not become a source of arbitrary exercise of public power, the combined effect of these imbalances provides sufficient justification for precautionary environmental regulation. In that sphere, international and domestic courts and tribunals have disciplined the precautionary principle by subjecting it to procedural review. To mention but a few of these disciplines:

— precautionary measures taken within a WTO or EU context must always be temporary and proportional;
— measures must be justified by scientific uncertainty;
— measures must serve to protect a well-defined regulatory aim that informs the regulatory regime in which the principle operates, mostly environmental and health protection against irreversible harm;
— the required degree of scientific uncertainty is quantified with reference to the aim for which the science is used.
That aim is the carrying out of an environmental risk assessment, a procedure that in turn is specified in the applicable legal regimes: if there is sufficient scientific knowledge to carry out the risk assessment, precaution cannot be invoked, notwithstanding areas of scientific uncertainty.51

All in all, although much work remains to be done, this ongoing process of judicial fine-tuning of environmental precaution is ensuring that precaution can evolve towards an ideal type, and thereby become a real asset for environmental regulation.

The case for precaution to regulate reproductive technologies is very tenuous by comparison. We should first of all seriously question the need for precaution in the sphere of reproductive technologies at all. There are only a very limited number of cases, such as germ-line genetic interventions and xenotransplantation or reproductive macro-cloning, where the answer may be in the affirmative. These technologies are surrounded by scientific uncertainty, and harbour unknown, potentially irreversible (health) risks for future generations. Even in respect of these special cases, the case for deliberative precaution is more compelling than the case for fact-finding precaution, which is not the best way to arrive at accurate representations of scientific facts. When enabling precaution is applied, this should in any event be on condition that such precautionary bans do not factually amount to self-perpetuating prohibitions.

51 This is the outcome of the recent WTO case EU Biotech, published on the Internet at accessed 15 June 2008. EU law as yet does not explicitly quantify the required degree of scientific uncertainty in this way. Given the precedence of WTO law, however, it most probably will be within the foreseeable future.
In the two cases that have been discussed in section III.B, the use of precaution has been difficult if not impossible to defend, and the dangers inherent in its use are both serious and manifest. In summary, on the basis of the evidence presented in the second part of this essay, at its present stage of development, enabling precaution:

— provides any regulator (dignitarian, utilitarian, human rights),
— indefinite powers,
— to restrict or sanction any activity,
— on the basis of any degree of uncertainty of any nature,
— for the sake of any real or potential right-holder, and
— any real or potential right.

Whether we are dignitarians, utilitarians or human rights proponents, the case studies show that precaution can turn against us at any unannounced moment. Simply importing precaution from environmental regulation without simultaneously thinking about ways to discipline the principle therefore amounts to cloning a Trojan horse. If we fail to tame precaution, we may have to conclude that the theoretical difference between a precautionary state and a totalitarian state is a matter of time.

11

The Transplantation of Human Fetal Brain Tissue: The Swiss Federal Law

ANDREA BÜCHLER

I. Introduction∗

If medical research now being carried out proves successful, it will be possible in future to cure illnesses such as Alzheimer’s disease and Parkinson’s disease by implanting human fetal tissue1 into the brain of the patient. Research in this area has also been conducted in Switzerland for some time. In 1995, human fetal tissue from a total of five donor fetuses was transplanted into the brain of a patient suffering from Parkinson’s disease at the Inselspital hospital in Berne. This transplant had previously been granted approval by the Ethics Commission. Further transplants of this nature are planned.2

∗ The original text of this paper was written by the author in German.
This English version is a translation of the original German by Nicholas MacCabe, a freelance translator living and working in Zurich. I wish to acknowledge the assistance of Marco Frei, research assistant at my chair, who made valuable contributions.

1 A word on terminology: in embryology, the distinction is usually drawn between the embryonic period (2nd to 10th week of pregnancy) and the fetal period (from the beginning of the 11th week of pregnancy). In the literature relevant to tissue transplants available today, the expression ‘fetal tissue’ is used throughout (see the references in the Swiss Academy of Medical Sciences guidelines on the transplanting of fetal tissue, accessed August 2008). Swiss law’s definition on this point is more exact, referring to embryonic and fetal tissue and cells. For simplicity’s sake, the internationally established term ‘fetal tissue’ will be used here.
2 Although the operation proceeded without complication and the Ethics Commission had authorised further transplants, the researchers initially refrained from carrying these out. This was partly due to logistical problems, but partly also because the researchers had concluded that further fundamental research into surgery involving the central nervous system was necessary. Further work in this area is currently being carried out at the research laboratory of the neurosurgical clinic of the Inselspital hospital in Berne (for more information on this whole issue, see Federal Council comments on the TPG—Botschaft TPG, 126, available at accessed August 2008; a more recent research report from the Inselspital hospital can be found on accessed August 2008).

While transplantation of fetal tissue is not yet a feature of day-to-day medical practice, it does nevertheless already present a challenge to medical ethics today.
This is because it raises issues which lie at the intersection of several major questions in the field of bioethics: questions regarding the status of a fetus, the conduct of research on a fetus, the use of organs and tissue, the transplanting of brain tissue as the embodiment of human personality, the use of aborted fetuses and the rights of the pregnant woman. The purpose of this paper is to provide an overview of the steps involved in a transplant of human fetal brain tissue, to identify and discuss the bio-medical and bio-ethical issues surrounding these procedures, and to describe how Swiss legislators addressed these issues in drawing up the Federal Law on the Transplantation of Organs, Tissues and Cells (Transplantationsgesetz, hereinafter TPG).

To set the stage from a comparative law standpoint, let it first be said that there are as yet few laws devoted to the transplanting of fetal tissue. Legislation by individual US states3 and the Swedish transplant legislation of 1995 were among the first laws enacted in this area. Under German law, legislation on human tissue envisages the possibility of removing tissue from fetuses provided certain prerequisites are met.4 The international consensus seems to be that the transplantation of fetal tissue should also be permissible in a research context, with transplant practice being governed by ethical guidelines set out by specialists in this field.5

II. The Transplant Procedure: An Overview

When human fetal tissue is transplanted from fetuses which die as a result of pregnancy termination, tissue parts are removed and immediately transplanted for therapeutic purposes. The ‘raw material’ for the transplant can theoretically be secured both from artificially induced and natural abortions, though in practice tissue from artificially induced abortions is generally used.
Indeed, in the case of tissue transplants from the brain of the fetus, which occur principally in the treatment of Parkinson’s disease,6 though there have also been some cases involving the treatment of Huntington’s disease or Alzheimer’s patients,7 only tissue from artificially aborted fetuses is used. This is because the tissue must be ‘fresh’ when it is secured and must be transplanted into the recipient’s brain immediately. Tissue from miscarried fetuses is not suitable for transplants, because the fetus generally dies in the mother’s body, and the fetal tissue is therefore already ‘dead’.8 However, even in the exceptional event of tissue from a miscarried fetus being secured in a ‘fresh’ state, it is unlikely that it could be transplanted to a recipient sufficiently quickly. This is because the recipient patient must already be prepared for the operation at the moment the tissue is secured from the fetus. This in turn requires some co-ordination in place and time between the pregnancy termination and the transplant, which is not possible in the case of miscarriages or spontaneous abortions.

3 Bauer (1994: 11 ff).
4 German Law on Human Tissue (Gesetz über Qualität und Sicherheit von menschlichen Geweben und Zellen (Gewebegesetz)), BGBl I 2007 (35), 1574, § 4a.
5 de Wert et al (2002: 79).
6 In the 1990s some 250 operations were carried out on patients suffering from Parkinson’s disease in Europe and the US. See Federal Council comments on the TPG—Botschaft TPG, BBl 2002, 125. See also Tyler (1998: 627, 629); Bauer (1994: 4 ff).
7 Tyler (1998: 627, 629); Bauer (1994: 5).
If one further considers that the successful treatment of a patient suffering from Parkinson’s disease, for example, requires tissue from up to ten fetuses, that all these fetuses must be of roughly the same age and that, as a result of the abortion methods used in Switzerland, not every fetus but only every second or third is technically in a condition making it possible to excise brain tissue, the scale of co-ordination necessary to conduct transplants with tissue from aborted fetuses becomes abundantly clear.9

In practice, the co-ordination requirements just mentioned greatly limit the scope for securing brain nerve cells from aborted fetuses and immediately implanting them into the brain of a patient. It is thus not surprising that medical research has devoted its efforts to developing new processes to enable sufficient quantities of cells to be made available at the same time and in the same place, and thus make it possible to use them in a transplant operation. Of the many avenues being explored, the following two are particularly noteworthy. The first approach involves taking the transplantable cells from an aborted fetus, multiplying them in a cell culture (so-called expansion) and increasing their clinical longevity. The second approach involves growing transplantable tissue from tissue stem cells. These stem cells can be secured both from fetuses from artificially or naturally aborted pregnancies (so-called embryonic germ cells, or EG cells) and from embryos grown in vitro (so-called embryonic stem cells, or ES cells). Were one of these approaches to succeed—and research reports indicate that considerable progress is being made in this area—medical research would gain the ability to preserve cultures of transplantable cells for relatively long periods of time and/or to multiply transplantable cells in culture, or to breed transplantable cells from stem cells.
Success of these endeavours would obviate the need to co-ordinate the time and location of several pregnancy terminations, thus potentially making the transplant of fetal tissue a practical clinical reality. This paper will not deal with these relatively new areas of medical research, however, and will limit itself therefore to considering transplants of fetal tissue involving the removal of transplantable brain tissue from artificially aborted embryos or fetuses.

8 Bauer (1994: 6 f); Sanders et al (1993: 400, 401 f).
9 For more on this whole issue, see Federal Council comments on the TPG—Botschaft TPG, BBl 2002, 124 ff.

III. Ethical and Legal Considerations at the Tissue Removal Stage

A. Ethical Considerations Relating to the Removal of Brain Tissue from Aborted Human Fetuses

As far as the ethical evaluation of tissue removal is concerned, there are three distinct steps to consider: the pregnancy termination, the release of the aborted fetus for tissue removal and the tissue removal itself. The ethical concerns which have been voiced with regard to the removal of transplantable tissue from aborted fetuses generally relate to only one of these steps. With regard to pregnancy termination it should also be mentioned that the intention here is not to examine the general ethical considerations which relate to termination per se. This discussion is limited to those ethical concerns which derive from the fear that the possibility of future tissue removal from an aborted fetus may have an ethically reprehensible influence on the pregnancy termination.

i. The Possible Influence of Imminent Tissue Removal on a Pregnancy Termination

a. Jeopardy to the Bodily Integrity and Autonomy of the Pregnant Woman

A fear regularly expressed is that if the removal of tissue from an aborted fetus is permissible, the bodily integrity of the pregnant woman at the stage of the pregnancy termination could be at risk.
These concerns are based on the fact that there are often close institutional bonds between those involved in the transplant and those involved in the termination, and that these can easily result in the interests of research and of the transplant recipient in obtaining tissue as securely as possible being accorded priority over the pregnant woman’s interest in having the termination carried out in the least harmful manner possible.10 It is argued that any such conduct by those involved in carrying out the termination would contravene the principle of doing no harm which has been a cornerstone of medical ethics since ancient times. A further fear is that the woman’s freedom to decide on whether to go ahead with the termination could be circumscribed. Here, too, attention is drawn to the problem of institutional relationships between transplant teams and those involved in carrying out the termination. It is argued that the possibility cannot be ruled out that women whose decision to have a termination carried out is not yet final could be motivated by incentives or coercion to agree to a termination in order to make the needed tissue available.11

10 A particular danger is that the timing or method of a pregnancy termination might be determined in order to facilitate tissue being obtained as easily as possible. See Schneider (1995: 211 ff); Bauer (1994: 107).
11 Schneider (1995: 215 ff).

The Transplantation of Human Fetal Brain Tissue 247

b. Instrumentalising Termination or Pregnancy Itself

A related concern is the risk of an increase in the number of abortions. It is argued that the danger is not only of a woman being coerced into agreeing to a termination. A greater danger is the possibility of her donating the fetus for the treatment of a sick person, which could increase the perceived legitimacy of termination both in the mind of the pregnant woman and in the view of her entourage, because the donation would serve a good purpose.
It is feared that the increased personal and social legitimacy arising from medical science’s interest in using fetal tissue could exert some influence on the way in which pregnancy termination is practised.12 There is a suggestion here of a risk of pregnancy termination becoming instrumentalised, so that termination could be practised as the means to the end of acquiring transplantable tissue. Some go further and are concerned that pregnancies could even be initiated with a view to aborting the fetus at a later date, in order either to be able to provide transplantable tissue, for the treatment of a close relative, for example, or even to sell it.13 Both these scenarios, instrumentalisation of the termination or even instrumentalisation of the pregnancy itself, cannot be reconciled with the ethical convictions of many citizens of Western industrialised nations, and it is in these nations that the conditions necessary for carrying out fetal transplants—medical know-how, the requisite infrastructure and the legality of pregnancy termination—are generally to be found. Neither pregnancy nor its termination should be the means enabling the end in the form of a transplant. This is a view held by a broad consensus of people.14

ii. Consent to Tissue Removal

Additional concerns are occasionally raised with regard to the donation of tissue. There is a very wide consensus that a person must give his or her informed consent before parts of his or her body can be removed for the purpose of conducting a transplant. This then raises the question as to who should consent to the removal of tissue from the brain of an aborted fetus. The fetus is obviously not able to make such a decision, which suggests that the consent of a person close to the fetus is necessary. The person closest to the fetus is the pregnant woman. Some however believe that her decision to terminate her pregnancy denies her any protective authority over the fetus.
The view is that, by deciding to terminate the pregnancy, she has subordinated the interests of the fetus to her own. It is therefore argued that no-one has a valid right to consent to the removal of tissue from an aborted fetus, and that such removal thus necessarily runs counter to the ethics of research conducted on human beings.15

12 See Federal Council comments on the TPG—Botschaft TPG, BBl 2002, 161; Bauer (1994: 5, 106 ff). A Canadian survey of 266 women aged between 18 and 40 asked its interviewees whether they would be more likely to terminate a pregnancy if they had the possibility of donating the fetal tissue. 12% answered Yes, 66.9% No and 21.1% were undecided; see Martin et al (1995: 545).
13 Bauer (1994: 5, 106 f).
14 De Wert et al (2002: 79, 81); Bauer (1994: 5, 106 ff).
15 Bauer (1994: 110 ff).

iii. Tissue Removal

The possibility of pregnancy termination being instrumentalised, in other words the danger that a termination be conducted solely for the purpose of making tissue available for an imminent transplant, has already been raised. If, however, pregnancy termination takes place independently of any considerations about the possible use of the fetus as a source of cells and tissue—ie if the fetus is not aborted specifically for the purpose of obtaining such cells and tissue—the question of the ethical justification for removing cells and tissue from the fetus assumes other dimensions. This question was raised, for example, during a debate in the US in 1989 on the permissibility of transplanting fetal tissue. There was, however, no disagreement on the question at that time.
The utilisation of an aborted fetus in transplant research was regarded by all sides as legitimate, provided the abortion had not been conducted specifically for the purpose of making the tissue available.16 This consensus was reached even though the various participants in the debate had widely differing views on the status of the fetus, with some regarding it merely as an accumulation of cells, or as an inanimate object, while others saw the fetus as constituting a potential human life or indeed even a life equivalent to that of any other human.17 The fact that the legitimacy of excising material from the aborted fetus was not questioned can be explained only if it was generally assumed that the aborted fetus was already dead at the time the material was excised. Such an assumption must have been generally held at the time, since the proponents of the notion that the aborted fetus constituted a potential or complete human life would scarcely otherwise have regarded the removal of tissue from the brain of a fetus which was still living as legitimate.18 Discussions on this same subject today do give rise to controversy, however, because science has in the meantime considered the question of whether an aborted fetus is in fact dead at the moment when tissue is removed from it. If, as is usual in the case of transplant medicine, brain death is taken as determining the moment of death, then there would indeed appear to be an urgent need for a critical scientific assessment of the procedures involved in the transplant of fetal brain tissue, because the only such tissue which can be transplanted is that which is still ‘fresh’ at the time of removal and transplant. The neurophysiologist and neurosurgeon Detlef Linke writes on this subject as follows:

Above all, it is obvious that no thought has been given to the fact that, if the brain tissue removed is still alive, no complete brain death can have occurred.
This thus constitutes a relaxation of the unassailable criteria of brain death which are otherwise applied.19

16 Childress (1996: 203, 214 f).
17 For a detailed discussion of the moral status of the fetus, see Bauer (1994: 99 ff); Sanders et al (1993: 400, 402 f).
18 Childress (1996: 203, 214 f).
19 Linke (1993: 86) (Freely translated from the original German).

It should be noted that other scientists have indeed devoted some thought to this question. Their opinion, however, is that the criterion of brain death, which is widely recognised as the means for determining the point of death for born humans, cannot be applied to the unborn human, because many aspects of fetal brain development are not yet adequately understood. They thus recommend that the absence of spontaneous breathing and of a pulse be taken as indicating the death of a fetus.20 This shows that, although there is no agreement as to how the death of a fetus should be ascertained, there is agreement, at least among those who do not regard the fetus as a mere accumulation of cells, that the fetus should no longer be alive at the time when tissue is removed.21 If brain death is taken as the criterion determining the time of death, it then follows that no ‘fresh’ or living brain tissue can be taken from the fetus, and that therefore the removal of living brain tissue from the fetus must be refrained from, if such removal can be permitted only if the fetus is dead. Conversely, if heart death and the absence of spontaneous breathing are the criteria chosen to determine the time of death, then removal of tissue from a still-functioning brain is both possible and permissible. Advocates of this latter view must however be prepared to face the question: ‘Is it conceivable that the brain cells of an aborted embryo or fetus can really be described as living cells in a dead organism?’22

B.
The Influence of Ethical Debate on Transplant Legislation in Switzerland

As mentioned in the introduction, Switzerland has so far seen only one attempt to transplant fetal brain tissue, the procedure undertaken at the Inselspital in Bern in 1995. At that time, authority to set the applicable norms rested with the national Ethics Commission. This has since changed, and Swiss Federal legislation has defined the applicable guidelines in the TPG mentioned earlier. I should like briefly to examine the genesis of this law, since it illustrates the extremely participative legislative process operating in Switzerland, which explains why legislators are at pains to pay heed to the diverging interests of the various parties involved and to find compromise solutions. As part of a referendum on the Swiss Federal Constitution held on 7 February 1999, Swiss voters mandated the Confederation to develop a legal framework for transplant medicine (Art 119a, para 1 of the Swiss Federal Constitution). The Federal Council then took up this task and prepared a preliminary draft version of the future TPG. In line with standard Swiss legislative practice, this initial draft was then made available for consultation, which meant that state and social interest groups—such as the cantons, municipalities and political parties—as well as economic and professional bodies were invited to opine on the text. Based on the results of this consultative process, the Federal Council then prepared a final draft, which was then submitted to the Swiss Parliament on 12 September 2001, along with a statement from the Federal Council.

20 National Ethics Commission on Human Medicine, Research on Human Embryos and Fetus, Opinion no 11/2006, Bern 2006, 72; Hüsing et al (2001: 173).
21 De Wert et al (2002: 79, 81, 84) and Bauer (1994: 29 f).
22 Hüsing et al (2001: 173) (Freely translated from the original German).
The National Council (first chamber of parliament) and Council of States (second chamber) then accepted the TPG on 8 October 2004, after making a number of changes to the draft text submitted by the Federal Council. The TPG was then published in the official Federal journal. Once the text of a law has been published in the journal, the countdown for a possible referendum begins. This countdown lasts one hundred days. In order for a popular referendum to be held on a proposed new Federal law, at least 50,000 signatures must be collected from voters and lodged with the Federal chancellery. In the case of the TPG, fewer than 50,000 signatures were collected. The ‘Nein zum TPG’ (no to the TPG) committee in fact collected only about 20,000 signatures. Since no referendum was held, the Federal Council was then able to set out the necessary enabling ordinances and to determine when these, and the TPG itself, would come into force. The day chosen by the Federal Council was 1 July 2007. Swiss Federal law governing the transplant of fetal tissue can be found in Section 9 of Chapter 2 of the TPG under the heading ‘Treatment of human embryonic or fetal tissues or cells’. Under the law, the removal of tissue from aborted fetuses, including the removal of ‘fresh’ brain tissue, is permissible in principle. The next section of this paper examines how Swiss legislators have addressed the ethical issues surrounding such tissue removal, as well as whether and how they have protected the interests of those directly concerned and of the general public.

i. Consideration of Individual Interests in Transplant Legislation

a. The Interests of the Pregnant Woman

The interests of the pregnant woman in cases where tissue is removed from a fetus aborted from her can—as shown—be affected in many different ways.
Risks can arise to the interest the pregnant woman has in her bodily integrity during the conduct of the termination procedure, specifically if the timing and method of termination are chosen by those conducting it not with regard to the woman’s bodily integrity, but rather with a view to maximising the chances of securing the tissue needed for a subsequent transplant. In addition, the woman’s autonomy can be jeopardised in two possible ways. First, there is the danger that those interested in obtaining tissue might coerce her into agreeing to a termination. Second, there is the risk that she be denied the right to decide what happens to the fetus once the termination has been performed. In the TPG, Swiss legislation attempts to protect both the bodily integrity and the autonomy of the pregnant woman. Article 37 of the law, for example, provides that the timing and method of pregnancy termination must be determined independently of any possible future use of the aborted fetus in transplant medicine, as well as making infringements of this requirement punishable by law (Article 69, paragraph 1, section j of the TPG). Article 41 of the TPG further states that there must be no dependent relationship between the persons involved in conducting the termination and those involved with the transplant. To the greatest degree possible, the former should have no interest in altering the timing or method of the termination to the detriment of the pregnant woman’s interests.23 Article 41 of the TPG also protects the woman’s freedom to decide on whether the termination procedure should be carried out at all. The danger that she might be coerced into agreeing to termination is less acute if those involved in the conduct of the termination are independent of those conducting the transplant. The law also attempts to protect the pregnant woman’s freedom to decide on the donation of the fetus for transplant purposes.
Tissue may be removed only if the woman, having previously been comprehensively informed on all matters regarding tissue removal, has given her consent to such removal (Article 39, paragraph 2, TPG). The Federal Ordinance on the Transplantation of Human Organs, Tissues and Cells (Transplantationsverordnung, hereinafter TPV) states that the pregnant woman must, in particular, be fully informed, in terms she can understand, about the purpose for which the tissue is intended to be used and about the diagnostic tests which will be performed on her in order to protect the intended recipient of the transplanted tissue (Article 35, paragraph 1, TPV). The legislation thus puts paid to the objections of those who believe that the woman’s decision to terminate pregnancy disqualifies her from making any decision about the future of the aborted fetus. The question of whether the woman’s freedom to decide is sufficiently protected nevertheless remains, since the current Swiss legislation, unlike the German law on human fetal tissue, for example, allows the woman no right to alter her decision once she has made it.24 Such a right would appear to be necessary, however, since the termination period is one in which the woman is in an exceptional state of mind, so that her capacity for considered self-determination or autonomy could be temporarily impaired. This omission is not made good by Article 35, paragraph 2, which requires that, after having been fully informed about the intended procedure, the pregnant woman should be allowed a reasonable amount of time to decide whether or not to consent to it.
A right to alter her decision, once made and communicated, can conversely be inferred from Articles 27 and 28 of the Swiss Federal Civil Code, if we assume that the fetus continues to be imbued with its mother’s personality even after it has been removed from her body, as is contended in the literature in the case of in vitro embryos25 and of body parts or bodily substances removed from the body.26 Article 28 of the Swiss Civil Code states that any surgical intervention requires patient consent. The patient is, in principle, free to revoke such consent, and indeed Article 27 states that such consent cannot be construed as enduringly binding. Restrictions on the pregnant woman’s right to revoke her consent would appear to be justified only in cases where the interest of the patient intended to receive the transplanted tissue is greater and revocation would be to that patient’s detriment. Such circumstances can certainly be assumed to apply from the moment when transplantation of the fetal tissue has commenced. Prior to the commencement of transplant, revocation should regularly be justifiable, though the passage of an unduly long period of time between consent first being given and then withdrawn can result in the pregnant woman being required to make good any efforts which her subsequent revocation of her consent had deprived of their utility (Art 404, para 2, Swiss Code of Obligations).27

23 Note in this connection that one of the reasons why the German Central Ethics Commission informed Germany’s Federal Chamber of Physicians in 1998 that it viewed the transplanting of fetal tissue as not being ethically justifiable was that the Commission assumed that some effect on pregnancy terminations was probable if several terminations needed to be co-ordinated to occur at the same time and place; see Zentrale Ethikkommission bei der Bundesärztekammer (1998: 1869).
24 § 4a Abs. 2 German Law on Human Tissue.

b.
Interests of the Progenitor

The interests of the progenitor are barely mentioned in the discussion on fetal tissue transplants. Given the close physical bond between the pregnant woman and the fetus, the notion that the decision to terminate a pregnancy lies with the woman is justifiable. However, once this physical bond is broken, be it as a result of birth or abortion, it is not apparent why the progenitor should have a lesser interest in the fate of the born child or the aborted fetus than the mother. Under the TPG it is however sufficient if the pregnant woman agrees to subsequent tissue removal from the fetus (Article 39, paragraph 1 of the TPG). It is difficult to reconcile this position with the personal integrity rights of the progenitor.28

25 Dubler-Baretta (1989: 27 ff).
26 For one example in lieu of many, see Forkel (1974: 593, 595 ff).
27 Bucher (1999: n 451).
28 Some ethical guidelines require that the biological father must also consent to the removal of tissue from the aborted fetus; see de Wert et al (2002: 79, 81 f); Sanders et al (1993: 400, 406).

ii. Consideration of the Common Good in the Law on Transplants

As the ethical debate already described has shown, the issue of tissue removal from a fetus impinges particularly on the common public interest in the fetus being protected during pregnancy and in the fetus being treated with dignity after a possible pregnancy termination. The question which arises here is whether these interests, from a legal standpoint, constitute the interests of the fetus or the interests of the general public. The former would apply if the state were to recognise the fetus’s ability to have rights or interests of its own which it is the state’s duty to protect. Article 31, paragraph 2 of the Swiss Civil Code provides an initial point d’appui in this matter. Under this Article, a fetus is in principle capable of having rights
of its own, provided, however, that it is born living. The consensus view appears to be that it is live birth which is the condition precedent for the fetus’s acquiring rights of its own, and that it thus acquires such rights only from the moment of birth. A live birth can be assumed to have occurred only if the fetus, on leaving the mother’s body, is sufficiently evolved for its continuing development outside the mother’s body to be possible. This condition is unlikely to be met by a fetus which is at the development stage where it is suitable for tissue transplant, even if such a fetus were still alive after abortion in the medical and biological sense. Accordingly, under the interpretation of the law currently prevailing, such a fetus would not enjoy civil law rights either before or after being aborted. This view of the legal status of fetuses not yet born or not born alive has engendered considerable controversy. Some civil law authorities hold that live birth as defined in Article 31, paragraph 2 of the Swiss Civil Code is a condition subsequent, so that the fetus in the womb can have rights (and even be subject to obligations), but that these are then forfeited if the fetus is not born living. Some teaching on constitutional law goes even further, holding that not only unborn fetuses, but also fetuses which are stillborn have fundamental human rights, in particular the right to human dignity (Article 7 of the Swiss Federal Constitution). Although these two standpoints do not take an entirely congruent view of the significance of live birth, they do nevertheless both appear to be founded on a common ideological belief, namely that both tend to view the fetus as constituting a full human life.
Even if the view that the fetus is a human being capable of having rights of its own and that it has the right to human dignity both prior to birth and after stillbirth can be justified, it is contradictory to current Swiss law. In the case of the fetus in the womb, this contradiction becomes obvious when one considers that the right to human dignity also provides anyone holding that right with an absolute right to life. By introducing a new abortion law based on the duration of a pregnancy, Swiss legislation has clearly denied any such right to the fetus in the womb. In the case of the aborted fetus, the contradiction emerges from the TPG itself, as will be demonstrated. As has been shown above, it is contrary to current Swiss law to regard the fetus as a human being endowed with legal rights, and this applies both during the period the fetus is in the mother’s womb and after any possible abortion. However, even if the fetus is not endowed with legal rights, it cannot be the case that it is not deemed worthy of protection, since there is a common public interest in the fetus being protected during pregnancy and being treated respectfully after any abortion which may occur.

a. Interest in the Protection of the Fetus During Pregnancy

If it is possible to make fetuses available for tissue removal for transplant medicine following pregnancy termination, this raises the danger that the number of terminations might increase, which would be contrary to the common public interest in the protection of unborn life or the protection of the fetus. On the one hand it is conceivable that those interested in the fetal tissue could exert pressure on the pregnant woman to terminate. On the other hand, it is possible that pregnant women who are not yet sure whether to terminate a pregnancy might be more likely to opt for termination if there were a possibility that the fetus could be made available for the treatment of a sick person.
Swiss legislation has taken it upon itself to defend the common public interest in the protection of unborn life. The law aims to obviate the danger that a pregnant woman might be coerced into consenting to a termination by requiring, in Article 41 of the TPG, that those participating in the termination must be independent of those involved in any subsequent tissue transplant. Furthermore, the TPG provides for a number of measures to ensure that the possibility of making the fetus available for the treatment of a sick person after the termination does not have any influence on the pregnant woman’s decision to terminate pregnancy. The most important of these is undoubtedly that contained in Article 39, paragraph 2, which states: ‘A request to use embryonic or fetal human tissue or cells for transplant purposes may be addressed to a woman only after her decision to terminate a pregnancy has been taken.’29 Directed donations, ie those in favour of a specified person, are also not possible under the law.30 If, when considering the possibility of a termination, the woman does not know that by deciding to terminate her pregnancy she will be given the possibility of making the aborted fetus available for the treatment of a sick person, any subsequent transplant cannot in fact have any influence on her decision to terminate. That at any rate applies as long as the science of fetal tissue transplants remains at the experimental stage. Were fetal tissue removal for transplants to develop into a standard process and were demand for fetal tissue to rise to an extent that the woman could be reasonably sure of being asked to donate tissue once she had decided to terminate her pregnancy, then this assessment would obviously have to be revised.
Swiss law does however state that standard treatments with fetal tissue may be conducted only in cases where there is no possibility of providing the patient with any alternative therapy of comparable utility (Article 38, paragraph 3, section b, of the TPG). It remains to be seen how this will develop in practice. What is guaranteed at this stage is that pregnant women electing termination make their decision independently of any possible future use to which the fetus might be put.

b. Interest in the Fetus Being Treated with Dignity after Abortion

At least as things now stand, the TPG does ensure that pregnancies are not terminated in order to secure tissue suitable for transplant purposes. The fetus is thus afforded adequate protection prior to pregnancy termination. The question of the protection afforded to it after termination remains. Swiss law has recognised that an aborted fetus may still be alive at the moment when tissue is removed from it. This is apparent from the text of Article 37, paragraph 2, section b of the TPG, which prohibits aborted embryos or fetuses from being kept alive artificially as whole entities in order to remove cells or tissue from them for transplant purposes. Had the legislator taken the view that any aborted fetus is always dead, this prohibition would have been unnecessary. Nevertheless, nowhere does the TPG direct those who are conducting the tissue removal to ascertain that the fetus is dead before removing tissue. This means that the Swiss legislator has allowed for the possibility that the aborted fetus might still be alive at the time tissue is removed.

29 Unlike the Swiss TPG, the German Law on Human Tissue states that the woman’s consent may be requested only after the death of the fetus has been ascertained; see § 4a Abs 1.
30 Art 37 Abs 2 lit b i.V.m. Art 69 Abs 1 lit l TPG.
Swiss legislation differs from that in Germany on this point, where the law on human tissue states that tissue may be removed from dead fetuses only after the fetus has been ascertained to be dead under the current state of knowledge of medical science.31 This prompts the question of whether other legislation regarding the removal of tissue from aborted embryos and fetuses exists, and whether such legislation may potentially prevail over the TPG if the latter is at odds with it. As has been mentioned, constitutional law recognises the fetus as having the right to human dignity (Article 7 of the Swiss Federal Constitution) not only prior to birth but also in the event of stillbirth. An important aspect of the right to human dignity is that anyone holding that right may not be used as a means to an end determined by someone else, ie cannot be reduced to a mere object. This aspect of the right to human dignity appears to be violated in cases where tissue is removed from a fetus which is still alive, since the fetus’s status is effectively being reduced to that of a mere supplier of raw material for transplant medicine. If one further considers that Article 119a, paragraph 1 of the Swiss Federal Constitution bindingly mandates legislators to protect human dignity in transplant medicine, one can only conclude that the legislators have not fulfilled that mandate. Of course, this would apply only if an aborted fetus which is still living were regarded as having human dignity, which is not necessarily the case. It is also true that no obligation to extend the duty to protect basic human rights to stillborn human beings can be inferred from Switzerland’s current commitments under international law. It is thus not clear whether there is a higher law, above the TPG, which prohibits the removal of tissue from aborted human fetuses which are still alive.
Equally, it is also clear that the law is not the only criterion defining what constitutes correct conduct by those engaged in medical research. Indeed, the medical research community tends to regulate itself through a set of international ethical guidelines. Most of these, including those of the Network of European CNS Transplantation and Restoration (NECTAR), require that the fetus must be ascertained to be dead before tissue is removed from it, with irreversible collapse of heart and lung functions being the criterion generally applied.

31 See § 4a Abs. 1 Ziff. 1 of the German Law on Human Tissue.

IV. Ethical and Legal Considerations at the Tissue Transplant Stage

This next section examines the ethical considerations surrounding the implantation of tissue removed from the fetus into the brain of the recipient patient and explains the position which Swiss legislation has adopted in this area.

A. Ethical Concerns Surrounding Brain Tissue Transplants

i. Violation of the Principle of Doing No Harm

One problem in the area of operations involving implants into the human central nervous system is that their effect has not been fully established. This state of uncertainty leaves operations of this kind open to criticism. One of the criticisms voiced is that interventions of this kind could do the patient more harm than good. Some critics believe that the physical integrity of the patient is at risk, as the extraneous transplanted material could under certain circumstances provoke immunological defence reactions which could lead to infection or the formation of tumours.32 Reference is made much more often, however, to the danger that the patient could suffer psychological damage.
This concern is usually substantiated with the argument that the human brain is the seat of human personality, so that in operations on a patient’s brain a direct influence on the patient’s personality structure cannot be ruled out, and that this influence could extend to the patient’s loss of a sense of self.33 Greatly simplified, these potential personality modifications can be divided into simple and special personality changes. It is believed that the former can generally be induced by manipulation to the human central nervous system. The latter, by contrast, are thought to occur only in cases where extraneous material is transplanted into the patient’s brain. Here, some critics believe that the transplanting of extraneous brain material can result in personality traits of the donor—ie a fetus in cases where fetal human brain tissue is used—being transferred to the recipient. It is believed that this could induce qualified identity problems in the recipient, though these would not necessarily arise with the same degree of acuteness in every intervention in the human central nervous system. This applies particularly in the case of brain tissue transplants where those parts of the recipient’s brain which form the organic foundation of human memory and human psychological characteristics are removed and replaced with parts from the donor; here it is believed that the transfer of personality traits from the donor to the recipient cannot be ruled out.34

32 Northoff (2001: 19 f).
33 See Hüsing et al (2001: 178 ff).

Most specialists do not believe that a transfer of personality traits could have occurred as a result of the type of neuro-transplants which have been conducted during clinical trials carried out so far.35 These have not been transplants of an entire brain or of entire brain lobes, as have been carried out on apes, but merely of individual nerve cells.
These cells probably adapt to the attributes of the recipient brain.36 This is mere conjecture, however, because there are as yet no long-term studies of the effects exerted by transplanted brain tissue.

ii. Contravention of the Principle of Autonomy

In addition there is also a danger that a clinical transplant trial might violate the principle of autonomy in a number of possible ways. First, it is debatable whether the doctors carrying out the operation comprehensively inform the patient about the risks associated with the imminent surgery. The medical specialists providing clinical care to the patient can generally be expected also to be involved in carrying out the clinical brain tissue transplant trial, so that in explaining matters to the patient they cannot act with complete impartiality. Indeed, they have an interest in securing the patient’s consent for the planned transplant. Second, even if a patient were comprehensively briefed ahead of a transplant trial, it is questionable whether that patient would be in a position to process all the information received and whether the patient could give consent to the operation on the basis of the information received. In the late stages of their illness, patients suffering from Parkinson’s disease often suffer deep depression and dementia, and these conditions can impair or annihilate their ability to understand the explanations given them, to form an opinion and to act according to that opinion.

B. Consideration of the Ethical Issues in the Swiss Law on Transplants (TPG)

Neither the TPG itself nor the Federal Council’s report on this matter to the Swiss Parliament takes account of the possibility that, when human fetal tissue is transplanted in the context of a clinical trial, the autonomy of the tissue recipient might be at risk. The Federal Council may however at least partially redress this omission in the TPG at the ordinance level.
The transplant ordinance (TPV) does in fact state that article 6, inter alia, of the ordinance on clinical trials with medication (Vklin) is applicable to clinical transplant trials (article 26 paragraph 1 of the TPV). This article in turn refers to articles 54 to 56 of the Law on Medication (Heilmittelgesetz or HMG).

34 Hüsing et al (2001: 184 ff).
35 McRae et al (2003: 282).
36 See Central Ethics Commission of the German Federal Chamber of Physicians—Zentrale Ethikkommission bei der Bundesärztekammer (1998: 1869, 1870).

Article 54, paragraph 1, section a of the HMG provides that the person being subjected to the trial must give consent to the trial after having been comprehensively informed of its characteristics. However, neither the HMG nor the Vklin requires that the person providing the explanation to the patient should have no interest in the clinical trial being carried out. Given this, it is questionable whether the patient undergoing the clinical trial is in fact fully apprised of all the risks involved. Beyond this, article 55 of the HMG states that clinical trials may be conducted on persons with impaired judgment, provided that: first, no similar results could be obtained from trials on persons whose judgment was not so impaired (paragraph 1, section a); second, that the legal representatives of the person of impaired judgment have been informed of the risks involved and have given their consent to the trial (paragraph 1, section b); and third, that there are no detectable signs that the person of impaired judgment would object to the trial being carried out (paragraph 1, section d). Given this, it seems unlikely that the implantation of human fetal brain tissue into the brains of people of impaired judgement would be frequent, since this is a therapy principally used for patients suffering from Parkinson’s disease.
In principle, similar therapeutic results can be achieved for a patient suffering from Parkinson’s disease whose faculties of judgement have been maintained as for a similarly afflicted patient whose powers of judgement have been impaired or even nullified by major depression or dementia. The situation with Alzheimer’s disease is different, however. The transplanting of human fetal brain tissue into the brains of Alzheimer’s patients of impaired judgement is compatible with current legislation. The principle of autonomy is thus protected only to an inadequate extent in both the TPG itself and the ordinance legislation associated with it.

With the TPG, Swiss legislation has nevertheless decreed a number of measures which protect the physical and psychological integrity of the tissue recipient—independently of the recipient’s consent. Article 38 of the TPG, for example, requires that transplants of fetal tissue must be authorised before they can be carried out. Article 2 provides that the Federal department responsible will issue authorisation for a clinical trial only if the clinical trial patient can be expected to derive some therapeutic benefit from it (section a). Direct evidence of therapeutic benefit cannot of course be presented in such a case, because the whole purpose of the clinical trial is to ascertain whether such a benefit results. The Federal Council’s guidance on this matter does however state that those seeking permission to conduct the clinical trial must produce pre-clinical data which provides some foundation for the assumption of therapeutic benefit, and thus supports the expectation that the patient’s suffering will be alleviated to a significant extent. Clinical trials serving only the interests of science are thus frowned upon.37

37 For more on this entire topic, see Federal Council comments on TPG—Botschaft TPG, BBl 2002, 162.

Furthermore, approval
may be granted only if the necessary specialist and operational requirements for carrying out a clinical trial (section b) and an appropriate quality assurance system (section c) are in place.

The TPG contains no further norms for the protection of the tissue recipient. It should be noted in this context that the NECTAR guidelines provide that only cell suspensions or small tissue fragments may be transplanted, in order to prevent an unintended transfer of personality traits.38 The guidelines contain this provision even though the authors themselves regard it as very improbable that the transplanting of larger brain parts in humans will be technically possible in the foreseeable future, or that, even if such transplants could be carried out in the future, they would in fact result in a transfer of personality characteristics.

C. Summary

In summary, it is certainly good that Swiss law subjects any transplanting of fetal tissue to prior authorisation and that the prerequisites for obtaining such authorisation provide some protection to the physical and psychological integrity of the tissue recipient. It is however problematic that the autonomy of the tissue recipient is not protected in the TPG itself, but rather in its associated ordinance, and even there only to a limited extent, since the ordinance does not require that the person informing the tissue recipient of the risks attending the operation be someone who has no interest of their own in the fetal transplant being carried out. There is thus no guarantee that the tissue recipient will be comprehensively informed about the risks associated with the surgery envisaged and will thus be able to give informed consent. Swiss law also does not entirely rule out the possibility that clinical trials might be carried out on persons of impaired judgment.

V.
Closing Remarks

Hopes have been set on the prospect that the use of fetal cells in transplant medicine could be effective in the treatment of certain major illnesses. The use of fetal tissue to therapeutic ends does however raise a number of ethical questions, and the transplanting of fetal brain tissue affects many disparate interests. With the TPG, Swiss law has attempted to protect a number of these interests, some of which are contradictory. At the tissue removal stage, for example, the protection of the physical integrity of the woman and the common public interest in avoiding increasing numbers of pregnancy terminations are accorded priority over the interest in obtaining fetal tissue suitable for transplants as easily as possible. At the tissue transplant stage, the interests of tissue recipients in the protection of their bodily and psychological integrity are weighted more heavily than medical research’s interest in gaining scientific insights. Freedom of research has thus been constrained in favour of the individual interests just mentioned.

38 The guidelines can be downloaded from accessed August 2008. See also de Wert et al (2002: 79, 84).

Swiss law does however allow research many freedoms. The TPG does not, for example, require that the aborted fetus be ascertained to be dead before tissue is removed for transplant purposes. Swiss law also does not appear to intend to constrain research freedom with requirements which afford adequate protection to the autonomy of the tissue donor and tissue recipient. The following examples illustrate this clearly. First, the father of the aborted fetus has no say whatsoever in matters concerning the removal of fetal tissue. Second, there is no requirement that the person informing a tissue recipient of sound mind about the risks attending a transplant be independent of the transplant itself.
Third, the transplant of fetal brain tissue during clinical trials to persons of impaired judgment has not definitely been ruled out. All in all, a somewhat mixed picture.

References and Other Materials

Augustin, A (2001) ‘Rechtliche Regelungen für Stammzellentherapien’ 118 (I) ZSR 163–85.
Bauer, A (1994) Legal and Ethical Aspects of Fetal Tissue Transplantation (Dallas, Texas, RG Landes Company).
Botschaft zum Bundesgesetz über die Transplantation von Organen, Geweben und Zellen (Transplantationsgesetz) vom 12.09.2001. BBl 2002: 29–246.
Bucher, A (1999) Natürliche Personen und Persönlichkeitsschutz, 3rd edn (Basel/Frankfurt am Main, Helbling Lichtenhahn Verlag).
Childress, JF (1996) ‘Konsens in Ethik und Politik am Beispiel der Forschung an fötalem Gewebe’ in K Bayertz (ed), Moralischer Konsens. Technische Eingriffe in die menschliche Fortpflanzung als Modellfall (Frankfurt am Main, Suhrkamp Verlag).
Deschenaux, H and Steinauer, PH (2001) Personnes physiques et tutelles, 4th edn (Bern, Stämpfli Verlag).
Dubler-Baretta, R (1989) In-vitro-Fertilisation und Embryonentransfer in privatrechtlicher Sicht (Basel, Schulthess Verlag).
Forkel, H (1975) ‘Verfügungen über Teile des menschlichen Körpers: Ein Beitrag zur zivilrechtlichen Erfassung von Transplantationen’ Deutsche Juristenzeitung 29: 593–99.
Gesetzesentwurf der Bundesregierung zum deutschen Gewebegesetz, Drucksache 16/3146, § 4a.
Hüsing, B, Engels, E, Gaisser, S and Zimmer, R (2001) Zelluläre Xenotransplantation (Bern, TA).
Linke, D (1993) Hirnverpflanzung. Die erste Unsterblichkeit auf Erden (Reinbek bei Hamburg, Rowohlt Verlag).
Martin, DK, Maclean, H, Lowy, FH, Williams, JI and Dunn, EV (1995) ‘Fetal Tissue Transplantation and Abortion Decisions: A Survey of Urban Women’ Canadian Medical Association Journal 153: 545–52.
McRae, C, Cherin, E, Diem, G, Vo, AH, Ellgring, HJ, Russell, D, Fahn, S and Freed, C (2003) ‘Does Personality Change as a Result of Fetal Tissue Transplantation in the Brain?’ 250 Journal of Neurology: 282–6.
Northoff, G (2001) Personale Identität und operative Eingriffe in das Gehirn. Neurophilosophische, empirische und ethische Untersuchungen (Paderborn, Mentis-Verlag).
Sanders, LM, Giudice, L and Raffin, TA (1993) ‘Ethics of Fetal Tissue Transplantation’ 159 The Western Journal of Medicine 400–07.
Schneider, I (1995) Föten: Der neue medizinische Rohstoff (Frankfurt am Main/New York, Campus Verlag).
Tyler, MS (1998) ‘Fetal Tissue Transplantation’ in T Irons-Georges (ed) Magill’s Medical Guide, vol 1, rev edn (California/New Jersey, Salem Press).
de Wert, G, Berghmans, RLP, Boer, GJ, Andersen, S, Brambati, B, Carvalho, AS, Dierickx, K, Elliston, S, Nunez, P, Osswald, W and Vicari, M (2002) ‘Ethical Guidance on Human Embryonic and Fetal Tissue Transplantation: A European Overview’ Medicine, Health Care and Philosophy 5: 79–90.
Zentrale Ethikkommission bei der Bundesärztekammer (1998) ‘Übertragung von Nervenzellen in das Gehirn von Menschen’ Deutsches Ärzteblatt 95 (30): 1869–71.

12
Tools for Technology Regulation: Seeking Analytical Approaches Beyond Lessig and Hood

CHARLES D RAAB AND PAUL DE HERT

I. Introduction: Regulating Technology

In parallel to the burgeoning interest in regulation, including regulation in the context of ‘globalisation’ (eg, Held and McGrew 2002; Koenig-Archibugi and Zürn 2005), recent years have seen many attempts, by legal practitioners, regulators and scholars, to comprehend regulation of technologies and the various ways in which legal and societal values can be protected against risks created by emerging technologies. Whether in a descriptive or a critical vein, there has been an emphasis on identifying the types of regulatory instrument or strategy that are found in regulatory regimes.
In the field of technology regulation, it has become commonplace to use colourful metaphors to refer to these regulatory instruments and their complexity, using terms such as ‘tools’, ‘toolkit’ or ‘toolbox’, ‘mosaic’ and ‘mix’. This exploratory paper aims to cast light on how the regulation of technologies can be understood, by considering the way that regulatory instruments or tools are conceptualised and discussed in some relevant literature. In particular, we dwell, first, upon the work of Lessig (1998, 1999a, 1999b) and others, and later, of Hood (1983, 2007; Hood and Margetts 2007) and Murray (2007), seeking to compare their approaches and to indicate their usefulness for subsequent investigation of fields within technology. Lessig’s (1999b) famous treatment of regulation, based on the simple but convincing insight that regulation is not only done by law, but also by market, code and social norms, is more concerned with large, albeit important, issues of choice, values and the nature of (American) democracy than with an analysis of the politics of regulation in another sense that needs to be incorporated into the study of technology regulation: what we might call a social and political science of ‘tooling’. Taking ‘tools’ more seriously helps in a constructive effort to apply more effectively the general and specific perspectives of those who analyse governance, and the governance of technology, in terms of policy instruments. We claim that our understanding of regulation is considerably enhanced by adopting a critical attitude towards the use of metaphors such as ‘toolkits’ and even ‘tools’, bringing policy actors and their relationships into play, and understanding the processes whereby rules and tools are brought to bear on any field of activity.
Most ‘tools’ approaches leave these issues out of account, thus losing sight of regulation as a social and political process and not just as a question of what tools do what jobs.1 After a brief introduction to Lessig’s approach to regulation, we refer to the case of privacy protection and, more briefly, the case of spamming. Both cases allow us to discuss the pros and cons of Lessig’s conceptualisation of policy instruments. We find that this conceptualisation lacks depth through its omission of an analysis of actors and processes of interplay between tools. At a later point, we introduce the main lines of Hood’s (1983; Hood and Margetts 2007) work on tools, which is of more general applicability outside the broad field of technology regulation. His The Tools of Government (1983), a classic within political science, contains a description of a range of regulatory tools and their combinations and substitutions, allowing for an improved understanding of how regulators can operate. We then point to recent work of Murray (2007) and to our own work2 in order to point up the importance of more dynamic and complex approaches that integrate perspectives on regulatory tools and the actors and institutions that are involved in their use. In particular, we think it is timely to draw upon the work of Hood, which, though of wider empirical reference, has resonances with Lessig’s and lends itself to understanding the manifold interactions and resources that are frequently slighted in the technology context. States are thus shown to go beyond the enactment of legislation, as they combine a sometimes bewildering range of instruments. Albeit on a lesser scale, other regulatory participants also wield several instruments, often in conjunction with other participants. Understanding actors’ interrelations and practices, we believe, is crucial.
Moreover, normative issues are important to take on board in research, in terms of the values which regulation aims to instil or reinforce.

1 We broach the question of analysing this process, but do not here develop in any depth a systematic framework for understanding policy actors and their relationship to the tools; those tasks remain for another occasion.
2 For the work of Raab, see the References below. De Hert is one of the co-authors of Wright et al (2008).

II. Introducing Lessig’s Approach to Regulation

Lessig (1998; 1999a; 1999b: 85–99) has made a major contribution to the discussion of regulatory instruments by showing that regulation makes use of four main regulatory tools or ‘modalities’: law, social norms, markets and ‘architecture’ (eg, in information systems: ‘code’). He argues that we must see instruments together, and not just as separately operating mechanisms. Each modality can operate directly on the individual being regulated, or indirectly on one or another modality, ultimately producing a regulatory effect. Modalities can both co-operate and conflict with each other.3 Lessig has opened the eyes of many in the legal profession by challenging the law’s classical self-understanding that narrows regulation to a question of making laws. Of course, he was not the first to see that law is just one of the existing regulatory instruments. Others have preceded him, sometimes even going as far as saying that regulation only depends to a minor degree on law and to a major degree on other societal ‘things’ like the market. Lessig (1998) distances himself from some of the more ‘hard-nosed’ tendencies of economic solutions by calling his approach the ‘New Chicago School’. In this approach, law is not set aside as a regulatory instrument, but other instruments are added to the armoury.
Lessig also shows that law does not only operate directly, but also indirectly through the other modalities, in order to ‘regulate to law’s own end’ (Lessig 1998: 672). The modalities act together on the ‘regulated entity—the entity feeling or suffering the constraints being described’ (Lessig 1998: 664). Lessig (1998: 685) indicates the interrelationships:

[I]n principle, each of the four constraints described has a direct and indirect regulatory effect on the others. Architecture might regulate individuals directly, but it also affects norms. Norms regulate directly, but changing norms will obviously affect markets. The market constrains directly but also indirectly affects the constraints of architecture. A complete account of how constraints change is an account of how these different constraints interact, but the complexity of this complete account easily overwhelms.

Two further short passages (Lessig 1998: 664, 665) give a clear idea of Lessig’s construct, and embody key analytical foci—substitution, direct and indirect effects—as well as hint at a possible profusion of tools. First:

Now, obviously, these four modalities do not regulate to the same degree—in some contexts, the most significant constraint may be law (an appeals court); in others, it is plainly not (a squash court). Nor do they regulate in the same way—law and norms, for example, typically regulate after the fact, while the market or architecture regulates more directly. The modalities differ both among themselves and within any particular modality. But however they differ, we can view them from this common perspective—from a single view from which we might account for (1) the different constraints that regulate an individual and (2) the substitutions among these constraints that might be possible.

3 The emphasis he places on particular instruments has varied: latterly, the emphasis on law, ‘code’ and the market has overshadowed the role of social norms.
This possibly reflects the lesser comprehensibility and conceptual coherence of norms as instruments, in contrast to the tangible ‘hardness’ of law, the cleverness of technical solutions, and the fashionableness of market and rational-choice approaches to social and human issues.

Second:

[T]he new school does not see these alternatives as displacing law. Rather, the new school views them as each subject to law—not perfectly, not completely, and not in any obvious way, but nonetheless, each itself an object of law’s regulation. Norms might constrain, but law can affect norms (think of advertising campaigns); architecture might constrain, but law can alter architecture (think of building codes); and the market might constrain, but law constitutes and can modify the market (taxes, subsidy). Thus, rather than diminishing the role of law, these alternatives suggest a wider range of regulatory means for any particular state regulation. Thus, in the view of the new school, law not only regulates behavior directly, but law also regulates behavior indirectly, by regulating these other modalities of regulation directly.

Some further points are worth noting: first, both the Old and the New Chicago School alike view constraints, or regulation, from a rational-choice standpoint that is embraced by economics, broadly defined in terms of conventional and ‘behavioral’ economics (Lessig 1998: 665). This will be referred to later, but it relates to one of several gaps in knowledge4 or methodological development that Lessig identifies. A first gap is alluded to in his passage about the entity’s ‘feeling or suffering’ a constraint. For Lessig, this is a difference between objective and subjective constraints: ‘A constraint is subjective when a subject, whether or not consciously, recognizes it as a constraint. It is objective when, whether or not subjectively recognized, it actually functions as a constraint.
Not all objective constraints are subjective; nor are all subjective constraints objective’ (Lessig 1998: 677). A rationally choosing state, it can be supposed, would need to understand how specific constraints operate in terms of this duality in order to make the ‘best’ choice in the circumstances, getting the most regulatory mileage out of the least resources. A related gap has to do with the analysis of meaning, which acts as a constraint beyond norms themselves. Another unfinished analytical business has to do with the constitutionalism of constraints, and with the political astuteness of low-profile indirect regulation. A further one has to do with the nature of substitutions and how they can best be understood. In particular, there is a normative element to such choices, as for example between efficiency and freedom (Lessig 1998: 686–7):

[t]he choice of modalities of regulation itself might present questions of value. One kind of regulation (through law, for example) might preserve a value that is otherwise not present when the same regulation is effected through another means (through norms, architecture, or the market). … A complete account of substitutions must account for the range of social values, including the value implicit in one mode of regulation over another. It must describe, that is, the values implicit in these different structures of regulation and make explicit the choice that these different structures embrace.5

4 Lessig (1998) uses the word ‘tools’ to refer to the analytical gaps that the New Chicago School must seek to fill. These should not be confused with regulatory ‘tools’.
5 We will come back to the question of the role of ethics in regulatory decision-making later on in our discussion of Hood’s approach to regulatory tools.

III. Privacy and Spam: Two Illustrations

One of the fields of application discussed by Lessig concerns the protection of information privacy.
It affords a clear illustration of the way in which the instruments employed in the regulation of personal information practices have been classified with a view to understanding their advantages and limitations. In his early writings Lessig engaged in the debate about threats to privacy by emerging technologies and, like others,6 offered the regulation of privacy as an example where law should frame new technological developments (‘tame code’) (1999a: 514–21): that is, where the state acts to impose changes on code in order to increase the ability of the individual to exercise privacy choices. This regulatory solution, involving the Platform for Privacy Preferences (P3P) in Lessig’s example, is predicated upon his conception of privacy as a property right, in which the giving (or withholding) of consent is the linchpin for protecting privacy (Lessig 1999b: ch 11). This example points up the need for collective action by the state in order to enable individuals to control their own privacy, and thus exemplifies one pattern of interaction among modalities. Lessig’s solution makes intelligent use of existing regulatory tools. Privacy protection in his approach is not only a question of law: the ‘good’ code needs to come to the rescue of law, but it cannot be done without law; therefore, more and better law are necessary.7 Lessig’s application of his model to privacy protection has attracted criticisms that raise important general questions about the nature and implications of the

6 In the early years of the spread of the Internet as a mass socio-technical phenomenon, there occurred a convergence of views, reflected in practice, on the inventory of tools in their major categories. A consensus emerged in the 1990s that legislation, and indeed any single regulatory device, was inadequate, especially as the difficulty of protecting privacy in ‘cyberspace’ became apparent.
Industry Canada’s (1994: 15) policy paper on the Canadian Information Highway outlined possible ways forward for privacy protection, especially on the Internet. These would include ‘legislation, the advancement of a national voluntary privacy standard, the promotion of privacy protective technologies such as encryption and smart cards, and consumer education.’
7 Other writers vary little from this. Bennett and Raab (2006: pt II) distinguish between transnational policy instruments, legal instruments and regulatory agencies, self-regulatory instruments, and technological instruments as four main types, although there are more specific tools and varieties at the meso and micro levels. Reidenberg (1997: 96) discusses a ‘complex mix’ of state, business, technical and citizen mechanisms for privacy regulation. In a seminal article, Reidenberg (1998) explores a technological solution (‘lex informatica’). Elsewhere (Reidenberg and Schwartz 1998), he concentrates on legal and technical instruments, but also delineates six other instruments, on a different plane, that can be wielded by regulatory officials to achieve policy goals; these can perhaps be considered as specific strategies that regulators and governments may, or should, use in fulfilling their roles, rather than as macro-types of tool.
They are: (1) persuasion that can be used to pressure industry to develop appropriate technical rules and mechanisms; (2) participation by data protection officials in the work of standards organizations that can promote mechanisms to assure the policy objectives of data protection; (3) funding through programs, such as ESPRIT, that may be used to develop technologies that assure data protection; (4) procurement by public bodies that has a substantial influence on the development of private markets and that may be used as a concerted tool to promote data protection goals; (5) regulating behavior by imposing liability that can be used as an indirect stimulus for the development of technical rules to assure data protection in network environments; (6) regulating standards that assures particular data protection rules are not circumvented. (Reidenberg and Schwartz 1998: 150–51)

modalities beyond their use as solutions to ICT problems. His solutions and proposals for regulations hardly seem to fit non-American contexts.8 Even within the American context, doubt remains. Rotenberg (2001) reminds us that the USA was once in the vanguard of broad-principled, legislative regulation of technologies that threaten to invade privacy, but that the approach has been reduced to one of giving ‘notice and choice’ to customers, threatening to become the platform for approaches in the contemporary online era. He therefore complains that, in the treatment of privacy in Lessig’s book (1999b: ch 11 and 159–63), ‘code becomes a means by which to transfer decisions from the public realm to the privatized realm.
… [I]t is a way to convert political rights into market commodities.’ Rotenberg sees law as of crucial importance as a tool for bringing public policy to bear, rather than leaving such protection to markets in which individual consumers activate technological solutions.9 Schwartz’s (2000) riposte to the ‘Lessig two-step’ private-property-plus-code package takes the argument further, casting doubt on technological solutions.10 Further, Schwartz (2000) brings the state back in by showing not only the importance of the rules embodied in ‘Fair Information Practices’ in information-privacy regimes, but by emphasising that the enactment and enforcement of Fair Information Practices by governmental actors—legislative and judicial—has the effect of (politically) shaping, or perhaps, as Lessig would say, ‘taming’, code. The politics of information is further implicated by Schwartz’s (2000: 786) argument, which points up, more emphatically and fundamentally than does Lessig, that political institutions and their policies count for much in devising the frameworks for privacy

8 Understanding regulation necessitates an appreciation of the legal and normative context of a given society, and of the power implications of the deployment of particular tools. Thus, although it is possibly an attractive solution in many jurisdictions where alternatives have been tried and found to be difficult, this involvement of code probably sits more comfortably in contexts such as that of the USA, where political and business resistance to other legislative solutions, and to non-consumerist, human rights-based conceptions of privacy, have limited the range of alternative regulatory solutions in recent years.
This contrasts with the approach typical of European countries and the European Union, in which more active roles are found for collective actors, such as regulatory agencies, to play—in principle, at least—key parts in the process by helping to implement legislation, thus not leaving the main protective initiatives to individual citizens or consumers, or to technological mechanisms.
9 If nothing else, upholding the value of law-based regulation, and not simply capitulating to solutions that, if designed in certain ways, could reduce the likelihood of protection, is a way of keeping regulatory options open through a multiplicity of instruments. Whilst Lessig himself professes to favour this approach—hence his discussion of the direct and indirect effects of law—the drift of his discussion of privacy is towards architecture as the best solution, to an under-appreciation of the role of law, and to a propertised conception of personal information.
10 Founding privacy protection upon a notion of personal control of information is weakened by difficulties in achieving a satisfactory online interface with the individual in order to enable effective control. Moreover, Schwartz (2000: 755) claims that the electronic butler that, in certain technical devices, acts as the individual’s agent for privacy protection will negotiate with sites regarding their privacy policies based on simplified instructions created by someone other than his putative master. As a result, code plus property may not only facilitate trading personal information on bad terms, but, more broadly, will shift power to those who decide how important shortcuts are to be taken. Property plus code may turn into a powerful means for generating an unsatisfactory level of privacy.

protection, even including the role of code.
Law, by enshrining Fair Information Practices and thus establishing some mandatory standards, may countervail and equilibrate the market, thus limiting the role of property-based negotiations. Beyond the privacy example, other deficiencies of Lessig’s approach are highlighted by Wall in the context of a discussion of the regulation of spam. Wall detects ambiguity in Lessig’s notion of ‘code’ or architecture,11 and argues that Lessig’s idea that ‘code is law’ is confusing because of the variety of functions that code performs. So, ‘[w]hile the code can do the work that satisfies law’s desire, surely it is of paramount importance that the law remains the authority’ (Wall 2004: 324).12 He also points out that the manifold facets of ‘code’, including a variety of facilitative as well as restrictive ones, are not explored by Lessig; moreover, and similarly, the potentiality of all four modalities is under-conceptualised:

[Lessig’s] focus upon their constraining qualities is restrictive and generally understates their value as instruments of behavioural governance. It also over-simplifies the relationship between them, which … Lessig tends to portray in functional rather than relational terms, reducing them at times to a simple flow chart … and therefore downplaying the complexities of any relationship that exists between them (Wall 2004: 324).

A similar argument for a more complex and dynamic approach to regulatory instruments was proposed by the SWAMI researchers investigating AmI (Wright et al 2008: 178). They group a much larger number of regulatory instruments under three headings: technological, socio-economic, and legal and regulatory. This convenient classification seems roughly congruent with other typologies.
But what is especially interesting is their remark that

[t]he multiplicity of threats and vulnerabilities associated with AmI will require a multiplicity of safeguards to respond to the risks and problems posed by the emerging technological systems and their applications. In some instances, a single safeguard might be sufficient to address a specified threat or vulnerability. More typically, however, a combination of safeguards will be necessary to address each threat and vulnerability. In still other instances, one safeguard might apply to numerous treats (sic) and vulnerabilities. … Just as the AmI world will be dynamic, constantly changing, the applicability of safeguards should also be regarded as subject to a dynamic, i.e., different and new safeguards may need to be introduced in order to cope with changes in the threats and vulnerabilities.

Their argument suggests that we should be able to observe changes in safeguards as they respond to changes in threats, at least if the system in question were self-equilibrating.13 Or, if the regulatory response is not automatic, as the SWAMI quotation seems to imply, the theoretical formulation might serve as a guide to what regulatory measures it might be useful to develop deliberately in order to achieve a desired and stable relationship in which disturbances are reduced, contained, or counteracted. Moreover, the SWAMI passage does not prejudge whether the relationship between safeguard and threat is one-to-one, one-to-many, or many-to-one. These are all empirical possibilities, and we can only know more about this by focusing research upon specific threats and specific safeguards.14

11 Wall 2004: 324: It varies from being one of the four modalities of constraint to being the meta-structure in which all constraints are located. In short, Lessig does not adequately acknowledge or accommodate the quite different ‘spaces’ that each modality represents. … However, a critical overview enables the ideas to be advanced. Particularly useful is the notion … of the ‘four modalities’ as spaces, or sub-architectures that empower as well as constrain within the overall architecture of cyberspace. Perceiving them as ‘spaces’ reveals the specific and different nature of the power relationships that each imposes to shape behaviour.

12 This criticism resembles Rotenberg’s (2001) plea, and draws upon Greenleaf’s (1998) analysis and insistence on ‘digital realism’ in comprehending the relationships among instruments, in which the term ‘architecture’ is preferred to ‘code’ in order to include hardware (as Lessig (1999a: 507) does), Internet protocols, biometrics and other elements. In particular, Greenleaf (1998) emphasises, as indeed Lessig (1998) does as well, the indirect operation of law in its shaping of architecture, markets and social norms. This is a counter to the anti-law bias of both ‘digital libertarians’ and Foucauldians, and proffers a more realistic appreciation of the complex role of law, and its indispensability, in regulating cyberspace.

13 In particular terms, the point alludes clearly to the concept of requisite variety in cybernetic theory (Ashby 1956: ch 11), in which variety in regulation is necessary in order to cope with the variety of disturbance coming from a system’s environment, in order to minimise the effect on the system’s ‘essential variables’. We can only cope here (and probably anywhere) with what Ashby regarded as the ‘picturesque’ summation of the Law, rather than with its mathematical exposition.

14 We will see later how Hood (1983) addresses issues of this kind.

15 There is no space here to consider the metaphoric or analogical aspects of these terms with specific regard to their mechanistic or organismic connotations, as has been done by Deutsch (1963) in his general treatment of analytical models. For criticism of the ‘toolbox’, ‘mix’ and ‘mosaic’ idioms in privacy regulation, see Bennett and Raab (2006: 207–9).

270 Charles D Raab and Paul De Hert

IV. Seeing Tools in a Wider Context of Processes

Arguments and theories that emphasise complexity in regulation may be a fruitful improvement upon theories such as Lessig’s that rely upon terms such as ‘tools’ and metaphors such as ‘toolbox’. We think it important to look critically at terms and metaphors of this kind, for they are often used without any second thoughts, although they are far from neutral.15 A tool is not just a way of carrying out some purpose or achieving some aim, but might create intended or unintended side-effects that bring about deeper change, perhaps with very far-reaching consequences that are difficult to model (Murray 2007). We have seen how Rotenberg (2001) and Schwartz (2000) cast critical light on the consequences of Lessig’s privacy tools. In another context, Brodeur (1997: 113) observes that, by advocating and describing, for instance, an increase in the police’s electronic surveillance powers as an instrumental measure providing new tools, one ignores the impact of these techniques on fundamental freedoms. Far from being just a small amendment, the new measures could represent a fundamental change in the exercise of rights and freedoms. Current rhetorical practices can also be criticised for masking practical issues including, for example, the possibility that incompatible interests and resources could result in a ‘mosaic’ becoming a mass of confusing and irreconcilable strategies (Bennett and Grant 1999: 7–8). Or, as Wall (2004: 324), criticising Lessig, observes in a more general context,

[i]t is very important not simply to view the ‘modalities’ as parts of a whole and assume that a functional relationship exists between each.
While they may appear to “‘work’ together as a functional entity”, displaying a broad common purpose, they may not necessarily have any other unity, being shaped as a complex latticework of influences rather than as a driver (of social action). The concept of ‘assemblage’ (Haggerty and Ericson 2000) is much more useful in explaining the relationship between them.

The foregoing supports the general argument that no instrument is self-sufficient, but requires tools wielded elsewhere in the entire regulatory regime in order to work. There are thus interdependencies, albeit of varying strength. There are important synergies among instruments (and among those who wield them) to be analysed and, in practice, cultivated and sustained in order to improve the regulability of information practices and processes. There may also be conflicts among them that scholarly analysis should try to explain and that policy-makers might wish to prevent or to resolve. Understanding these possibilities can be assisted by identifying issues, or questions, that should be considered with regard to any list of instruments that comprise the contents of a ‘toolbox’ in any branch of technology regulation, or indeed, with regard to a ‘matrix’ approach. Two questions can be posed:

1. What tools pertain to what technology practices, and according to what criteria can these instruments be compared and contrasted?
2. Are the instruments substitutable for each other, or are they complementary; and if complementary, how do they combine (and how might they combine better)?

The first question concerns relating specific tools to specific practices and finding criteria for comparing and contrasting them. The first part of this is a major research task, particularly if it goes beyond description into recommendation, and if it is disaggregated to refer to specific countries or jurisdictions.
Nevertheless, in the privacy field, for example, the analysis would take the form of identifying a range of typical personal-information practices (eg, data-collection, data-matching, data-sharing, or data-mining) and typical contexts (eg, retail marketing, policing, urban surveillance, transport policy, or health care). It would involve an observation of the tools that are typically found there, singly or together, to regulate these practices; or that could be found there if an effective regulatory ‘toolbox’ were to be developed.16 Second, we also need some way of appraising the main types of tool, and even more specific ones, especially as it is the latter that are more frequently the subject of debate and discussion (i.e., the merits of this law or that one, of this code of practice or that one, of this software or that, rather than of laws, codes and technological devices tout court). Here it is a matter of what questions should be asked about instruments, in an attempt to see what works best, where, and why. For example, the scope of application, enforceability, accountability, and the nature of the relevant policy community have been proposed as a set of dimensions in which the instruments can be surveyed in order to show how they work, their strengths and their weaknesses (Bennett and Raab 2006: 209–19).

16 This paper is not the place for an empirical survey, but some phenomena may, for example, be found to be regulated more by codes of practice and the like than by technological tools; others, more by law than by citizen self-protection; and so on. Explaining these differing patterns, within and across contexts, practices, and jurisdictions opens up many avenues for multi-dimensional comparative analysis.
Systematic application of criteria enables an overview across instruments, as well as, for each one, a way of interrogating the part it might play in combination with the others, which moves us closer to our second question. In addition, since technology regulation is not solely the business of one country, but extends to the world, it prompts the important question, which cannot be addressed here, of how the regulation of technology, like other global issues, should evolve within domains of multi-level governance.

The second question—how do regulatory instruments combine?—invites us to look at patterns and interactions amongst instruments. Lessig (1998, 1999a) properly argues that the instruments must be seen together, interacting, and not just as mechanisms that operate separately. Each modality can operate directly on the individual being regulated, or indirectly on one or another modality, ultimately producing a regulatory effect. Modalities can both co-operate and conflict with each other. However, his elaboration of these points is somewhat under-developed in one of its formulations (Lessig 1999b: ch 7), in which law, rather than other instruments, is depicted and discussed largely as the primum mobile that affects the other modalities, although in another discussion, Lessig (1999a) comes a bit closer to exploring two-way and n-way interactions in depth among the instruments.17

In terms of our second question, then, the various instruments appear to be both interchangeable or substitutable for each other in some respects, on the conceptual plane (eg, code = law; legislation = authoritative but indirect state steering), and also complementary, in that one instrument makes up for the deficiencies of another. In Schwartz (2000), and in Lessig (1999b), Wall (2004), Greenleaf (1998) and Murray (2007) as well, we may observe not only a mixed strategy, but also one that has a purchase on the way the parts of the mixture or matrix interrelate.
This is a most important advance on conceptualisations of ‘toolkits’ that do not theorise or problematise either the instruments or the way they work interactively in practice.18 For Bennett and Raab (2006: ch 8), the interdependence of regulatory modalities is closer to synergy than to complementarity: whereas the latter normally implies that one element makes up for what is lacking in the other(s), synergy comes closer to a rather different meaning: that the optimal operation of each element depends upon the other(s). The claim is that the functionality of each modality is predicated upon that of the other ones, and that it is not just a question of an ensemble of complementary or substitutable tools.

17 We saw that Lessig is particularly interested in law and code even though he depicts other regulatory modes as well. As mentioned earlier, the prestige of this formulation of duality is reflected in many approaches that pay little attention to tools other than legislation and, in the case of privacy, PETs, although it is arguably too parsimonious and reductive to represent the reality of instrumental variety. However, it serves a useful purpose in Lessig’s argument, in which interrelationships are seen in greater detail in terms of the way law tames code, code displaces law, and law regulates code; through time, there may be moves and countermoves among the competing modalities.

18 Lessig (1999: Appendix) purports to delve further into the interactions of tools, but does not really fulfil this aim. In Murray’s (2007) case, it is not clear whether ‘complementary’ and ‘symbiotic’ regulation are the same or different processes.

V. Seeking Analytical Equipment: Hood’s The Tools of Government

Introduction

It is no criticism to say that Lessig and other writers do not analyse tools or instruments beyond specific technology fields or policy domains. However, it is important to look elsewhere, to generic approaches, for further analytical equipment that can help us to understand better how specific modalities work. We therefore shift the focus to consider whether Hood’s (1983) ‘tools of government’ approach can play a useful part in understanding the complexity and instruments of technology regulation, and in helping with answers to the questions posed. Hood’s approach is not the only generic treatment of tools, but it is among the most prominent and is well-grounded in the study of political science and public policy.19 A brief outline of his cybernetics-derived schema of ‘government as a tool-kit’ is therefore in order.

19 Hood (2007) has recently reassessed his scheme in the light of other, sometimes cognate, generic approaches, and has proclaimed its continued vigour. It is surprising that Lessig’s work is only referred to once, and fleetingly, by Hood (Hood and Margetts 2007: 176), where ‘code’ is seen in terms of ‘organisation’, yet this perception is not explored.

In his seminal work, The Tools of Government, Hood (1983: ch 1) identifies four basic administrative tools with which government ‘detects’ (takes in information) and ‘effects’ (makes an impact on the world): nodality, authority, treasure and organisation. Nodality is positional in socio-spatial terms. In Hood’s words, it ‘denotes the property of being in the middle of an information or social network’. It ‘gives government the ability to traffic in information’. It ‘equips government with a strategic position from which to dispense information, and likewise enables government to draw in information for no other reason than that it is a centre or clearing-house’. Authority ‘denotes the possession of legal or official power; it gives government the ability to “determine” in a legal or official sense’. Treasure ‘denotes the possession of a stock of moneys or “fungible chattels”’, which it can use for purposes of influence or to purchase resources such as ‘mercenaries’. Finally, organisation ‘denotes the possession of a stock of people with whatever skills they may have, land, buildings, materials and equipment’; it ‘gives government the physical ability to act directly, using its own forces rather than mercenaries’. Space does not allow a fuller elaboration of these basic resources of regulation. However, in a networked society, there is useful and perhaps visionary insight in seeing nodality as an important regulatory device, in contrast to the bluntness of authority and treasure (Hood and Margetts 2007: ch 9), although its connotations of surveillance reach back historically to tap very ancient strategies of rulership.20 Further, Bruno et al (2006) point out that modern processes of governance, at least in Europe, rely heavily on innovative techniques such as benchmarking, mainstreaming and an open method of co-ordination. Although apparently softer in appearance, these are no less successful and disciplinary in nature than conventional regulatory instruments. It is very easy to relate these new techniques to the more conventional instrumental resources as described by Hood, especially nodality, authority and treasure. His work sensitises us to appreciate soft-law initiatives that favour or threaten human rights through their reliance on authority.21 In addition, he points out that other kinds of organisation also use tools, and that they are not always very different from the ones that governments use, although some are unique to government (Hood 1983: 121).
Moreover, disaggregating ‘government’ into its vertical levels shows that a range of jurisdictions, from village committees to international ‘governments’, deploy tools as well, although the main levels for this analysis are in the middle, at central and intermediate levels. But horizontal disaggregation of, for example, a central state into its component departments and agencies also enriches the field for comparative analysis and explanation (Hood 1983: 122).

20 The concept of ‘tools’ has antecedents long before the work of academic lawyers writing about the regulation of technology. It has had earlier and more general exposition in terms of governmental activity, as we have noted, and it is important to consider how that idiom can be brought to bear on the regulation of technology.

21 An example of the former is the 2 May 2007 Communication from the European Commission, ‘Promoting Data Protection by Privacy Enhancing Technologies (PETs)’. An example of the latter is the 24 November 2005 Communication from the Commission to the Council and the European Parliament on Improved Effectiveness, Enhanced Interoperability and Synergies Among European Databases in the Area of Justice and Home Affairs, COM (2005) 597 final, Brussels, 24 November 2005.

In one sense, Hood’s schema is too state-centric to encompass the way in which detectors and effectors are implemented as social practices that are available, in principle, to other agencies and individuals: as we have shown, technology regulation is hybrid and pluralist. States do not have a monopoly of the means of detecting and effecting; in societies with plural centres of power and a private sector, we easily see that these means are widely dispersed. It could be argued that, following the drift of academic policy-studies approaches in recent years, Hood’s analytical framework should be shifted from government to governance.
Instead of asking ‘what does government do?’, we would wish to ask ‘how is Internet content, or intellectual property, or privacy, etc, regulated?’. This is a functional question, rather than a structural one, although the answer will involve an understanding of organisations and roles, and it opens the way to a pluralist identification of contributors to regulation as well as their interactions, as noted above. Thus, as illustrations in a variety of fields indicate (Murray 2007), it is not only the state and its agencies that are involved in regulatory governance.22 One only needs to think about the tactics followed by firms such as Google and Microsoft to impose their business strategies in recent times: Google’s drive to handle users’ e-mail accounts, and Microsoft’s Passport. Nevertheless, as we will shortly see in Hood’s explanation of tool-sets, a state-centric approach is complex enough as a foundation for the analysis of tools, and is far from obsolete; therefore, this paper will not stray further into ‘governance’, in which a ‘tools’ schema would gain immeasurably in complexity but become too much to handle conveniently.

Selecting and Combining Regulatory Tools

For Hood, governing is partly a question of ‘[s]electing the right tool for the job[, which] often turns out to be a matter of faith and politics rather than of certainty’ (Hood 1983: 9). But he emphasises that the complexity of government activities shows ‘the application of a relatively small set of basic tools, endlessly repeated in varying mixes, emphases and contexts’ (Hood 1983: 8): this, therefore, engages the two questions posed in section IV, above. Moreover, an instrument can be used for a variety of purposes, so that new subjects of government activity do not need completely new tooling: ‘the same basic set of tools appears again and again as governments face up to “new” problems, such as computer privacy, glue-sniffing, micro-light aircraft and hang-gliders.
Only the mixture varies’ (Hood 1983: 8). Obviously, this conceptualisation is at the high level, and not at the level of the specific variations of the tools that a government applies to the concrete problems on its agenda; toolboxes, let us remember, often contain different sizes of screwdriver or Allen key. Those variations can be quite profuse: for example, ‘nodality’ includes several types of messages that can be used as effectors, but hybrid forms may number some 5,000 (Hood 1983: 31); ‘digital age’ tools may be more variegated still. None of this means that government invents or deploys a new tool for every new problem or situation (Hood 1983: 116), but the exercise in counting is useful in reminding us that the basic set of tools ‘can be used as a keyboard capable of producing a very large number of possible combinations and mixes’ (Hood 1983: 117); all told, some billions.23 Bearing in mind the SWAMI quotation given earlier, ‘requisite variety’—to which Hood (1983: 117) refers—might indicate only a new mixture of old tools, rather than new (or more) tools.

22 However, it is consistent with our perspective to say that the state may come to play an important, and often crucial, role in encouraging, requiring (eg through procurement rules), shaping, or even limiting the use of other privacy instruments, such as codes of practice or PETs, perhaps by setting the normative and authority framework in which the interactive or collaborative governance participants in the public and private sectors perform their functions. In Murray’s (2007) terms, a prudent state in search of regulatory effectiveness would seek to understand how the system works and how communications flow amongst its participants, seeking to harness this activity. This is also consistent with Lessig’s (1998) emphasis on indirect action.
Hood (1983: 82–83, 106–8, 118–20) is alert to combinations and substitutions of effecting and detecting tools, and to the question of how to explain how and why tools are chosen, if indeed they are (Hood 1983: 118–20). But we would need some way of knowing whether ‘the mixture as before’, or some new mixture (each producing a unique pattern of tools), is the best way of dealing with specific challenges, threats or disturbances. Hood (1983: 83–6) calls into play some criteria by which it could be judged which of the detecting or effecting tools, singly or in combination, does best for what kind of situation. The factors that affect this mix of tools include the size of the population involved, the nature of the tasks for which the tools are used, and the extent to which government rules in a climate of general resistance or consensus.24 In conditions of consensus or of general acceptance of its regime, government may be able to use direct action tools sparingly:25 information, official tokens or the chequebook will often suffice as effectors in these circumstances. Government can reserve the typically more expensive tools of direct action for the emergency, the exceptional case, or the activity especially dear to its heart. At any rate, government, certainly in the contemporary world, would be impossible if no activity could be induced from its subjects except by applying coercion to each individual. Normally, governments use a mixture of tools, especially if one looks at the overall level: ‘[t]here may be single-tool agencies, but there is no such thing as a single-tool government’ (Hood 1983: 154). Moreover, the mixture of tools and resources can change over time, with different elements becoming dominant. Thus, he argues illustratively, government might emphasise propaganda to shape people’s knowledge and attitudes, or it may exercise its legal authority, stress financial payments, or engage in direct action.
Using nodality, it might put the emphasis on surveillance, in order to collect information about people; as Hood and Margetts (2007: 40) observe, ‘[m]ost of the nodality-based tools for detecting have been transformed by internet applications’. We have already noted Brodeur’s (1997: 113) comment on the effect of surveillance upon freedom. Thus the collection of information about people must also be seen as an instrument for effecting, and not just detecting, for—as privacy advocates frequently point out—surveillance may have a ‘chilling effect’ on human behaviour if people are intimidated into refraining from certain kinds of conduct, or into behaving in particular conformist ways, especially in public places: to be watched is to be governed.26

23 Famously, a monkey with a typewriter and a lot of time can produce the complete works of Shakespeare. This sets a challenging target for regulators.

24 For variables that explain why a particular combination of order-maintenance strategies is decided upon, see Brewer et al (1996: 233–7).

25 See Brewer et al (1996: 237).

In this regard, we may note Margetts’ (1998) application of Hood’s framework to the question of government’s use of ICTs, or ‘computerisation’, in its operations, considering the ‘NATO’ inventory. Concentrating mostly on effecting tools, she illustrates many uses of these technologies in modern government, showing their actual or potential transformative effect, and the factors that arbitrate their use and degree of success. She gives much less attention to the tools for detection and the means of gathering and processing information, but a fuller analysis of the ways in which the four kinds of tool are brought to bear in the world of ‘information age’ electronic government is provided in Hood and Margetts (2007).
In the age of intensified surveillance, however, new investigations are necessary in order to build upon their remarks about the surveillant applications of the tools, and perhaps especially nodality. Here we are in the world of large databases, intelligence-led policing, the requisitioning of communications and travel records, identity cards, mobile tracking technologies, and much else. Margetts (1998: 458) concludes ominously: ‘The by-product of the use of information technology in effecting tools … is a greatly enhanced detection capability of a particular kind: passive rather than active. Thus in Hood’s terms, information technology allows government to observe us more easily from its watchtower, rather than knocking on our door to pursue its enquiries’.

Hood’s ‘Four Canons’ of Tool Application

Part of the analysis of government or, indeed, governance has to do with appraising its qualities in terms of the poles and continua of ‘good’ or ‘bad’, whether these are seen in utilitarian or ethical terms. If it seems fruitful to evaluate systematically a technology regulatory regime, such as privacy protection, in the light of a ‘tools’ schema, Hood’s approach has an affinity with Murray’s (2007) requirements for analysis and understanding in the process of choosing and applying modalities. It also seems better suited than Lessig’s (1999b) to this task, because it indicates some of the main routines by which governments (states) are likely to make choices, and not only the results of these decision-making processes. But there is a normative dimension as well. Ethical criteria are, importantly, one amongst four that Hood identifies as interdependent ‘canons’ of good application, as follows (Hood 1983: 133):

(1) The instrument or mix of instruments used in any given case should be selected after some examination of alternative possible tools for the job.
(2) The tool should be matched to the job.
Since there is no general purpose tool that will serve government effectively in all circumstances, government needs to under- stand the circumstances which favour each of the instruments available. 26 Hood (1983: ch 6) is worth revisiting for a deeper exploration of the topic of surveillance and other information practices; see also Hood and Margetts (2007: chs 2 and 9). 278 Charles D Raab and Paul De Hert (3) The choice must not be ‘barbaric’; it must satisfy certain ethical criteria, such as justice and fairness. (4) Effectiveness is not enough; the desired effect must be achieved with the minimum possible drain on the government’s bureaucratic resources. To an extent, these canons are offered in a critical vein, for they are all problematic in decision-making. For example, the first one calls to mind whether ‘satisficing’ rather than maximising should be the decision rule in choosing instruments.27 In practice, this is likely to be the case: the results may have to be merely ‘good enough’. Braybrooke and Lindblom (1963) convincingly warn against the syn- optic-rational decision-making model, as would be manifested if the first canon were zealously followed, on the grounds that it exceeds human decision-making capacity. We have seen how Lessig’s (1998) New Chicago School is unapologeti- cally rooted in a rational-choice stance. Hood, however, is alive to the unrealism of strict rational-choice procedures, and points instead to ‘intuition, experience, tra- dition, faith and serendipity’ (Hood 1983: 135) as well as to ‘a mixture of chance and trial and error’ (Hood 1983: 137) as the more practical and likely methods of problem-solving and of matching instruments to policy contexts.28 Matching the tool to the job would also require an enormous cognitive effort to understand the host of contingencies, contexts and policy goals that are inherent in a variety of decision-making situations, and that should guide choice (Hood 1983: 137–9). 
As for the third canon, Hood is sensitive to the argument that tools are not neutral: that ‘even the most innocent-seeming of them … can be used as instruments of repression’ (Hood 1983: 14). Moreover, the satisfaction of ethical criteria, such as justice and fairness, must be one of the canons governing the application of tools (Hood 1983: 133, 139–41), although it—like the other canons—is difficult to implement in practice and requires the exercise of moral judgment. The utilitarian, descriptive emphasis in Hood’s scheme does not prevent his wrestling with deeply normative issues, to which there is no formulaic solution that can be arrived at by invoking some rule; Lessig, too, appreciates the force of normative questions. Thus, concepts of rights, harms, proportionality and the like are important criteria to bring to bear, not only in an outsider’s evaluation of the tools and mixes, but also in offering additional factors that may act as constraints and opportunities in governments’ choice of instruments; they therefore serve an explanatory purpose as well. Concerning the fourth, ‘using bureaucracy sparingly’ is both an economistic rule-of-thumb and a way of using light-touch regulation as far as possible, and these principles might not always tend in the same direction. Decision-making in terms of this canon is therefore intellectually highly absorbing, but the problems of implementation are daunting, as is shown at length (Hood 1983: 141–50).

27 Artists will know that an enormous range of shades and tones can be created through the mixture of only the three primary colours plus white. The ‘effects’ can be remarkable.

28 This is not the place to rehearse the well-worn, and largely futile, academic ‘debate’ between rationalist and incrementalist modes of decision-making; however, Hood’s (1983) middle position seems to resemble that of Dror (1968).

Tools for Technology Regulation 279
However, they involve considerations that resonate with some of the factors and regulatory modalities involved in Lessig’s strategies, insofar as choices as between different kinds of constraint—general or particular, direct or indirect—are involved in regulatory implementation, and insofar as Lessig pays attention to the economic and political advantages of indirect action and the workings of subjective constraints. They go beyond Lessig in that Hood is concerned with resources for effecting as well as for detecting, and some (treasure and organisation) are considered more subject to depletion than are others (nodality and authority). Therefore, there is a more elaborate menu of decision criteria, possibilities, and trade-offs, resulting in more complex configurations of instruments. On the one hand, as Hood (1983: 151) observes, ‘[e]ach of the canons … quickly becomes faltering and ambiguous when it is examined carefully’; on the other hand, the array can have the power of explanation of why and how governments choose to regulate in the way that they do, and why their decisions change over time. This seems to be in some respects a richer contribution to understanding than Lessig’s, although one that might lend itself to a further elaboration of his approach with regard to the regulation of technology.

We cannot undertake this elaboration here, although some preliminary observations of the difficulties faced by regulatory regimes, in these terms, can be offered. First, the tool or tools used so far seem to follow an accepted ‘legacy’ roster, across countries, suggesting that any examination of alternatives is likely to be less thoroughgoing than tabula rasa, ‘rational’ calculation—informed by systems theory—might imply.
It is likely to be governed by ‘politics’, as Hood (1983: 136) suggests in general, or by policy-borrowing and policy-learning; the latter are factors examined by Bennett (1992) in his analysis of the instrument choices made by data protection systems in several countries. Second, whilst regulatory regimes should be founded on a basis of human rights, and incorporate justice and fairness in their very principles, the ethical criteria considered by Hood are those that pertain to the tools: in the privacy case, the laws, self-regulatory mechanisms, PETs, citizen action, consumer education and the like. The ethical questions that might be applied here would concern the relative burdens placed on the categories of policy actors or participants in the regulatory process: for example, whether it is fair to place the onus on the citizen or consumer to protect herself from privacy invasion, or whether it would be fairer to make the invader pay (analogous to the ‘polluter pays’ principle in environmental protection). Other questions could be developed along these lines. Third, if the injunction to ‘use bureaucracy sparingly’ is ambiguous, points in different directions, and is difficult to implement with precision, it may nevertheless have heuristic value and generate a host of questions that would not otherwise come to mind about regulation, such as the variable intensity of different tools (‘scalability’), and the extent of their directness and specificity. Taking all four canons together, however, the extent to which they conflict with each other, thus involving trade-offs, and the political processes and argumentation concerning their relative importance, become part of the analysis of how regimes are established, how they are modified, and how they actually work.
To do this would not only require a detailed overview of the ‘toolbox’, but also an investigation of the ‘black box’ of government (and other sites of decision-making within overall governance) to understand the reasons for choices, the salience of different values, the perception of conflict or synergy amongst the tools, and the aims of regulatory policy.

VI. Broadening the Analysis by Bringing in the Actors

It is not certain how far the approaches canvassed so far have a purchase on action and decision-making processes within and among the participants and their organisations. As we have seen, there is abundant critical commentary in the literature concerning the applicability of Lessig’s concepts to problems of regulation created by new emerging technologies.29 Fundamentally, we feel that Lessig’s framework is too general to capture the challenges of regulation. A first remedy, we believe, is to bring policy actors and their relationships into play if an analysis of regulation is to reflect the politics of the processes whereby rules and instruments are brought to bear on any field of activity. It is evident that any discussion of regulatory regimes cannot rest content with describing the tools, or even the nature of their interrelationships. Lessig’s (1998) account of direct and indirect effects needs to be extended into an understanding of the processes by which tools are created, deployed and modified.30 The discussion or comparison of tools as such can only be one-dimensional if it leaves out of account the organisation of the means by which they are brought to bear on situations: in other words, ‘the politics of tooling’. These processes involve policy actors and policy action. Tools are wielded (or not) by individual or institutional actors who participate in regulatory regimes in which the tools are embedded.
This means that, instead of only talking about ‘law’, we must talk about regulatory agencies or other means by which the law is interpreted, applied and adjudicated. Instead of talking only about ‘self-regulation’, we must know something about the self-regulators and their relationships with the regulated (and with the regulatory agencies), and about the enforcement and sanctioning processes within sectors or industries. Instead of talking only about software code or ‘PETs’, we must understand the design process, the factors that affect their implementation (for instance, whether certain specifications are required by procurers to be built into technical systems); and so on.

29 A fuller discussion of this commentary can be found in Raab and De Hert (2007). See also De Hert and Mantovani (2008).

30 We also acknowledge the importance of understanding processes of compliance and non-compliance; see Wu (2003).

This approach is not so clearly in the frame of Hood’s (1983; Hood and Margetts 2007) analysis. Simplified sketches of a model of actors and action are present in Bennett and Raab (2006: 220) and in Reidenberg (1997: 96).31 We do not go further along these lines in this paper,32 but we note the more ambitious analysis that is proposed by Murray (2007: 51, 54, 236, 238, 242). His ‘matrix’ idiom, or similar constructs that emphasise complexity, may be a fruitful improvement upon the conventional ‘toolbox’ metaphor, taking us closer to action and process in regulation.33 Murray’s discussion of ‘cyberspace’ regulation is a particularly useful development of regulatory analysis that emphasises the need to identify actors active within multi-level regulatory regimes.
He constructs an abstract, general-purpose illustrative ‘matrix’ that represents a three-dimensional regulatory field by means of points and lines representing regulatory modalities (‘regulators’) and their protagonists or specific actions (Murray 2007: 54).34 This is later adapted to the facts of regulatory case-studies in a way that points up the pathologies of regulation that can occur when policy-makers do not fully grasp the nature of the interrelationships and activity found in the world they are aiming to affect. One of Murray’s leading illustrations of this is his discussion of the regulation of Internet domain names through the Internet Corporation for Assigned Names and Numbers (ICANN) (Murray 2007: ch 4). Top-down, static regulatory incursion triggered unexpected counter-reactions by other actors and modalities in the matrix, producing other ineffective interventions. In this case, as in others, the lesson drawn is that the system in question reacts to external regulatory initiatives in ways that provoke other interventions and responses in unpredictable ways that result in unstable and uncertain conditions. To use Kooiman’s (2003, 1993) terms, the complexity, diversity and dynamic properties of the regulatory environment

31 Reidenberg’s (1997: 96) proposed ‘governance paradigm’, anticipating Lessig, illustrates policy action by featuring the pluralist interplay of centres of decision-making and control, including the state in non-legislative but authoritative steering roles, with a considerable degree of control residing in the global information infrastructure’s ‘network sovereignty’. This involves states shifting to indirect means of shaping network rules and norms of conduct, with the role of standards organisations looming large among the various centres of decision-making.
These regulatory interrelationships are lightly sketched, but contribute to an understanding of complexities and different, more flexible, ways of putting together a regulatory package without Lessig’s dash to the market, but with an acknowledgement of the self-regulatory importance of the network.

32 Discussion of actors is taken a step further in Raab and Koops (2008).

33 Murray is not alone in examining hybrid regulatory solutions. Although Engel (2001) explicitly refrains from considering how different tools are deployed by different regulators, he draws attention, in the context of the German Constitution, to the range of public and private bodies, at several jurisdictional levels, that are available to take part in governance activities; this perspective bears upon the multi-level governance aspect of ICT regulation. These bodies and regulatory sources appear very often in hybrid combinations that can be tailored to suit several factors that may be present in each case of regulation, although these solutions pose challenges to constitutional government. He argues that these can be tackled, to some extent, by ‘constitutionalising’ the hybrid institutions themselves by subjecting them to a regime of external rules.

34 We note the ambiguity in what the matrices depict: sometimes an action, sometimes institutional or other participants, sometimes a technology, and sometimes a regulatory principle or modality. Whether this lack of clarity reduces the usefulness of the matrices at a heuristic level is a matter for consideration.

require to be properly understood and incorporated into regulatory strategy if such an outcome is to be avoided. The contrasting case of the regulation of video cassette recorders (VCRs) appears to bear this out, for a more organic regulatory settlement, involving the fulfilment of consumers’ market demands, evolved out of an initial failure, in the law courts, of external intervention.
Murray therefore argues against a static ‘command and control’ regulatory model. He claims that external interventions, typically through law, are likely to be disruptive and ineffective because they are grounded in insufficient understanding of the processes and interactions that they seek to regulate, and that they therefore go against the grain in what may be a self-defeating regulatory exercise. Instead, drawing upon systems theory and autopoiesis, he argues for a more dynamic, complementary and symbiotic approach, one that acknowledges that regulators and the regulated are not separate, and that relies on hybrids rather than instruments taken singly (Murray 2007: ch 8).35 An emphasis on the interactions of regulatory processes can lead to a reformed and potentially constructive grasp of the way modalities, or regulatory tools, and their actors work in practice.

Understanding these political, social, economic and psychological dimensions cannot be simply a matter of answering the question, ‘who uses what tool(s), and why?’, and then mapping tools onto actors in a straightforward fashion, for the world is more complicated than that. Instead, we have to understand tool-making, tool-using and tool-regulating as processes in which, in theory, several actors may participate in the making, using, and governing of each tool; or, indeed, conflict with each other in these processes, for regulation is a political process, not a cut-and-dried matter of devising and applying technology and law. The limited scope for negotiating regulatory goals between the differing interests of individuals and firms online, as sketched in some of the works just reviewed, is only one instance of a much more intensive and extensive set of power relationships at many phases of the processing of information.
Seeing ‘tools’ in a wider context of actors and processes, as some have begun to do, helps in a constructively critical effort to apply more effectively the general and specific perspectives of those who analyse governance, and the governance of technology in these terms.

VII. Conclusion

In this paper, we have looked closely at the instruments of regulation as depicted principally by Lessig and by Hood. We have argued that the approaches of Lessig and his commentators need to be augmented by more generic approaches that emphasise and demonstrate empirically the multiplicity of relationships and pathways among a larger number of tools, and that map the tools onto the cast of characters involved in regulatory processes. We have judged the time ripe to re-introduce the work of Hood in this area of research, for there are close affinities with that of Lessig. Although his work on regulation is of a general nature, it is well suited to exploring the multiple interactions and resources used or ignored in the technology debate. We then see, even more demonstrably, that states do more than just enact laws. Their toolbox is considerably better stocked, and there are few limits to the combinations of regulatory tools that are possible in theory, and very often in practice, in spite of political, economic, legal, ethical and other limitations. Moreover, we may be able to understand the actions of other regulators, such as ICT firms, along the same lines of instrumentation and action. In addition, beyond a comprehension of interrelations we can engage normative considerations that are important in understanding the values which regulation tries to achieve, or at least to maintain.

35 Further discussion of Murray can be found in Raab and De Hert (2007).
However, what works, why, when and how, may be very hard to determine in regard to technology regulation, for several reasons: the inability to regulate in one country alone, given the global flow of transactional data and of communications; ambiguity about the aims of regulation; the relative weakness of regulatory agencies; the restrictions placed on regulation by governments and business interests that regard certain objectives of regulation as an unwelcome restraint on their activities; and other factors.

Having examined authoritative writers on the tools or instruments of regulation, we have indicated the necessity of taking actors’ behaviour into consideration in the regulation of technologies. Instruments and their implementation are the product of decisions made through social, political, and economic processes. Explanations of the operation, and the possibilities and limitations, of regulatory tools are difficult to achieve without an understanding of the roles and interactions of policy actors in the process of regulating technologies. Only through this can we understand the power relations and the consonance or conflict of interests in and around the regulatory activity in which they are engaged, and in which instruments are implemented. Murray’s approach seems a useful step in the direction of that kind of analysis. Although we have not explored all the richness of the analytic schemes canvassed earlier, we think that research may benefit from fusing these frameworks, whether they be social-scientific and generic, or legal, technological and specific. It would be important to revisit these approaches to the analysis of information policy instruments, and even more specifically, of the tools and regimes for regulation, in order to identify the generic characteristics of modalities, and to understand how and why they are chosen or resisted by decision-makers in concrete situations and in the processes and interactions of regulation.
Such work remains to be done beyond the scope of this paper.

References

Ashby, WR (1956) Introduction to Cybernetics (New York, NY, Wiley).
Bennett, C (1992) Regulating Privacy: Data Protection and Public Policy in Europe and the United States (Ithaca, NY, Cornell University Press).
Bennett, C and Grant, R (eds) (1999) Visions of Privacy: Policy Choices for the Digital Age (Toronto, University of Toronto Press).
Bennett, C and Raab, C (2006) The Governance of Privacy: Policy Instruments in Global Perspective, 2nd edn (Cambridge, MA, The MIT Press).
Braybrooke, D and Lindblom, C (1963) A Strategy of Decision: Policy Evaluation as a Social Process (New York, NY, The Free Press of Glencoe).
Brewer, J, Guelke, A, Hume, I, Moxon-Browne, E and Wilford, R (1996) The Police, Public Order, and The State: Policing in Great Britain, Northern Ireland, the Irish Republic, the USA, Israel, South Africa and China (London, Macmillan).
Brodeur, J-P (1997) ‘Organized Crime: Trends in the Literature’ 35 International Annals of Criminology 89.
De Hert, P and Mantovani, E (2008) ‘Review of The Regulation of Cyberspace by Andrew Murray’ 2(1) Studies in Ethics, Law, and Technology, available at accessed 15 June 2008.
Deutsch, K (1963) The Nerves of Government (New York, NY, The Free Press).
Dror, Y (1968) Public Policy Reexamined (San Francisco, CA, Chandler Publishing Co).
Engel, C (2001) ‘Hybrid Governance Across National Jurisdictions as a Challenge to Constitutional Law’ 2 European Business Law Review 569.
Greenleaf, G (1998) ‘An Endnote on Regulating Cyberspace: Architecture vs Law?’ 21 University of New South Wales Law Journal.
Haggerty, K and Ericson, R (2000) ‘The Surveillant Assemblage’ 51 British Journal of Sociology 605.
Held, D and McGrew, A (2002) Governing Globalization: Power, Authority and Global Governance (Cambridge, Polity Press).
Hood, C (1983) The Tools of Government (London, Macmillan).
—— (2007) ‘Intellectual Obsolescence and Intellectual Makeovers: Reflections on the Tools of Government after Two Decades’ 20 Governance 127.
Hood, C and Margetts, H (2007) The Tools of Government in the Digital Age (London, Palgrave Macmillan).
Industry Canada (1994) Privacy and the Canadian Information Highway: Building Canada’s Information and Communications Infrastructure (Ottawa, Information Highway Advisory Council, Industry Canada).
Koenig-Archibugi, M and Zürn, M (eds) (2005) New Modes of Governance in the Global System: Exploring Publicness, Delegation and Inclusiveness (London, Palgrave).
Kooiman, J (2003) Governing as Governance (London, Sage).
—— (ed) (1993) Modern Governance: New Government-Society Interactions (London, Sage).
Lessig, L (1998) ‘The New Chicago School’ 27 The Journal of Legal Studies 660.
—— (1999a) ‘The Law of the Horse: What Cyberlaw Might Teach’ 113 Harvard Law Review 501.
—— (1999b) Code and Other Laws of Cyberspace (New York, NY, Basic Books).
Margetts, H (1998) ‘Computerising the Tools of Government?’ in I Snellen and W van de Donk (eds), Public Administration in an Information Age: A Handbook (Amsterdam, IOS Press).
Murray, A (2007) The Regulation of Cyberspace: Control in the Online Environment (Abingdon, Routledge-Cavendish).
Raab, C and De Hert, P (2007) ‘The Regulation of Technology: Policy Tools and Policy Actors’ Tilburg University Legal Studies Working Paper no 004/2007, available at accessed 25 July 2008.
Raab, C and Koops, B-J (2008) ‘Privacy Actors, Performances, and the Future of Privacy Protection’ in S Gutwirth et al (eds), Re-inventing Data Protection (Dordrecht, Springer), forthcoming.
Reidenberg, J (1997) ‘Governing Networks and Rule-Making in Cyberspace’ in B Kahin and C Nesson (eds), Borders in Cyberspace: Information Policy and the Global Information Infrastructure (Cambridge, MA, The MIT Press).
—— (1998) ‘Lex Informatica: The Formulation of Information Policy Rules Through Technology’ 76 Texas Law Review 552.
Reidenberg, J and Schwartz, P (1998) Data Protection Law and On-line Services: Regulatory Responses, prepared as part of the project ‘Vie privée et société de l’information: Etude sur les problèmes posés par les nouveaux services en ligne en matière de protection des données et de la vie privée’, commissioned from ARETE by Directorate General XV of the Commission of the European Communities.
Rotenberg, M (2001) ‘Fair Information Practices and the Architecture of Privacy (What Larry Doesn’t Get)’ 1 Stanford Technology Law Review, at accessed 10 June 2008.
Schwartz, P (2000) ‘Beyond Lessig’s Code for Internet Privacy: Cyberspace Filters, Privacy-Control, and Fair Information Practices’ 2000 Wisconsin Law Review 743.
Wall, D (2004) ‘Digital Realism and the Governance of Spam as Cybercrime’ 10 European Journal on Criminal Policy and Research 309.
Wright, D, Gutwirth, S, Friedewald, M, Vildjiounaite, E and Punie, Y (eds) (2008) Safeguards in a World of Ambient Intelligence (Dordrecht, Springer).
Wu, T (2003) ‘When Code Isn’t Law’ 89 Virginia Law Review 103.

13

Conceptualising the Post-Regulatory (Cyber)state

ANDREW D MURRAY*

Introduction

Our society is evolving. This evolutionary process is continual and incremental, and individual changes are often of such small increment that we fail to appreciate that they are occurring. While ‘paradigm social shifts’, such as the move from the Cold War to the post-Cold War era,1 the development of the post-industrial society2 and the arrival of the post-9/11 society3 are discussed at length, our analyses of the incremental developments are more diffuse. This is to be expected: paradigm shifts reflect key social changes; incremental shifts represent society in evolution.
This paper will discuss the effects of one of the key evolutionary changes in our socio-legal order in the last twenty years: the move from the regulatory state to the post-regulatory state,4 through the lens of one of the key evolutionary changes in our social ordering, the development of social and communications networks, facilitated through digital communications technology.5

* This paper is based upon a presentation given to the TELOS conference ‘Regulating Technologies’ held at King’s College London on 7–8 April 2007. Thanks are due to Professors Roger Brownsword and Karen Yeung who organised the Conference. The themes contained herein are developed further in my book The Regulation of Cyberspace: Control in the Online Environment (Oxford, Routledge-Cavendish, 2006).

1 See, eg, H Schrecker (ed) Cold War Triumphalism: The Misuse of History After the Fall of Communism (New York, The New Press, 2006); CC Moskos, JA Williams and DR Segal (eds), The Postmodern Military: Armed Forces After the Cold War (New York, OUP, 1999); F Cameron, US Foreign Policy After the Cold War, 2nd edn (Oxford, Routledge, 2005).

2 See, eg, A Gorz, Farewell to the Working Class: An Essay on Post-industrial Socialism (London, Pluto, 1982); D Bell, The Coming of Post-industrial Society (New York, Basic Books, 1976); K Armingeon and G Bonoli (eds), The Politics of Post-Industrial Welfare States (Oxford, Routledge, 2006).

3 See, eg, D McGoldrick, From ‘9–11’ to the Iraq War 2003: International Law in an Age of Complexity (Oxford, Hart, 2004); M Webb, Illusions of Security: Global Surveillance and Democracy in the Post-9/11 World (San Francisco, City Lights, 2006); JL Ritter and JM Daughtry (eds), Music in the Post 9/11 World (Oxford, Routledge, 2007).

4 See J Black, ‘Decentring Regulation: Understanding the Role of Regulation and Self Regulation in a “Post-Regulatory” World’ (2001) 54 Current Legal Problems 103; C Scott, ‘Regulation in the Age of Governance: The Rise of the Post Regulatory State’ in J Jordana and D Levi-Faur (eds), The Politics of Regulation: Institutions and Regulatory Reforms for the Age of Governance (Cheltenham, Edward Elgar, 2004).

288 Andrew Murray

Thinking about Regulation and Complexity

The starting point of this paper is to think about how we contextualise and explain regulators, regulatory interventions and the process of regulation. To embark upon this process one needs to return to the root of the subject and ask the apparently simple question ‘what is regulation?’. The traditional answer suggests this question is not as simple and clear-cut as it seems. Rob Baldwin and Martin Cave, in their classic student text Understanding Regulation, explain that although ‘Regulation is spoken of as if an identifiable and discrete mode of governmental activity’, the term regulation has been defined in a number of ways.6 This may seem to suggest that the term ‘regulation’ is an amorphous and variable term applied to any widely derived source of control or direction, but Baldwin and Cave go on to explain that traditionally there are two models of regulatory theory: (1) Philip Selznick’s notion of regulation ‘as control by a public agency’ and (2) the ‘administrative law’ model developed by a body of UK public lawyers in the 1980s and 1990s, including Carol Harlow and Richard Rawlings, Martin Loughlin and Anthony Ogus.7 In particular this second, public law, model may be seen in the migration from the ‘welfare state’ to the ‘regulatory state’ clearly extant in the policy of the Thatcher Government in the 1980s, with the privatisation of key public utilities and the
creation of a variety of post-privatisation utilities regulators and a raft of quasi-non-governmental organisations (QUANGOs).8 Despite being set against each other by Baldwin and Cave, both these models share common themes. Vitally, both define regulation as part of the menu of public law. Selznick defines regulation as the ‘control exercise[d] by a public agency’,9 while administrative law theory ties regulation to the development of the regulatory state. In a very real sense the term ‘regulation’ is being used as shorthand for public regulatory activity. Thus both see regulation as a narrow, static and closed field of study, valuable for the interpretation of the actions of public bodies and the activities of Governments, but less valuable for those of us taking a wider view. How does one therefore develop a wider appreciation of regulatory activity beyond the horizon of the public regulatory activities envisaged by Selznick, Loughlin, Harlow and Rawlings?

5 See K Baloun, Inside Facebook: Life, Work and Visions of Greatness (Oxford, Trafford, 2007); J Liebowitz, Social Networking: The Essence of Innovation (Lanham, MD, Scarecrow Press, 2007); R Mansell (ed), Inside the Communication Revolution: Evolving Patterns of Social and Technical Interaction (Oxford, OUP, 2002).

6 R Baldwin and M Cave, Understanding Regulation (Oxford, OUP, 1999) at 1.

7 Ibid at 2.

8 Including, but not exclusively, the Office of the Water Regulator (Ofwat), the Office of the Gas and Electricity Markets Regulator (Ofgem), the Office of the Telecommunications Regulator (Oftel), the Office of the Rail Regulator (ORR), the Legal Services Commission, the Learning and Skills Development Agency, and the Disability Rights Commission.

9 P Selznick, ‘Focusing Organisational Research on Regulation’ in R Noll (ed), Regulatory Policy and the Social Sciences (Berkeley, University of California Press, 1985).

Conceptualising the Post-Regulatory (Cyber)state 289
One answer is to look at how the regulatory settlement changes or evolves. Over time regulatory systems break down and change due to variations in the regulatory environment. This natural evolution of the regulatory settlement takes two forms: (1) external flux and (2) internal flux. The first, external flux, is more commonly known as disruptive events. These occur when economic, environmental or technological developments affect the regulatory settlement, events such as the invention of the MP3 file, the widespread reduction in cost of silicon chips and storage memory or the development of cell phone technology. The second, internal flux, occurs when moral or social developments force a response. Currently, the introduction of smoking bans across Europe,10 as well as bans on tobacco advertising in sport,11 may be seen as a response to a social/moral imperative or internal flux. The problem with this thesis is that it quickly becomes impossible to map all the factors which affect both the environment and the subject. Almost any change, either internally or externally, may lead to regulatory failure, a position which is simply untenable if one is to claim any sort of understanding of regulation and the regulatory environment. As a result, some regulatory theorists turned to systems theory and to Cybernetics to model this complexity. Systems theory collectively represents a series of mathematical models based upon Ludwig von Bertalanffy’s concept of complex systems: that is, that systems are open to and interact with their environment, and can through interaction with their environment continually evolve.12 Systems theory is sometimes linked with chaos theory, although in fact the two are distinct; one of the best-known examples of systems theory is the Gardener’s Dilemma.13 The problem

10 See the Health Act 2006, ch 1 (England and Wales); Smoking, Health and Social Care (Scotland) Act 2005, pt 1 (Scotland); Public Health (Tobacco) Act 2002, s 47 (Eire).
11 Directive 2003/33/EC of the European Parliament and of the Council of 26 May 2003 on the approximation of the laws, regulations and administrative provisions of the Member States relating to the advertising and sponsorship of tobacco products. 12 L von Bertalanffy, General System Theory (New York, George Braziller, 1969). 13 The Gardener’s Dilemma is set out in full by David Post and David Johnson in their paper, ‘“Chaos prevailing on every continent”: Towards a New Theory of Decentralized Decision-Making in Complex Systems’ 73 Chicago-Kent Law Review (1998) 1055 at 1059. Imagine a garden consisting of many different plants of many different species, and a gardener who seeks to maximise some variable over the garden as a whole—total yield, for example. The gardener faces a particular decision: whether, with respect to each individual plant, to prune it back or leave it un-pruned. How can our gardener find the ‘best’ combination of pruned and un-pruned plants, the configuration that will produce the most luxuriant growth overall? The garden, we assume, has the following general characteristics. First, individuals are heterogeneous; the relationship between an individual’s state (pruned or unpruned) and its growth is different for each plant; for some plants, pruning will increase growth (by reducing the diversion of scarce nutrients, water, sunlight, etc., into unnecessary foliage), while for others the ‘shock’ of pruning will cause them to grow less vigorously. This heterogeneity may be due to differences among species—asparagus plants may 290 Andrew Murray with systems theory, though, is that it suggests that systems are computationally intractable: that is they quickly develop characteristics which are so complex that you cannot map them, never mind successfully regulate them. 
In response to this challenge, regulators and regulatory theorists turned to the work of the brilliant psychiatrist and cyberneticist W Ross Ashby and his ‘Law of Requisite Variety’, which states that ‘in active regulation only variety can control variety’.14 According to Ashby’s Law, variety absorbs variety; the Law defines the minimum number of states necessary for a controller to control a system of a given number of states. From this Ashby developed his ‘Good Regulator Theorem’, which states that ‘Every good regulator of a system must be a model of that system’.15 This theorem holds that any regulator that is maximally successful and simple must be isomorphic with the system being regulated. Thus Ashby is suggesting that regulators need to react differently to pruning, on average, than tomatoes—and to differences between individuals of the same species (because of differences in overall health or vigour or genetic constitution, for example). Second, there are substantial spillover effects between and among the individual plants, by which we mean that each individual’s growth can be affected—positively or negatively—by the condition and growth of other plants. The condition of an individual’s neighbours will help determine, for example, the amount of sunlight that is likely to penetrate through to any individual plant, the amount of nutrients likely to remain in the immediate proximity in the soil, and the like, all of which will in turn affect the growth of that individual. Third, each plant’s response to being in one state or another (pruned/unpruned) is endogenously determined; ie a function of the state of some number of other plants.
For example, a plant’s response to being pruned may depend on whether it is in an area of high or low sunlight; in the former, it may grow more vigorously if unpruned (which will allow it to take better advantage of the available sunlight), while with less sunlight available it may do better if unnecessary foliage is pruned away (or vice versa in some cases). And whether an individual plant is in an area of high or low sunlight in turn depends on the state (pruned/unpruned) of its neighbours. Let us state this problem a bit more formally. The garden is a system consisting of some number (N) of individual elements—individual plants. Each element can be in one of only two possible states—for purposes of our problem, pruned or unpruned. The garden can contain at any time some elements in one state and some in the other; we will call each particular combination of pruned and unpruned plants a different configuration of the system. Because the elements of this system can be in one of only two possible states, we can represent any system configuration by a string of 1s and 0s, where a 1 indicates that a particular element is in the first (pruned) state, a 0 the opposite. Each element’s contribution to the system variable we are seeking to maximise is a function of both its own state and, because of inter-individual spillover effects, the state of some number of other elements; that is, a change in one element’s state (from 0 to 1, or unpruned to pruned, or vice versa) affects both its own contribution, and the contribution of a number of other plants, to the overall yield of the garden, and, conversely, each plant’s contribution to overall yield is a function of both its own state and the state of a number of other plants in the garden.
Each configuration of the system produces some value for the system variable that we are seeking to maximise, and it is helpful to visualise a graph plotting the value of this system variable (aggregate yield in our example) against each different configuration of individual states (pruned and un-pruned) that the system can be in. Such a map of the way that this variable changes as the states of individual elements change produces a kind of ‘landscape,’ a multi-dimensional terrain that rises to ‘peaks’ of high yield in certain configurations and descends into ‘valleys’ of low yield for other configurations of the elements. The gardener’s dilemma, then, is to find a way to identify the system configuration—the combination of individual state settings for each of the plants—that produces the maximum yield of the garden as a whole, the highest point on the yield landscape. 14 WR Ashby, ‘Variety, Constraint, and the Law of Requisite Variety’ in W Buckley (ed) Modern Systems Research for the Behavioral Scientist (Chicago, Aldine, 1968). 15 RC Conant & WR Ashby, ‘Every Good Regulator of a System Must be a Model of that System’ (1970) 1 International Journal of Systems Science 89. Conceptualising the Post-Regulatory (Cyber)state 291 use a model in which they can project a regulatory response to each development, and in the same plane. But, as von Bertalanffy had already demonstrated, a complex system continues to evolve as it interacts with the external environment; this suggests a continual parallel evolution of the regulatory environment and the regulatory settlement, a problem restated by Lon Fuller as his ‘polycentric web’:16 whichever strand of the web the regulator pulls upon, unintended consequences will follow. This suggests regulators may never be sure what effect their intervention in the regulatory status quo may have: outcomes are unpredictable.
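The formal statement of the Gardener’s Dilemma (note 13) can be rendered as a short computational sketch. The following is an illustrative toy model, not drawn from Post and Johnson’s paper: each of N plants is pruned (1) or unpruned (0), each plant’s yield depends on its own state and on the states of K neighbouring plants (the ‘spillover’ effects), and the random yield tables are assumptions chosen only to produce a rugged landscape.

```python
import itertools
import random

random.seed(42)
N, K = 10, 2  # 10 plants, each coupled to 2 neighbours

# For each plant, a lookup table mapping (own state, K neighbour states)
# to that plant's yield contribution. Random values stand in for the
# heterogeneous plant biology described in the footnote.
tables = [
    {bits: random.random() for bits in itertools.product((0, 1), repeat=K + 1)}
    for _ in range(N)
]

def yield_of(config):
    """Aggregate yield of one pruned/unpruned configuration of the garden."""
    total = 0.0
    for i in range(N):
        neighbours = tuple(config[(i + j) % N] for j in range(1, K + 1))
        total += tables[i][(config[i],) + neighbours]
    return total

# Exhaustive search over the whole landscape: feasible only because N is
# tiny. The configuration space doubles with every plant added, which is
# the computational intractability the chapter describes.
best = max(itertools.product((0, 1), repeat=N), key=yield_of)
print(2 ** N)  # 1024 configurations for just 10 plants
```

Because the spillovers couple each plant to its neighbours, local tinkering (flipping one plant’s state) can lower overall yield even when it raises that plant’s own contribution, which is the regulatory point being made.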
To deal with this problem, the approach most commonly used by regulatory theorists today is to identify and evaluate macro-regulatory modalities which may be used by regulators to control patterns of behaviour within complex systems. An example of this practice may be seen in Baldwin and Cave’s taxonomy of regulatory strategies. In this, the authors outline eight (alternative) regulatory strategies: (1) command and control, (2) self-regulation, (3) incentives, (4) market-harnessing controls, (5) disclosure, (6) direct action, (7) rights and liabilities laws, and (8) public compensation.17 The authors describe these eight strategies as the application of the ‘basic capacities or resources that governments possess and which can be used to influence industrial, economic or social activity’.18 Thus, to use Baldwin and Cave’s model, government may (a) use legal authority and the command of law to pursue policy objectives, or it may (b) deploy wealth through contracts, loans, grants, subsidies or other incentives to influence conduct, (c) harness markets by channelling competitive forces to particular ends, (d) deploy information strategically, (e) act directly by taking physical action, or (f) confer protection to create incentives. Using this model, Baldwin and Cave examine how government may influence the outcome of any situation by applying a mixture of incentives and controls to achieve the desired outcome. This removes the discussion from the detail of the regulatory settlement, where the Law of Requisite Variety suggests analysis would be futile, to the higher level of regulatory structures and strategies.
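Ashby’s Law of Requisite Variety, invoked above, can be illustrated with a toy model. This construction is my own illustrative assumption, not Ashby’s formalism: a system is hit by one of D distinct disturbances, the regulator replies with one of R distinct responses, and the outcome is acceptable only when the response exactly cancels the disturbance. The regulator can then hold the outcome steady for every disturbance only when R is at least D: only variety can absorb variety.

```python
def controllable(D, R):
    """Can some response keep the outcome at 0 for every disturbance?

    Disturbances are 0..D-1, responses are 0..R-1, and the outcome of
    disturbance d met by response r is (d - r) mod D; only outcome 0 is
    acceptable, so each disturbance needs its own matching response.
    """
    disturbances = range(D)
    responses = range(R)
    return all(any((d - r) % D == 0 for r in responses) for d in disturbances)

print(controllable(6, 6))  # True: the regulator matches the disturbance variety
print(controllable(6, 3))  # False: too little variety in the regulator
```

The same counting argument is what makes the Gardener’s Dilemma discouraging for regulators: the variety of a coupled system grows exponentially, so a regulator of fixed variety is quickly outstripped.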
In attempting to provide a model which simplifies the variety of regulatory strategies discussed above, Mark Thatcher models how regulators seek to influence behaviour and suggests four families of regulatory interpretation: (1) classical economics, where regulation is an interference in the market that may be necessary; (2) political economy, where regulation is inherent to society, and is used by the state to ensure that the market functions; (3) political science and law, where regulation steers public activity and is concerned with controls over private activity; and (4) sociological, where regulation consists of informational norms that guide behaviour.19 An attempt to extend the traditional model of regulatory analysis 16 L Fuller, The Morality of Law (New Haven, Conn, Yale UP, 1964) at 106. 17 n 6, above, ch 4. 18 Ibid. 19 M Thatcher, Explaining Regulation, Day 1, Sessions 4 and 5: a paper delivered at the Short Course on Regulation, The London School of Economics and Political Science, 11–15 September 2000. into Cyberspace was made by Lawrence Lessig in his monograph Code and Other Laws of Cyberspace. In this Lessig seeks to identify four ‘modalities of regulation’: (1) law, (2) market, (3) architecture, and (4) norms, which may be used individually or collectively, either directly or indirectly, by regulators.20 Each modality thus has a role to play in regulating individual decisions. Lessig suggests that the true regulatory picture is one in which all four modalities are considered together.
Regulators will design hybrid regulatory models, choosing the best mix of the four to achieve the desired outcome.21 Similarly, Colin Scott and I, in our paper Controlling the New Media: Hybrid Responses to New Forms of Power, suggest a focus on hybrid models of regulation.22 We, like Professor Lessig, suggest four modalities of regulation, which we title: (1) hierarchical control, (2) competition-based control, (3) community-based control, and (4) design-based control. We recognised that the development of regulatory structures is often organic in nature; nevertheless, we imagined regulatory bodies, through the employment of hierarchical controls, fashioning the structure of such organically developed systems.23 Thus ultimately we supported the consensus that regulators design regulatory systems.
Complexity and Cyberspace
These traditional regulatory models all share a common foundation. All are modelled upon the belief that regulatory designs are based upon active choices made by regulators: they suggest a regulator who works within a settled environment and who has time to positively consider policy decisions. It is as if all the variables in the garden in the Gardener’s Dilemma have suddenly frozen in time, allowing the gardener to evaluate the best approach to take: should he pursue a policy of pruning, or watering, to achieve the desired result? Thus all these models (Regulatory Systems, Regulatory Interpretations and Regulatory Modalities) share a common weakness of assuming a ‘causally deterministic regulatory universe’: that is, the idea that future regulatory events are necessitated by past and present events combined with the laws of nature.24 This is similar to the state of affairs in which the discipline of Physics found itself during the Regency and Victorian periods.
Pierre Simon, Marquis de Laplace, is usually held up as the archetypal theorist of causal (or nomological) determinism, most famously exhibited through the thought experiment of Laplace’s Demon. This experiment asks us to imagine an entity that knows all facts about the past and the present, and knows all natural 20 L Lessig, Code and Other Laws of Cyberspace, ver.2.0 (New York, Basic Books, 2006) at 122–123. 21 Ibid. 22 C Scott and A Murray, ‘Controlling the New Media: Hybrid Responses to New Forms of Power’ (2002) 65 MLR 491. 23 Ibid at 505. 24 It should be noted that there is no role for external events in these models, which, applying systems theory, we have to account for. laws that govern the universe. Such an entity might, under certain circumstances, be able to use this knowledge to foresee the future, down to the smallest detail. Thus, to paraphrase Laplace’s famous Deterministic Universe:25 ‘There should be a set of scientific laws that would allow us to predict everything that would happen in the (regulatory) universe, if only we knew the complete state of the (regulatory) universe at one time.’ But the deterministic views of Laplace, and others such as Lord Kelvin, were soon to be challenged from within the community of physicists. For every argument in favour of determinism there is a counter-argument for indeterminism. Accepting indeterminism means embracing uncertainty. Sometimes, just as systems theory predicts, there is no way to accurately predict the outcome of any action (or inaction). This is most famously outlined by the physicist Werner Heisenberg.
Heisenberg won the Nobel Prize for Physics for ‘the creation of quantum mechanics, the application of which has, inter alia, led to the discovery of the allotropic forms of hydrogen’, and developed matrix mechanics, the first formalisation of quantum mechanics, but he is most famous for the ‘Heisenberg Uncertainty Principle’, which states: ‘It is not possible to simultaneously determine the position and momentum of a particle. Moreover, the better the position is known, the less well the momentum is known (and vice versa).’26 The Heisenberg Uncertainty Principle means that quantum physicists must accept that there are areas they cannot measure, or as Professor James Trefil has observed, ‘scientists had encountered an area of the Universe that our brains just aren’t wired to understand’.27 Accepting this limitation, 20th-century physicists set about examining the quantum universe and immediately started to encounter a whole series of events which Laplace and his contemporaries and predecessors could never have imagined. New terms such as ‘Quantum Improbability’ and ‘Quantum Weirdness’ were developed to explain phenomena previously thought impossible: phenomena such as ‘Quantum Entanglement’, whereby certain pairs of subatomic particles, even when separated by huge distances, can instantly ‘know’ what the other is doing. Thus if you spin one particle in a pair, the other immediately starts spinning at the same speed in the opposite direction. This apparently instantaneous communication violates a key principle of Einstein’s Special Theory of Relativity: that nothing can travel faster than the speed of light. The problems that quantum theories such as this caused led some physicists, most notably Albert Einstein, to regard quantum theory with contempt.28 Despite such assaults, the study of 25 Discussed in R Schock, ‘On determinism, the universe, and related concepts’ (1962) 14 Synthese 255.
26 See W Heisenberg, ‘Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik’ (1927) 43 Zeitschrift für Physik 172. Discussed in D Cassidy, ‘Certain of Uncertainty’ in Uncertainty: The Life and Science of Werner Heisenberg (New York, WH Freeman, 1991). 27 Quoted in B Bryson, A Short History of Nearly Everything (New York, Broadway, 2003) at 189. 28 Famously, Einstein, Podolsky and Rosen developed the EPR thought experiment to challenge the completeness of quantum theory, leading Einstein to call Quantum Entanglement ‘spooky action at a distance’. Einstein also famously stated ‘God does not play dice’ in response to the apparent unpredictability of quantum physics. quantum physics flourished, and quantum theory now provides the foundation of our modern appreciation of the Universe. Physicists have today embraced the complexity of the Universe in an attempt to better understand it. Quantum theory is being applied in the development of everything from computing to cryptography, and forms the basis of contemporary attempts to develop a Laplacian ‘Grand Unified Theory’, attempts popularised in Stephen Hawking’s scientific bestseller A Brief History of Time and Brian Greene’s Elegant Universe.29 It seems a bit far-fetched to suggest a link between the work of Werner Heisenberg and that of Jeremy Clarkson, but sometimes academics gain value from looking beyond the narrow academic horizon. Jeremy Clarkson is a popular, and populist, journalist. He writes weekly columns for the Sunday Times and for the Sun. He also presents the popular television motoring show Top Gear and is the author of several bestselling books such as The World According to Clarkson and Born to Be Riled.
On 28 January 2007 he wrote a column for the Sunday Times entitled ‘The end is nigh, see it on YouTube’.30 In it he challenges the idea of a causally deterministic regulatory universe.31 He opens in his inimitable style:
My biggest fear for the future of the planet and the wellbeing of our children is YouTube…you can log on if you wish to see next week’s episode of 24. This means the producers of 24 have gone to all the trouble of making a show, and paying the actors … and then finding no television company in the world is all that bothered about screening it because everyone’s seen it already on the web. Fine, you might think. YouTube will be forced to treat the copyright laws with a bit more respect and that will be that. Except it won’t. Because the Internet’s like mercury, as soon as it becomes impossible to post copyrighted material on YouTube, some other computer nerd in Bangladesh will, for an outlay of 35p, start a new video sharing site. And you’ll be able to post it there.
In this statement Clarkson captures many of the problems surrounding cyber-regulation in particular, and regulation in the global, networked society more generally. Firstly, he captures regulatory arbitrage. This is seen in the last sentence: ‘and you’ll be able to post it there’. Here he is saying that if you find the regulatory matrix being applied to you undesirable, the nature of the network allows you to seek out alternative outlets (regulated by alternative regulators) in a way a non-networked environment does not. Thus if UK copyright law is restrictive, use an intermediary bound by another set of regulatory values. This idea is of course linked to jurisdictional limitations (his reference to Bangladesh is not throwaway). UK regulators may only regulate in the UK environment.
Despite some limited successes by regulators against distributors and technology 29 B Greene, Elegant Universe: Superstrings, Hidden Dimensions and the Quest for the Ultimate Theory (New York, Random House, 2000). S Hawking, A Brief History of Time: From the Big Bang to Black Holes (London, Bantam Books, 1995). 30 Available from: accessed 10 June 2008. 31 I do not imagine he realised or intended this, but I may be doing the man a disservice. companies in cases such as UEJF and LICRA v. Yahoo!,32 the French Loi relative au Droit d’Auteur et aux Droits Voisins dans la Société de l’Information,33 Microsoft Corporation v. Commission of the European Communities,34 and MGM et al. v. Grokster,35 the technological genie departs the bottle, and the low (or even non-existent) barriers to entry in most online marketplaces mean that someone else will take advantage of the technology. The reach of regulators is restricted; the reach of the technology is global. Thus although I must comply with UK copyright law, the ‘computer nerd in Bangladesh’ does not. I may use tools to shield myself, such as anonymity and pseudonymity (which may or may not be effective), but if I believe them to be effective I will trust to them. All these things are possible because of the design of the network. With our ability to endlessly change our environment, any attempt to design an environmental response has to be accepted by the community as a whole or it will simply be engineered around.
Further, because of the non-intelligent nature of the network, and our inability to tell by looking whether or not an informational bit forms part of a copyright-protected image, sound file or text file or a copyright-free equivalent (or whether the poster is the copyright holder or not), attempts to externally regulate an environment built upon perfect horizontal communications, and perfectly designed to store and carry information (whether legal or not, and whether owned by another or not), seem doomed to failure. Thus Clarkson’s thought experiment, like those of Heisenberg before him, suggests that attempts to externally regulate the digital environment suffer from their own uncertainty principle. There will be those among the readership who at this point believe that I am falling into a familiar trap, the very trap that caught eminent scholars such as David Post and David Johnson36 and opinion formers such as John Perry Barlow.37 They believe that this argument has run its course, that all this paper is doing is restating the Cyberlibertarian ethos of the 1990s.38 The Clarkson thought experiment does look remarkably like old-school Cyberlibertarianism, with issues such as 32 UEJF and Licra v Yahoo! Inc. and Yahoo France, Tribunal de Grande Instance de Paris, 22 May 2000, available from accessed 10 June 2008; Tribunal de Grande Instance de Paris, 20 November 2000, available from http://www.lapres.net/yahen11.html accessed 10 June 2008; Yahoo Inc v LICRA, 169 F.Supp. 2d 1181 (ND Cal 2001); Yahoo Inc v LICRA, 433 F 3d 1199 (9th Cir 2006). 33 For full details on this law and its effects see N Jondet, ‘La France v. Apple: who’s the dadvsi in DRMs?’ (2006) 3:4 SCRIPT-ed 473, available from http://www.law.ed.ac.uk/ahrc/script-ed/vol3-4/jondet.asp accessed 10 June 2008. 34 Case T–201/04, Judgment of the Court of First Instance (Grand Chamber), 17 September 2007. 35 545 US 913.
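The point above about the network’s inability to tell by looking at the bits can be made concrete. The following is a constructed illustration, not from the chapter, and the payload and field names are invented for the example: a licensed copy of a work and an infringing copy of the same work are byte-for-byte identical, so no inspection of the data in transit can reveal its legal status; that status lives in facts outside the data.

```python
import hashlib

payload = b"some audio frames"  # hypothetical file contents, identical in both copies

# Legal status is out-of-band metadata that the network never carries.
licensed = {"bytes": payload, "licence": "granted"}
infringing = {"bytes": payload, "licence": None}

# On the wire the two copies are indistinguishable: same bytes, same hash.
assert (hashlib.sha256(licensed["bytes"]).hexdigest()
        == hashlib.sha256(infringing["bytes"]).hexdigest())

# Only the out-of-band metadata differs.
print(licensed["licence"], infringing["licence"])  # granted None
```

Any intermediary asked to filter infringing content therefore has to reach beyond the payload itself, to context the network was not designed to transmit.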
36 See D Johnson and D Post, ‘Law and Borders—The Rise of Law in Cyberspace’ (1996) 48 Stanford Law Review 1367. 37 JP Barlow, A Declaration of Independence for Cyberspace, available at accessed 10 June 2008. 38 A Murray, The Regulation of Cyberspace: Control in the Online Environment (Oxford, Routledge-Cavendish, 2006), 5–9. regulatory arbitrage, movement between regulatory competences, anonymity and pseudonymity, and the concept of a ‘community-led’ free market in regulation. The classical Cyberlibertarian position has long been discredited by the Cyberpaternalist school, which demonstrated the role of external regulatory controls in cyberspace.39 To quote Joel Reidenberg: ‘the political governance process ordinarily establishes the substantive law of the land. For Lex Informatica, however, the primary source of default rule-making is the technology developer and the social process by which customary uses evolve.’40 Therefore, Lex Informatica can be seen as an important system of rules analogous to a legal regime. According to this view, internet-related conflicts and controversies reflect a state of flux in which Lex Informatica and established legal regimes are intersecting. In the light of Lex Informatica’s dependence on design choices, the attributes of public oversight associated with regulatory regimes could be maintained by shifting the focus of government actions away from direct regulation of Cyberspace, toward influencing changes to its architecture. But, although I do not, and cannot, argue with this stance, I think it is wrong simply to replace Cyberlibertarianism (no control) directly with Cyberpaternalism (control through code) without further evaluation of the regulatory matrix within this complex, global, networked environment. Although Cyberlibertarianism is not supportable today, I propose that the Clarkson position represents Cyberlibertarianism 2.0: the Post-Regulatory (Cyber)state.
Cyberlibertarianism 2.0 or the Post Regulatory (Cyber)state
The challenge of the Clarkson position is different from that of traditional Cyberlibertarianism. It does not suggest that external regulation will always be ineffective unless supported by the community (it explicitly recognises the effectiveness of external regulation on ‘responsible citizens’ such as YouTube). To 39 Cyberpaternalism developed in the mid 1990s. Despite sympathising with the view that internetworking leads to the disintegration of territorial and substantive borders as key paradigms for regulatory governance, cyberpaternalists such as Joel Reidenberg argued that new models and sources of rules were being created in their place. Reidenberg, in his famous paper Lex Informatica (below n 40), identified two distinct regulatory borders arising from complex rule-making processes involving States, the private sector, technical interests, and citizen forces. Each of these borders was seen as establishing the defining behavioural rules within their respective realms of the networking infrastructure. The first set of borders encompassed the contractual agreements among various Internet Service Providers. The second type of border was the network architecture. The key factor at this level, he claimed, was the technical standards, because they establish default boundary rules that impose order in network environments. To this end, he argued that, rather than being inherently unregulable due to its design or architecture, the internet is in fact regulated by its architecture. 40 J Reidenberg, ‘Lex Informatica: The Formation of Information Policy Rules Through Technology’ (1998) 76 Texas Law Review 553 at 567. begin our evaluation of the Post-Regulatory (Cyber)state, we need to look at its components. Let’s start with the Post-Regulatory state.
The genesis of the concept of the post-regulatory state is hard to pin down, but is probably to be found in Julia Black’s Current Legal Problems paper Decentring Regulation: Understanding the Role of Regulation and Self-Regulation in the Post Regulatory World.41 In this, Black uses post-regulatory as a synonym for the transition from direct regulation (the regulatory state) to indirect regulation (the post-regulatory state). Probably, though, the first fully-ordered analysis of the post-regulatory state is in Colin Scott’s paper Regulation in the Age of Governance: The Rise of the Post Regulatory State,42 where he defines the post-regulatory state as the next stage of development for the State: from the post-war Welfare State, to the Regulatory State of the 1980s and early 1990s, to the current Post-regulatory state. In his paper Scott identifies four characteristics of the Post-Regulatory State. (1) Variety in Norms: whereas the regulatory state has core norms of primary and secondary legislation (the only forms of rule-making in which the state uses its monopoly of legal force over economic and social actors), the post-regulatory state uses a plurality of state actors with formal rule-making capacity (including agencies, sub-national governments and supranational institutions), such that rules may be multiple and overlapping, with meaning assigned through processes of interpretation which are contingent upon a variety of factors. (2) Variety in Control Mechanisms: if a central characteristic of the regulatory state is an emphasis on hierarchy as an instrument of control, then a key feature of the post-regulatory state is a shift towards other bases for control, such as markets, social order or design. (3) Variety in Controllers: within the regulatory state literature state regulatory bodies are accorded a special place.
In contrast, no special legitimacy or value is placed on attributing control functions to state bodies—government departments, agencies and courts—within post-regulatory state thinking. Standard setting is observed at supranational level through a wide range of general and specific governance institutions such as trade associations, trade unions and NGOs. (4) Variety in Controllees: the regulatory state literature has traditionally viewed businesses as the key regulatees.43 The post-regulatory state perspective takes a wider view, recognising that the behaviour of a wider range of actors is relevant to the outcomes of the ordering of social and economic life, including government itself and individual actors. Of course, there could be no post-regulatory (Cyber)state if there were no (Cyber)state. Here the argument in favour of the post-regulatory (Cyber)state has problems because, as pointed out by Cyberpaternalists in the 1990s, Cyberspace does not exist as a separate state. This in fact is often referred to as the Cyberspace Fallacy, or by Jack Goldsmith as Fallacy #1.44 41 n 4, above. 42 n 4, above. 43 Ibid. 44 J Goldsmith III, ‘Regulation of the Internet: Three Persistent Fallacies’ (1998) 73 Chicago-Kent Law Review 1119. I cannot sustain an argument that a separate (Cyber)state exists; to do so would add folly to fallacy. But I do believe that there are some unique features of Cyberspace and Cyber-regulation which make the study of the abstract (Cyber)state valuable. At the heart of my analysis are the very features which formed the heart of Cyberpaternalism in the 1990s: the unique environment of Cyberspace, with its man-made and flexible architecture.
I base my analysis on two classical Cyberpaternalist concepts: Yochai Benkler’s simplified network layers model, based upon the Open Systems Interconnection Basic Reference Model, which he used (and uses) to describe how regulation introduced in one layer may regulate effectively in other network layers;45 and the concept of design-based regulation as a substitute for direct (or hierarchical) regulation. The latter is at the heart of Joel Reidenberg’s Lex Informatica, an exegesis on how technical design tools, in particular code, could be developed as a substitute for legal controls in the Cyber-regulatory sphere,46 and of course of the work of Lawrence Lessig, who in his stunning reply to Frank Easterbrook’s challenging paper Cyberspace and the Law of the Horse,47 produced a new phrase for the Cyber-regulatory lexicon: Code is Law.48 In these papers we see a common theme: all embrace, and indeed are founded upon, an increased role for design through technology and networks in the regulatory matrix. This is not a surprise: regulators often vary their language and even culture to reflect the regulatory environment they seek to control, and it is to be expected that (cyber)regulators would use the language and culture of Cyberspace in seeking to exert control. Given that Cyberspace is created by a fusing of digital technology and communications, this is the expected result. But if we look a little deeper we see they are also linked by the theme of a ‘new’ regulatory partnership between Law and Technology. Much play is made of indirect regulation, where East Coast Codemakers may mandate West Coast Codemakers to achieve a regulatory outcome.49 Therefore an examination of the Cyberpaternalist approach developed in the 1990s shows it to have all of Scott’s characteristics of the Post-Regulatory State. Thus although the (Cyber)state never was a welfare state or a regulatory state (or even a State), it is the model Post-Regulatory State.
With the recognition that the cyber-state functions as a Post-Regulatory state, the basic question remains: how should regulators model regulatory interventions in the complex environment of the post-regulatory state, whether it be the 45 Y Benkler, ‘From Consumers to Users: Shifting the Deeper Structures of Regulation Toward Sustainable Commons and User Access’ (2000) 52 Federal Communications Law Journal 561. 46 J Reidenberg, n 40, above. 47 FH Easterbrook, ‘Cyberspace and the Law of the Horse’ (1996) University of Chicago Legal Forum 207. 48 L Lessig, ‘The Law of the Horse: What Cyberlaw Might Teach’ (1999) 113 Harvard Law Review 501. 49 See eg Scott & Murray, n 22 above; R Brownsword, ‘Code, Control, and Choice: Why East is East and West is West’ (2005) 21 Legal Studies 1; R Brownsword, ‘Neither East Nor West, Is Mid-West Best?’ (2006) 3:1 SCRIPT-ed 15, available from accessed 10 June 2008. 50 Benkler, above n 45 at 564. 51 New York, Random House, 2001. 52 Ibid at 148. (Cyber)state or, more prosaically, the traditional state? The first thing to bring to mind is the value and importance of layers. Most things in life require strong foundations, or roots, to withstand the ravages of time. This is of value to regulators. As a human society we build; or rather, we take materials (and concepts) and develop higher-level products (or outcomes). Yochai Benkler demonstrated this with communications networks, as seen in Figure 13.1.50 The key to Benkler’s model is the recognition that by introducing a regulatory modality at one of these layers you may regulate vertically, but that such vertical regulation is only effective from the bottom up: regulation in a supporting layer is effective in the layers above, but does not affect the layers below.
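The bottom-up asymmetry of Benkler’s layered model can be sketched in a few lines. The modelling choice below is my own, not Benkler’s notation; the layer names follow the text’s physical infrastructure, logical infrastructure and content layers.

```python
# The layers in dependency order: each layer depends on everything below it.
LAYERS = ["physical infrastructure", "logical infrastructure", "content"]

def affected_layers(intervention_layer):
    """Layers reached by a regulatory intervention at a given layer.

    An intervention reaches the layer itself and every layer built on top
    of it, but never the supporting layers beneath: regulation propagates
    only upwards.
    """
    i = LAYERS.index(intervention_layer)
    return LAYERS[i:]

print(affected_layers("logical infrastructure"))
# ['logical infrastructure', 'content']
print(affected_layers("content"))
# ['content'] -- no effect on the supporting layers below
```

The one-directional dependency is why, in the examples that follow, a change in the content layer leaves the logical and physical layers untouched, while control of the physical layer conditions everything above it.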
This is because the higher layers rely upon the infrastructure of the lower (they are their foundations or roots); without reciprocal reliance, the reverse is not true. An amendment in the content layer has no direct effect on the logical infrastructure layer or the physical infrastructure layer. This has been seized upon by regulators and commentators across the Media & Communications regulation spectrum, such as Professor Lessig, who in his book The Future of Ideas51 explained that the Internet Protocols (the code layer) only developed because the telephone companies (the controllers of the physical layer) did not intervene in the development of the network.52 Layers are, though, only part of the complexity of the Cyber-regulatory model.

Figure 13.1: Benkler’s Layers.

The second concept which is key in mapping the Cyber-regulatory environment is the environment itself. As both Professors Reidenberg and Lessig demonstrated, the environment has a unique characteristic in its malleability, a result of the man-made nature of code.53 In physical space environmental modalities suffer from a high degree of inertia integral to the physical laws of the Universe. This inertia can be most clearly illustrated by the second law of thermodynamics, which states that a closed system will remain the same or become more disordered over time: in other words, its entropy never decreases. The natural order of the Universe is that our physical surroundings (environment) become less regulated over time. To overcome this, or in other words to harness design-based controls, we must bring an external force to bear. Thus environmental modalities are resource intensive: to utilise an environmental modality the regulator must expend considerable initial resources to overcome this universal inertia.
For this reason, in those areas of regulatory policy where environmental modalities have traditionally been used, such as transport policy, we see a large proportion of the regulator’s resources being expended on design and construction. The development and construction of controls such as road humps, one-way systems, directive road layouts and traffic control systems consume a considerable part of the transport planner’s annual budget. Even simple design-based systems such as the Inland Revenue’s self-assessment tax regime consume a considerable amount of resources in their implementation, although it should be recognised that the employment of design-based systems is often self-financing in the longer term due to the self-enforcing nature of most such controls. Despite this, the large initial investment required to overcome the environment often militates against the extensive use of environmental modalities in the regulation of physical space. The employment of socially-mediated modalities, by comparison, does not usually require the overcoming of such inertia. Thus in designing regulatory structures for the physical world we usually give pre-eminence to socio-legal (or socially-mediated) modalities of regulation. Once one ventures into the higher layers of Cyberspace, though, the environmental inertia imposed by the second law of thermodynamics no longer applies. This release allows for a new flexibility in the relationship between law, society and design. It is this which forms the basis of Socio-Technological-Legal Theory (STL). With the inertia of the physical laws overcome, we can map a new regulatory model in which environmental modalities are equally functional with socially-mediated modalities. In the STL model we can exploit regulatory settlements which design the environment. Understanding that regulatory discourse may include technology is another step in understanding regulation. The final concept I want to introduce is the power of the network.
Cyberspace is a space of perfect (or near-perfect) communications. It allows us to feel closer to others in the network and allows us to form new micro-communities. It is truly a polycentric community. Professor Lessig mapped the individual as a ‘pathetic dot’ in the middle of competing and overlapping regulatory modalities, and spoke of regulators choosing the best mix of regulatory modalities to control the individual’s actions.54 In truth the individual dot is part of a complex community of dots who, through Information and Communications Technologies, are empowered to gather and communicate more perfectly as individuals than at any time in our history (and it is fair to assume this ability will continue to grow and develop). Thus where regulators vie for regulatory acceptance they do not act in a regulatory vacuum: any action by any one member of the regulatory matrix (either as regulator or regulatee) has an effect on the actions of the others. This is because all regulators and regulatees form part of an environmental system, and a change in any one aspect of this environment affects all who participate in that environment.55 It is wrong to imagine the regulatory subject, or ‘pathetic dot’, as a merely passive receiver sitting at the middle of a torrent of regulatory demands. Rather the regulatory subject may be seen as simply another part of the regulatory matrix: they may be the focus of the regulator’s attentions, but they are also part of a Complex System and, as we saw when discussing the Gardener’s Dilemma, the actions of the regulatee affect the regulatory environment as much as those of the regulators, as may be seen in Figure 13.2. At each point in the regulatory matrix a regulatory intervention may be made, but the complexity of the matrix means that it is impossible to predict the response of any other point in the matrix. 53 See Reidenberg, above n 40; Lessig above n 20 at 120–38.
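The claim that an action at any one point of the matrix affects every other point can be illustrated with a toy network model. This sketch is entirely my own, and the four actor labels are hypothetical; it simply shows that in a fully connected matrix a single intervention reaches every node.

```python
# Illustrative sketch (not from the chapter): the 'active dot' matrix as a
# directed network in which every node is simultaneously regulator and
# regulatee. A signal introduced at one node propagates to all the others.

import itertools

def propagate(edges, start, rounds=3):
    """Breadth-first spread of a regulatory signal through the matrix."""
    reached = {start}
    frontier = {start}
    for _ in range(rounds):
        frontier = {b for a, b in edges if a in frontier} - reached
        reached |= frontier
    return reached

# Hypothetical matrix: every actor communicates with every other actor.
actors = ["state", "market", "community", "designers"]
edges = [(a, b) for a, b in itertools.permutations(actors, 2)]

# One intervention by the state reaches the whole matrix in a single round.
print(sorted(propagate(edges, "state", rounds=1)))
# → ['community', 'designers', 'market', 'state']
```

The same holds starting from any node: because every dot is connected to every other, no intervention stays local, which is precisely why the matrix's response is hard to predict.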
Regulation within the complex, malleable, layered environment is considerably more difficult to model than traditional regulation within physical space. This complexity becomes exponentially more difficult to calculate with each added modality, and as a potential modality may be something as simple as a new software applet, we see that Cyberspace is becoming increasingly difficult to model. 54 Lessig above n 20 at 122–4. 55 See Murray, above n 38 at 234–40.

Figure 13.2: Murray’s ‘Active Dot’ Matrix.

Regulating the Post-regulatory Cyberstate

Does this mean that Cyberspace is inherently unregulable? The answer is, of course, no. The list of successful regulatory interventions into the Cyberstate is extensive.56 What we are witnessing is a formalisation of the power of the community. While regulatory theorists often discuss the formalisation of community standards through norms, communities have always more subtly affected the regulatory environment. Thus consumers have previously used their market power to decide the fate of Betamax vs VHS, Laserdisc and MP3. Some were heavily supported in the market by developers (Laserdisc) while others developed more organically (MP3). Picking a winner or loser is still extremely difficult. Similarly the success of budget airlines, particularly in Europe where rail travel is often subsidised by the state, demonstrates that the community matrix is more concerned with travelling cheaply than with the environment, despite politicians and media outlets trying to convince it otherwise. The key to understanding and designing successful regulation in the post-regulatory state (all post-regulatory states, not just the Cyberstate) is to depart from the accepted wisdom that regulators regulate and regulatees are merely actors to be regulated, or as Lawrence Lessig said, merely ‘pathetic dots’.57 The dot is not pathetic. The dot, as demonstrated in Figure 13.2 above, is part of the dot community.
The dot community forms a matrix which determines whether a regulatory intervention succeeds or fails. The dot community supported YouTube over Google Video, despite the massive market presence of Google.58 The dot community (in the UK) has rejected many of the directive effects of the Obscene Publications Acts.59 The dot community rejects DRM technology, both through direct action and indirectly by legal intervention.60 Thus the dot community decides the success or failure of a regulatory intervention. 56 Successes include the regulation of the UK Domain Name System and the creation of the Nominet Dispute Resolution Service, a low-cost ADR process; the signing of the Convention on Cybercrime (Council of Europe, ETS No 185, Convention on Cybercrime, Budapest, 23 November 2001) and the success states have had in co-operatively policing child pornography under the convention and other joint operations; and the Unlawful Internet Gambling Enforcement Act of 2006 (HR 4411), which has successfully regulated online gambling in the United States by banning payments to gambling providers. 57 Lessig above n 20 at 122. 58 There is no doubt that YouTube, an independent start-up, quickly overtook Google Video to become the no 1 video-sharing site on the internet with considerably fewer assets at its disposal. Customers quickly identified the key advantages of YouTube and, through viral communications, drove a migration from the large corporate site to the start-up, eventually leading to Google buying YouTube for $1.65bn. 59 Although (understandably) figures are hard to verify, it is estimated in one major UK survey that nine million men and 1.4 million women used pornographic websites in the year 2005. See Porn UK, Independent on Sunday, 28 May 2006, available from: accessed 10 June 2008. Much of the content consumed can only be assumed to be in breach of the terms of the Obscene Publications Acts.
Returning to the Clarkson scenario, we find it is not a foregone conclusion, as he suggests, that copyright is doomed to fail in Cyberspace. We, the dot community, choose whether or not we support the re-assertion of copyright over online content by choosing whether or not to support alternatives to YouTube in the event that it complied with a copyright ruling in regard to Fox copyrighted content such as ‘24’. The problem is that regulators traditionally see their role as ‘regulating the community’. They believe the problems are caused by the actions of the community in seeking to ‘engineer around’ their controls, as can be seen in the regulatory response to peer-to-peer file sharing,61 reselling and parallel importation,62 and the regulation of adult content.63 They mistakenly see the community as a passive collective there to be controlled. They believe that the community is a static body to be regulated, and that their role is to make an intervention causing a shift to another static settlement. Regulators then examine this outcome and declare themselves satisfied (regulatory success) or dissatisfied (regulatory failure), and the whole process begins over. In truth the process of regulation is much more complex. All parties in a regulatory environment continually and simultaneously act as regulator and regulatee. Changes within the regulatory environment are therefore constant, and as a result the first stage in designing a regulatory intervention in any complex regulatory environment, including Cyberspace, is to develop a dynamic model of the environment, recording all parties and mapping their contemporary regulatory settlements with each other. This recognises the role of the active community. The value of the regulatory matrix (shown above), for regulators and for regulatory theorists, is as a substitute for traditional static regulatory models.
If we look at the failure of the Internet Corporation for Assigned Names and Numbers (ICANN) to achieve widespread acceptance within the Cybercommunity, and with it legitimacy, we see structural failures in the regulatory intervention which led to ICANN’s creation: in other words, ICANN was flawed from its inception.64 ICANN was created by an executive action of the US Government. This action, represented by Point A in Figure 13.3, was an external regulatory intervention into the settled regulatory matrix. It was the intention of the US Government to bring stability to the process of awarding and managing domain names and to bring a degree of public accountability to the process. 60 Direct action includes the development of anti-DRM tools by the Cracker Community (crackers are individuals or groups who seek to engineer code solutions to closed or locked code), while legal interventions include the French Loi relative au Droit d’Auteur et aux Droits Voisins dans la Société de l’Information, above n 33, which led to the iTunes/EMI announcement in April 2007 that they would make available higher-quality, DRM-free music (at a price). 61 See (among others) A&M Records Inc v Napster Inc 114 F Supp 2d 896 (ND Cal 2000); Buma/Stemra v KaZaA, Amsterdam District Court, 29 November 2001, rolnummer KG 01/2264; Universal Music Australia Pty Ltd v Sharman License Holdings Ltd [2005] FCA 1242; MGM et al v Grokster et al 125 SCt 2764 (2005); Digital Millennium Copyright Act (1998)(US) and Directive on the harmonisation of certain aspects of copyright and related rights in the information society, Dir 2001/29/EC. 62 See Independiente and ors v Music Trade-Online (HK) Ltd [2007] EWHC 533 (Ch). 63 See, eg, the abortive US Communications Decency Act of 1996. 64 See Murray, above n 38 at 234.
In fact the existence of ICANN has arguably destabilised the domain name system, while ICANN itself has been repeatedly criticised for being unaccountable.65 The question this raises for regulators and regulatory theorists is: why has this happened? Fortunately, some of the reasons for ICANN’s regulatory failures become apparent when we examine the effect it had on the regulatory matrix. Point B represents the United Nations in the guise of the World Intellectual Property Organisation (WIPO). WIPO saw the creation of ICANN initially as a threat, then as an opportunity.66 When invited by the US Department of Commerce to create a set of policy recommendations for ICANN with regard to Intellectual Property Rights, WIPO produced first a Green Paper, then a Final Report, highly favourable to trade mark holders.67 In so doing WIPO caused further changes, and tensions, within the regulatory matrix. One was the effect of alienating a large proportion of domain name owners, represented by Point C. 65 M Mueller, Ruling the Root (Cambridge, MA, MIT Press, 2002); M Froomkin, ‘Wrong Turn in Cyberspace: Using ICANN to Route Around the APA and the Constitution’ (2000) 50 Duke Law Journal 17. 66 Murray, above n 38 at 109–14. 67 WIPO, The Management of Internet Names and Addresses: Intellectual Property Issues, RFC-3, 23 December 1998, available at accessed 10 June 2008; WIPO, The Management of Internet Names and Addresses: Intellectual Property Issues, Final Report of the WIPO Internet Domain Name Process, 30 April 1999. Report available from accessed 10 June 2008.

Figure 13.3: The Regulatory Impact of ICANN.

Critics claimed ICANN was biased in favour of trade mark holders, and the community responded both
through organised media campaigns and more directly through the election of highly critical candidates in ICANN’s At-Large elections.68 The actions of regulatory bodies such as the US Government and WIPO not only affected consumers: regulatory tensions were also created with other regulators. The European Union, represented at Point D, was concerned that the creation of ICANN could establish ‘permanent US jurisdiction over the Internet as a whole, including dispute resolution and trademarks used on the Internet’.69 Although some of the concerns of the EU were addressed by the US Department of Commerce, there remains a degree of tension between the EU and ICANN which permeated the extensive discussions on the creation of the .eu top level domain. The European Union’s actions did not, though, end with the creation of ICANN. The EU states are influential members of the United Nations, and they, along with many others, have pushed the issue of Cyber-regulation onto the UN agenda through the World Summit on the Information Society (WSIS), which is represented in our model by Point E. WSIS is the key current regulatory intervention into the world of ICANN. WSIS is the highest-profile event to date to deal with the threats and opportunities offered by Information and Communications Technology (ICT).
The need for a UN Summit on this issue was first identified by the International Telecommunication Union in 1998 when, by Resolution 73 of the ITU Plenipotentiary Conference in Minneapolis, it noted that telecommunications were playing an increasingly decisive and driving role at the political, economic, social and cultural levels, and called upon the United Nations ‘to ask the Secretary-General to coordinate with other international organizations and with the various partners concerned (Member States, Sector Members, etc.), with a view to holding a world summit on the information society.’70 This request was heard at the ninetieth plenary meeting of the General Assembly of the United Nations in December 2001, where the General Assembly accepted and endorsed a proposal from the ITU that a World Summit on the Information Society be convened, and instructed the Secretary-General of the UN to ‘inform all heads of State and Government of the adoption of the present resolution.’71 WSIS was to take place in two phases, the first in Geneva from 10–12 December 2003 and the second in Tunis from 16–18 November 2005. The objective of the Geneva phase was to develop and foster a clear statement of political will and take concrete steps to establish the foundations for an Information Society for all, reflecting all the different interests at stake. The objective of the second phase was to put the Geneva ‘Plan of Action’ into effect and to find solutions and reach 68 Murray, above n 38 at 114–18. Also see the activities of groups such as ; and all accessed 10 June 2008. 69 Council of the European Union/European Commission, Reply of the European Community and its Member States to the US Green Paper, March 1998. 70 Resolution 73, available at accessed 10 July 2008. 71 Resolution adopted by the General Assembly [on the report of the Second Committee (A/56/558/Add.3)] 56/183. World Summit on the Information Society, 21 December 2001.
agreements in the fields of internet governance, financing mechanisms, and follow-up and implementation of the Geneva and Tunis documents. While it is too early to gauge the success, or otherwise, of WSIS,72 there is little doubt that it has begun a new chapter in the discourse on global communications and media governance. WSIS invited Heads of State/Government, international NGOs and civil society representatives73 to contribute to a series of preparatory meetings (PrepComms) and to the Geneva and Tunis rounds on a series of issues ranging from the digital divide74 to freedom of expression, network security, unsolicited commercial communications (SPAM) and the protection of children.75 Central to the WSIS programme was the issue of internet governance: WSIS envisaged a ‘people-centred, inclusive and development-orientated Information Society where everyone can create, access, utilize and share information and knowledge, enabling individuals, communities and peoples to achieve their full potential in promoting their sustainable development and improving their quality of life.’76 These principles were at odds with the commonly held view of internet governance as a Western-led process dominated by the Government of the United States and (mostly US-based) NGOs such as ICANN, with developing nations largely absent from the process. As a result, it appeared, WSIS would have to tackle head-on the dominance of Western industrialised nations, and in particular ICANN, in managing the Root server system and the Addressing Protocols of the logical infrastructure layer.
Although to date the effect of WSIS is limited, it is predicted that the WSIS process will eventually lead to the extinction of ICANN, to be replaced by a ‘truly international’ regulatory body.77 Whatever results come from the WSIS process, they will certainly create further regulatory tensions throughout the regulatory matrix and are unlikely to solve the current problems of ICANN and the domain name system. By simply modelling ICANN’s failings we can predict that attempts to impose an unsympathetic regulatory settlement are likely to lead to unplanned tensions and turmoil within the regulatory matrix, undermining the effectiveness of the regulatory intervention. A new ICANN is unlikely to have any more success than the old. 72 Many early commentators on WSIS have been critical of its lack of effect or ambition. See eg C Hamelink, ‘Did WSIS Achieve Anything At All?’ (2004) 66 Gazette: The International Journal for Communication Studies 281 (referring to the Geneva Round); M Raboy, ‘The World Summit on the Information Society and Its Legacy for Global Governance’ (2004) 66 Gazette: The International Journal for Communication Studies 225; K Diab, ‘Walk First then Surf’ (2005) 772 Al-Ahram Weekly (8–14 December) (referring to the Tunis Round). 73 In UN parlance, civil society encompasses all those who are not part of government, private enterprise or intergovernmental organisations: in other words, private individuals. 74 The ‘digital divide’ reflects the technology gap which has opened up between technology-rich Western states and technology-poor African and Asian states, and the growing divide within states between the professional classes with stable and fast internet access and the working class, in particular immigrant communities, where access may be unstable, slow and difficult to obtain.
See P Norris, Digital Divide: Civic Engagement, Information Poverty and the Internet Worldwide (Cambridge, CUP, 2001); M Warschauer, Technology and Social Inclusion: Rethinking the Digital Divide (Cambridge, MA, MIT Press, 2004). 75 For a discussion of WSIS see M Raboy and N Landry, Civil Society, Communication and Global Governance: Issues from the World Summit on the Information Society (Bern, Peter Lang, 2004). 76 WSIS, Declaration of Principles, Geneva, 12 December 2003, Principle 1. 77 K Murphy, ‘Who Really Runs the Internet?’ (2005) Computer Business Review Online (14 October).

According to the dynamic regulatory matrix, the best regulatory model is not one built upon an active intervention into the settled regulatory environment, the result of which is likely to be extremely disruptive; rather it is one which harnesses, as best as possible, the relationships already in place between the actors: what I call symbiotic regulation.78 The development of symbiotic regulation, although complex, is not impossible. It is used in community-led and market-led regulatory developments such as the development of the VHS/DVD market. After the failure of the Sony litigation in 198479 a market-led solution was used to provide the most effective regulatory settlement. If we use as our case study the effects of the Video Cassette Recorder (VCR) on the film industry in the 1980s and 1990s, we see the value of complementary or ‘symbiotic’ regulation. By mapping the regulatory matrix surrounding the development of the VCR post-1984 (as seen in Figure 13.4) we see why it was not the Boston Strangler of the film industry, but rather its Fairy Godmother.80 What we note first is the doomed attempt of the film industry to externally regulate the technology of the VCR in the failed Sony litigation.
This is represented at Point A, and it should be particularly noted that with the failure of this action the external forces on the regulatory matrix shifted, causing the regulatory focal point to move from Point A, as was the case in the ICANN case study, to Point C. As with Point C in the ICANN study, Point C here represents the consumers, who, freed from the external constraints of hierarchical intervention, took the lead in designing market-led regulatory developments. Consumers immediately began to transmit their demands to the other key players in the VCR marketplace: the hardware suppliers, represented at Point B; the content suppliers, represented at Point D; and movie theatres, represented at Point E. 78 A full discussion of ‘symbiotic regulation’ follows. 79 Sony Corp of America v Universal City Studios 464 US 417 (1984). 80 In his famous testimony before the House of Representatives hearing on Home Recording of Copyrighted Works, Jack Valenti, President of the Motion Picture Association of America (MPAA), stated that: ‘the VCR is to the American film producer and the American public as the Boston strangler is to the woman home alone’.

Figure 13.4: Three-Dimensional Map of the VCR Regulatory Matrix (post 1984).

Consumers demanded from hardware suppliers ever better picture and sound quality, longer-playing tapes and easy-to-use recording systems that would allow them to programme their VCR for days and weeks ahead. As we moved from the analogue to the digital, consumers demanded the better quality and greater storage offered by digital media such as DVDs. The industry has responded by producing higher-quality home video equipment at ever lower prices,81 and has been rewarded by growing consumer expenditure on home entertainment products.
Consumers indicated to the movie industry that they were willing to pay for a copy of their favourite movie which they could watch at home over and over, in a fashion similar to playing their favourite record again and again. Further, they indicated they would be willing to pay more for added extras made available through special editions or re-mastered originals. As a result the market for pre-recorded videos (and later DVDs) exploded.82 The video rental market, as exemplified by the success of the Blockbuster chain, offered a whole new market segment: the opportunity to watch recently released movies in the comfort of the consumer’s own home before they became available on general sale, but after their theatrical run, opening up a whole new income stream for the film industry. This innovation also allowed consumers to bring pressure to bear on the cinema chains, who for many years had been under-investing in their theatres. Faced with the threat of the clean, well-lit and family-friendly Blockbuster chain, cinema operators invested heavily in their infrastructure throughout the 1980s, leading to the development of the modern multiplex cinema and with it a renaissance in the movie theatre industry.83 The result of this consumer-led market-regulatory settlement has been success for all parties. Consumers have greater choice and higher-quality home cinema experiences; home electronics suppliers have new market segments to exploit; the film industry is making increased profits, both at the cinema and through the development of a new market segment, the sell-through video; and even the movie theatre industry has benefited from the halo effect, and from increased investment, with more customers coming through their doors to see blockbuster spectaculars such as the Lord of the Rings trilogy, the Spiderman movies and the Harry Potter movies.
What is the key difference between the ICANN case study and the VCR case study which leads to such a dramatic difference in outcome? It is simply that in the ICANN case study an attempt was made to engineer a regulatory outcome by directive, external intervention: an intervention designed with little regard for the relationships between actors in the extant regulatory matrix. In the VCR case study, fortunately for all involved, an attempt at a similar action ultimately failed, and in its place a regulatory settlement evolved organically from within the extant regulatory matrix. It is a lesson which should not be lost on regulators and regulatory theorists. By acknowledging the complexity of the extant regulatory environment and by developing a dynamic regulatory model we can design more effective regulatory interventions: interventions which take account of the extant regulatory matrix and are more likely to achieve the desired regulatory outcome. Regulators may thus learn from, and apply, the mathematical model of the Gardener’s Dilemma. 81 In 1984 a VCR would cost on average between $400 and $500. In 2007 a DVD recorder could be bought for $54.99. 82 Figures from the UK Film Council reveal that in the UK alone in 2006, 116 million DVDs were rented, while 227 million DVDs were sold, with a combined market value of £2.5 billion. See: accessed 10 June 2008. 83 Statistics provided by the UK Film Council reveal that in 1984 cinema admissions had fallen to an all-time low of 54 million admissions in the UK (down from 1,635 million in 1946). Since then admission figures have shown steady improvement to reach 157 million in 2006, a figure in excess of that achieved in the years 1972–1980, before the widespread distribution of the home VCR in the UK.
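The kind of unpredictability the Gardener's Dilemma points to can be illustrated with the logistic map, a standard toy model of chaotic dynamics. This is my own choice of illustration, not a model used in the chapter: two regulatory 'settlements' that begin almost identically end up arbitrarily far apart.

```python
# Illustrative only: the logistic map x -> r*x*(1-x) at r = 4.0 is chaotic,
# so two trajectories from nearly identical starting points diverge rapidly.
# Read the starting points as two almost-identical regulatory interventions.

def max_divergence(x0, y0, r=4.0, steps=60):
    """Largest gap observed between two trajectories started at x0 and y0."""
    x, y, worst = x0, y0, abs(x0 - y0)
    for _ in range(steps):
        x = r * x * (1 - x)
        y = r * y * (1 - y)
        worst = max(worst, abs(x - y))
    return worst

# Two 'interventions' differing by one part in a hundred million:
print(max_divergence(0.2, 0.20000001))  # the gap becomes macroscopic
```

The point is not the arithmetic but the moral drawn in the text: in a sensitive, tightly coupled system, an intervention's long-run effect cannot be read off from its initial size.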
Complex systems may prove to be mathematically intractable, but this does not mean that they are unregulable: attempts to intervene in the extant regulatory settlement are, applying Chaos Theory, more likely to disturb the regulatory settlement in an unexpected and unpredictable manner than to achieve the desired outcome, whereas modelling the extant regulatory settlement in a dynamic regulatory matrix allows regulators to harness the regulatory relationships already in place. It is the difference between a disruptive regulatory intervention and a complementary intervention, and is the key to successful regulation, both in Cyberspace and in real space. How can hierarchical regulators, who are used to implementing a command and control model, match the complexity of these organic regulatory developments? The answer is to use contemporary modelling techniques to predict where tensions will arise within the regulatory matrix and to design a regulatory intervention which avoids such tensions and instead harnesses the natural communications flows within the matrix: in other words, one which mimics organic regulatory developments. To do this the regulator must carry out a two-stage evaluation process before starting to design the intervention. The first stage is to map the communications which naturally occur between regulatory actors, and the second is to predict what feedback will occur after the intervention is made. The first requires them to take account of theories of autopoietic social systems; the second requires them to be familiar with system dynamics.

Modelling Symbiotic Regulation: Autopoiesis and Systems Dynamics

Niklas Luhmann’s thesis of autopoiesis84 develops Humberto Maturana and Francisco Varela’s biological concept of autonomous living systems85 and 84 Autopoiesis is a compound word: auto, meaning oneself and by itself, and poiesis, production, creation and formation.
Hence, the word autopoiesis literally means ‘self-production’ or ‘self-creation’. 85 F Varela, H Maturana & R Uribe, ‘Autopoiesis: The Organization of Living Systems, Its Characterization and a Model’ (1974) 5 Biosystems 187.

proposes that social systems are self-referring entities created within their own organisational logic. This approach is a radical departure from mainstream sociological thought, which is based on the premise of collective human agency. According to Luhmann there is no central organisational body and no hierarchical structure, merely unique subsystems, and subsystems within subsystems. A social system emerges wherever two or more actions are connected. At the most basic ‘level’ Luhmann classifies this as ‘interaction’, but as the complexity of these interactions increases they formalise into distinct subsystems, such as organisations or corporations, each carrying a unique specialisation and identity. These societal subsystems self-define ‘meaning’ and in doing so isolate themselves, creating a unique identity through the selection or rejection of relevant or irrelevant ‘communications’.86 This process allows an organisation to assume its own ‘life’, motivated and justified by its selective communication process. In this way, social systems reduce the overwhelming complexity of the world, establishing a difference between themselves (the subsystem) and the environment (all other subsystems).87 Thus communication is at the heart of Luhmann’s theory: subsystems evolve and develop through the internalisation of information communicated from other subsystems. It is my belief that by treating the regulatory matrix as an autopoietic environment, with each group of actors considered a subsystem, we can begin to understand the regulatory environment more fully.
In doing so, though, we ask regulators and regulatory theorists to embrace a much more complex regulatory environment, as within Luhmann's model the effect of each communication between actors is dependent upon the internal logic of each of the external, self-referring subsystems. Control is the fundamental premise of regulation, but within an autopoietic model control becomes a problem of communication, where those subsystems required to implement control are cognitively open but operatively closed.88 This means that communications between actors can never be certain, but within Luhmann's terms a communication is a very specific event, allowing us to account for these difficulties in our regulatory model. In an autopoietic context communication is an 'event' comprised of three key aspects: 'information', 'utterance' and 'understanding'. These aspects enable the autopoietic process by way of further communications. Indeed, such communication forms the core of self-referential autopoietic systems and subsystems. Each of these aspects is selected (not necessarily by a person) from numerous possible choices, thereby defining the identity and boundary of the subsystem. Information, as the term implies, is the what of the message. Utterance is the how, the who and the when. Understanding is the sense or meaning generated in the receiver.

86 N Luhmann, Soziale Systeme (Frankfurt, Suhrkamp, 1984). 87 N Luhmann, The Differentiation of Society (New York, Columbia UP, 1982). 88 A Dunsire, 'Tipping the Balance: Autopoiesis and Governance' (1996) 28 Administration and Society 299.

Conceptualising the Post-Regulatory (Cyber)state 311

The process of this communication leads to further communications relating to the information imparted, both within the subsystem and potentially within the environment (other subsystems).
Through self-reference and the memory of previous selections, a subsystem focuses only on specific communications: among the possible social connections there are only a few that are relevant or compatible with its identity. Functionally differentiated subsystems within the social system are thereby concerned, and can only be concerned, with communications that are relevant to their functioning, autonomous of one another. Communicative acts thus effectively say nothing about the world that is not classified by the communication itself. This process ensures the creation of highly defined differences and attaches the rationale that identity is the creation of further, expected, communications, which form and stabilise boundaries. An entity builds up a unique backlog of selections made and selections negated. It uses this accumulation of selections, its meanings, as values for making future selections. This is a self-referential, closed process that maintains a circular dynamic. Its repetition, over time, maintains the identity and existence of the individual subsystem. As Mingers states:

We can visualize the whole subsystem as an ongoing network of interacting and self-referring communications of different types and see how they can be separated from the particular people involved. The people will come and go, and their individual subjective motivations will disappear, but the communicative dynamic will remain.89

Thus communication in autopoietic systems is not a process directed by the actions of individuals but is rather a system in which they act as the nodes temporarily located within the communication. People are unable to alter the course of communications, as these have formed a self-referential loop within which actors play their part rather than write it. In this way, social systems effectively have a life of their own that gives direction to the thought and activity of individuals.
The difficulty with this model is that it only goes part of the way towards solving the problem of designing symbiotic regulatory interventions. It suggests that there are stable patterns of communication within the regulatory matrix, allowing regulators to map the communications dynamic within the matrix. This, in turn, allows regulators to anticipate where (and perhaps even when) communication between nodes will take place, suggesting that where known variables can be mapped, some nodal responses to the regulatory intervention may be anticipated.90 Despite this, regulators cannot accurately predict all nodal responses. This is because, as discussed above, the content of communications between actors can never be certain, only the pattern. To actively map the effect of their intervention within the regulatory matrix, regulators must take a further step: that is, to measure the probable (or actual) outcome of their intervention through the application of system dynamics.

89 J Mingers, Self-Producing Systems: Implications and Applications of Autopoiesis, Contemporary Systems Thinking (New York, Plenum, 1995) at 144. 90 Eg, if we return to our example of the Gardener's Dilemma, it means that the regulator can create links or associations between certain actions: knowing, for instance, that watering the Azalea will have a detrimental effect on the African Violet if it is placed next to the Azalea. Unfortunately he will not know why this is so. To help understand this he must measure the different responses which occur during each change, to see which variables cause the change. Although measuring the effect of each change on every component (or node) is computationally intractable, observing the overall effect of each intervention is possible: this is the foundation of system dynamics.
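The point in the Gardener's Dilemma footnote, that measuring the effect of an intervention on every individual component is intractable while the overall effect remains observable, can be illustrated with a short Python sketch. All of the particulars here (the number of components, the hidden interaction weights, the size of the 'watering' nudge) are assumptions invented for illustration, not anything drawn from the chapter:

```python
import random

random.seed(1)

N = 50  # number of interacting components (hypothetical)
# Hidden pairwise interaction weights: the 'why' the gardener never sees.
weights = [[random.uniform(-1.0, 1.0) for _ in range(N)] for _ in range(N)]

def step(state, nudge):
    """Advance the system one tick under a (possibly zero) intervention."""
    return [
        state[i]
        + 0.1 * sum(weights[i][j] * state[j] for j in range(N)) / N
        + nudge[i]
        for i in range(N)
    ]

def run(nudge, steps=5):
    """Apply the nudge once, let it propagate, report only the aggregate."""
    state = [1.0] * N
    for t in range(steps):
        state = step(state, nudge if t == 0 else [0.0] * N)
    return sum(state)

no_action = [0.0] * N
water_one_plant = [0.0] * N
water_one_plant[0] = 0.5  # intervene on a single component

# The observer compares aggregate outcomes, never individual couplings.
overall_effect = run(water_one_plant) - run(no_action)
print(overall_effect)
```

The observer never inspects `weights`; each intervention yields a single aggregate reading, and it is exactly this kind of whole-system measurement on which system dynamics builds.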
System dynamics was developed by Professor Jay Forrester of the MIT Sloan School of Management in 1958,91 and is the study of information dynamics within a system, in particular the flow of feedback (information that is transmitted and returned) which occurs throughout the system, and the behaviour of the system as a result of those flows.92 System dynamics starts by defining the problem to be solved. In our example this may be the illicit copying and distribution of copyright-protected music or video files. The first step is information gathering. This requires the regulator to record the current information being communicated by each of the nodes in the matrix, keeping a record of exactly what is being communicated and how. This information, which in our model would have been gathered at stage one, the creation of the autopoietic map of naturally occurring communications, provides a foundational (or first-order) model of the system. Using this model as their template, the regulator designs a regulatory intervention which they hope will prove to be complementary to the existing regulatory communications within the matrix, leading to symbiotic regulation. The problem, though, is that because the system is complex it is equally likely that the intervention will lead to an unexpected response, with one or more nodes communicating an understanding or information transmission which could not have been foreseen. The result of such an occurrence will be for the intervention to become disruptive. But, by measuring this event, known as feedback, system dynamics allows a new, more detailed, second-order model of the regulatory environment to be developed. Thus feedback is both the key to system dynamics and the final piece of our regulatory jigsaw. Forrester explains that decisions, like the environment, are dynamic rather than static.
Most decision makers, including regulators, imagine what he terms an 'open-loop' decision-making process (seen in Figure 13.5), but in truth decision-making is part of the same self-referential loop outlined by Luhmann and Mingers, meaning that the decision-making process looks more like Figure 13.6.

91 See J Forrester, 'Industrial Dynamics—A Major Breakthrough for Decision Makers' (1958) 36(4) Harvard Business Review 37; J Forrester, Industrial Dynamics (Waltham, MA, Pegasus Communications, 1961); J Forrester, 'Market Growth as Influenced by Capital Investment' (1968) 9 Industrial Management Review 105. 92 Eg, system dynamicists study reinforcing processes (feedback flows that generate exponential growth or collapse) and balancing processes (feedback flows that help a system maintain stability).

Figure 13.5: Forrester's 'open loop'.

The key to this 'closed-loop' model is the constant feedback the decision maker is receiving. Whenever a regulatory intervention is made in any complex environment, whether in Cyberspace or in a complex real-world regulatory environment, the intervention is scrutinised by all parties and their verdict is communicated to all other regulatory nodes, including the originator of the intervention. This allows the regulator constantly to evaluate and refine their intervention through a process of continual modelling (as seen in Figure 13.7). At each stage subsequent to the first-order model, which is designed using the autopoietic map, the regulator continually amends their actions based upon the feedback received following their previous actions. Thus Action 1 causes a set of results and resultant feedback: for example, adding DRMs to digital media files causes consumer disquiet and a rise in the activity of crackers. As a result the regulator considers this and makes a second intervention, Action 2.
This may involve attempts to control the activity of crackers legally, through legislation such as the Digital Millennium Copyright Act or the Directive on Copyright and Related Rights in the Information Society. The effect of this may be to cause a shift in focus from cracking to sharing through file-sharing technologies, leading to a third-order intervention in file-sharing communities, and so on.

Figure 13.6: Forrester's 'closed loop'.
Figure 13.7: Dynamic Modelling.

What this demonstrates is that an intervention should not be viewed as a single act which is then assumed to succeed or fail depending upon whether it meets a series of subjective standards set by the decision maker. It, like the regulatory environment, should be dynamically modelled over a period of months, or even years, with each new intervention being designed specifically with regard to the feedback received at each point of intervention. Although this sounds complex, and indeed seems not to be a great advance on the current model, there are modelling tools such as iThink93 and Vensim94 which allow for computer modelling of millions of variables within a digital model.95 These systems mean that regulators do not need to continue to develop static 'trial and error' regulatory models. They may instead model millions of regulatory variables before they make an intervention. This suggests that symbiotic regulation is not something which has to be left to chance or to organic development: by mapping the communications variables within the system and modelling potential feedback patterns using system dynamics, it should be possible to achieve regulatory symbiosis on a regular basis.96

Regulating the Post-Regulatory (Cyber)environment

Finally we have a model which goes some way towards describing the complexity of the Cyber-regulatory environment, but which also describes how the structure of the environment may be harnessed to provide a more robust regulatory model.
At its heart is communication, a discovery that seems rather apt given that the internet is, after all, a communications device. The first stage in designing a regulatory intervention in any complex regulatory environment, including Cyberspace, is to develop a dynamic model of the environment, recording all parties and mapping their contemporary regulatory settlements with each other.97 Secondly, by observing this environment, regulators are required to map the communications dynamic in place within this regulatory matrix. According to Mingers, the regulator does not need actually to record the content of all communications which take place between subsystems, or nodes; all that is required is that the dynamic of such communication is mapped. In other words, the regulator need not anticipate the needs of all actors in the regulatory matrix; they need only anticipate the regulatory tensions that are likely to arise when actors communicate. Finally, once a regulatory intervention has been designed, it should be tested thoroughly. This involves constant monitoring of feedback from all regulatory nodes, both positive and negative.

93 Developed and supplied by isee Systems. See accessed 10 June 2008. 94 Developed and supplied by Ventana Systems. See http://www.vensim.com/ accessed 10 June 2008. 95 The author would like to point out the elegance of harnessing the power of computers to aid in the design of regulatory tools within the complex environment of Cyberspace, thus demonstrating that, much like regulation, digital technology can be both disruptive and positive. 96 It should be recorded that some US regulators, including the Environmental Protection Agency and the Department of Energy, now use system dynamics on a regular basis. 97 By this I mean which relationships (market, power, social or design) cause particular outcomes to occur or not to occur.
The regulator should be prepared, in light of this feedback, to make alterations to their position and to continue to monitor feedback on each change, thus allowing them both to accomplish the regulatory settlement they set out to achieve and to generate valuable data which they may use to model future regulatory interventions.

Effective, symbiotic regulatory interventions may therefore be designed through the application of a three-stage process. Firstly, regulators must produce a dynamic model of the regulatory matrix surrounding the action they wish to regulate (including a map of the communications networks already in place). From this they may design a regulatory intervention intended to harness the natural communications flows by offering to the subsystems, or nodes, within the matrix a positive communication which encourages them to support the regulatory intervention. Finally, they must monitor the feedback which follows this intervention. If the intervention is initially unsuccessful, they should consider modifying it slightly and continuing to monitor the feedback in the hope of producing constant improvements. If it is successful, the positive feedback generated will reinforce the regulatory intervention, making it much more likely to succeed. If regulators were to use this three-stage design mechanism, it may be possible to design successful regulatory interventions in even the most complex regulatory environment.
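As a sketch only, the three-stage process can be written as a closed feedback loop in Python. The `environment_response` function is a hypothetical stand-in for the regulatory matrix, and the complementary level of 0.6 is an invented number; the point is the shape of the loop, in which each intervention is amended in light of the feedback it generates:

```python
def environment_response(strength):
    """Stand-in for the regulatory matrix: feedback is the (signed) gap
    between the intervention and a hypothetical complementary level."""
    complementary_level = 0.6  # unknown to the regulator in advance
    return complementary_level - strength

# Stage 1: a first-order model built from the map of communications.
model = {"estimated_level": 0.0}

# Stages 2 and 3, repeated: design an intervention from the model,
# then amend the model using the feedback that follows it.
for _ in range(20):
    intervention = model["estimated_level"]
    feedback = environment_response(intervention)
    model["estimated_level"] += 0.5 * feedback  # refine using feedback

print(round(model["estimated_level"], 3))  # settles near 0.6
```

A static 'trial and error' intervention would stop after the first pass; the loop above is the difference the chapter describes, with each round's feedback folded back into the next round's design.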
14

Vicissitudes of Imaging, Imprisonment and Intentionality

JUDY ILLES*

In an article in the online magazine Slate, Harvey Rishikoff, professor of law, former dean of Roger Williams University School of Law and former legal counsel to the deputy director of the FBI, and Michael Schrage, senior adviser to the MIT Security Studies Program, provide a provocative discussion of the role of technologies such as brain imaging in the context of military interrogation.1 In this paper, I use portions of their text as triggers to a deeper discussion of the role of neurotechnology in the domain of criminal and military justice. I argue for a cautious approach to advancing laboratory neuroscience into prisons, police stations, courtrooms and society's living rooms. I then develop my argument to draw conclusions on this topic by appealing to the complex distinctions between information-giving, intentionality and motivation.

A Revolution in Brain Imaging

True or False?

The past decade has seen revolutions both in brain-scanning technologies and in drugs that affect the brain's functions. Like personal computers and digital camcorders, these technologies are getting faster, better, and cheaper. And they may have uses in the interrogation room that will render moot debates about the excesses of Abu Ghraib-style treatment of prisoners.2

True and false. True: the development and application of new brain-scanning techniques, and functional imaging methods in particular, has been nothing short of phenomenal.3 Published studies using functional magnetic resonance imaging (fMRI) of the human brain to date number well over 10,000,4 with applications to diverse neural processes, from cognition to motivation and consciousness, also growing exponentially. False: the epistemology of brain-scanning techniques, ie the inherent limitations to the information about the human mind that the technology alone can reveal, renders moot discussions about the excesses of imprisonment that it can mitigate. The human brain is a highly variable input-output machine, and its output in terms of behaviour is a function of the intricate interrelationship between a person's biology and genetic hardwiring and the ways in which these are modified by a person's environment. Inasmuch as genetics cannot be wholly predictive of who a person will be, there is similarly no single neural code that could successfully inculpate or exculpate an accused or imprisoned person.

* The helpful feedback of Dr Emily Murphy is gratefully acknowledged. Supported by NIH/NINDS RO1 #NS 045831–04. 1 H Rishikoff and M Schrage (2007) 'Brave New World Technology Vs. Torture: Psychopharmaceuticals and Brain Imaging Could Make Prisoner Interrogation More Humane. Should We Use Them?' available at accessed 28 February 2007. 2 Ibid.

Accurate or Faulty?

Functional Magnetic Resonance Imaging (fMRI) brain scans can measure how the brain reacts when asked certain questions, such as, 'Do you know Mr. X?' or, 'Have you seen this man?' When you ask someone a question, the parts of the brain responsible for answering will cause certain neurons to fire, drawing blood flow. The oxygen in blood then changes the brain's magnetic field so that a neural radio signal emitted becomes more intense. Functional MRI scanners detect and calibrate these changes. And by comparing the resulting images to those of the brain at rest, a computer can produce detailed pictures of the part of the brain answering or not answering the question, in essence (True) creating a kind of high-tech lie detector.5

More hopeful than either accurate or faulty.
There is a long history of the failure of polygraphy, for example, to deliver accurate lie detection.6 For reasons alluded to above and discussed in further detail below, 'hopeful' may be the best we can attribute to Rishikoff and Schrage's position on high-tech lie detectors.

Good or Bad?

Indeed, a Pentagon agency is already funding fMRI research for such purposes. Engineers are also developing less cumbersome and expensive technologies, such as infrared, to track blood flow in the brain's prefrontal cortex, the region associated with decision-making and social inhibition. '[…] we can gain even richer insight into how the brain is functioning.'7

3 J Illes, SW Atlas and TA Raffin (2005) 'Imaging Neuroethics for the Imaging Neurosciences' 1 (2) Neuroscience Imaging 5–18. 4 J Illes, E Racine and MP Kirschen (2006) '"A picture is worth 1000 words, but which 1000?"' in J Illes (ed), Neuroethics: Defining the Issues in Theory, Practice and Policy (Oxford, Oxford University Press) 149–68. 5 Rishikoff and Schrage, above n 1. 6 National Research Council (2003) The Polygraph and Lie Detection (London), available from accessed 28 February 2007. 7 Rishikoff and Schrage, above n 1.

Mostly good. In the hands of experienced neuroscientists and subject to peer review, research that leads to better, less invasive techniques that enhance the understanding of the human condition is clearly positive.

Meritorious or Foolhardy?

[…] traditional techniques [for interrogation] depend overwhelmingly on coercive combinations of fear, disorientation, and pain. The technological approach doesn't and is inherently more humane.8

Initially meritorious. If one believes that information must sometimes be extracted by one person from another, in cases such as the interest of national security, then this conclusion would appear to have merit.
The degree of merit is less straightforward, however, when considering that even increasingly efficacious and decreasingly invasive methods are still significantly limited in both their sensitivity and specificity. Sensitivity (a measure of the existence of a signal) and specificity (the meaning of that signal) are central to the analysis of neuroimaging for applications such as assessing truthful or false information-giving, intentionality and motivation. Despite the hopes placed in modern neurotechnology, it has a long way to go from the laboratory to any non-medical, real-world setting, whether that application is in Abu Ghraib, a courtroom in a cosmopolitan city, or the home-and-mall setting for the more mundane sale of brain scans for teenagers who must account for the adventures of a previous night to anxious parents. Where does the regulatory community go from here?

Hints of Hope

As Rishikoff, Schrage and I have written, progress in the imaging neurosciences over the past few decades has been staggering.9 Since the first publication of the use of MRI to obtain functional measurements of the brain beyond anatomical ones,10 new applications and discoveries have been and will continue to be steady.11,12 Functional imaging studies and other empirical work have revealed with unprecedented detail the complexity of neural networks underlying moral behaviour as revealed by existential problem solving,13,14 introverted and extroverted personality traits15–17 and decision making.18,19

8 Ibid. 9 J Illes, MP Kirschen and JDE Gabrieli (2003) 'From Neuroimaging to Neuroethics' 6 (3) Nature Neuroscience 205. 10 JT Kikka, JW Belliveau and R Hari (1996) 'Future of Functional Brain Imaging' 23 (7) European Journal of Nuclear Medicine 737–40. 11 L Spinney (2002) 'The Mind Readers' New Scientist 38. 12 L Spinney (2005) 'Optical Topography and the Color of Blood' The Scientist 25–7.

At the time of this writing, more than fifteen studies have focused specifically on detecting lies with fMRI technology, using paradigms that variously involve responses to factual autobiographical questions, simulated crimes and game-playing (reviewed in20 and21). The critical nodes in the neural circuitry associated with these behaviours are numerous. They include the anterior prefrontal area, ventromedial prefrontal area, dorsolateral prefrontal area, parahippocampal areas, anterior cingulate, left posterior cingulate, temporal and subcortical caudate, right precuneus, left cerebellum, insula, basal ganglia nuclei such as the putamen and caudate, thalamus, and regions of temporal cortex. Given that the complexity of human behaviour involves, at any given time, aspects of memory, intention, motivation, planning and executive function, self-monitoring and mood, plus a system of language with which to organise and express it all, the coordination of such a large number of cerebral structures is no surprise. Lying and deception meet this requirement for coordination, but with an extra dimension of complexity: they require another layer of behavioural analysis given the need for inferences about another person's intent, position and gullibility. Lying and deception are also each a little different. Lying is a frank and overt communication of erroneous information. Lies can be dark ('He raped that woman') or light ('Please come again'); there are everyday lies ('The dog ate my homework') and there are pathologic liars ('He did it'). Deception, by contrast, relies on misleading information, omissions or distortions of information that lead the recipient of the information to an erroneous conclusion or understanding.
Common to both lying and deception is the fact that information-givers can be highly accomplished (good liars or deceivers), highly motivated (hungry, or driven by religious beliefs) or relatively ineffective. The relevance to the present argument lies in the associated neural signatures for these commonalities and differences, which are not yet at all understood.

13 JD Greene, RB Sommerville, LE Nystrom, JM Darley and JD Cohen (2001) 'An fMRI Investigation of Emotional Engagement in Moral Judgment' 293 (5537) Science 2105–8. 14 JD Greene, LE Nystrom, AD Engell, JM Darley and JD Cohen (2004) 'The Neural Bases of Cognitive Conflict and Control in Moral Judgment' 44 (2) Neuron 389–400. 15 T Canli, Z Turhan, JD Desmond, E Kang, J Gross and JDE Gabrieli (2001) 'An fMRI Study of Personality Influences on Brain Reactivity to Emotional Stimuli' 114 (1) Behavioral Neuroscience 33–42. 16 T Canli and Z Amin (2002) 'Neuroimaging of Emotion and Personality: Scientific Evidence and Ethical Considerations' 50 (2) Brain and Cognition 414–31. 17 T Canli (2006) 'When Genes and Brains Unite: Ethical Implications of Genomic Neuroimaging' in J Illes (ed) Neuroethics, above n 4 at 169–84. 18 KF Schaffner (2002) 'Neuroethics: Reductionism, Emergence, and Decision-Making Capacities' in Neuroethics: Mapping the Field (San Francisco, The Dana Press) 27–33. 19 P Churchland (2006) 'Moral Decision-Making and the Brain' in J Illes (ed) above n 4 at 3–26. 20 J Illes (2004) 'A Fish Story: Brain Maps, Lie Detection and Personhood' in Cerebrum: Special Issue on Neuroethics (New York, The Dana Press) 73–80. 21 HT Greely and J Illes (2007) 'Neuroscience-Based Lie Detection: The Urgent Need for Regulation' 33 American Journal of Law and Medicine 2, 3.

What are the Challenges?
Standards of practice and quality control: Technical approaches to image acquisition, instrumentation, design approaches to data collection and analytic approaches to the interpretation of results vary widely among the hundreds of academic laboratories conducting basic or clinical neuroscience research and the commercial laboratories developing technology for profit.22,23 Even among the half-dozen or so companies known to be devoted solely to the commercialisation of lie detection beyond polygraphy, instrumentation varies by manufacturer and by field strength (the higher the field strength, the higher the resolution, or possible quality, of an image), brain regions of interest and statistical approaches to the analysis of the data. Common experimental limitations are the use of small numbers of subjects (typically right-handed, college-age, ethnically homogeneous), the lack of socioculturally appropriate stimuli (especially relevant if subject populations were more ethnically diverse), and measures of validity that the behaviours subjects exhibit are true internally (they reported them faithfully to provide a baseline) and externally (responses are verifiable).24

Analytic approaches: There are two significant analytic considerations for the problems at hand. The first is whether the search for relevant patterns of brain activation is localisationist-driven or network-driven. Recall Franz Gall, the great localisationist, whose work on phrenology gave us the first brain-function maps25: one brain place, one behaviour. The brain's numerous networks, and the intricate structural and timing communication among them, however, yield a more dynamic model.26 Each is an analytic philosophy that has its place, but the existence of both does not make oft-found differences between results any simpler to explain.
A second analytic challenge draws upon the inherent noninvasiveness of methods like fMRI and, consequently, the repeatability of experiments with subjects. Noninvasiveness is certainly good. Repeatability can add power. But the effects of learning due to repetition within an individual scanned several times may be a natural source of confounded results.27

Ethics and policy: Privacy and justice are two key values that gird the ethics and policy challenges of imaging behaviours such as lying. In thinking about them, we also have to consider two related factors: context and goals.

22 J Illes and E Racine (2005) 'Imaging or Imagining? A Neuroethics Challenge Informed by Genetics' 5 (2) American Journal of Bioethics 5–18. 23 ML Eaton and J Illes (2007) 'Commercializing Cognitive Neurotechnology: The Ethical Terrain' 25 (4) Nature Biotechnology 1–5. 24 Greely and Illes, above n 21. 25 JC Marshall and GR Fink (2003) 'Cerebral Localization Then and Now' 20 Neuroimage S2–S7. 26 G Gevins, NH Morgan, SL Bressler, BA Cutillo, RM White, J Illes, DS Greer, JC Doyle and GM Zeitlin (1987) 'Human Neuroelectric Patterns Predict Performance Accuracy' 235 Science 580–85. 27 ME Raichle (1998) 'The Neural Correlates of Consciousness: An Analysis of Cognitive Skill Learning' 353 (1377) Philosophical Transactions of the Royal Society of London, Series B Biological Sciences 1889–1901.

Context has to do with the question of who is being imaged, for what reason and in what situation. Is it the person who is accused of a crime, or who possesses knowledge about criminal behaviour? Is it the accuser? Is the individual a man or a woman? An autonomous decision-making adult, or a coerced child? Coercion overlaps with the question of justice: is the scan expected to deliver definitive or adjunctive information? Is that information to be used as a diagnostic of a current state, or as predictive of a future state (screening for intention)?
Is the information to be used immediately, with a proximate goal, or at a later time, toward a long-range goal? How will fundamental principles of non-maleficence, justice and fairness be protected when sensitivity is less than 100 per cent and specificity is far lower still? Who will have the skills and know-how to detect an unexpected medical condition appearing on a scan? Will countermeasures, and countermeasures to the countermeasures developed to trick neurobiologic measurements, be humane? Uncoerced?28 External modulatory drugs like beta blockers that suppress the consolidation of memory,29 and internally generated methods like toe curling during electromagnetic interventions,30 can have significant effects on neural signatures. They create patently irrelevant signals or movement artifacts on brain scan measures, thus further reducing both sensitivity and specificity.

28 J Illes (2006) 'Even if If, then What?' in Reading Minds: Lie Detection, Neuroscience, Law, and Society (Center for the Law and Biosciences, Stanford University). 29 K Evers (2007) 'Perspectives on Memory Manipulation: Using Beta-Blockers to Cure Post-Traumatic Stress Disorder' 16 (2) Cambridge Quarterly of Healthcare Ethics 138–46. 30 MS Steven and A Pascual-Leone (2006) 'TMS in the Human Brain: An Ethical Evaluation' in J Illes (ed) Neuroethics, above n 4 at 201–12. 31 E Racine, O Bar-Ilan and J Illes (2006) 'Brain Imaging: A Decade of Coverage in the Print Media' 16 (2) Science Communication 122–43. 32 H Gardner, M Csikszentmihalyi and W Damon (2001) Good Work: When Excellence and Ethics Meet (New York, Basic Books). 33 ES Valenstein (1986) Great and Desperate Cures: The Rise and Decline of Psychosurgery and Other Radical Treatments for Mental Illness (New York, Basic Books).

In the Public Eye

Hype is not a phenomenon unfamiliar to the scientific or legal community, or even to the public. As individuals, we may hype our accomplishments when due for a promotion or a raise in salary, boost the beauty of a drawing by a four-year-old, or exaggerate the size of a catch from a previous fishing weekend. The press corps does the same with the science it covers, albeit with different motivation.31 Gardner et al have described the enormous pressures on journalists to respond to the 'need for speed'.32 Consequently, whether in contemporary times or as early as the days of Egas Moniz and his twentieth-century cures for mental illness,33 benefits over risks, and hope over limitation, have dominated press coverage of advances in science.34,35

In 1935, the public was drawn by press reports to prefrontal lobotomies as an answer to depression. In the 1960s, similar surgeries were considered a possible response to social unrest (reviewed in36). In 1993, press coverage of an anticipated (yet unproven) effect of classical music on child development37 led the legislature of the state of Georgia, USA, to require the distribution of music, Mozart in particular, to families with newborn children.38 While clearly not as socially or medically questionable as the excision of neural tissue, this nonetheless diverted resources from other, possibly more effective, methods of childhood intervention. In the twenty-first century, the risks of over-medicalising conditions not previously considered pathologic have become practically epidemic.39,40 Even when media coverage is critical of a trend, exposure in the press seems to fuel reactions by the public that, while intended to be positive, are often misguided.41

Recipe for Success or Signal of Trouble?

While close intersections between advanced technology, enthusiastic scientists and an engaged world press would seem to be a recipe for success, there are many signals of trouble. The first follows from the technical concerns.
With few standards of practice in place, the risk of improper use of technology is great, owing to the lack of quality control over instrumentation and personnel, the implementation of paradigms, and the interpretation of data. This may have a cascade effect, including the occurrence and outcome of false positive results. If the potential for experimental error is high, then protections should be put in place to mitigate the possibility of false positive findings before a person is subjected to further testing. I would argue that the consequences of a missed positive are less grave for the individual than for society if, in fact, concealment of a heinous crime is successfully achieved. Well-intentioned yet premature adoption of technology is always a risk in a society that has an insatiable appetite for innovation.

34 J Illes and SJ Bird (2006) 'Neuroethics: A Modern Context for Ethics in Neuroscience' 29 (9) Trends in Neuroscience 511–17.
35 J Singh, J Hallmayer and J Illes (2007) 'Interacting and Paradoxical Forces in Neuroscience and Society' 8 Nature Reviews Neuroscience 153–60.
36 Illes and Bird, above n 34.
37 FH Rauscher, GL Shaw and KN Ky (1995) 'Listening to Mozart Enhances Spatial-Temporal Reasoning: Towards a Neurophysiological Basis' 185 (1) Neuroscience Letters 44–7.
38 A Bangerter and C Heath (2004) 'The Mozart Effect: Tracking the Evolution of a Scientific Legend' 43 British Journal of Social Psychology 605–23.
39 SE Hyman (2002) 'Neuroscience, Genetics, and the Future of Psychiatric Diagnosis' 35 (203) Psychopharmacology 139–44.
40 MJ Farah (2002) 'Emerging Ethical Issues in Neuroscience' 5 (11) Nature Neuroscience 1123–9.
41 Singh, Hallmayer and Illes, above n 35.
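The arithmetic behind the false-positive concern can be made concrete. The following is a minimal sketch, with hypothetical sensitivity, specificity and prevalence figures chosen purely for exposition (none are drawn from this chapter or any cited study), of why a screening test flagging 'deception' will produce mostly false positives when deception is rare in the screened population:

```python
# Illustrative only: the numbers below are hypothetical, not drawn
# from any study cited in this chapter.
def positive_predictive_value(sensitivity: float,
                              specificity: float,
                              base_rate: float) -> float:
    """Probability that a person flagged as 'positive' truly is,
    given the test's sensitivity and specificity and the prevalence
    (base rate) of the condition in the screened population.
    This is a direct application of Bayes' theorem."""
    true_positives = sensitivity * base_rate
    false_positives = (1 - specificity) * (1 - base_rate)
    return true_positives / (true_positives + false_positives)

# Screening a population in which 1 in 100 subjects is deceptive,
# with a test that is 90% sensitive and 90% specific:
ppv = positive_predictive_value(0.90, 0.90, 0.01)
print(f"{ppv:.1%}")  # roughly 8.3%: most 'positives' are false
```

Even a test that looks impressive in the laboratory thus flags about eleven innocent subjects for every genuine one at this base rate, which is the statistical substance of the author's call for protections before further testing.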
324 Judy Illes

Resources for research are not always sustainable, making it difficult to convert a technology, once it has reached the point of initial validity, into a sustained one.42 Even in the best-case scenario, commercialisers of technology must be mindful of the need for continuing education as technology moves ahead.43 A precedent for technology misuse stems from off-label uses of drugs or devices approved for one application and then adopted for another (without needing new approval). Further concern arises from the presumed covert development of relevant technology for military and security purposes. There is an imperative, therefore, for entrepreneurs to operate like their colleagues in the academic sector and publish openly in the peer-reviewed literature. In this age of increased on-line publishing, this would be entirely feasible. With standards of practice in place for time-to-review, time-to-publication, and disclosure, it could reasonably be accomplished without risks to trade secrets or other proprietary information. A final note is one that may be coined the 'speeding ticket effect': it is reminiscent of the urban legend that at certain times of the month one is more likely to receive a speeding ticket or other traffic violation, in part because of a quota that must be met by law enforcement. Overzealous 'lie catchers'—whether motivated by quotas that must be met, by financial gain or by other factors—represent considerable risk to society. The risks are defined by the infringements on privacy, immediate harm to those wrongly accused and thus targeted for further intervention, long-term risks of stigma, and the inconvenience, at best, when an individual has been 'red flagged'.
Stanford Professor of Law Hank Greely and I have proposed a regulatory scheme for neurotechnology that may be one answer to the ethical, legal and social challenges described here.44 Our model draws on the United States Food and Drug Administration's (FDA) requirements for drugs or biologics, with criminal implications when rules are not followed. We believe that large-scale trials equivalent to the randomised controlled clinical trials of medicine are needed for lie detection and other similar-application technology. Trials must have subject numbers not in the 10s, as are conducted now, but in the 100s; be representative of the population for age, sex, ethnicity, handedness, and other characteristics; and measure a wide range of relevant behaviours. Further to this proposal, I believe that the voice of well-informed stakeholders from all sectors must be heard early and directly. The public's gauge of whether the benefits of such technology for variously detecting liars, everyday criminals, or terrorists whose goal may be mass destruction of human life outweigh the risks is as relevant as the opinions of the scientists and engineers who drive development, and the position of the administrative bodies implementing regulation and policy.

42 Eaton and Illes, above n 23.
43 Ibid.
44 Greely and Illes, above n 21.

Conclusion

I return briefly to the article by Rishikoff and Schrage45 with a final question and quotation:

Insightful or Absurd?

The outrage attending the news about Abu Ghraib probably wouldn't have arisen if the images featured detainees who weren't naked, hooded, or sexually posed as preludes to hostile interrogation. If prisoners instead had been wired to electroencephalographs or noninvasively examined by fMRI scanners to see whether they were telling the truth, the images would not have turned into emblems of degradation and humiliation.
Whether for bodies or brains, this assertion is as powerful as it is absurd. The power of an image in any dimensional space is tremendous, far greater than the power of words. Even if we had scans of the prisoners' brains at work at some task, I suspect they still would not touch our emotions as much as pictures of abused human beings. However, past studies of jury behaviour46 suggest that they still might provoke severe and quick reactions, with the persuasiveness of colorised images trumping context, any information about limitations, and good common sense.47,48 As this essay draws to a close, I conclude simply with a last few questions and responses: could Rishikoff and Schrage's proposal for the use of neurotechnology really lead to the detection of truth or falsehood, viable information and underlying motivation in a prison setting? This is highly uncertain. Would the application be good or bad? Not purely good. Perhaps not bad. In the best-case scenario, perhaps a little bit of both. Indeed, the jury is still out.

45 Rishikoff and Schrage, above n 1.
46 J Dumit (1999) 'Objective Brains, Prejudicial Images' 12 (1) Science in Context 173–201.
47 J Dumit (2003) Picturing Personhood: Brain Scans and Biomedical Identity (Princeton, Princeton University Press).
48 D Schuman (2007) 'Comment on: J Illes, Authenticity, Bluffing and the Privacy of Human Thought' (Dallas, University of Texas).

15
Taming Matter for the Welfare of Humanity: Regulating Nanotechnology
HAILEMICHAEL TESHOME DEMISSIE*

I. Introduction

'A tsunami' is the metaphor used to describe the stealth and transformative potential of nanotechnology. It is an unfortunate metaphor, as it ironically carries the negative import associated with that devastating natural phenomenon. The metaphor obscures the fact that nanotechnology is in the hands of humankind and can be controlled and harnessed for the good of humanity.
Yet the apocalyptic pessimism about nanotechnology aired by sceptics is not far from being realistic. Besides the risks to human health and the environment, about which little is yet known with certainty, there are the speed at which the technology is diffusing, the relatively low entry cost for application, the low public visibility, and the fact that the development of the technology is still driven by the business-as-usual scramble for markets and profits despite the technology's enormous potential to alleviate the sufferings of the needy by catering for their bare necessities. All of these call for renewed vigilance in regulating nanotechnology with the prime objective of global welfare.

This paper contributes to the core theme of promoting the deployment of nanotechnology for the welfare of humanity. In a somewhat synoptic way, it traces the major regulatory, social and ethical issues surrounding the technology. The first part of the paper addresses the nature, the 'revolutionariness' and the revolutionary promises of the technology. In the second part, the regulatory concerns associated with the development of the technology are taken up, while the rest of the paper deals with the impending nanodivide and the attendant issue of benefit-sharing. The paper argues that the business-as-usual approach, whereby the development of the technology is left to the vagaries of the market, has to be abandoned. Given the enormity of its potential, the market is either unable to handle it or will disrupt its beneficial deployment.

* This contribution is dedicated to my mother, W/o Mekdes Desta. I would like to thank Professor Roger Brownsword for his invaluable support in the writing of the paper. I gratefully acknowledge the helpful suggestions that Professor Nicholas Squires of Coventry University, Robin Mackenzie of the University of Kent and Joel D'Silva of the University of Surrey made on earlier drafts of the paper.

II.
Introducing Nano

Defining 'Nano': From Quantum Metrology to Branding Terminology

'Don't let nano become a four-letter word.'1 (Philip Bond, US Under-Secretary of Commerce for Technology)

That was a piece of advice given to an audience consisting of scientists, engineers and academics. In a way the official's comment was needlessly thrown at his audience, since 'nano' was no longer 'a four-letter word' as he spoke. Far from viewing it in awe or as a taboo, scientists and the lay public in the first world are making it a household term; business has aggressively appropriated and misappropriated it, deploying it for marketing purposes. 'Nano' made a glamorous entrance to the market as 'the advertising hit du jour' of big brands like GM and Apple.2 People boasting about nano-ties around their necks or iPod Nanos in their pockets are not rare nowadays. Nano is a favourite of the world of science fiction, which Hollywood has taken to an even bigger audience, often casting it as the villain. In sum, the days when nano was too revered or seen as 'too exotic for general discussion' are over.3

The term 'nanotechnology' has been in such wide circulation that the meaning attached to it varies depending on whom you ask. The over-adoption of the term and the resulting multiplicity of its meanings is a nightmare for standardisation and regulatory agencies, whose appeal for agreement on terminology and nomenclature is still standing. After more than two decades of active engagement and enormous investment, a great deal of the disambiguation exercise remains undone, as the quest for a determination of the 'ontological status' of nanotechnology is still pending.4

1 Philip Bond (2004), 'Vision for Converging Technologies and Future Society' in Mihail Roco and Carlo D Montemagno (eds), The Coevolution of Human Potential and Converging Technologies (New York, The New York Academy of Sciences) 17 at 21.
2 National Geographic (June 2006) 98 at 118.
3 GH Reynolds (2003), 'Nanotechnology and Regulatory Policy: Three Futures' 17 (1) Harvard Journal of Law and Technology 179 at 181.
4 Notable among such disambiguation exercises is the publication of the first nanotechnology terminology standards by ASTM International (the American Society for Testing and Materials) in partnership with other international standardisation organisations in February 2007. Other efforts at the national and international level are underway, including those of the British Standards Institute and the International Standards Organisation (ISO). ASTM International (2006), E2456–06: Terminology for Nanotechnology, available at (accessed 02 May 2007); Environmental Defense–DuPont, Nano Risk Framework, 26 February 2007, available at (accessed 01 May 2007) at 6. Also Fritz Allhoff and Patrick Lin (2006), 'What's So Special about Nanotechnology and Nanoethics' 20 (2) International Journal of Applied Philosophy 179.

Nor is there hope of a consensual definition coming soon, owing to the 'fluid' nature of the technology, further complicated by the hype and attention it attracts.5 As the first essential step of the disambiguation exercise, 'killing' the very notion of nanotechnology itself has been suggested: 'Nanotechnology simply does not exist. What is real is science, technology and engineering at the nanometre scale' (emphasis added).6 The 'killing' is, however, unhelpful, as the 'nanometre scale' itself is not an agreed scale. 'Killing' the term nanotechnology and salvaging science, technology and engineering at the nanoscale would not dispense with the onus of defining 'the nanometre scale'. Nanometre refers to the base unit in quantum metrology or, with unavoidable tautology, nanometrology: the science of measurement at the nanoscale.7 One nanometre is a billionth of a metre.
This is a length scale which is hard to imagine even with far-fetched comparisons with familiar length scales. Nobel laureate Sir Harold Kroto describes it by comparing it to the human head: one nanometre is to a human head what a human head is to the planet Earth.8 A dollar bill is 100,000 nanometres (abbreviated as nm) thick, while a human hair is 80,000 nm wide.

While there is no disagreement as to the metrological signification of the prefix 'nano', confusion crops up as the suffixes are added. Nanotechnology is often crudely defined as science and technology operating at the nanoscale, with that scale confined to the range of 1–100 nm. However, this is not the only criterion for a field of science and technology to qualify as nanotechnology. It has to do with the novel properties of matter which are exhibited only at the nanoscale. This additional criterion has a critical implication for the former, whereby this least disputable feature of nanotechnology is demoted to a less important feature of the definition. The range of 1–100 nm is said to be 'arbitrary', as materials within this scale range do not necessarily behave in the strange ways that differentiate them from their macro or micro-scale state.9 While in some cases the strange behaviour of materials can be observed well beyond the 100 nm range, in other cases no such behaviour is observed within that range, especially above

5 Hans Fogelberg and Hans Glimmel (2003), Bringing Visibility to the Invisible: Toward a Social Understanding of Nanotechnology (Göteborg, Göteborgs Universitet); also available at (accessed 20 January 2007) at 42.
6 Denis Loveridge (2002), 'Nanotechnology: Its Potential as the "Next Industrial Revolution" and Its Social Consequences' (The University of Manchester); available at (accessed 13 April 2007).
7 The Royal Society and the Royal Academy of Engineers (2004), Nanoscience and Nanotechnologies: Opportunities and Uncertainties (London, The Royal Society) at 13, available at (accessed 13 April 2007).
8 Jim Gimzewski and Victoria Vesna (2003), 'The Nanoneme Syndrome: Blurring of Fact and Fiction in the Construction of a New Science' 1 (1) Technoetic Arts.
9 Natasha Loder (2005), 'Small Wonders' (1 January) The Economist 3 at 3. The arbitrariness of the 1–100 nm range is nowhere made clearer than in the humorous impromptu answer that an academic gave to the question 'Why 100 nm?': 'Because President Clinton says so'; quoted in Richard Jones (2004), Soft Machines: Nanotechnology and Life (Oxford, Oxford University Press) at 39.

50 nm.10 Therefore, the novel property, rather than the scale range of 1–100 nm, becomes the central defining concept of nanotechnology.11

It is indeed the strange properties of matter at the nanoscale, and the possible applications thereof, that have made nanotechnology a subject of enormous interest. At the nanoscale, the classical laws of physics governing the macroworld cease to operate; the laws of quantum physics take over, and the strange properties of matter unknown at the macro level begin to dominate. At the nanoscale silver turns into a bioactive antimicrobial substance; gold melts at a much lower temperature than it does at the micro or macroscale; copper strangely becomes a poor conductor of electricity; aluminium behaves like chlorine and turns into an explosive substance; and soft carbon in the form of graphite, when manipulated at the nanoscale, becomes fifty to a hundred times stronger than steel, turning into a much sought-after material with a high strength-to-weight ratio.
'It is like you shrink a cat, and keep shrinking it, and then at some point, all at once, it turns into a dog.'12 It is thus understandable why the exploitation, actual or potential, real or purported, of these strange properties has become the crucial concept in the definition of nanotechnology.

Defining nano is not a mere semantic exercise. Those engaged in defining nanotechnology are trying to put their imprint on what is now becoming a 'gigaideology'.13 As the first of a series of steps in setting boundaries for the ensuing social and ethical issues, defining nano is 'necessarily a question of exercising power'.14 The UNESCO report on nanotechnology underscored the fact that, in view of the lack of an agreed definition, 'nanotechnology will be defined by the corporations and nations that pursue their own interests most vigorously'.15 Understandably, it is the US official definition, quoted by the UNESCO report itself, which is cited more often than any other. The US formulation defines nanotechnology in terms of the length scale and the novel properties. Though an official utterance, this definition is not meant to be the last word; that it is merely 'a working definition' needs to be stressed. Accordingly, nanotechnology is understood here loosely as 'an umbrella term' for the scientific and technological activities where the minimum conditions of the length scale and novel properties are met. Yet it is preferable to subscribe to what Wood et al advise: 'Rather than seeing the issue of the field as a matter of definition or at least as defining it once and for all, it may be more helpful

10 Loder, above n 9 at 3; also Michael Roukes (2001), 'Plenty of Room Indeed' (September) Scientific American 42 at 43.
11 Loder, above n 9, at 3.
12 National Geographic, above n 2, at 103.
to approach it as a sociological issue.'16 Reckoning with the resources consumed in the search for clear-cut definitions, they rightly suggest a detour in the course of engaging with the phenomenon: 'strict definitions may be irrelevant as perspectives on how it is best pursued and what it can achieve become more important'.17

Definitely the Next, but not Just Another Scientific Revolution

Nanotechnology is almost unanimously held to be a revolutionary technology in both the camp of its proponents and that of its opponents. However, the discourse on the 'revolutionariness' of nanotechnology is not without nuances or even ambivalences. Commentators are at pains to appraise its revolutionary nature.18 Some would say it is revolutionary but not new; others consider it nothing new except for the enabling increase in knowledge, thus rendering any development a matter of degree rather than a paradigm change. Still others doubt its 'revolutionariness' because it is not yet fully known.19 The relevance of nanotechnology for developing countries hinges on its being revolutionary, as their transformation can be achieved by nothing less than a revolutionary change that would relieve them of the burden of playing catch-up. Schummer holds that the revolutionary slogan may serve as a beacon of 'a unique opportunity' that these countries may put to use in this respect.

13 Debashish Munshi et al (2007), 'A Map of the Nanoworld: Sizing up the Science, Politics, and Business of the Infinitesimal' 39 (4) Futures 432; Stephen Wood et al (eds) (2007), Nanotechnology: From the Science to the Social (ESRC) available at (accessed 26 September 2007).
14 Bruce Lewenstein (2005), 'What Counts as a "Social and Ethical Issue" in Nanotechnology?' 11 (1) HYLE—International Journal for Philosophy of Chemistry 5.
15 UNESCO (2006), The Ethics and Politics of Nanotechnology (Paris, UNESCO) at 7.
However, Schummer recoils from his position simply because the technology itself, let alone its 'revolutionariness', is not sufficiently understood.20 Such scepticism about the newness or the revolutionary nature of a technology is justified, as the revolutionary tag is often 'a thoughtless marketing slogan'.21 Thus, as Professor Brownsword cautions, one needs to think twice—using scientific and social criteria—before buying into the idea that a certain technology is revolutionary.22

16 Wood et al, above n 13 at 12.
17 Ibid 17.
18 Consider, for example, the following irresolute statement: 'although surely implying a revolution as far as matter-processing is concerned, it is not entirely revolutionary'. Fogelberg and Glimmel, above n 5, at 7. A similar tone of frailty is detected in the French National Ethics Committee opinion, which says it is a 'technical revolution bearing, perhaps, the promise of a future scientific revolution'. National Consultative Ethics Committee for Health and Life Sciences (2007), Opinion No 96, Ethical Issues Raised by Nanosciences, Nanotechnologies and Health (Paris).
19 Joachim Schummer (2007) 'The Impact of Nanotechnologies on Developing Countries' in Fritz Allhoff et al (eds), Nanoethics: Examining the Societal Impact of Nanotechnology (Hoboken, NJ, Wiley) 5; Gary Stix (2001), 'Little Big Science' (September) Scientific American 26 at 31; Stephen Wood et al (eds) (2003), The Social and Economic Challenges of Nanotechnology (Swindon, ESRC) 28 and 47–51; Loveridge, above n 6.
20 Schummer, above n 19.
21 Ibid.
22 Roger Brownsword (2004), 'Red Lights and Rogues: Regulating Human Genetics', paper given at the international conference on 'Regulating Biotechnology', University of Amsterdam, May 2004 (on file with the author).

The Scientific Criteria

Using the scientific criterion requires probing into whether nanotechnology has brought about a paradigm change in scientific knowledge and understanding.
Nanotechnology satisfies this criterion in the true Kuhnian sense of conceptual or theoretical revolution.23 The bottom-up approach of manipulating matter atom by atom is understood as the essence of nanotechnology, 'where the real power of nano lies'.24 Drexler characterises it as a 'fundamentally different way of processing matter'.25 The very idea of a bottom-up approach may not be a completely new one, as it is mimicked from nature and already applied in computer technology, where bits and bytes are used as building blocks. Yet neither is it simply an incremental change. It is in terms of increased understanding, by which it has become possible to see and explain old phenomena in new ways, that nanotechnology is said to be revolutionary. It is revolutionary as 'a perspective shifter', ushering in the shift from the top-down 'macrocentric' to the bottom-up 'nanocentric' approach in scientific inquiry.26 And as such it represents 'a concept-driven revolution' on the scale of the revolutions brought about by the ideas of Darwin, Einstein or Freud.27 As a 'technological paradigm' deployed in nanotechnology, the atom-by-atom manipulation of matter, with its corollaries of complete human control and high precision, is a break with a scientific approach that was dominated for millennia by the top-down method. With the top-down approach it has been possible to achieve results that nanotechnology is claiming to achieve, including nanoscale artefacts like microchips made by a top-down process called microlithography.
Yet, as Reynolds noted, 'no one can "build" a tree with top-down methods'.28 Furthermore, the bottom-up approach is revolutionary not only because it can make possible what is not possible using the top-down approach of making things by cutting, etching, carving or moulding from the bulk, but also because it can go beyond the capabilities of natural bottom-up mechanisms.29 Mimicking nature is one thing; outdoing nature, as exemplified by the entire human enhancement venture, is another.

23 Jarunee Wonglimpyarat (2005), 'The Nano-Revolution of Schumpeter's Kondratieff Cycle' 25 Technovation 1349.
24 Wood et al, above n 19 at 26; National Geographic, above n 2 at 108.
25 Reynolds, above n 3, quoting Drexler at his fn 13.
26 Fogelberg and Glimmel, above n 5 at 5; Wonglimpyarat, above n 23.
27 Susan Greenfield (2003), Tomorrow's People: How the 21st-Century Technology is Changing the Way We Think and Feel (London, Penguin Books) at 186 and 192. A similar analysis is presented by Hunt, who considers the nanotechnology revolution as more of a '"rebound revolution", one that throws us back onto a consideration of the nature of the human enterprise we call science and technology'. Geoffrey Hunt (2006), 'Nanotechnoscience and Complex Systems: The Case for Nanology' in Hunt and Mehta (eds), Nanotechnology: Risk, Ethics and Law (London, Earthscan) 43 at 44.
28 Reynolds, above n 3 at 182.
29 Ibid; also the projection in IRGC (International Risk Governance Council) (2006), White Paper on Nanotechnology Risk Governance (Geneva, IRGC) 24–5.
In this respect, nanotechnology is also taken as 'a tool-driven revolution' on account of the processes, devices and substances discovered and invented.30

The Social Criteria

The social criterion for determining whether a technology can legitimately be called revolutionary requires a social transformation as the purpose and outcome of the technology.31 The nanotech promise is that the change will be a radical one, engendering a paradigmatic transition in terms of modes and relations of production, as typified by the Drexlerian concept of 'molecular manufacturing'—the hope that one will have a desktop unit to make anything one needs, almost out of nothing.32 Since the social and economic changes are largely promises at the moment, the revolution is rather a revolution-in-waiting. Yet there is little doubt that a socioeconomic paradigmatic transition is taking place with nanotechnology as its progenitor. This is intelligible from the manner in which paradigm shifts take place. A 'paradigmatic transition', as Santos prefers to call it (a less drastic moment than that suggested by a 'paradigm shift'), is 'a highly contested time mainly because it comprises multiple temporalities'.33 There will certainly be intermediate stages of sub-paradigmatic changes, identified with the incumbent paradigm as its excesses and deficits, before the new paradigm takes centre stage. Likewise, the socioeconomic revolution that nanotechnology will be heralding will have to pass through multiple stages of sub-paradigmatic changes, as explained by the tsunami metaphor:

Technological revolutions travel with the same stealth [as the tsunami]. Spotting the wave while it is still crossing the ocean is tricky, which explains why so few of us are aware of the one that is approaching. Nanotechnology has been around for two decades, but the first wave of applications is only now beginning to break. As it does, it will make the computer revolution look like small change.
It will affect everything from the batteries we use to the clothes we wear to the way we treat cancer.34

Nanovisionaries contemplate a paradigm shift leading to 'the experience economy', a prelude to a cyborgian post/transhumanist era with the biosphere colonised by 'Humanity 2.0' and other makes of 'Life 2.0', and to all sorts of things one reads in science fiction.35 'Scenario planning' is deployed to help visualise the coming era of nano-induced new social, economic, political and cultural paradigms.36 'Preparedness' is the new buzzword, as the sweeping advance of nanotechnology is set to topple the existing, relatively 'primitive', technologies from their privileged position as state-of-the-art.37 Whatever these technologies can do, from the production of material things to human cloning, nanotechnology can do better and faster. Nanotechnology is celebrated as 'the next industrial revolution' in official policy documents. The word 'next' implies the existence of a series of revolutions of comparable magnitude and genre. However, it is widely held that the nano-revolutionary future is 'a future in which we have not simply added one more technology to our arsenal, but achieved a final mastery over matter'.38 For one thing, manipulating matter at the atomic and sub-atomic level appears to be the last such activity, as this level represents the fundamental limit in this frontier, at least for now.

30 Greenfield, above n 27 at 186 and 192.
31 Brownsword, above n 22.
32 Wood et al, above n 19 at 22.
33 Boaventura de Sousa Santos (2002), Toward a New Legal Common Sense, 2nd edn (London, Butterworths) 64.
34 National Geographic, above n 2.
35 Robert Best et al (2006), 'Introduction: A Sympathetic but Critical Assessment of Nanotechnology Initiatives' 34 Journal of Law, Medicine and Ethics 655 at 655; Lee Silver (2007), 'Life 2.0' Newsweek (04 June) 41 at 41.
This is indeed 'the authentic question' that distinguishes the nanorevolution from other revolutions.39 On the other hand, as the key technology in the convergence of technologies, nanotechnology is catalysing the revolution in other technologies. Nanotechnology, biotechnology, information and cognitive sciences are integrally fused and collectively treated as 'convergent technologies' in the EU, and grouped under the acronym NBIC (nano, bio, info, cogno) in the US. In this convergence, nanotechnology 'occupies an elevated position'.40 The basis of this convergence is the very subject-matter of nanotechnology research—'the material unity at the nano-scale'.41 The revolution unleashed by the convergence of technologies in which nanotechnology occupies a decisive position is 'a revolution of a kind never experienced before',42 a 'meta-revolution',43 not just another scientific revolution.

36 K Eric Drexler et al (1991), Unbounding the Future: the Nanotechnology Revolution (New York, William Morrow and Company, Inc). See also the recent outcome document of the EU-funded scenario planning project published by the Nanologue Team; Nanologue (2006), The Future of Nanotechnology: We Need to Talk, available at (accessed on 20 January 2007).
37 Joachim Schummer noted the novelty of legislation calling for the establishment of a Nanotechnology Preparedness Centre in the US. Joachim Schummer (2004), 'Societal and Ethical Implications of Nanotechnology: Meanings, Interest Groups, and Social Dynamics' 8 (2) Techné: Research in Philosophy and Technology 56 at 65. The issue is also taken up by nongovernmental entities. The aim of the Foresight Institute founded by Drexler is '"to help prepare society for anticipated advanced technologies"—most important nanotechnology'. Bill Joy (2000), 'Why the Future Doesn't Need Us' 8 (2) Wired. 'Preparedness' was a central theme of a recent UK government study on technologies, including notably nanotechnology.
'Robots Could Demand Legal Rights', BBC news report available at (accessed 10 December 2007).
38 Nigel Cameron (2006), Heralding a Century of Hype and Hope: Nanotechnology and its Transformative Potential, available at (accessed 20 January 2007).
39 Hunt, above n 27 at 44.
40 Denis Loveridge (2004), 'Converging Technologies—A Commentary, Part I' (The University of Manchester); available at (accessed 20 January 2007).
41 Mihail Roco (2004), 'Science and Technology Integration for Increased Human Potential and Societal Outcomes' in Mihail Roco and Carlo D Montemagno, above n 1 at 3.
42 Loveridge, above n 40.
43 Hunt, above n 27 at 44.

III. Risk and Regulation

A Glance at the Risk Profile

'Gray Goo'

The 'gray goo' danger, whereby self-replicating nanobots are feared to go feral wreaking havoc on the biosphere, was the first headline-grabber in the debate on nanotechnology. Bill Joy, whose dystopian polemic provided a classic articulation of the 'heuristics of fear' in this regard, picked up and amplified Drexler's notion of 'gray goo'.44 Prince Charles joined him with a nano-edition of his views on GM food, apparently building on the 'gray goo' concept.45 The 'gray goo' scenario is an indispensable ingredient of the debate on nanotechnology in general. As Nigel Cameron observes, no discussion of nanotechnology can claim to be complete without a discussion of the gray goo issue.46 It is also held that nanotechnology owes its present publicity to this issue, which was at the heart of the great Smalley–Drexler debate.47 Smalley argued the impossibility of such an apocalypse, rendering the concept of molecular assemblers an unworkable proposition. He reasoned that what came to be known as 'Smalley Fingers' do not allow the manoeuvring of atoms for self-replication, as these manipulator fingers are either too 'fat' or too 'sticky' to do the job.
Smalley accused Drexler of scaring generations and instilling aversion to the science. Drexler, while maintaining his argument for the molecular assembler, has nevertheless recanted and even regretted his coinage of the term ‘gray goo’.48 The report by the Royal Society and the Royal Academy of Engineering favoured the Smalley argument against ‘gray goo’, treating it rather as ‘a distraction from more important issues’—a view later endorsed by UNESCO.49 Despite such high-profile trivialising of the issue, and the recanting by the very author of the concept itself, the gray goo issue has absorbed a disproportionate share of the academic and NGO ink dedicated to the discourse on the regulation of nanotechnology. In the 2007 sequel to their 2003 report for the ESRC, Wood et al regretted the space they had allotted to the gray goo issue.50 The intensive focus on

44 Reynolds, above n 3 at 188. 45 Peter Singer et al (2004), ‘Will Prince Charles et al Diminish the Opportunities of Developing Countries in Nanotechnology?’ available at (accessed 15 January 2007). Prince Charles has defended himself, saying he has not used the phrase ‘gray goo’. ‘Prince Charles Airs His Nano-views’ at (accessed 15 January 2007). 46 Nigel Cameron (2006), ‘Nanotechnology: Why it Matters?’ available at (accessed 15 September 2007). 47 ‘Nanotech is Not So Scary’ (2003) 421 Nature (23 January) 299. 48 BBC, ‘Nanotech Guru Turns Back on “Goo” ’ at accessed (DATE). 49 The Royal Society and the Royal Academy of Engineering, above n 7 at 104; UNESCO, above n 15 at 20. 50 Wood et al, above n 13 at 18.

336 Hailemichael T Demissie

gray goo has been ‘an unfortunate tendency’ that unduly impacted the thinking on nanoregulation.51 Though a resilient issue in the discourse on nanotechnology, the gray goo scenario seemed to have lost its alarmist appeal.
Bill Joy’s impassioned plea for the relinquishment of nano research was ignominiously ignored.52 His arguments were rendered obsolete by some, if not the majority, of scientists, who reassured the public that there is nothing to fear from any such thing as gray goo, at least for the foreseeable future.53 Yet, despite the supposed obsolescence of the gray goo scenario, the discourse on nanoregulation has relied heavily on it, either inadvertently or in direct allusion to its rationality or ‘arationality’,54 culminating in the revival of the issue with its original clout. There are several reasons for its recent comeback. For one thing, the ‘Smalley Fingers’ objections that aim at dispelling the gray goo fear ‘do not constitute a blanket disproof of the feasibility of [molecular nanotechnology]’.55 Furthermore, innovation is bubbling up in the field, providing answers to some of the crucial questions that the Drexlerian ‘molecular assembler’ concept was thought to have failed to answer. In particular, the issue of the source of energy for the assembler was recently addressed by UK researchers who are poised to use light as a source, drawing on the thought experiment known as ‘Maxwell’s Demon’.56 This development is widely held to ‘take molecular machines a step forward to the realisation of the future world of nanotechnology’.57 A host of other discoveries and inventions are furnishing the evidence for the definitive advance towards the molecular assembler.58

51 Ahson Wardak (2003), Nanotechnology & Regulation: A Case Study Using the Toxic Substances Control Act (TSCA), A Discussion Paper (Woodrow Wilson International Centre for Scholars Foresight and Governance Project) at 10; available at (accessed 07 August 2007). 52 Thomas D Vandermolen (2006), ‘Molecular Nanotechnology and National Security’ Air & Space Power Journal. 53 At least not for the next 25 years, according to Wilson’s bold prediction.
Robin F Wilson (2006), ‘Nanotechnology: The Challenge of Regulating Known Unknowns’ Journal of Law, Medicine & Ethics 704 at 705. 54 Kaiser observes that the dystopian concerns on nanotechnology, and gray goo in particular, cannot be characterised as either rational or irrational as they are undetermined at the moment. Hence, he employs the Greek negative ‘arational’ to create a third category which is neither rational nor irrational. Mario Kaiser (2006), ‘How Do We Situate Nanotechnology in a Social and Historical Context?: Drawing the Boundaries of Nanoscience Rationalizing the Concerns’ 34 Journal of Law, Medicine and Ethics 667. 55 Chris Phoenix (2003), A Technical Commentary on Greenpeace’s Nanotechnology Report, available at (accessed 19 February 2007). 56 Viviana Serreli et al (2007), ‘A Molecular Information Ratchet’ 445 (February) Nature 523. The researchers capitalise on this thought experiment, proposed by James Clerk Maxwell a century and a half ago, to find a way around the second law of thermodynamics: the universal tendency to even out differences in temperature, pressure and density in an isolated system. 57 ‘Scientists Build Nanomachine’ at (accessed 19 February 2007). 58 Sample news on the advent of nanomachines: ‘New Micromanipulator May Help Build Micro-Machines’ (accessed 19 February 2007); ‘Biologists Learn Structure of Enzyme Needed to Power “Molecular Motor”’ (accessed 19 February 2007); ‘Nano-Wheels Seen Rolling at Last’ (accessed 19 February 2007).

What is more, it is ‘moving very fast’, as Roco recently conceded.59 Drexler himself has updated the gray goo theme, reaffirming the need for control: ‘Nanoreplicators are feasible and their control is, thus, a legitimate concern’.60 While giving the issue precedence over other issues would be overzealous, its depiction as an obsolete ‘nanomyth’61 is itself a deadly distraction. It is time to recall Drexler’s warning of two decades ago: ‘The gray goo threat makes one thing perfectly clear: we cannot afford certain kinds of accident with replicating assemblers.’62

Accidents

With a market worth around $70 billion in the US alone,63 nanoengineered materials are now found in hundreds of consumer and industrial products. As yet, there is no conclusive scientific finding that any particular nanomaterial is toxic or hazardous.64 Yet, few would take their cue from this to rush to the conclusion that the technology is safe. The prevailing predisposition is not that of accepting the technology as safe but rather relying on the lack of knowledge to presume otherwise.65 This is understandable not only because the technology is new and not fully known but also because it is known that nanomaterials are capable of penetrating parts of the human body where no alien material has ever reached. Nanoengineered materials can be inhaled and deposited in the alveoli; they can make their way right through unbroken skin; they pass through the cell membrane and even compromise the blood–brain barrier.66 That is sufficient reason to demand more research into the health and environmental risks potentially posed by nanomaterials.

59 Mihail Roco (2007), ‘The Future of Nanotechnology: A Rice Q&A With the NSF’s Mike Roco’ available at (accessed 01 May 2007). 60 K Eric Drexler (2006), ‘Nanotechnology: From Feynman to Funding’ in Hunt and Mehta, above n 27, 25 at 31. 61 William Cheshire (2007), ‘Doing Small Things Well: Translating Nanotechnology into Nanomedicine’ in Nigel Cameron and M Ellen Mitchell (eds), Nanoscale: Issues and Perspectives for the Nano Century (Hoboken, NJ, Wiley) 315 at 330. It seems this author, in labelling ‘grey goo’ a nanomyth, is oblivious of Drexler’s latest reminder cited in n 60 above.
62 K Eric Drexler (1989), Engines of Creation: The Coming Era of Nanotechnology, Anchor Books, available at (accessed 20 January 2007). 63 Roco, above n 59. 64 Paula Gould (2006), ‘Nanomaterials Face Control Measures’ 1 (2) Nanotoday 34 at 39. The much publicised Magic Nano incident in Germany in 2006, which some were eager to use as the flagship episode of ‘a sinister technology run amuck’, ended with nanotechnology being absolved of the charge of toxicity. ‘Has All the Magic Gone?’ The Economist (15 April 2006). 65 Ronald Clift (2006), ‘Risk Management and Regulation in an Emerging Technology’ in Hunt and Mehta, above n 27, 140 at 146. 66 It is to be noted that it is this very ability to reach hitherto inaccessible parts of the human body that promises novel forms of medication and drug delivery. Jacob Heller and Christine Peterson (2007), ‘Nanotechnology: Maximizing Benefits, Minimizing Downsides’ in Cameron and Mitchell, above n 61, 83 at 88.

The thorny issue is, however, finding support for such research, which funding agencies and investors find not so ‘sexy’. Governments are faced with a disorientating dilemma, with the tempting economic rewards of the technology pitted against their duty to protect the public.67 The funds governments earmark for risk research are a mere speck in the avalanche of funding available for nano R&D in general, clearly showing that risk research is not a priority in governments’ engagement with the technology.68 The need to reset funding priorities is stressed, with ‘targeted risk research’ seeking answers to regulatory concerns preferred over ‘exploratory research’, which is not a suitable model when addressing obvious and specific questions of health and safety.69 Similarly, the UNEP called for ‘a carefully designed research’.70

Abuse

Abuse of nanotechnology is ‘the greatest danger’,71 presenting an unprecedented challenge for humanity.
The peculiar attributes of the technology—its relative inexpensiveness, invisibility, micro-locomotion and self-replication—will make the control of abuse extremely difficult.72 The danger posed by abuse of nanotechnology requires ‘a level of political control far beyond that which most nations know how to exercise’.73 Abuse of nanotechnology can be exceptionally intractable. The ‘double life’ of technology—a phenomenon analysed in science and technology studies—denotes the use of a certain technology for purposes other than those originally intended by its creators.74 Sometimes such uses are ingenious, and it is no wonder they attract the attention of technology analysts. What attracts the interest of ethicists and regulators is, however, the unintended disruptive and/or destructive use of technologies—and, in the case of nanotechnology, its immense potential for such use. In the post-9/11 world, the risk of nanoterrorism must not be underestimated, especially with non-state actors playing the major role. Indeed, non-state actors

67 ‘Environmental Law and Nanotechnology’ available at (accessed 20 July 2007). 68 Of the $100 billion spent worldwide on nanotechnology research, only $10 million is said to have been spent on risk research. Wilson, above n 53 at 711. In the US, of the more than $1 billion spent annually on nanotechnology research, only $11 million per year goes to risk research. Similarly, in the UK, government expenditure on risk research is only an ‘absurd’ £600,000 per year compared to the £90 million funding for the research of advancing nanotechnology in 2004 alone. ‘Scientists Take the Government to Task’ The Daily Telegraph (28 March 2007). 69 Andrew Maynard of Texas University in testimony to the US Congress; ‘Nanotech Safety Needs Specific Government Risk Research and Funding’ available at (accessed 20 July 2007). 70 UNEP (2007), ‘Emerging Challenges: Nanotechnology and the Environment’ Geo Year Book 68.
71 Alexander Arnall (2003), Future Technologies, Today’s Choices: Nanotechnology, Artificial Intelligence and Robotics; A Technical, Political and Institutional Map of Emerging Technologies (London, Greenpeace Environmental Trust) at 41. 72 Robert Pinson (2004), ‘Is Nanotechnology Prohibited by the Biological and Chemical Weapons Conventions?’ 22 Berkeley Journal of International Law 279 at 304. 73 K Eric Drexler et al, above n 36. 74 David Bell (2006), Science, Technology and Culture (Maidenhead, Open University Press) 8.

are shaping the entire thinking on defence and the new kind of arms development, placing nanotechnology in the spotlight. The reaction of states to terrorist threats is becoming as worrying as the terrorist threats themselves. Following the US ‘war on terror’ model, some countries are appropriating the power to use violence not only to wage the war on terror but also to pursue other exigencies.75 In the context of nanotechnology military applications, UNESCO warned against this opportunistic weapons development by ‘governments abusing the threat of terrorism’.76 The intense interest in the military applications of nanotechnology is driving major powers into an arms race which, after a brief hiatus following the end of the Cold War, is resuming with as yet unidentified polarities.77 The militarisation of nanotechnology may not be neatly categorised as abuse: whether it is depends not on militarisation per se but on the purpose for which the technology is deployed. This would, in turn, depend on how one defines ‘abuse’—a task made no easier by the blurring of the line separating defensive and offensive technologies and actions, as in the case of pre-emptive actions.78 Regulators will be faced right at the outset with the task of setting the parameters to delimit what constitutes abuse of nanotechnology.
A major question will be whether the withholding of certain technologies for various reasons would amount to an abuse—like a criminal offence by omission. Drexler et al raise the question whether the withholding of a lifesaving treatment while pondering the ethical aspects of its deployment is akin to murder.79 The question can be expanded to bring in similar issues. What if the withholding is due not to the ethical issues involved but to purely pecuniary considerations or the prioritisation of national interest? A very pertinent example is provided by nano-products that are used by the US military for water treatment but are unavailable to populations in Africa and Asia, where water-borne diseases claim lives by the hundreds of thousands each year.80 Would this amount to an abuse for purposes of the global governance of nanotechnology? Nations have been deploying technology for ‘ends that are non-productive or inefficient for society as a whole: status, power, or political and social control,

75 Like the war on drugs (France), on separatist rebels (Russia, Spain), and on political dissidents (China and many others); ‘For Whom the Liberty Bell Tolls’ The Economist (31 August 2002). 76 UNESCO, above n 15 at 19. 77 IRGC, above n 29 at 44. See the sample stories on recent military issues in Russia and China at ‘Cheney Warns on Chinese Build-Up’ (accessed 10 December 2007); ‘Russia Threatening New Cold War Over Missile Defence’ (accessed 10 December 2007). 78 Geoffrey Hunt (2006), ‘The Global Ethics of Nanotechnology’ in Hunt and Mehta, above n 27, 183 at 187. 79 Drexler, above n 36. 80 Schummer questions the acceptability of directing resources to fund an expensive water-filtering nanodevice to be used on the battlefield while turning a blind eye to the needs of the masses in the third world. Schummer, above n 19 at 6.
On nanotechnologies for the disabled in the developed but not in the developing countries, see Meridian Institute (2005), Nano-Technology and the Poor: Opportunities and Risks: Closing the Gaps within and between Sectors of Society, available at (accessed 20 January 2007) at 11.

not to mention aesthetic pleasure and fun’.81 Should such purposes be regarded as abuses in view of unabating infant mortality, hunger, water-borne diseases and other easily preventable catastrophes? The trend of making the idea of abuse as inclusive as possible, so as to ensure that the technology serves strictly beneficial ends, can be gleaned from the various ethical theories on nanotechnology.82 Informing the ethical debate on nanotechnology, religious teachings qualify such abuses as ‘sin’—with the broad understanding that ‘sin is to know what is good but refuse to do it’.83 Confucian ethics teaches the same precept: ‘To see what is right, and not to do it, is a want of courage or of principle’.84

IV. Regulating Nano

With the exception of recent moves to bring nano within the regulatory remit, no country has adopted any regulatory measure specific to nanotechnology. The engagement of governments with the technology is heavily ‘tilted’ towards ‘incentivisation’ and ‘facilitation’. The very absence of regulation specific to nanotechnology is evidence of the concerted inaction by governments to clear the way for the advance of the technology—akin to the ‘deregulation as a form of regulation’ that was characteristic of the regulation of biotechnology.85 It is also in line with the current regulatory vogue, where the preoccupation has been with rehabilitating the image of regulation after the bad rap it has recently endured.
As Prosser noted, the current bustle is about ‘cutting red tape’, ‘lifting the regulatory burden’, and achieving ‘better regulation’.86 It thus follows that the vociferous calls for

81 Eda Kranakis (2005), ‘Surveying Technology and History: Essential Tensions and Postmodern Possibilities’ 46 (4) Technology and Culture 805 at 808. 82 Loveridge, for example, warns against the ‘unquestioning exploitation’ of the technology and stresses the need to pay more attention to what may be the most important question determining the successful development of the technology: ‘social desirability’. Loveridge, above n 6. 83 Franz A Folz and Friedrik Folz (2006), ‘The Societal and Ethical Implications of Nanotechnology: A Christian Response’ 32 (1) Journal of Technology Studies 104 at 110. 84 UNESCO (2004), Ethics in Asia-Pacific (Bangkok, UNESCO Asia and Pacific Regional Bureau for Education). See the quote on the rear cover. 85 Upendra Baxi (2004), The Future of Human Rights, 2nd edn (New Delhi, Oxford University Press) 274. 86 Tony Prosser (2006), ‘Regulation and Social Solidarity’ 33 (3) Journal of Law and Society 364 at 364. In a related development, in January 2007, the Bush administration issued a directive to regulatory agencies constraining their exercise of discretion in taking regulatory measures. Cheryl Hogue (2007), ‘Changing the Rules on Regulations: Bush Directive Makes It Harder for Agencies to Issue Rules’ Chemical & Engineering News (American Chemical Society) available at (accessed 10 April 2007). The UK government position is not dissimilar, as Lord Sainsbury made it clear that the government’s engagement should not ‘overburden industry with regulation’. HM Government (2005), Response to the Royal Society and Royal Academy of Engineering Report: ‘Nanoscience and Nanotechnologies: Risks and Opportunities’ (London, HM Government in Consultation with the Devolved Administrations) 1.
banning, prohibition, relinquishment and allied suggestions hardly find purchase as policy choices for regulating nanotechnology. Other approaches, like the precautionary approach, predominantly feature in the discourse on nanoregulation.

Regulating Uncertainty: Precautionary Versus ‘Proactionary’ Approaches

As far as available data can show, the risk involved in the manufacture of nanomaterials is no higher than the risk involved in wine-making or petroleum refining.87 The risk spoken of in connection with nanotechnology is at the moment largely speculative and is emphatically qualified as ‘potential’. Even the expression ‘potential risk’ lacks precision: ‘what [potential risk] designates is not a risk waiting to be realised, but a hypothetical risk, one that is only a matter of conjecture.’88 However, this downgrading does not warrant a presumption that nanomaterials are generally safe. The most that can be said of them is that they are neither ‘inherently unsafe’ nor ‘inherently benign’.89 The asbestos and, more recently, the Vioxx sagas counsel against any complacency about possible risks that may manifest themselves decades later, even under tight regulation. It is in such uncertainty-ridden circumstances that the precautionary principle comes into play. The 1992 Rio Declaration on Environment and Development provides one of the most oft-quoted formulations of the principle: ‘lack of scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation’.90 The rampant scientific uncertainty in this young field does satisfy the trigger for invoking the precautionary principle. What is not properly addressed by the principle, however, is the strong interest in exploiting the technology for economic and environmental purposes. The more interesting promise of nanotechnology relates to sustainability and the remediation of the environment.
Nano-enabled economic growth will not be the kind of growth that was achievable only at the expense of the environment. Clean and ubiquitous energy, waste-free production and reduced or no dependence on raw materials are goals that the nanotech revolution shares with the environmental cause the precautionary principle envisages to uphold. Given that nanotechnology holds such promise for sustainable development,

87 Wilson, above n 53 at 710. 88 Jean-Pierre Dupuy and Alexei Grinbaum (2004), ‘Living with Uncertainty: Toward the Ongoing Assessment of Nanotechnology’ 8 (2) Techné 4 at 10. 89 Kathy Jo Wetter (2006), ‘100 Years after The Pure Food & Drug Act: FDA’s Current Regulatory Framework Inadequate to Address New Nano-Scale Technologies’ Presentation on behalf of ETC Group, FDA Nanotechnology Public Meeting, 10 October 2006, available at (accessed 04 February 2007) and Wilson, above n 53 at 706. 90 Rio Declaration on Environment and Development (UNEP, 1992), available at (accessed 10 April 2007).

subscribing to the ‘technology-freezing’ tenets of the precautionary principle would be a gross miscalculation and even ‘unethical’.91 The opportunities that may follow from the technology are too great to pass up. The precautionary approach will be self-defeating if its application hampers the socially and environmentally beneficial development of nanotechnology for the mere reason of scientific uncertainty. Such an eventuality is more likely than not, as self-defeating outcomes of this kind, which Cass Sunstein calls ‘paradoxes of the regulatory state’, occur more often than is usually thought.92 Besides its self-defeating operation in relation to nanotechnology, the precautionary principle is of questionable conceptual integrity. The principle is taken rather as an idiom for the ‘philosophical aporia’ that surfaces when engaging with the future.
The principle requires the anticipation of the future, which is impossible unless we reduce the future to a quantifiable probability—something far from congruent with uncertainty.93 This fundamental handicap of not being able to predict the future invites all sorts of compromises and modifications in the application of the precautionary principle. Accordingly, the applications of the principle thus far have actually remained little more than ‘a glorified version of the “cost–benefit” analysis’.94 Policy makers applying the precautionary principle are assigning probability values to the future, often collapsing uncertainty into quantifiable risk, and precaution into prevention.95 In the uncertainty scenario, all calculations and predictions have to be abruptly abandoned in the face of a single event—‘a tipping point’. The ‘tipping point’ illustrates the inapplicability of assigning probability values to uncertainty. The possible occurrence of these events is ‘one of the reasons why uncertainty is not amenable to the concept of probability’.96 Such are the events that trigger knee-jerk responses from regulators, often devoid of thoroughly thought-out solutions.97 The conceptual fragility of the principle also relates to its other, sine qua non element, viz ‘scientific uncertainty’. The principle requires that primary focus be given to the hypothetical negative outcomes that may result from ‘scientific uncertainty’. Highlighting the impossibility of ascertaining the

91 Mark Meaney (2006), ‘Lessons from the Sustainability Movement: Toward an Integrative Decision-Making Process for Nanotechnology’ 34 Journal of Law, Medicine & Ethics 682 at 682. 92 Reynolds, invoking Cass Sunstein, above n 3 at 204. 93 Dupuy and Grinbaum, above n 88 at 9. 94 Ibid 11.
95 For the analyses and instances of the conflation of precaution and prevention, see Marie-Claire C Segger and Ashfaq Khalfan (eds) (2004), Sustainable Development Law (Oxford, Oxford University Press) 150–52; and Dupuy and Grinbaum, above n 88 at 10. The European Environment Agency even uses the phrase ‘precautionary prevention’, apparently as interchangeable with ‘precautionary principle’; European Environment Agency (2001), Late Lessons from Early Warnings: the Precautionary Principle 1896–2000 (Copenhagen, European Environment Agency); also available at (accessed 07 September 2007) at 12. 96 Dupuy and Grinbaum, above n 88 at 12. 97 Julia Black (2005), ‘The Emergence of Risk-based Regulation and the New Public Risk Management in the United Kingdom’ Public Law 512 at 527.

existence of a situation of uncertainty itself, Dupuy and Grinbaum show that the principle is actually resting on a sandy basis:

The assumption is that we know we are in a situation of uncertainty. It is an axiom of epistemic logic that if I do not know P, then I know that I do not know P. Yet, as soon as we depart from this framework, we must entertain the possibility that we do not know that we do not know something. In cases where uncertainty is such that it entails that uncertainty itself is uncertain, it is impossible to know whether or not the conditions for application of the precautionary principle have been met. If we apply the principle to itself, it will invalidate itself before our eyes.98

The near intuitive resort to the precautionary principle as the legal and ethical wastebasket for situations involving scientific uncertainty fails to take into account the fragility of the principle.
From the above analysis, it is argued that the precautionary principle is not capable of serving as a regulatory policy for nanotechnology and other new technologies.99 While arguments against the precautionary approach abound, those forwarded by transhumanist organisations are far too radical. These organisations regard any attempt at slowing down research as a move against human interests; they see the adoption of the precautionary principle as just such a move and call for its replacement with a ‘proactionary principle’. They redefine risks and categorise them into ‘endurable’ and ‘existential risks’. Risks are largely disregarded as long as they do not qualify as ‘existential risks’ endangering the future of humanity as a whole.100 The radical form of utilitarianism adopted by these organisations allows individuals to be sacrificed for the good of society; accordingly, carcinogenic pollutants and nuclear meltdowns are treated as endurable risks. While the promises of nanotechnology are indeed worth pursuing, they do not require the needless sacrifices these organisations propose. Given that there is no scientific activity that does not involve a theoretical possibility of harm, the precautionary principle cannot ensure safety without hindering innovation and development.101 A compromise between the ‘inhumane’ ‘proactionary principle’ of the transhumanists and the technology-freezing precautionary principle would have to be sought for the flourishing of beneficial nanotechnology. This is the trend set in the Universal Declaration on Bioethics and Human Rights, which declines to sanction the sacrificing of individuals for society’s ends without at the same time embracing the precautionary principle.102 The UN

98 Dupuy and Grinbaum, above n 88 at 11. 99 Ibid 21. 100 Schummer, above n 37 at 71.
101 Søren Holm and John Harris vehemently contested its utility as a policy choice, stressing its stifling effect; quoted in Gary E Marchant and Douglas J Sylvester (2006), ‘Transnational Models for Regulation of Nanotechnology’ 34 Journal of Law, Medicine & Ethics 714 at 722. 102 Universal Declaration on Bioethics and Human Rights, adopted by acclamation on 19 October 2005 by the General Conference of UNESCO, Art 3. The precautionary principle, which was retained as late as the third elaboration of the draft Declaration, did not make it into the final text. UNESCO IBC (2004), ‘Elaboration of the Declaration on Universal Norms on Bioethics: Third Outline of a Text’ 15 (2–3) International Journal of Bioethics.

Environmental Program even failed to mention this venerated canon of international environmental policy in its report on nanotechnology, in which it urged governments to take ‘swift action’ to regulate nanotechnology.103

In Lieu of the Precautionary Principle

Having dismissed the precautionary principle as incapable of ‘dealing with the kind of uncertainty that the new technological wave generates’, Dupuy and Grinbaum propose a methodology which they named ‘the methodology of ongoing normative assessment’.104 Their methodology introduces an approach different from the precautionary principle in that it is not a principle in the first place but ‘a practice’:

It is a matter of obtaining through research, public deliberation, and all other means, an image of the future sufficiently optimistic to be desirable and sufficiently credible to trigger the actions that will bring about its own realisation.105

Continuous and incremental evaluation of the facts and the norms depending on them is the hallmark of their proposition, which they share with other authors.106 Using a different nomenclature, Guston and Sarewitz earlier elaborated on the same issue of ongoing assessment, which they christened ‘Real-Time Technology Assessment’.
Attacking the tendency to adopt a position which cherishes inaction, they suggest theirs as a method particularly fitting the needs of nanoregulation:

society’s capacity to plan despite an uncertain future shows that the alternative to prediction is not inaction or reaction, but incremental action based on synchronous reflection and adjustment. What is necessary … is to build into the R&D enterprise itself a reflexive capacity that … allows modulation of innovation paths and outcomes in response to ongoing analysis and discourse.107

The non-technical notion of ‘vigilance’ captures the general sense of the methodology that the authors mentioned above presented in their respective phraseologies. Vigilance is the watchword commonly heard in the wake of a terrorist attack, a disease outbreak or a financial market crisis. In bankers’ speak, ‘vigilance’ has almost attained a technical usage, signifying the inevitability of a rise in interest rates and an appeal for the market to adjust.108 In the context of terrorist attacks,

103 UNEP, above n 70. 104 Dupuy and Grinbaum, above n 88 at 21. 105 Ibid. 106 Inter alia Douglas and Wildavsky, whom Black evokes; Black, above n 97 at 547–8. 107 David Guston and Daniel Sarewitz (2002), ‘Real-time Technology Assessment’ 24 Technology in Society 93 at 100. 108 Joe Downes, ‘ECB Chief Keeps All of His Options Open’ The Daily Mail (London; 28 August 2007). The European Central Bank’s reaction to the mortgage crisis in the US was captioned by the term ‘strong vigilance’—a term consistently used by the bank and indicating a certain course of action. See also Rachel Williams, ‘Vigilance Call as Foot and Mouth Curbs Eased’ The Guardian (London; 17 September 2007).

it is an appeal to maintain a high state of alert.109 The discourse on nanoregulation draws on this context.
Dupuy and Grinbaum quote the Metropolitan Police Commissioner’s statement after the 7/7 terrorist attacks in London in which he said that ‘there is an inevitability that some sort of attack will get through but my job is to make sure that does not happen’.110 And this is what the methodology suggested in lieu of the precautionary principle does. The importance of living with the uncertain future is emphasised. We are required to learn ‘to be comfortable when we do not have definite boundaries, when we do not have our feet on the ground’.111 Unlike the precautionary principle, whose central tenet, its ‘spirit’,112 is restraint from proceeding further in the research, development and application of new technologies, ‘vigilance’ has a forward thrust—going ahead but remaining alert. Past experience has shown that risk can materialise even with robust regulatory safeguards if vigilance is lacking. It has also shown that with greater vigilance risk could have been averted. The failures by Merck (in the case of Vioxx) and Monsanto (in the case of the genetically engineered Mon 863 corn variety) to adhere to their own safety test procedures are stark reminders of the need for emphasis on vigilance.113 There cannot be one particular recipe for risk assessment and management. Whatever the mechanism we deploy for this purpose, vigilance is of the essence and will pay off as it has paid off in some instances of terrorist threats. 109 Francis Elliot, ‘Baptism of Fire for New Home Secretary Facing Sustained Terrorism Threat’ The Times (London; 30 June 2007). 110 Dupuy and Grinbaum, above n 88 at 22. The statement was reiterated in a recent government statement: ‘This latest incident [of attempted bombings in Glasgow and Heathrow of 2007] reinforces the need for the public to remain vigilant and alert to the threat we face at all times’ in Elliot, above n 109.
111 Kristy Mills (2006), ‘Nanotechnologies and Society in the USA’ in Hunt and Mehta, above n 27 at 89. 112 Michael Mehta (2002), ‘Regulating Biotechnology and Nanotechnology in Canada: A Post-Normal Science Approach for Inclusion of the Fourth Helix’. Paper presented to the International Workshop on Science, Technology and Society: Lessons and Challenges, National University of Singapore, 19 April 2002 available at (accessed 13 April 2007) 22. 113 On Merck see Tim Little et al (2007), Beneath the Skin: Hidden Liabilities, Market Risk and Drivers of Change in the Cosmetics and Personal Care Products Industry (Investor Environmental Health Network (IEHN)), available at (accessed 13 April 2007) at 4. On Monsanto see Jeffrey Smith (2005), ‘Genetically Modified Corn Study Reveals Health Damage and Cover-up’ Spilling the Beans (11 June) available at (accessed 3 September 2007). The cause of the recent outbreak of foot-and-mouth in the UK was traced to a leaking pipe that was left unattended because of a squabble as to who had to pay for maintaining the pipe. ‘Pirbright: Labs at the Centre of Outbreak’ BBC (14 September 2007) at (accessed 26 November 2007). Also note yet another leak from one of the labs involved in the squabble, Merial, ‘“Probable” New Farm Disease Leak’ (22 November 2007) at (accessed 26 November 2007). Such ‘sloppy practice’ is in no way unique to the UK. The lack of vigilance was a matter before the US Congress probing the lack of oversight of the proliferating labs handling deadly substances. See the report by Larry Margasak, ‘U.S. Labs Mishandling Deadly Germs’ (2 October 2007) at (accessed October 2007). Vigilance is commonly thought of as a kind of prudence on the part of the public.
Current appeals for vigilance shift the focus away from the state and towards the public; vigilance is seen as a strategy for winning over public trust by enlisting the public in the regulatory regime.114 Earlier, Drexler et al called for vigilance in a similar fashion: Modern manufacturing and its products should continue this trend [of prudent people choosing technologies with the mix of risks and benefits], not as an automatic consequence, but as a result of continued vigilance of people exercising care in picking and choosing the technologies they allow into their lives.115 The focus on the role of the public is justifiable; yet, the roles of all other stakeholders, not least governments, require unreserved attention. There is a significant deficit in this respect and, in particular, government responsibility in maintaining and nurturing vigilance needs a major re-examination. ‘Render Unto Caesar What is Caesar’s’: Private versus Public Regulation Among the excuses for deferring the regulation of nanotechnology is the comfort governments find in industry self-regulation filling in the gaps in government capacity to regulate new technology. Self-regulation has become a feature of the ‘new regulatory state’ and the apparent tension between self-regulation and governmental regulation is explained away by the notion of ‘regulatory regime’—‘a set of interrelated units which are engaged in joint problem solving to address a particular goal’.116 Nevertheless, even when conducted under the mantra of ‘self-regulation’, regulation remains a public function and the final say rests with the state.117 Furthermore, the public trust in governmental regulation necessitates the widening and not the diminishing of its role. Governmental regulation makes technological development far more transparent as it brings it into the public arena, out of the secretive world of academic and company R&D quarters.
Governmental abdication of its regulatory powers in favour of voluntary self-regulation is predicated on the assumption that industry will prioritise the protection of the public interest in terms of safety and risk. The conflict of interest captured by the satirical adage ‘the fox guarding the hen house’ explains why such scenarios persist despite ‘the crisis of confidence in both the practitioners and custodians of new technology’.118 114 Edna Einsiedel and Linda Goldenberg (2006), ‘Dwarfing the Social? Nanotechnology Lessons from the Biotechnology Front’ in Hunt and Mehta, above n 27, at 214. 115 Drexler et al, above n 36. 116 Julia Black (2007), ‘Tensions in the Regulatory State’ Public Law 58 at 62; Black, above n 97 at 544. 117 Peter Cane (2002), ‘Tort Law as Regulation’ 31 Common Law World Review 305 at 309. 118 Roger Brownsword (2004), ‘What the World Needs Now: Techno-Regulation, Humanity, Human Rights and Human Dignity’ in Brownsword (ed), Global Governance and the Quest for Justice, vol 4 (Oxford, Hart Publishing) at 225. There are scores of episodes showing industry, in its role as agent of private regulation, placed on a mission not compatible with its disposition. Even with the rigour of ‘enforced self-regulation’, contrasted with the ‘voluntary self-regulation’ suggested for nanoregulation, self-regulation itself is said to be ‘risk-taking’.119 Corporate interest was, and is, prioritised over the lives of the millions of AIDS victims to whom the pharmaceutical industry denied access to life-saving drugs. The oil industry’s attempt to suppress the truth about global warming gives a clear picture of what may happen if industry is left to itself. Publics across the globe have every reason to be cynical about entrusting the regulation of powerful technologies to private bodies. The regulation in ‘self-regulation’ essentially refers to the narrow understanding of regulation as the prevention of risk.
Regulation in its wider sense of channelling the development of the technology and including the attendant social and ethical issues certainly requires more than self-regulation.120 Business self-regulation would not be adequate if the advancement of beneficial technology with no quick pecuniary returns is to be the object of the regulation. In particular, the channelling of research resources and the setting of priorities for wider problems like sustainable development cannot be done without governmental regulation.121 In other fields, self-regulation is sometimes statutory, with the expectations of the government clearly spelled out.122 Self-regulation of nanotechnology is being touted in the absence of the general guidelines provided by law that self-regulation should embody. In the case of nanotechnology, industry is given free rein to make the law and administer it. At the moment, industry wields so much power in controlling the technology that it will significantly shape the rules that will govern nanotechnology for years to come.123 The need to mitigate this state of affairs requires a greater governmental role in nanoregulation, necessitating a re-examination of self-regulation. Society has long been reminded that ‘serious discussion of self-regulation is probably due’—a call echoed by UNESCO years later.124 A vehement opposition to initiatives for self-regulation of nanotechnology was recently voiced by 21 NGOs of international stature in a joint response to a framework proposed by DuPont 119 Bridget Hutter (2001), ‘Is Enforced Self-regulation a Form of Risk Taking?: The Case of Railway Health and Safety’ 29 International Journal of the Sociology of Law 379 at 398. 120 Mehta and Hunt emphatically conclude that ‘the social and ethical issues surrounding nanotechnology are important regulatory issues too’. Michael Mehta and Geoffrey Hunt (2007), ‘What Makes Nanotechnologies Special?’ in Hunt and Mehta, above n 27, at 280. 121 Wood et al, above n 13 at 4.
122 Surveillance Studies Network (2006), A Report on the Surveillance Society: A Report for the Information Commissioner (UK) at 83. 123 Michael Mehta (2006), ‘Nanotechnology’ Encyclopaedia of Globalisation, Cosmo Publications, also available at (accessed 12 April 2007) at 848. 124 Mihail Roco and William S Bainbridge (eds) (2001), Societal Implications of Nanoscience and Nanotechnology: NSET Workshop Report (Arlington, National Science Foundation) 180; UNESCO, above n 15 at 12. and Environmental Defence.125 The weakening of the state’s regulatory role, either by the persistence of self-regulation or through the undue influence of industry in shaping upcoming nano-regulation, will raise serious issues of legitimacy, accountability, and transparency, aggravating the apparent democratic deficit. Such calls for a wider proactive government role in the regulation of nanotechnology may sound like sly versions of the ‘corporate bashing’ typical of the literature on the regulation of biotechnology. However, these are calls to restore the rightful power of the state: ‘rendering unto Caesar what is Caesar’s’. Regulation is a public function and should reside in the public arena. Industry self-regulation is an anomaly in so far as industry’s main objective is the pursuit of private interests or profits. Yet, it is unwise to discount the changes that industry has undergone, especially the fact that it has become more responsive to the concerns of the public. This has not come as a benevolent gesture from industry towards the public.
The demands of regaining and retaining the loyalty of the environmentally savvy consumer, of containing the effects of biting public accusations by NGOs about sweatshops and unfair trade practices, and of the many other exigencies, including tort litigation, that the corporate world has to grapple with have brought about a change in corporate behaviour—a change that is now institutionalised as ‘corporate social responsibility’.126 This change gives a glimpse of what can be achieved by embracing industry in the governance of nanotechnology. Besides, the regulation of nanotechnology is not a responsibility that the state alone can carry out. The governance structure needs to be as inclusive as possible to avoid the overbearing role of powerful private actors in the field. Only the state can wield power matching that of corporate behemoths. Not all governments can wield such power, though. Some may fall prey to the pressure and arm-twisting exerted by multinationals and their cohorts. For this and other reasons, the efficacious regulation of nanotechnology will only be possible if it is governmental as well as international, and also if it is not constrained by state sovereignty as is the case in the ‘common heritage’ regime discussed below. IV Benefit-sharing and the Impending Nanodivide Background Issues Nanotechnology promises to bring an end to ‘the dictatorship of scarcity’,127 the demand–supply asymmetry at the heart of market systems. The withholding of 125 ‘Civil Society-Labor Coalition Rejects Fundamentally Flawed DuPont-ED Proposed Framework’: An Open Letter to the International Nanotechnology Community At Large, 12 April 2007 at (accessed 5 April 2007). 126 UNESCO, above n 15 at 12. 127 Beck’s phrase referring to the prevailing material need especially in the Third World; Ulrich Beck (1992), Risk Society (London, Sage) 20.
such technology with the aim of creating an artificial black hole on the supply side—a quasi-Luddite exercise—would be the only viable means of running markets as we know them today. Those nations in pursuit and/or possession of nanotechnology have already succumbed to this temptation. The professed objective of their R&D efforts is none other than the reinvigoration of national competitiveness—a morbidly parochial objective in view of the enormity of the potential of nanotechnology. The development of the technology is still driven by the business-as-usual scramble for markets and profits despite the promise of abundance that would do away with markets altogether. It is this predominant predilection to preserve extant markets that makes commentators hold on to a pessimism fostered by experiences with earlier technologies, as manifested in the digital and genetic divides.128 As things stand now, the view that the digital and genetic divides will be enlarged and consolidated into a ‘nanodivide’ is more than plausible. The question is then whether the leading nations in the technology are willing to sign up to a global regulatory regime that would aim at avoiding a nanodivide by including the ethical issues of equity and benefit-sharing in the set of global priorities. These issues have been gaining momentum in the discourse on global technoscience governance in recent times. Particularly the last two decades have witnessed a remarkable surge of interest in the concept of benefit-sharing.129 The term benefit-sharing has been employed with various meanings oscillating along the continuum spanning from the common heritage ethos to a property-based profit- and royalty-sharing arrangement. The UDHR enunciates that ‘everyone has the right freely ...
to share in scientific advancement and its benefits’.130 It could be said that the CBD was a pragmatic retreat from the ethos of the UDHR, as benefit-sharing in the CBD was employed to mitigate the gross inequity whereby a source community was left out of the equation while a pharmaceutical company reaped the profits on the drugs developed using the biological resources provided by that community, often without its consent. As a means of combating biopiracy without unduly restricting access to biological resources, an arrangement by which access is given in return for the sharing of benefits has been established. Reciprocity is at the centre of the CBD version of benefit-sharing. 128 Noela Invernizzi and Guillermo Foladori (2005), ‘Nanotechnology and the Developing World: Will Nanotechnology Overcome Poverty or Widen Disparities?’ 2 (3) Nanotechnology Law & Business 294 at 298. 129 The concept of benefit-sharing has been in use even before its well-known institutionalisation in the regulation of biotechnology in the 1992 Convention on Biological Diversity (CBD). Costa Rica’s National Institute of Biodiversity (INBio) was a leader in negotiating benefit-sharing arrangements and its practice predates the CBD; Government of Costa Rica (2000), Benefit Sharing: Experience of Costa Rica, Paper prepared for the Second Regional Workshop of the UNCTAD ‘Project on Strengthening Research and Policy Making Capacity on Trade and Development in the Developing Countries’ La Habana, Cuba, 31 May–3 June 2000 available at (accessed 5 August 2007). 130 Universal Declaration of Human Rights, adopted 10 December 1948, UN General Assembly, Art 27(1); International Covenant on Economic, Social and Cultural Rights, Art 15(1)(b). The CBD reinstates national sovereignty over resources that were otherwise considered global commons.
Under the CBD regime, propertisation qua nationalisation of biological resources that should otherwise be catalogued as global commons was instituted; so was propertisation of scientific knowledge, contrary to the UDHR enunciation. The IT revolution, and all the biotechnology that falls outside the reciprocity scenario, are not covered by the CBD and thus call for a benefit-sharing regime of their own, which is being vigorously debated in the respective fields. The issue is being further enlivened by the rise of the biobank industry and nanobiotechnology and, in different but related contexts, by the threat of global epidemics and the burgeoning interest of nations in the Arctic Ocean.131 International instruments on bioethics and the human genome contain reaffirmations of benefit-sharing based on the common heritage concept as enunciated in the UDHR, denouncing financial gain from scientific knowledge.132 A 2007 WHO report stressed the need for an open sharing of samples, information and technology in the fight against recalcitrant epidemics.133 The concept is not a fringe issue in the digital world. Sharing is a well-worn theme that goes to the heart of the digital economy, as evidenced by the robust open-source movement. The analogy of benefit-sharing in IT is a useful one because the nano economy will be like a software economy, as everything is predicted to be reduced to information in the form of bits and bytes with the advance of nanotechnology.134 The open-source software movement is particularly invoked as a template for nanotechnology benefit-sharing.135 The answer to the question of who benefits from the technology is critical in deciding the course of nanotechnology. The biotechnology syndrome that tends to 131 It looks as though the US Senate is about to embrace the common heritage principle that it has so far resisted by failing to ratify it.
As a reaction to the recent Russian move claiming swathes of the Arctic Sea, the US will be compelled to further soften its aversion to the Law of the Sea Convention—the epitome of the common heritage concept so far. ‘Russia Ahead in Arctic “Gold Rush”’ BBC (1 August 2007) available at (accessed 26 October 2007); Santos, above n 33 at 304. 132 The instruments include UNESCO (2005), Universal Declaration on Bioethics and Human Rights, adopted by acclamation on 19 October 2005, General Conference of UNESCO; HUGO (Human Genome Organisation) Ethics Committee (2000), Statement on Benefit Sharing available at (accessed 5 September 2007); Nuffield Council on Bioethics (2002), The Ethics of Research Related to Health Care in Developing Countries, at (accessed 8 August 2007); WHO (2002), Genomics and World Health: Report of the Advisory Committee on Health Research at (accessed 6 July 2007). 133 WHO (2007), A Safer Future: Global Public Health Security in the 21st Century (Geneva, World Health Organisation). 134 Ray Kurzweil says ‘everything is ultimately becoming information technology’. Brian O’Keefe (2007), ‘The Smartest, The Nuttiest Futurist on Earth’ (14 May) Fortune. 135 Bryan Bruns (2004), Applying Nanotechnology to the Challenges of Global Poverty: Strategies for Accessible Abundance; 1st Conference on Advanced Nanotechnology: Research, Applications and Policy, 21–24 October, Washington DC at (accessed 7 August 2007). blight nanotechnology very much dwells on the benefits issue as it does on the risk issue.136 The bad publicity that biotechnology has received is not so much about its being risk-laden. The scoreboard for biotechnology in this respect is perennially tentative as regards the health and environmental risk issues, but rather more certain on other aspects.
As Professor Brownsword explains, ‘for those who oppose the technology on moral or religious grounds, the risks are already perfectly clear’.137 The fact that biotechnology was unavailable to those who needed it most, and the fact that the benefits of the technology accrued not to the end user but to the producer in the form of higher yields or pest-resistant crops, were among the major grudges against biotechnology.138 As evidenced by the experience with mobile phones, such failings are unlikely with nanotechnology because what nanotechnology promises are products whose benefits accrue particularly to the end user: stronger, more durable and less expensive materials, and things fine-tuned to the particular needs of the consumer. However, nanotechnology may face similar setbacks if sufficient attention is not paid to the benefit-sharing issue, taking into account both the solvent consumer and the destitute bystander. In this respect, it is legitimate to enquire whether nanotechnology raises issues with regard to benefit-sharing not raised by earlier technologies. It is self-evident that the advent of nanotechnology creates the opportunity to address the issue in a new light, with more vigour and rigour. The perspectives, the degree of emphasis and the reinvigorated discursive practices that nanotechnology engenders are worthy of the attention that any new issue may command.
We are advised to be methodical by setting aside ethical issues that are not unique to the new technology or are not of major significance in respect of the technology.139 On the other hand, there is the observation that nanoethics is the ‘ethics of the largest’ and that, even when not raising any new issues of its own, it is a unique totality of previous issues—‘a whole greater than the sum of its parts’.140 The ethics of such magnitude calls for a mobilisation of all epistemological resources and not the economising of same.141 All established thoughts, hypotheses and questions will have to be re-examined whether or not they have been dealt with earlier in respect of previous technologies or other resources. One such concept 136 Opposition to nanotechnology is often the extension of biotech bashing. See, eg, Karin Gavelin et al (2007), Democratic Technologies? The Final Report of the Nanotechnology Engagement Group (NEG) (London, Involve) 5. 137 Roger Brownsword (2008), Rights, Regulation and the Technological Revolution (Oxford, Oxford University Press) 119. 138 Dennis Kelso (2003), ‘Recreating Democracy’ in Rachel Schurman and Dennis DT Kelso (eds), Engineering Trouble: Biotechnology and its Discontents (Berkeley, CA, University of California Press) 246. 139 Hunt, above n 78 at 184, restricts his discussion to four issues, while Lewenstein, above n 14, does the same, believing his list of issues to be general enough to cover the main identified issues. 140 Hunt, above n 78 at 183–4; Lewenstein, above n 14; Allhoff and Lin, above n 4. 141 Such mobilisation befits the debate on nano, which is characterised as ‘the greatest of all public debates’ deserving ‘the full discourse’. Nigel Cameron and Ellen Mitchell, ‘Preface’ in Cameron and Mitchell, above n 61, p xix. that seems to have been neglected in the practice of the economising of ethical issues is the concept of the common heritage of humankind. Nanotechnology: A Common Heritage?
Though not its quintessential representation, the concept of science as a common heritage of humankind is found in the abovementioned UDHR enunciation. The concept was, however, fully developed in relation to the international regulation of the oceans, the seabed and outer space. Benefit-sharing under the common heritage doctrine emanates from common ownership, and no reciprocity or contingency of any kind applies. Eclipsed by the triumph of markets and by the operation of state sovereignty subverting its application, this doctrine has not been in power and its record is utterly disappointing. The chances of its renaissance with the advent of nanotechnology may seem meagre at first glance. However, considering the increasing number of spaces and resources coming under the common heritage regime, ranging from the res nullius like the oceans and the moon to privately owned cultural objects, there is a viable case for the inclusion of nanotechnology in the regime.142 Moreover, the idea of nanotechnology as a common heritage of humankind has a strong historical resonance. The common heritage concept was first conceived not in relation to the sea, the moon or outer space but in relation to the atom—the very object of nanotechnology. The bleak beginnings of the ‘atomic age’ associated with nuclear weapons might find a bright and promising future with nanotechnology ushering in a new version of the ‘atomic age’—the ‘nano age’. Nuclear technology was declared the common heritage of humankind, to be developed and managed by humanity as a whole and for the benefit of all.143 That was the basis upon which the IAEA was founded; and today there is a call for the management of nanotechnology to be entrusted to a similar body—the ‘International Nanotechnology Agency’, albeit without mention of the common heritage principle.144 The hugely disappointing implementation of the doctrine so far seems to have made it less of a favourite in the eclecticism of nanoethics.
This, however, discounts the great achievements in the struggle for the common heritage regime against the bulwarks of triumphant capitalism. The success in having the oceans, the moon and outer space declared the common heritage of humankind is worth celebrating even though not much has come out of it for the globe’s 142 James Martin sees the enclosure into global common goods of things that have not existed before. He highlights the creation of ‘cathedrals’ of global common goods as a massive 21st-century opportunity. James Martin (2006), The Meaning of the 21st Century: A Vital Blueprint for Ensuring our Future (London, Eden Project Books) 339. 143 President Eisenhower’s famous address ‘Atoms for Peace’ before the UN General Assembly in 1953, which was largely retained in the Statute of the IAEA, proposed a regime ‘whereby nuclear materials could be allocated to serve the peaceful purposes of mankind’. David Fischer (1997), History of the International Atomic Energy Agency: The First Forty Years (Vienna, IAEA) 9. 144 Hunt and Mehta, above n 120 at 280. needy.145 If such a declaration can be secured for nanotechnology, it will be so significant that it could be an end in itself. To elaborate on this, we need to compare the capabilities of developing countries when seabed mining was being negotiated with their capabilities now, in the age of nanotechnology. Today a number of developing countries have developed research and technological capabilities to help them hop onto the nanotech train. The chances of their being inhibited by lack of capacity, as was the case in seabed mining, are far more limited. The declaration of nanotechnology as a common heritage would remove the choking build-up of ‘patent thickets’.
Their scientists and researchers, some having phenomenal expertise at imitation and adaptation, would be able to go about their business freely without the reproach of plagiarism and of infringing someone’s intellectual property. The unique nature of nanotechnology as science and as an enabling technology makes the return to the common heritage concept all the more important and justified. The linear science-to-technology trajectory has long ceased to be the rule. Particularly with nanotechnology, the trajectory runs in both directions, neither having primacy over the other.146 Yet, the inauguration of nanotechnology as ‘technology’ despite its more pronounced ‘science’ aspect was a reason for its early enclosure out of the public domain.147 As a socially produced phenomenon with substantial public resources of various kinds going into its production, scientific knowledge has its rightful abode in the public domain and the global commons, as per the formulation of the UDHR and other international instruments.148 It is argued here that nanotechnology is a capability, a resource and an opportunity so important to humanity that its regulation should come under the common heritage doctrine. It is a doctrine on which ‘the new sense of human responsibility’ that ethicists deem imperative for our time can be grounded.149 What made biotechnology one of the most contested contemporary technologies was the fact that the power to decide on its development was appropriated by certain segments of society.150 Questions as to whether the power to decide on a technology as significant as biotechnology should be in the hands of any one segment of society were raised, and continue to be raised in respect of nanotechnology too.151 The common heritage doctrine places the power to decide on technological resources not in certain segments of society but in humanity as a whole.
It enables 145 Elisabeth Borgese and Caroline Vanderbilt (not dated), The IOI Story, International Ocean Institute, available at (accessed 8 October 2007). 146 Dana Nicolau (2004), ‘Challenges and Opportunities for Nanotechnology Policies: An Australian Perspective’ 1 (4) Nanotechnology Law & Business 446 at 451. 147 Einsiedel and Goldenberg, above n 114 at 216. 148 See the discussion in Peter Lee (2004), ‘Patents, Paradigm Shifts and Progress in Biomedical Science’ 114 The Yale Law Journal 659 at 671. 149 Hunt, above n 78 at 183. 150 Rachel Schurman (2003), ‘Biotechnology in the New Millennium: Technological Change, Institutional Change and Political Struggle’ in Schurman and Kelso, above n 138 at 3. 151 UNESCO, above n 15 at 7, calls for the recognition of the right of citizens of all nations to have a say on the course nanotechnology takes. the less powerful (and even the not-yet-existing future) segments of society to have their say on the management of the resources. The essence of the doctrine is that resources, given their extreme importance for the sustainability and quality of life on earth, must be considered as globally owned and managed in the interest of humankind as a whole, both present and future.152 Given the potential of nanotechnology to bring about material ubiquity and the associated social and environmental promises, nanotechnology’s place is nowhere but in the global commons, regulated by the common heritage doctrine. True enough, the common heritage doctrine is not in power today, but it enjoys far more currency than it did 20 or 30 years ago. Yet, it remains an unapologetically utopian concept, and more so in respect of nanotechnology.153 But so is the concept of sustainable development, which is pregnant with irreconcilable ideals. The sustainability movement has propelled the common heritage doctrine into the foreground.
It is the convergence of the promises of nanotechnology and the ideals of the sustainability movement that calls for a serious consideration of the application of the common heritage doctrine to nanotechnology. Ever since its epic entry into global discourse via the Brundtland report in 1987, sustainability has remained only ‘a pious hope’.154 Economic growth at the expense of the environment and to the detriment of the social component of sustainable development was vigorously promoted and openly sponsored by both developed and developing countries. With nanotechnology, it may be possible for the first time to contemporaneously pursue the sustainability trio, viz economic prosperity, environmental quality and social equity. Treating nanotechnology as anything other than a global common is antithetical to the sustainability concept, dampening its hope. Benefit-sharing arrangements outside the common heritage regime are potentially complex. The motives for benefit-sharing arrangements were in many cases self-serving for the benefactor. Now, with the advent of nanotechnology, it is realistic to speak of benefit-sharing based not on piecemeal self-serving approaches requiring reciprocity (CBD), nor as a means of inducement to make nations desist from certain activities (NPT, BWC), nor as a means of achieving compensatory or ‘selfish’155 justice on account of past and present 152 Santos, above n 33 at 302. 153 Ibid 310. 154 Robert Dunkley (2004), ‘Nanotechnology: Social Consequences and Future Implications’ 36 Futures 1129 at 1131. 155 A discussion of justice issues in the context of international environmental governance is provided by Drumbl, who emphasises the role of self-interest in considerations of justice.
He articulates his thesis of the ‘selfish justice rationale’ that may lead to an entirely different result from the one that may be attained by relying on the common heritage principle: the more immediate, specific, and direct the environmental harm to the developed world, the more the developed world is willing to share technology, redistribute wealth and demonstrate receptiveness to claims for justice by the developing world and to exhortations of cooperation and solidarity. Taming Matter for the Welfare of Humanity 355 misdeeds (sustainable development). Material ubiquity would make such consid- erations redundant paving the way for genuine benefit sharing as envisaged by the common heritage concept. VI. Conclusion: Beware of the Impending ‘Nanodivide’ The resort to the original tenet of the UDHR and its subsequent elaborations in the common heritage doctrine would give a solid foundation for benefit-sharing. Material ubiquity furnishes the factual basis for benefit-sharing dispensing with the excuse of inadequacy of resources or the imposing of conditions that will be made redundant by the fact of ubiquity. However, if the ethical discourse on benefit-sharing is not impacting the development of nanotechnology, it is set to entrench the privileged communities in their positions hurling the rest into the abyss of degradation and suffering. The proof of the success of the technology will be measured by the magnitude of its reach as emphasised by the refrain that ‘[u]nless converging technologies benefit the whole planet, and not just an elite, we have failed to make real progress’.156 If nanotechnology keeps on driving the wedges further down between the elite and the rest of the world, its effect will eclipse the combined effect of the digital and genetic divides. 
Baroness Greenfield foresees a sombre scenario worse than anything humanity has ever seen: [The Vast Majority] are in danger not only of being disenfranchised from a vastly more comfortable way of life but also of being exploited and abused in ways more sinister, pervasive and cruel than even witnessed by the worst excesses of the colonialist past. 157 The comforting proviso is that she has not dismissed the possibility of an alter- native scenario whereby the capabilities developed by the new technologies can be deployed to bring an end to the binary world of the haves and have-nots.158 The nanodivide may even be more fundamental than may have ever been thought. With radical human enhancement on the horizon, it may be accelerating the speciation within the human race that is expected to culminate in the split of the species into ‘the tall, slim, healthy, attractive, intelligent and creative’ genetic Mark Drumbl (2002), ‘Poverty, Wealth and Obligation in International Environmental Law’ 76 (4) Tulane Law Review 843 at 931. Taking the law of the sea as illustration, he highlights the selfishness involved in sharing by posing an incisive question (p 934): Is it not somewhat selfish to distribute resources when common concerns of humanity-in the case of UNCLOS, the high seas, which constitute a common heritage of humanity—are at stake but to withhold them when all that is at stake is financial empowerment for developing nations? 156 Michael Gorman (2004), ‘Collaborating on Convergent Technologies: Education and Practice’ in Mihail Roco and Carlo D Montemagno (eds), above n 1 at 30. 157 Greenfield, above n 27 at 268. 158 Ibid. 356 Hailemichael T Demissie upperclass and the ‘dim-witted, ugly, squat goblin-like’ underclass.159 It would not be difficult to see how benefit-sharing could help thwart this prognostica- tion. 
James Watson's vow to 'make all girls pretty'160 is not to be dismissed as a geneticist's hubris or paternalistic eugenics, considering what the future holds for the vast majority if technology continues to be deployed as it has been heretofore, ie as 'the rich man's toy'.161 With benefit-sharing of the kind espoused by Watson,162 humanity can make the 'nanodivide' a real oxymoron: a negligible, nano-sized divide necessary only to maintain the congenial diversity among genetically embellished girls, and not the kind of divide that produces a David-and-Goliath disparity.
159 The prediction is made by Dr Curry of the London School of Economics. 'Human Species May Split in Two' (accessed 12 October 2007); also Greenfield, above n 27 at 268. 160 Sharon Begley, 'Reading the Book of Jim' Newsweek (4 June 2007). 161 Brian Heap (2002), 'Can We End Hunger?' in Harriet Swain (ed), Big Questions in Science (London, Jonathan Cape) 180 at 182. 162 Watson has expressed his willingness to make his genome public, to be accessed by everyone and put to whatever use can be made of it. See Begley, above n 160.
16 Regulating Renewable Energy Technologies: The Chinese Experience
DENG HAIFENG
The year 2006 was memorable for the Chinese energy resource industry, and especially for the renewable energy resource industry. On 1 January, the Law on Renewable Energy Resources of China, the first law in China on renewable energy resource development and utilisation, was put into effect. It was drafted on the basis of Chinese economic, social, energy resource and environmental conditions, and of relevant foreign experience.
Some academic and industry insiders believe that the release of this law has shed light on the development of the Chinese renewable energy resource industry and its technology, because the law clearly states at the outset that the State has put scientific research and industrial development of renewable energy resources on the prioritized position in the high-tech industry development. … Technology of renewable energy resource development and utilization is to be promoted.1
However, once we have analysed the status quo of the Chinese renewable energy resource industry and its technology, we should not expect too much of the Law on Renewable Energy Resources. The current legislation focuses only on system design for the fund cumulating period, while ignoring guiding regulation for the homogeneous competition period; this is inconsistent with the developmental cycle of the renewable energy resource industry and its technology.2 To elaborate the author's standpoint, it is necessary to review the status quo of the Chinese renewable energy resource industry and its technology.
1 Art 12 of the Law on Renewable Energy Resources of China. 2 The fund cumulating period refers to the stage in the evolution of the renewable energy resource industry in which fund cumulation is needed to realise reproduction; the homogeneous competition period is the stage after the fund cumulating period in which reproduction is realised through cost advantages arising from technological competition.
I. Current Problems of the Industry and its Technology
As defined in the Law on Renewable Energy Resources, renewable energy resources in China refer to non-fossil energy resources such as wind energy, solar energy, water energy, bio-energy, geothermal energy and marine energy. Owing to the limited technological approaches available, such energy resources are mostly utilised by converting them into electric power or heat.
Therefore, generally speaking, the so-called renewable energy resource industry consists of power generation and heating. Wind power generation is taken here as an example to describe the status quo of the Chinese wind power industry and the evolution of wind power technology.
Though the Chinese wind power industry started no later than those of other countries, there has been no breakthrough, and the gap with the world level is widening. The whole industry is faced with small scale, low industrialisation, high generation costs, few professionals, a weak R&D force, underdeveloped core technology and an immature market. As early as 1995, the former Ministry of Electric Power set a target of 1MKW (one million KW) of wind-driven generating capacity by the year 2000, but the actual capacity at the end of 2004 was only 764,000KW, accounting for 0.14 per cent of the country's total power generation capacity and lagging far behind the world level. Despite the 48 wind power stations in over 20 provinces, municipalities and autonomous regions, the average capacity per station, at less than 15,000KW, is far from achieving economies of scale.3 India, which started wind power generation later than China, is far ahead of China in terms of both capacity and equipment manufacturing. In addition to its current capacity of 2.11MKW, its home-made high-power generators serve not only domestic use but also export markets.4
Next we consider the technological level of wind power generation in China. From the maturity point of view, renewable energy resource technologies are categorised into (a) the economically feasible, (b) the government-motivated and industrialised, (c) those still at the R&D stage and (d) the future technologies.5 As the core technique of wind power generation is the design and manufacture of the generator units, the maturity of generator techniques can be said to represent the overall strength of a country's wind power generation technology.
3 By the end of 2004, China had installed 1,292 wind-power generator units with a total capacity of 764,000KW in 48 plants spread across more than 20 northeast, north, northwest and southeast provinces and municipalities. The capacity of in-process generators in 2004 was 1.5MKW, of which 420,000KW was under construction, 680,000KW was awaiting approval and 450,000KW was proposed, including five 100,000KW special projects. Refer to Zhou Heliang, 'Prospect and Strategy of Chinese Wind Power Generation' (2006) 6 Electric Technology 93–6. 4 Zhao Hongjie et al, 'Situation and Trend of Wind Power Generation' (2006) 9 Water Conservancy Technology and Economy 112–17. 5 Wang Qingyi, 'Status Quo, Bottleneck and Strategy of Chinese Renewable Energy Resource Industry' China Energy 42.
According to the technological standards of generator units, the world mainstream techniques are ranked in ascending order as follows: (1) the conventional gearbox asynchronous AC generator, (2) the doubly fed wound asynchronous generator and (3) the direct-driven variable-speed constant-frequency (VSCF) wind-power generator.6 World wind-power generator units are categorised by single-unit capacity as follows: (1) small units (below 100KW), (2) KW units (100KW–1000KW), (3) MW units (1MW–2MW) and (4) units above 2MW. Chinese wind-power generator manufacture is currently changing from gearbox asynchronous AC generators to doubly fed wound asynchronous generators. In terms of capacity, China is capable of making small units, and pre-research on KW and MW units is in progress. Generally, Chinese wind-power design and manufacture still lags behind. With a maximum single-unit capacity of only 750KW, China has to import large-capacity generators or cooperate with foreign manufacturers to obtain them. Thus it can be seen that China is lagging behind in terms of both the wind power generation industry as a whole and wind generator manufacture.
The only realistic way out of this pattern is to put more money into the industry. On the one hand, the proportion of wind power generation must be raised so that costs can be cut to a level able to compete with the mainstream thermal, hydro and nuclear power producers. On the other hand, indigenous industrial technology must be improved so that large wind generator units can be made and huge import costs saved. However, it is impossible for the existing wind power producers to realise expanded reproduction and increase input into technological R&D through conventional profit cumulation alone. These difficulties can only be solved by legislative incentives in favour of fund cumulation and technological advancement. In this chapter, the author proposes optional systems applicable to the present and future of the Chinese renewable energy resource industry by making a comparative study of the systems for fund cumulation and technological advancement normally adopted around the world.
II. Description of Two Mandatory Systems for Renewable Energy Resources
A. Quota System
This system is represented by the United Kingdom and Australia, as well as some states in the US. Based upon overall national (or regional) targets, it specifies that every stakeholder (normally a power supplier) is obliged to undertake a certain quota, ie to buy a certain proportion of renewable-energy-based electric power from renewable energy power producers so that the yearly target can be fulfilled.7 The constraint applies to all end-user-oriented power suppliers (the subjects of the electric power market), rather than to a nation or region as a whole.
6 Zhou Heliang, above n 3 at 93–4. 7 Gu Shuhua and Wang Baiyu, 'Preliminary Research on Quota System for Renewable Energy Resources in China' (2003) 18 (1) Tsinghua University Journal—Social Science 27.
Thanks to the mandatory legislation, a minimum demand for renewable energy products is ensured and a favourable profit-making environment for such products is created, which consequently motivates project developers and manufacturers to research into and invest in the renewable energy resource industry and its technology. Like an overall national (regional) target system, a quota system is merely target-oriented and cannot operate unless combined with a tradeable Renewable Energy Resource Certificate System, under which
Each renewable energy resource certificate stands for a certain amount of electric power. They are granted by government supervisory authorities to qualified renewable-energy-based power producers. … Two tradable products are available to such renewable-energy-based power producers, ie electric power and certificates. The former is networked as conventional power, while the latter, standing for the price difference between renewable-energy-based power and conventional power, is tradable as an independent product.
Through the Certificate System, power suppliers are enabled to fulfil targeted quotas by purchasing certificates from the power producers.8
Texas in the US is a typical case. Its RPS Act (Act on Electric Industry Restructuring 1995) prescribed that quotas shall be allocated to competing private power suppliers in proportion to their annual power sales, and that public power companies must meet the quota standard if they decide to participate in the competition. Texas also formulated strict punishment measures against power suppliers who fail to fulfil their quotas. The United Kingdom released a Decree on Renewable Energy Resources Obligations in April 2000, which explicitly specified a certain proportion of renewable-energy-based power in the whole power supply.
All power suppliers are obliged to buy power from renewable-energy-based power producers or to buy quota certificates directly from the power supervisory authority. Anyone who fails to fulfil the quota is subject to a penalty of up to 10 per cent of its turnover.9
B. Mandatory Purchase System
Mandatory purchase means that, in order to meet the national (regional) target, power suppliers are compelled to buy renewable-energy-based power generated by qualified power producers. Such a system must be combined with suitable pricing and cost-sharing systems. The electricity price under this system is set by law rather than by the market, and is higher than the purchase price of power generated in conventional ways. However, such prices are differentiated rather than unified, on the basis of the costs of the different renewable energy resources, so as to ensure the profitability of all renewable energy power producers, who are consequently driven to continue production at lowered costs and to promote the comprehensive development of all renewable energy resources.
The typical case for this system is Germany. The Energy Act of 1998 prescribed that renewable-energy-based power has priority of grid access in the event of limited grid capacity; that no third party should be connected if the use of renewable energy resources is likely to be harmed; and that the installation of renewable-energy-driven power supply devices is exempt from permit regulations. The Act on Renewable Energy Resources released in early 2000 added protective prices for all renewable energy resources.
8 Gu Shuhua and Wang Baiyu, above n 7 at 27–8. 9 Shi Jingli and Li Junfeng, 'An Overview and Effect Analysis of British Acts on Renewable Energy Resources' (2004) China Energy 39.
It also defined the durations for which the protective prices apply according to the strength of wind in different areas, ie the weaker the wind, the longer the protective price remains in force.10 The Feed-In Act went even further, providing that responsibility for the purchase of renewable energy power shall be handed over from regional power suppliers to regional grid operators. Offshore wind power development was encouraged by a further provision that the nearest power supply network shall be liable for the power purchase where a generating facility lies outside the coverage of all power supply networks.
III. Comparison of the Two Systems
Both the quota system and the mandatory purchase system are policy systems rather than market systems in nature. By means of legislation and policy-making beyond the liberalised market rules, they enable the disadvantaged, more expensive renewable energy resource industry to gain more space to grow, thanks to governments' concern for energy security, balanced regional economies and environmental protection. In practice, however, the two differ in many respects.
The quota system, focusing on market mechanisms despite its basis in mandatory demand, has a number of advantages. First, it is because of market demand that investors are willing to increase input into the renewable energy resource industry. Secondly, market pricing and competition motivate developers to promote R&D and technology at lower cost in pursuit of higher profit. Thirdly, as the subjects of the quota system, power suppliers tend to cut costs by lending to renewable energy projects, seeking the most reasonable applications or entering into long-term commitments,11 which enables power suppliers and developers jointly to raise overall development and production efficiency. Lastly, the quota system expresses the social benefit of renewable energy resources.
For the value of renewable-energy-based power is divided into two parts under this system, one being the value equal to power generated with conventional energy and the other being the unique value arising from its environmental and social benefits, whose beneficiary may be the entire population of a nation or region.12 As the extra cost of renewable energy resources over conventional ones is ultimately borne by consumers under the quota system and its supporting tradeable Certificate System, the objective that the social benefit arising from renewable energy resources be paid for by its beneficiaries is fulfilled.13
No doubt, the quota system has its weaknesses. It can promote only the development of those renewable energy resources that are less costly, more abundant and easier to exploit with simpler technology; it is unable to promote the comprehensive development of all renewable energy resources. In addition, the quota (for the time being a ceiling) may confine the development of renewable energy resources to the total quota and may raise investment risk. Furthermore, the unstable market price of renewable-energy-based power may make power generation projects riskier.
On the other hand, owing to its access restrictions for developers, the mandatory purchase system is beneficial to the steady growth of the renewable energy resource industry, and its categorised pricing mechanism is able to ensure the balanced growth of all resources and technologies. However, developers and power suppliers are exposed to little risk and all costs are borne by the end users, which is unfair to the end users and scarcely motivating for the developers and power suppliers, thus preventing technology and production efficiency from upgrading.
10 Yan Huimin, 'Thinking on Quota System for Renewable Energy Resources' (2003) 5 Research & Utilization of Energy. 11 Gu Shuhua and Wang Baiyu, above n 8 at 28.
What is more, the inflexible government pricing is unlikely to reflect the actual value and market trends of renewable energy resources.
In conclusion, while the quota system suits countries and regions with highly liberalised electric power markets where the renewable energy resource industry is in the homogeneous competition period, the mandatory purchase system is better adapted to countries and regions where the electric power markets are less liberalised and the renewable energy resource industry is in the initial fund cumulating period, when more governmental support is needed.
IV. Optional Systems in the Law of China on Renewable Energy Resources
The prevailing Law on Renewable Energy Resources of China adopts the mandatory purchase model based on German law. A pre-approval and registration mechanism is applied to renewable energy power-networking projects, ie only power generated by the aforesaid qualified enterprises can be networked and bought. As specified in Article 14:
A grid shall sign a network agreement with pre-approved or registered renewable energy resource power producers, fully purchase the networked renewable-energy-based power covered by the grid and provide networking services for renewable-energy-based power.
In addition, mandatory networking for compliant fuel and heat is specified in the law.
12 Xiao Jiangping, 'System Design of the Law on Renewable Energy Resources Promotion of China' (2004) 2 China Law Science 107. 13 Although the extra cost is borne by customers under the Mandatory Purchase System, the concept of social benefit is not expressed in the system design. In addition, the non-market pricing rule cannot reflect the two values of renewable energy resources.
To support the mandatory networking, Article 19 prescribes that the networked prices for renewable-energy-based power be determined and adjusted from time to time by the supervisory State Council authorities in accordance with energy categories and regions; Article 20 specifies that the cost difference between the purchase of renewable-energy-based power and conventional power incurred by the grid shall be passed through to the market power price, ie borne by the end users.
Based upon the above analysis of the environments in which the two systems are applicable, the author believes that the mandatory purchase system is broadly reasonable for present-day China, where the renewable energy resource industry is characterised by low concentration, underdeveloped technology and fund shortages. This system favours fund cumulation and is adaptable to the reforming Chinese power system in the short term. As power producers have ceased to undertake government functions since the 'Grids Separate from Power Producers' and 'Competing Networking' reform measures were implemented, they should be free from public obligations for the development of renewable energy resources. On the other hand, as power suppliers have not been liberalised, the lack of competition is unlikely to force them to seek the least expensive producers or to cooperate with producers to minimise the cost of renewable-energy-based power generation, ie the objectives of the quota system are as yet far from achievable. However, this is just one side of the coin. We need to be aware that the Chinese renewable energy resource industry is in a rapid growth stage. Given time, it will step up to the higher level, the homogeneous competition period.
Therefore, it is necessary to give the quota system a trial run for an appropriate period, in order to preserve the motivation of future renewable energy power producers to promote technology actively and to reduce costs. It is proposed that the quota system be partially implemented in 2011, when another five-year plan is scheduled to be made and the Law on Renewable Energy Resources of China marks its fifth anniversary, so as to maximise the strengths of both systems and fully elevate the Chinese renewable energy resource industry.
Closing Reflections
17 New Frontier: Regulating Technology by Law and 'Code'
MICHAEL KIRBY*
'[T]he continued rapid advance in science is going to make life difficult for judges. We live in an age of breakneck technological change that will thrust many difficult technical and scientific issues on judges, for which very few of them (of us, I should say) are prepared because of the excessive rhetorical emphasis of legal education and the weak scientific background of most law students.' (RA Posner, 'The Role of the Judge in the Twenty-First Century' (2006) 86 Boston University Law Review 1049)
I. Present at the Creation
A. Preposterous Claims
Dean Acheson, one-time Secretary of State of the United States of America, called his memoirs Present at the Creation.1 It was a clever title, laying claim to having been at the important meetings during and after the Second World War in which the new world order was established. The claim was faintly preposterous, given that the Second World War grew out of the First, and bore remarkable parallels to other conflicts dating back to the Peloponnesian Wars of ancient times. All history, and all technology, grow out of the giant strides that preceded their current manifestations.
We forgive Acheson because (unlike some of his predecessors and successors) he was an elegant and sophisticated man, significantly concerned with improving the condition of the world and the welfare of its inhabitants.
* Justice of the High Court of Australia. One-time Chairman of the Expert Group of the OECD on Transborder Data Flows and the Protection of Privacy. Formerly a Member of the World Health Organisation Global Commission on AIDS and of the UNESCO International Bioethics Committee. Honorary Bencher of the Inner Temple. 1 D Acheson, Present at the Creation: My Years at the State Department (WW Norton, Inc, 1969).
I make an equally preposterous claim: that I was present at the creation of the central problem that occasioned the TELOS2 conference to discuss the challenge presented to legal regulation by the advent of modern biotechnology and information technology, the subjects of this book. The claim is absurd because such technologies have advanced by reason of the genius of technologists and scientists, who stand on the shoulders of their predecessors, also dating back to ancient times.3
In one of the closing talks at the conference, Professor Mireille Hildebrandt described the advances that occurred in the communication of ideas in medieval times following the perfection of spectacle glasses and the invention of the printing press. The former allowed the monks, who spent their years inscribing religious texts, to extend their working lives beyond presbyopia. Yet it was the printing press that released words (and hence the ideas represented by words) from the calligraphy of the monks. For holy men, the words were written to be said or sung. But after Caxton, printed words took on a life of their own. Their meaning could be gathered without mouthing the sounds they conjured up. In a forerunner to the urgencies of present-day email, words could be read four times faster than they could be said. A revolution in communication had begun.
It continues into our own times.
Acknowledging the ancient lineage of contemporary technologies, the changes upon which the conference concentrated were information technology and biotechnology. They are major features of the contemporary world. From the viewpoint of the law, they present a common difficulty: no sooner is a conventional law made to address some of their features, and to regulate those deemed necessary for regulation by reference to community standards, than the technology itself has raced ahead. The law in the books is then in great danger of being irrelevant, in whole or in part. Language written down at one time may have little, or no, relevance to events that happen soon thereafter.
B. Regulating Biotechnology
This is the sense in which I claim to have been present at the creation of the two nominated technologies. It came about in this way. In 1975, soon after I was first appointed to federal judicial office in Australia, I was seconded to chair the Australian Law Reform Commission (ALRC). The Commission, a federal statutory body, was created on the model of Lord Scarman's Law Commissions in the United Kingdom.4 Our task was to advise the Australian Parliament on the reform, modernisation and simplification of Australian federal law.
One of the first inquiries assigned to the ALRC concerned an issue of biotechnology. The Attorney-General required us to prepare a law for the Australian Capital Territory (a federal responsibility) to deal with the issues presented to the law by human tissue transplantation.5 The project was initiated in July 1976.
2 TELOS—Centre for the Study of Technology, Ethics and Law in Society, King's College School of Law, London. 3 Sir Isaac Newton, in a letter to Robert Hooke of 5 February 1675/6, wrote: 'If I have seen further it is by standing on the shoulders of giants.' 4 See MD Kirby, 'Law Reform and Human Rights—Scarman's Great Legacy' (2006) 26 Legal Studies 449.
The Commission was obliged to report no later than 30 June 1977. The timetable was heroic. In the event, the Commission fulfilled its mandate. It produced its report on time.
Within Australia, the report proved highly successful. Not only did it result in the adoption of a law on this aspect of biotechnology for the Capital Territory;6 the draft legislation attached to the ALRC's report was soon copied in all parts of Australia.7 Such was the universality of the issues addressed that the report was also quickly translated into languages other than English and used overseas in the development of the laws of other countries.
The report described the rapid advances that had by then occurred in transplantation surgery. The earliest attempts at this technology date back two thousand years. Instances of the transplantation of teeth in England at the close of the eighteenth century,8 of successful bone transplantation at the close of the nineteenth century9 and of the transplantation of organs such as the kidney from the early 1950s10 indicated that this was an area of human activity that probably required fresh legal thinking. One of the events that had propelled the Australian Attorney-General into action on this subject was the world-wide controversy surrounding the first transplantation of a human heart, performed in South Africa in December 1967 by Dr Christiaan Barnard. The recipient died 18 days later from pneumonia, but successful operations quickly followed.
The ALRC was quite pleased with itself for getting its report completed on time. After all, there were many difficult and controversial legal topics of regulation to be addressed.
These included: whether a system of 'opting in' or 'opting out' should be accepted to permit the removal of human tissue from the source; whether legal minors should be permitted to give consent, as for a sibling recipient, and, if so, under what conditions; whether payments for human organs should be forbidden; whether organs might be taken from prisoners and other dependent persons for transplantation; whether tissue might be removed from coroners' cadavers; whether blood was to be treated separately or as just another human tissue; and how 'death' should be defined for legal purposes, as a precondition to the removal of vital organs for transplantation.
5 Australian Law Reform Commission, Human Tissue Transplants, Report No 7 (1977). 6 Transplantation and Anatomy Act 1978 (ACT). 7 Human Tissue Transplant Act 1979 (NT); Transplantation and Anatomy Act 1979 (Qld); Human Tissue Act 1982 (Vic); Human Tissue and Transplant Act 1982 (WA); Human Tissue Act 1983 (NSW); Transplantation and Anatomy Act 1983 (SA); Human Tissue Act 1985 (Tas). 8 MFA Woodruff, The Transplantation of Tissues and Organs (Illinois, Chas Thomas, 1968). 9 Ibid, 380. 10 Ibid, 521–5.
As the ALRC was producing its report, it became aware of a 'major medical development … expected within the near future—possibly the next two or three years'. This was described as 'the fertilisation of human egg cells outside the human body'. The process of in vitro fertilisation and embryo transplantation was therefore mentioned in the report. However, the ALRC recognised that the fertilisation of the ovum of a woman by the use of donor semen, whether in utero or in vitro, raised issues different in kind from those presented by the transplantation of particular organs and tissues. Whether or not embryo transplantation literally fell within its terms of reference, the ALRC felt bound to exclude the subject from its report and draft legislation.
If there were to be an inquiry into in vitro fertilisation, it would require a separate reference.11 Similarly, the ALRC had become aware, even at that time thirty years ago, of the potential of transplantation of fetal tissue. It noted that work on fetal tissue transplants ‘may have already begun in Australia’.12 Already ‘right to life’ organisations and others had made submissions calling for legal prohibitions. Reports in Britain,13 the United States14 and New Zealand15 were mentioned. Once again the subject was side-stepped.

The ALRC inquiry afforded a vivid illustration for me of how, in the regulation of technology, events rarely, if ever, stand still. Even between the time that the ALRC initiated its project on human tissue transplantation law and the time it reported, the technology had marched on. Draft legislation prepared to address other topics was unsuitable, and plainly so, for the more sensitive and complicated issues emerging from in vitro fertilisation and fetal tissue transplants. Before long, Louise Brown was born. Eventually, special laws on in vitro fertilisation were adopted in Australia, as elsewhere.16 As I have learned in my judicial capacity, such laws, and the issues involving the availability of IVF for unmarried or same-sex recipients, invoke strong feelings, conflicting demands and different regulatory responses in different places.17

C. Regulating Information Technology

Soon after the completion of the law reform project on human tissue transplants, the ALRC was asked to prepare recommendations on reform of the Australian law

11 ALRC 7, above n 5 at paras 18–19 [41]–[42]. 12 ALRC 7, above n 5 at 20 [45]–[46]. 13 Great Britain, The Uses of Fetuses and Fetal Material for Research (London, HMSO, 1972), report by Advisory Committee established in 1970. 14 United States, National Commission for the Protection of Human Subjects on Biomedical and Behavioural Research, Report (21 May 1975).
15 New Zealand, Royal Commission of Inquiry, Contraception, Sterilisation and Abortion in New Zealand (Government Printer, 1977). 16 See eg Infertility Treatment Act 1995 (Vic); Reproductive Technology (Clinical Practices) Act 1988 (SA); Human Reproductive Technology Act 1991 (WA). 17 Re McBain; Ex parte Australian Catholic Bishops Conference (2002) 209 CLR 372.

New Frontier 371

governing the protection of privacy. This too led to a major inquiry although, in this case, the object was the preparation of proposals for federal legislation, suitable for enactment by the national Parliament. In the result, a number of reports were delivered on the topic.18 The major report, in 1983, dealt with many aspects of privacy protection under federal law.

As befitted its delivery on the brink of 1984, a major focus of the 1983 report was the new information technology. Even at that time, that technology had significantly changed the way in which information was collected and distributed and the amount of personal information that could be communicated. Because of the currency of the Australian inquiry, I was sent as the Australian representative to a group of experts convened by the Organisation for Economic Cooperation and Development (OECD) in Paris. That expert group was formed to make recommendations to member countries of the OECD on guidelines for the protection of privacy in the context of transborder data flows. In the event, I was elected to chair the OECD expert group. Between 1978 and 1980, it conducted its inquiry, drawing upon principles already developed in relation to automated and non-automated data systems by the Nordic Council, the Council of Europe and the then European Economic Community.
In the result, guidelines were agreed to by the OECD.19 They were to prove highly influential in the development of the national laws of member states, influencing the design and contents of such laws in countries with legal systems as diverse as Australia, Canada, Japan and the Netherlands, and corporate practice in the United States of America. The Australian Privacy Act, based on the ALRC report, was enacted by Parliament in 1988.20 Annexed to the Australian Privacy Act, in Schedule 3, were ‘national privacy principles’. As the Act declared in its Preamble, its purpose included compliance by Australia, as a member of the OECD, with the recommendation of the Council that member countries take into account in their domestic legislation the principles concerning the protection of privacy and individual liberties set forth in Guidelines annexed to the recommendations. The Act recited that Australia had ‘informed that organisation that it will participate in the recommendation concerning those Guidelines’.21 Hence the national privacy principles adopted by the new federal law.

A difficulty soon became apparent. It did not arise out of any defect in the understanding of the OECD expert group or of the ALRC in its recommendations to the Australian government and Parliament, concerning the technology then deployed. However, that technology quickly changed in its potential. Moreover, it did so in a way that rendered an assumption, expressed in the OECD Guidelines

18 ALRC, Unfair Publication: Defamation and Privacy, ALRC 11 (1979); Privacy and the Census, ALRC 12 (1979); Privacy, ALRC 22 (1983). 19 Organisation for Economic Cooperation and Development, Guidelines on the Protection of Privacy and Transborder Data Flows (Paris, 1980). 20 Privacy Act 1988 (Cth). 21 Privacy Act 1988 (Cth), Preambles 4 and 5.

and the Australian national privacy principles, out of date (at best) and irrelevant (at worst).
Let me illustrate the issue by reference to the ‘use and disclosure’ principle, the second of the Australian national privacy principles. That principle stated:

2.1 An organisation must not use or disclose personal information about an individual for a purpose (the secondary purpose) other than the primary purpose of collection unless:
(a) Both of the following apply:
(i) The secondary purpose is related to the primary purpose of collection and, if the personal information is sensitive information, directly related to the primary purpose of collection;
(ii) The individual would reasonably expect the organisation to use or disclose the information for the secondary purpose; or
(b) The individual has consented to the use or disclosure; or
(c) If the information is not sensitive information and the use of the information is for the secondary purpose of direct marketing [certain provisions follow]; or
(e) The organisation reasonably believes that the use or disclosure is necessary to lessen or prevent:
(i) A serious or imminent threat to an individual’s life, health or safety; or
(ii) A serious threat to public health or public safety; or
(f) The organisation has reason to suspect that unlawful activity has been, is being or may be engaged in …; or
(g) The use or disclosure is required or authorised by or under law; or
(h) The organisation reasonably believes that the use or disclosure is reasonably necessary for one or more of the following by or on behalf of an enforcement body [Provisions on law enforcement follow].

The basic hypothesis of the OECD Guidelines (and therefore of the ALRC recommendations) was that personal information that was collected should ordinarily be restricted to use for the purpose for which it was collected and that such purpose should be made known to the individual at the time of the collection.22 Then along came search engines, including Google and Yahoo.
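For readers who find the nested limbs of principle 2.1 hard to follow, they can be read as a simple decision procedure. The following sketch is mine, not the Act’s: the purpose names, the keyword flags and the crude prefix test standing in for the statutory ‘related purpose’ inquiry are all illustrative assumptions, and only limbs (a), (b) and (g) are modelled.

```python
def use_permitted(primary_purpose, secondary_purpose, *, sensitive=False,
                  reasonably_expected=False, consented=False,
                  required_by_law=False):
    """Illustrative sketch of the decision logic of limbs (a), (b) and (g)
    of privacy principle 2.1; not the statutory test itself."""
    if consented:                      # limb (b): consent
        return True
    if required_by_law:                # limb (g): required or authorised by law
        return True
    # limb (a): secondary purpose related to the primary purpose
    # (directly related, if sensitive) AND within reasonable expectations.
    # A prefix match crudely stands in for 'related'; an exact match for
    # 'directly related'.
    if sensitive:
        related = secondary_purpose == primary_purpose
    else:
        related = secondary_purpose.startswith(primary_purpose)
    return related and reasonably_expected

# A use within a stated, bounded purpose can pass limb (a); an unrelated
# use cannot, absent consent. A search engine, which cannot state a bounded
# primary purpose at collection, gives limb (a) nothing to bite on.
print(use_permitted("billing", "billing/reminders", reasonably_expected=True))
print(use_permitted("billing", "marketing"))
print(use_permitted("billing", "marketing", consented=True))
```

The point of the sketch is structural: every non-consent limb presupposes that a primary purpose was fixed and made known at collection, which is exactly the assumption that general-purpose collection overturned.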
The specification of purposes of collection and the limitation of use and disclosure by reference to such purposes went out the window.23 This is the sense in which I assert that I was present at the creation of the problem addressed in the TELOS conference on the regulation of new technologies. Accepting as paradigm instances the cases of biotechnology and information technology that I have described, the difficulty (in some cases the near impossibility) soon became apparent of drafting any law of the conventional kind that would not quickly be overtaken by events. In part, legal texts might be overtaken by advances in technology of the kind that I have described. But in part too, changes in social attitudes, themselves stimulated by advances in technology and a perception of the utility of the advances, make it more difficult than in other fields of law to draw a clear line in the sand.

22 Privacy Act 1988 (Cth), Sch 3, ‘Privacy Principle 1 (Collection)’. 23 Another illustration arises out of the enactment of provisions requiring that confessions and admissions to police, by suspects in custody, should be recorded on ‘videotape’. See eg Criminal Code (WA), s 570D(2)(a). The change to digital technology necessitated amendment of such laws to substitute a requirement for ‘audio-visual recording’. See Criminal Investigation Act 2006 (WA), s 118(1).

D. The Caravan of Controversy

Take, for example, in vitro fertilisation. In 1976, when the ALRC report on Human Tissue Transplants was written, many earnest debates were conducted over the suggested ethical quandary of transplantation of ova fertilised by a husband’s sperm. These debates were quickly replaced by new ones concerned with the use of non-husband (donor) sperm. Such debates are now rarely raised, even in esoteric legal circles. Today the ethical (and legal) debates in Australia and elsewhere are generally concerned with the availability of IVF to single parents and to same-sex couples.
Thus, the caravan of controversy has moved on. A law drafted too early may freeze in time the resolution of earlier controversies which may later be regarded as immaterial or insignificant. Napoleon reportedly observed a principle of never responding to letters for at least a year. He adopted this principle on the footing that, if the problem still existed a year later, it would be time enough for it to receive the Emperor’s attention. Whether by default or by design, many issues presented to the law by contemporary technology appear to receive the same treatment. One suspects that, in many instances, this is because of the complexity and sensitivity of the issues rather than a strategic policy of lawmakers to postpone lawmaking, or clarification of regulation, until the contours of the necessary law have become clear.

Five Paradoxes

A. Doing the Best without Experts

Having laid the ground for my competence to provide this summation of the TELOS conference, I will start by identifying a number of paradoxes, or at least curiosities, that emerged during the debates. In fact, the first of the curiosities is a reflection not only on my own limited competence to participate but also on the limited competence of everyone else.

There are no real experts on the subject of regulating technologies. They do not exist in the United Kingdom, the United States, Australia or elsewhere. It is much easier to find an expert on the intellectual property implications of biotechnology and information technology than it is to find someone skilled in considering what new law, if any, should be adopted to deal with a particular issue presented by technology and how it should be devised. Easier by far to find an expert on income tax or unjust enrichment or international human rights law than to find scholars, judges or even legislative drafters who can claim to be experts in the subject matter of the TELOS conference.
It is true that we had the privilege of an opening address by Professor Lawrence Lessig, Professor of Law at Stanford Law School in the United States. He is founder of that School’s Center for Internet and Society. Professor Lessig’s book Code and Other Laws of Cyberspace (now updated by Code V2) blazed a trail. He launched the host organisation for the conference, TELOS. He is something of a guru on the interface of cyberspace and the law. His launching speech, like his books, challenged us all to think freshly. His novel thesis is that ‘Code’, or the architecture of technological systems, will sometimes incorporate regulatory imperatives into information technology, obviating any real choice on the part of the user as to whether or not to conform to the law.

In the High Court of Australia we came face to face with this reality in the recent appeal in Stevens v Sony Computer Entertainment.24 The case concerned a claim by Sony Corporation of breach of a ‘technological protection measure’ installed by it in the programme of its computer games. Sony asserted that the measure was protected under the Australian Copyright Act 1968. Sony argued that Mr Stevens had unlawfully sought to circumvent the device incorporated in computer games that it produced and sold on CD-Rom for use in its PlayStation consoles. Applying a strict interpretation to the expression ‘technological protection measure’, the court held that Sony’s device did not fall within the statute. I agreed in this analysis.25 The case was a vivid illustration of the way in which, for copyright, contractual and other legal purposes, attempts are now often made to incorporate regulatory provisions in the relevant technological codes. It is a new development, although I suppose one might see primitive attempts directed at the same object in the safety provisions incorporated in the design of houses, bridges and aeroplanes.
The computer PlayStations simply take this development to a higher level of sophistication and technological capability. Professor Lessig identified this new development. Inevitably, his expertise did not include all of the current major technologies, still less the way in which law can regulate them.

I too am no expert in the design of laws. True, I sit in a final national court that sometimes declares new laws. I worked for a decade in national law reform, as I have described. True also, I have participated in the drafting of international guidelines, such as those of the OECD.26 However, this is hardly an intensive preparation for the complex and technical task of drafting conventional laws for, or under, a legislature. I have become rusty since, in my law reform days, I worked with former parliamentary counsel on the draft legislation annexed to the ALRC’s reports. Nor can it be said that the academics present at the conference had any special skills (at least skills that any of them revealed) in drafting statutes and subordinate regulations. Professor Brownsword confessed to beginning life teaching contract law, with later experience in consumer and environmental law. Whilst the latter fields are overburdened with a mass of regulation, it is a different thing to use and interpret such laws, on the one hand, and to design and draft them, on the other. Many participants in the conference were, to use the words of Professor Judy Illes, trained as ‘bench scientists’. Although the experience of authentic scientists and technologists was essential to an understanding of the problem, it does not necessarily provide the best guidance for the legal solutions.

24 (2005) 224 CLR 193; [2005] HCA 58. 25 (2005) 224 CLR 193 at 246. 26 Also as chair of the UNESCO International Bioethics Committee drafting group for the Universal Declaration on Bioethics and Human Rights, adopted by the General Conference of UNESCO, Paris, October 2005. See R Andorno, ‘Global Bioethics at UNESCO: in Defence of the Universal Declaration on Bioethics and Human Rights’ (2007) 33 Journal of Medical Ethics 150.

Lenin is said to have declared that the person who writes the minutes of an organisation usually ends up controlling it. The history of the general secretaryship of the Soviet Communist Party obliges us to take this advice seriously. We may complain about the absence of law concerned with new cutting-edge technology. We may acknowledge our own imperfections for addressing the gap. We may recognise, with Professor Lessig, that regulation in the future may not necessarily come in the form of instruments made by or under the legislature and published in the Government Gazette. Nevertheless, the issue tackled in the TELOS conference is undoubtedly one of the greatest importance for the future of the rule of law in every society.

Despite the manifold weaknesses of those whom it invited to its conference, TELOS may, in the long run, have a paradoxically disproportionate impact on perceptions of how technologies may be regulated and used in regulation, simply because it is one of the first organisations to tackle this issue generically. It surveys what is substantially a blank page. Increasingly the content of law, like the content of life, will be concerned with technology and with its many consequences for society. The importance of the chosen topic therefore belies the comparatively little that is written, said and thought about it. Paradoxically, then, those who first lay claim to expertise may participate in a self-fulfilling prophecy.

B. Too Much/Too Little Law

The second paradox is that most of us recognise that a failure to provide law to deal with the fallout of particular technologies is not socially neutral. Effectively, to do nothing is often to make a decision.
Thus, for the law to say nothing about reproductive cloning of human beings, for example (assuming that end to be technically possible), is to give a green light to experiments in that technology. In so far as law expresses prohibitions supported by sanctions that uphold the command of a sovereign power, silence may, for once, imply consent or at least non-prohibition. Thus, if there is no law to prohibit or regulate reproductive cloning or hybridisation or xeno-transplants, scientists and technologists at their benches may decide to experiment. Nothing then exists to restrain them except their own ethical principles, any institutional ethics requirements, the availability of funding and the prospects of a market. A scientist or technologist may proceed out of sheer curiosity, as when David Baltimore so beneficially investigated a simian retrovirus a decade before the discovery of the immuno-deficiency virus in human beings.

The scientist or technologist may do this in the hope of cashing in on a potentially lucrative therapeutic market. One such market certainly exists in respect of therapies to overcome human infertility. Reproductive human cloning might, potentially, be one such therapy. Some of its supporters treat with contempt the supposed moral objections to this form of scientific advance.27 They point to earlier resistance to other reproductive technologies such as artificial insemination by donor (AID), artificial insemination by husband (AIH), in vitro fertilisation (IVF) and surrogacy arrangements.28 Most of these objections have faded away as society becomes more used to ‘non-natural’ ways of securing a desired pregnancy in a particular patient.

The recognition that inaction in the face of significant technologies may amount to making a decision co-exists with our appreciation, as observers of the law, that premature, over-reaching or excessive lawmaking may, in some cases, be an option worse than doing nothing.
It may place a needless impediment upon local scientists and technologists, obliging them to take their laboratories and experiments offshore. In a big world with diverse cultures, religions and moral beliefs, it is never difficult to find a place offering a regulation-free zone in exchange for investment dollars. Just as bad is the possibility that laws are solemnly made and then ignored or found to be ineffective, as was temporarily the case with the ‘technological protection measure’ considered in the Australian Sony litigation. Following the decision of the High Court of Australia in that case, and under pressure from the United States government under the United States–Australia Free Trade Agreement, Australian law was changed. The new law represented an attempt to overcome the High Court’s decision, although in a somewhat different way.29

27 JA Robertson, ‘Why Human Reproductive Cloning Should Not in All Cases be Prohibited’ (2001) 4 Legislation and Public Policy 35; YM Shikai, ‘Don’t be Swept Away by Mass Hysteria: the Benefits of Human Reproductive Cloning and Its Future’ (2002) 33 Southwestern University Law Review 259. 28 The New South Wales Law Reform Commission in 1988 recommended a prohibition on surrogacy arrangements which was not implemented. However, surrogacy arrangements are regulated in some Australian jurisdictions: Parentage Act 2004 (ACT); Surrogate Parenthood Act 1988 (Qld); Family Relationships Act 1975 (SA); Surrogacy Contracts Act 1993 (Tas); and Infertility Treatment Act 1995 (Vic). 29 The story of the change of law following the decision in the Sony case is told in M de Zwart, ‘Technological Enclosure of Copyright: the End of Fair Dealing?’ (2007) 18 Australian Intellectual Property Journal 7; contrast D Brennan, ‘What Can It Mean “to Prevent or Inhibit the Infringement of Copyright”? A Critique of Stevens v Sony’ (2006) 17 Australian Intellectual Property Journal 81 at 86.
See also Copyright Amendment Act 2006 (Cth) implementing the new scheme said to be required by art 17.4.7 of the Australia–United States Free Trade Agreement.

Many participants in the TELOS conference, whether expert in matters of biotechnology or information technology, revealed themselves as legal libertarians. They were so mainly because of their recognition of the common potential of premature, over-reaching and ill-targeted laws to diminish experimentation, burden innovation and cause economic and other inefficiencies. Thus, Professor Han Somsen presented a number of compelling arguments about the dangers of the ‘precautionary principle’.30 Whilst this principle appears to be gaining increasing acceptance in the international community, particularly in respect of protection of the global environment, it carries risks of its own. If taken too far, it could instil a negative attitude towards science and technology and encourage excessive regulation in the attempt to avoid any risks. Life is risky. Most technological innovations carry some risk. An undue emphasis on precaution, for fear of any risks, would not be good for science or technology or for the global economy or for innovation in thought as well as action.

The second paradox is thus more of a contradiction or tension, difficult to resolve. At the one time we must accept that doing nothing to regulate technologies involves making a decision. Yet we must also recognise that sometimes doing nothing will be a better option than making laws that impede innovation and burden efficiency.

C. First Amendment and Copyright Law

An early illustration of the second paradox arose in the opening address of Professor Lessig. His address was concerned with the potential of ‘Code’ (or information technology architecture) to play a part in regulating technology in ways more universal and immediately effective than most laws are.
An instance, frequently mentioned, is the installation of filters designed to prohibit access to materials considered ‘harmful to minors’. Many countries now have legal regulations forbidding access to, or possession of, child pornography. Available software may prevent access to sites providing such images. But sometimes it may achieve these objectives at the cost of over-reaching prohibitions. The burden on free communication may outstrip the legitimate place of legal regulation, forbidding access not only to child pornography but to lawful erotic materials, or to discussion about censorship itself, or to websites concerned with subjects of legitimate interest, such as aspects of human sexuality, women’s rights and even children’s rights.

Whereas the law will commonly afford avenues of appeal and review of decisions that purport to apply legal norms, an over-reaching ‘protective’ software programme may afford no such rights of challenge. Those concerned with the human right of free expression are naturally anxious about the potential of ‘Code’ to re-institute excessive censorship in society, just when we thought we had grown out of that habit.

30 R Andorno, ‘The Precautionary Principle: A New Legal Standard for a Technological Age’ (2004) 1 Journal of International Biotechnology Law 11–19.

Like most American lawyers, Professor Lessig approached these issues from the standpoint of the First Amendment to the United States Constitution.31 This upholds a very high level of unrestricted and unregulated freedom of communication. The rest of the world tends to be less absolutist in this respect.32 It recognises that, whilst ‘free’ expression and access to a ‘free’ media constitute important human rights, they are not unlimited. They have to be harmonised with other fundamental human rights.
These include the right to individual honour and reputation and to protection of privacy and family relationships.33 They also include protection of the legitimate rights of inventors.34 Professor Lessig expressed concern about the balance that has been struck in the United States between the right to free expression and the right to copyright protection, which necessarily impinges on free expression.35

In an international meeting such as the TELOS conference, we were not, as such, concerned with the particularities of United States law, including the way the constitutional law of that country reconciles free expression and lawful copyright protection. On the other hand, because of the dominance of the United States media and its hegemony in entertainment and popular culture, what is done in that country to regulate information technology obviously has consequences world-wide. Just as, in earlier decades, the hard copy issues of Playboy, circulating in huge numbers around the world, broke down the prevailing culture of censorship, carrying First Amendment values virtually everywhere, so today the inbuilt ‘Code’ or architecture of information systems may carry American legal protections for American copyright holders far beyond the protections that the laws of other countries afford them.36

This consequence can present legal and practical problems of regulation of technology in jurisdictions enjoying different capacities to contest the balances struck by the Constitution and laws of the United States. In smaller economies, there may be no real choice. Upholding the local constitution and its values may,

31 Relevantly, the First Amendment states: ‘Congress shall make no law … abridging the freedom of speech, or of the press’. 32 Eg ABC v Lenah Game Meats Ltd (2001) 208 CLR 199 at 283 [202] ([2001] HCA 63); Dow Jones and Co Inc v Gutnick (2002) 210 CLR 575 at 626 [115] ([2002] HCA 56).
33 International Covenant on Civil and Political Rights (1976) arts 17.1, 17.2 and 19.3. 34 cf Universal Declaration of Human Rights (1948) art 27.1; International Covenant on Economic, Social and Cultural Rights (1976) art 15.1(b) and (c). 35 cf Nintendo Co Ltd v Sentronics Systems Pty Ltd (1994) 181 CLR 134 at 160; Grain Pool of WA v The Commonwealth (2000) 202 CLR 479 at 531 [133], fn 266 ([2000] HCA 14) referring to Graham v John Deere & Co 383 US 1 at 6 (1966); Feist Publications Inc v Rural Telephone Service Co Inc 499 US 340 at 348 (1991) and L Lessig, Code and Other Laws of Cyberspace (1999), 131, 133–4. 36 Stevens v Kabushiki Kaisha Sony Computer Entertainment (2005) 224 CLR 193 ([2005] HCA 58) citing L Lessig, Code and Other Laws of Cyberspace (1999); B Fitzgerald, ‘The PlayStation Mod Shift: A Technological Guarantee of the Digital Consumer’s Liberty or Copyright Menace/Circumvention Device?’ (2005) 10 Media and Arts Law Review 85 at 96. See also Metro-Goldwyn-Mayer Studios Inc v Grokster Ltd 73 USLW 4675 (2005).

as a matter of practicalities, be impossible. Consumers may be presented with no real option. If they buy the software that drives the PlayStation, they may find that it reflects United States constitutional and copyright laws. Indeed, such software may exceed even the protections afforded by those laws. It is in this sense that ‘Code’ and architecture may challenge the previous assumption that, within its own borders, each nation state is entitled, and able, to enforce its own laws, reflecting its own values. In Australia, we gained a glimpse of things to come in the Sony litigation. But it was only the beginning.

The debate that Professor Lessig recounted between First Amendment values and the current state of American copyright law presents a microcosm of similar conflicts in every society. There is an element of the paradoxical about it in the United States.
This is because, as Professor Lessig put it, intellectual property law in that country has been able, to some extent, to slip under the radar of First Amendment values. To a large extent, intellectual property law has developed separately and, in part, inconsistently. This point was noted by me in my reasons in Sony. Eventually, in the United States, Britain, Australia and elsewhere, it will be necessary to face directly the tension between enlarging copyright protection (including through the use of the technological architecture of information technology) and adhering to high levels of free communication, unimpeded by governmental regulation (such as by copyright law37).

The conflict recounted by Professor Lessig presents a paradox, visible to non-Americans and to American lawyers themselves.38 The country which has been foremost in promoting values of free expression and the free press has also lately been foremost in promoting, extending and enforcing the intellectual property rights of its own creators, ‘inventors’ and designers. This is not only true in the context of information technology. It is also true in the case of biotechnology, as the closely divided decision of the Supreme Court of the United States in Diamond v Chakrabarty,39 and its progeny, demonstrate. Professor Lessig appreciated, and highlighted, this paradox. It appears in an acute form in the United States. But it has its counterparts everywhere.

D. Technology’s Democratic Deficit

A fourth paradox derives from the way in which contemporary technology at once enhances, and diminishes, our facilities of democratic governance. No one at the TELOS conference questioned the importance of science and technology in the current age. Similarly, no one questioned the desirability of rendering laws, and regulation more generally, available and accountable to the people from whom authority to govern society is ultimately derived.
However, on balance, does technology enhance or reduce democratic accountability for the state of the resulting regulations?

37 Grain Pool (2000) 202 CLR 479 at 531 [133] ([2000] HCA 14); Sony (2005) 224 CLR 193 at 256. 38 Graham v John Deere Co 383 US 1 at 6 (1966). 39 447 US 303 (1980); cf MD Kirby, ‘Intellectual Property and the Human Genome’ (2001) 12 Australian Intellectual Property Journal 61 at 64.

In some respects, there can be no doubting that technology has improved communication that is essential to converting the formalities of electoral democracy into the realities of genuine accountability of the governors to the governed. Radio, television, world-wide satellite communications, the Internet, podcasts, blogs and so forth have revolutionised the distribution of information about those persons and institutions whose decisions affect the regulation of our daily lives. In this sense, democratic governance has moved from the small town hall assemblies of earlier times into huge national and international forums, both public and private.

Paradoxically, however, the very quantity of information has resulted in its manipulation, and in presentation that is often antithetical to real democratic accountability. The technology stimulates a demand for the simplification and visualisation of messages, the personalisation of issues, the trivialisation of conflict, the confusion between fact and opinion and the centralisation and ‘management’ of news. So-called ‘spin’ and ‘infotainment’ are characteristics of the present age. They tend to concentrate power in a way that even George Orwell could not have imagined.

Several speakers at the TELOS conference referred to yet another feature of contemporary technology that can be inimical to democracy. This is the incorporation of regulation in the technology itself that goes beyond what is strictly required by local law, yet without effective opportunities for those affected to challenge the regulation so imposed.
Who can, or would, challenge software designed to bar access to Internet sites selected as ‘harmful to minors’ but sometimes operating in an over-inclusive way? Not long ago, in the High Court of Australia, I found that the website of the Archbishop of Canterbury was barred from use. My staff were unable to procure one of the Archbishop’s addresses. This was presumably because a filter, instituted to deny access to websites deemed undesirable, had erected a bar. Presumably, this was because, in the manner of these times, one or more of his Grace’s addresses dealt with issues of sex, specifically homosexuality. In fact, that was exactly why I wanted the speech. I was surprised to find that at the same time the Vatican website was accessible without any restriction. This may say something about the prudence of His Holiness’s choice of language, the power of the Roman Catholic Church in such matters or the religion of the filter programmer. I gave directions that led to the filter being over-ridden. I secured a copy of the desired speech. But many might not be so lucky.

Given the importance of technology to the current age, how do we render those who design, install and enforce such programmes accountable to the democratic values of our society? As ‘Code’ enlarges and replaces the old style legal regulation of technology, how do we render its architects answerable to the majority views of the people? How, if at all, are transnational corporations, like Sony for instance, rendered responsible to the democratic values of the nations in which their products are used?
New Frontier 381

These are legitimate questions because the fourth paradox is the coincidence, at the one time in history, of technologies that vastly enhance access to information that jumped the Berlin Wall, bringing messages of freedom, with technologies that sometimes diminish genuine debate, enlarge unreviewable ‘technological’ corporate decisions and expand the capacity to ‘manage’ news in ways inimical to real transparency and accountability of decision-makers to the people.

E. Vital but Neglected Topics

I reach my fifth, and final, paradox. The TELOS conference addressed one of the most important issues for the future health of the rule of law in every country. Because of the elusiveness of much contemporary technology to effective regulation, large and increasing areas of activity in society find themselves beyond the traditional reach of law as we have hitherto known it. When regulation is attempted, as I have shown, it will often be quickly rendered ineffective because the target has already shifted.

Typically, in the past, the drawing up of laws has been a slow and painstaking process. Consulting governments and those primarily affected, not to say the people more generally, takes much time. In that time, the technology may itself change, as I have demonstrated from my experience with human tissue transplantation and privacy laws. Now new forms of regulation are being developed in the form of what Professor Lessig calls ‘Code’. Yet this form of regulation is not so readily susceptible, if susceptible at all, as conventional laws have been, to democratic values and to the participation (or even appreciation) of most of those affected in the moral choices that determine the point at which the regulation is pitched.

If, on the same Easter weekend in London, King’s College School of Law had convened a conference on revenue law, it would have filled a convention hall.
A month earlier, in Hobart, Tasmania, I had addressed more than 600 lawyers and accountants at such a conference in Australia. Similarly, a conference on the law of unjust enrichment would attract hundreds of contributors, with their differing opinions. Even a meeting on the rule against perpetuities would probably have attracted more participants than the inaugural conference of TELOS. Yet, in all truth, the issues addressed by TELOS are more important for our societies and their governance than virtually any of the other topics that the legal discipline could offer. It sometimes falls to small groups, particularly in professions, to lead the way and to bring enlightenment to the many.

This, then, is the fifth paradox—at least it is an oddity. Such an important topic as the regulation of burgeoning technologies in modern society should engage the interest and attention of all who claim to be lawyers, sociologists and philosophers and express an interest in the health of the rule of law. Yet, for the moment, and for most such observers, this is terra incognita. The contributions at the TELOS conference suggest that it will not, and should not, be so for long.

Seven Lessons

A. Recognise a Basic Dilemma

Certain general lessons stand out from the presentations at the TELOS conference. Some of them have already been touched on.

The first is that the regulation of technology faces a fundamental dilemma hitherto uncommon in the law. This is that, of its character, technology is normally global. Law, being the command of an organised community, is traditionally tied to a particular geographical jurisdiction. Whereas in recent years the need for extra-territorial operation of municipal law has been recognised, and upheld,40 the fact remains that the focus of most national law is the territory of the nation.
By way of contrast, the focus of regulating technology must be the technology itself.41 Sometimes, that feature of the technology will make effective regulation by national law difficult, or even impossible.

It is in this context that direct enforcement by ‘Code’, written into software programmes or otherwise imposed, adds a new dimension to global technology. The values and objectives of transnational corporations may be even more unresponsive to national regulation than the rules of a municipal legal system are. Moreover, ‘Code’ of this kind may opt for caution and over-inclusion so as to avoid dangers to markets in the least right-respecting countries. The contractual arrangements entered into between the government of the People’s Republic of China and the corporations selling access to Yahoo and Google in China, described during the conference, illustrate the willingness of the latter to succumb to the demands of the former so as to avoid endangering a lucrative economic market for their products. In this way not only the provider but also the users are subjected to forms of censorship that might not be tolerated in other societies. A smaller country, with a smaller market, is unlikely to exert the same clout. Considerations of economics rather than of legal principle, ethical rules or democratic values may come to predominate in such cases.

B. Recognise that Inaction is a Decision

In the past, proponents of technological innovation have often favoured containment of law and a ‘libertarian’ approach to developments of technology. Yet most lawyers recognise that there are limits. Unless such limits are clearly expressed, and upheld in an effective way, the absence of regulation will mean, effectively, that the society in question has made a decision to permit the technological advances to occur, without impediment.
40 Re Aird; Ex parte Alpert (2004) 220 CLR 308 at 344–350 [114]–[133]; [2004] HCA 44, referring to the case of the SS ‘Lotus’ (1927) Permanent Court of International Justice, Series A, No 10, Judgment No 9, pp 18–19 and J Martinez, ‘Towards an International Judicial System’ 56 Stanford Law Review 429 (2003).
41 Dow Jones (2002) 210 CLR 575 at 615–19 [78]–[92]; [2002] HCA 56.

Those who are cautious about adopting any form of the precautionary principle may nonetheless recognise the need for some restraints. Thus, unlimited access to child pornography will probably offend most people and sustain the need for regulation of the Internet to prohibit or restrict access to such sites. However, that will still leave room for debate about the detailed content of the regulation: the age of the subjects depicted; any permissible (computer graphic rather than human) images; the means of enforcing the law; and the provision of effective sanctions.42 Cases on these issues, and on any constitutional questions that they present, are now quite common.43

Likewise with biotechnology. Views may differ over whether regulation is necessary, or even desirable, to prohibit therapeutic cloning, reproductive cloning or the use of human embryonic stem cells. Yet non-binding prohibitory resolutions and declarations have been adopted in the organs of the United Nations on this subject.44 Even those nations, like the United Kingdom, that have not favoured prohibitions or moratoriums on experiments with human cloning for therapeutic purposes might well accept the need to prohibit, or restrict, some bio-technological experiments. Hybridisation and xeno-transplantation of tissue across species clearly require, at the very least, restrictions and safeguards so as to prevent cross-species transmission of endogenous viruses.

To do nothing is therefore effectively to decide that nothing should be done. It does not necessarily amount to a decision to ‘wait and see’.
This is why the regulation of technology is such an important topic. It is not one that can be ignored, simply because the subject matter, and the available regulatory techniques, are difficult and controversial.

C. Recognise the Limited Power to Regulate

A third lesson, derived from the first two, is that the normal organs of legal regulation often appear powerless in the face of new technology. This is clear in the case of attempts to regulate new information technology. So far as the Internet is concerned, the regulatory values of the United States inevitably exert the greatest influence on the way the Internet operates and what it may include. This means that both First Amendment and copyright protection values, established by the law of the United States, profoundly influence the Internet’s present design and operation. An attempt by another nation’s laws (such as those of France) to prohibit transnational publication offensive to that country’s values (such as advertising Nazi memorabilia) may face difficulties of acceptance and enforcement in the Internet.45 The same is true of biotechnology.

42 Bounds v The Queen (2006) 228 ALR 190 at 197 [26], 211 [94]; [2006] HCA 39.
43 The Queen v Fellows and Arnold [1997] 2 All ER 548; The Queen v Oliver [2003] 1 Cr App R 28 at 466–7 [10]; cf Lawrence v Texas 539 US 558 at 590 (2003).
44 KL Macintosh, ‘Human Clones and International Human Rights’ (2005) 7 University of Technology Sydney Law Review 134 at 135–6, describing the resolution of the General Assembly of the United Nations of 8 March 2005. This approved a Declaration, proposed by the Sixth Committee, to ‘prohibit all forms of human cloning inasmuch as they are incompatible with human dignity and the protection of human life’. The General Assembly vote was 84 to 34 in favour with 37 abstentions.
The Australian Parliament initially enacted the Prohibition of Human Cloning Act 2002 (Cth) and the Research Involving Human Embryos Act 2002 (Cth). These were part of a package of laws aimed at the consistent prohibition in Australia of human cloning and other practices deemed unacceptable at the time. Both Acts were adopted on the basis of the promise of an independent review two years after the enactment. Such a review was duly established. It was chaired by a retired federal judge, the Hon John Lockhart. The review presented its report in December 2005. It recommended an end to the strict prohibitions of the 2002 legislation; the redefinition for legal purposes of the ‘human embryo’; and the introduction of a system of licensing for the creation of embryos for use for therapeutic purposes.46

Initially, the Australian government rejected the recommendations of the Lockhart review. However, following political, scientific and media reaction, a conscience vote on an amending Act, introduced by a previous Health Minister, was allowed. In the outcome, the amendments were enacted. They passed the Senate with only a tiny majority.47

The main arguments that promoted this outcome in Australia were the recognition of the pluralistic nature of the society; widespread reports on the potential utility of the research and experimentation; and the expressed conviction that experimentation would proceed in overseas countries with results that, if they proved successful, would necessarily be adopted and utilised in Australia.48 Interestingly, both the Prime Minister and the Leader of the Federal Opposition voted against the amending Act.49

The global debates on the regulation of experiments using embryonic stem cells have often been driven by countries that, to put it politely, are not at the cutting edge of the applicable technology.50 On the other hand, in recent years, the United States has also adopted a conservative position on these topics in United Nations forums.
As happened in Australia, this may change in time.

45 League Against Racism and Anti-Semitism (LICRA), French Union of Jewish Students v Yahoo! Inc (USA), Yahoo France [2001] Electronic Business Law Reports 1(3) 110–120 (The County Court of Paris).
46 Australian Government Legislation Review: Prohibition of Human Cloning Act 2002 and the Research Involving Human Embryos Act 2002, Report, Canberra, December 2005.
47 In the Australian House of Representatives, the vote was 82:62. See Commonwealth Parliamentary Debates (House of Representatives), 6 December 2006, 127. In the Senate the vote was 34:31. See Commonwealth Parliamentary Debates (Senate), 7 November 2006, 48.
48 See eg ‘Let the Debate Begin: Australia Should Lead, Not Lag, in Regenerative Medicine’ The Australian (7 August 2006) 15; B Finkel and L Cannold, ‘Day for Stem Cells and the Hope of Finding Cures’ Sydney Morning Herald (7 August 2006) 9; L Skene and Ors, ‘A Greater Morality at Stake on the Decision of Stem-Cells Research’ Sydney Morning Herald (14 August 2006) 11; B Carr, ‘Age-Old Objections Must not be Allowed to Delay this Revolution’ Sydney Morning Herald (25 July 2006) 13.
49 Mr Howard spoke at Commonwealth Parliamentary Debates (House of Representatives), 6 December 2006, 117. Mr Rudd spoke, ibid, p 119.
50 Thus, Honduras was the national sponsor of the United Nations ban on human cloning, reproductive and therapeutic. See Macintosh (2005) 7 University of Technology Sydney Law Review 134.

D. Recognise Differentiating Technologies

So far as regulation of technologies is concerned, the TELOS conference established the need to differentiate technologies for the purpose of regulation. It is not a case of one response fits all. Self-evidently, some forms of technology are highly sensitive and urgently in need of regulation.
Unless the proliferation of nuclear weapons is effectively regulated, the massive destructive power that they present has the potential to render all other topics theoretical. Similarly, some aspects of the regulation of biotechnology are sensitive, including the use of embryonic stem cells and germline modification. For some, the sensitivity derives from deep religious or other beliefs concerning the starting point of human existence. For others, it arises out of fears of irreversible experiments that go wrong.

Somewhat less sensitive is the regulation of information technology. Yet this technology too presents questions about values concerning which people may have strong differences of opinion. To outsiders, Americans seem to imbibe First Amendment values with their mother’s milk. United States lawyers sometimes have to be reminded that their balance between free speech and other human rights is viewed in most of the world as extreme and disproportionate.

E. Recognise Different Cultures

Most of the participants in the conference came from the developed world. They therefore reflected general attitudes of optimism and confidence about the outcome of rational dialogue and the capacity of human beings ultimately to arrive at reasonable responses to regulating technologies, on the basis of calm debate.

This is not, however, universally true. The Easter conference in London coincided with a declaration by the Roman Catholic Archbishop of Birmingham, the Most Rev Vincent Nichols, that Britain was facing a period of secular revulsion. This response was attributed to impatience with the instances of violence attributed to religious beliefs and the apparent obsession of some Christian churches with issues of sexuality and gender.

There is no doubt that the current age bears witness to many instances of religious fundamentalism. Modern secular democracies can usually prepare their regulations of technology without undue attention to such extremist considerations.
But when the considerations come before international law-makers, they may have to run the gauntlet of fundamental beliefs. Such religious beliefs are by no means confined to Islam. They also exist in Christianity, Judaism, Hinduism and other world religions. Because, in such instances, religious instruction is attributed to God and derived from human understandings of inerrant religious texts, it may brook no debate and no compromise.

Recognising the coincidence of galloping technology and the force of religious fundamentalism is necessary to an understanding of what can be done in different countries to respond effectively to aspects of technology that challenge orthodox religious beliefs. In the Australian Parliamentary Debates on the amendment of the 2002 moratorium on human cloning and use of embryonic tissue, many of the legislators addressed the extent to which it was legitimate, in a pluralistic society, to allow beliefs, even of a majority, to control the design of national legal regulation. Yet if such beliefs are treated as irrelevant, what other foundations can be provided for a coherent system of moral principle?

In some societies such issues simply do not arise. The Taliban in Afghanistan would not entertain an open debate on topics treated as decided by a holy text. The diversity of regulatory responses to new technology therefore grows out of the different starting points in each society.

F. Basing Regulation on Good Science

In the early days of the HIV pandemic, I served on the Global Commission on AIDS of the World Health Organisation. One of the members, June Osborn, then a professor of public health in the University of Michigan, taught the importance of basing all regulatory responses to the epidemic upon good science. The danger of responses based on assumptions, religious dogmas, intuitive beliefs or popular opinion was that they would not address the target of regulation effectively.
The intervening decades have suggested that the countries that have been most successful in responding to HIV/AIDS have been those that have observed June Osborn’s dictum.51 The same is true of the subjects of biotechnology, information technology and neuroscience examined in the TELOS conference. All too often, science and technology shatter earlier assumptions and intuitions.

For example, the long-held judicial assumption that jurors, and judges themselves, may safely rest conclusions concerning the truth of witness testimony on the basis of the appearance of witnesses and courtroom demeanour has gradually evaporated because scientific experiments shatter this illusion.52 One day, by subjecting witnesses to brain scans, it may be possible to demonstrate objectively the truthfulness or falsity of their evidence. However, one lesson of the paper of Professor Judy Illes of the Stanford Center for Biomedical Ethics is that we have not yet reached that position. If, and when, it arrives, other issues will doubtless be presented for regulators. We are not there yet. But any regulation must recognise the need to remain abreast of scientific knowledge and technological advances.

G. Addressing the Democratic Deficit

This brings me to the last, and most pervasive, of the lessons of the TELOS conference. Technology races ahead. Earlier innovations quickly become out of date. Laws addressed to a particular technology are overtaken and rendered irrelevant or even obstructive. Nowadays scientific knowledge, technological inventions and community values change radically in a very short space of time.

51 D Plummer and L Irwin, ‘Grassroots Activities, National Initiatives and HIV Prevention: Clues to Explain Australia’s Dramatic Early Success in Controlling the HIV Epidemic’ (2006) 17 International Journal of STD and AIDS 1.
52 See eg Fox v Percy (2003) 214 CLR 118 at 129 [31]; [2003] HCA 22.
Within less than two years, demands were made for reversal of the Australian federal prohibition on therapeutic cloning. Within five years, the prohibition was repealed. In such an environment, there is an obvious danger for the rule of law.

It is impossible to expect of legislatures, with their many responsibilities, that they will address all of the technological developments for regulatory purposes. The average legislator finds such issues complex and impenetrable. They are rarely political vote-winners. They struggle to find a place in the entertainment and personality politics of the present age as well as with the many other competing questions awaiting political decision-making. This leaves a gap in democratic involvement in this sphere of regulation. It is a gap that is being filled, in part, by ‘Code’, which incorporates regulations designed by the inventors of information systems themselves in the structure of such systems, but without a democratic input or the necessity of individual moral judgment.

The democratic deficit presented by contemporary technology is thus the largest potential lesson from the TELOS conference. In an age when technology is so important to society, yet so complex and fast moving that it often defies lay understanding, how do we adapt our accountable law-making institutions to keep pace with such changes? One means, ventured in Australia, is the use of consultative mechanisms such as the ALRC53 or independent inquiries, such as the Lockhart committee.54 In such cases, the very process of consultation and public debate promotes a broad community understanding of the issues, an appreciation of different viewpoints and an acceptance of any regulations adopted, even when they may give effect to conclusions different from one’s own.

Adapting the legislative timetable and machinery to the challenges of modern governance is a subject that has engaged law reform bodies and executive government for decades.
In Australia, proposals for some form of delegated legislation have been made to increase the implementation of such reports. Often they lie fallow for years, or indefinitely, not because of any real objections to their proposals but because of the legislative logjam.55 In the United Kingdom, suggestions for a fast track system for implementing reports of the Law Commissions have been under review for some time.56

53 D Chalmers, ‘Science, Medicine and Health in the Work of the Australian Law Reform Commission’ in D Weisbrot and B Opeskin, The Promise of Law Reform (Federation Press, 2005) 374. Important recent reports of the ALRC in the field have included Essentially Yours: The Regulation of Human Genetic Information in Australia, ALRC 96 (2003).
54 D Cooper, ‘The Lockhart Review: Where Now for Australia?’ (2006) 14 Journal of Law and Medicine 27; N Stobbs, ‘Lockhart Review into Human Cloning and Research Involving Human Embryo—Closing the Gap’ (2006) 26 Queensland Lawyer 247; I Karpin, ‘The Uncanny Embryos: Legal Limits to Human Reproduction without Women’ (2006) 28 Sydney Law Review 599.
55 AF Mason, ‘Law Reform in Australia’ (1971) 4 Federal Law Review 197.
56 See MD Kirby, ‘Law reform and human rights—Scarman’s great legacy’ (2006) 26 Legal Studies 449–474 at 466.

In the face of radically changing technologies and the danger of a growing democratic deficit, it will obviously be necessary to adapt and supplement the lawmaking processes we have hitherto followed in most countries. Various forms of delegated legislation may need to be considered.
So may the enactment of over-arching laws, expressed in general terms, which will not be quickly reduced to irrelevancy by further technological change.57 Addressing the weaknesses in democratic accountability of large and complex modern government is an important challenge to legal and political theory.58 The TELOS conference demonstrated once again the ingredients and the urgency of the problem. It will take more conferences to provide the solutions appropriate to the differing systems of government operating in different countries.

57 Issues considered in Quintavalle v Human Fertilisation and Embryology Authority [2005] UKHL 28 at [25]; cf R Brownsword, ‘Interpretive Re-Connection, the Reproductive Revolution and the Rule of Law’ unpublished, 20 f.
58 Ibid.

INDEX

abortion see pregnancy termination
Acheson, Dean 367
active dot matrix 301, 302
activism 194–5 see also optimal mix
  critique of 195–7
ALRC see Australian Law Reform Commission
Alzheimer’s disease 244–5
ambient intelligence (Ami) 172–3
  normative impact 176, 189
  regulatory proposals 269–70
  vision 187–8
ambient law 89, 173, 176, 185–8, 189
  digitalisation and 185–7
  legal protection and 187–8
animals, design-based approaches 83–4
Ashby, WR 290
Asscher, L 164–5
Australian Law Reform Commission (ALRC)
  biotechnology, regulation 368–70
  information technology, regulation 370–3
  postponement of lawmaking 373
automatic enforcement 110–11
  definition 115
  feedback loops 117
  overblocking 115–17
autonomy principle 246–7, 257
autopoiesis 282, 309–12
  communications and 310–11
  definition 309n
  model 309–10
  system dynamics 312–14
Azureus 150
Baldwin, R 288, 291
Barlow, JP 295
behavioural change
  encouragement 85–6, 87
  harm-generating impact 86
Benkler, Y 298, 299
Bennett, C 281
Bentham, Jeremy 114
Better Regulation Task Force 36
bio-engineering, design of humans and 84–5
bio-ethical triangle 234–5
biological organisms, design-based approaches 82–5
  animals 83–4
  example 82–3
  humans 84
  plants 83
bionics, design of humans and 85
biotechnology see also human fetal brain tissue, transplantation; infertility regulation 368–70, 383, 384 syndrome 350–1 Black, J 91, 228, 297 Boyle, J 110–11, 141 brain imaging 317–25 accuracy 318 analytic approaches 321 application 318 background 317 conclusion 325 context/goals 321–2 development 317–18 functional progress 319–20 limitations 319 media coverage 322–3 premature adoption 323–4 regulation 324 standards/quality control 321 trends 318–19 Brave New World (Huxley) 40 British Telecom, Cleanfeed project 112, 121 Brodeur, J-P 270–1 Brownsword, R 96, 97–102, 123, 165, 169, 351 dimensions of regulation 193 reproductive precaution and 233–4 Cameron, N 335 Cave, M 288, 291 CBD (Convention on Biodiversity 2002) 349–50 censorware 112 Centre for Design against Crime (Central St Martin’s College of Art) 61 certainty, legal 201–2 Chicago Schools 265, 266, 278 child pornography 137 China Internet regulators 130–1, 133–4, 140 renewable energy 357–9 Chorea-Huntingdon 244–5 390 Index civil liberties/human rights legislation 57, 57–9 infringement justification 58 as last resort 58 law of evidence and 58 limitations 57–8 surveillant technologies and 58–9 Clarkson, J 294–6, 303 Cleanfeed (BT) project 112, 121 code as law 114 see also normative technology concept 158 values 160–1 code-based enforcement mechanisms 130 coercion 56, 68–70 low intensity sound 68–9 non-lethal techniques 69–70 physical 68 common good, in transplants law 252–6 common heritage doctrine 352–5 community of rights 40, 46, 47–8 complexity cyberspace and 289–91, 294–6 regulation and 288–92 conduct/decision rules, collapse 151–2 conflict avoidance 102–3 contingent appliances 132–3 Convention on Biodiversity (CBD) 1992 349–50 convergent technologies 334 copyright enforcement/protections 116–17, 138–9, 142–3, 147–51 corporate social responsibility 348 courtroom use of technology 56, 66–8 due process safeguards 67–8 implications 66 juries and 67 validation issues 66–7 CPTED (crime prevention through 
environ- mental design) approach 81–2 crime control 51–78 authoritarian populism 52–3 background 51–2 coercion see coercion conclusion 78 defensive applications 60–1 descriptive research 71–2 deterrence issues 70, 76 evaluative research 72–4 force enabler technology 53–4 ICT use 54, 59–60 investigation see crime investigation legal context see civil liberties/human rights legislation legal regulatory research 74–7 normative research 77–8 probative applications see courtroom use of technology punishment see punishment research 71–8 risk management/actuarial approach 52 surveillance see surveillance typology 54–6, 55 Table unintended consequences 74 crime investigation 56, 63–6 advances 63 legalisation/regulation 63–4 neuroscience/psychology applications 64, 65–6 search, statutory definition 64 surveillance/investigation boundaries 64, 75–6 crime prevention 60–1 through environmental design (CPTED) approach 81–2 cyber-state, post-regulatory 297–8 regulation 302–9, 314–15 cyberlibertarianism 295–6 post-regulatory 296–302 cybernetics, regulation and 289–91, 294–6 see also post-regulatory state cyberpaternalism 297–8 Dan-Cohen, M 151 data protection, ambient intelligence and 188 De Hert, P 188 democratic values/deficit 95–7, 379–81, 386–8 design see design-based instruments; regulatory design design-based instruments 105–7 architectural design v rules 90–3, 106 authenticity issues 102–5 background 79–80 as closed systems 93–4 conflict avoidance 102–3 control and 114 democratic values and 95–7 design failure and 90–1, 95 effectiveness assurance 106 environmental approach 81–2 error correction 93–5 evaluation issues 88–90 false positives/false negatives errors 95 features 80–1 feedback 93–5 filtering technologies/systems 87–8, 157 modalities 85–8 moral choices and 97–102, 106–7 products/processes 82 rules v architectural design 90–3, 106 target hardening 103–5 taxonomy 81–7 determinism/indeterminism 292–4 Dewey, J 176 Diebold Electronics 116–17 Digital Millennium 
Copyright Act 1998 (US) 147–9 digital rights management (DRM) systems 130, 157 Index 391 digitalisation ambient law and 185–7 legal protection and 187–8 in lifeworld transition 183–4 dignitarianism 235, 239–40, 254–6 discursive spheres 202–4, 210 DNA testing collection/retention 4–5, 59–60 social impact 73–4 Dorbeck-Jung, B 166 dot community 301, 302 Drexler, KE 335, 337, 346 DRM (digital rights management) systems 130, 157 drug trace evidence 67 Dupuy, J-P 343, 344, 345 Dworkin, R 208–9 Easterbrook, F 298 EchoStar DVR 127–9, 132, 134, 149–50 EEB (European Environmental Bureau) 235–6 Einstein, A 293 electronic tagging/monitoring 70, 78 environmental design 81–2 environmental precaution 224–7, 241 definition 224 in EU/WTO law 225 reasons 225–6 reservations 226–7, 230–1 scope 224–5 European Environmental Bureau (EEB) 235–6 facial mapping 67 false positives/false negatives errors 95 feedback loops 117 fetus see human fetal brain tissue, transplanta- tion; pregnancy termination, and imminent tissue removal filtering technologies/systems 87–8, 157 Finkelstein, S 113 Fish, S 209 flux, internal/external 289 fMRI (functional magnetic resonance imaging) 318, 320 Forrester, J 312–13 France Conseil d’État 200, 201–2, 203, 204, 205, 211, 214 Cour de cassation 202–4, 210 Fukuyama, F 25, 165 Fuller, L 291 functional magnetic resonance imaging (fMRI) 318, 320 Funtowicz, SO 222 Galligan, DJ 65 gamete intrafallopian transfer (GIFT) 238 Gardener’s Dilemma 289–90, 292, 301 generative/non-generative appliances 131–2, 141, 152–5 genetic manipulation/modification (GM) animals 83–4 design of humans and 84–5 gigaideology 330 GM see genetic manipulation/modification Goldsmith, J 297 Good Regulator Theorem 290–1 Google Maps 153 Video 154 Google.cn 140 Greely, H 324 Greene, B 294 Greenfield, S 355 Grinbaum, A 343, 344, 345 Guston, D 344 Gutwirth, S 188 Harlow, C 288–9 harm principle, violation 256–7 harm-generating behaviour changing impact 86, 87 prevention 86–7 Harris, J 222 Hawkings, S 
294 Heisenberg Uncertainty Principle 293, 294 hesitation see legal hesitation Hildebrandt, M 89 HMG (Law on Medication (Heilmittelgesetz)) (Switzerland) 258 Hood, C, The Tools of Government 263, 264, 273–80, 281, 282–3 application canons 277–80 background 273–5 basic tools 273–4 government/governance emphasis 274–5 nodality 273, 274, 276–7 selection/combination 275–7 House of Commons Science and Technology Committee, Human Reproductive Technologies and the Law 237–9, 240 human cloning 235 human enhancements 221–2 Human Fertilisation and Embryology Authority (HFEA) 34–5 human fetal brain tissue, transplantation 243–60 autonomy principle 246–7, 257, 258 background 243–4 common good and 252–6 comparative law 244 conclusion 259–60 doing no harm principle, violation 256–7 ethical debate 246–56 pregnancy effect see pregnancy termination, and imminent tissue removal procedure overview 244–5 tissue removal issues 247–9 tissue transplant issues 256–9 TPG see Switzerland, Federal Law on the Transplantation of Organs, Tissues and Cells TPV see Switzerland, Federal Ordinance on the Transplantation of Human Organs, Tissues and Cells Human Tissue Authority (HTA) 35 humans, design-based approaches 84 Huxley, Aldous, Brave New World 40 Iacobucci, E 36–7, 165 infertility in vitro fertilisation (IVF) 370, 373 legal libertarianism and 376–7 information and communication technology (ICT) crime control and 54, 59–60 regulation 370–3 Internet Corporation for Assigned Names and Numbers (ICANN) 281–2 impact 303–5, 306, 308–9 internet filtering see also filtering technologies/systems; tethered information appliances accountability 111 automatic see automatic enforcement background 109–12 conclusions 124 examples 109–10 features 110–11 governance modalities 113–15 intermediaries’ role 120–2 legitimacy 111 moral choice and 122–3 opaqueness see opaque systems rhetoric 111, 112–13 Internet Watch Foundation (IWF) 121–2 investigation, crime see crime investigation IVF (in vitro)
fertilisation 370, 373 Jill Dando Institute of Crime Science (UCL) 61 Johnson, D 295 Joy, Bill 335, 336 Kantian approach 40 Kesan, JP 164 Kooiman, J 281–2 Koops, BJ 166–7 Kranzberg, M 177 Kreimer, S 122 Kroto, H 329 Laplace’s Demon 292–3, 294 Lasser, M 202–4, 209–10 Latour, B 200–2, 204–5, 206–9, 211 law emerging technologies and 185–7 technological articulation 180–5 Law on Renewable Energy Resources of China 2006 background 357 technology covered 358–9 legal certainty 201–2 legal hesitation certainty and 201–2 discursive spheres 202–4, 210 imputation and 200–1, 206–7 legal features 204 principled detachment 204–6 legal normativity 178–9 technological normativity and 179–80 legal opacity/transparency tools 188 legal practice activism see activism comparison of systems 209–11 constraints, obligations/requirements 197–8, 199, 200, 201 constructivist/performative features 207–8 hesitation see legal hesitation humility 212–14 right answer thesis 209 successive chapters analogy 208–9 superficiality 206–8 legal tradition, transition 184–5 hand-written to printed script 185 oral to written 184 Lessig, L 79, 81, 82, 95–6, 110, 193–4 code as law 114, 129, 130, 158–9, 178, 298, 374, 377–9, 381 layers model 299 on normative technology 163, 175–6 optimal mix see optimal mix privacy protection model 267–9 regulation model 263–6, 278, 279, 280, 282–3, 292 spam regulation model 269–70 letterisation in legal tradition transition 184–5 in lifeworld transition 182–4 Lévy, P 182, 183, 186, 189 Lex Informatica 158, 296, 298 lie detection 318, 320, 324 lifeworld transition 180–4 hand-written to printed script 182–3 letterisation to digitalisation 182–4 orality to script 180–2 Linke, D 248 Lisbon earthquake 175 Loughlin, M 288–9 Luhmann, N 309–10 Margetts, H 264, 276, 277, 281 Marx, G 62 Maturana, H 309 Maxwell’s Demon 336 Microsoft v Commission 212–14 mobile phones, surveillance and 136 mobile trace detection 62 moral choices/issues design-based instruments and 97–102,
106–7 internet filtering 122–3 technology as regulatory tool 39–43 Mosquito, The 68–9, 70, 100 Murray, A 277, 281–2, 283, 292, 303 nanotechnology 327–54 abuse 338–40 accidents 337–8 background 327–8 benefit-sharing issues 348–52 common heritage and 352–5 definitions 328–31 global regulation 349–50 governance regime 346–8 nanodivide 349, 353–4 NBIC (nano, bio, info, cogno) technologies 334 regulation 340–1 revolutionary nature 331 scientific criteria 332–3 self-regulation 346–8 self-replication issue 335–7 social criteria 333–4 uncertainty issues 341–4 vigilance model 344–6 National Strategy for Police Information Systems (NSPIS) 59 NBIC (nano, bio, info, cogno) technologies 334 NECTAR (Network of European CNS Transplantation and Restoration) 255–6, 259 Netherlands, Minister for Health (Christian Democrat) 239–40 network blocks 130–1 Network of European CNS Transplantation and Restoration (NECTAR) 255–6, 259 network power 300–2 neurotechnology see brain imaging New Chicago School 265, 266, 278 Neyroud, Peter 59 nodality 273, 274, 276–7 normative technology authors on 163–7 background 157–9 conclusion 170–1 criteria for acceptability 162–3 see also systematic criteria below authors on 163–7 democratic/constitutional legitimacy 161, 166, 167 importance 157–8 intentional use 159 legitimacy 161, 166, 167 publicly/privately embedded rules 161–2 reasons for assessment 160–2 research questions/agenda 159–60, 171–3 systematic criteria 164–5, 167–70 application 169–70 hierarchy 169 level of abstraction 168–9 overview 167–9 obesity reduction 80–1, 82–3 OECD, privacy guidelines 371–2 Ogus, A 288–9 O’Neill, O 38 OnStar systems 135, 140 opaque systems 110–11, 117–20 awareness 117–18 commercial imperatives 118–19 deception layers 118 opacity/transparency tools 188 transparency and 119 Open Systems Interconnection Basic Reference Model (Benkler) 298 layers and 299 open-loop modelling 312–14 optimal mix see also activism concept 194–5, 206, 300–1 critique of 195–7, 212, 214–15 
orality to script 180–2 to written legal tradition 184 Osborn, J 386 overblocking 115–17 Panopticon 114 Parkinson’s disease 243, 244–5 Pease, K 60–1 perfect law enforcement 133–56 checks on government and 145–7 code as law and 133 conduct/decision rules, collapse 151–2 evaluation 136–7 and generativity 152–6 mistakes, amplification/lock-in 141–5 preemption 133–4, 142 prior constraints 143–4 rule of law absent 139–41 specific injunction 134, 143–4 substantive law, objections to 137–9 surveillance 134–6, 144–5 tolerated uses and 147–51 pgd (preimplantation genetic diagnosis) 222, 238, 239–40 plants, design-based approaches 83 PlayMedia v AOL 129, 134 Police National Computer (PNC) 59 Police National Database (PND) 59–60 polycentric governance 161–2 polycentric web 291 post-regulatory state 287–8, 296–302 concept 297 cyber-state and see cyber-state, post-regulatory environment and 299–300 intervention models 298–9 network power 300–2 practice see legal practice precaution 221–3, 227–31 deliberative 228–30, 232 enabling see reproductive precaution environmental see environmental precaution fact-finding 227–8, 232 issues 221, 222–3 nanotechnology and 341–4 preemption 133–4, 142 pregnancy termination, and imminent tissue removal 246–9 bodily integrity/autonomy and 246–7 fetus dignity after abortion 254–6 fetus protection during pregnancy 253–4 instrumentalising effect 247 progenitor’s interests 252 woman’s interests 250–2 preimplantation genetic diagnosis (pgd) 222, 238, 239–40 printing press communication and 368 transition of legal tradition and 185 privacy enhancing technologies (PETs) 157 principles/guidelines 371–3 procedural 139–40 protection model 267–9 privately embedded rules 161–2 proactionary principle 238 Prosser, Tony 340 psychopharmacology, design of humans and 84 publicly embedded rules 161–2 punishment populist punitiveness 52–3 use of technology 70 quantum metrology 329–30 theories 293–4 Raab, C 281 RATE
(Regulatory Authority for Tissue and Embryos) 35 Rawlings, R 288–9 real-time technology assessment 344 regulating technologies 23–48, 373–88 see also regulation of technology; technology as regulatory tool agenda 24–5 background 23–6 cultural differences 385–6 democratic deficit 379–81, 386–8 differentiation 385 experts, lack of 373–5 focus 3–4, 23–4, 47–8 global dimension 382 inaction as option 375–7, 382–3 limited powers 383–4 new technologies and 25–6 scientific basis 383 topic neglect 381 regulation 193–4, 214–16 complexity 288–92 definition 288–9 determinism/indeterminism 292–4 evolution 289–91 regulation by design see design-based instruments regulation of technology 26–34 see also regulating technologies; technology as regulatory tool advance measures 28–30 authors on 166–7 connection 26 economy/effectiveness 27–8 legitimacy 32–4 regime alternatives 27 space 30–2 Regulatory Authority for Tissue and Embryos (RATE) 35 regulatory design 34–8 background 34–6 institutional sets 37–8 operational practice of regulators 36 oppositional desiderata 36–7 plurality problems 37 trustworthiness and 38 regulatory system, definition 90–1 Reidenberg, J 129, 158–9, 281, 298 on normative technology 163–4 renewable energy China 357–9, 362–3 mandatory purchase system 360–1 quota system 359–60 systems comparison 361–2 reproductive precaution 221–42 arbitrary use 233–4 background 221–3 dignitarian 235, 239–40 enabling precaution 230–1, 232–40 human rights and 234–5, 238–9 issues 232–3 principles of precaution see precaution reservations 231–2 utilitarian 235–6, 237–9 Requisite Variety, Law of 290 Reynolds, GH 332 Ricoeur, P 180–1 Ridley, M 47 Rio Declaration on the Environment 1992 224, 341 Rishikoff, H 319, 325 risk issues 223, 227–8 see also precaution crime control 52 Roco, M 337 Rosenfeld, M 210 Rotenberg, M 268, 270 Rothstein, M 4 Sarewitz, D 344 Schomberg, R von 228 Schrage, M 319, 325 Schummer, J 331 Schwartz, P 268–9, 270 Scott, C 292, 297, 298 script,
hand-written to printed 182–3 self-enforcing systems 110–11 self-regulation criteria 166 Selznick, P 288–9 Shah, RC 164 Shklar, J 175 Skype 140 Smalley, R 335, 336 Smith, DJ 99 socio-technical-legal theory (STL) 300 spam regulation model 269–70 specific injunction 134, 143–4 stem cells 245 Stengers, I 197, 206, 209, 211 STL (socio-technical-legal theory) 300 Strahilevitz, L 147 Stuntz, W 139 Sunstein, C 342 surgery, design of humans and 84 surveillance 56, 61–3 data storage/analysis 63 deterrence issues 76 human rights and 58–9 new 62–3 operation 61–2 perfect law enforcement and 134–6, 144–5 public space, regulation 63 sustainability movement 354 Swire, P 110–11 Switzerland Federal Civil Code 251–3, 255 Federal Law on the Transplantation of Organs, Tissues and Cells (Transplantationsgesetz) (TPG) 244, 249–56, 257–60 Federal Ordinance on the Transplantation of Human Organs, Tissues and Cells (Transplantationverordnung) (TPV) 251, 257–8 fetal transplantation, ethical debate in 249–56 Law on Medication (Heilmittelgesetz) (HMG) 258 symbiotic regulation 307–9 system dynamics 312–14 systems theory 289–91 Talbott, M 4 technical protection measures 130 technological articulation of law 180–5 technological hybrids 127–9 technological normativity 176, 189 concept 177–8 legal normativity and 179–80 technological strip search/property search 62 technology, definition 51 technology regulation see regulation of technology technology as regulatory tool 38–47 see also design-based instruments; regulating technologies; regulation of technology background 38–9 design-out/design-in strategies 42–3 feasibility 43–5 moral issues 39–43 state stewardship 45–7 TELOS (Technology, Ethics and Law in Society), research centre for study of 3–4, 368 tethered information appliances 125–56 background 125–7 contingent appliances 132–3 generative/non-generative appliances 131–2, 141, 152–5 PC/Internet imperfections 125–6 perfect enforcement and see perfect law enforcement regulability and 129–33 
technological hybrids and 127–9 trusted systems 130 Thatcher, Mark 291 tissue removal/transplantation see human fetal brain tissue, transplantation TiVo v EchoStar 127–9, 132, 134, 135 Toffler, A 74 tolerated uses 147–51 tools/instruments see also Hood, C, The Tools of Government; Lessig, L, regulation model actors in process 280–2 analysis 263–4, 282–3 comparative criteria 271–2 interactions 272–3 interdependencies 270–3 technology and see technology as regulatory tool TPG (Federal Law on the Transplantation of Organs, Tissues and Cells (Transplantationsgesetz)) (Switzerland) 244, 249–56, 257–60 TPV (Federal Ordinance on the Transplantation of Human Organs, Tissues and Cells (Transplantationverordnung)) (Switzerland) 251, 257–8 transitions see legal tradition, transition; lifeworld transition transplantation see also human fetal brain tissue, transplantation regulation 368–70 Transplantationsgesetz, Federal Law on the Transplantation of Organs, Tissues and Cells (TPG) (Switzerland) 244, 249–56, 257–60 Transplantationverordnung, Federal Ordinance on the Transplantation of Human Organs, Tissues and Cells (TPV) (Switzerland) 251, 257–8 Trebilcock, M 36–7, 165 Trefil, J 292 trusted systems 130 Universal Declaration of Bioethics and Human Rights 343–4 US Constitution First Amendment 141–2, 377–9, 383–4 Second Amendment 145 Fourth/Fifth Amendment 139 US, renewable energy quota system 360 US Supreme Court 205–6, 210 van den Daele, W 226 Varela, F 309 video cassette recorders (VCRs) 282 regulation model 307–9 vigilance model 344–6 Villeneuve, N 115–16, 119 Voice Risk Analysis 65 von Bertalanffy, L 289, 291 Watson, J 356 Web 2.0 126, 152–6 wind power 358–9 Wood, S 330 World Summit on the Information Society (WSIS) 305–6 Wu, T 147 Zittrain, J 123

