“Masters of intelligence”? – Visions of the Future (1)

December 19, 2007

In this, the first of three episodes, the BBC4 mini-series Visions of the Future examines how some of the scientific advances of the 20th and early-21st century may shape our future. Specifically, presenter Michio Kaku – professor of physics and co-creator of string field theory – posits that we are on the brink of an “historic transition from the age of scientific discovery to the age of scientific mastery” (00:01:20). He suggests that having “created artificial intelligence”, “unravelled the molecule of life” and “unlocked the secrets of matter” (all 00:01:03), the science of the future will be concerned with more than the mere observation of nature. It will be concerned with its mastery.

 

Thus, while the individual programmes each explore human mastery of one of three key areas (intelligence, DNA and matter), the series as a whole maintains a consistent theme: that though this mastery offers us “unparalleled freedom and opportunities” (00:57:47), it also presents us with “profound challenges and choices” (00:01:46). Kaku refers to the “key social issues” that will be raised by future science and technology as topics we must “start to address today” (00:57:59).

In the first episode Kaku introduces a number of developments stemming from “ubiquitous computing” (00:06:19), many of which intersect with relatively new areas of debate in bioethics. Ubiquitous computing – or ubiquitous technology – is the idea that powerful computer microchips will soon be everywhere: such a taken-for-granted feature of every product we use or buy that they become largely unnoticed and invisible. While obvious applications include “intelligent” cars and roads, health care monitoring technologies might also become commonplace. For example, Kaku suggests that “wearable computers” (00:07:40) in our clothes will monitor our health from the outside, and that by swallowing an aspirin-sized pill with “the power of a PC and a video camera” (00:08:45) the health of our internal organs might also be continuously assessed.
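
The programme stays at the level of description, but the kind of continuous monitoring Kaku envisages is easy to caricature in code. The sketch below is purely illustrative and assumes nothing about any real device: a hypothetical wearable streams heart-rate readings, and the software simply flags values outside a crude resting range. All names, thresholds and sample data are invented for the example.

```python
# Purely illustrative sketch of 'ubiquitous' health monitoring: a hypothetical
# wearable streams heart-rate readings and we flag anything outside a crude
# resting range. The Reading class, thresholds and sample data are invented.
from dataclasses import dataclass
from typing import Iterable, List

@dataclass
class Reading:
    timestamp_s: float     # seconds since monitoring began
    heart_rate_bpm: float  # value reported by the (hypothetical) sensor

def flag_anomalies(readings: Iterable[Reading],
                   low: float = 40.0, high: float = 120.0) -> List[Reading]:
    """Return the readings that fall outside the assumed 'normal' range."""
    return [r for r in readings if not (low <= r.heart_rate_bpm <= high)]

if __name__ == "__main__":
    stream = [Reading(0.0, 72), Reading(60.0, 75), Reading(120.0, 190), Reading(180.0, 71)]
    for r in flag_anomalies(stream):
        print(f"t={r.timestamp_s:.0f}s: abnormal heart rate {r.heart_rate_bpm} bpm")
```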

However, as interviewee Susan Greenfield notes, the biggest changes may come when “ubiquitous technology converges with … the internet” (00:09:11) – changes which “raise some rather disturbing questions” (00:18:00). These focus on issues of identity (loss of identity, multiple identities), the preference for virtual social networks over ‘real’ ones, and the impact upon family life. As Greenfield further comments, current experience with virtual reality worlds like Second Life, and with online gaming, suggests that changes are already taking place in these areas.

For Kaku, however, it is in AI (artificial intelligence) that “an evolutionary leap that will profoundly challenge the human condition” (00:22:08) is now taking place. While he does describe the types of monitoring technologies noted above as machine intelligences, for him the future lies in the move towards genuinely intelligent machines. It is these machines that raise a number of important questions for the relatively new bioethical area of robot ethics, including:

  • To what extent can machines really be regarded as intelligent? How does this compare to human intelligence? Will humans always be able to tell the difference between a human and an intelligent machine?
  • What types of relationships might humans have with machines, and what principles – ethical or otherwise – might these be based upon?
  • To what extent could (or should) the human form be mechanically enhanced? At what point, if any, would a mechanically enhanced human cease to be human and become machine?

These questions also intersect with long-standing debates in philosophy and other areas of ethics, and have been explored in popular science books and TV fiction (see the BioethicsBytes posts on Kevin Warwick’s I, Cyborg and the Cybermen episodes of BBC’s Doctor Who). For example, phenomenologists, epistemologists and AI experts have long debated whether machines will ever display “human level intelligence” (00:29:18) – including such social skills as “getting the joke” (00:37:52) – or whether they will be limited to merely mimicking some aspects of it. Kaku explores this question with commentators and AI researchers such as Ray Kurzweil and Rosalind Picard, and focuses on emotion, which he suggests is “critical for higher intelligence” (00:36:58). Current work in ‘affective computing’ is directed towards developing robots with some such capacities, though as technology forecaster Paul Saffo notes, “you’ll know it’s not really intelligent” (00:35:51).

 

Similarly, questions around how we might relate to intelligent machines resonate with debates in animal ethics. Kaku notes the tendency to anthropomorphise robots that appear intelligent. He refers to his own Roomba robot, and says of the Japanese robot Asimo: “I know Asimo is a machine, but I find myself relating to it as if it were a real person” (00:32:33). This introduces one of the key issues in the new area of robot ethics: at what point might machines come to be seen as ‘persons’ rather than mere ‘things’, and – if this does occur – should they be granted robot rights? (see, for example, Sawyer, 2007. “Robot Ethics”. Science, Vol. 318, p. 1037). Extending this further, Visions of the Future considers what relationship we humans might have with machines whose intelligence greatly exceeds our own. This discussion is predicated on the possibility that intelligent machines might “outgrow human control” (00:40:15), and examines whether the resulting relationship would be one of harmony or conflict. Here the focus is not on how we will treat the machines of the future, but on how they might treat us.

However, as the final sections of this episode of Visions of the Future highlight, the distinction and opposition between the categories ‘human’ and ‘machine’ implied above may have limited relevance in the future. Alongside the drive to create intelligent machines, Kaku notes growing interest in the mechanical enhancement of human intelligence: “as machines become more like humans, humans may become more like machines” (00:43:36). Further, we are asked “precisely how many of our natural body parts could we replace with artificial ones before we begin to lose our sense of being human?” (00:55:27).

These concerns echo several of the dominant themes in posthumanism: the philosophical trend and cultural movement that both observes and advocates moving beyond a traditional – or classical – modern conception of the nature of humanity. In the form of transhumanism, this approach embraces the notion of the ‘upgraded’ human, the cyborg, as the next – inevitable – evolutionary step. In many ways, Visions of the Future functions to outline both the steps in the posthumanist argument and its ultimate endpoint. It highlights how technologies currently used for therapeutic purposes could be used to enhance various human capacities (the examples used here are mood, memory and intelligence), but also suggests that those who choose not to take part in this ‘revolution’ will find themselves severely disadvantaged. As Paul Saffo notes: “all revolutions have winners and losers, this revolution is no exception … the big losers are the people who say they don’t want to get involved. They are the ones who are going to discover that being a little bit out of touch will have some unpleasant consequences” (00:56:39).

Overall, this futuristic first episode of the Visions of the Future series sets a tone of expectation – both of the future and of the next two episodes. It is engaging and useful, both in its presentation of the science and in the questions it raises regarding the social and ethical implications of ‘the intelligence revolution’.

The first of three episodes of Visions of the Future was first broadcast on BBC4 on November 5th 2007 at 21:00 (TRILT identifier: 00741D95).


Artificial Evolution – Prey (Crichton, 2002)

November 19, 2007

(Warning: contains plot spoilers!) Published in 2002, Michael Crichton’s novel Prey explores some of his concerns around the convergence of “genetics, nanotechnology, and distributed intelligence” (p.525) within the contemporary market for new healthcare technologies. The story revolves around the creation of a “swarm” of nanobots designed to work together as an artificial imaging system for use inside the human body. These molecular-scale “cameras” are manufactured from organic biomolecules, which are assembled automatically. When injected into the body, the cameras’ “distributed agent” programming enables them to work together and coalesce into a miniature eye, such that the creator company, Xymos, can ‘see’ any organ within the human body. However, one camera swarm escapes the company’s fabrication plant and is allowed to roam free in the Nevada desert. Within days the swarm begins to evolve in ways its creators never imagined. Drawing on its programming and the sources of biomolecules present in the environment, the swarm multiplies, develops predatory capabilities, and eventually turns on its creators.
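
Crichton never spells out the swarm’s “distributed agent” code, but the underlying idea – coherent group behaviour emerging from simple local rules, with no central controller – can be illustrated with a minimal, boids-style sketch. Everything below (the rules, the constants, the Agent class) is a generic invention for illustration, not a reconstruction of anything in the novel.

```python
# Minimal illustration of 'distributed intelligence': each agent follows only
# simple local rules (drift towards distant neighbours, back away from close
# ones), yet the group as a whole coheres into a swarm. Purely generic code.
import random

class Agent:
    """One member of the swarm; it only ever looks at its neighbours."""
    def __init__(self):
        self.x = random.uniform(0, 100)
        self.y = random.uniform(0, 100)

    def step(self, neighbours, cohesion=0.01, separation=0.05):
        for other in neighbours:
            dx, dy = other.x - self.x, other.y - self.y
            dist = (dx * dx + dy * dy) ** 0.5 or 1e-9
            if dist > 5.0:                      # rule 1: move towards far neighbours
                self.x += cohesion * dx
                self.y += cohesion * dy
            else:                               # rule 2: keep a little distance
                self.x -= separation * dx / dist
                self.y -= separation * dy / dist

def simulate(n_agents=50, n_steps=200):
    swarm = [Agent() for _ in range(n_agents)]
    for _ in range(n_steps):
        for agent in swarm:
            agent.step([a for a in swarm if a is not agent])
    return swarm

if __name__ == "__main__":
    random.seed(1)
    swarm = simulate()
    cx = sum(a.x for a in swarm) / len(swarm)
    cy = sum(a.y for a in swarm) / len(swarm)
    spread = max(((a.x - cx) ** 2 + (a.y - cy) ** 2) ** 0.5 for a in swarm)
    print(f"swarm centre approx ({cx:.1f}, {cy:.1f}); furthest agent {spread:.1f} units away")
```

Despite the simplicity of the rules, the initially scattered agents settle into a compact cluster – the same flavour of emergent, decentralised behaviour (here entirely benign) that the novel pushes to its fictional extreme.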

[Cover image: Crichton (2002). Prey. HarperCollins, London.]

As in his more recent novel, Next, Crichton’s explicit intention is to highlight some of the more troubling aspects of contemporary biotechnological developments (this is expressed in a short introduction to Prey). While much of the action is quite fantastic, the technology is – as Crichton’s bibliography demonstrates – not unrealistic. At present, bioethical concerns associated with developments in nanobiotechnology, including artificial life forms capable of autonomy and evolution, rarely make the headlines. However, as these technologies become more widely discussed (see, for example, Science, 16 November 2007, Vol. 318, no. 5853, and Phoenix, 2003), this seems likely to change.

A number of bioethically-relevant themes arise in Prey. They include:

  • Issues around machine “mimicry” and enhancement (for example, see pages 85-88 concerning the appearance of Julia, a Xymos employee)
  • Non-rule based conceptions of intelligence, specifically distributed intelligence and self-organising behaviour in animals and machines (see pages 96-97 and 177-181)
  • The commercial environment within which new biotechnologies are developed, specifically in relation to the involvement of venture capital and the military (as on pages 174-176, for example)
  • Issues around our ability to control the technologies we create (the theme of “technology-out-of-control” in bioethical thought)

Overall, Crichton’s novel raises interesting questions about both the direction of contemporary research in nanobiotechnology and the environment within which it is conducted. However, as a basis for discussing the bioethical questions posed by this technology, the lack of short sections dealing with key themes (of the kind found in Next) and the narrative’s later focus on the ‘battle’ with the nanobot swarms unfortunately make it unwieldy as a teaching tool.

Prey was written by Michael Crichton, and published in the UK in 2002 by HarperCollins, London. ISBN: 9780007229734.


More than human? – ‘I, Cyborg’ (Warwick, 2002)

October 30, 2007

Kevin Warwick’s 2002 book I, Cyborg opens with the line “this book is all about me” (pg. vii). For the reader, this appears true in at least two senses. Firstly, its pages detail Warwick’s journey to become professor of cybernetics at Reading University, and explore the origins, ambitions and actualisation of his drive “to become a cyborg” (pg. 1). Secondly, it can also be read as an expression of his belief that science, in this case robotics, should be made accessible to the public “in a straightforward way” (pg. 189) – a sentiment that has led some to accuse him of deliberately courting media attention (see, for example, this article in Wired magazine published in 2000).

[Cover image: I, Cyborg]

While Warwick does address various bioethical issues implicit within his projects (most notably in terms of applying for ethics committee approval for experimental procedures, as on page 156), I, Cyborg’s primary bioethical utility is as an opportunity to examine in detail how one of the key scientific figures in the area of human-machine interaction sees the future of this technology. Is its use to “upgrade the human form” (pg. 1) a morally legitimate goal, or should cyborg technology be used only in the treatment of disease and disability?

In both bioethics and philosophy of medicine these two uses correspond to the distinction between ‘therapy’ and ‘enhancement’. In I, Cyborg, Warwick does effectively draw a distinction of this kind, particularly when describing his own “projects on two fronts” (pg. 40). However, in Chapter 8, where he catalogues some of the research that informed his second cyborg experiment, the ambiguity implicit in the therapy/enhancement distinction is exposed (this is discussed at length in the accompanying BioethicsBytes Extended Commentary that will shortly appear here).

However, while bioethical debates have centred on the use and validity of the therapy/enhancement distinction as a way of marking a moral boundary between ‘good’ and ‘bad’ research and intervention (for example, where breast reconstruction following mastectomy might be viewed as intrinsically ‘good’, breast enhancement for cosmetic purposes might be more morally questionable), this debate is largely neglected in Warwick’s book. His ‘post-’ or ‘trans-humanist’ orientation to the ethics of enhancement may be responsible for this, though it also provides a different perspective on the issue.

In this way, I, Cyborg provides a rich source of provocative quotes on the ethics and implications of the technological enhancement of humans. These would form a suitable basis for any discussion of this issue. Some key quotes include:

  • “humans will be able to evolve by harnessing the super-intelligence and extra abilities offered by the machines of the future, by joining with them. All this points to the development of a new human species, known in the science-fiction world as ‘cyborgs’.” (pg. 4)
  • “it doesn’t mean that everyone has to become a cyborg. If you are happy with your state as a human then so be it, you can remain as you are. But be warned – just as we humans split from our chimpanzee cousins years ago, so cyborgs will split from humans. Those who remain as humans are likely to become a sub-species. They will, effectively, be the chimpanzees of the future.” (pg. 4)
  • “My own definition of a cyborg is something that is part-animal, part-machine, and whose capabilities are extended beyond normal limits. … it allows for mental upgrades as well as physical upgrades and allows the extension to go beyond the normal limits of either the animal or the machine.” (pg. 61)
  • “As a result of the experiment, I received several communications from companies, government bodies, military and police forces about … what it might mean for the future. Would we as a society want implants like this to be generally available? Who would control the situation? The technology was now available, so such questions had to be raised, rather than just discussed as a mere futuristic concept that might never happen.” (pg. 89)

Finally, in Chapter 17 of I, Cyborg Warwick speculates on what a future populated by (superior) cyborgs and (inferior) humans might look like. What he describes is a global, networked society with deep divisions and huge potential for exploitation, discrimination and abuse. While this might also be said of our contemporary society, Warwick’s vision suggests that in the future the lines of division might be drawn in very different places and with different effects. Though the darker aspects of this chapter resonate with the sentiments of another of Warwick’s popular science books, In the Mind of the Machine (1997), and also reflect their author’s provocative style, this epilogue does raise an important question. As Warwick himself suggests: “this really is the crux of the whole moral and ethical dilemma. Using implants to help a person with a disability is one thing, but using them to upgrade a perfectly healthy individual is something else” (pg. 293). For posthumanists – and Warwick appears to be one – the “ultimate upgrade” (The Rise of the Cybermen, Doctor Who series 2, 2006. [TV]. BBC1, 13th May 2006. time in: 00:24:15) is something to be desired. However, for the rest of us, is Warwick’s future one we really want to inhabit?

I, Cyborg was written by Kevin Warwick, and published in the UK in 2002 by Century, London. ISBN: 0712669884.


“The ultimate upgrade” – Doctor Who & the Cybermen (parts 1 & 2)

September 20, 2007

In a two-part story concerning the Doctor’s encounter with the Cybermen, The Rise of the Cybermen and The Age of Steel rehearse a number of important bioethical issues regarding the feasibility and acceptability of “the ultimate upgrade” (00:24:15) – that is, the downloading and/or replicating of the characteristics and functions of the human brain into a machine.

[Image: John Lumic (The Rise of the Cybermen. BBC, 2006)]

In brief, The Rise of the Cybermen and The Age of Steel concern the efforts of John Lumic – a dying cybernetics genius in a parallel world – to prolong his life by downloading or replicating his consciousness in a mechanical body. This is described in terms of “a brain welded to an exoskeleton” (00:00:20). However, Lumic sees the Cybermen project not only as a way to circumvent the wheelchair we see him in and his imminent death, but also as the future of the human species – what he refers to as “our greatest step into cyberspace” (00:24:56). In order to secure this future Lumic unleashes the Cybermen on human society, where they go about suggesting that “upgrading is compulsory” (00:41:53) and that humans “are inferior and will be reborn as Cybermen” (00:45:01).

[Image: The Cybermen (The Rise of the Cybermen. BBC, 2006)]

As the story progresses, the slippage and ambiguity in the terms ‘treatment’ and ‘enhancement’ become obvious. In The Age of Steel it is noted that “this all started out as a way of prolonging life” (00:07:21), but that the project has now become one which “takes the living and turns them into…machines” (00:04:30). Though this issue of the mechanical enhancement of humans, including their effective replacement by super- or post-human cyborgs, is presented negatively in the action and dialogue that ensues, these episodes of Doctor Who do acknowledge the view that this type of extreme augmentation can be seen as the next step up the evolutionary ladder. Indeed, the Cybermen are referred to as a new species and describe themselves as “human point two” (The Rise of the Cybermen: 00:41:51).

While both episodes are interesting, thought-provoking and exciting, it is The Rise of the Cybermen that provides the best opportunity to explore and elaborate current themes in the bioethics of enhancement, including:

  • the distinction between treatment and enhancement of human beings by mechanical means
  • the boundary and difference between humans and machines
  • the idea and practical use of a hierarchy of ethical values in society
  • and the interaction between science and regulatory and political structures in technological decision-making

These issues are explored in detail in the BioethicsBytes Extended Commentary that will shortly be available to accompany this post.

The Rise of the Cybermen was first broadcast on BBC1 on May 13th 2006 at 19:00 (TRILT identifier: 0059521F), followed by The Age of Steel on BBC1 on May 20th 2006 at 18:35 (TRILT identifier: 00597007).


Cybernetics – The Farm Revealed (1)

June 12, 2007

There are a number of things about this programme that irritate me (but also some features that are worthy of note!). Firstly, the title of the series is more than a little misleading, and the confusion is compounded by the fact that Channel 4 transmitted the episodes in a different order relative to the pre-publicity (and thus the presenter Rufus Hound started this ‘first’ episode by referring back to the previous episodes on genetic modification and manipulation!). Added to this, the presentation style seemed terribly like ‘yoof TV’ of a bygone age.

The title The Farm Revealed has been chosen to tie in with another recent Channel 4 series, Animal Farm; some of the footage (and incidental music) is common to both programmes. This episode (originally scheduled for 15th June 2007, but actually transmitted on 11th June) doesn’t really have any connection to farming, ancient or modern. The focus instead is on the current and future use of cybernetics and prosthetics.

We are introduced initially to Richard Whitehead and Richard Hirons; the former is a marathon runner who has no legs and therefore uses sophisticated carbon-fibre replacements, the latter an engineer who develops these kinds of aids. They were then joined by Marc Woods, another client of Dr Hirons, who demonstrated a complex artificial leg which responds to changes in gradient and allows him to participate in mountain climbing.

Moving on from artificial limbs, the programme then started to consider ways in which brain activity alone can be used to control a remote robot. The demonstration did not go entirely as planned, but was sufficiently impressive to show that there are very real developments going on in this area.

Possibly the most interesting section, from a bioethical point of view, starts 11 minutes into the programme and features Prof Kevin Warwick from the University of Reading. He stands in a long tradition of medical researchers who use themselves as their own guinea pigs. At different stages of his research, Kevin has had a Radio Frequency Identification Device inserted into his arm (to investigate the security possibilities of such technology) and has also ‘mainframed’ his nervous system, connecting a two-way electronic signalling system from his brain to the internet via electrodes in his arm. There is some impressive footage of the experiments (starting 17 minutes into the programme). We see Prof Warwick control a series of household tasks chosen from an onscreen menu simply by closing and opening his left hand. He is also able to control a wheelchair and, most sensationally, to use thought alone to guide the movements of a robotic hand back in his home lab at Reading whilst he himself is in New York. Sensors in the fingers of the disembodied hand feed back information to him about how tight his grip is.
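
The programme does not go into the signal processing behind these demonstrations, so the sketch below is only a guess at the general shape of such a control scheme, not a description of Prof Warwick’s actual system: a burst in a simulated nerve/EMG signal stands in for a deliberate hand closure, a single burst steps through an on-screen menu, and two bursts in quick succession trigger the highlighted action. The signal model, threshold and menu items are all invented.

```python
# Toy sketch of event-driven menu control. A burst in a simulated signal stands
# in for a deliberate hand closure; one burst advances the highlighted menu item,
# two consecutive bursts 'execute' it. Everything here is invented for illustration.
import random

MENU = ["lights on", "lights off", "open blinds", "close blinds"]
THRESHOLD = 0.8  # amplitude above which we treat a sample as a 'hand closed' event

def read_signal_sample() -> float:
    """Stand-in for an electrode reading: mostly low-level noise, occasional burst."""
    return random.random() ** 4

def run(n_samples: int = 500) -> None:
    selected = 0
    armed = False  # True if the previous sample was also a burst
    for _ in range(n_samples):
        burst = read_signal_sample() > THRESHOLD
        if burst and armed:
            print(f"EXECUTE: {MENU[selected]}")
            armed = False
        elif burst:
            selected = (selected + 1) % len(MENU)
            print(f"highlight: {MENU[selected]}")
            armed = True
        else:
            armed = False

if __name__ == "__main__":
    random.seed(42)
    run()
```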

I have heard Prof Warwick speaking about this type of cybernetics on a previous occasion, and am therefore pleased that this programme offers the opportunity to obtain and use the same footage of the experiments that he referred to in his lecture. From an ethical standpoint, it raises interesting questions about the application of developments of this kind. Therapeutic uses, such as providing sonar abilities to help blind people avoid obstacles, or ways to control artificial limbs for amputees, are clear medical applications which, on the face of it, would not seem unreasonable. Yet there are potentially more sinister ways to employ the same technology, such as pilotless warplanes and other military uses. Indeed, Prof Warwick himself is the first to acknowledge that it is very difficult to draw a boundary between a therapeutic use for one person and an enhancement for somebody else.

How the outworkings of new technologies are regulated is an old, but crucial, question. Do you ban ‘good’ uses for fear of the misuse of the same procedures by somebody else? Do you take an ‘anything goes’ approach because you cannot arbitrate between uses? Or do you try to find some way to distinguish between acceptable and unacceptable applications? There are no easy answers, of course, but I think that some measure of regulation is always going to be necessary. The possibility that some maverick somewhere may misuse innovations made initially for good reasons cannot be used to fuel an abdication of responsibility.