The Machine in the Ghost?


Life-like machines are beginning to appear.

During a panel discussion in the year 2000, John McCarthy, the computer science pioneer typically credited with having coined the term “artificial intelligence,” offered the following response to a prior speaker’s mention of “robots that walk and talk like people and have emotions”:

In my opinion, the robots need to be designed so that they will be regarded as appliances rather than as people.  We don’t want robots that people will hate or fall in love with or anything like that.  We have enough trouble treating other people decently without inventing a new oppressed minority.  (Harbron)

Honda Motor Company, it seems, did not share this opinion at the time.  In the same year, Honda introduced “ASIMO,” its first version of a humanoid robot that strongly resembled a short astronaut wearing a backpack.  Nearly anyone with access to television or the World Wide Web since the announcement has witnessed ASIMO climbing stairs, kicking a soccer ball, jogging, serving tables in a restaurant, opening the New York Stock Exchange, conducting the Detroit Symphony Orchestra, or any number of similar activities not previously delegated to machines.  Although much of ASIMO’s behavioral repertoire still depends upon instructions from an off-stage “Wizard of Oz” human operator, its capabilities for autonomous behavior have also steadily increased.  As one might expect, achieving autonomous life-like behavior in animal-like robots has been technically somewhat less challenging.  Thus, Sony Corporation was able, between 1999 and 2006, to market the robotic dog named “AIBO” – a popular animaloid representative of the growing population of life-like personal robots.  While ventures of this kind have become common in countries such as Japan and South Korea since 2000, some departures from John McCarthy’s advice have also come from universities and companies within the United States.  A remarkably life-like Albert Einstein – head and face incorporating a skin-like material called “Frubber,” developed by Texas-based Hanson Robotics – illustrates this additional and ultra-realistic approach of android robotics (Albert Einstein: The Android).  Indeed, whether by means of behavior or appearance (or both), robotic research and development in the opening years of this century has progressively delivered convincingly life-like animaloid, humanoid, and android robotic artifacts at an accelerating rate – pace John McCarthy.

The global economic difficulties that we are experiencing in 2009 could retard the growth of this phenomenon for a time, particularly in its commercial aspects.  Current Japanese work in the field, though, apparently remains fairly robust.  On March 31st of this year, for example, Honda Motor Company announced successful application of a brain-computer interface to ASIMO, allowing a person to control some of the robot’s movements, such as raising an arm, solely by thought (Honda, ATR and Shimadzu …).  The following day, a California-based blog added the logical suggestion “How about allowing a paralyzed person to walk again once we can extract their [sic] thoughts and re-route them to a pair of robotic legs” (Kleiner).  Four days later, in an online article bearing the remarkable title “Japan child robot mimicks [sic] infant learning,” Miwa Suzuki presented an overview of the impressive status of Japanese robotic research and development that included the following reports:

The creators of the Child-robot with Biomimetic Body, or CB2, say it’s slowly developing social skills by interacting with humans and watching their facial expressions, mimicking a mother-baby relationship. … [Osaka University professor Minoru] Asada’s project brings together robotics engineers, brain specialists, psychologists and other experts, and is supported by the state-funded Japan Science and Technology Agency. …  The professor, also a member of the Japanese Society of Baby Science, said his team has made progress on other fronts since first presenting CB2 to the world in 2007.  In the two years since then, he said, CB2 has taught itself how to walk with the aid of a human and can now move its body through a room quite smoothly, using 51 “muscles” driven by air pressure. … More than a decade since automaker Honda stunned the world with a walking humanoid P2, a forerunner to the popular ASIMO, robotics has come a long way.  Researchers across Japan have unveiled increasingly sophisticated robots with different functions – including a talking office receptionist, a security guard and even a primary school teacher. … “We aim to make a robot that elderly people can count on when living alone,” said Takashi Yoshimi, a senior research scientist at a Toshiba laboratory in Kawasaki city south of Tokyo. … (Suzuki)

Yoshimi’s reference to “a robot that elderly people can count on when living alone” reflects understandable concern for care of an aging Japanese population in the face of a lackluster birth rate.  The following report of Japanese robotics at this time of writing is representative of that nation’s sustained investment in developing humanoid robots to address the problem:

The combined efforts of the University of Tokyo with private sector partners and the Information and Robot Technology Research Technology Initiative have moved one-step closer to creating a robot capable of performing fine motor skills, balancing on one foot and lifting.  These skills are required in order for the robot to serve as a housekeeper / caregiver for the disabled and an aging population. … The humanoid robot Twenty-One developed by Waseda University in Tokyo is equipped with voice recognition and three soft cushioned fingers with an opposable thumb.  Twenty-One is capable of picking up a drinking straw, putting it in a tumbler and serving the drink.  The versatility of Twenty-One to run the gamut from fine motor skills to dead-weight support for disabled patients’ gentle movement from bed to wheel chair via voice recognition frees up family members and para-professional assistants. (Simpson)

In fact, a 2009 online report posted by Zygbotics underlines the seriousness of this Japanese investment with its title: “Robot Nurses to Care for Japanese Elderly within Five Years.”  Moreover, the same report adds the following reminder that Japan is not the only technologically advanced country involved in such work:

Robo Nurses have not been unique to Japan alone, however.  A research team at Warwick University in England is well into a three year 2.7 million dollar project to develop a robot nurse, and experts believe, based on the outcomes of this research, that nurses could be delegating tasks to robotic colleagues by 2020.

Indeed, the United States appears to share some of the demographics and market considerations motivating R&D investments of this kind.  In a 2005 issue of AI Magazine, Martha Pollack noted the increasing cohort of people aged sixty-five and above in the U.S. (10).  She also linked this demographic with a then-current “$132 billion annual nursing home bill,” adding “technology that can help seniors live at home longer provides a ‘win-win’ effect, both improving quality of life and potentially saving enormous amounts of money” (9).

U.S. enthusiasm for the expanding international development of life-like personal robots must be counted, however, as something of a “mixed bag.”  To be sure, American academic research in this area has been ongoing for years.  In a website posting labeled “Humanoid Robotics,” for example, the Artificial Intelligence Laboratory at Stanford University acknowledges that robots “are making their way into the everyday world that people inhabit” and it lists relevant Stanford research publications since 2001.  Carnegie Mellon University similarly supports a posting titled “Humanoid Robotics at Carnegie Mellon: A Prospective Student’s Guide” which reports “Exciting research is going on at Carnegie Mellon in many areas of Humanoid Robotics, including movement, perception, reasoning, human-robot interaction, and robot-robot interaction.”  Of course, the Humanoid Robotics Group at Massachusetts Institute of Technology also is widely recognized for research that it has conducted, especially with its “Cog” and “Kismet” robots. 

In the domain of business, however, evidence of U.S. interest in humanoid robotics has been notably less visible.  Professor Rodney Brooks, Director of the MIT Artificial Intelligence Lab and founder of MIT’s Humanoid Robotics Group, undoubtedly deserves mention for his arguably “heretical” commercial involvement with the product known as “My Real Baby,” and the android work of David Hanson’s Texas corporation already has been noted.  Generally, though, U.S. commercial robotic projects must be characterized as having focused principally upon industrial and military applications that tend not to value “life-likeness” in their artifacts.  Indeed, the following excerpt from a recent CNET News interview of Colin Angle (CEO of iRobot, a U.S. company especially active in producing military robots) presumably would make John McCarthy smile:

What about a humanoid robot?
Angle: Why would you want to make a humanoid robot? I mean, I guess for making movies they’re good. If you want to have a robot companion, maybe it should be humanoid. But other than that, most tasks are best tackled by designs that are not constrained by trying to look like a person. (Skillings)

In any event, the “take-away” point is that personal robots (i.e., broadly, robots that one might purchase for home use) are becoming globally more common, more capable, and remarkably less machinelike.  This fairly recent phenomenon encourages one to wonder what happens as people interact with such artifacts.

What happens as we interact with these machines?

Part of what we may expect to happen is both obvious and irrelevant to the main purposes of this essay.  Depending upon one’s economic vantage point, for example, money will be “made” or money will be “saved.”  Corollary concerns regarding potential loss of jobs are also often voiced in this context, and we are not disputing the importance of debate on that topic.  Practical considerations of this kind, however, are only part of a broader subject that this essay aims to address – viz., the subject that one might label “human implications of human-robot interaction (HRI) with life-like personal robots.”  The label is intended to embrace questions such as the following:

What kinds of entities do people understand these machines to be?
How do people, correspondingly, treat these machines?
How does interaction with these machines affect us personally?

Interest in such questions has contributed to the generation, in recent years, of a conspicuously multidisciplinary field of empirical research specifically addressing HRI. 

A report by Tatsuya Nomura and his colleagues in Japan (Nomura 2006) illustrates study of a factor that has received attention in a number of HRI research projects – viz., human attitudes toward robots.  Nomura reviews prior work, for example, investigating subjective evaluations of an animaloid robot (in this case, a robotic seal) which indicated that age, gender, and nationality of respondents affected their assessments of the artifact (Shibata, Wada, and Tanie 2002; 2003; 2004).  The specific investigation reported by Nomura et al. makes use of a “Negative Attitude toward Robots Scale” (NARS), which collects responses rating agreement with propositions such as “I would feel uneasy if robots really had emotions,” and “I would feel relaxed talking with robots” (30).  From NARS surveys conducted among Japanese university and special training school students, Nomura’s group obtained evidence suggesting that, inter alia, “Japanese people are more biased toward humanoid-type robots than other types, but are unclear about what tasks this type of robot does,” and “female respondents had more pronounced negative attitudes toward interaction with robots …” (33-34).
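
Since the scale, as described, simply collects Likert-style agreement ratings, its mechanics can be suggested with a brief sketch.  The Python fragment below is a hypothetical illustration only – the two item wordings come from the quotation above, but the 5-point scale, the reverse-scoring of the positively worded item, and the simple averaging are assumptions of this sketch, not details reported by Nomura’s group:

# Minimal sketch of scoring a NARS-style attitude survey (hypothetical;
# a 5-point Likert scale is assumed, 1 = "strongly disagree" through
# 5 = "strongly agree").  Positively worded items are reverse-scored so
# that a higher average always indicates a more negative attitude.
ITEMS = {
    "I would feel uneasy if robots really had emotions.": False,  # negative item
    "I would feel relaxed talking with robots.": True,            # reverse-scored
}

def nars_score(responses):
    """Average the 1-5 ratings, reverse-scoring where required."""
    total = 0
    for item, rating in responses.items():
        if not 1 <= rating <= 5:
            raise ValueError("rating out of range for: " + item)
        total += (6 - rating) if ITEMS[item] else rating
    return total / len(responses)

# Example: uneasy about robot emotions (4), would not feel relaxed
# talking with robots (2) -> (4 + (6 - 2)) / 2 = 4.0 (quite negative).
print(nars_score({
    "I would feel uneasy if robots really had emotions.": 4,
    "I would feel relaxed talking with robots.": 2,
}))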

In addition to such surveys of human attributes (nationality, age, gender, cultural environment, and the like) that affect attitudes of people toward life-like robots, other HRI research has focused more directly upon properties observed in actual instances of human-robot interaction.  This has been particularly the case with HRI involving animaloid machines, since achievement of convincingly life-like appearance and behavior has tended to be technically less problematic with this class of personal robots than it has been with more complex humanoids and androids.  MIT Professor Sherry Turkle, for example, has applied her Harvard doctorate in sociology and personality psychology to extensive analysis of real human-robot interactions with what she calls “relational artifacts.”  One of her reports on this fieldwork (Turkle 2006) contains an account of a 72-year-old woman in a nursing home who is interacting with a robotic seal pup.  Turkle explains that this relational artifact, bearing the trade name Paro, “is able to make eye contact through sensing the direction of a human voice, is sensitive to touch, and has ‘states of mind’ that are affected by how it is treated …” (53).  The woman, in this case, is depressed by the fact that her son “has broken off his relationship with her” and is stroking her robotic seal while telling it “Yes, you’re sad, aren’t you,” etc. – i.e., as Turkle observes, the woman is treating the machine as if it were also depressed, in an effort to comfort herself (53).  Noting the common phenomenon of people interacting similarly with live pet animals, Turkle continues her reflections on the present case:

I do not know if the projection of understanding onto pets is “authentic.”  That is, I do not know whether a pet could feel or smell or intuit some understanding of what it might mean to be with an old woman whose son has chosen not to see her anymore.  What I do know is that Paro has understood nothing.  Like other “relational artifacts” its ability to inspire relationship is not based on its intelligence or consciousness, but on the capacity to push certain “Darwinian” buttons in people (making eye contact, for example) that cause people to respond as though they were in relationship. … (53)

In fairness to the commercial product just mentioned, some parenthetical comment might be in order: HRI research repeatedly has yielded evidence of Paro’s clinical therapeutic value (vid. “Robotic seal’s therapeutic effect on care”; “Paro Found to Improve …”).  We presently shall return to Turkle’s qualitative observation regarding the absence of consciousness in this robotic seal (or, as some might say, absence of a “ghost in the machine”).  For now, it should be appropriate to consider another interesting report from “animaloid” HRI research – this time, concerning human interaction with a robotic dog.

Specifically, it seems that numerous online postings by owners of Sony’s AIBO have appeared on Internet forums.  A study of such postings was reported at the CHI 2003 Conference by a research group from the University of Washington (Friedman 2003).  Although the study disclosed a number of striking elements in people’s perceptions of their robotic dogs – e.g., 60% attributing mental states to AIBO (273) – the exact wording of some of the postings also can warrant attention.  One posting, for example, reported “I know it sounds silly, but you stop seeing Aibo as a piece of hardware and you start seeing him as a unique ‘life-form’” (276).  The following confession by an AIBO owner suggests an especially telling psychological effect:

The other day I proved to myself that I do indeed treat him as if he were alive, because I was getting changed to go out, and [AIBO] was in the room, but before I got changed I stuck him in a corner so he didn’t see me!  Now I’m not some socially introvert guy-in-a-shell, but it just felt funny having him there! (278)

In discussing this posting, the authors acknowledge the possibility that “this AIBO owner is lying, and that this event never took place,” but they add immediately “a more plausible explanation is that it did, and that we should take such dialog at face value” (278).  Indeed, one might further note that the very thought of needing modesty in this case – regardless of whether it provoked removal of the machine to a corner of the room or did not – clearly highlights the psychological potency of human interaction with such machines.  Adam, after all, did not look around Eden for a fig leaf when interacting with some artifact that he happened to construct.

Several years later, this study of online postings about AIBO was expanded by the original research group and others to include, inter alia, conclusions drawn from “observations and interviews with 80 3-5 year old children who interacted with either AIBO or a toy stuffed dog” and “observations and interviews with 72 children ages seven to fifteen years old interacting with both AIBO and a live dog” (Melson 37-38).  Again, results of the research contain numerous thought-provoking details; for example: 66% of the preschoolers and 56% of the older children produced affirmations of “psychology and mental states” in the robotic dog (39).  Perhaps even more surprisingly, 63% of the preschoolers and 76% of the older children “perceived that AIBO has moral standing” [where “moral standing,” for purposes of the experiment, meant that AIBO had “rights” and deserved “kind, fair, and just treatment”] (39).  The report’s concluding paragraph, however, points directly toward some even deeper concerns of the present essay:

The research we presented in this paper also should give one pause about the nature of the interactions we are designing into personified computational systems.  If it is the case, for example, that people will develop rather robust social relationships with social robots, but not accord them full moral status [which was the result observed among adults studied (39)], then we may be creating one-sided interactions, not unlike a person might have with a therapist, a servant, or even a slave. 
[One might recall, here, John McCarthy’s year-2000 caution regarding “inventing a new oppressed minority.”]
In turn, is it possible that increasing one’s interactions with social robots will lead people to see other humans or animals as “robot-like”?  That is equally problematic. … (41)

The latter expression of concern by Melson et al. merits repetition: “… is it possible that increasing one’s interactions with social robots will lead people to see other humans or animals as ‘robot-like’?”

Indeed, creating machines increasingly in our image could plausibly also invite transformations of our image; human-robot interaction, one must remember, is interaction through which effects can be transmitted in both directions.  Theologian Philip Hefner noticed this possibility in his 2003 monograph, Technology and Human Becoming.  At one point, Hefner focused attention upon the British computer pioneer Alan Turing, who arguably planted some of the critical intellectual seeds from which Artificial Intelligence sprouted a few years after his death.  Hefner’s prescient observation in this context was the following:

Turing and his colleagues and their descendants have created a significant mirror of ourselves.  What we want and who we are coalesce in this mirror.  Whether we think this mirror is adequate or not is another question. [emphasis added] (31)

The Rev. Linda Pope, a United Methodist pastor in Oklahoma, has fashioned a graphic that incorporates Hefner’s insight, and it serves as the logo for all of the robotics-related activities of our religion-and-science Metanexus group at Oklahoma City University.  The right panel of the graphic shows a humanoid robot looking into a hand mirror – and seeing the image of a human in the glass.  On the left panel of the graphic, the same human is looking also into a hand mirror – but the image in the mirror is that of the humanoid robot.  The two pictures succinctly call attention to an important observation.  Contemporary HRI research indicates strong human disposition to engage life-like personal robots in strikingly personal terms, and (as the pictures remind us) such intimate relations may have reflexive potential to reshape our images of ourselves.  Embedded in this potential are some interesting philosophical and religious issues that we shall now review.

What is it like to be a machine?

As I write this sentence I watch a squirrel scamper across the courtyard outside my office window.  The fan on my computer is producing a high-pitched tone.  I recall that I asked our dean a question, earlier this morning, regarding university procedure.  Everyday phenomena of this kind apparently have generated, in different ways and at different times, a semantic field of “technical” terms: mind, sentience, soul, conscious awareness, subjective experience, qualia, and so forth.  To keep the scope of the present essay manageable, I simply shall describe the root phenomenon by suggesting “’Tis a ghost” – and my diction seems to have some honorable precedents.

Near the conclusion of Man on his Nature, Nobel Prize-winning neurophysiologist Sir Charles Sherrington offered the following poetic reflection:

Mind, for anything perception can compass, goes therefore in our spatial world more ghostly than a ghost.  Invisible, intangible, it is a thing not even of outline; it is not a ‘thing’. (256)

About a decade later, British philosopher Gilbert Ryle (perhaps being less poetically inclined) famously chose instead to dismiss the concept of mind by wielding the phrase “ghost in the machine” as a term of derision.  Indeed, even in parts of today’s academic world, ‘ghosts’ (dare we say?) of the Behaviorist psychology that Ryle so greatly admired still prompt writers occasionally to forego use of the word “consciousness” altogether.  On the other hand, dissatisfaction with such distrust of any first-person reports of subjective experience evidently was part of Arthur Koestler’s motivation for writing his controversial 1967 work, The Ghost in the Machine.  Koestler referred, for example, to the Oxford school of thinking that Ryle represented as a “curious philosophical aberration” which was “on the wane” (202).  His Holiness the Fourteenth Dalai Lama, at one point in his more recent The Universe in a Single Atom: The Convergence of Science and Spirituality, voices an even stronger opposition to banishing what some philosophers would describe as “mind talk.”  Although this particular spiritual leader cannot be faulted for failing to maintain an open and hospitable stance with regard to the sciences (particularly, the neurosciences), he records the following complaint:                 

When we listen to a purely third-person “objective” account of mental states, whether it is a cognitive psychological theory, a neurobiological account, or an evolutionary theory, we feel that a crucial dimension of the subject has been left out.  I am referring to the phenomenological aspect of mental phenomena, namely the subjective experience of the individual. (133)

Accordingly, the Dalai Lama calls for “nothing short of a paradigm shift” that would somehow acknowledge the reality of subjective experience, since – even in scientific study of consciousness – there is a “need for the method of our investigation to be appropriate to the object of inquiry” (133-134).  In a similar manner, Templeton Prize laureate Holmes Rolston III – writing his Science & Religion: A Critical Survey more than two decades ago – called for “more inclusive theories that do less violence to firsthand experience” (180).  (Parenthetically, it is difficult not to add that Rolston opened the same paragraph with a note of caution: “One needs to resist being reduced to a flesh-and-blood robot.”)  Again, widely recognized neurology professor Antonio Damasio offers the following advice in The Feeling of What Happens:

Above all we must not fall in the trap of attempting to study consciousness exclusively from an external vantage point based on the fear that the internal vantage point is hopelessly flawed.  The study of human consciousness requires both internal and external views. (82) 

Of course, there are plenty of “technical” folk, armed with theories of Turing Machine Functionalism and the like, who would assert cheerfully that they already have accomplished the paradigm shift that the Dalai Lama prescribed.  From Helsinki University, for example, comes Pentti Haikonen – Principal Scientist in Cognitive Technology at Nokia Research – with his 2003 publication, The Cognitive Approach to Conscious Machines.  In the opening paragraph of his final chapter, Haikonen summarizes the foregoing pages:

In this book I have presented basic questions about cognition and consciousness and have outlined the nature of cognitive machines; ones that are able to perceive the world in a similar way to us, ones that have a similar vantage point of view to us, ones that have similar flow of inner speech and imagery, ones that have emotions, ones in short that are conscious. … (263)

Maybe so.  Then again, some readers might object that accepting, as meaningful, references to machines that are “able to perceive” and to “have” emotions and “inner” imagery might just amount to letting consciousness slip through one’s Petitio Principii Detector in linguistic disguise.  On balance, probably most of us do share Professor Turkle’s doubt that Paro the robotic seal enjoys conscious awareness, although we also likely could agree that Haikonen describes a machine significantly more sophisticated than Paro.  For that matter, the special phenomenon of self-awareness might plausibly be modeled in computer software, and something like Haikonen’s machine arguably could execute it.  Qualia, however, seem to present a deeper puzzle, whether considered from a first-person or third-person perspective.  Minimally, if we “bracket” certain disputed ESP phenomena, claims of a capacity to enjoy directly the conscious awareness concurrently being experienced by any other entity, organic or mechanical, are exceedingly rare.

In the context of the present essay, though, it really does not matter whether anyone ultimately can either demonstrate or explain so-called “objective” presence – or absence – of a ghost in the machine.  What does matter is how people, in practice, come to regard and to treat life-like personal robots.  After all, one logically necessary first step toward conceptualizing ourselves as machines would be development of a willingness to regard convincingly life-like machines as our peers – and that development apparently does not require formal proof that the machines are sentient.  The more humdrum kind of evidence that people commonly are prepared to accept is suggested by a familiar old maxim: “If it looks like a duck, and it walks like a duck, and it quacks like a duck … it probably is a duck.”  And – her misgivings about a commercial robotic seal notwithstanding – Sherry Turkle has confessed a 2004 version of the “duck” maxim that more aptly fits the present subject:

When a robotic creature makes eye contact, follows your gaze, and gestures toward you, you are provoked to respond to that creature as a sentient and even caring other. (Turkle 25)

Responding to a robot as if it were sentient and caring is not, of course, strictly the same thing as believing that it is the case – but sustained (and, especially, socially accepted) behavior of that kind can become progressively more difficult to untangle from belief.  Rather recently, one might recall, practices of investing in bizarre financial instruments, as if they were sound – practices that a sufficient number of people also apparently came to believe were sound – managed to shake a world economy to its foundations.  Again, politicians of all stripes perennially have rediscovered that persistently saying something is true can, at least in the court of public opinion, make it become true.  Applying the latter observation to the present subject, it is also worthy of notice that respected voices in our culture already are saying that people are machines.

MIT’s Rodney Brooks, for example, furnishes a particularly clear articulation of this view in Flesh and Machines:

I want to bring home the fact that I am saying we are nothing more than the sort of machine we saw in chapter 3, where I set out simple sets of rules that can be combined to provide the complex behavior of a walking robot. (174)

Brooks refers, here, to robots such as his “Genghis,” a small insect-like robot capable of quite life-like autonomous locomotion.  Acknowledging that people are undoubtedly more complex than Genghis, he proceeds to inquire “And why the bristling at the word ‘machine’?”  Without bristling, of course, some readers might reply that claiming people are “nothing more than” augmented finite state automata could turn out to be technically incorrect – we might, for example, be “machines” more accurately modeled as nonlinear dynamical systems that occasionally exhibit chaotic behavior.  In fairness, though, we note that the author also does suggest, a bit later, that some critics of his position “are arguing that we are more than conventional computers,” and he does grant that their claims “may well be the case” (176).  The present essay, however, need assume no responsibility to argue with Professor Brooks – or even to “bristle” when he confesses “… I believe myself and my children all to be mere machines” (174).  In fact, a thoughtful reading of the book in question probably recommends that we do understand Dr. Brooks really to be operating with a belief that he and his children are “mere” machines.  This would at least be consistent with his subsequent effort (page 175) to urge that we eventually shall be able to construct robots with authentic emotions and consciousness (i.e., one might say, to put ‘ghosts’ in the machines).  This eventually must be possible, Brooks opines, for we happen already to be mere machines with the alleged ‘ghosts’ manifestly in them!
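
An aside for technically curious readers: what Brooks means by “simple sets of rules” combining into the complex behavior of a walking robot can be suggested with a toy sketch in the spirit of his subsumption architecture.  The Python fragment below is a hypothetical illustration – not the actual control program of Genghis – in which a fixed priority ordering lets reflex-like behaviors override one another, with no central model of the world anywhere in the loop:

# Toy sketch of subsumption-style control (hypothetical; not Genghis).
# Each "behavior" is a simple rule mapping sensor readings to a motor
# command, or None when the rule does not apply.
def stand_up(sensors):        # layer 0: always applicable
    return "extend legs to standing posture"

def walk_forward(sensors):    # layer 1: walk whenever posture is stable
    return "advance gait cycle" if sensors["stable"] else None

def avoid_obstacle(sensors):  # layer 2: highest priority
    return "lift leg over obstacle" if sensors["obstacle_ahead"] else None

# Higher layers subsume (override) lower ones whenever they fire.
LAYERS = [avoid_obstacle, walk_forward, stand_up]

def control_step(sensors):
    for behavior in LAYERS:
        command = behavior(sensors)
        if command is not None:
            return command

print(control_step({"stable": True, "obstacle_ahead": False}))  # advance gait cycle
print(control_step({"stable": True, "obstacle_ahead": True}))   # lift leg over obstacle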

 

Whatever merits his case might enjoy, the present essay is not disputing any right of Professor Brooks, who views himself as a machine, to build ‘other’ life-like animaloid and humanoid machines in his laboratory.  Our focus of interest, rather, concerns the potential of robots, of the kind being built in the MIT lab and elsewhere, progressively to encourage other people, through interaction with such devices, to come to share Dr. Brooks’ self-image.  Even that outcome, per se, does not self-evidently constitute a problem – after all, conceiving ourselves as mere machines might turn out to be correct thinking.  It can be a matter of moral concern, though, if people who are likely to suffer transformations of their self-images through interaction with this class of artifacts are de facto (albeit, perhaps, unwittingly) excluded from dialogue with the technical and business communities who are now in the process of developing and distributing the technology.  Fairness obliges us to remember that substantial numbers of intelligent and learned people hold religious worldviews that do not understand people to be mere machines.  The views of such communities deserve consideration as the current technology of life-like personal robots unfolds.  Hence, attention will be directed, at this point, to some religious issues associated with importing models of robotic machines into our understandings of ourselves.       

I and Thou … and ASIMO?

Representing the Christian tradition within the domain of major world religions, Lutheran theologian Paul Tillich expressed a view in his Systematic Theology, Vol. 3, that invites comparison with the beliefs of Rodney Brooks regarding people and the machines that they may construct:

A technical product, in contrast to a natural object, is a “thing.”  There are no “things” in nature, that is, no objects which are nothing but objects, which have no element of subjectivity.  But objects that are produced by the technical act are things.  It belongs to man’s freedom in the technical act that he can transform natural objects into things: trees into wood, horses into horsepower, men into quantities of workpower.  In transforming objects into things, he destroys their natural structures and relations.  But something also happens to man when he does this, as it happens to the objects which he transforms.  He himself becomes a thing among things. (74)

Although Tillich did not live to see ASIMO, he clearly would have regarded any such products of “the technical act” as something fundamentally different from himself.  Indeed, he explicitly proceeded to expand his concern with effects of the technical act upon the human producer:

His self becomes a thing by virtue of producing and directing mere things, and the more reality is transformed in the technical act into a bundle of things, the more the transforming subject himself is transformed.  He becomes a part of the technical product and loses his character as an independent self. (74)

The self-image presumed by Tillich is set apart from “mere things” – Brooks, in sharp contrast, claims to live with an understanding of himself (and his children, etc.) as “mere machines.”  One readily can imagine Brooks someday equally accepting both people and advanced robots as friends – for Tillich, we would expect the latter things to be excluded forever as mere appliances.  John McCarthy, perhaps, would be pleased.

It is of interest, though, that at least one contemporary “robotics theologian” (who also generally admires the work of Tillich) apparently would not be pleased.  In fact, Professor Anne Foerst generally argues on the side of Rodney Brooks, in her 2004 work, God in the Machine – by implicitly sharing at least part of the views of McCarthy!  Referring to the MIT social robot Kismet, Foerst inquires rhetorically at one point whether Kismet is “part of the community of persons.”  Her immediate response is “For those among us who delight in its resemblance to us and who understand some interactions with it as spiritual, it can become a person” (189).  Foerst’s deeper response draws upon her own account of personhood to urge, first, that “we ought … to create narratives of personhood that include all human beings, no matter how different they are from us,” and she subsequently recommends extending this inclusiveness to artifacts such as Kismet:

As we are communal and bond with nonhuman entities, these narratives will necessarily include some nonhuman critters.  Hence, humanoid robots such as Kismet will become a definite part of the community of persons.  If they don’t, it means that we will also exclude human beings.  Discussing the question of Kismet’s personhood can therefore be used as a corrective that helps us to test our own myths of personhood for their inclusivity. (189)

In effect, Foerst appeals to the same social danger that concerned John McCarthy; i.e., “We have enough trouble treating other people decently without inventing a new oppressed minority.”  For Foerst, of course, we already have begun to create the potential “oppressed minority” of life-like robots – and that new minority eventually must be embraced in human community unless we are prepared to continue not “treating other people decently.”  Broadly, the same moral sensitivity that motivated McCarthy to counsel against developing the subject class of robots, in the first place, turns out, for Foerst, to justify socially accepting the artifacts (which generally, since 2000, have been developed anyway) into human community as persons.  Whether the latter advice promises to elevate the machines to the status of human persons – or, as Tillich might have argued, reflexively reduce human persons to the status of things – might remain an open question for some Christian theologians and others.

Questions of this kind, of course, will remind many readers also of a Judaic scholar whose opening words in his most celebrated publication immediately recall its title:

To man the world is twofold, in accordance with his twofold attitude.
The attitude of man is twofold, in accordance with the twofold nature of the primary words which he speaks.
The primary words are not isolated words, but combined words.
The one primary word is the combination I-Thou.
The other primary word is the combination I-It; wherein, without a change in the primary word, one of the words He and She can replace It.
Hence, the I of man is also twofold.
For the I of the primary word I-Thou is a different I from that of the primary word I-It.
(Buber 3)

For Martin Buber, according to Maurice Friedman, the question of whether I stand in an I-Thou or an I-It relation with another object is determined by the attitude that I choose for the relation, rather than by any intrinsic nature of the object (Ch. 10, p.1).  My choice, moreover, is categorically free – Buber holds that “[b]elief in fate is mistaken from the beginning” (57).  In fact, I freely can choose to adopt an I-It attitude for all of my relations, although Buber asserts that the choice also would deny me any “real living” (Friedman 1). 

Buber’s additional claim that my free choice between the attitudes of I-Thou and I-It must also involve different “I”s is a point especially worthy of closer attention in the present context.  His explication in I and Thou of this difference includes the following details:

The I of the primary word I-It makes its appearance as individuality and becomes conscious of itself as subject (of experiencing and using).
The I of the primary word I-Thou makes its appearance as person and becomes conscious of itself as subjectivity (without a dependent genitive).
Individuality makes its appearance by being differentiated from other individualities.
A person makes his appearance by entering into relation with other persons.
The one is a spiritual form of natural detachment, the other the spiritual form of natural solidarity of connection. … (62)

The person becomes conscious of himself as sharing in being, as co-existing, and thus as being.  Individuality becomes conscious of itself as being such-and-such and nothing else.  The person says, “I am,” the individual says, “I am such-and-such.”  “Know thyself,” means for the person “know thyself to have being,” for the individual it means “know thy particular kind of being.” … (62-63)

The world of robotics research, as suggested previously, has not neglected the possibility of constructing machines capable of forming and manipulating models of themselves.  A late 2005 online posting, for example, reported development of a robot at Meiji University in Japan that could discriminate its own image in a mirror from other images; the announcement, though, was titled “Robot Gains Self Awareness in Lab” [emphasis added].  About one year later, Jay Lyman posted an online report of another robotic project, at Cornell University, in which the robot managed to generate and adaptively use a model of itself; the title for this posting was “Self-Aware Robot Can Adapt To Environment.”  As early as October of 2003, Selene Makarios posted an announcement for the U.S. government agency known as the Defense Advanced Research Projects Agency (DARPA) advertising a “Workshop on Self-Aware Computer Systems.”
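
What such mirror “self-recognition” can amount to in engineering terms may be suggested by a short sketch.  The Python fragment below is a hypothetical illustration (it is not the Meiji or Cornell implementation): the camera image is reduced to a numerical feature vector – in practice often by means of artificial neural network methods – and the machine declares the image to be “itself” whenever that vector is sufficiently similar to a stored vector of its own appearance:

# Hypothetical sketch of mirror "self-recognition" as feature matching.
# A feature extractor (often an artificial neural network in practice)
# reduces an image to a vector; "that is me" simply means the observed
# vector is similar enough to the robot's stored self-image vector.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norms

SELF_FEATURES = [0.9, 0.1, 0.4]  # stored features of the robot's own image
THRESHOLD = 0.95

def is_self(observed_features):
    return cosine_similarity(SELF_FEATURES, observed_features) >= THRESHOLD

print(is_self([0.88, 0.12, 0.41]))  # True: the image in the mirror "is me"
print(is_self([0.10, 0.90, 0.20]))  # False: some other object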

The language of these, and similar, announcements warrants comment.  One may reasonably expect laypersons to read such claims of “self aware” machines as affirmations that “the ghost” already has been installed successfully in “the machine.”  In fact, though, the poetic language of Martin Buber helps us realize that the language just illustrated is (from his perspective) misleadingly ambitious.  The I of such computer-based robotic systems clearly is the I of Buber’s I-It relation; it is not the I of the I-Thou relation.  The “self awareness” of the I entering I-Thou relations (whether human or divine) is an I aware of being.  The I of I-It relations is aware of being something – for example, being a robotic machine with a visual image that acceptably matches (say, by means of artificial neural network methods) the image presently displayed in the mirror.

Of course, many members of the communities issuing the sorts of AI and robotics announcements just mentioned simply do not recognize the I-Thou relation to which Buber was pointing.  This ought not be interpreted as evidence that either Buber or the “technical” people are completely ‘out of touch’ with reality.  It should be interpreted as evidence that Buber illustrates a general understanding of the human person that is characteristic of the Abrahamic family of world religions – and that the various communities responsible for the current phenomenon of growing HRI with life-like personal robots quite plausibly are operating with fundamentally different models of the human person.  Unless one is already satisfied that the long-advertised “post-religious” era finally has arrived – changing (at least) Judaism, Christianity, and Islam into culturally irrelevant anachronisms – it would seem that these deep differences in accounts of the human person call for dialogue featuring fairness, humility, and intellectual rigor.  Certainly, tedious repetition of claims that “the others” are suffering “delusions” does not move us toward mutual respect and peace.  To be sure, this is a call for dialogue that can be applied legitimately to relations between the Abrahamic religions and the subject technology / industry – as well as to interfaith relations within the family of Abrahamic religions.  One could hope that serious practice filling the prescription in either case might be helpful to the other.

In fact, open discussion linking all of the Abrahamic religions with contemporary robotics may afford Islam an opportunity to share with its spiritual kinsmen something potentially quite pertinent from its own history.  Particularly, the twelfth-century Turkish engineering genius, commonly known as al-Jazari, has been credited with having invented “the earliest form of programmable humanoid robot” (“Muslim Scientists and Thinkers …”).  Often identified as the Leonardo da Vinci of the Muslim world, al-Jazari clearly demonstrates that Islamic culture historically has been able to embrace – indeed, advance – the type of technology we are considering.  Of course, Jewish history also includes the interesting legend of The Golem.  According to one account of this story, a sixteenth-century German rabbi fashioned an obedient humanoid servant from clay to “protect the people of Israel against its enemies,” and it did manage to perform “many good deeds” (Ashliman 3).  However, “the living spirit inhabiting the Golem was only a sort of animal vitality and not a soul” (Ashliman 4).  Inasmuch as Christian culture, a few centuries later, also produced Mary Shelley’s Frankenstein; or, The Modern Prometheus, it would seem that some interfaith dialogue among the Abrahamic faith traditions could reasonably hope to help clarify their respective views – and, perhaps, even a consensus view – of the technology we are addressing.

Comparing responses to the phenomenon of life-like personal robots within a wider global set of religions could be a fruitful vehicle, as well, for fostering interfaith understanding.  Buddhism, for example, brings to the table some teachings that clearly set it apart from the Abrahamic religious worldview – and, in some respects, even from the venerable Hindu culture from which it emerged.  Particularly, with his rejection of the traditional Hindu concept of an unchanging individual soul (Atman), the Buddha set his followers on a unique path that leads to interesting interpretations of the robotic machines that people are constructing today.  Accepting the Buddhist alternative teaching of anatman does much to weaken any Hindu or Abrahamic dispositions to view the human person and artifacts of human technology as members of disjoint categories.  Even more potentially challenging to strong separation of persons from their machines is the specifically Zen Buddhist concept of the buddha-nature.  In this case, we are fortunate to have a current practitioner of the latter faith who happens also to be a robot engineer.  In The Buddha in the Robot: A Robot Engineer’s Thoughts on Science and Religion, Masahiro Mori offers the following account of buddha-nature:

The Buddha said that “all things” have the buddha-nature, and “all things” clearly means not only all living beings, but the rocks, the trees, the rivers, the mountains as well.  There is buddha-nature in dogs and in bears, in insects and in bacteria.  There must also be buddha-nature in the machines and robots that my colleagues and I make. (174)

In her reflections on Professor Mori’s book at a workshop of the Association for the Advancement of Artificial Intelligence (AAAI), the Rev. Linda Pope has suggested that “[p]eople in Buddhist cultures may tend to be more willing than Western Judeo-Christians to invite robots into their lives because they are, in effect, inviting the Buddha into their lives” (26).  Indeed, our religion-and-science Metanexus group at Oklahoma City University recently explored attitudes toward a set of specific applications of life-like personal robots, in a survey that included respondents representing Judaism, Christianity, Islam, Buddhism, Hinduism, and Native American spirituality – and we found evidence supportive of the effect Pope suggests.  In particular, Ted Metzler and Lundy Lewis concluded that the pilot study recommended follow-on investigation of several specific hypotheses, including expectation of characteristically Abrahamic religious views being correlated with disapproval of “HRI with life-like personal robots that requires human acceptance of the robots at intimate levels …” (22).

The foregoing considerations illustrate, but by no means exhaust, ways in which we may reasonably expect religious views of the contemporary world’s major faith traditions to be strongly interactive with the growing phenomenon of life-like personal robotic applications reviewed in this essay.  Some of this interaction is likely to engage religious views supportive of the subject technology – and some of it clearly will not be supportive.  The phenomenon of robotic application that we are addressing involves multiple communities of “players”; specifically, it involves at least: academic research, commercial development, governmental policy making … and the world’s major religions.  The last class of “players” deserves emphasis because it is not one that is likely to be included automatically in the unfolding of the subject science / technology / business.  In fact, many readers of the player list just given – if it were not set in the context of the present essay – would very likely find the last entry surprising.  This is regrettable, for we have reviewed reasons to believe that nothing less important than our understanding of who (perhaps, what) we are may be at issue – and members of some religious communities may care very much what people see as they look into the mirror that rapidly is being formed by life-like personal robotics.  Accordingly, we close with some recommended actions.

Recommended actions.                                 

The following types of action are distinct, but possibilities for synergistic interaction among them are not difficult to recognize.

First, there is a need for continued responsible scientific exploration of interactions between the subject robotic technology and specific religious views.  Our Metanexus group at Oklahoma City University has completed a pilot study, mentioned previously, in this area and has derived some specific hypotheses recommended for follow-on investigation.  We also have worked with representatives of other universities in the organization of three international workshops at annual conferences of the Association for the Advancement of Artificial Intelligence.  Although these workshops attracted a range of papers concerning aspects of HRI beyond specifically religious considerations (e.g., philosophical, psychological, and sociological “human implications” of robotic technology), we believe that this larger scope has fostered fruitful dialogue.

Second, there is a need to enable responsible communication within an interfaith religious community concerning implications for its beliefs presented by HRI with the subject technology.  Again, our Metanexus group has undertaken some efforts to address this task.  We sponsored, for example, a panel discussion event in which representatives of multiple faith communities presented and discussed their views of a small set of specific applications for life-like personal robots (currently implemented, as well as anticipated in the near future).  Earlier this year, we also conducted an initial web-based meeting on this topic that brought together, in a cost-effective manner, participants from: Oklahoma City and Chickasha, Oklahoma; Mason, New Hampshire; and Bangkok, Thailand.  We believe that the Internet furnishes an excellent medium for satisfying this communication need.

Third, there is a need for communication of a broader scope that will enable dialogue connecting the interfaith religious community with those of business, government, and academic research that are concerned with the subject robotic science and technology.  With this in mind, our Metanexus group has initiated dialogue with staff serving the Congressional Bi-Partisan Robotics Caucus in the U.S. House of Representatives (<http://www.roboticscaucus.org/>).  A “roadmap” for future development of robotic technology in the United States is one of the products of this Congressional committee in which our group is particularly interested.  Our interest involves, for example, the fact that the types of robots with which the present essay is concerned are notably intended for use in health care applications which increasingly could present many members of the religious communities that we have mentioned with HRI experiences as end users of the technology.  Accordingly, these people deserve our assistance in fostering their presence within governmental policy planning of the sort being conducted by the Congressional committee, for – as William Stahl has so rightly observed in his God and the Chip: Religion and the Culture of Technology – “The people who actually use the technology are all too often the forgotten people in system design” (161).         

We appear to be on the cusp of experiencing potentially profound transformations of human self-understanding through our growing interaction with a novel form of robotics.  It may be too early yet to judge whether there is a sense in which transformations of this kind ought to occur; in any case, however, we surely ought not allow them to emerge “mindlessly.”  Neglecting the need for responsible religion-and-science dialogue in this topic area could prove to be very unfortunate.    

 


References

“Albert Einstein: The Android.” 2009. Virtual Worldlets Network. 15 Feb. 2009. 18 May 2009. <http://www.virtualworldlets.net/Resources/Hosted/Resource.php?Name=AndroidEinstein>.

 

Ashliman, D.L., ed. “The Golem: A Jewish Legend.” Folklore and Mythology Electronic Texts. 1 Jun. 2009. <http://www.pitt.edu/~dash/golem.html>.

Brooks, R. 2002. Flesh and Machines: How Robots Will Change Us. New York: Collier Books.

Buber, M. Trans. R. G. Smith. 1958. I and Thou. New York: Charles Scribner’s Sons.

Damasio, A. 1999. The Feeling of What Happens: Body and Emotion in the Making of Consciousness. New York: Harcourt Brace & Co.

Foerst, A. 2004. God in the Machine: What Robots Teach Us About Humanity and God. New York: Dutton.

Friedman, B., P. Kahn, and J. Hagman. 2003. “Hardware Companions? – What Online AIBO Discussion Forums Reveal about the Human-Robotic Relationship.” CHI 2003. ACM. CHI Letters 5.1: 273-280. <http://faculty.washington.edu/pkahn/articles/CHI2003_Hardware_Companions.pdf>.

Friedman, M.S. 1955. Martin Buber: The Life of Dialogue. Ch. 10. Chicago: The University of Chicago Press. 27 May 2009. <http://www.religion-online.org/showchapter.asp?title=459&C=380>.
 

Haikonen, P. 2003. The Cognitive Approach to Conscious Machines. Charlottesville, VA: Imprint Academic.
 

Harbron, P. 2000. “The Future of Humanoid Robots.” DISCOVER Magazine, 1 Mar. 2000. 18 May 2009. <http://discovermagazine.com/2000/mar/05-featfuture>.

Hefner, P. 2003. Technology and Human Becoming. Minneapolis: Fortress Press.

“Honda, ATR and Shimadzu Jointly Develop Brain-Machine Interface Technology Enabling Control of a Robot by Human Thought Alone.” 2009. Honda World News. 31 Mar. 2009. 19 May 2009. <http://world.honda.com/news/2009/c090331Brain-Machine-Interface-Technology/>.

“Humanoid Robotics.” 2001.  Artificial Intelligence Laboratory – Stanford University. Nov. 2001. 20 May 2009. <http://ai.stanford.edu/groups/manips/projects/motion/index.html>.

“Humanoid Robotics at Carnegie Mellon: A Prospective Student’s Guide.” Carnegie Mellon University. 20 May 2009. <http://humanoids.cs.cmu.edu/>.

“Humanoid Robotics Group.” MIT Artificial Intelligence Laboratory. 20 May 2009. <http://www.ai.mit.edu/projects/humanoid-robotics-group/>.

Kleiner, K. 2009. “The Real Scoop on Honda’s Brain Controlled Asimo Robot.” Singularity Hub. 1 Apr. 2009. 19 May 2009. <http://singularityhub.com/2009/04/01/the-real-scoop-on-hondas-brain-controlled-asimo-robot/>.

Koestler, A. 1967. The Ghost in the Machine. Chicago: Henry Regnery Company.

Lyman, J. “Self-Aware Robot Can Adapt To Environment.” TechNewsWorld. 20 Nov. 2006. 28 May 2009. <http://www.technewsworld.com/story/54335.html>.

Makarios, S. “Workshop on Self-Aware Computer Systems.” www-formal.stanford.edu. 23 Oct. 2003. 28 May 2009. <http://www-formal.stanford.edu/jmc/www.selfawaresystems.org/>.

Melson, G., P. Kahn, Jr., A. Beck, and B. Friedman. 2006. “Toward Understanding Children’s and Adults’ Encounters with Social Robots.” Technical Report WS-06-09. AAAI Press: 36-42.

Metzler, T. and Lewis, L. 2008. “Ethical Views, Religious Views, and Acceptance of Robotic Applications: A Pilot Study.” Technical Report WS-08-05. AAAI Press: 15-22.

Mori, M. 1999. The Buddha in the Robot: A Robot Engineer’s Thoughts on Science and Religion. Tokyo: Kosei Publishing Company.
 

“Muslim Scientists and Thinkers – Abu al-Iz ibn Razaz al-Jazari.” 2008. Muslim Media Network. 14 Aug. 2008. 1 Jun. 2009. <http://muslimmedianetwork.com/mmn/?p=2710>.

Nomura, T., T. Suzuki, T. Kanda, and K. Kato. 2006. “Altered Attitudes of People toward Robots: Investigation through the Negative Attitudes toward Robots Scale.” Technical Report WS-06-09. AAAI Press: 29-35.

“Paro Found to Improve Brain Function in Patients with Cognition Disorders.” 2005. National Institute of Advanced Industrial Science and Technology. 16 Sep. 2005. 21 May 2009. <http://www.parorobots.com/pdf/pressreleases/Paro%20found%20to%20improve%20Brain%20Function.pdf>.

Pollack, M. 2005. “Intelligent Technology for an Aging Population: The Use of AI to Assist Elders with Cognitive Impairment.” AI Magazine 26.2: 9-24.

Pope, L. 2008. “Has a Robotic Dog the Buddha-Nature? Mu!” Technical Report WS-08-05. AAAI Press: 23-26.

“Robot Gains Self Awareness In Lab.” The Raw Feed: Where Technology and Culture Collide. 21 Dec. 2005. 28 May 2009. <http://www.therawfeed.com/2005/12/robot-gains-self-awareness-in-lab.html>.

“Robot Nurses to Care for Japanese Elderly within Five Years.” 2009. Zygbotics. 27 Mar. 2009. 13 May 2009. <http://www.zygbotics.com/2009/03/27/robot-nurses-to-care-for-japanese-elderly-within-five-year/>.

“Robotic seal’s therapeutic effect on care.” Danish Technological Institute. 21 May 2009. <http://www.dti.dk/specialists/26034>.

Rolston, H. 2006. Science & Religion: A Critical Survey. Philadelphia: Templeton Foundation Press.

Sherrington, C. 1963. Man on his Nature. New York: Cambridge University Press.

Shibata, T.; Wada, K.; and Tanie, K. 2002. “Tabulation and analysis of questionnaire results of subjective evaluation of seal robot at Science Museum in London.” In Proc. 11th IEEE International Workshop on Robot and Human Interactive Communication (ROMAN 2002), 23-28.

Shibata, T.; Wada, K.; and Tanie, K. 2003. “Subjective evaluation of a seal robot at the national museum of science and technology in Stockholm.” In Proc. Int. workshop on Robot and Human Interactive Communication (RO-MAN), 397-407.

Shibata, T.; Wada, K.; and Tanie, K. 2004. “Subjective evaluation of a seal robot in Brunei.” In Proc. Int. workshop on Robot and Human Interactive Communication (RO-MAN), 135-140.
 

Simpson, M.A. 2009. “Japanese Robot/Humanoid Innovations Update: Mankind’s Best New Friend is Getting Better (Videos).” www.physorg.com. 5 Feb. 2009. 19 May 2009. <http://www.physorg.com/news153079697.html>.

Skillings, J. 2009. “Q&A: iRobot taps into its Warrior spirit.” CNET News. 15 Apr. 2009. 20 May 2009. <http://news.cnet.com/8301-11386_3-1021942676.html>.

Stahl, W. 1999. God and the Chip: Religion and the Culture of Technology. Ontario, Canada: Wilfrid Laurier University Press.

Suzuki, M. 2009. “Japan child robot mimicks infant learning.” www.physorg.com. 5 Apr. 2009. 19 May 2009. <http://www.physorg.com/news158151870.html>.

The Dalai Lama. 2005. The Universe in a Single Atom: The Convergence of Science and Spirituality. New York: Morgan Road Books.

Tillich, P. 1971. Systematic Theology, Vol. 3. Chicago: The University of Chicago Press.

Turkle, S. 2004. “Whither Psychoanalysis in Computer Culture?” Psychoanalytic Psychology 21.1: 16-30.

Turkle, S. 2006. “Robot as Rorschach: New Complicities for Companionship.” Human Implications of Human-Robot Interaction. Technical Report WS-06-09. AAAI Press: 51-60.