Will Machines Reinforce or Redefine a ‘Social’ Humanity? – Dr. Ben Goertzel

“It is change, continuing change, inevitable change, that is the dominant factor in society today. No sensible decision can be made any longer without taking into account not only the world as it is, but the world as it will be…” – Isaac Asimov, 1978

Wired Brains

Isaac Asimov, the Russian-born American author and biochemist, was onto something simple yet profound. Change is constant, and the implications are nowhere more evident than in the advancing field of Artificial General Intelligence (AGI). Dr. Ben Goertzel, American author and researcher in the field of AI, cites an early interest in science fiction as at least partly responsible for his entry into AGI. Dr. Goertzel recalls an Isaac Asimov novel in which people retreat into solitary worlds of virtual reality, estranged from the meaning of social relationships.

His reference to the novel sprang from an at-present unanswerable question – will the continual rise of AGI increase our cooperation and compassion for fellow beings and other intelligent forms, or will it give way to increased conflict and/or isolation? Dr. Goertzel sees today’s technology as leveraging more social connection than ever before, citing Facebook and other social media as modern evidence of this phenomenon. Dr. Keith Hampton, an Associate Professor at Rutgers University, researches the relationship between information and communication technology – including the Internet, social networks, and community. In a paper published in 2011, Dr. Hampton et al. investigated the relationship between social networking sites and our lives. The results led to some general conclusions: social media users are more trusting; have more close relationships; get more social support; and are more politically engaged. While these findings are still in their infancy, the idea is also supported by the media equation, a theory developed in the 1990s by Stanford researchers Byron Reeves and Clifford Nass; the two used a collation of psychological studies to form the foundation for their overarching idea that people treat computers, television, and media as real people and places.

The intersection of these findings has interesting implications for the future, and it raises the question – what role do social relationships and collaboration have in the future of AGI? In the book Social Intelligence: The New Science of Human Relationships, American author and researcher Daniel Goleman illustrates new findings in neuroscience showing that the brain’s design makes it sociable; we subconsciously form neural bridges that let us affect the brain – and the body – of those with whom we interact; the stronger the connection, the greater the mutual force. What effect might these constructed feedback loops have on human interaction with technology? Dr. Goertzel’s views seem to align with people’s general tendency towards the social in their relationship to technology. He describes his vision of the future of brain-computer interfacing, with one possible result being a sort of Internet-mediated telepathy – “…if I put a brain chip in the back of my head, you have one in the back of your head, we could beam our thoughts and feelings to each other indirectly, so if you have human beings with their brains networked together like that, that would seem to enable both a level of empathy and…understanding…than what we have now”.

The OpenCog Foundation, of which Dr. Goertzel is Chairman of the Board, works on projects rooted in the vision of advancing artificial intelligence beyond human levels within the next few decades. Work is done by multidisciplinary teams and individuals located in various parts of the world, which can make unified collaboration and a shared understanding of ideas a challenge; Dr. Goertzel speculates on using brain-computer interfacing in the formation of various ‘group minds’: picture a software development team, all sharing thoughts and an understanding of code as they work – perhaps an early-stage AGI processing system, before AGI becomes much smarter than humans. “This sort of interfacing could allow us to become closer to each other and closer to our developing intelligent machines and in a way, going beyond the whole individualistic model of the self that we have now” – a speculative but plausible future. This type of interfacing speaks to the idea of an increasingly united global mind and consciousness, presumably a more efficient way to transfer information and feelings than what we have at present.

The stark difference seems to lie between the notion of people’s relationships with potential greater intelligences and how people interact with today’s technology, which responds to human control rather than engaging with us on a human-like level. What happens when we reach what Ray Kurzweil dubs the ‘Singularity’, the point where unenhanced human intelligence is unable to keep up with the rapid advance of progress in artificial intelligence? Might our social and emotional natures be our potential downfall as a species? It would seem that once this ‘Singularity’ is reached (or close to being reached), the unenhanced human brain will be vulnerable to manipulation by more intelligent machines, which can instantly mine Internet data, or even the information in our minds, effectively influencing our decision-making and thought patterns.

This seems like a bleak outcome for humanity, especially to those of us who identify with the social and emotional idea of what it means to be human. How do we prepare ourselves for what may lie ahead? “We all have the tendency to become attached to specific ideas about what’s going to happen”, notes Dr. Goertzel. He describes one of human beings’ greatest challenges as the ability to stay open-minded – to be aware of the need to constantly change and adapt in the face of obsolete ideas and a quickened pace of new information, “…ultimately on the most profound level of who we are and what we are and why we are…some of our thinking about the singularity itself…ideas like friendly AI…what is friendly to humanity, what is humanity?” The evolution in human thought that such difficult questions of human existence alongside ever-advancing AI demand is imminent and real.

When Robots Become Co-Workers and Peers – A Conversation with Dr. Karl MacDorman

When I first think of robots in the workforce, I can’t help but mentally form the introductory scene of some future-oriented sitcom, with a silver-plated, two-legged robot coming through the front door, briefcase in hand and shouting in digitized monotone, “Honey, I’m home!”  As silly and culturally bound as this image may be, the overall picture may not be too far from the scenes most humans form when thinking about intelligent machines actively working alongside humans. In a recent interview with Dr. Karl MacDorman, we asked about the implications of robots or more human-like androids mixing with society, particularly in the workforce. His logical response – “We don’t know yet…human beings habituate to many things.”

As a species, we’re so familiar with and conditioned to human beings holding most jobs that the thought of robots or androids taking our place seems foreign. Dr. MacDorman expounds on this idea by discussing his experiences working in a school for the mentally and emotionally disturbed. “There are people doing very strange things there…but you get used to it.” At some point, you see past the odd behaviors and come to see each person as an individual, appreciating their human worth. A similar adaptation from the strange to the routine may occur between humans and androids – or, initially, robots.

Emotional reactions aside, what might be the economic ramifications of a workforce populated by intelligent ‘bots’? At its core, Dr. MacDorman’s point is that “the process of people losing their jobs to some kind of innovation has been going on for thousands of years”. He refers to the Romans and the building of aqueducts and indoor plumbing, which took jobs from water bearers. This type of shift in the workforce is a long-term process; in Japan, where Dr. MacDorman lived for a period of years, he notes that some manufacturing jobs have been taken over by robots, but he’s not sure that robotics has been as disruptive as other types of technological innovation.

Yet the eventual “disruption” seems inevitable. In 2011, Chinese companies spent $1.3 billion on industrial robots. Foxconn, the company that builds iPads for Apple, hopes to have the first fully automated plant within the decade. In light of such moves, there are several interesting predictions about the outcomes of a society driven by an automated workforce. As is inherent in any intellectual debate, varying projections seem to rest on two fulcrums – societal values and attitude (optimism versus skepticism). Each argument, of course, incorporates each of these factors to a varying degree.

In an article for Forbes, Tim Worstall brings up an existing argument about a future robot-driven economy: the eventual creation of two classes – a “rentier class” that owns the robots and reaps most of the benefits, and “the rest of us.” Worstall argues against this perspective. He makes the case that everything will be much cheaper, in part because of the elimination of unnecessary wages. If income is relative to prices, won’t cheaper goods and reduced human wages result in some form of balanced income equality? Robert Skidelsky, Professor Emeritus of Political Economy at Warwick University in the U.K., argues machines may “engineer an escape” from poverty; in poor countries, economists use the term “disguised unemployment” for laborers who find the means to share a limited amount of work. Perhaps instead of a few very wealthy at the top, a decrease in required human labor will alter and reduce the standard for required human output, creating time for “more leisure.” This would, he maintains, require a major shift in the social thinking behind current Western values.

In an article for Wired, author Kevin Kelly writes that the post-industrial economy will keep expanding, even if robots perform most work. Like MacDorman, he sees the post-industrial revolution as an almost predictable repeat of history. Just as in the early 19th century, when mechanized innovation eventually replaced all but 1 percent of existing farm jobs (the livelihood of 70 percent of the workforce 200 years ago), robots will likely do the same to the current workforce before the end of the century. In the wake of the industrial revolution, a plethora of jobs in completely new fields – ones we had not dreamt of while we were busy plowing the land – were born, ushering in a new era of human cultural and societal structures. Kelly makes the case that this will be the new reality when robots take the helm. The human task will be to find, make, and complete new things to do, acting in the interim as “robot nannies” to other robots, and possibly androids, in a repeating cycle of robot takeovers and human creation and innovation. He suggests the idea of “personal robot automation” for every human being, emphasizing that success will go to those who innovate in the organization, optimization, and customization of the process of getting work done with their bots. In Kelly’s eyes, this cycle allows us to become “more human than we already are”, increasingly free to explore the depths of our consciousness and purpose.

Circling back around to the shorter-term future and the beginnings of robot labor integration, Dr. MacDorman thinks service professions might be a good place for bots to start. He makes the claim that there are some things that other cultures, like the Japanese, do better than we do; for example, service personnel and shop owners don’t talk to each other, but instead focus solely on the customer. He remarks that while the Japanese seem to be an empathetic culture in general, this seems to be taken to the extreme in service jobs. By contrast, there seems to be more of the opposite experience in the U.S.; from the perspective of service personnel in particular, working in the industry can be a negative experience, especially when there are expectations to be the “perfect” service person and to unfalteringly treat the customer as king. “In general, I don’t think people like to be servants…especially Americans don’t like to be in that kind of role”, MacDorman states.

Robots could be programmed to respond efficiently to humans’ every need, without slowing down from fatigue or entering into emotionally triggered conflicts with customers. He points out that the U.S. economy has created a number of jobs in the low-end service sector, and that the economy would need to evolve rather quickly to ensure balance and enough jobs for workers who would be temporarily displaced. And what about when “androids” or similar intelligences enter reality, with the capability to support complex social interactions? We can make our predictions, but this is far enough into the future that we still recognize the wisdom in Dr. MacDorman’s initial response – we just don’t know yet.

Pharmacological Enhancements Prompt Closer Look at an “Authentic” Self with Dr. Alexandre Erler

Authenticity is an oft-used and seemingly coveted word in the English language. The idea, meaning, or essence behind “authenticity” might be perceived as common to the universal human experience, but how do we come to an agreed-upon definition of a word that seems to hold different meanings for different cultures and individuals? Even more puzzling in today’s global landscape: how does authenticity affect the idea of human enhancement, and vice versa?

Dr. Alexandre Erler of Oxford dedicated his dissertation work to examining the meanings and ethical implications of authenticity in relation to self-creation and self-transformation, including the use of human enhancement technologies. Across many minds, authenticity has taken on similar but fundamentally different meanings. Thinkers in the ‘romantic’ tradition, like Charles Taylor and others, frame authenticity as discovering a “true nature” hidden within that must be cultivated through a range of experiences. In slight contrast are thinkers like Harry Frankfurt of Princeton University, who view authenticity as taking personal responsibility for our actions and developing “higher-order volitions” that allow us to act on our highest values and priorities.

Dr. Erler draws on both of these historical lines of thought to voice two concepts that he believes represent a multi-dimensional view of authenticity – self-expression and preservation of the self. Amidst our rapid technological progression as a species, it seems wise to hold on to the thread of our identities, continuing to ask how our expression and understanding of ourselves as individuals might be affected by the future of enhancement. David Pearce, a philosopher and transhumanist, supports the abolition of all human suffering and argues for the use of technology to engineer “gradients of well-being”. But in the spirit of ethical debate, is the eradication of all negative experiences a “good” thing?

Dr. Erler explains his rationale for initially approaching the issue. He sees logic in carefully listening to people’s expressed concerns about authenticity in the face of enhancement, particularly in regard to mood or personality enhancers. He acknowledges that people may not always clearly state their thoughts on the issue, voicing concerns along the lines of “I will no longer be myself” if my mood or personality is directly altered; it’s understandable how the ambiguity of such statements leaves them vulnerable to criticism, with proponents of enhancement dismissing these concerns as illogical and therefore irrelevant. But Dr. Erler feels that these concerns, though not always adequately articulated, may still be relevant. “Maybe people are onto something when they raise these concerns, even though they are not clear enough in what they are saying.”

One possibly legitimate fear is that some uses of pharmaceutical ‘enhancement’ threaten valuable aspects of current human existence. The alleviation of all human suffering through pharmaceuticals raises various interesting questions – Is there any value in the experience of negative affect? Are all negative emotions created equal? Are there reasons to experience these emotions?

There seems to be a connected yet gray line between our emotions and ‘personality’ and the idea of an authentic, ‘true’ self, and the differences and connections between emotion and the self-concept of authenticity deserve further investigation. Nick Bostrom and Rebecca Roache give a sample of the many questions that arise when considering mood and personality enhancement. What counts as an improvement to mood or personality? Are there plausible standards to judge which is which? Even if you agree that certain changes are improvements, are drugs the answer? In their article, Bostrom and Roache reference Leon Kass’ inquiry as to whether the improvement in mood or perspective is somehow reduced when enhancement’s more immediate effects bypass the experience of working through emotional obstacles; how does the overall effect of either route shape the person’s mental state and thereby influence their ‘authenticity’? Is authenticity somehow realized in the doing – in the process of working through negative emotions, i.e. through the work of ‘self-expression’?

A commonly discussed case addresses loss – when a person loses a loved one, for example – and the inevitable post-experience of grief. If you could take a pill to reduce the experience of mourning, is this a ‘good’ enhancement, all things considered? Carl Elliott, a professor in the Center for Bioethics at the University of Minnesota, is one researcher who has done some interesting thought experiments in the field. Though Dr. Erler does not necessarily agree with all of Elliott’s arguments, he references his example of the imaginary accountant who takes Prozac, feels better, more like himself, and believes he has improved his life; was this really a benefit, Elliott asks, or has he perhaps lost some sense of ‘authentic’ self in the process?

The Prozac question was heavily researched and influenced by the work of psychiatrist Peter Kramer, who used case studies of patients’ experiences to suggest that Prozac might actually be regarded as an authenticity-generating drug in some cases. Dr. Felicitas Kraemer explored this issue further in her 2011 article published in Neuroethics. Dr. Kraemer looks at the idea of naturalness versus artificiality in emotions, suggesting that there may be valid, non-naturalist ways to assess emotions brought about by psychopharmacological enhancements. Dr. Kraemer references the work of Dr. David Pugmire, a key researcher in bringing emotional authenticity to the spotlight in emotion theory, who emphasized the natural or artificial origin of an emotion. In the eyes of Dr. Pugmire and other bio-conservatives, an artificial means of reaching a particular emotion automatically implies inauthentic results.

Dr. Kraemer firmly suggests that this notion, particularly based on the work of Kramer, should be abandoned in future research of authenticity, asserting that a causal link between artificial origin and resulting emotional inauthenticity does not prove valid.  In other words, there can exist both artificially-induced authentic emotional states and naturally-engendered emotions that are inauthentic.  An intriguing idea and one that is certainly still ripe for debate.

Dr. Kraemer does offer an important caution in conclusion: an uncritical and limited view of the enhancement of emotions via technology falls short of recognizing the complexity and subtlety of emotional life; it is still not clear whether the enhancement of positive emotions would lead to a good and “authentic” life. The link between emotions and authenticity, apparent yet distinct, remains a gray area. Dr. Erler addresses this directly; “I think the authenticity question…is a distinct one”. We might imagine people, he explains, who don’t respond in emotionally appropriate or accepted ways in social situations (this would vary from culture to culture), but the emotional or mood spectrum seems to be an issue separate from the idea of ‘authenticity’.

How we continue to define the concept of emotional authenticity will play a critical role in determining how society moves forward in its pursuit of human enhancement in the field of neuro-psychopharmacology. It requires further exploration and refinement of our internally constructed paradigm of what it means to experience life in an authentic fashion, all rational and irrational emotional experiences considered.

Human-Robot Relations and the Power Struggle – A Perspective from Dr. Kevin LaGrandeur

Author and professor Dr. Kevin LaGrandeur of the New York Institute of Technology was an early embracer of Internet technology in the 1990s, specifically interested in the utility of digital technology and how it could enhance education. Combined with his love of literature – he’s an English professor – these divergent interests led to an exploration of the presence of artificial intelligence in early modern literature, the illumination of the human psyche, and the resulting ethical implications.

In researching his book, Dr. LaGrandeur found references to automata dating back to the time of Aristotle and even before. He remarks that likely the first reference to an android-like being can be found in Book 18 of Homer’s Iliad. When Thetis visits the palace of Hephaestus, the reader bears witness to golden serving maidens able to function as humans, in addition to an “R2D2”-like robot that serves as a mobile serving platter and responds to commands. The ancient Greeks were clearly ahead of their time in considering the ramifications of humans co-existing with artificial intelligence – Aristotle refers back to the Iliad in discussing politics and the topic of slavery. This foundation led to the central theme – the idea of artificial slavery – in Dr. LaGrandeur’s book, Androids and Intelligent Networks in Early Modern Literature and Culture: Artificial Slaves. “Automated humanoids or humanoid-like robots, in their (the ancient Greeks’) mind comes up in connection to slavery every single time”, remarks Dr. LaGrandeur. The ethical ramifications are even more relevant today.

Humanities and Philosophy are, by nature, intricately connected; Dr. LaGrandeur points out that most, if not all, of western literature before the 19th century had some connection to the Judaic and Christian Bible and morals.  Literature wasn’t worth reading if there was no ethical or moral consideration.  As a modern humanist, Dr. LaGrandeur sees the responsibility of modern-day intellectuals as contributing knowledge that can be used beyond a particular domain or field, to apply what we know in order to help the world on a more global scale.

What we think of as new approaches or attitudes towards technology are not really ‘new’, says Dr. LaGrandeur. The kinds of technological preoccupations we obsess over today – broadening our horizons; making humanity stronger and more powerful; superseding nature – represent an archetype that extends back to early humanity, and that seems to be built into the human psyche. In the modern era, science fiction may be the most prevalent genre where these ideas are explored; yet once again we see examples arising even in ancient mythology – Prometheus steals fire, the most basic technology in the ancient world, from the gods; and those who try to step into the territory of the gods, remarks Dr. LaGrandeur, usually end up being punished.

Dr. LaGrandeur expresses the idea that there are two sides to the coin; as we expand our powers and enhance ourselves, we simultaneously reduce ourselves, giving away part of our agency and ability to affect the world. Volition and decision-making are two key terms, which Dr. LaGrandeur attributes back to the father of cybernetics, Norbert Wiener – “When we give away our decision-making capabilities to robots or intelligent technology, we’re giving away essentially our souls”. This is the fire with which our progressing society plays today. “Let’s say I had a robot butler, and I could delegate authority…how much would I actually give away?” Dr. LaGrandeur describes giving away responsibility after responsibility – first, tracking food in the refrigerator, then taking care of all finances, and while we’re at it – buying flowers for a significant other. The ultimate question is, where do you cross the line between delegation and dependence? Robots will make decisions based on programming, but there is no guarantee that as they evolve, their actions will always be in sync with human demands.

Dr. LaGrandeur references Bill Joy’s lengthy essay, “Why the Future Doesn’t Need Us”, which appeared in the April 2000 issue of Wired. Joy makes the astute observation that we always overestimate our programming abilities. In developing parts of these technologies, we are often unaware of how they may become part of a grander and more complex system. His reflections on his own software development contributions are embedded in his concerns about the implications of the ever-quickening pace of innovation. At the very least, shouldn’t we be asking ourselves how we can best co-exist with this new technology? And shouldn’t we proceed with a little more caution?

Thousands of years ago in literature, we see the first visions of humanoids that can do the things humans don’t want to do. Our forefathers expressed worries that seem to be echoed by today’s cautious forward thinkers – how do you guard against the roles of slave and master being reversed between human and machine? In a future epoch of co-existence with intelligent robots, how will our interactions evolve and be managed? This question hasn’t been overlooked by South Korea, which proposed a Robot Ethics Charter in 2007. The charter, apparently still being drafted, is intended to prevent misuse of robots by humans and vice versa, in anticipation of “a robot in every South Korean household by 2020.”

This code of ethics was inspired by Isaac Asimov’s short story “Runaround”, in which Asimov proposes his famous Three Laws of Robotics. An uncanny notion, though at the time he might have been a bit short-sighted in addressing only how robots should treat humans, not humans’ responsibility in their interactions with robots. These types of questions have led to the development of the field of “machine ethics”. Editors Edward Carr and Tom Standage at The Economist share an interesting discussion on the ethical implications and societal ramifications of increasing machine intelligence and decision-making capabilities.

But the question still remains: how long can humans maintain control over robots, particularly when it seems we need to relinquish increasing amounts of control through programming in order to give robots and machines more autonomy and advance human progress? Control is almost always an illusion, a realization surely known to those at the forefront of developing and using technology. Even though the majority of the human race has no malicious intent in the development of such technologies (quite the opposite), Pandora’s box has indeed been opened, and this will inevitably lead to misuse and abuse by humankind. Our foresight and intelligence will be tested by our collaborative ability to predict, hedge against, and counteract dangerous situations. A closing thought: “power practiced without learning to control it” is the resounding message in a story that goes back almost 2,000 years – and there’s no handing back the hat once the technology crosses the threshold of human imagination.

Extending Human Lifespans Through Biotechnology

Can we extend human lifespans or turn back the biological clock?

We live in a world where death is inevitable – or is it?  This question is the fulcrum on which the life extension movement pivots. Dr. Ilia Stambler is one of the forward-thinking advocates for reshaping the way humankind perceives the human lifespan and, in turn, the meaning of life and death.

Dr. Stambler’s interest in the subject of life and death sparked at an early age – six years old, to be exact. “When I first realized that I’m going to die and my grandfather is going to die, I came and asked, is it true, are we going to die? They said, you don’t have to think about those things, you just have to think about playing.” But even in youthful innocence, Dr. Stambler thought this was an issue that needed to be thought about and addressed. His ongoing interests led him to study bioengineering at the Moscow Polytechnical Institute and the Israel Institute of Technology, and to his current roles as Affiliate Scholar with the IEET, researcher at Bar-Ilan University in Israel, and activist at the International Longevity Alliance. (His dissertation on the “History of Life-extensionism in the 20th Century” from Bar-Ilan University is available online.)

Dr. Stambler identifies the father of the life extension field as the Russian scientist Ilya Ilyich Mechnikov, who in 1903 coined the term gerontology – the study of the social, psychological, and biological aspects of aging. The modern idea of probiotic diets stems from Mechnikov’s theory, which was based on the idea that auto-intoxication, which accelerates senescence, originates with intestinal bacteria, and that one way to combat this “self-poisoning” is through the introduction of acidified dairy products (probiotics). These ideas and others helped bridge the connection between anti-aging and life extension research and mainstream medicine. Senescence is the biological process of aging (when cells mature and stop dividing), and its discovery has led to great breakthroughs and ongoing research in the suppression of cancerous tumors. Dr. Stambler’s own theoretical leanings for life extension are in a biological direction; he would like to see life extended and maintained or enhanced in its present biological form. He mentions the birth and development of regenerative technologies, including stem cells, gene therapy, organ replacement, and tissue repair, emphasizing that wide use of these technologies is “still a ways off”; however, he believes that advanced biotechnologies will be available before any type of computational or nanotechnology is used for the purpose of life extension or enhancement.

There is the question, too, of how far life extension technologies should reach into the future – do we aim for another 50, 100, 200 years, or more?  The latter extreme of the spectrum might be termed Radical Life Extension. As an activist with the Israeli life-extensionist community, Dr. Stambler wrote a paper in January 2012 discussing activist events held in Israel, organized by the Israeli Transhumanist Community, and a newly launched journal, “Let Us From Now On”, the first of its kind in Israel dedicated to the promotion of radical life extension. The founders of the journal expand on their philosophy: radical life affirmation and the struggle with death; the power of modern art and science to affirm life; and zero tolerance for inter-personal, inter-religious, or inter-national conflicts in life. The event, held at a cemetery for symbolic reasons, was meant to raise the issue of life extension with the public. As Dr. Stambler quotes Danila Medvedev, head of the Russian Transhumanist movement: “…this is not just a scientific question, this is first of all a social task.”

Broader social acceptance of particular actions or lines of thought is doubtless a prerequisite to real and lasting progress. In August, the Pew Research Center released Americans’ responses to a life extension survey, presenting some interesting findings. The survey showed that Blacks and Hispanics, as well as younger people (compared to those 50 and older), were by a small margin more likely to view life extension as a good thing, though acceptance runs about 50:50 across the general population. One of the most telling sets of data is the public’s acknowledged primary concerns: that only the wealthy will have access (though the vast majority believed such technologies should be made available to everyone); that the economy will be less productive; and that longer life will strain natural resources.

These types of social effects are also a shared concern among bioethicists. Some scientists also speak to the deep-seated belief that life extension “plays with evolutionary fire” – that we shouldn’t mess with any human trait resulting from natural selection. Philosopher Bennett Foddy, deputy director and senior research fellow of the Programme on the Ethics of the New Biosciences at Oxford, believes this idea is a misconception. In an interview with The Atlantic, he acknowledges that people often perceive certain technologies as “new ideas” and are put off by concepts that seem outside the realm of present experience. But Foddy makes the argument that an animal’s evolutionary environment is largely responsible for how it ages, and this differs widely across species. For example, lobsters have the capability to renew their telomeres, the protective caps at the ends of chromosomes, much more frequently, while ours eventually unravel; this difference allows lobsters to live more than 50 years and experience very little aging. Where we need to focus our technology, Foddy asserts, is not just in finding ways to keep people alive but in maintaining or enhancing physical and mental capacities for a much longer period of time.

Addressing social questions and concerns is a necessity for continuing progress, but Dr. Stambler also brings up the ongoing questions of cost and accessibility, as well as the time and research efforts needed for trial and error to reveal what really works. Still, the biotech industry is definitely growing, drawing the interest of scientists, entrepreneurs, and venture capitalists alike. This progress is further reflected in the interest of the American government; in 2011, Congress introduced the bipartisan Regenerative Medicine Promotion Act, which would make it easier for researchers and private biotech companies to find funding. Nonprofits with similar interests are also popping up on the map; the Maximum Life Foundation, directed by founder David Kekich, has set a goal of reversing aging by 2033.

Despite the seeming progress, Dr. Stambler remains skeptical of deterministic predictions. He reiterates the importance of increasing general public concern, noting that whether we reverse aging by 2033, 2045, or sooner “all depends on us” and on how many minds and efforts pull together to address the issue. Dr. Stambler advocates that what we can do now to slow the aging process is rooted in thousands of years of holistic knowledge – the need to get good sleep, to eat a balanced diet, and so on – all things we still need to work at. In looking at the progress made thus far, Dr. Stambler remarks, “What I see is an extension of interest, more massive participation, and I do hope it snowballs, that it will grow into scientific advancements” that can be made available to anyone, regardless of race, income, or beliefs.

Machine Consciousness in All Its Flavors – Dr. Peter Boltuc

“It’s alive!” The famous words of Dr. Frankenstein still ring in our ears as we imagine an inhuman, giant green man sitting up straight, arms stretched forward.

Though the scene incites fear of the dangerous and dark power of the monster, more than anything I think it should incite a sense of responsibility. It seems reasonable to argue that the topics which hold the most moral weight are those which involve the destruction of, tinkering with, and creation of consciousness itself.

Dr. Peter Boltuc is a philosophy professor at the University of Illinois whose most active work remains in the field of machine consciousness. In our July 2013 interview, I was able to discuss Dr. Boltuc’s well-informed distinctions and predictions about machine consciousness and consciousness itself. That conversation, coupled with his article “The Philosophical Issue in Machine Consciousness,” was the fuel for this article.

Types of Consciousness

Dr. Boltuc’s work aims to distinguish various “types” of consciousness to further refine and understand consciousness itself. The three types distinguished in his paper above are functional, phenomenal, and hard consciousness (respectively: f-, p-, and h-consciousness).

Functional consciousness implies an intelligible response from a system. Phenomenally conscious beings are said to experience “qualia” or sense-perception (sight, sound, etc…). A hard conscious being can be said to subjectively experience those senses with awareness.

Dr. Boltuc claims that many machines might be said to be functionally conscious today, in that they can respond to stimuli in appropriate ways to attain an intelligible end. Right now, systems are being created around the world in pursuit of phenomenally conscious machines (such as the robot “Rolling Justin,” which has learned to catch a thrown ball). At present, there is still great debate over whether any machines actually are phenomenally conscious, and the general consensus seems to be that “hard” consciousness has almost certainly not been created within a machine.

Below is a video of philosopher David Chalmers explaining the “hard” problem of consciousness and his suppositions about what it means and implies for the nature of consciousness itself.

A Clash of Distinctions

Dr. Boltuc’s approach to understanding the various types of consciousness rests on his own definitions of the terms. As he puts it, some well-known philosophers of consciousness view p-consciousness “in the non-functional way only, while I leave the notion of p-consciousness to the internal functional uses and introduce the notion of h-consciousness as the non-functional concept.” Boltuc contrasts his view with Ned Block’s:

“(Ned) Block’s more functional understanding of p-consciousness seems to designate something of heuristic value — the first-person functional description. For instance, such definition is used in the description of LIDA robots, which are defined as what those authors call phenomenally conscious and distinguish from only functionally conscious AI architectures.

While the authors identify phenomenal consciousness with the subjective experience of “qualia”, in fact they claim that adding a mechanism to stabilize the perceptual field might provide “a significant step toward phenomenal consciousness in machines”. It seems to me that their view misses something but also that their project pertains to something of not only practical but also epistemological importance. Such a mechanism would enhance the quality of phenomenal consciousness, if the latter was already present (this way the step is significant) but I doubt whether the mechanism would help explain, or produce, phenomenal consciousness.

My criticism is shared by Haikonen who claims that the loss of stable perception “does not lead to the loss of any kind of phenomenal consciousness” but merely to some deformations within its content, and that thus “stable perception cannot be a cause for phenomenal consciousness”. The distinction between cognitive architectures that satisfy numerous features of subjects, such as stability of the perceptual field, and those that do not, is a helpful practical distinction that does deserve its own term, needed in AI and related disciplines, but it is not the narrow meaning of p-consciousness advocated by Haikonen, Chalmers and others.” (Quoted from Dr. Boltuc’s paper here)

Philosophy of mind may not be the easiest subject to wrap your head around, but the moral implications of the distinctions and understanding of consciousness are tremendously heavy. This is particularly applicable in a world moving more and more towards the enhancement and creation of sentience.

Might it be the case that “consciousness” will never be “solved”?

Could be. My hope is that the work of Boltuc, Block, Chalmers, and others will be furthered and tested by the findings of science, and will help us make meaningful decisions about serious ethical considerations of the present and the future.

Thank you again to Professor Boltuc; you can find his academic page here at the University of Illinois.

-Daniel Faggella

PS: Prof. Boltuc’s slides on Non-Reductive Machine Consciousness can be found here.

Artificial Emotion in AI: Implications and Predictions with Dr. Jordi Vallverdú

Dr. Jordi Vallverdú is a professor and renowned researcher at the Universitat Autònoma de Barcelona (Spain). His specialty is the domain of synthetic emotion. His current research is dedicated to the epistemological, ethical, gender, and educational aspects of the Philosophy of Computing and Science.

I’m happy we were able to catch up, and I’ve included my email correspondence with Jordi below:

1) What do you think will be the first implementations of synthetic emotion that will genuinely have a WIDE impact on human life?

With care and sex robots, besides real virtual-friend chatbots.

2) Do you believe that humans will want to alter their own emotional experiences? Many believe so. Please explain your position here, as well as any ethical considerations that you find most pressing (and possibly how we could avoid major risks).

Yes. In fact, people want to feel more and/or differently; this is the reason for drug and alcohol intake, besides natural ways of obtaining ‘brain-body rewards’ like dopamine, oxytocin, adrenaline, and so on. Consequently, any new way to add sensations will be welcomed by an important part of the world population.

3) If conscious beings experience emotion (positive or negative), then there are more serious ethical implications to imbuing “things” with emotion, are there not? It seems as if creating consciousness and creating emotional experience is the ULTIMATE ethical precipice of our human creation thus far. Do you agree with this? (Or maybe you believe that non-conscious objects and entities might experience emotion as well?)

Following your rational sequence, it should be concluded that every newborn means an ethical precipice, but that is far from true. Any social living entity must deal with biological as well as social constraints…The emotional interface will orient the kind of new ethics that machines can create by themselves (if conscious). For example: they could find us fun, like we think about dogs. Or truly friends, with more historical expertise.

4) You are part of the Steering Committee of the European Association for Philosophy & Computing (E-CAP). What does your membership involve, and can you tell us more about that org? I can find no info online.

I was a member of the ECAP Steering Committee. ECAP was merged into the international organization IACAP, whose goal is to advance the philosophical analysis of the computing sciences from a broad perspective.

5) Lastly and MOST importantly, there are so many perspectives about how emerging technology will revolutionize human experience and human potential, and it seems impossible that all the well-intending experts could agree. How can we, as researchers, businesspeople, politicians, etc… find a way to work together so that we bring humanity to a future that is aggregately BETTER, not aggregately worse?

Improving the future has been a possibility for each human generation for the last 3 million years. First of all, we must share common goods and agree on the basic values necessary for a complete life: economy and ethics. Besides, since the 19th century humankind has had the mechanisms to solve hunger and basic medical problems for the whole human population. New biotechnologies are not the solution: even the old chemical agriculture of the 19th century could provide food for every human in the world. But the economic system does not allow it, nor does the Western way of life. Technology is neither the problem nor the most important solution: ideas are.

A big thanks to Jordi for taking the time for this interview. To learn more about him and his research, see his University profile here, or visit his bio page on Lifeboat.org.

All the best,

-Daniel Faggella

Wendell Wallach – “Progress is Not Coherent”

Before Wendell Wallach’s present position as Lecturer at Yale University’s Interdisciplinary Center for Bioethics, he founded two computer consulting companies. He’s the author of Moral Machines: Teaching Robots Right From Wrong (Oxford University Press 2009), which focuses on laying the groundwork for the field of study in robot ethics / machine morality.

In my interview with Wendell, we discussed some of the factors that he considers to be the most substantial barriers to a positive future of technological progress, in our country and others. Wendell’s interests in robot ethics extend to policy considerations, the environment, and beyond, and in our short talk I was able to glean insights from many of his current areas of focus.

Ethics is Not the Job of the Scientist

In universities, never mind the world of business, the job and glory of the scientist lie in discovery. The expectation is that the person who actually does the innovating is not also to be its ethical arbiter. This is particularly disturbing given the powerful nature of emerging technologies such as nanotechnology, AI, robotics, etc… Nick Bostrom’s “black ball” concept explains the idea best: once we create a technology, we cannot simply tuck it away from human tinkering. When it’s out, it’s out – and consideration of which lines of research we undertake, and how those findings might be used, is something Wendell believes would do us good as a race. I am sympathetic to this perspective, as adding a “vigilant ethical barometer” to any field is likely not a bad thing. However, creating such a shift clearly involves powerful cultural forces and reward structures.

Nobody is Rewarded for Being “Interdisciplinary”

Wendell explains that although many universities tout that they encourage “interdisciplinary” studies, those who are rewarded are generally those who go a mile deep and an inch wide, breaking ground within their field, not between fields. Part of this is a natural focus on what we’re best at, part is momentum around the idea of “what researchers do,” but the reward structures also don’t generally suit the interdisciplinary researcher nearly as well as the one who sticks to a single field.

Technological Progress May Run Away from “Phronesis”

Given the “black ball” concept – and the inability to “hide” a technology once it’s been discovered – Wendell believes it’s exceptionally important to remain vigilant about the results and uses of technological progress. This might involve regulatory measures to avoid dangerous runaway research (which seems challenging, as such research in emerging technology seems to happen inevitably over time). It might also involve a kind of analysis period, with standards and procedures put in place on a national (or even global) level with regard to what technologies get rolled out and how. This seems particularly important for technologies that alter our human potential or human experience.

– – –

I’d like to say a special thank you to Wendell, who provided me not only with a great number of ideas, but with people to talk to as well. Wendell’s personal blog is called “Moral Machines,” and you can find a great podcast featuring his ideas on robot ethics at George Mason University’s “Surprisingly Free” conversations page.

Be well, and until next time,

-Daniel Faggella

Jim Karkanias and the Short Step to Human Augmentation

Jim Karkanias is a Partner in Applied Research at Microsoft. He joined Microsoft in 2006 in a quest to apply his information theory background to a venture called the “Health Solutions Group.” A firm believer in the transformative potential of emerging technologies – and an advocate for their progress – Jim is an advisor at the Northwest Association for Biomedical Research and a board member at the Washington Biotechnology & Biomedical Association.

I caught up with Jim about his perspectives on the transition of emerging technologies into the mainstream, and his thoughts on the technologies and trends most likely to bring us beyond our present human potential and human experience (what we refer to here at Sentient Potential as the “ultimate ethical precipice”).

Jim was quick to lay out the trends and perspectives that he considers to be leading us in the direction of augmentation, and below I’ll cover each:

Positivism in the Domain of Technology and Medicine

Understanding my background in positive psychology, Jim begins by talking about how a “perspective towards growth over amelioration” is something that’s affecting the medical world as well. He sees a number of reasons why our present medical model takes such a reactive approach today, but believes the tides are turning in some small circles to get a philosophy of positivism imbued into the world of medicine.

It may not be every average doctor with a practice to run who considers the further reaches and potential of the research and technology of the field, but folks like Jim (and teams like his – upwards of 400 people) who focus specifically on this subject are certainly furthering the field.

Biology (and Psychology) as a Black Box

Jim also talks about turning the “Black Box” of biology into a “White Box.” The “Black Box” implies that we don’t know exactly what’s happening in our bodies, or why, but we see correlations and effects and can “tinker” to find solutions that seem to work. Though certainly further along than we were 50 years ago, this experimental approach is still somewhat crude – and Jim believes that as we come to understand the true workings of our bodies and minds (through discoveries in various fields that allow us to truly understand systems), we’ll have a greater ability to build off of our natural systems and augment ourselves.

This “Black Box” approach is also what Jim believes has contributed to our “reactive” model of medicine:

“We have to wait for stuff to break to fix it.”

Essentially, if you can’t fix something until it breaks, you probably don’t want to build off of some “thing” that you don’t understand – that “thing” being health, and/or our physical and mental capacities.

Information Theory

In any given system, from the brain to the liver, the functions can be understood as bundles of information patterns (chemical and electrical in the case of us humans). When we “see things happen” in a system, an information flow is being manipulated. It can be broken down into numbers, and numbers can be processed mechanically and electronically – and it’s interesting to think about “health” as an information flow as well.
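To make that framing concrete, here is a minimal sketch of my own (a toy Python example, not anything from Jim’s team; the shannon_entropy helper and the heart-rate numbers are invented purely for illustration) that treats a stream of physiological readings as numbers and measures their Shannon entropy, one of information theory’s most basic quantities:

    import math
    from collections import Counter

    def shannon_entropy(samples, num_bins=8):
        """Estimate the Shannon entropy (in bits) of a numeric signal
        by binning its values and measuring how spread out they are."""
        lo, hi = min(samples), max(samples)
        width = (hi - lo) / num_bins or 1.0  # avoid zero-width bins for a flat signal
        counts = Counter(min(int((x - lo) / width), num_bins - 1) for x in samples)
        total = len(samples)
        return -sum((n / total) * math.log2(n / total) for n in counts.values())

    # Hypothetical heart-rate readings in beats per minute (made-up numbers):
    steady = [61, 62, 61, 63, 62, 61, 62, 63, 61, 62]
    erratic = [55, 90, 62, 110, 48, 95, 70, 130, 52, 88]

    print(shannon_entropy(steady))   # low entropy: a regular, predictable signal
    print(shannon_entropy(erratic))  # higher entropy: a noisier information flow

A steady resting signal scores low, an erratic one scores higher – a crude but quantitative way of reading a bodily system as an information flow.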

It Seems an Inevitable Transition…

When I spoke to Jim about his feelings towards the massive and drastic changes that emerging technologies seem to be presenting us with, he stated that overall he sees opportunity. “The old paradigm has to make way for the new one,” he says. All in all, although I feel both the tremendous potential for “good” and “bad” yields of such technology, its seeming inevitability makes it ideal to think positively (while of course keeping our eyes on the risks as well). Jim resonated with my notion of the efficiency/efficacy-based model of technology adoption by stating:

“People overclock their computers now, and they will overclock themselves if they can.”

True, and heck… I’ll probably be one of those people – though I’ll be darn vigilant about how I do it.

To learn more about what Jim is up to, check out Microsoft Health here.

Back to overclocking myself with a little java and getting to work – another big thanks to Jim for taking the time for today’s interview.

-Daniel Faggella

Tim Stevens of Engadget – Beyond Google Glass and the Potential Future of Augmented Reality

Tim Stevens is Editor in Chief at Engadget.com, one of the world’s top technology blogs, whose focus extends beyond product reviews to technology breakthroughs, interesting research, and interviews with tech experts.

This past week I caught up with Tim about his experience with Google Glass and his ideas about the future of augmented and virtual reality, among other topics (see full interview above).

Tim’s Experiences with Google Glass

As one of the first users of Glass outside the ranks of Google, Tim’s experience has had its highs and lows. The first few days, Tim reports, are “eye opening” (pun not intended, of course); having the device beep and pop up little notifications about emails makes it hard not to smile. Tim also mentions that GPS is another of Glass’s key features, and that “finding a restaurant is suddenly marvelously easy.” On the other hand, there were the droves of people stopping him on the street to try Glass, and the incessant beeping and notifications.

“After about a week or so you realize, there’s a pretty big commitment to putting this thing on your face.”

At home, with a display in front of him, Tim found that he didn’t really need the utility of Glass. It’s relatively limited in its utility at present, and Tim hasn’t had any truly “negative” experiences (i.e. people being concerned about him video recording them, etc…), and he’s excited about the potential and future function of the device (including when others get their hands on the API and can begin developing and expanding off of Glass).

The Future of Augmented Reality

One of the reasons I decided to reach out to Tim is that I saw one of his recent interviews with Thad Starner about how Google Glass could augment our memories and realities.

From what I gathered in my chat with Tim, Thad (one of the first promoters of wearable computing devices, who has been wearing computers since his MIT days in the ’90s) believes that wearable computing will mostly develop as a kind of assisting device for our present faculties of memory and mind. Basically, “augmented reality” devices like Glass will eventually turn into a kind of aide to our modern functions that allows us to access information more easily (directions, restaurant recommendations, etc…).

Thad holds that we shouldn’t really want or need anything beyond those functions, but Tim’s fascination with the technology takes him to a vision of something more immersive: “fully registered augmented reality.” Tim’s interest seems to stem both from sci-fi notions and from the real utility and usefulness of a kind of augmented reality that covers your entire field of view.

Tim imagines a world where he can bring up as many displays as he wants at any given time and interact with data through thoughts alone. The technology, as Tim acknowledges, is nowhere near where it needs to be, though companies like Oculus are making things exciting again.

My tendency is to lean on the side of curiosity and increased utility, as maximizing my human potential is pretty high on my priority list. I am wary, however, of the possibilities of such technology, though I believe the question of its development is a “when” question, not an “if” question. The ability to interact with data through our imagination and to move swiftly through virtual reality provides too much utility for humans to pass up – and its seemingly inevitable development has its dangers and its possibilities, though I like to hope we as a race will leverage it properly toward whatever the next best step is for our conscious potential.

I referred Tim to a video called “SIGHT” that explores some of the ramifications of this kind of technology in a frightfully believable way. I am of the belief that as soon as these technologies are possible, there will be a massive shift in human experience which will quickly take us beyond “human” levels of intelligence, affect, interactions, etc… I am no pessimist, but I think – as I believe Tim does as well – that these technologies require quite a vigilant effort towards the “good” as companies and labs develop them in the coming years.

Personal Security in a World of Wearable Devices

Tim explains that although he has never been accused of taking photos or video of people while wearing Glass, he suspects that this will become common, and that it will be a real and legitimate concern. Neither Tim nor I am particularly disturbed by being filmed or photographed in public, but Tim explains that not everyone is like him, and that some people take their privacy much more seriously (not to mention that nearly anyone can be found in an awkward or embarrassing situation that they’d rather not have spread around the internet in a matter of seconds).

It’s his belief that Google isn’t currently doing enough to handle these concerns, and that the simple addition of a sound or a “blinking red light” when recording or taking photos would make people feel safer around Glass, and would make it easier for Glass to gain commercial and political acceptance as a major step forward in personal effectiveness and connected intelligence.

– – –

I wanted to say thanks again to Tim for taking the time for the interview. Be sure to check out his Engadget Podcast online (where they cover a wide range of topics, from Gaming to Virtual Reality to Tech Trends), and the new Engadget hands-on future technology event called EXPAND.

Otherwise, keep your Google Glass lasers set to stun, and catch you next week.

-Daniel Faggella