Tinkering with Consciousness – The Great Ethical Precipice We Face

1) A Change in Our Nature, Not Just Our Gadgets

There’s much to be said about how different we’ve become from the society of even 30, never mind 300 years ago. Hunkered over our desks or our iPhones, we bury our heads in a dozen screens per day, and there’s plenty of research to suggest that we’re losing vital skills and capacities that our ancestors might have taken for granted (http://news.discovery.com/tech/technology-brain-intelligence-20130319.htm). Indeed, there’s evidence that search engines may be acting as a kind of crutch for much of our memory (http://www.nytimes.com/2011/07/15/health/15memory.html?_r=0).

On the other hand we very well may be freeing up cognitive “space” for more complex and varied tasks, and adapting to a world where handling a hundred data streams is hardly a choice anymore. In addition, technology might be argued to afford us more freedom to pursue our highest callings (as we see them), and explore new and rich elements of life. We may be putting up with a lot more “beeping” in our rooms and in our pockets, but no longer are we trapped on a provincial farm or in a hot and noisy factory to generate an income for ourselves (http://www.digitaltonto.com/2013/how-technology-is-transforming-our-brains/).

We may be losing some faculty of memory by using a search engine as opposed to a library, but we also have the video, audio, and text information of the world at our very fingertips – making memory potentially less relevant. We probably have weaker shoulder muscles because we don’t harvest our own crops or wash our clothes on stones. Whether technology is influencing the human condition in a positive or negative way is largely debatable – but where technology seems to be taking us in the future may not be.

The true overlap of technology and psychology isn’t something exogenous – that is, it is not about our external interaction with technology, or “tools” that we pick up and use. The actual overlap happens when technology and psychology literally intersect. When our technology becomes endogenous – or forms an inner part of us – we will have gone beyond an increase in our capacities aided by gadgets, to a literal change in our human condition itself. The technologies of brain-machine interface, nanotechnology and artificial intelligence (which humanity has just begun to explore) represent not just a new transition in how we live and work – but the beginning of a profound transition in life and consciousness itself.

I believe that we as a society are at a point in our development of consciousness-altering technology that mirrors the era of the Wright brothers’ first flight. We are surrounded by burgeoning technologies that may – sooner than we expect – literally alter all aspects of our sentient experience.

2) The Cusp of an Era of Technology + Psychology

For a concrete example of the technology and psychology overlap, look no further than the Institute for Brain Science at Brown University. Headed by Dr. John Donoghue, scientists at the IBS use a technology called BrainGate to connect tiny pads full of electrically sensitive spikes to the brains of paralyzed subjects. Using the patient’s own brain activity, hardware and software translate thought into robotic or digital “action.” Brain activity can move robot limbs to, say, bring a cup of coffee with a straw to the patient’s face for a drink – or move a cursor on a screen to play a video game.

2008 – BMI is used to control a computer cursor and play simple video games.

2012 – Cathy Hutchinson famously uses a BMI-controlled robotic arm to bring a drink to her lips.

Deep brain stimulation, another example, is a procedure involving pulse generators implanted under the skin near the clavicle. Like a pacemaker for the mind (http://www.nhlbi.nih.gov/health/health-topics/topics/pace/), its electrodes run directly through small holes drilled into the skull and into specific brain regions that require stimulation. This procedure was approved for treatment of movement disorders like Parkinson’s and dystonia in the early 2000s. Its applications have broadened over the past half-decade, however. Treating severe depression is one of the newer applications of deep brain stimulation, in addition to treating chronic pain and phantom limb pain. Future applications being explored now include treating obesity (http://www.ncbi.nlm.nih.gov/pubmed/18826348).

Across the Atlantic at Israel’s Tel Aviv University, rodent brain parts are being successfully swapped out with man-made replacements (http://www.newscientist.com/article/mg21128315.700-rat-cyborg-gets-digital-cerebellum.html#.UmK3aZRATUM). It should be noted that the cerebellum of a rat – albeit not very large – is no simple piece of neurological equipment. Receiving sensory information and coding it to transmit to the rest of the brain is a complex task, but it’s one that scientists have been able to achieve with silicon in place of living tissue. Without a cerebellum, a rat is unable to perform even basic motor reflex actions (in this experiment: blinking in response to a noise signaling that a harmless puff of air will blow in the animal’s face). With this new mental hardware, however, the rat is able to develop the reflex. “It’s proof of concept that we can record information from the brain, analyze it in a way similar to the biological network, and return it to the brain,” says lead scientist Matti Mintz of Tel Aviv. In the future this kind of technology may permit us to alter the memory of mammals – even to the extent of implanting memories, or drastically improving memory capacity and recall.

Optogenetics is another burgeoning technology, with research commencing all over the world, including at the Massachusetts Institute of Technology. Optogenetics involves the use of light – shined directly into the brain cavity – to trigger electrical activity in individual neurons or sets of neurons. How does light fire off neurons? It doesn’t… unless those brain cells have been genetically modified by adding a piece of DNA from photosynthesizing algae that responds to direct light (http://video.mit.edu/watch/optogenetics-controlling-the-brain-with-light-7659/). Led by Ed Boyden, the MIT team is using this technology to directly control the brains of rats to make them exhibit specific behaviors (such as running in circles), or even to recover eyesight by turning the right receptors on via optogenetics. This technology may allow us to experiment with turning on and off the many thousand different types of cells in the brain, and is intended to create better therapies for an entire host of psychological disorders which we do not fully understand today.

At the very least, these technological advancements seem to present interesting scientific horizons. Indeed, in media, news, and research publications, they are mostly portrayed as just that: “interesting.”

But there will in fact come a time (sooner than most expect) when we realize what we’re truly up to. There will come a time when we recognize that we are not simply catalysts in a transition to a merely “better” or more technologically advanced human world – but catalysts in a transition beyond “human” altogether. There will come a time when we grasp that we are in fact tinkering with the very bedrock of morality itself: sentience.

3) The Highest Stakes Imaginable: Altering Sentience Itself

We might refer to “moral gravity” as the relative importance of an act, occurrence, or event, based on weighing the various ethical factors at play. The practical application of the concept is (like so many other constructs) better explained by example than by purely conceptual reasoning.

For the sake of exploring our point, let’s use the relatively extreme example of a train conductor making a choice between the lesser of two evils:

Let’s suppose you’re a train conductor traveling down an open country stretch at 80 miles per hour, and ahead of you in the distance is a fork in the track. Getting closer to the fork you can see that each strip of track after the fork is occupied by different animals.  At this speed, there’s no way to stop the train before the fork, and one or the other path must be taken. You determine that – for whatever reason – the left track is overrun with gerbils and hamsters, and the right track is overrun with cats and dogs. Hardly believing how strange a situation this is – you get ahold of yourself and take responsibility for making an important ethical decision. So – what do you do?

For most people asked this question, cats and dogs would be spared ahead of hamsters and gerbils. Why is this? Because cats and dogs hold a greater capacity for sentience and intelligence. Cat lives represent a greater richness of conscious highs and lows, a greater grasp of the world and their experience, than do gerbil lives. It is this inner world – this capacity for volition and rich conscious experience – which essentially determines ethical relevance.

On a practical level, it would appear to be a being’s capacity for rich experience, relationship, and emotional / cognitive range that dictates the moral value of a given entity – the weight of its life on an ethical scale. I say this not to be cold, nor to bypass the millennia of ethical thought which humanity would do well to continue exploring. I say this merely to make a practical point – a point made clearly with the example of the train.

Think about it: what weighs on an ethical scale? There are massive events of great physical significance all over the universe, happening every millisecond, and none of them registers on a Richter scale of morality as we are describing it. Think of a black hole, or the formation of an entire nebula of stars. These events involve some of the most intense forces in the known universe, concentrated and powerful, affecting matter and space around them over distances that we as humans are almost incapable of comprehending. However, if no mind is there to experience or be affected by these events, what is their ethical gravity?

It’s the old “if a tree falls in the woods” question. I’d argue that things matter ethically if they matter to an experiencing agent. Consciousness, then, is the bedrock of positively all ethical relevance. For thousands and thousands of years, despite the advent of agriculture, the tools of modern society, and the technologies of today, our perceiving, feeling awareness is what has allowed events to “matter” at all.

If you took a prehistoric newborn, and somehow swapped him at birth with a baby born today, both would have the same innate capacities, and both would grow into and adapt to their environments, despite the differences therein. Hence, through the ages, for somewhere between one tenth and one fifth of a million years (http://www.nhm.ac.uk/nature-online/life/human-origins/modern-human-evolution/when/index.html), our innate mental faculties, our capacity to do, to feel, to learn… have been the same… but now that is changing. The significance of this change in our sentient and conscious potential itself represents a shift that cannot be overstated.

I believe the creation of and tinkering with sentience itself – on a grand scale – entails greater ethical gravity than any other ability or event in the known universe.

Everything from the food we eat to the smartphones we carry around to the buildings where we work only “matters” because it matters to our conscious mind – to us, the experiencer – and it could be argued that this “capacity to experience” alone grants ethical significance. Hence, we feel saddened both by the loss of a pet dog and by accidentally losing our iPad, but only the former registers on the ethical radar itself (as the iPad only matters to the experiencer, while the dog is believed to be somewhat sentient). No “ticks” on the ethical Richter Scale exist outside the consciousness of a living being.

We feel worse when we see a dead squirrel on the road than when we see a dead bug on our windshield for the same reason. The bug may be conscious only at some very low level, while the squirrel, we imagine, can feel and even “think,” and as a mammal like ourselves, may have baby squirrels in a nest who depend on it.

Similarly, I believe that the world will come to see that technologies which tinker with conscious experience and conscious capacity are by far the most ethically important (IE: have the highest ethical weight) of all inventions. Sentience, indeed, is the highest currency.

Growing hamburgers in petri dishes (http://news.nationalgeographic.com/news/2013/08/130806-lab-grown-beef-burger-eat-meat-science/) will certainly be a convenience and a change in our way of life, but won’t affect our happiness or the essence of our human condition any more than the toaster oven does. What I’m referring to here is an entirely separate level of impact on our actual sentient experience; that impact would occur through developments like a technology allowing us to determine our emotional experience at will.

Developing a robot that could help with household chores would be a great benchmark achievement of human ingenuity, and would certainly matter in our day-to-day lives. However, unless these robot maids were built with awareness and sentience, they could hardly be argued to “matter” in comparison to – say – a brain implant that allowed for total recall of all memories, the addition of entirely new sensory perceptions, or a completely immersive virtual reality.

Both the “petri burger” and the robot maid are exogenous changes to our condition, external changes in our world that would potentially make life simpler with regard to handling manual tasks or labor. Volitionally controlling emotional experience or memory, however, would make endogenous changes to our actual human condition; they would be primary because they wouldn’t impact the world we live in… they would impact the world we live through – the very lens of our awareness – our conscious experience.

The possibilities of technology that can alter human potential go beyond mere “adjustments” to our present condition. Oxford philosopher David Pearce believes that sentient life will be capable of transcending suffering in all forms: “I predict we will abolish suffering throughout the living world. Our descendants will be animated by gradients of genetically pre-programmed well-being that are orders of magnitude richer than today’s peak experiences” (hedweb.com/confile.htm). Other future thinkers like Google’s head of engineering Ray Kurzweil predict that humans will be able to upload the contents of their entire mind and potentially explore an infinite combination of euphoric and rich virtual experiences (http://www.dailymail.co.uk/sciencetech/article-2344398/Google-futurist-claims-uploading-entire-MINDS-computers-2045-bodies-replaced-machines-90-years.html). No higher stakes exist than when building upon sentience itself.

4) How is This Sentient Transformation Beginning?

With so many transformative technologies in our midst today, which are most likely to bring us to what philosophers refer to as a “post-human” condition? In addition – what might be the time-frame in which this transition takes place?

Although I believe that brain-machine interface and the development of significant artificial intelligence will be the primary drivers behind “tinkering” with sentience, I – like any other prognosticator – cannot make that statement with any degree of certainty. Similarly, I cannot be certain of any specific timeframes, though my inklings (and the guesswork of others – http://www.kurzweilai.net/how-my-predictions-are-faring-an-update-by-ray-kurzweil) tell me that irrevocable changes to the techno-human condition will occur within the coming 25 years.

Though we cannot look very far into the future, it’s interesting to analyze technological transitions of the past to glean insight into how technologies have historically made their way from ideation to global significance.

The Wright brothers first took flight in 1903, and by the First World War – hardly twelve years later – the entire civilized world had planes, and the first commercial flights were available. Forty-four years after the first flight, the sound barrier was broken. Twenty-two years after the sound barrier was broken, man set foot on the moon.

Are we to expect that our technological advancements will be any slower than those of centuries past? Could we possibly imagine what brain-machine interface, virtual reality, and artificial intelligence will be capable of in fifteen, never mind forty, years’ time?

In an interesting example, modern jet aircraft are increasingly becoming some of the most advanced systems for human-machine interface. Already, pilots don helmets with spatial detection systems that augment their vision – permitting them to see “through” their aircraft in literally any direction (aided by cameras placed on the outside of the jet itself: http://www.bbc.com/news/technology-19372299). Similarly, experiments are being conducted now with drugs (http://www.slate.com/articles/health_and_science/superman/2013/05/sleep_deprivation_in_the_military_modafinil_and_the_arms_race_for_soldiers.html) and even transcranial electrical stimulation (http://www.bostonglobe.com/news/nation/2014/02/18/air-force-stimulates-brain-waves-improve-performance-drone-operators/V8ZG5DEYze4lCoGlloq14H/story.html) to keep pilots awake and aware for long spans of time. Are we to assume that no efforts to permanently enhance our limited senses and abilities will be made? On the contrary, we ought to assume that research and experimentation (military or otherwise) will aim to bend our human capacities towards our ideals, and away from the “un-enhanced” biological limitations we are born with.

5) How Far Will Tinkering with Consciousness Go?

The chimpanzee shares over 90% of the same DNA base pairs with a human, but is a drastically different creature, physically and mentally. If a chimp had access to and basic command of a television remote control and a toaster oven, he could be said to be more “advanced” than his jungle-dwelling brothers, but not inherently different. His innate qualities would be the same, his core physical and mental faculties the same… and few people would argue that his moral status as a sentient being would be raised above that of his fellow chimps.

Alternatively, let’s say we have the opportunity to transform the chimp into a human being. Now, even without a cell phone, television, or any other technology, the chimp-turned-man is almost immeasurably more intelligent. Entire worlds of richer emotional experience, language learning, relationships, literature, art and science now open up – a far greater change wrought by a 1-3% genetic shift than by all of the complex technology in the world.

A chimpanzee with a television is not inherently valued any more than a chimpanzee without one. Transformed just a few percentage points into a human being, however, the animal enjoys a drastically increased moral status, because we value human life in all its richness far above the life of any chimpanzee. This is not because of our dextrous thumbs, or our motor cars and skyscrapers; the “richness” derives from our intelligence and our awareness.

Let us suppose that a method could be developed for enhancing human minds to have twice the capacity to learn, or to have an essentially unlimited and flawless memory, or greater volitional control and mindfulness of deep virtues, and a vastly greater capacity for creativity, making Shakespeare and Da Vinci relatively boring in comparison. Assuming that the moral goodness of such individuals remains at or above that of any given “un-enhanced” human being, would they then be rescued first from a burning building? If their industry, their art, and their methods of political governance were vastly superior to those of “un-enhanced” humans, then might we be morally obligated not just to save them first from burning buildings, but to create more of them, or to become enhanced ourselves?

Most people might cringe at the seemingly unrealistic thought of “enhanced” human beings holding some kind of moral weight that is superior to that of the humans from which they were created. After all – shouldn’t we feel a kind of reverence for our “ancestors” – or for a sense of equality between all sentient beings? But if no “ticks” on the ethical Richter Scale exist outside the consciousness of a living being… then it might be argued that the greater the depth and intensity of a being’s life experience, the more that being “registers,” ethically speaking.

I have an inkling that many of us would not choose to rescue an enhanced over an un-enhanced human from a burning building. Similarly, I do not believe that many chimpanzees would rescue a human before they rescued another chimpanzee. We like our own kind; they are like us, they represent us, we don’t want to turn our back on them, and we don’t want to look like a traitor when we go back to our domestic life (“Yeah, I felt like I had to rescue the enhanced guy and let the un-enhanced guy die – I know it’s a little weird but if you think about the moral underpinnings it’s really the right choice – can we still be friends even though you’re un-enhanced?”).

With our own kind there is a sense of felt relatedness, an affinity, and the enhanced individual seems to carry with it the real fear of something better or more powerful than ourselves. So who’s to say – maybe there will emerge an equality amongst us all, and the hard-won value of equality will stick with humanity and post-humanity long into the future. Why would it, though, when we are to enhanced humans as humans today are to chimps… or to mice?

My projection is that the continued dominance and acceptance of our presently popular set of ethical values – though hard-won and functional in our present condition (with positively all due respect to Locke and many others) – will be about as predictable as roulette – or as the post-human condition itself. In other words, it will be unpredictable to the highest degree.

If our values and capacities shift significantly in the decades ahead, I can imagine that tinkering with sentience and intelligence itself might yield some unique permutations of what “values” and “virtues” even imply. These ideas are themselves constructs, likely to be subject to much deeper probing and understanding by beings with 100x “regular” human intelligence.

Where might this transition eventually leave us? The answer to this question is positively unknowable, but a number of the world’s future thinkers – from fiction writers to philosophers to researchers – have contributed to the potential “snapshots” of the future that we might find ourselves in.

Peter Diamandis, founder of the X-Prize Foundation and co-founder of Singularity University, suggests that we will indeed become “God-like” in our capacity to access all knowledge and connect – possibly even merge – with one another into a kind of meta-intelligence.

The famous American inventor and futurist Ray Kurzweil posits that humanity will achieve fully immersive virtual realities by the 2020s. The “VR” that Kurzweil has in mind will not have its origins in a pair of goggles and a joystick, but in microscopic nanorobots that will be able to enter the brain through the capillaries and provide direct stimuli, creating a virtual experience indistinguishable from physical experience.

Oxford philosopher David Pearce foresees the potential of a world where all biological life is genetically altered to experience no pain, and only “gradients of well-being” in a process he refers to as “paradise engineering.” Pearce even mentions that we likely should not stop at stimulating our current cognitive systems, lest we be “trapped in [a] local minimum,” unable to access “the richer states of consciousness or the more sublime states of wellbeing.”

With brainpower a hundred times greater than that within our craniums today, would we expect anything about our world to be the same? Would we expect to have only the capacity of normal human eyes, or might we see infrared, or have nearly infinite zoom, or see through objects? Would there be a more effective mode of communication than speaking or writing? Wouldn’t some kind of system or connection emerge to replace this antiquated English, or indeed any language on earth? Would we expect to still feel jealousy, or to be subjected to the petty heuristics that plague our human decision-making processes? Would we still experience “boredom”?

Would we retain anything at all that is “human” – or would we eventually swim in an open pool of limitless experience? If an indistinguishably “real” virtual reality (maybe more “real” than anything we now know) can be ever-tailored to our desires and dreams, would we ever return to “reality?” Would we need “relationships?” In our present human condition, given our present “nature,” relationships are inherent to meaningful life and fulfillment (http://techemergence.com/wp-content/uploads/2014/04/INQ-Models-for-Fulfillment.pdf) – but why would this have to be the case in a future of limitless conscious potential?

Indeed, the far-reaching implications are by definition outside the grasp of the human mind… and it may be the “far out” appearance of post-human realities that keeps them from being taken seriously by society at large.

6) Will it Be Too Late? A Trumpet Call

My fear is that the writing on the wall will not be seen as writing on the wall, that the human forays into “tinkering” with consciousness itself will continue to be seen as little more than “interesting,” and that we might ignore the grandest ethical precipice of all time. I hope that shock or trauma will not be required for more minds to grasp the immense importance of these transitions toward our merger with technology.

At the time of this writing, scientists are able to control rodent and insect motion via brain-machine interface (and in the case of rodents, optogenetics). I often jest that it may not be until the emergence of remote-controlled cats that society “wakes up” to the tremendous potential of sentience-altering technology. Though I’m not one to prognosticate about technological trends, my gut tells me that feline cyborgs may be what catapults the consciousness revolution into general awareness.

I am no techno-pessimist, but I’m no techno-optimist, either. As much as possible, I think we should be aware of the potential issues and possibilities of conscious enhancement, but above all else we should pool our resources to make sure that this transition of all transitions is one that makes the universe an aggregately better place. Though we cannot tell precisely what “better” will entail, our work to discern “better” as our technologies progress will be the most important ethical explorations that our race can embark upon.

I would argue that few efforts will be more important than an effort to unite the positive intentions and technological powers of humanity in a connected, guided front towards the future of our conscious experience.

The trumpet-call for this mission has not yet been sounded, and the “writing on the wall” is yet to be seen as “writing.” I believe that it is in all of our best interests not only to unite those who ponder these issues today (from fields as varied as psychology, philosophy, economics, and beyond), but to bring these highest ethical concerns and all-important future considerations into the awareness of as many bright and well-intentioned minds as possible. To wait for a triggering event to occur on its own – whether “good” or “bad” – would put the all-important transition of consciousness in the hands of fate.

The United Nations was founded in 1945 in the wake of the horrors of World War II. Similarly, the founding fathers of the United States met together to discuss their formation of a new government – and secession from Britain.

In both of the above situations, there was a desire to unite in order to assess the situation and come to an ideal conclusion for the people (in one case of the world, in the other of a nation). In both circumstances there was a “reason” strong enough to make the meeting happen. With consciousness and technology beginning to mesh already – what will be “reason” enough for our well-intended experts to pool their thoughts and explore their policies? The importance of this transition, I fear, may be grossly overlooked – relegated to private company labs or university studies until its impact catches the world off guard.

Our mastery over materials, the improvement of global living conditions, and the myriad human inventions that have the potential to make life richer – none of these matches the potential ethical impact of tinkering with consciousness itself – with the bedrock of moral relevance itself. Our wielding of consciousness represents not just a higher impact on the ethical Richter Scale, but control of all the forces that move its needle in the first place.

Like our biological ancestors, we will develop into more aware and intelligent beings. The only difference between ourselves and our predecessors is that we will – to some extent – direct this transition ourselves. Though we can’t possibly grasp all of the implications and possibilities, exploring our options and opportunities – and indeed exploring ethics itself – will be our most likely path to finding an aggregately “better” future ahead of us – while being caught off guard would be a most unfortunate circumstance.

For thousands of years, “Playing God” has been nothing but a metaphor. Now, the gateway to a Rosetta Stone of consciousness may lie within our grasp. If sentience and entire worlds of awareness can be captured, crafted, and created – shouldn’t we be more than obligated to guide this process towards the best ends that we can determine?

Considering who controls these technologies and transitions beyond the present human condition is of the utmost importance. What – for example – might be the next step past “Google Glass” in the augmentation of our senses? The horrors of animal experimentation in the cosmetic and medical fields seem to pale in comparison with the potential ethical nightmares of experimenting with human consciousness itself. Would we have private company CEOs, ivory tower philosophers, government agencies, or Silicon Valley techies control this transition? I’d argue that because we don’t know where we’re headed – and because of how very complex this subject matter is – our best bet is a well-intentioned contribution from all of these groups.

The benefits of better vaccines seem to pale in comparison to the potential benefits of controlling or indefinitely extending our conscious experience. Who will run those most important experiments? How transparent will those companies, governments or entities be about the uses of those technologies? What could or should we be doing now to ensure that humanity’s initial steps forward in sentient potential are beneficial – and not tragically detrimental?

Any significant step forward in the technologies mentioned at the beginning of this article (optogenetics, brain-machine interface, hippocampus replacements, etc.) could imply drastic changes to our conscious condition. It is now that the writing on the wall must be read – and I argue that uniting our intentions and expertise has never been more called for.

-Daniel Faggella

Human Ideals Will Tear Us From Humanity

I. Ideals Have Taken Us Here

Could the yearning to improve the quality and efficiency of our daily human experience also bring us to abandon much of what we consider “human”?

Before making such a bold proposition, let us look at modern “first world” society in comparison to life in Europe in the year 1200. In many respects, we of the 21st century could be considered “superhuman,” or indeed “inhuman,” from a dark ages perspective. We fly in the air, and we communicate across the planet and through the ether. We have set foot on the moon itself. We have cured many of the world’s worst diseases, predicted earth’s weather patterns, and harnessed the sun to power many of our machines. Under our skin we have pacemakers, artificial hips, and cochlear implants – and parts of some human hearts have been replaced with plastic. In society today, these additions are not occasional extras bolted onto our “human” form, but supporters and aids, prolongers and enhancers of what “human” life is.

A forward, progressive yearning has taken us to the destination we call “now.” Our own dreams and ideals have pulled us beyond our past conditions to a state which thinkers and doers now deem to be better, happier, more efficient, more complete.

“I will build a motorcar for the multitude” said Henry.

Thomas exclaimed: “We will make electricity so cheap that only the rich will burn candles.”

Alexander yearned to build “the method of, and apparatus for, transmitting vocal or other sounds telegraphically.”

“If birds can glide in the air… then, why can’t I?” asked Orville.

I’m sure you easily provided the last names of these individuals, because all of our lives have been touched by these very thoughts, and our way of life is indeed molded by them. Because they dreamed beyond what was presently possible – and turned ideas into realities – the names of Ford, Edison, Bell, and Wright will not soon be forgotten.

Wonder and curiosity drive our visions for what a different, better future might be. Cars have replaced horses, audio recordings allow for an infinite number of repeat performances, vaccines prevent epidemics, cell phones connect the most distant people, and a human fetus can be examined and treated through all stages in the womb.

My American Oxford dictionary defines “ideal” as: “(n) a standard of perfection; a principle to be aimed at,” and “something existing only as an idea.” Ideals help to take our thoughts from “what is” to what “could be.” Ideals are subjective, and might seem possible (IE: the ideal form of transportation might imply a vehicle which burns no fossil fuel), or impossible (IE: “uploading” human consciousness into another computational substrate to indefinitely store our mind and experience). These ideas of something “better” precede all vision – and action – for the development of cultures and technologies.

The ideal visions in our past have driven us to transcend the challenges of that past – from health to communication to transportation and beyond. No boot could have been planted on the moon’s grey surface, no organ transplanted from one human to another, unless someone had thought about it first, and posed the far-out question, “what if?” Centuries of these “what ifs,” corresponding visions of possibility, and technological developments have taken us from hunters and gatherers to hackers and jet setters. Compare our lives to those of our medieval ancestors, and we are like monsters, or gods.

II. Relative Technologies and Relative Standards

Today, however, most of us don’t feel like monsters or gods. We were “born into” many of the technologies in use today, or our world evolved alongside their development – they are our norms, and we can’t see how outlandish those norms are when compared to those of our ancestors. “God” and “monster” only appear through a particular perspective, like a rainbow which is invisible unless the sun is behind the observer.

It is the contrast between a medieval perspective and the perspective of today that makes the latter stand out as “god-like” or “inhuman.” Gradual change that produces incremental differences does not incite the same shock. When confronted with large changes in our current technology, we resist the difference – like the frog that, when dropped into boiling water, will struggle to escape, resisting the drastic change in temperature. Placed in a pot at room temperature, however, the frog will not notice gradual changes of heat, and so will sit still – adjusting his body temperature to the water around him as it rises – even to the extent of being boiled alive.

With each generation, a new set-point of “normal” is established. To us, as for our ancestors, there seems to be a reasonable limit, a ceiling to technology’s development. Some thinkers – however – are capable of extending technology and culture into future applications that the previous generations were fearful of, ignorant about, or deemed blasphemous. They prove the past generation’s “ceilings” of development to be an illusion.

Stem cell research – for example – was initially seen as unacceptable by a huge swath of the American population, and today has a much greater general acceptance. Each generation alters the level of “acceptability” of a technology, a cultural trend, or a way of doing or being. In 1950, the “morning-after pill” would have been quite a controversial pharmaceutical technology – and today it is not. Today, it might be said that human cloning occupies the “no man’s land” for acceptable technologies, and some of us likely see the imaginary ceiling once again. Yet, cultural and technical forces proceed. It is in the nature of an “ideal” to exist beyond present conditions and to build off of or entirely neglect past notions.

Visiting Mars – never mind colonizing it – might have been a wholly absurd notion in the 1950s as well, and today companies are planning to send the first humans to Mars within a decade (http://www.mars-one.com/). Uploading human minds into computers might not have even been imagined by previous generations, though many experts posit that 30 to 40 years may be all it takes to house consciousness inside a machine (http://news.discovery.com/tech/when-will-humans-upload-their-brains-to-computers-130517.htm).

With our perspective grounded firmly in the present, it’s difficult for us to think of just how many of those “impossible” circumstances we as a society have caught up to – and blown past – in just the past few decades. In the early 1950s, major innovations included Dr. Jonas Salk’s successful polio vaccine and the telephone answering machine from Bell Laboratories. From a 1950s perspective, how many of the now-mundane achievements of humanity would have been deemed absurd, literally impossible, or obviously not morally permissible? From surrogate mothers for hire, to internet pornography, to the Mars rover, a bygone era’s notions of “impossible” and “blasphemous” are more than occasionally noticeable. We’re swimming in them.

Yet, our automatic assumption is that OUR future will somehow not be as groundbreaking, and OUR standards of “blasphemous” or “impossible” will stick firm. We might think to ourselves: “Maybe prosthetic limbs will become more and more the norm, but surely there will never be entirely prosthetic parts of the human brain.” “Maybe we will improve the artificial intelligence in everything from our cell phones to GPS systems, but surely nobody would allow an artificial intelligence to gain sentience and take any sort of leadership or governmental role above humans… machines will always be only aides to man.” “Maybe deep brain stimulation will be used to help the truly clinically depressed, but surely an electrical alteration of our emotional states would never become popular as an enhancement for ‘normal’ people who want to just feel happier.”

How much easier it is to chuckle at the limited notions of past generations than to begin to imagine just how many of our own beliefs will be laughable in the coming decades. In the present, all of our notions seem safe and rational, but we have no better idea than our ancestors as to which ideas will remain useful and which will go the way of sun-god worship and phrenology.

III. We Think We’re so Wise

Let’s say I told you about a science fiction film in which huge portions of society decided to permanently “plug in” their minds to a virtual reality device as opposed to continuing to exist as humans in a physical reality. You might deem the idea novel or ridiculous – as it implies cultural and technological shifts that we haven’t come close to seeing yet. You could write that possibility off rather quickly, it would seem, as there’s no reasonable way that such a shift would occur in your lifetime, or maybe even that of your grandchildren.

Hindsight shows that our grandparents’ assumptions were clearly misguided, naive, or uninformed. But surely, given our level of technological development, we have better perspectives for making informed judgments about what technologies will or will not come into existence – don’t we? How often have we uttered such words of certainty about technologies that we now take entirely for granted? How many respected thinkers and perfectly reasonable people have had egg on their faces for boldly assuming “certainty” in setting limits to human innovation?

Guglielmo Marconi was thought to be insane when he suggested to the Italian Ministry of Post and Telegraphs that wireless communication might be possible.

The four minute mile was deemed impossible even by some scientists of the 1940’s and 1950’s until Roger Bannister broke the record in 1954.

In 1936, the New York Times read: “A rocket will never be able to leave earth’s atmosphere.”

In 1895, Lord Kelvin, head of the Royal Society, was quoted as saying “Heavier-than-air flying machines are impossible.”

The associates and engineers of Ford Motor Company told Henry Ford that the V-8 engine would be literally impossible to construct in one engine block. 

Sir John Eric Erichsen, dubbed ‘Surgeon-Extraordinary’ to Britain’s Queen Victoria, bluntly stated in 1873 that “The abdomen, the chest, and the brain will forever be shut from the intrusion of the wise and humane surgeon.”

So, what are we certain about? Are our hunches so much better than those of our grandparents? Why do we habitually block off these mental pathways for exploring radically different futures? Unfortunately, we are victims of the same mental heuristic, the same doubting tendency that may leave us unprepared to handle the practical and ethical concerns posed by the next wave of revolutionary technologies (which we cannot possibly predict).

IV. Culture Will Protect Us – or Will it?

The speed of technological advancement, though, is not the only factor that will determine these potentially radical futures. Culture, policy, and politics also play a part. Some might argue that even though the Wright brothers did fly, it would not have amounted to any kind of change in our day-to-day lives if the government hadn’t allowed for commercial or personal flight, or if society hadn’t at least in some way seen flight as useful and acceptable. Politics and cultures may be the forces that impede the “slippery slope” technologies that seem most likely to disrupt our notions of present-day human life.

It may very well be that society will simply not permit replacement brain parts in human beings – even to cure disease – because of the perceived risks of this kind of technology. Even with all the computer intelligence in the world of 2050, governments may completely prevent an artificial intelligence from having a say in matters of politics or leadership. It may very well be that society will pass laws against the use of technologies that allow us to modulate our own emotions with the push of buttons, or against a kind of immersive virtual reality that rivals actual experience.

Some technology, it seems, will inevitably be halted or slowed by society’s standards and norms, and certainly by policy and law (as we’ve seen with cloning and stem cell research). However, we should not allow ourselves to make the same insidious mistake of “certainty” that thinkers and societies have made previously with technological progress – and end up with the same kind of egg on our faces.

We can admit it is nearly inevitable that mind-expanding cerebral implants or brain-machine interfaces – for instance – will be forced to comply with some kind of regulation from government agencies, and it seems that there are human safety concerns that would warrant this scrutiny. Cultural forces will dictate that some technologies or applications are inevitably deemed by some as “unacceptable,” such as chemical weapons or human cloning.

Some technologies and applications, however, will be “unacceptable” in the same way that interracial dating, stem cell research, deep brain stimulation and sex-change operations might once have been seen as “unacceptable.” Imagine just how unacceptable a sex-change operation would have been considered in Christian medieval Europe, never mind the early 1950s, when Christine Jorgensen became America’s first widely publicized sex-change case. Though it is difficult to imagine from our present vantage point, ever so many of our notions of “unacceptable” will be trampled over by time, necessity, and the ideals of new generations.

As many examples of previously “impossible” cultural shifts exist around us as do “impossible” technical shifts. Prosthetic limbs and interracial couples are in all ways “acceptable” in much of the modern world. Similarly, even sex-change operations are today relatively commonplace, and carry a minute fraction of the moral stigma that once accompanied the procedure. An analysis from the University of Michigan estimates that roughly 1 in 2,500 men in the United States has undergone surgery to become a woman (http://ai.eecs.umich.edu/people/conway/TS/TSprevalence.html).

In addition, censure or aversion to a technology in one nation does not mean that the same development will fall flat altogether. Not only do “times change” (again, think sex-change operations in the USA), but what is not permitted within our culture may easily and swiftly be adopted elsewhere – and so flourish there instead of here. How many seemingly offensive and unbelievable acts are carried out as common cultural practices all over the world? From female genital mutilation, to stoning, to coming-of-age ceremonies, the smorgasbord of the world’s traditions shows a massive and wide range of “acceptabilities” outside of those that our lives are accustomed to. This doesn’t just translate to cultural oddities like stoning – we also see Scotland permitting cloning, and stem-cell research flourishing in Korea.

If it were possible to “upload” new information – or even new senses – into our brains – wouldn’t it be reasonable that some nations would adopt this technology rather quickly? If augmenting human memory with implants becomes illegal in the United States, it may be legal in Denmark, or in China, or in Japan. Would we permit ourselves to be left behind? Would an “arms race” of human enhancement begin?

V. Our Responsibility is Vigilance

Not only does it seem as though our minds aren’t inclined to foresee groundbreaking change, but our “certainties” also serve to help us sleep at night. The universe seems a lot less daunting when we believe that these most disturbing alterations to our present norms are just science fiction. Whether it is a natural disposition to think the future will be like the present (as Peter Diamandis and others happen to believe: http://bigthink.com/in-their-own-words/the-difference-between-linear-and-exponential-thinking), or a head-in-the-sand perspective to preserve our sense of comfort, it would seem more responsible to instead seek a more truthful perspective from which to handle our future head-on. It is a future that is today being guided and facilitated by our own impetus toward the ideal, by the same visionary force and primal desires that brought our human race to where it stands today.

It was human questing for the ideal that took man from horse and carriage to motorcars; that brought us cellular phones, email, heart transplants, freedom of religion, freedom in choosing a mate, and so many other societal and technological shifts. Where will the ideal take us next? When technologies become available to literally alter human sentience, or create beyond-human general intelligence from silicon, will we be prepared for the consequences?

Our desire for the ideal is bringing us now towards arguably better – and almost certainly more efficient – modes of… everything. Faster travel, instant communication, fixing memory problems, treating depression, curing the world’s deadly diseases and even staving off death altogether. As a result, we see the “Hyperloop” (http://mashable.com/2013/08/16/elon-musk-hyperloop-mashtalk/), Google Glass, replacement brain parts, medication to safely influence emotion, and research to attain indefinite life extension (http://www.sens.org/).

Each of these ideas is the airplane, motorcar, or cotton gin of its own era. Which will become realities, how they manifest in our societies, and when… is all for time to tell – and for our innovators and policy makers to determine. The difference between motorcars and replacement brain parts, or a moon landing and a human-level artificial intelligence, may simply be the era in which we live, and our level of technological development. It may well be that “automobiles” were more monstrous to generations past than brain prosthetics will be for us in five years. It may well be that a moon landing was a more god-like feat than our eventual creation of post-human intelligence. There is no certainty whatsoever about which technologies will develop first, or how – but there should be certainty about our responsibility to steward them into reality in a positive, careful way.

VI. Pliable “Human”

“Monstrous” and “inhuman” transitions have not prevented the ideal from taking tangible form in the past, and we should assume no safety from similar developments now. When Benjamin Walt (http://articles.timesofindia.indiatimes.com/2013-10-02/mumbai/42615303_1_stimulation-dbs-depression) went under the knife to treat severe depression by having electrodes implanted inside his skull – the ideal was to feel better. The fact that deep brain stimulation is an “experimental” procedure apparently paled in comparison to the potential benefits of a better conscious experience.

When Cathy Hutchinson (http://cnettv.cnet.com/60-minutes-braingate-movement-controlled-mind/9742-1_53-50004319.html) had a stroke and was left mentally sharp but trapped in a body incapable of movement or speech, she aimed to do whatever it took to interact with her world and loved ones again. She opted to have a device implanted in her motor cortex to allow her to drive a wheelchair, control a cursor on a screen, and even bring a cup to her mouth to drink with a robotic arm… all using the power of her thoughts alone.

Benjamin and Cathy are human beings with machines interacting with their brains, enhancing and/or recovering their mental and physical capacities. These treatments began as “what ifs,” too – dreams that some would have supposed might never happen. Who could say that we aren’t already inhuman, monstrous, or god-like compared to our ancestors?

Our sense of the ideal drives action and creates the future from our imaginations. Different versions of the “ideal” may or may not place a highest value on a conception of “human.” If Benjamin believed that a happier day-to-day life was more important than preserving a body unencumbered by machine parts, then making the choice for brain stimulation was an obvious one. I’m sure Cathy had some notion of “human” that clashed with the image of a hole drilled in her skull and a computer chip inserted into her brain tissue with dozens of tiny metal spikes. However, escaping her incapable body to interact with her environment was a priority that superseded the desire to stay “human” or “regular.” It’s easy to see how either Benjamin or Cathy could argue that the pursuit of a better and more capable life was the most “human” thing that they could do… and who could argue with them?

This brings us to a potentially troubling perspective on our condition – a perspective which also happens to be a requirement for our honest, vigilant, and open-minded transition to the future: “Human,” as a notion, is not concrete, and is as pliable an idea as any other. It is an ephemeral idea, an un-graspable concept that has already altered over centuries, and may be completely re-invented with the advent of tomorrow’s technology. Our perceptions of communication, of travel, of speed, of “normal” have all been drastically altered by the passage of time and the procession of technological and cultural changes. There is no safety – or indeed sanctity – in a present notion of “human,” and so there is no telling what the future might permit. The only “certainty” to be found may be in the fact that the “human” idea is bound to the same fate of alteration as all others, and that we will all be a part of that transition. As it is, few people have accused Benjamin or Cathy of being “inhuman”… and what will happen when their procedures are commonplace? When will this same acceptance transfer to “humans” with wires or chips affecting their brain, behavior, or personality? The lines continue to show themselves to be grey, the slopes slippery, the notion of “human,” pliable.

Even if “human” were to imply the un-enhanced, biological human body, it seems as though we have more than ample evidence to suggest that – even if valued – this concept does not necessarily rank at the top of our notions of the ideal, and we may move beyond it altogether. When I say that “Human ideals will tear us from humanity,” I am using “humanity” to represent the notion of “human” that we hold today. Our momentum and technologies are taking us not just to fancier, smaller, more capable gadgets… we are moving to an entirely new human condition within our lifetime. It is a circumstance where our minds and bodies will themselves be altered – where we don’t just break from humanity with regard to our “tools,” but with regards to ourselves.

VII. Stewards of Consciousness and Intelligence

The urgency of our present condition comes not only from the gravity of the situation on the whole – but from the speed of its approach. Our future will not just come “faster” than that of the middle ages… it will hit harder. We need to be taking steps now to understand the ramifications and implications of the technologies that will shape humanity, and to guide this transition with caution and with collaboration. More likely than a malicious use of tomorrow’s technology is the risk of our race being ill-prepared for just how drastic the shift will be. We may be unready to steer with the aid of our technological and ethical compasses as we venture to the important “ports” of the future.

Our unfortunate tendency is to ignore or brush off notions which prove too different from our present condition – but a blind eye turned to the real possibilities of these major trends and trajectories is a blind eye turned to the human future. We have our grandparents’ tendency to do this. Kodak was put out of business by its underestimation of digital photography. Thalidomide was distributed to pregnant women without anyone calculating its horrific impact on an unborn fetus. What needs to be considered when 3-D printers can produce everything from guns to human organs? What would we need to plan for in considering memory or intelligence augmentations to our very brains?

Our own technological explosion will not necessarily lead to our being boiled alive like the frog – unless we, too, lack an appropriate understanding of, and therefore an appropriate response to, our conditions, and ignore the constant, incremental changes that shift our predicament. The only thing worse than a frog boiled alive out of poor perception is an ostrich with its head in the sand, willfully ignoring real concerns – or acting on cowardice. Ignoring what seems too “far out” now would mean being horribly ill-prepared for a transition that requires all the attention and preparation we can muster as a united race.

Our future is the future of our ideals.

It is these very notions of the ideal – what to improve and create to achieve what is best – that will pull us farther and farther not only from our present human condition (as did automobiles and electricity) but from what is “human” in the first place. I am not foretelling the “crash” of our proverbial ship in the future, but merely calling us to unite in this cause, gripping the wheel with both hands and making sure we’re looking forward. This is not a task merely for scientists, for philosophers, for businessmen, or for governments, but for all of society. Though it doesn’t require that we hold the same beliefs, or use the same methods, it does require that we share the same intention and purpose. The time to unite our efforts is now – when we hold the responsibility not just as providers for our next generation, but as stewards of consciousness and intelligence itself. This collaboration, then, is not just geared towards discovering technologies, but towards discovering the best ways to introduce and implement them as we swing forward into a transformative new era.

Will we have no say in the direction of our race’s future? Will we ignore any deviance from our own perceived norms and miss out on guiding the course of the greatest changes that have ever tested humankind? I hope not.

How can we prepare now for further shifts away from what we “know” to be “human” – and how can we collaborate to ensure the safest and best transition forward? Only with open eyes, and only with resources dedicated to understanding the massive risks and untold opportunities of altering our condition – a process that requires our united vigilance and best intentions now.

-Daniel Faggella

Image credit: fineartamerica.com

Non-Android Humans May Still Be Enhanced – an Interview with Dr. Thomas Ray

Dr. Thomas Ray is a Harvard-educated doctor of Biology, and the original researcher behind the Tierra Artificial Life project. Tierra received major media coverage all over the world as one of the most promising forays into generating “evolution” in a digital system. Today, Dr. Ray’s research is homed in on the study of the human mind – and our conversation centered around his thoughts about human enhancement and machine consciousness.

Machines May Never Live

For Dr. Ray, the affective portion of the brain and the cognitive portion of the brain (he uses the two separately in order to draw a distinction) work in unison to create the amalgam that we call consciousness. He believes that animals with only a primitive brain (lacking the capacity for language, logic, and reason) are capable of a certain, limited extent of consciousness, while humans are capable of a higher and richer variety.

[Continue Reading]

Building Bridges: Humanitarian Efforts to Artificial Intelligence – Dr. Soenke Ziesche

In the push towards bridging humanitarian efforts and advances in computing and artificial intelligence, there seems to be a minimal number of thinkers AND doers. Dr. Soenke Ziesche recognizes the imperative need to better integrate these worlds and sees potential implications both for those in the humanitarian fields and for those in the AI sectors.

 

Having worked with the United Nations since 2000 in the humanitarian and recovery sector, Dr. Ziesche speaks from a grounded perspective. When asked to speak about the overlap between these two fields, he opts to take another perspective: that there exists quite a gap between humanitarian issues and the field of AI. Applications of AI have mostly been limited to Western countries. While there may be isolated attempts to apply technologies for humanitarian ends, a tangible bridge that unites the two has yet to be built. Perhaps the humanitarian field has been too conservative; or perhaps the AI field has had too narrow a scope in its outreach efforts. Likely it’s some of both, along with the fact that there are not many people actively involved in both fields; often where there is a gap, there is a lack of communication between two groups, and this seems to be the case between humanitarian activists and AI scientists.

 

Granted, there are many components of humanitarianism, as with any field. The UN’s homepage for Humanitarian Affairs includes a list of thematic issues – everything from demining to global food security to protection of civilians in armed conflict. While any of these areas could potentially be helped by advanced computing and developing AI technologies, Early Warning and Disaster Risk Reduction is an area that often demands greater attention. Unfortunately, even though some gains have been made in this area, there remains a need to leverage technologies, particularly for communication and coordination between crisis managers, aid workers on the ground, and victims of a disaster.

 

Mobile phones are invaluable tools in relaying information from disaster areas.  Click the link on the UN’s page for the Global Disaster Alert and Coordination System (GDACS) and find a homepage that shows a real-time list of current emergencies and alerts from around the world.  A page for Mobile Technology for field operations states that smartphones are becoming widely available, even in remote countries, and have the ability to provide important information from disaster areas; however, the implementation and management of such technology presents manifold obstacles.  Beginning in 2011, a conglomerate of organizations began working towards solving some of these issues, including development of user-friendly tools for different populations; a common application programming interface (API); a system for culling and processing an array of information; and a plan for how and when to promote tools in the GDACS community.

 

Dr. Patrick Meier is a lead thinker and navigator in the area of applying technologies for early-warning crisis and humanitarian response and resilience. Presently Director of Social Innovation at the Qatar Computing Research Institute (QCRI), Dr. Meier has written extensively about, and is currently working towards, a research-based framework for an information system for crisis response and management that he dubs Next Generation Humanitarian Technology & Innovations. There do exist humanitarian donors and organizations with investments in technology – DfID, ALNAP, and OCHA are a few that he mentions – but he pinpoints the crucial missing link as many humanitarian agencies’ lack of familiarity with the field of AI. Meier also clearly articulates that mobile phone use has skyrocketed, and that social media sites such as Twitter are heavily used amidst conflicts and disasters. But as a quote attributed to DfID puts it, “…Currently the people directly affected by crises do not routinely have a voice, which makes it difficult for their needs to be effectively addressed.” What’s more, he affirms the point that in the face of disaster, massive amounts of data pour in quickly, and analysis of this data – like food – has a “sell-by” date. Having systems by which to organize and make sense of massive amounts of data is critical.

 

Dr. Meier believes we have the tools to effectively begin to address the “Big Crisis Data Challenge”, and that they are not unique or new ones; we need to make better use of Human Computing (which he sub-defines as crowd-sourcing and micro-tasking) and AI (sub-defined as natural language processing and machine learning) in mitigating these challenges.  His far-reaching philosophy is that relevant technology applications within both of these methodologies must be united by a framework that promotes Research and Development (R&D) and is applied to humanitarian response and crisis prevention.
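
To make this pairing of crowdsourcing and machine learning concrete, here is a minimal, purely illustrative sketch of the general idea: a handful of crowd-labeled messages (the “micro-tasking” half) train a simple text classifier (the machine-learning half) that then triages new incoming messages so analysts can prioritize them before the data passes its “sell-by” date. The messages, labels, and model choice below are invented assumptions for illustration; this is not code from Dr. Meier, QCRI, or any deployed humanitarian system.

# A toy sketch only: crowd-labeled messages train a classifier that
# triages new messages. Requires scikit-learn; all data is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical messages labeled by crowd volunteers (micro-tasking).
labeled_messages = [
    "bridge collapsed, several families trapped near the river",
    "urgent: we need clean drinking water in the northern district",
    "thanks everyone for the kind words and support",
    "power is back on in our neighborhood, all fine here",
]
labels = ["urgent", "urgent", "not_urgent", "not_urgent"]

# The machine-learning half: a text classifier learns from the crowd labels.
triage = make_pipeline(TfidfVectorizer(), LogisticRegression())
triage.fit(labeled_messages, labels)

# New incoming messages are scored automatically so likely-urgent
# reports surface first for human review.
incoming = ["school building damaged, children still inside"]
print(triage.predict(incoming))        # predicted label, e.g. ['urgent']
print(triage.predict_proba(incoming))  # confidence scores per class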

 

It seems that what exacerbates the communication gap between fields is that disasters often serve as the triggers toward action; discussion and speculation about which disasters could or might occur tend to take on a more theoretical, less visceral tone. In the immediate wake of publicized disasters or conflicts, people’s interests and emotions are piqued, and actions toward prevention of the same types of disasters or conflicts are frequently undertaken with renewed vigor by a greater segment of governments, other organizations, and the scientific community. As voiced in an article published by Ovum, a data analysis company, using connection technologies to aid the response to humanitarian disasters has become an established field, but using technologies to prevent such disasters is much more challenging.

 

Another important point is that AI could itself be the cause of catastrophes. A well-developed humanitarian structure could be used to address these potential risks, but populations will not be prepared if the AI field fails to give as much attention to the inherent risks of AI as it does to the potentials. How do we plan for catastrophes at levels we haven’t seen before? Dr. Meier and other experts in the humanitarian technology field are homing in on how big data can be leveraged in both structural and operational crisis prevention. The Ovum article gives a succinct overview of how such data can be used in both types of prevention, taking into account data from three distinct phases in the “life” of a crisis – pre-event, during, and post-event. As Dr. Ziesche mentions, the field of AI has as much to learn from the humanitarian sector as the other way around. Understanding these three phases of data in specific geographic, cultural, and demographic contexts, and how they evolve, will provide an important window through which to consider and prepare for the prevention of, and intervention in, natural, social, and AI-related disasters.

Will We Out-Grow Our Inherited Brains? – A View of Societal Lenses, with Dr. Pat Hopkins

 

In the future, will technology deconstruct or reconstruct gender identities as they are culturally represented and understood by society today? In a recent interview, Dr. Patrick Hopkins, a professor of Philosophy at Millsaps College, provides some interesting insights into the crossroads of technological influence on gender roles, societal values, and the implications for humanity.

The intersection of technology and gender is not new, but it is a constantly evolving domain, and there are various areas of interest. Technology has the potential to revolutionize or reinforce gender dynamics in a culture. Technologies, at their essence, are gender neutral, free of biological constraints. But most humans walking around with cell phones in their hands still by and large identify with a gender (though there are exceptions), and may be making use of certain technologies in ways influenced by their gender, even if on a subconscious level. This idea triggers a loosely related dual question – might these differences one day become more of a blurred line, and will further advances in AI or pharmacological enhancements help us along the way to diminishing those identity distinctions?

 

Over time, certain technologies come to be associated with a particular image of gender encapsulated in an epoch – the washing machine or any household appliance might bring to mind newspaper ads from the ’40s and ’50s, with the operators and the audience being women – the home-keepers. This stereotype has changed with the tides over the past couple of decades, but the image from the past is still a type of artifact that represents where human culture has been, and gives us a reference point for how it has evolved alongside the presence of other technologies. There exists an array of interesting relationships at the intersection of gender and technology, as outlined in an article by social anthropologist Francesca Bray. In fact, this is now such an established field that there are multiple publications addressing the topic, and Virginia Tech holds an annual Gender, Bodies and Technology (GBT) conference addressing related topics.

 

In the course of the discussion, Dr. Hopkins presents two core questions – how distinct are (or could be) the genders, and what does technology allow us to do in terms of gender? There is the possibility of minimizing differences between genders, but what are the implications? The idea of a “post-gender” world, predicted in the 1990s, does not seem to have taken effect; in fact, in many cases we seem to have taken a path of gender reinforcement, particularly in terms of physical features – plastic surgery, for example. In the general public, there still seems to be a strong traditional interest in maintaining a gendered identity. This is not all that surprising, remarks Dr. Hopkins, if you consider that we still have the brains of our ancestors, out of which the ideas of gender were constructed. Though technology will allow us to make radical changes, it seems for the time being there are some constraints on potential transgender possibilities due to our inherited brains.

 

What technology can do, Dr. Hopkins asserts, is allow us to do something new that taps into old interests – for example, the creation of artificial wombs; this technology in and of itself is ripe for debate, and there are a number of post-birth ethical implications. This example also weaves biologically driven instincts with what it means to be a female, as do a thousand other examples – clearly, sex and gender are inextricably linked. For example, another pharmacological technology in the works is a consumable form of the chemical oxytocin, which could potentially be made available to couples who feel they have lost their initial sexual attraction. Again, biological drives are inherent influences that must be taken into account. Society could just as easily envision a drug that eradicates sexual desire for those who would like to tune their energies to other priorities. While possible, this would undoubtedly be of present interest to only a very small segment of the population.

 

But we still don’t know how many of these chemical enhancements will affect the human brain. Drugs must still be taken and the experience interpreted by participants and observers in order to arrive at more objective conclusions as to how increased doses of a specific drug affect the brain, its interconnected components, and human behavior. The psychology of emotions and neurological processes is still a relatively new area. Findings across many related studies show time and again that there are multiple interconnected systems in the brain, which make what would seem to be a “simple” human emotion a much more intricate affair. For example, humans who suffer damage to the amygdala may experience an “absence” of fear, yet they may still experience symptoms of fear – the anticipation of an uncomfortable situation and an intuitive knowledge of what is to come – though they may not react in a way that prevents the situation from occurring. As Dr. Hopkins remarks, tinkering with a particular emotion could very well produce a human being who displays “socially bizarre” behaviors that don’t fit with our current schema of expected human behavior.

 

Nanotech, brain-machine interfaces, and other mechanical enhancement processes may eventually trump pharmacological options in terms of providing a greater ability and a wider range of cognitive hardware that allows us to transcend our present mental capacities and paradigm of reality. Looking back at predictions from the past, including pop culture like The Jetsons, it’s interesting to note that the creators of these artifacts, in spite of their innovative visions of flying cars and other increasingly realistic technologies, seemed unable to anticipate going beyond enculturated gender stereotypes. “We should at least”, remarks Dr. Hopkins, “…be open to these two conflicting forces in human nature, which is one: still keeping our social primate brains, and two: (that) those social primate brains might react to new environments in ways that we really have a hard time predicting now, because we just don’t know what our desire set…will be triggered by in that future.”

 

Dr. Hopkins notes that transhumanism is not really a “transhumanist” philosophy, but more like a “superhumanist” philosophy, precisely because we still conceive of ideas from an ancestral brain and haven’t had the opportunity to transcend or augment the brains that we inherited. As we look forward to the future, the majority of human beings may not have an interest in certain potential technological enhancements, including ones that relate to surpassing gender, simply because we can’t imagine how these new conceptions fit in or are useful to our human experience. If machines and nanotechnology become the preferred modus operandi for human enhancement over the next decade, perhaps another 50 years will produce a breed of humanity that develops an entirely new set of cognitively driven motivations and perceptions of human identity, with or without gender roles.

 

Re-engineering the Mind-Body Connection – with Kyle Munkittrick

Like so many scientists and science-loving scholars, Kyle Munkittrick had an interest in science and science-fiction at a young age; however, he didn’t actually consider pursuing “science as a career” until he entered NYU and found a program that basically allowed students to construct their own course of study.  Munkittrick took a class on the transhumanist movement, a crystallizing move that gradually shifted his focus to the field of bioethics.  Now a Bioethicist and Affiliate Scholar at the Institute of Ethics and Emerging Technologies (IEET), Munkittrick writes on topics for various publications in the field of human enhancement and bioethics, including maintaining his own blog at popbioethics.com.

In making a global transition to transhumanism, Munkittrick sees the movement in emerging technologies developed for those with disabilities as one of the greatest catalysts towards an authentic realization of a “transhumanist” reality. “A lot of the technology that’s being built right now…a lot more attention is being put on how it can help those who aren’t able, the way you and I are.” Individuals with physical disabilities can increasingly leverage technologies that help them better operate in a world that has not always been so accommodating. And those who do not have disabilities are also leveraging some of these technologies at different scales; for example, the voice-command assistant Siri is now a staple on iPhones, with comparable assistants on Android devices.

This same idea has applied to the realm of education since the passing of the Individuals with Disabilities Education Act (IDEA), which ensures interventions and accommodations for students with special education needs. Educators, especially those who work with students with specific learning and other types of disabilities, use a “universal design for learning” approach, which incorporates methods and technologies that can help all students learn to their greatest potential, with the assistance of necessary accommodations. Technologies such as speech-to-text software like Dragon NaturallySpeaking help students with disabilities, but can also be used with benefit by students without disabilities. The same type of universal design approach might be increasingly incorporated into the field of artificial intelligence as the industry progresses.

Brain Waves

Neurotechnologies offer the potential for paradigm-shifting realities in the mind-body realm. Brain-computer interface (BCI) systems offer the capability to repair and enhance human physical and mental functions. There are two types of BCIs – invasive systems, electrode arrays implanted in the brain that communicate with neural signals directly; and non-invasive systems that pick up signals outside the head with scalp electroencephalography. Munkittrick describes how this technology is currently being engineered alongside exoskeletons. Ekso Bionics, a California-based company and pioneer in the field of robotic exoskeletons, has had success in helping paraplegics walk again. In 2012, the company shipped Ekso, the first commercialized robotic exoskeleton for use in rehabilitative and medical facilities. A related and even more ambitious goal has been set by Brazilian neuroscientist Miguel Nicolelis, who in 2010 pledged to create a brain-controlled robotic body suit that would allow a paralyzed person to step onto the field during the opening ceremony of the 2014 World Cup and, aided by an exoskeleton operated by implanted electrodes in the brain, kick a soccer ball.

The interface being developed by Dr. Nicolelis uses implanted electrodes to interface with neuronal signals. He and his research team at Duke University are currently using monkeys as test subjects, and as of February 2013 had already raised the number of captured neuronal electric impulses from roughly 100 to 500; using four of these electrode arrays has allowed the team to record from almost 2,000 brain neurons in an individual monkey. Nicolelis envisions this number rising to 30,000 neurons in a prospective human patient. Another developmental leap accomplished by Nicolelis’s team has been the development of tactile feedback within this BCI system. In 2011, his team demonstrated a neural prosthesis that allowed monkeys to experience an artificial sense of touch. Nicolelis and many other scientists emphasize sensory feedback, which entails a “closed loop” brain-machine-brain interface system, as a key development in the BCI technology’s success.
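
For readers curious how a few hundred recorded neurons become a movement command, the sketch below shows the core decoding idea in its simplest form: treat each neuron’s firing rate as a feature and fit a linear map from population activity to intended velocity. This is a toy example on synthetic data – an assumption-laden stand-in for illustration, not the decoding method actually used by Nicolelis’s lab or BrainGate, which rely on far more sophisticated, adaptive algorithms.

# Toy sketch: fit a linear decoder from simulated firing rates to 2-D
# velocity, then decode a "new" window of activity. Synthetic data only.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_neurons = 500, 100          # e.g. ~100 recorded neurons

# Simulate calibration data: rates loosely related to a 2-D hand velocity.
true_map = rng.normal(size=(n_neurons, 2))                    # neurons -> (vx, vy)
rates = rng.poisson(lam=5.0, size=(n_samples, n_neurons)).astype(float)
velocity = rates @ true_map + rng.normal(scale=2.0, size=(n_samples, 2))

# "Calibration": least-squares fit of weights mapping rates to velocity.
decoder, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# "Online" use: a fresh window of firing rates becomes a velocity
# command for a cursor or robotic limb.
new_rates = rng.poisson(lam=5.0, size=(1, n_neurons)).astype(float)
vx, vy = (new_rates @ decoder)[0]
print(f"decoded velocity command: ({vx:.2f}, {vy:.2f})")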

While some scientists in the field view Nicolelis’s attempts as an overly ambitious promise that throws caution to the wind, brain-machine interfacing is nonetheless a rapidly evolving field. The Brain-Machine Interface Systems Laboratory at the University of California, Berkeley, which was the granting partner behind the company Ekso Bionics, is a leader in technology that transforms “thought into action” and “sensation into perception”. Another recent and frequently publicized case comes from researchers at the University of Pittsburgh. In 2012, a research team implanted 96 electrodes into the motor cortex of a tetraplegic woman. The electrodes, able to detect the firing of neurons in the motor cortex and transmit those signals to an external processor, allowed the woman to control a robotic arm in three dimensions of translation, three dimensions of orientation, and one dimension of grasping. As reported by the journal Nature in 2012 and shared in a Brown University press release, similar feats were accomplished with two tetraplegic patients through the BrainGate2 collaboration of researchers at the Department of Veterans Affairs, Brown University, Massachusetts General Hospital, Harvard Medical School, and the German Aerospace Center (DLR).

There is also great interest among scientists and investors in moving brain-computer interface products into the mainstream. In a New York Times blog post, Nick Bilton mentions a few companies already marketing related technologies. NeuroSky, based in San Jose, California, sells a Bluetooth-enabled headset that monitors brain waves and allows people to play concentration-based games on computers and smartphones. Emotiv, another neuro-engineering company, sells headsets that operate based on user-trained mental commands to control customized computer applications and games. At present, these technologies use scalp electroencephalography, which is not nearly as effective at communicating between the mind and external devices as are the more invasive implanted electrodes.
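
As a rough illustration of how such concentration-based games tend to work (a hypothetical sketch, not any vendor’s actual SDK): the headset exposes a single attention score each tick, and the game fires an action once focus stays above a threshold for a few consecutive readings. The attention values below are simply simulated random numbers standing in for a real headset sample.

# Hypothetical game loop driven by an "attention" score (0-100).
# read_attention() is a simulated stand-in for a real headset reading.
import random

ATTENTION_THRESHOLD = 70
STREAK_NEEDED = 3          # require sustained focus, not a single spike

def read_attention():
    return random.randint(0, 100)   # placeholder for a headset sample

streak = 0
for tick in range(50):
    attention = read_attention()
    streak = streak + 1 if attention >= ATTENTION_THRESHOLD else 0
    if streak >= STREAK_NEEDED:
        print(f"tick {tick}: sustained focus -> lift the game object")
        streak = 0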

As with any technology, there remain ethical concerns, and the potential benefits must continue to be weighed against the risks. In an article written by Dr. Jose L. Contreras-Vidal, director of the Laboratory for Noninvasive Brain-Machine Interface Systems at the University of Houston, Texas, he notes that questions of personality and identity may result from alterations in behavior triggered by BCI. These are important questions, not only as they relate to preserving personal identity and agency but also as they apply to future public policy and legal issues. While invasive BCI technologies are primarily being used to aid those with disabilities, the potential for these technologies to be developed for the general public requires a much broader and more far-reaching examination of the implications of enhanced cognition, perception, and behavior across societal and cultural boundaries.

 

 

Where Human Ethics and Legislation Meet: Less or More Progress? – With Russell Blackford

 

Dolly

In the spirit of valuing freedom of thought in the face of scientific advancement and expansion of human potential, how much room is there for religious debate?  What are the relevant and logical ethical arguments to consider in the face of scientific progress?  These interrelated questions underlie the continuing tension between religion, human morality, and scientific progress, a weighted point of conflict that can be traced back to early human civilizations.  Human ethics walks the lines between science and religion, fused with an oscillating overlap of political involvement and influence, which continues to change human perspectives on issues over time.

 

In the last 20 years, this intersection of conflicting values and views has become archetypal in response to the issue of human cloning. Russell Blackford, an Australian author and philosopher, has always been interested in scientific advancements but became particularly invested in the social and political elements of cloning following the breakthrough of Dolly the sheep in 1997. The two types of cloning historically debated are reproductive and therapeutic cloning (i.e., cloning for research purposes). While there is a general consensus amongst most sectors of society that reproductive cloning should be banned, Blackford draws attention to enacted laws and proposals that he believes unnecessarily target therapeutic cloning. The initial process of therapeutic cloning is identical to that of reproductive cloning, but development of the organism is halted at an early stage (the blastocyst), a microscopic ball of cells formed within days of the first cell divisions. These “stem cells” are capable of generating specialized cells, such as liver or brain cells, for use in scientific research.

 

The safety issues regarding reproductive cloning are widely known and protested. Animal clones have suffered from genetic and other defects, and the failure rate for reproductive cloning is high. In a 2002 National Academies report on cloning, a majority of scientists and policy-makers spoke out against human reproductive cloning due to safety and ethical concerns. Where the ethical line should be drawn is widely debated in the U.S. and internationally. Criminalization of nuclear transplantation (also known as somatic cell nuclear transfer, or SCNT) for both reproductive and research purposes is supported by some, including former President Bush. Many others believe in criminalizing only reproductive cloning, as evidenced by individual states in the U.S. The varying perspectives are clearly seen amongst the different bans enacted across countries, as illustrated in this chart summarizing world human cloning policies.

 

In 2002, the American Association for the Advancement of Science (AAAS) issued its statement on human cloning, in which it supported stem cell research, including the use of nuclear transplantation techniques (research or therapeutic cloning), because of the great potential health benefits. But the AAAS noted that due to “religious, ethical, and social concerns”, such research should “only proceed under close scrutiny by the federal government over both public and private sectors”. In contrast, the United Nations in 2005 adopted the contested, nonbinding “Declaration on Human Cloning”, which expressed the need to prohibit all forms of cloning “in as much as they are incompatible with human dignity and the protection of human life.” Many countries expressed disappointment that the declaration did not distinguish between reproductive and therapeutic cloning.

 

In a free and democratic society, there exists a spectrum – on one end, the belief that placing restrictions on any medical research is counterproductive and unacceptable in a free society, and on the other end the argument that a democratic people have the right to work together to adopt policies, including prohibitive ones, if society believes they contribute to a “better world.” Cloning laws in the U.S. vary across the 15 states that have enacted such laws. Federal laws have so far applied only to studies using federal funding; there is no federal law prohibiting reproductive or therapeutic cloning using private money. The FDA began regulating reproductive cloning in 1993, and researchers conducting studies involving biological products are required to submit applications for review.

 

Blackford, a professed atheist and libertarian, has been struck by much of the public’s response to cloning humans, viewing many of the oppositional fears as irrational and some resulting laws as overly draconian. Consequences for breaking these laws in some states and countries include prison terms, which Blackford believes criminalizes the very idea of research and experimentation. He emphasizes that there should be carefully drafted and implemented regulations that address real dangers based on scientific evidence and postulations. Blackford’s views rest on the principle that an emotionally reactive approach is not acceptable in light of how a liberal democratic society “should act.”

 

His concerns target what he sees as unacceptable developments and considerations in how we form laws, including drawing on quasi-religious concepts and emotively distorting concepts.  In his new book Humanity Enhanced, with a slated release date in early 2014, Blackford examines his belief that there is a “crisis for liberal tolerance”, hoping to clearly express the argument that there is no “Frankensteinian crisis”.  What society should really be concerned about in the face of scientific advancement, says Blackford, is a loss of liberal principles that protect our freedom as autonomous and intelligent human beings.

 

But a certain level of emotional reaction and a diversified set of perspectives on human morality can serve a purpose in advancing scientific discovery. In the July 2013 edition of the journal Science, researchers in China announced that they had found a safer and easier way to create induced pluripotent stem cells (iPSCs), which are as versatile as embryonic stem cells. This method entails using a combination of small molecules to chemically reprogram adult tissue cells into iPSCs. Many experts make the claim that this type of stem cell cannot be used to clone humans. At the beginning of August, Japan announced its plans to begin recruiting human patients for the world’s first clinical study using iPSCs. These cells will be genetically identical to each patient’s own cells, an approach that seems to eliminate past problems with immune rejection of stem cells.

 

In an interconnected world full of perspectives, it seems logical that ultimate survival and betterment of humanity rests on compromise and innovation.  When the public reacts strongly, there will undoubtedly be a mix of irrational and rational, but listening to both sides with open ears may help inform ethical decisions that drive progress further.  As noted in one of many articles on the topic, Andre Oosterlinck contends that science thrives in a “climate of freedom”, but that this does not free society from social responsibility or ethical concern.

Blackford sees the need for a more inclusive perspective that takes into account all of the objective evidence before creating and putting into effect laws that impact the integrity of a science that has the potential to heal and enhance human lives. Moratoriums and debate amongst an array of parties – in this case, religious organizations, medical centers, abortion groups, ethicists, and individuals who might benefit from stem cell therapy – have led researchers to continue experimenting with alternative ways of making stem cells, for the purpose of growing tissues and organs in a manner that preserves the integrity of potential human life forms.

The Singularity – Cart Before the Horse?

 

The artificial intelligence (AI) field is full of forward thinkers; in the midst of moving ahead, some are particularly grounded in addressing the very real philosophical issues that continue to persist in the world of AI. Dr. Karl F. MacDorman, roboticist and researcher at Indiana University, is a “healthy skeptic”, specifically when it comes to embracing the idea of achieving an intelligence that surpasses humans’. As Dr. MacDorman voiced in a recent interview, “I think a fundamental question is…whether we have a kind of post-human future”, certainly one of the foremost questions on the minds of scientists and followers of AI. As Dr. MacDorman explains, the quest for immortality assumes a metaphysical position – is consciousness something that can be realized in media other than the human form? If we duplicate every neuron in a human brain and encase it within the body of a machine, does this make the machine conscious?

If the answer to such questions is a speculative “yes”, then we base these ideas on information processing theory, which (in a nutshell) assumes that the cognitive processing of information – input and output – is all that’s necessary to achieve a level of consciousness in an entity. Of course, humans have particular motivations for pursuing such questions. Dr. MacDorman notes that humans, by nature, are meaning makers, and many of us look to transfer our presence beyond life through some form of immortality, a concept inherent in many religions – e.g., the form of the soul in an afterlife. Dr. MacDorman points out that even some atheists pursue a form of “everlasting life” through other forms of expression, with Freud and his writing as one example.

 

Dr. MacDorman describes two ends of the spectrum in thinking about immortality – the party that believes we cannot achieve immortality, because building machines with human-like intelligence is an impossible feat, and an opposite party – one that believes we can undoubtedly achieve immortality through machines, that human beings are extremely complex organisms that have the ability to self-replicate.  Both require a leap of faith, says Dr. MacDorman.  While he may fall somewhere in the middle, he questions Ray Kurzweil’s idea of the Singularity.  This theory assumes that “we’re going to reach a point at which computers have achieved a human-level of intelligence and then from that point on…they’d be in a kind of god-like intelligence.”  Dr. MacDorman’s concerns lie primarily in the qualitative differences between machines and humans.

 

Computers can do many things that humans can’t do – manage the Internet, for example. But for something like the Singularity to take place, a qualitative shift would need to occur. At present, Dr. MacDorman believes there hasn’t been enough work done on AGI to understand how, or what kind of, qualitative shifts would need to take place in machines to really achieve a human-like level of intelligence. The computer Watson, for example, may be able to answer at light speed a question about the Gettysburg Address, but this interpretation of symbols does not necessarily signify intelligence. Watson cannot physically manipulate – by picking up the Gettysburg Address, for example – or make meaning, by spontaneously recognizing its historical significance. Ken Jennings, the trivia whiz who went up against Watson on the game show Jeopardy!, makes a case for the value of human knowledge in comparison to machines in this TED talk.

 

Dr. MacDorman poses two fundamental problems. The first is the Symbol Grounding Problem – the question of how the symbols a system manipulates come to be connected to their meanings, rather than merely to other symbols (Loula, A., and Queiroz, J., 2008). In AGI, there is still the necessity of finding a stable, representational form from which to build a human-like intelligence. Then there is the Frame Problem, which points out that the world of a robot is not static, and that getting robots to adapt to such modifications presents a string of problems. Though much work was done in this area in the 1980s and 1990s, Dr. MacDorman believes the problem is still relevant today.

 

Dr. MacDorman explains that there exists a tension between too much freedom, which leads to the Turing Tarpit, and systems that can perform complex tasks with human intervention but fail when encountering unaccounted-for changes in the environment. As John Searle drew attention to in his famous “Chinese Room Argument” (1980), one thing constructed symbol systems lack is a key ingredient that many include in the recipe for intelligence – ‘intentionality’. This intentionality is rooted in an ability to understand language, to ‘think’ and make meaning. This argument led to the further development of many other theories, one being Brooks’s (1990) Physical Grounding Hypothesis, which asserts that machines should be built from the bottom up, using simple processes that interact with a real and complex world and begin to form causal relationships. This theory is just one that led to the ideas of situatedness and embodiment, concepts embraced by scientists such as Dr. Ben Goertzel in the creation of intelligent robots. Researchers Rolf Pfeifer and Matej Hoffmann at the Artificial Intelligence Laboratory of the University of Zurich, Switzerland, also make the case that we need to look beyond refining AI and revisit the nature of computation to accurately incorporate the influence of environment.
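
To give a flavor of the “bottom-up” approach Brooks advocated, here is a deliberately tiny, hypothetical sketch in the spirit of behavior-based robotics: a few prioritized behaviors read the robot’s state directly, and a higher-priority behavior subsumes the lower ones whenever it has something to say, with no central symbolic world model involved. The one-dimensional world, names, and numbers are invented purely for illustration and are not Brooks’s actual architecture.

# A toy sketch of the bottom-up, behavior-based idea: simple layered
# behaviors read "sensors" directly and higher-priority layers subsume
# lower ones. A 1-D world stands in for a real, messy environment.
from dataclasses import dataclass

@dataclass
class Robot:
    position: int = 0
    battery: float = 1.0

def avoid_wall(robot, wall_at=10):
    # Highest priority: never drive into the wall.
    if robot.position >= wall_at - 1:
        return -1                      # back away
    return None                        # defer to lower layers

def seek_charger(robot, charger_at=0):
    # Middle priority: head home when the battery is low.
    if robot.battery < 0.3 and robot.position > charger_at:
        return -1
    return None

def wander(robot):
    # Lowest priority: default exploratory behavior.
    return +1

LAYERS = [avoid_wall, seek_charger, wander]   # ordered by priority

def step(robot):
    for behavior in LAYERS:
        move = behavior(robot)
        if move is not None:           # higher layer subsumes the rest
            robot.position += move
            robot.battery = max(0.0, robot.battery - 0.05)
            return

robot = Robot()
for _ in range(20):
    step(robot)
    print(robot.position, round(robot.battery, 2))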

 

Another fascinating and relevant avenue of research that looks at meaning making from another angle is the interconnection of language systems, humans, and technology. Underlying the theory of symbol grounding is semiotics, the study of how certain things come to represent other things to someone, a field attributed to the late C.S. Peirce. Deb Roy, MIT professor and Chief Media Scientist at Twitter, has spent the last decade focusing on how people connect words to physical and social contexts. Through the Cognitive Machines research group, he and his students have pursued related questions by building robotic and computational models merged with large-scale data analytics, with one goal being to create autonomous systems that learn to communicate in human-like fashion. If we are ever to create a human level of intelligence, Dr. MacDorman emphasizes the need to continue seeking answers to these and other fundamental questions that may further reveal the science behind the qualitative nature of intelligence.

 

Will Machines Reinforce or Redefine a ‘Social’ Humanity? – Dr. Ben Goertzel

 

“It is change, continuing change, inevitable change, that is the dominant factor in society today. No sensible decision can be made any longer without taking into account not only the world as it is, but the world as it will be…” – Isaac Asimov, 1978

Wired Brains

 

Isaac Asimov, the Russian-born American author and biochemist, was onto something simple yet profound. Change is constant, and the implications are nowhere more evident than in the field of advancing Artificial General Intelligence (AGI). Dr. Ben Goertzel, American author and researcher in the field of AI, cites an early interest in science fiction as at least partly responsible for his entry into AGI. Dr. Goertzel recalls an Isaac Asimov novel in which people retreat into solitary worlds of virtual reality, estranged from the meaning of social relationships.

His reference to the novel sprang from an at-present unanswerable question – will the continual rise of AGI increase our cooperation and compassion for fellow beings and other intelligent forms, or will it give way to increased conflict and/or isolation? Dr. Goertzel sees today’s technology as leveraging more social connection than before, bringing up Facebook and other social media as modern evidence of this phenomenon. Dr. Keith Hampton, an Associate Professor at Rutgers University, has research interests in the relationship between information and communication technology – including the Internet, social networks, and community. In a paper published in 2011, Dr. Hampton et al. investigated the relationship between social networking sites and our lives. The results led to some general conclusions: social media users are more trusting; have more close relationships; get more social support; and are more politically engaged. While these findings are still in their infancy, the idea is also supported by the media equation, a theory developed in the 1990s by Stanford researchers Byron Reeves and Clifford Nass; the two researchers used a collation of psychological studies to form the foundation for their overarching idea that people treat computers, television, and media as real people and places.

 

The intersection of these findings has interesting implications for the future, and they raise the question – what role do social relationships and collaboration have in the future of AGI? In the book Social Intelligence: The New Science of Human Relationships, American author and researcher Daniel Goleman illustrates new findings in neuroscience showing that the brain’s design makes it sociable; we form, subconsciously, neural bridges that let us affect the brain – and the body – of those with whom we interact; the stronger the connection, the greater the mutual force. What effect might these constructed feedback loops have on human interaction with technology? Dr. Goertzel’s views seem to align with people’s general tendency towards the social in their relationship to technology. He describes his vision of the future of brain-computer interfacing, with one possible result being a sort of Internet-mediated telepathy – “…if I put a brain chip in the back of my head, you have one in the back of your head, we could beam our thoughts and feelings to each other indirectly, so if you have human beings with their brains networked together like that, that would seem to enable both a level of empathy and…understanding…than what we have now”.

 

The OpenCog Foundation, of which Dr. Goertzel is Chairman of the Board, works on projects rooted in the vision of advancing artificial intelligence beyond human levels within the next few decades.  Work on projects is done by multidisciplinary teams and individuals located in various parts of the world, which can make unified collaboration and understanding of ideas a challenge; Dr. Goertzel speculates on using this brain-computer interfacing in the formation of various ‘group minds’: picture a software development team, all sharing thoughts and understanding of codes as they work, perhaps an early-stage AGI processing system before AGI becomes much smarter than humans.  “This sort of interfacing could allow us to become closer to each other and closer to our developing intelligent machines and in a way, going beyond the whole individualistic model of the self that we have now”, a speculative but potential reality.  This type of interfacing speaks to the idea of an increasingly united global mind and consciousness, which presumably is a more efficient way to transfer information and feelings than at present.

 

The stark difference seems to lie between the notion of people’s relationships with potential greater intelligences and how people interact with today’s technology, which is responsive to human controls rather than something engaged with on a human-like level. What happens when we reach what Ray Kurzweil dubs the ‘Singularity’, the point where unenhanced human intelligence is unable to keep up with the rapid advance of progress in artificial intelligence? Might our social and emotional natures be our potential downfall as a species? It would seem that once this ‘Singularity’ is reached (or close to being reached), the unenhanced human brain will be vulnerable to manipulation by more intelligent machines, which are able to instantly mine Internet data, or even the information in our minds, effectively influencing our decision-making and thought patterns.

 

This seems like a bleak outcome for humanity, especially to those of us who identify with the social and emotional idea of what it means to be human.  How do we prepare ourselves for what may lie ahead?  “We all have the tendency to become attached to specific ideas about what’s going to happen”, notes Dr. Goertzel.  He describes one of human beings’ greatest challenges as the ability to stay open-minded, to be aware of the need to constantly change and adapt in the face of obsolete ideas and a quickened pace of new information, “…ultimately on the most profound level of who we are and what we are and why we are…some of our thinking about the singularity itself…ideas like friendly AI…what is friendly to humanity, what is humanity?”  The evolution of human thought that will need to take place is imminent and real in the face of such difficult questions on human existence alongside ever-advancing AI.

 

When Robots Become Co-Workers and Peers – A Conversation with Dr. Karl MacDorman

Working with Robots

When I first think of robots in the workforce, I can’t help but mentally form the introductory scene of some future-oriented sitcom, with a silver-plated, two-legged robot coming through the front door, briefcase in hand and shouting in digitized monotones, “Honey, I’m home!” As silly and culturally bound as this image may be, the overall picture may not be too far from the scenes most humans form when thinking about intelligent machines actively working alongside humans. In a recent interview with Dr. Karl MacDorman, we asked about the implications of robots, or more human-like androids, mixing with society, particularly in the workforce. His logical response – “We don’t know yet…human beings habituate to many things.”

 

As a species, we’re so familiar with and conditioned to human beings controlling most jobs that the thought of robots or androids taking our place seems foreign. Dr. MacDorman expounds on this idea by discussing his experiences working in a school for the mentally and emotionally disturbed. “There are people doing very strange things there…but you get used to it.” At some point, you see past the odd behaviors and are able to see each person as an individual and appreciate their human worth. A similar adaptation from the strange to the routine may occur between humans and androids, or initially robots.

 

Emotional reactions aside, what might be the economic ramifications of a workforce populated by intelligent ‘bots’? At its core, Dr. MacDorman’s point is that “the process of people losing their jobs to some kind of innovation has been going on for thousands of years”. He refers to the Romans and the building of aqueducts and indoor plumbing, which took jobs from water bearers. This type of shift in the workforce is a long-term process; in Japan, where Dr. MacDorman lived for a period of years, he notes that some manufacturing jobs have been taken over by robots, but that he’s not sure robotics has been as disruptive as other types of technological innovation or change.

 

Yet the eventual “disruption” seems inevitable. In 2011, Chinese companies spent $1.3 billion on industrial robots. Foxconn, the company that builds iPads for Apple, hopes to have the first fully automated plant within the decade. In light of such moves, there are several interesting predictions about the outcomes of a society driven by an automated workforce. As is inherent in any intellectual debate, the varying projections seem to rest on two fulcrums – societal values and attitude (optimism versus skepticism). Each argument, of course, incorporates each of these factors to a varying degree.

 

In an article for Forbes, Tim Worstall brings up an existing argument for a future robot-driven economy: the eventual creation of two classes – a “rentier class” that owns the robots and reaps most of the benefits, and “the rest of us.” Worstall argues against this perspective. He makes the case that everything will be much cheaper, one of the reasons being the elimination of unnecessary wages. If income is relative to purchasing price, won’t the cheap costs and reduced wages to humans result in some form of balanced income equality? Robert Skidelsky, Professor Emeritus of Political Economy at Warwick University in the U.K., argues machines may “engineer an escape” from poverty; in poor countries, economists use the term “disguised unemployment”, in which laborers find the means to share a limited amount of work. Perhaps instead of a few very wealthy at the top, a decrease in required human labor will alter and reduce the standard for required human output, creating time for “more leisure.” This would, he maintains, require a major shift in the social thinking behind current Western values.

 

In an article from Wired, author Kevin Kelly writes that the post-industrial economy will keep expanding, even if robots perform most work. Like MacDorman, he sees the post-industrial revolution as an almost predictable repeat of history. Just as in the early 19th century, when mechanized innovations eventually replaced all but 1 percent of the existing farm jobs (the livelihood of 70% of the workforce 200 years ago), robots will likely do the same to the current workforce before the end of the century. In the wake of the industrial revolution, a plethora of jobs in completely new fields – ones we had not dreamt of while we were busy plowing the land – were born, ushering in a new era of human cultural and societal structures. Kelly makes the case that this will be the new reality when robots take the helm. The human task will be to find, make, and complete new things to do, acting as “robot nannies” in the interim to other robots, and possibly androids, in a repeating cycle of robot takeovers and human creation and innovation. He suggests the idea of “personal robot automation” for every human being, emphasizing that success will go to those who innovate in organizing, optimizing, and customizing the process of getting work done with his or her bots. In Kelly’s eyes, this cycle allows us to become more “human than we already are”, increasingly free to explore the depths of our consciousness and purpose.

 

Circling back around to the shorter-term future and the beginnings of robot labor integration, Dr. MacDorman thinks service professions might be a good place for bots to start. He makes the claim that there are some things that other cultures, like the Japanese, do better than we do; for example, service personnel and shop owners don’t talk to each other, but instead focus solely on the customer. He remarks that while the Japanese seem to be an empathetic culture in general, this seems to be taken to the extreme in service jobs. By contrast, there seems to be more of the opposite experience in the U.S.; especially from the perspective of service personnel, working in the industry can be a negative experience, particularly when there are expectations to be the “perfect” service person and to unfalteringly treat the customer as king. “In general, I don’t think people like to be servants…especially Americans don’t like to be in that kind of role”, MacDorman states.

 

Robots could be programmed to respond efficiently to humans’ every need, without moving slowly on account of feeling tired or entering into emotionally triggered conflicts with customers. He points out that the U.S. economy has created a number of jobs in the low-end service sector, and the economy would need to evolve rather quickly to ensure balance and enough jobs for those workers who would be temporarily displaced. And what about when we get to the point where “androids” or a similar intelligence enter reality, with the capability to potentially support complex social interactions? We can make our predictions, but this is far enough into the future that we still recognize the wisdom in Dr. MacDorman’s initial response – we just don’t know yet.