If Positive Psychology Aims for 51% of the Human Population to Flourish, We’ll Need to Take Technology Seriously

The abbreviated version of this very article was first published on Positive Psychology News with the editing help of Kathryn Britton.

– – –

The Aim of Positive Psychology:

Positive Psychology’s founder Dr. Martin Seligman has undertaken the “moon-shot” endeavor of having 51% of the human population psychologically flourishing (http://www.apa.org/monitor/2011/10/seligman.aspx) by 2051. It’s an audacious goal intended to unite positive psychology’s forces in making the grandest impact on the whole of humanity, though some might argue it is an impossible aim.

Along positive psychology’s march to its goal, technology has been seen as a useful aid. Dozens of new apps help people integrate habits of well-being into daily life (https://www.happier.com/home), and UPenn’s own positive psychology graduates are founding companies to spread some of this science’s biggest findings (www.gozen.com). Seligman himself addresses technology’s important influence on human fulfillment in his famous 2004 TED talk (https://www.ted.com/talks/martin_seligman_on_the_state_of_psychology).

I argue that technology ought not simply be seen as an aid to positive psychology’s efforts, but as a direct influence on the nature of happiness and well-being itself – an augmentation of the human psyche – and, as such, a force that positive psychology cannot afford to ignore.

While technologies such as the iPhone or the internet might indirectly influence happiness by spreading knowledge or providing real-time updates and feedback, future brain-machine interface technologies will actually be able to induce, enhance, and even extend the visceral, sentient feelings associated with happiness, as well as the cognitive capacity to experience fulfillment or meaning in a deep and real way. In other words, everything that comprises Seligman’s notions of the pleasant life, the engaged life, and the life of meaning will be made malleable by the technologies of the coming few decades.

Today, positive psychology aims to remold psychology into more than a means of “fixing” people. Since the Second World War, psychology has been concerned with taking people from -10 to 0 (on a scale of function or happiness), while positive psychology aims to go beyond 0, toward +10 on the same representative well-being scale. In other words, positive psychology is concerned with going beyond addressing limitations and taking ameliorative measures, into developing the further reaches of human potential.

While sharing that goal, I argue that technology will make this exact same transition from “fixing” obstacles to human well-being, to permanently enhancing our affective experience – and bringing us beyond any present notions of well-being or happiness that we have today.

In the Coming Decades Technology Will Define “Well-Being,” Not Merely Aid it:

The future intersection of technology and psychology will have the most astronomical ethical impact of anything yet known to humankind. If “tonnage of human happiness” is what we are to measure our impact by (as Seligman alludes to in his TED talk: http://ed.ted.com/lessons/martin-seligman-on-positive-psychology), then technology that directly alters the human emotional state (possibly well beyond the present emotional range) would seem to be the most ethically relevant development of all time. If the Richter scale of ethical relevance is measured in conscious experience, we are moving toward a world where we can move that needle manually – as opposed to “indirectly” impacting emotions through other forces and conditions.

It is this control over affective experience that I argue should be humanity’s paramount concern, given the ethical weight of its implications. I argue that positive psychology – the study of human well-being itself – ought to be involved in defining and assessing these new frontiers of technology and psychology.

As a student of Seligman himself in the University of Pennsylvania’s Master’s program in Applied Positive Psychology, as an extension student at MIT, and as a writer and speaker on the intersection of technology and psychology, my aim is to extend Positive Psychology’s influence into the larger conversations on human well-being where it will be most desperately needed – that is, to guide and help direct the technological developments that will mold the future of consciousness. A look at some of the technologies that directly interact with sentience will clarify my point.

Thus far, most of technology’s “influence” on human well-being has been indirect. The happiness app on our iPhone (there are plenty of them: http://www.theguardian.com/lifeandstyle/2014/jun/21/happy-life-best-psychology-apps) does not literally “make us happy” any more than the microwave on our counter does. The app, like the microwave, allows us to attain some end (learning about fulfillment and calibrating our habits, or heating up frozen vegetables) in a convenient way, which we hope will be conducive to happiness.

I posit that technologies of the future will, in contrast, directly mold consciousness itself, and all the conceivable constituents of fulfillment.

In 2004, Deanna Cole-Benjamin of Kingston, Ontario bit down hard as holes were drilled into her skull and electrodes were placed in what is known as “area 25” of her brain. Nothing else had worked for her severe and persistent depression – no drugs, no psychotherapies, no electroshock therapies – and she hoped that deep brain stimulation would finally help. It did (http://www.nytimes.com/2006/04/02/magazine/02depression.html?pagewanted=all&_r=0), and for many other depression sufferers, this treatment has transformed their quality of life.

For well over a decade, Oxford philosophers David Pearce and Nick Bostrom have spoken and written about the further reaches of happiness that might be accessed with an augmented human mind. Is it rational to assume that such brain augmentation will become a commonplace and highly desired surgery once the procedure of increasing subjective well-being can be performed without side effects? To answer this question, I might point out that the top-selling drug in America is Abilify, an antipsychotic widely prescribed as an add-on treatment for depression, with $6.46B in sales in 2013 alone (http://www.medscape.com/viewarticle/820011). It could be argued that all of our actions are geared toward enhancing our positive emotions, engagement, or meaning (Seligman’s happiness “types”), and when one or more of these is available in a bottle, or via a surgery, there’s reason to believe that society will jump on it.

In 2005, Cathy Hutchinson went under the knife for an even more experimental procedure at BrainGate (www.braingate2.org). Through a hole bored into her skull, Cathy had a baby-aspirin-sized sensor implanted in her motor cortex, allowing her to move a robotic arm and other devices with her thoughts alone. She had been completely paralyzed for over 10 years before the surgery, and was willing to do anything to regain some degree of the control, volition, and communication she had enjoyed (and, like you and me, probably taken for granted) before she was paralyzed by a stroke. Her amazing, dexterous control of a robotic arm – using her thoughts alone – was hailed as one of the most astounding breakthroughs in neuroscience, and it landed stories in Nature, on 60 Minutes, and more (http://www.cbsnews.com/news/paralyzed-woman-uses-mind-control-technology-to-operate-robotic-arm/).

In the future, if Cathy is able to control a fully functional robotic body, would it be limited the way the human body is now? It would probably not tire the way our muscles do, and it would probably be strong enough to force her out of a dangerous situation – like the twisted metal of a car wreck. It’s safe to say that other people would want the same kind of enhanced body – and that at some point in the technology’s progress, not only the paralyzed would be interested in its applications. The same goes for Cathy’s “plugged in” ability to move a computer cursor or reply to emails. When the technology becomes safe and effective, what modern knowledge worker could afford not to control their devices with thought alone? “Enhanced” financial traders might control and monitor a dozen screens without the limitations of keyboard and mouse, and workers of all types will be able to take “multitasking” to entirely new levels. Could the un-enhanced then have any role of importance in the workplace?

Since the 1970s, humans have been bypassing their sensory organs to convey sensory input directly to the nerves and brain, starting most prominently with the cochlear implant (http://www.cochlear.com/wps/wcm/connect/us/about/company-information/history). More recently, we’ve seen bionic eyes bypassing damaged or degenerated retinas (http://www.foxnews.com/health/2014/04/23/michigan-man-among-first-in-us-to-receive-bionic-eye/). At the Swiss Federal Institute of Technology in Lausanne, a hand amputee was fitted with a prosthetic arm connected to electrodes implanted in the nerves that once controlled his hand. The device was able to send sensory signals to the man’s brain through the prosthesis, allowing him to grasp and distinguish between soft, hard, round, and angular objects – even while blindfolded (http://www.newscientist.com/article/dn25008-natural-sense-of-touch-restored-with-bionic-hand.html#.U_SMQ7xdVt8).

The future, however, will not be limited to attaining vision “as usual,” hearing “as usual,” or other senses and abilities “as usual.” Just as positive psychology aims to go beyond the absence of mental illness, and humanity aims to go beyond its little blue planet, we will push beyond our present senses and mental capacity once technology allows for it. Even though cognitive implants are being developed to help people with Alzheimer’s disease (http://m.technologyreview.com/view/514491/how-to-make-a-cognitive-neuroprosthetic/), there is already talk about enhancing “regular” human memory (where did I leave my keys, again?). Similarly, technology to restore senses may provide the ability to go beyond present senses, and for over a decade there’s been talk of a transhuman transition beyond biological limitations (http://www.wired.co.uk/news/archive/2012-09/04/seeing-beyond-human-transhumanism).

Three factors may surprise you about the “enhancement” stories above. The first is that human beings (like you, like me) are getting wires and sensors implanted in their skulls in the first place. The second is that these implanted devices have allowed for astounding increases in the functioning of the patients – breakthroughs that most of us have never heard of. The third is that even Deanna’s and Cathy’s seemingly “futuristic” procedures are hardly breaking news, now being nearly a decade old at the time of this writing.

These technologies cannot be said to influence well-being “indirectly,” as some outside factor that may hold sway over happiness if studied closely enough. They wield direct influence over sentience itself, and directly recreate human potential and the human experience.

How Positive Psychology Can Influence the Future:

So what can positive psychology do now? Though awareness is useful, action is needed. The technologies developing now will be those that mold, enhance, and possibly re-define consciousness and well-being as we know it. 

If increasing all human well-being is the metric of success we’d like to impact, and if technology can be a conduit to engagement / meaning / positive emotion (Seligman himself states both points in this TED talk: http://ed.ted.com/lessons/martin-seligman-on-positive-psychology), then it would seem that we’d want to be part of the committee that determines the direction and uses of technology that will be able to alter sentience itself.

First, positive psychology can – with the other disciplines involved (including but not limited to cognitive science, neuroscience, and machine learning) – help to define the horizons for research that could yield the most important findings for human happiness. Defining our explorations toward these new horizons will undoubtedly have a massive impact on the end results.

Second, positive psychology can help to assess the impact and implications of brain-machine interface and neuroscience studies that are underway today – and into the future. 

If these technologies aim to improve human life, it would seem that they’d be open to the perspectives of Positive Psychology, a science designed to understand human fulfillment and well-being. Having a positive psychologist in the room when technologies are being developed to alter emotion or mental conditions would seem a must for any research that aims at improving aggregate well-being. What if developments in cognitive enhancement proceeded without a strong grounding in what would be conducive to human happiness?

Neither the defining nor the direct guiding of research need imply rare or nearly inaccessible technologies. I’d argue that if positive psychology waits until invasive brain-machine interface procedures are commonplace before it gets involved, it will be too late to make much of a contribution, and too many major shifts will be underway.

As more and more immersive virtual reality experiences are developed to facilitate gaming, hold meetings, or even interact with loved ones thousands of miles away – shouldn’t positive psychology contribute its perspective on how these developments might impact relationships and well-being?

As non-invasive brain-machine interface (EEG) caps and open-source EEG-reading software allow us to tie brain activity to emotions, and to control our devices with thought, might positive psychology itself be informed by these new findings?
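To make concrete just how accessible this kind of analysis already is, here is a minimal sketch (in Python, assuming only NumPy; the "EEG" signal is synthetic, not a real recording) of the core computation that open-source EEG toolkits perform: estimating power in frequency bands such as alpha (8-12 Hz, often associated with relaxed wakefulness) versus beta (13-30 Hz, associated with active concentration):

```python
import numpy as np

def band_power(signal, fs, low, high):
    """Estimate total power in a frequency band using an FFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= low) & (freqs <= high)
    return psd[mask].sum()

# Synthetic stand-in for an EEG trace: a 10 Hz "alpha rhythm" plus noise.
fs = 256                              # sampling rate in Hz
t = np.arange(0, 2, 1.0 / fs)         # two seconds of samples
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(len(t))

alpha = band_power(eeg, fs, 8, 12)    # dominated by the 10 Hz component
beta = band_power(eeg, fs, 13, 30)    # only noise falls in this band
```

Real consumer headsets and research software perform far more sophisticated versions of this band-power analysis, but the underlying move – mapping measurable brain activity onto affective states – is precisely the territory where positive psychology’s expertise on well-being could inform what is measured and why.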

When experiments with memory and perception make it possible to enhance human memory, or even selectively remove or implant memories (research is already well underway on other mammals: www.washingtonpost.com/blogs/compost/wp/2013/07/29/mit-scientists-implant-memory-in-mouse-brain), will it not be critical to monitor both their short-term and long-term effects on human well-being?

Especially as more and more procedures are developed to directly impact human emotion (more refined brain stimulation, optogenetics, smarter drugs), doesn’t it make sense that positive psychology should inform these direct attempts at improving well-being?

Technology isn’t simply a “conduit” for spreading positive psychology; it will be one force that re-shapes and augments our very notions of the human experience, and of well-being itself. Positive psychology cannot afford to be a technological bystander if it wants to bring about its desired aim that 51% of the human population flourishes by 2051 – it will have to be part of those important conversations at the intersection of technology and psychology.

Seligman’s determination to improve the “tonnage of human happiness” (http://ed.ted.com/lessons/martin-seligman-on-positive-psychology) is an aim that all toiling scientists and researchers who claim they’re working to “make the world a better place” seem to share. More important than whether one shares positive psychology’s goals is the fact that its expertise is needed – more than ever – in a world where happiness and the means to it are redefined and altered by technology itself. We need to have the perspective of positive psychology helping to guide this transition, lest we have no place for the science of happiness at the table of tomorrow.

– Daniel Faggella

From Man to Deity: Today’s “Augmentation” is Nothing New

1) Augmentation is nothing new

Until recently, “augmented reality” seemed to only have a place in video games and science fiction movies. Though at the time of this writing, “AR” has yet to permeate our day-to-day experiences, the number of burgeoning technologies in this field is staggering. With simulated “sampling” of everything from couches to watches in retail (www.trylive.com), with augmented reality advertising on its way to going mainstream, and with Google Glass now available to the public, there’s plenty of opportunity for augmentation to become as much a part of work and home life as the internet, or electricity itself. Augmented reality technologies like ‘Glass’ are admittedly novel, but the transition that they represent is anything but.

Since before the dawn of language, humans have aimed to “augment” their experience to be closer to what they believe to be good, or conducive to fulfillment. Adding to their physical capacities with tools, pre-human species maintained an edge over predators and competing mammals. Adding to their mental capacities with language and symbols, early humans distilled and passed down knowledge. This same primitive push to overcome our limitations and extend our capacities has simply snowballed for millennia to produce our present world of iPhones, Bitcoin, and space exploration.

A modern thought process underlying Google Glass might be something like: “Wouldn’t it be great to overlay restaurant reviews on the restaurants I’m seeing on the street?” or “Wouldn’t it be great to see emails arriving in real time and determine their importance quickly?” The thought process used by the human ancestor Australopithecus afarensis (though they certainly didn’t have such ornate language for their internal dialogue) when developing early stone knives was something like: “Wouldn’t it be great if we could cut up these animal parts and eat them more efficiently?”

We have always augmented our present condition in ways correlated to our capacities and technology: early hunter gatherers would have had no notions of email and so would not (indeed could not) have dreamed of a more efficient way to check them. What makes this connection between human desires and human capabilities so important is the fact that what we augment will soon include not only our tools, conveniences, and outside worlds – it will also impact our consciousness and our humanity at the very deepest levels.

2) Augmentation Act I: Molding our experience to fit our desires

The term “augmentation” means “to add something to (something) in order to improve or complete it” (Merriam-Webster). Though innovations like running water and written language might be seen to “augment” our comfort or capacities, they do not represent the extensions of human potential that this essay intends to cover.

Though it certainly doesn’t represent the furthest reaches of augmented experience, Google Glass is an innovation that fits the bill of potentially “extending human potential.” Receiving directions, checking emails, getting reminders, and communicating with friends and co-workers through an augmented display will potentially become as necessary to modern life as are the cell phone and internet connection today (does anyone read an encyclopedia anymore?).

Augmenting the body’s appearance has been a theme for ages, with the first actual breast augmentation surgery occurring in 1895. Today, regular procedures not only reconstruct or beautify the body, but also increase its function – as does the RFID personal identification chip, surgically implanted in the flesh of the arm. RFID evangelists aim to see a world where paying a restaurant tab, starting your car, and unlocking your smartphone can all be done with the wave of a hand and a simple identification number that responds to a low-frequency radio wave hitting the implant. Augmenting and enhancing the human senses is nothing new either, with the first pair of eyeglasses probably having its origins in the middle of the 13th century, and cochlear implants to help the hard of hearing flourishing since the 1980s. Augmenting our looks, our function, and our senses is nothing new, and the trend isn’t one we should expect to slow down any time soon.

Expanding our sensory experience further is perhaps best exemplified by the technologies of virtual reality, where – even today – entirely new sensory environments hint at the potential for an “immersive” virtual reality that people might not want to leave.

Radically extending our function is perhaps best exemplified by the experiments of BrainGate, where scientists are directly “plugging” paralyzed individuals into an apparatus that allows them to move a robotic limb, move a mouse on a screen, and more.

The augmentations listed here adjust our condition to be “better” by human terms today (i.e., checking email more efficiently, “upgrading” our body parts to look better, exploring more interesting worlds than we have access to in “real life,” etc.). Even more “far out” technologies like BrainGate seem merely to permit “normal” human activity, and appeal to current Homo sapiens’ desires. All of these enhancements function to amplify what human beings find useful, interesting, or ultimately enjoyable. Through this “improvement” or “betterment,” I would posit, humans strive to become happier. By aiming to become “happier” I imply not only that we push toward simple comforts or pleasures, but also toward meaning, engagement, and all of the other rich constituents of fulfillment.

In this way, becoming more efficient, connecting with loved ones, experiencing fun virtual games, or regaining lost function of limbs all serve to bring us closer to our own notions of fulfillment. Enhancement, however, will not stop at the mere attempted perfection of the present human condition.

3) Augmentation Act II: Molding consciousness to fit our ideals

Our technologies that strive to bring us happiness, however, face the same problem that all of humanity’s efforts have come up against: fulfillment is fickle.

We as humans are difficult to satisfy, and despite massive increases in our physical comforts, lifespan, and capacities in the last hundred years (from penicillin to electricity to the cell phone to the internet and beyond), there is evidence that these improvements may not have been followed by similar improvements in well-being in first-world nations – and that, indeed, well-being may be at a relative standstill. The relative nature of our sense of satisfaction levels off the benefits of running water, of internet searches, of video games, of motorcars – leaving us no happier after all, despite these “improvements.”

Even drastic events like winning the lottery or being paralyzed in a terrible accident don’t seem to have lasting impacts on one’s baseline level of day-to-day happiness. Are we to believe that Google Glass, virtual reality, or telepathic control of electronic devices would give any more of a “jump” to our aggregate level of human well-being than all of the innovations up until now? I would posit that this is unlikely.

This is not to say that humanity ought not pursue “betterment” or “augmentation” (I’m certainly grateful for the internet, running water, etc…), but that a more fundamental kind of enhancement would have to take place in order to move the needle on aggregate human happiness as a whole. 

It seems inevitable, then, that we will tinker with our consciousness and emotional experience directly. As soon as technology allows, I would posit that the question of “how can I make this more enjoyable?” makes a swift transition to “how can I enjoy at all times?” The question of “how might this experience be more fulfilling?” quickly moves to “how can I always be fulfilled?” As soon as technologies make it viable to shape emotions and consciousness, much of humanity will be eager to directly sustain positive emotion, pleasure, engagement, and all the rich potential facets of fulfillment and happiness at last. 

Already today we have scientists controlling the memories of rats with a cybernetic cerebellum, hinting that we can develop the ability to create or delete what we can recall about our waking experience (have anything you’d like to forget?). 

We have paralyzed individuals checking their email by moving a mouse on a screen with their thoughts, potentially making way for a future where we can “think” ourselves into virtual worlds – in addition to manipulating the electronics in our environment.

We have depression itself being treated with deep brain stimulation – running wires into the skull to directly influence the pleasure centers, possibly paving the way to a world where we can modulate our own emotions at will (“Do I want to be a 6.5 on the happiness scale today, or a 9?” “Do I want to feel enthusiastic right now, or contemplative?”). 

No one is supposing that the technologies involved in modulating memory, telepathically controlling devices, or volitionally changing our emotions are going to slow down anytime soon – and one can only imagine what twenty more years of development will allow for (especially considering the pace of improvement in so many industries). We cannot suppose this wave of augmentation will be hindered by any agreed-upon definition of “human” that cannot be violated. Our ideals, it seems, extend far beyond such boundaries.

4) Augmenting sentience is the only augmentation there is

I would argue that as human beings, we do not care about augmenting things, and we do not care about augmenting reality, and we do not care about augmenting our appearance or senses either – we only care about augmenting the one “thing” we truly have access to: our conscious experience, our sentience.

From running water to Angry Birds, “improvements” and “innovation” only matter because we believe that they do – and because they impact our conscious experience. We don’t want more accessible drinking water or bathrooms for their own sake, and we don’t want smartphone games for their own sake either, but only because they bring our present experience closer to one that seems more conducive to fulfillment.

I believe that a shift from using external gadgets and gizmos to tinkering with the “gadgetry” in our minds will occur for two reasons:

The first is the simple fact that humans do not seem to be capable of continued experiences of fulfillment or pleasure (see “Act I”). Though reasons are speculative, one of the more common theories is that a hunter-gatherer world required consistent vigilance in finding food, gaining a mate, and staying safe from predators, which made it advantageous to continually aim for more – and to experience limited satiety and complacency (http://faculty.som.yale.edu/ShaneFrederick/HedonicTreadmill.pdf?subject=Please+mail+a+hard+copy+of). Attainment of some ideal, continued state of fulfillment might have held prehistoric man back from learning, expanding his capacities, or productively working (assuming he could be just as happy without any of those activities).

However, it can reasonably be said that virtually all human actions originate from a desire for positive emotions and fulfillment (see “psychological egoism”). What technological innovation has not been created to “make the world a better place,” or to bring profit or accolades to the inventor? Some might argue that an inventor enjoys creating in and of itself (which is still motivated by self-interest), but it could also be argued that some of his fulfillment would be derived from the recognition and perceived usefulness of his creation. Would the smartphone that you own ever have been created if someone didn’t think it would make people happier or improve their efficiency? Would anyone aim to improve the efficiency of checking email and looking up maps on a smartphone if they believed it would have an insignificant or negative impact on aggregate human happiness?

Would anyone undergo dangerous cosmetic surgery, or embed an RFID device under their skin, or allow researchers to cut open their skull and stick a microchip with hundreds of tiny spikes into their cerebral cortex… if none of these “augmentations” made a direct and positive impact on quality of life? Will our desire for “more” and “better” stop short at the gates of the human mind itself – when we are finally able to directly impact happiness? When such a device exists, would anyone who was hooked up to a “2x happiness multiplier” ever aim to go back to “normal” human experience?

Some might say that people wouldn’t be willing to alter their conscious states with some kind of futuristic treatment. If that were so, we wouldn’t expect to have seen antidepressant sales soar by nearly 400% between 1988 and 1994, and between 2005 and 2008. With all the undeniable and potentially dangerous side effects of antidepressant treatments, we wouldn’t expect these psychiatric drugs to rank among the best-selling prescription drugs in America. It seems that Americans (and citizens of Iceland, Canada, Australia, etc.) are perfectly comfortable bathing their brains in chemical combinations in order to feel better – side effects or not. Should deep brain stimulation (or some other future treatment) prove to be even more effective in altering mood directly, would we expect a lower adoption rate?

It could be argued that we do everything to arrive at some kind of fulfillment or happiness – but our happiness (or what composes our happiness) is never completely under our volitional control – and this we are bound to change. I would posit that if people could pay for feelings directly, the merchants of emotion would have few troubles making sales.

The second reason that we may indeed tinker with consciousness itself is the continual pull towards our own ideals – and our desire for more and better (see “Act II”). It is not just that we do not remain satisfied and are therefore restless, it is that we yearn for experiences and capacities beyond our present condition, and can envision futures better aligned with our desires and values. Ford envisioned a world where people had the autonomy that came along with their own motorcars. Edison saw a world that didn’t depend on candles for light. Kennedy marshaled the will and resources of America to place a man on the moon. Looking to history – or even to religion, mythology, stories, art, and film – it’s clear that our visions extend well beyond the boundaries of “human,” even if culture and mythology ardently warn us against becoming more than human (from Icarus to Frankenstein and beyond).

If we could be drawn to reach the moon and to map the human genome, is it beyond reason that we would aim to erase horrific memories, or eliminate the need to eat food to survive, or drastically extend human vision or hearing, or aim to live forever, or pursue any other fantastic ideal? Whether we like it or not (and I’m not making a value judgment myself), it seems that we’ve sailed past Francis Bacon’s proverbial Pillars of Hercules on so many occasions that it would seem ridiculous for humanity to stop now – or that nearly any policy could prevent it.

If it’s true that sustaining consistent fulfillment is essentially impossible in our present state, then it would seem that humanity will aim to overcome this limitation. Also, if it is true that we pull ourselves towards inventions and ways of living that align with a higher vision of what life might be like, then it seems that our restless yearning and improvement won’t stop any time soon.

It is these factors which seem to inevitably take us beyond augmenting our present experience, and into designing our conscious experience itself.

5) Maybe this isn’t so implausible…

Inevitably there will be resistance to direct changes in our human condition, and we can’t expect that each human will unconditionally agree to change themselves. It may be possible that this kind of augmentation will only be available to the wealthiest citizens first, creating a number of unforeseen social and political concerns (on top of those posed already by a trans-human transition). 

Also – there will invariably be a number of social and cultural arguments as to why people ought not to take the “push-button” approach to getting smarter or feeling happier. Indeed, the very notion of “wire-heading” (embedding electrodes into the brain to induce a continued state of fulfillment or bliss: http://www.wireheading.com/) might appear to be a short-cut to the type of fulfillment which some people feel we should have to earn, not simply plug into.

One argument against volitionally improving our happiness or well-being (or indefinitely prolonging it) is that without pain or struggle we might never know a real fulfillment, or that without challenge there is no richness and meaning in life. 

It’s important for us to bear in mind that the entire dynamic of “the bad” making “the good” more meaningful or enjoyable is merely a constituent of our current criterion for happiness. In other words, we (as human beings) cannot possibly imagine any kind of unbroken fulfilling experience, because we’re incapable of imagining it, let alone experiencing it. This is not because positive emotions are inherently fleeting – or inherently “better” when we have suffered more before or after – but is merely a side effect of how we are naturally “wired.”

In other words, if our “hardware” were different, we might be capable of knowing deeper, more intense kinds of happiness and fulfillment without the dimensions of negativity that inevitably accompany them now. To posit that our cerebral cortexes can know all possible emotional and intellectual experiences would seem preposterous (see Oxford’s Nick Bostrom explain the further experiences humanity might attain in his famous 2005 TED talk).

Apes don’t seem remotely capable of grasping human poetry, humor, politics, or morality – what are the future possibilities of intelligent sentience that we cannot presently imagine? It seems plausible that there are thousands of senses outside the petty five that humans now have access to (io9 – limitations of human sensory perception), and an “enhanced” entity might experience them all. It seems likely that a sentience (either man-made or built from an enhanced human mind) a thousand times more powerful than that of humans would conceive of moral dictates and understand the world in ways far beyond anything that present mortals could imagine. Such an entity would grasp our meagre notions of “beauty” or “relationship” or “happiness” at such different and potentially deeper levels than we can presently fathom that it is hardly worth prognosticating.

A second argument against such augmentation of sentience might be that we could not possibly experience any kind of worthwhile happiness if we were hooked up to some kind of “happy machine.” In other words, expression, challenge, learning, and purpose would seem to be totally void in a bland world of hardware-induced bliss.

Though again we could argue that these elements of “richness” and “challenge” are merely constituents of happiness that we believe are necessary given how our brains are “wired,” this argument might be refuted in other ways. Technology – and even technology that directly influences our consciousness and emotions – would not inherently bring about a slothful and placid state. I’m nearly certain that our ancestors of 200 (or 2,000) years ago would have expected to see a much lazier breed of human being walking the earth with the advent of the motorcar, the internet, the cell phone, running water, and the like (external augmentations of our condition). They might not have foreseen the added richness of learning, connectedness, and new, more refined degrees of work that would accompany these technologies.

Despite our complaints of environmental damage and the annoyances of modernity, few of us would willingly live as our predecessors did even fifty (never mind 500) years ago. Of those who would claim to prefer such conditions, few seem to have the courage to leave their iPad or Volkswagen behind.

One might posit that in such futures no challenge could exist, and so no meaning could exist. This raises the question of how anything might be worthwhile, motivating, or stimulating if the experiences of “struggle” and “challenge” were wholly foreign to us.

First, augmentation or enhancement – either of our emotional condition or of our mental / physical capacities – would not inherently yield sloth or listlessness. If most of society wore augmented reality contact lenses and were capable of overlaying and interacting with computer screens anywhere and any time, there’s reason to believe we’d work more, not less (cell phones don’t seem to have eased the work-burden much…).

Second, as mentioned, the constituent of “challenge” as a necessary component of happiness is only the case with our current “wiring.” It seems reasonable that if chips with tiny metal spikes stuck into the cerebral cortex can allow a paralyzed person to move a cursor and check their email (again, see BrainGate), it might be possible for us to re-wire what constitutes “happiness” in the human mind as well. After all, it is our conscious sense of well-being that we want “enhanced” when we enhance anything else.

6) Becoming gods: The massive responsibilities of a world in transition

Most of us can imagine what life behind Google Glass might be like. We could take photos of places we’ve been, receive notifications of messages and email, or even look up restaurant reviews or interesting facts with voice commands. A step beyond this might imply an embedded, cerebral chip that allows for a literally photographic memory – and access to all of today’s gadgetry via telepathic commands. One might imagine playing music, watching a movie, accessing websites and software – all within the built-in “screen” of your own existing visual field (see the short film Sight for explorations of the pros and cons of this kind of enhancement).

The leaps in human potential that I predict will go much farther than merely refining today’s conveniences. We might imagine a future with a virtual world that is as – if not more – real than “real life” is today. This cyber world might be equipped with digital “people” who are calibrated to your needs and likings, capable of being deeper and truer friends than “real” friends – with their own needs and agendas – ever could (like a better version of Samantha in Her). This virtual experience might operate in a completely different time paradigm, capable of stretching a real “day” into a hundred or a hundred thousand years.

Though I see this transition as nearly inevitable, I don’t necessarily believe all aspects of the alteration of human potential will be inherently beneficial, and I don’t think they will be inherently evil, either. It seems more rational to view all technological development – particularly that which involves altering consciousness – to be a potentially double-edged sword, with myriad ethical consequences and ramifications.

The reason that so much of this trans-human experience is difficult to imagine is that we’re probably incapable of imagining much of it – as chimpanzees are unlikely to comprehend the rules of chess or the writings of Emerson. At a certain level of higher intelligence and awareness, we’d almost certainly transcend human communication as we know it – becoming capable of exploring thoughts without the hindrances of today’s primitive representations. Our fulfillment might then not be derived from a mindless bliss (as mentioned before), but be a rich exploration of the world around us at a million times today’s depth, or be the God-like creation of countless virtual worlds.

Our consciousness might be split a thousand ways so as to experience a thousand “lives” and countless experiences – yet still have a wholeness and continuity in a kind of “personhood” that we can’t possibly understand today. In the future, we may very well attain “divine” powers and experiences that go unhindered in a cybernetic eternity. Futurist and entrepreneur Peter Diamandis believes that an increasingly interconnected global society might eventually merge into one aggregate consciousness, becoming “God-like.”

Google Glass, then, is nothing – neither is any of today’s “augmentation.” That is, except as stepping stones.

Google itself is preparing for the next wave of technological enhancement: embedding silicon chips into human brains.

Right now people use Google’s search engine because it provides them with what they want quickly and conveniently. Google Glass can only hope to catch on by permitting even more efficient and effective ways to provide users with the ends or experiences they want. 

When people can type in (or “think up,” as the case might be) what they want and have it BE their experience – no waiting, no finding, no treatments… it seems likely that they will do that instead. 

The momentum of these developments, I posit, would quickly take us beyond the satisfaction of human desires or enhanced human desires to a re-defining of fulfillment, of ethics, of relationships, of consciousness itself… and these are transitions worth considering here and now.

– Daniel Faggella


Tinkering with Consciousness – The Great Ethical Precipice We Face

1) A Change in Our Nature, Not Just Our Gadgets

There’s much to be said about how different we’ve become from the society of even 30, never mind 300 years ago. Hunkered over our desks or our iPhones, heads buried in a dozen screens per day, there’s plenty of research to suggest that we’re losing vital skills and capacities that our ancestors might have taken for granted (Discovery). Indeed, there’s evidence that search engines may be acting as a kind of crutch to much of our memory (NY Times).

On the other hand we very well may be freeing up cognitive “space” for more complex and varied tasks, and adapting to a world where handling a hundred data streams is hardly a choice anymore. In addition, technology might be argued to afford us more freedom to pursue our highest callings (as we see them), and explore new and rich elements of life. We may be putting up with a lot more “beeping” in our rooms and in our pockets, but no longer are we trapped on a provincial farm or in a hot and noisy factory to generate an income for ourselves.

We may be losing some faculty of memory by using a search engine as opposed to a library, but we also have the video, audio, and text information of the world at our very fingertips – making memory potentially less relevant. We probably have weaker shoulder muscles because we don’t harvest our own crops and wash our clothes on stones. Whether technology is influencing the human condition in a positive or negative way is in large part a topic that’s debatable – but where technology seems to be taking us in the future may not be.

The true overlap of technology and psychology isn’t something exogenous – that is, it is not about our external interaction with technology, or “tools” that we pick up and use. The actual overlap happens when technology and psychology literally intersect. When our technology becomes endogenous – or forms an inner part of us – we will have gone beyond an increase in our capacities aided by gadgets, to a literal change in our human condition itself. The technologies of brain-machine interface, nanotechnology and artificial intelligence (which humanity has just begun to explore) represent not just a new transition in how we live and work – but the beginning of a profound transition in life and consciousness itself.

I believe that we as a society are at a point in our development of consciousness-altering technology that mirrors the era of the Wright brothers’ first flight. We are surrounded by burgeoning technologies that may – sooner than we expect – literally alter all aspects of our sentient experience.

2) The Cusp of an Era of Technology + Psychology

For a concrete example of the technology and psychology overlap, look no further than the Institute for Brain Science at Brown University. Headed by Dr. John Donoghue, scientists at the IBS use a technology called BrainGate to connect tiny pads full of electrically sensitive spikes to the brains of paraplegic subjects. Using the patient’s own brain activity, hardware and software are used to translate thought into robotic or digital “action.” Brain activity can move robot limbs to, say, bring a cup of coffee with a straw to the patient’s face for a drink – or move a computer mouse on a screen to play a video game.

In 2008, BMI was used to control a mouse and play video games; in 2012, a patient named Cathy famously used a BMI-controlled robotic arm.

Deep brain stimulation, another example, is a procedure involving pulse generators implanted under the skin near the clavicle. Like a pacemaker for the mind, its electrodes run directly through small holes drilled into the skull and into specific brain regions that require stimulation. This procedure was approved for treatment of movement disorders like Parkinson’s and dystonia in the early 2000s. Its applications have broadened over the past half-decade, however: newer uses include treating severe depression, chronic pain, and phantom limb pain, and future applications being explored now include treating obesity.

Across the Atlantic at Israel’s Tel Aviv University, rodent brain parts are being successfully swapped out with man-made replacements (New Scientist). It should be noted that the cerebellum of a rat – albeit not very large – is no simple piece of neurological equipment. Receiving sensory information and coding it for transmission to the rest of the brain is a complex task, but it’s one that scientists have been able to achieve with silicon in place of biological tissue. Without a cerebellum, a rat is unable to perform even basic motor reflex actions (in this experiment: blinking in response to a noise signaling that a harmless puff of air will blow in the animal’s face). With this new mental hardware, however, the rat is able to develop the reflex. “It’s proof of concept that we can record information from the brain, analyze it in a way similar to the biological network, and return it to the brain,” says lead scientist Matti Mintz of Tel Aviv. In the future this kind of technology may permit us to alter the memory of mammals – even to the extent of implanting memories, or drastically improving memory capacity and recall.

Optogenetics is another burgeoning technology, with research commencing all over the world, including at the Massachusetts Institute of Technology. Optogenetics involves the use of light – shined directly into the brain cavity – to trigger electrical activity in individual neurons or sets of neurons. How does light fire off neurons? It doesn’t… unless those brain cells have been genetically modified by adding a piece of DNA from photosynthesizing algae that responds to direct light (see the full explanation at Video.MIT). Led by Ed Boyden, the MIT team is using this technology to directly control the brains of rats to make them exhibit specific behaviors (such as running in circles), or even to recover eyesight by turning the right receptors on via optogenetics. This technology may allow us to experiment with turning on and off the many thousand different types of cells in the brain, and is intended to create better therapies for an entire host of psychological disorders which we do not fully understand today.

At the very least, these technological advancements seem to present interesting scientific horizons. Indeed, in media, news, and research publications, they are mostly portrayed as just that: “interesting.”

But there will in fact come a time (sooner than most expect) when we realize what we’re truly up to. There will come a time when we will recognize that we are not simply working on catalysts in a transition to a merely “better” or more technologically advanced human world – but that instead we are catalysts in a transition beyond “human” altogether. There will come a time when we grasp that we are in fact tinkering with the very bedrock of morality itself: sentience.

3) The Highest Stakes Imaginable: Altering Sentience Itself

We might refer to “moral gravity” as the relative importance of an act, occurrence, or event, based on weighing the various ethical factors at play. The practical application of the concept is (like so many other constructs) better explained by example than by purely conceptual reasoning.

For the sake of exploring our point, let’s use the relatively extreme example of a train conductor making a choice between the lesser of two evils:

Let’s suppose you’re a train conductor traveling down an open country stretch at 80 miles per hour, and ahead of you in the distance is a fork in the track. Getting closer to the fork you can see that each strip of track after the fork is occupied by different animals.  At this speed, there’s no way to stop the train before the fork, and one or the other path must be taken. You determine that – for whatever reason – the left track is overrun with gerbils and hamsters, and the right track is overrun with cats and dogs. Hardly believing how strange a situation this is – you get ahold of yourself and take responsibility for making an important ethical decision. So – what do you do?

For most people asked this question, cats and dogs would be spared ahead of hamsters and gerbils. Why is this? Because they hold a greater capacity for sentience and intelligence. Cats’ lives represent a greater richness of conscious highs and lows, a greater grasp of the world and of their experience, than gerbils’ do. It is this inner world – this capacity for volition and rich conscious experience – which essentially determines ethical relevance.

On a practical level, it would appear to be a being’s capacity for rich experience, relationship, and emotional / cognitive range that dictates the moral value of a given entity – the weight of its life on an ethical scale. I say this not to be cold, nor to bypass the millennia of ethical thought which humanity is well off to continue exploring. I say this merely to make a practical point – a point made clearly with the example of the train.

Think about it, what weighs on an ethical scale? There are massive events of great physical significance all over the universe, happening every millisecond, and none of them registers on a Richter scale of morality as we are describing it. Think of a black hole, or the formation of an entire nebula of stars. These events involve some of the most intense forces in the known universe, concentrated and powerful, affecting matter and space around them for distances that we as humans would be almost incapable of comprehending. However, if no mind is there to experience or be affected by these events, what is their ethical gravity?

It’s the old “if a tree falls in the woods” question. I’d argue that things matter ethically if they matter to an experiencing agent. Consciousness, then, is the bedrock of positively all ethical relevance. For thousands and thousands of years, despite the advent of agriculture, the tools of modern society, and the technologies of today, our perceiving, feeling awareness is what has allowed events to “matter” at all.

If you took a prehistoric newborn, and somehow swapped him at birth with a baby born today, both would have the same innate capacities, and both would grow into and adapt to their environments, despite the differences therein. Hence, through the ages, for somewhere between one tenth and one fifth of a million years, our innate mental faculties, our capacity to do, to feel, to learn… has been the same… but now that is changing. The significance of this change in our sentient and conscious potential itself represents a shift that cannot be overstated.

I believe the creation of and tinkering with sentience itself – on a grand scale – entails greater ethical gravity than any other ability or event in the known universe.

Everything from the food we eat to the smartphones we carry around to the buildings where we work only “matter” because they “matter” to our conscious mind, they matter to us, the experiencer – and it could be argued that this “capacity to experience” alone grants ethical significance. Hence, we feel saddened both by the loss of a pet dog and by the accidental loss of an iPad, but only the former registers on the ethical radar itself (as the iPad only matters to the experiencer, while the dog is believed to be somewhat sentient). No “ticks” on the ethical Richter Scale exist outside the consciousness of a living being.

We feel worse when we see a dead squirrel on the road than when we see a dead bug on our windshield for the same reason. The bug may be conscious only at some very low level, while the squirrel, we imagine, can feel and even “think,” and as a mammal like ourselves, may have baby squirrels in a nest who depend on it.

Similarly, I believe that the world will come to see that technologies which tinker with conscious experience and conscious capacity are by far the most ethically important (IE: have the highest ethical weight) of all inventions. Sentience, indeed, is the highest currency.

Growing hamburgers in petri dishes will certainly be a convenience and a change in our way of life, but won’t affect our happiness or the essence of our human condition any more than the toaster oven does. What I’m referring to here would be an entirely separate level of impact on our actual sentient experience; that impact would occur through developments like a technology allowing us to determine our emotional experience at will.

Developing a robot that could help with household chores would be a great benchmark achievement of human ingenuity, and would certainly matter in our day-to-day lives. However, unless these robot maids were built with awareness and sentience, they could hardly be argued to “matter” in comparison to – say – a brain implant that allowed for total recall of all memories, the addition of entirely new sensory perceptions, or a completely immersive virtual reality.

Both the “petri burger” and the robot maid are exogenous changes to our condition, external changes in our world that would potentially make life simpler with regard to handling manual tasks or labor. Volitionally controlling emotional experience or memory, however, would make endogenous changes to our actual human condition; they would be primary because they wouldn’t impact the world we live in… they would impact the world we live through – the very lens of our awareness – our conscious experience.

The possibilities of technologies that alter human potential go beyond mere “adjustments” to our present condition. Oxford philosopher David Pearce believes that sentient life will be capable of transcending suffering in all forms: “I predict we will abolish suffering throughout the living world. Our descendants will be animated by gradients of genetically pre-programmed well-being that are orders of magnitude richer than today’s peak experiences” (hedweb.com/confile.htm). Other future thinkers, like Google’s head of engineering Ray Kurzweil, predict that humans will be able to upload the contents of their entire minds and potentially explore an infinite combination of euphoric and rich virtual experiences. No higher stakes exist than when building upon sentience itself.

4) How is This Sentient Transformation Beginning?

With so many transformative technologies in our midst today, which are most likely to bring us to what philosophers refer to as a “post-human” condition? In addition – what might be the time-frame in which this transition takes place?

Although I believe that brain-machine interface and the development of significant artificial intelligence will be the primary drivers behind “tinkering” with sentience, I – like any other prognosticator – cannot make that statement with any degree of certainty. Similarly, I cannot be certain of any specific timeframes, though my inklings (and the educated guesswork of others like Kurzweil) tell me that irrevocable changes to the techno-human condition will occur within the coming 25 years.

Though we cannot look very far into the future, it’s interesting to analyze technological transitions of the past to glean insight into how technologies have historically made their way from ideation to global significance.

The Wright brothers first took flight in 1903, and by the First World War – hardly twelve years later – the entire civilized world had planes, and the first commercial flights were available. Forty-four years after the first flight, the sound barrier was broken. Twenty-two years after the sound barrier was broken, man set foot on the moon.

Are we to expect that our technological advancements will be any slower than those of centuries past? Could we possibly imagine what brain-machine interface, virtual reality, and artificial intelligence will be capable of in fifteen, never mind forty years time?

In an interesting example, modern jet aircraft are increasingly becoming some of the most advanced systems for human-machine interface. Already, pilots don helmets with spatial detection systems that augment their vision – permitting them to see “through” their aircraft with x-ray vision in literally any direction (aided by cameras placed on the outside of the jet itself – see the BBC article / video here). Similarly, experiments are being conducted now with drugs and even transcranial electric shocks to keep pilots awake and aware for long spans of time. Are we to assume that no efforts to permanently enhance our limited senses and abilities will be made? On the contrary, we ought to assume that research and experimentation (military or otherwise) will aim to bend our human capacities towards our ideals, and away from the “un-enhanced” biological limitations we are born with.

5) How Far Will Tinkering with Consciousness Go?

The chimpanzee shares over 90% of the same DNA base pairs with a human, but is a drastically different creature, physically and mentally. If a chimp had access to and basic command of a television remote control and a toaster oven, he could be said to be more “advanced” than his jungle-dwelling brothers, but not inherently different. His innate qualities would be the same, his core physical and mental faculties the same… and few people would argue that his moral status as a sentient being would be raised above that of his fellow chimps.

Alternatively, let’s say we have the opportunity to transform the chimp into a human being. Now, even without a cell phone, television, or any other technology, the chimp-turned-man is almost immeasurably more intelligent. Entire worlds of richer emotional experience, language learning, relationships, literature, art and science now open up – a 1-3% genetic shift permitting a much greater change than all of the more complex technology in the world.

A chimpanzee with a television is likewise not inherently valued any more than a chimpanzee without one. Transformed just a few percentage points into a human being, the animal enjoys a drastically increased moral status because we value human life in all its richness far above the life of any chimpanzee. This is not because of our dextrous thumbs, or our motor cars and skyscrapers; the “richness” derives from our intelligence and our awareness.

Let us suppose that a method could be developed for enhancing human minds to have twice the capacity to learn, or to have an essentially unlimited and flawless memory, or greater volitional control and mindfulness of deep virtues, and a vastly greater capacity for creativity, making Shakespeare and Da Vinci relatively boring in comparison. Assuming that the moral goodness of such individuals remains at or above that of any given “un-enhanced” human being, would they then be rescued first from a burning building? If their industry, their art, and their superior methods of political governance were vastly superior to that of “un-enhanced” humans, then might we even be morally obligated to not just save them first from buildings, but create more of them, or to become enhanced ourselves?

Most people might cringe at the seemingly unrealistic thought of “enhanced” human beings holding some kind of moral weight that is superior to the humans from which they were created. After all – shouldn’t we feel a kind of reverence for our “ancestors” – or for a sense of equality between all sentient beings? But if no “ticks” on the ethical Richter Scale exist outside the consciousness of a living being… then it might be argued that the more depth and intensity of that life experience, the more that being “registers,” ethically speaking.

I have an inkling that many of us would not choose to rescue an enhanced over an un-enhanced human from a burning building. Similarly, I do not believe that many chimpanzees would rescue a human before they rescued another chimpanzee. We like our own kind; they are like us, they represent us, we don’t want to turn our back on them, and we don’t want to look like a traitor when we go back to our domestic life (“Yeah, I felt like I had to rescue the enhanced guy and let the un-enhanced guy die – I know it’s a little weird but if you think about the moral underpinnings it’s really the right choice – can we still be friends even though you’re un-enhanced?”).

With our own kind, there is a sense of felt relatedness, an affinity, and the enhanced individual seems to bring with it the real fear of something better or more powerful than ourselves. So, who’s to say, maybe there will emerge an equality amongst us all, and the hard-won value of equality will stick with humanity and post-humanity long into the future. Why would it, though, when we are to enhanced humans as humans today are to chimps… or to mice?

My projection is that the continued dominance/acceptance of our presently popular set of ethical values – though hard-won and functional in our present condition (with positively all due respect to Locke and many others) – will be about as predictable as roulette – or the post-human condition itself. In other words, it will be unpredictable to the highest degree.

If our values and capacities significantly shift in the decades ahead, I can imagine that tinkering with sentience and intelligence itself might yield some unique permutations of what “values” and “virtues” even imply. These ideas are themselves constructs which are likely to be subject to much deeper probing and understanding by beings with 100x “regular” human intelligence.

Where might this transition eventually leave us? The answer to this question is positively unknowable, but a number of the world’s future thinkers – from fiction writers to philosophers to researchers – have contributed to the potential “snapshots” of the future that we might find ourselves in.

Peter Diamandis, founder of the X-Prize Foundation and co-founder of Singularity University suggests that we will indeed become “God-like” in our capacity to access all knowledge and connect – possibly even merge – with one another into a kind of meta-intelligence.

The famous American inventor and futurist Ray Kurzweil posits that humanity will conceive of fully immersive virtual realities by the 2020s. The “VR” that Kurzweil has in mind will not have its origins in a pair of goggles and a joystick, but in microscopic nanorobots that will be able to enter the brain through the capillaries and provide direct stimuli, creating a virtual experience indistinguishable from physical experience.

Oxford philosopher David Pearce foresees the potential of a world where all biological life is genetically altered to experience no pain, and only “gradients of well-being” in a process he refers to as “paradise engineering.” Pearce even mentions that we likely should not stop at stimulating our current cognitive systems, lest we be “trapped in [a] local minimum,” unable to access “the richer states of consciousness or the more sublime states of wellbeing.”

With brains a hundred times more powerful than those within our craniums today, would we expect anything about our world to be the same? Would we expect to only have the capacity of normal human eyes, or might we see infrared, or have nearly infinite zoom, or see through objects? Would there be a more effective mode of communication than speaking or writing? Wouldn’t some kind of system or connection emerge to replace this antiquated English, or indeed any language on earth? Would we expect to still feel jealousy, or to be subjected to the petty heuristics that plague our human decision-making processes? Would we still experience “boredom”?

Would we retain anything at all that is “human” – or would we eventually swim in an open pool of limitless experience? If an indistinguishably “real” virtual reality (perhaps more “real” than anything we now know) can be ever-tailored to our desires and dreams, would we ever return to “reality”? Would we need “relationships”? In our present human condition, given our present “nature,” relationships are inherent to meaningful life and fulfillment (I’ve compiled some of them here) – but why would this have to be the case in a future of limitless conscious potential?

Indeed, the far-reaching implications are by definition outside the grasp of the human mind… and it may be the “far out” appearance of post-human realities that keeps them from being taken seriously by society at large.

6) Will it Be Too Late? A Trumpet Call

My fear is that the writing on the wall will not be seen as writing on the wall, that human forays into “tinkering” with consciousness itself will continue to be seen as little more than “interesting,” and that we might ignore the grandest ethical precipice of all time. I hope that shock or trauma will not be required for more minds to grasp the eminent importance of these transitions toward our merger with technology.

At the time of this writing, scientists are able to control rodent and insect motion via brain-machine interface (and in the case of rodents, optogenetics). I often jest that it may not be until the emergence of remote-controlled cats that society “wakes up” to the tremendous potential of sentience-altering technology. Though I’m not one to prognosticate about technological trends, my gut tells me that feline cyborgs may be what catapults the consciousness revolution into general awareness.

I am no techno-pessimist, but I’m no techno-optimist, either. I think that we should be as aware as possible of the potential issues and possibilities of conscious enhancement, and that above all else we should pool our resources to make sure that this transition of all transitions is one that makes the universe an aggregately better place. Though we cannot tell precisely what “better” will entail, our work to discern “better” as our technologies progress will be the most important ethical exploration that our race can embark upon.

I would argue that few efforts will be more important than an effort to unite the positive intentions and technological powers of humanity in a connected, guided front towards the future of our conscious experience.

The trumpet-call for this mission has not yet been sounded, and the “writing on the wall” is yet to be seen as “writing.” I believe that it is in all of our best interests not only to unite those who ponder these issues today (from fields as varied as psychology, philosophy, economics, and beyond), but to bring these highest ethical concerns and all-important future considerations into the awareness of as many bright and well-intended minds as possible. To wait for a triggering event to occur on its own – whether “good” or “bad” – would put the all-important transition of consciousness in the hands of fate.

The United Nations was founded in 1945 in the wake of the horrors of World War II. Similarly, the founding fathers of the United States met together to discuss the formation of a new government – and secession from Britain.

In both of the above situations, there was a desire to unite in order to assess and come to an ideal conclusion for the people (in one case of the world, in the other of a nation). In both circumstances there was a “reason” strong enough to make the meeting happen. With consciousness and technology beginning to mesh already – what will be “reason” enough for our well-intended experts to pool their thoughts and explore their policies? The importance of this transition, I fear, may be grossly overlooked – relegated to private company labs or university studies until its impact catches the world off guard.

Our mastery over materials, the improvement of global living conditions, and the myriad human inventions that have the potential to make life richer – none of these approaches the potential ethical impact of tinkering with consciousness itself – with the bedrock of moral relevance itself. Our wielding of consciousness represents not just a higher impact on the ethical Richter scale, but control of all the forces that move its needle in the first place.

Like our biological ancestors, we will develop into more aware and intelligent beings. The only difference between ourselves and our predecessors is that we will – to some extent – direct this transition ourselves. Though we can’t possibly grasp all of the implications and possibilities, exploring our options and opportunities – and indeed exploring ethics itself – will be our most likely path to finding an aggregately “better” future, while being caught off guard seems a most unfortunate circumstance.

For thousands of years, “Playing God” has been nothing but a metaphor. Now, the gateway to a Rosetta Stone of consciousness may lie within our grasp. If sentience and entire worlds of awareness can be captured, crafted, and created – shouldn’t we be more than obligated to guide this process towards the best ends that we can determine?

Considering who controls these technologies and transitions beyond the present human condition is of the utmost importance. What – for example – might be the next step past “Google Glass” in the augmentation of our senses? The horrors of animal experimentation in the cosmetic and medical fields seem to pale in comparison with the potential ethical nightmares of experimenting with human consciousness itself. Would we have private company CEOs, ivory tower philosophers, government agencies, or Silicon Valley techies control this transition? I’d argue that because we don’t know where we’re headed – and because of how very complex this subject matter is – our best bet is a well-intended contribution from all of these groups.

The benefits of better vaccines seem to pale in comparison to the potential benefits of controlling or indefinitely extending our conscious experience. Who will run those most important experiments? How transparent will those companies, governments or entities be about the uses of those technologies? What could or should we be doing now to ensure that humanity’s initial steps forward in sentient potential are beneficial – and not tragically detrimental?

Any significant step forward in the technologies mentioned at the beginning of this article (optogenetics, brain-machine interface, hippocampus replacements, etc.) could imply drastic changes to our conscious condition. It is now that the writing on the wall must be read – and I argue that uniting our intentions and expertise has never been more called for.

-Daniel Faggella

Human Ideals Will Tear Us From Humanity

I. Ideals Have Taken Us Here

Could the yearning to improve the quality and efficiency of our daily human experience also bring us to abandon much of what we consider “human”?

Before making such a bold proposition, let us look at modern “first world” society in comparison to life in Europe in the year 1200. In many respects, we of the 21st century could be considered “super human,” or indeed “inhuman,” from a dark ages perspective. We fly in the air, and we communicate across the planet and through the ether. We have set foot on the moon itself. We have cured many of the world’s worst diseases, predicted earth’s weather patterns, and harnessed the sun to power many of our machines. Under our skin we have pacemakers, artificial hips, and cochlear implants – and parts of some human hearts have been replaced with plastic. In society today, these are not occasional extraneous additions to our “human” form, but supporters and aides, prolongers and enhancers of what “human” life is.

A forward, progressive yearning has taken us to the destination we call “now.” Our own dreams and ideals have pulled us beyond our past conditions to a state which thinkers and doers now deem to be better, happier, more efficient, more complete.

“I will build a motorcar for the multitude” said Henry.

Thomas exclaimed: “We will make electricity so cheap that only the rich will burn candles.”

Alexander yearned to build “the method of, and apparatus for, transmitting vocal or other sounds telegraphically.”

“If birds can glide in the air… then, why can’t I?” asked Orville.

I’m sure you easily provided the last names of these individuals, because all of our lives have been touched by these very thoughts, and our way of life is indeed molded by them. Because they dreamed beyond what was presently possible – and turned ideas into realities – the names of Ford, Edison, Bell, and Wright will not soon be forgotten.

Wonder and curiosity drive our visions for what a different, better future might be. Cars have replaced horses, audio recordings allow for an infinite number of repeat performances, vaccines prevent epidemics, cell phones connect the most distant people, and a human fetus can be examined and treated through all stages in the womb.

My American Oxford dictionary defines “ideal” as: “(n) a standard of perfection; a principle to be aimed at,” and “something existing only as an idea.” Ideals help to take our thoughts from “what is” to what “could be.” Ideals are subjective, and might seem possible (e.g., the ideal form of transportation might imply a vehicle which burns no fossil fuel) or impossible (e.g., “uploading” human consciousness into another computational substrate to indefinitely store our mind and experience). These ideas of something “better” precede all vision – and action – for the development of cultures and technologies.

The ideal visions in our past have driven us to transcend the challenges of that past – from health to communication to transportation and beyond. No boot could have been planted on the moon’s grey surface, no organ transplanted from one human to another, unless someone had thought about it first, and posed the far-out question, “what if?” Centuries of these “what ifs,” corresponding visions of possibility, and technological developments have taken us from hunters and gatherers to hackers and jet setters. Compare our lives to those of our medieval ancestors, and we are like monsters, or gods.

II. Relative Technologies and Relative Standards

Today, however, most of us don’t feel like monsters or gods. We were “born into” many technologies in use today, or our world evolved alongside their development – they are our norms, and we can’t see how outlandish our norms are when compared to those of our ancestors. “God” and “monster” only appear through a particular perspective, like a rainbow that is invisible unless the sun is behind the observer.

It is the contrast between a medieval perspective and the perspective of today that makes the latter stand out as “god-like” or “inhuman.” Gradual change that produces incremental differences does not incite the same shock. When confronted with large changes in our current technology, we resist the difference so that we are like the frog, who – when held above boiling water – will squirm to escape, resisting the drastic change in temperature. Placed in a pot at room temperature, however, the frog will not notice gradual changes of heat, and so will sit still – adjusting his body temperature to the water around him as it rises – even to the extent of being boiled alive.

With each generation, a new set-point of “normal” is established. To us, as for our ancestors, there seems to be a reasonable limit, a ceiling to technology’s development. Some thinkers – however – are capable of extending technology and culture into future applications that the previous generations were fearful of, ignorant about, or deemed blasphemous. They prove the past generation’s “ceilings” of development to be an illusion.

Stem cell research – for example – was initially seen as unacceptable by a huge swath of the American population, and today has a much greater general acceptance. Each generation alters the level of “acceptability” of a technology, a cultural trend, or a way of doing or being. In 1950, the “morning-after pill” would have been quite a controversial pharmaceutical technology – and today it is not. Today, it might be said that human cloning occupies the “no man’s land” for acceptable technologies, and some of us likely see the imaginary ceiling once again. Yet, cultural and technical forces proceed. It is in the nature of an “ideal” to exist beyond present conditions and to build off of or entirely neglect past notions.

Visiting Mars – never mind colonizing it – might have been a wholly absurd notion in the 1950s as well, and today companies are planning to send the first humans to Mars within a decade (http://www.mars-one.com/). Uploading human minds into computers might not even have been imagined by previous generations, though many experts posit that 30 to 40 years may be all it takes to house consciousness inside a machine (http://news.discovery.com/tech/when-will-humans-upload-their-brains-to-computers-130517.htm).

With our perspective grounded firmly in the present, it’s difficult for us to think of just how many of those “impossible” circumstances we as a society have caught up to – and blown past – in just the past few decades. In 1950, major innovations included Dr. Jonas Salk’s successful polio vaccine and the telephone answering machine from Bell Laboratories. From a 1950s perspective, how many of the now-mundane achievements of humanity would have been deemed absurd, literally impossible, or obviously not morally permissible? From surrogate mothers for hire, to internet pornography, to the Mars rover, a bygone era’s notions of “impossible” and “blasphemous” are more than occasionally noticeable. We’re swimming in them.

Yet, our automatic assumption is that OUR future will somehow not be as groundbreaking, and OUR standards of “blasphemous” or “impossible” will stick firm. We might think to ourselves: “Maybe prosthetic limbs will become more and more the norm, but surely there will never be entirely prosthetic parts of the human brain.” “Maybe we will improve the artificial intelligence in our cell phones and GPS systems, but surely nobody would allow an artificial intelligence to gain sentience and take any sort of leadership or governmental role above humans… machines will always be only aides to man.” “Maybe deep brain stimulation will be used to help the truly clinically depressed, but surely an electrical alteration of our emotional states would never become popular as an enhancement for ‘normal’ people who just want to feel happier.”

How much easier it is to chuckle at the limited notions of past generations than to begin to imagine just how many of our own beliefs will be laughable in the coming decades. In the present, all of our notions seem safe and rational, but we have no better idea than our ancestors as to which ideas will remain useful and which will go the way of sun-god worship and phrenology.

III. We Think We’re so Wise

Let’s say I told you about a science fiction film in which huge portions of society decided to permanently “plug in” their minds to a virtual reality device rather than continuing to exist as humans in a physical reality. You might deem the idea novel or ridiculous – as it implies cultural and technological shifts that we haven’t come close to seeing yet. You could write that possibility off rather quickly, it would seem, as there’s no reasonable way that such a shift would occur in your lifetime, or maybe even in that of your grandchildren.

Hindsight shows that our grandparents’ assumptions were clearly misguided, naive, or uninformed. Yet we assume that, given our level of technological development, we surely have better perspectives for making informed judgments about what technologies will or will not come into existence. How often have we uttered such words of certainty about technologies that we now take entirely for granted? How many respected thinkers and perfectly reasonable people have ended up with egg on their faces for boldly assuming “certainty” in setting limits to human innovation?

Guglielmo Marconi was thought to be insane when he suggested to the Italian Ministry of Post and Telegraphs that wireless communication might be possible.

The four-minute mile was deemed impossible even by some scientists of the 1940s and 1950s, until Roger Bannister broke the barrier in 1954.

In 1936, the New York Times read: “A rocket will never be able to leave earth’s atmosphere.”

In 1895, Lord Kelvin, president of the Royal Society, was quoted as saying, “Heavier-than-air flying machines are impossible.”

The associates and engineers of Ford Motor Company told Henry Ford that the V-8 engine would be literally impossible to cast in one engine block.

Sir John Eric Erichsen, appointed Surgeon-Extraordinary to Britain’s Queen Victoria, bluntly stated in 1873 that “The abdomen, the chest, and the brain will forever be shut from the intrusion of the wise and humane surgeon.”

So, what are we certain about? Are our hunches so much better than those of our grandparents? Why do we habitually block off these mental pathways for exploring radically different futures? Unfortunately, we are victims of the same mental heuristic, the same doubting tendency that may leave us unprepared to handle the practical and ethical concerns posed by the next wave of revolutionary technologies (which we cannot possibly predict).

IV. Culture Will Protect Us – or Will it?

The speed of technological advancement, though, is not the only factor that will determine these potentially radical futures. Culture, policy, and politics also play a part. Some might argue that even though the Wright brothers did fly, it would not have amounted to any kind of change in our day-to-day lives if the government hadn’t allowed for commercial or personal flight, or if society hadn’t at least in some way seen flight as useful and acceptable. Politics and cultures may be the forces that impede the “slippery slope” technologies that seem most likely to disrupt our notions of present-day human life.

It may very well be that society will simply not permit replacement brain parts in human beings – even to cure disease – because of the perceived risks of this kind of technology. Even with all the computer intelligence in the world of 2050, governments may completely prevent an artificial intelligence from having a say in matters of politics or leadership. It may very well be that society will pass laws against the use of technologies that allow us to modulate our own emotions with the push of buttons, or against a kind of immersive virtual reality that rivals actual experience.

Some technology, it seems, will inevitably be halted or slowed by society’s standards and norms, and certainly by policy and law (as we’ve seen with cloning and stem cell research). However, we should not allow ourselves to make the same insidious mistake of “certainty” that thinkers and societies have made previously with technological progress – and end up with the same kind of egg on our faces.

We can admit that it is nearly inevitable that mind-expanding cerebral implants or brain-machine interfaces – for instance – will be forced to comply with some kind of regulation from government agencies, and it seems that there are human safety concerns that would warrant this scrutiny. Cultural forces will dictate that some technologies or applications – such as chemical weapons or human cloning – will inevitably be deemed by some as “unacceptable.”

Some technologies and applications, however, will be “unacceptable” in the same way that interracial dating, stem cell research, deep brain stimulation, and sex-change operations were once seen as “unacceptable.” Imagine just how unacceptable a sex-change operation would have been considered in Christian medieval Europe, never mind in the early 1950s, when America saw its first sex-change operation with Christine Jorgensen. Though it is difficult to imagine from our present vantage point, ever so many of our notions of “unacceptable” will be trampled over by time, necessity, and the ideals of new generations.

As many examples of previously “impossible” cultural shifts exist around us as do “impossible” technical shifts. Prosthetic limbs and interracial couples are in all ways “acceptable” in much of the modern world. Similarly, even sex-change operations are today relatively commonplace, and carry a minute fraction of the moral stigma that once accompanied the procedure. A study by a University of Michigan researcher estimates that at least 1 in 2,500 men in the United States has undergone surgery to become a woman (http://ai.eecs.umich.edu/people/conway/TS/TSprevalence.html).

In addition, censure or aversion to a technology in one nation does not mean that the same development will fall flat altogether. Not only do “times change” (again, think sex-change operations in the USA), but what is not permitted within our culture may easily and swiftly be adopted elsewhere – and so flourish there instead of here. How many seemingly offensive and unbelievable acts are carried out as common cultural practices all over the world? From female genital mutilation, to stoning, to coming-of-age ceremonies, the smorgasbord of the world’s traditions shows a massive and wide range of “acceptabilities” outside of those that our lives are accustomed to. This doesn’t just translate to cultural oddities like stoning – we also see Scotland permitting cloning, and stem-cell research flourishing in Korea.

If it were possible to “upload” new information – or even new senses – into our brains – wouldn’t it be reasonable that some nations would adopt this technology rather quickly? If augmenting human memory with implants becomes illegal in the United States, it may be legal in Denmark, or in China, or in Japan. Would we permit ourselves to be left behind? Would an “arms race” of human enhancement begin?

V. Our Responsibility is Vigilance

Not only does it seem as though our minds aren’t inclined to foresee groundbreaking change, but our “certainties” also serve to help us sleep at night. The universe seems a lot less daunting when we believe that these most disturbing alterations to our present norms are just science fiction. Whether it is a natural disposition to think the future will be like the present (as Peter Diamandis and others happen to believe: http://bigthink.com/in-their-own-words/the-difference-between-linear-and-exponential-thinking), or a head-in-the-sand perspective to preserve our sense of comfort, it would seem more responsible to instead seek a more truthful perspective from which to handle our future head-on. It is a future that is today being guided and facilitated by our own impetus toward the ideal, by the same visionary force and primal desires that brought our human race to where it stands today.

It was human questing for the ideal that took man from horse and carriage to motorcars; that brought us cellular phones, email, heart transplants, freedom of religion, freedom in choosing a mate, and so many other societal and technological shifts. Where will the ideal take us next? When technologies become available to literally alter human sentience, or create beyond-human general intelligence from silicon, will we be prepared for the consequences?

Our desire for the ideal is bringing us now towards arguably better – and almost certainly more efficient – modes of… everything. Faster travel, instant communication, fixing memory problems, treating depression, curing the world’s deadly diseases and even staving off death altogether. As a result, we see the “Hyperloop” (http://mashable.com/2013/08/16/elon-musk-hyperloop-mashtalk/), Google Glass, replacement brain parts, medication to safely influence emotion, and research to attain indefinite life extension (http://www.sens.org/).

Each of these ideas is the airplane, motorcar, or cotton gin of its own era. Which will become realities, how they will manifest in our societies, and when… is all for time to tell – and for our innovators and policy makers to determine. The difference between motorcars and replacement brain parts, or a moon landing and a human-level artificial intelligence, may simply be the era in which we live and our level of technological development. It may well be that “automobiles” were more monstrous to generations past than brain prosthetics will be for us in five years. It may well be that a moon landing was a more god-like feat than our eventual creation of post-human intelligence. There is no certainty whatsoever about which technologies will develop first, or how – but there should be certainty about our responsibility to steward them into reality in a positive, careful way.

VI. Pliable “Human”

“Monstrous” and “inhuman” transitions have not prevented the ideal from taking tangible form in the past, and we should assume no safety from similar developments now. When Benjamin Walt (http://articles.timesofindia.indiatimes.com/2013-10-02/mumbai/42615303_1_stimulation-dbs-depression) went under the knife to treat severe depression by having electrodes implanted inside his skull – the ideal was to feel better. The fact that deep brain stimulation is an “experimental” procedure apparently paled in comparison to the potential benefits of a better conscious experience.

When Cathy Hutchinson (http://cnettv.cnet.com/60-minutes-braingate-movement-controlled-mind/9742-1_53-50004319.html) had a stroke and was left mentally sharp but trapped in a body incapable of movement or speech, she aimed to do whatever it took to interact with her world and loved ones again. She opted to have a device implanted in her motor cortex to allow her to drive a wheelchair, control a computer mouse on a screen, and even bring a cup to her mouth to drink with a robotic arm… all using the power of her thoughts alone.

Benjamin and Cathy are human beings with machines interacting with their brains, enhancing and/or recovering their mental and physical capacities. These treatments began as “what ifs,” too – dreams that some would have supposed might never happen. Who could say that we aren’t already inhuman, monstrous, or god-like compared to our ancestors?

Our sense of the ideal drives action and creates the future from our imaginations. Different versions of the “ideal” may or may not place a highest value on a conception of “human.” If Benjamin believed that a happier day-to-day life was more important than preserving a body unencumbered by machine parts, then making the choice for brain stimulation was an obvious one. I’m sure Cathy had some notion of “human” that clashed with the image of a hole drilled in her skull and a computer chip inserted into her brain tissue with dozens of tiny metal spikes. However, escaping her incapable body to interact with her environment was a priority that superseded the desire to stay “human” or “regular.” It’s easy to see how either Benjamin or Cathy could argue that the pursuit of a better and more capable life was the most “human” thing that they could do… and who could argue with them?

This brings us to a potentially troubling perspective on our condition – a perspective which also happens to be a requirement for our honest, vigilant, and open-minded transition to the future: “human,” as a notion, is not concrete, and is as pliable an idea as any other. It is an ephemeral idea, an un-graspable concept that has already altered over centuries, and may be completely re-invented with the advent of tomorrow’s technology. Our perceptions of communication, of travel, of speed, of “normal” have all been drastically altered by the passage of time and the procession of technological and cultural changes. There is no safety – or indeed sanctity – in a present notion of “human,” and so there is no telling what the future might permit. The only “certainty” to find may be in the fact that the “human” idea is bound to the same fate of alteration as all others, and that we will all be a part of that transition. As it is, few people have accused Benjamin or Cathy of being “inhuman”… and what will happen when their procedures are commonplace? When will this same acceptance transfer to “humans” with wires or chips affecting their brain, behavior, or personality? The lines continue to show themselves to be grey, the slopes slippery, the notion of “human,” pliable.

Even if “human” were to imply the un-enhanced, biological human body, it seems as though we have more than ample evidence to suggest that – even if valued – this concept does not necessarily rank at the top of our notions of the ideal, and we may move beyond it altogether. When I say that “human ideals will tear us from humanity,” I am using “humanity” to represent the notion of “human” that we hold today. Our momentum and technologies are taking us not just to fancier, smaller, more capable gadgets… we are moving to an entirely new human condition within our lifetime. It is a circumstance where our minds and bodies will themselves be altered – where we break from humanity not just with regard to our “tools,” but with regard to ourselves.

VII. Stewards of Consciousness and Intelligence

The urgency of our present condition comes not only from the gravity of the situation on the whole, but from the speed of its approach. Our future will not just come “faster” than that of the middle ages… it will hit harder. We need to take steps now to understand the ramifications and implications of the technologies that will shape humanity, and to guide this transition with caution and collaboration. More likely than a malicious use of tomorrow’s technology is the risk of our race being ill-prepared for just how drastic the shift will be. We may be unready to steer with the aid of our technological and ethical compasses as we venture to the important “ports” of the future.

Our unfortunate tendency is to ignore or brush off notions which prove too different from our present condition – but a blind eye turned to the real possibilities of these major trends and trajectories is a blind eye turned to the human future. We have inherited our grandparents’ tendency to do this. Kodak was put out of business by underestimating digital photography. Thalidomide was distributed to pregnant women without anyone anticipating its horrific impact on the unborn fetus. What needs to be considered when 3-D printers can produce everything from guns to human organs? What would we need to plan for in considering memory or intelligence augmentations to our very brains?

Our own technological explosion need not leave us boiled alive like the frog – unless we, too, lack an appropriate understanding of (and therefore an appropriate response to) our conditions, and ignore the constant, incremental changes that shift our predicament. The only thing worse than a frog boiled alive out of poor perception is an ostrich with its head in the sand, willfully ignoring real concerns – or acting on cowardice. Ignoring what seems too “far out” now would mean being horribly ill-prepared for a transition that requires all the attention and preparation we can muster as a united race.

Our future is the future of our ideals.

It is these very notions of the ideal – what to improve and create to achieve what is best – that will pull us farther and farther not only from our human condition (as did automobiles and electricity) but from what is “human” in the first place. I am not foretelling the “crash” of our proverbial ship in the future, but merely calling us to unite in this cause, gripping the wheel with both hands and making sure we’re looking forward. This is not a task merely for scientists, philosophers, businessmen, or governments, but for all of society. Though it doesn’t require that we hold the same beliefs or use the same methods, it does require that we share the same intention and purpose. The time to unite our efforts is now – when we hold the responsibility not just as providers for our next generation, but as stewards of consciousness and intelligence itself. This collaboration, then, is geared not just towards discovering technologies, but towards discovering the best ways to introduce and implement them as we swing forward into a transformative new era.

Will we have no say in the direction of our race’s future? Will we ignore any deviation from our own perceived norms and miss the chance to guide the course of the greatest changes that have ever tested humankind? I hope not.

How can we prepare now for further shifts away from what we “know” to be “human” – and how can we collaborate to ensure the safest and best transition forward? Only with open eyes, and only with resources dedicated to understanding the massive risks and untold opportunities of altering our condition – a process that requires our united vigilance and best intentions now.

-Daniel Faggella

Image credit: fineartamerica.com

Non-Android Humans May Still Be Enhanced – an Interview with Dr. Thomas Ray

Dr. Thomas Ray is a Harvard-educated biologist and the original researcher behind the Tierra artificial life project. Tierra received major media coverage around the world as one of the most promising forays into generating “evolution” in a digital system. Today, Dr. Ray’s research focuses on the study of the human mind – and our conversation centered on his thoughts about human enhancement and machine consciousness.

Machines May Never Live

For Dr. Ray, the affective portion of the brain and the cognitive portion of the brain (he uses the two separately in order to draw a distinction) work in unison to create the amalgam that we call consciousness. He believes that animals with only a primitive brain (lacking the capacity for language, logic, and reason) are capable of a certain, limited extent of consciousness, while humans are capable of a higher and richer variety.

[Continue Reading]

Building Bridges: Humanitarian Efforts to Artificial Intelligence – Dr. Soenke Ziesche

In the push towards bridging humanitarian efforts with advances in computing and artificial intelligence, there seem to be few thinkers – and even fewer doers.  Dr. Soenke Ziesche recognizes the pressing need to better integrate these worlds and sees potential implications both for those in the humanitarian fields and for those in the AI sectors.

A member of the United Nations humanitarian and recovery sector since 2000, Dr. Ziesche speaks from a grounded perspective.  When asked to speak about the overlap between these two fields, he opts to take another perspective: that there exists quite a gap between humanitarian issues and the field of AI.  Applications of AI have mostly been limited to Western countries.  While there may be isolated attempts to apply technologies for humanitarian reasons, a tangible bridge that unites the two has yet to be built.  Perhaps the humanitarian field has been too conservative; or perhaps the AI field has had too narrow a scope in its outreach efforts.  Likely it is some of both, along with the fact that not many people are actively involved in both fields; often where there is a gap, there is a lack of communication between two groups, and this seems to be the case between humanitarian activists and AI scientists.

Granted, there are many components of humanitarianism, as with any field.  The UN’s homepage for Humanitarian Affairs includes a list of thematic issues – everything from demining to global food security to the protection of civilians in armed conflict.  While any of these areas could potentially be helped by advanced computing and developing artificial intelligence technologies, Early Warning and Disaster Risk Reduction is an area that often demands greater attention.  Unfortunately, even though some gains have been made in this area, there remains a need to leverage technologies, particularly for communication and coordination between crisis managers, aid workers on the ground, and victims of a disaster.

Mobile phones are invaluable tools in relaying information from disaster areas.  Click the link on the UN’s page for the Global Disaster Alert and Coordination System (GDACS) and you will find a homepage showing a real-time list of current emergencies and alerts from around the world.  A page for Mobile Technology for field operations states that smartphones are becoming widely available, even in remote regions, and can provide important information from disaster areas; however, the implementation and management of such technology presents manifold obstacles.  Beginning in 2011, a consortium of organizations began working towards solving some of these issues, including the development of user-friendly tools for different populations; a common application programming interface (API); a system for culling and processing an array of information; and a plan for how and when to promote tools in the GDACS community.

Dr. Patrick Meier is a lead thinker and navigator in the area of applying technologies to early warning, crisis and humanitarian response, and resilience.  Presently Director of Social Innovation at the Qatar Computing Research Institute (QCRI), Dr. Meier has written extensively about, and is currently working towards, a research-based framework for an information system for crisis response and management that he dubs Next Generation Humanitarian Technology & Innovations.  There do exist humanitarian donors and organizations with investments in technology – DfID, ALNAP, and OCHA are a few that he mentions – but he also pinpoints the crucial missing link: many humanitarian agencies’ lack of familiarity with the field of AI.  Meier also clearly articulates that mobile phone use has skyrocketed, and social media sites such as Twitter are heavily used amidst conflicts and disasters.  But as a quote attributed to DfID puts it, “…Currently the people directly affected by crises do not routinely have a voice, which makes it difficult for their needs to be effectively addressed.”  What’s more, he also affirms that in the face of disaster, massive amounts of data pour in quickly, and analysis of this data – like food – has a “sell-by” date.  Having systems by which to organize and make sense of massive amounts of data is critical.

Dr. Meier believes we have the tools to begin effectively addressing the “Big Crisis Data Challenge”, and that they are not unique or new ones; we need to make better use of Human Computing (which he sub-defines as crowd-sourcing and micro-tasking) and AI (sub-defined as natural language processing and machine learning) to mitigate these challenges.  His far-reaching philosophy is that relevant technology applications within both of these methodologies must be united by a framework that promotes research and development (R&D) and is applied to humanitarian response and crisis prevention.
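Neither the interview nor Dr. Meier’s framework specifies an implementation, but the micro-tasking half of “Human Computing” can be sketched in a few lines: several volunteers label each incoming message, and a majority vote merges their judgments into one category per message.  The function and the sample messages below are hypothetical illustrations, not part of any deployed system.

```python
from collections import Counter

def aggregate_labels(crowd_labels):
    """Majority-vote aggregation of crowd-sourced labels for one message.

    crowd_labels: list of category strings supplied by different volunteers.
    Returns the winning category and its share of the votes.
    """
    counts = Counter(crowd_labels)
    category, votes = counts.most_common(1)[0]
    return category, votes / len(crowd_labels)

# Each message is labeled by several volunteers (micro-tasking); the votes
# are merged so downstream responders see one label per message.
messages = {
    "bridge collapsed near the market": ["infrastructure", "infrastructure", "other"],
    "need drinking water in sector 4": ["water", "water", "water"],
}
for text, labels in messages.items():
    category, confidence = aggregate_labels(labels)
    print(text, "->", category, round(confidence, 2))
```

In a real pipeline, low-confidence messages (where volunteers disagree) would be routed to additional labelers or to a machine-learning classifier rather than forwarded directly.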

It seems that exacerbating the communication gap between fields is the fact that disasters often serve as the triggers for action; discussion and speculation about which disasters could or might occur often takes on a theoretical, rather than visceral, tone.  In the immediate wake of publicized disasters or conflicts, people’s interest and emotions are piqued, and actions toward preventing the same types of disasters or conflicts are frequently undertaken with renewed vigor by a greater segment of governments, other organizations, and the scientific community.  As voiced in this article published by Ovum, a data analysis company, connection technologies that aid in responding to humanitarian disasters have become an established field, but using technologies to prevent such disasters is much more challenging.

Another important point is that AI could potentially be the cause of catastrophes.  A well-developed humanitarian structure could be used to address these potential risks, but populations will not be prepared if the AI field fails to give as much attention to the inherent risks of AI as it does to its potential.  How do we plan for catastrophes at levels we haven’t seen before?  Dr. Meier and other experts in the humanitarian technology field are homing in on how big data can be leveraged in both structural and operational crisis prevention.  The Ovum article gives a succinct overview of how such data can be used in both types of prevention, taking into account data from three distinct phases in the “life” of a crisis – pre-event, during, and post-event.  As Dr. Ziesche mentions, the field of AI has as much to learn from the humanitarian sector as the other way around.  Understanding these three phases of data in specific geographic, cultural, and demographic contexts, and how they evolve, will provide an important window through which to consider and prepare for the prevention of, and intervention in, natural, social, and AI-related disasters.

Will We Out-Grow Our Inherited Brains? – A View of Societal Lenses, with Dr. Pat Hopkins

In the future, will technology deconstruct or reconstruct gender identities as they are culturally represented and understood by society today?  In a recent interview, Dr. Patrick Hopkins, a professor of philosophy at Millsaps College, provides some interesting insights into the crossroads of technological influence on gender roles, societal values, and the implications for humanity.

The intersection of technology and gender is not a new domain, but it is certainly a constantly evolving one, and there are various areas of interest.  Technology has the potential to revolutionize or reinforce gender dynamics in a culture.  Technologies, at their essence, are gender-neutral, free of biological constraints.  But most humans walking around with cell phones in their hands still, by and large, very much identify with a gender (though there are exceptions), and may be making use of certain technologies in ways influenced by their gender, even if on a subconscious level.  This idea triggers a loosely-related dual question – might these differences one day become more of a blurred line, and will further advances in AI or pharmacological enhancements help us along the way to diminishing those identity distinctions?

Over time, certain technologies come to be associated with a particular image of gender encapsulated in an epoch – the washing machine or any household appliance might bring to mind newspaper ads of the 1940s and ’50s, with the operators and the audience being women – the home-keepers.  This stereotype has changed with the tides over the past several decades, but the image from the past is still a type of artifact that represents where human culture has been, and gives us a reference point for how it has evolved alongside the presence of other technologies.  There exists an array of interesting relationships intersecting gender and technology, as outlined in this article by social anthropologist Francesca Bray.  In fact, this is now such an established field that there are multiple publications addressing the topic, and Virginia Tech holds an annual Gender, Bodies and Technology (GBT) conference addressing related topics.

In the course of the discussion, Dr. Hopkins presents two core questions – how distinct are (or could be) the genders, and what does technology allow us to do in terms of gender?  There is the possibility of minimizing differences between the genders, but what are the implications?  The idea of a “post-gender” world, predicted in the 1990s, does not seem to have taken effect; in fact, in many cases society seems to have taken a path of gender reinforcement, particularly in terms of physical features – plastic surgery, for example.  In the general public, there still seems to be a strong traditional interest in maintaining a gendered identity.  This is not all that surprising, remarks Dr. Hopkins, if you consider that we still have the brains of our ancestors, out of which the ideas of gender were constructed.  Though technology will allow us to make radical changes, it seems for the time being there are some constraints on potential transgender possibilities due to our inherited brains.

What technology can do, Dr. Hopkins asserts, is allow us to do something new that taps into old interests – for example, the creation of artificial wombs.  This technology in and of itself is ripe for debate; there are a number of post-birth ethical implications.  This example also weaves biologically-driven instincts into what it means to be female, as do a thousand other examples – clearly, sex and gender are inextricably linked.  For example, another pharmacological technology in the works is a consumable form of the chemical oxytocin, which could potentially be made available to couples who feel they have lost their initial sexual attraction.  Again, biological drives are inherent influences that must be taken into account.  Society could just as easily envision a drug that eradicates sexual desire for those who would like to tune their energies to other priorities.  While possible, this would undoubtedly be of present interest to only a very small segment of the population.

But we still don’t know how these chemical enhancements affect the human brain.  Drugs must still be taken and the experience interpreted by participants and observers in order to arrive at more objective conclusions as to how increased doses of a specific drug affect the brain, its interconnected components, and human behavior.  The psychology of emotions and neurological processes is still a relatively new area.  Findings across many related studies show time and again that there are multiple interconnected systems in the brain, which make what would seem to be a “simple” human emotion a much more intricate affair.  For example, humans who suffer damage to the amygdala may experience an “absence” of fear, yet they may still experience symptoms of fear – the anticipation of an uncomfortable situation and an intuitive knowledge of what is to come – though they may not react in a way that prevents the situation from occurring.  As Dr. Hopkins remarks, tinkering with a particular emotion could very well produce a human being that displays “socially-bizarre” behaviors that don’t fit with our current schema of expected human behavior.

Nanotech, brain-machine interface, and other mechanical enhancement processes may eventually trump pharmacological options in terms of providing a greater ability and wider range of cognitive hardware that allows us to transcend our present mental capacities and paradigm of reality.  Looking back at predictions from the past, including pop culture like The Jetsons, it’s interesting to note that the creators of these artifacts, in spite of their innovative visions of flying cars and other increasingly-realistic technologies, seemed unable to anticipate going beyond enculturated gender stereotypes.  “We should at least”, remarks Dr. Hopkins, “…be open to these two conflicting forces in human nature, which is one: still keeping our social primate brains, and two: (that) those social primate brains might react to new environments in ways that we really have a hard time predicting now, because we just don’t know what our desire set…will be triggered by in that future.”

Dr. Hopkins notes that transhumanism is not really a “transhumanist” philosophy, but more like a “superhumanist” philosophy, precisely because we still conceive of ideas from an ancestral brain and haven’t had the opportunity to transcend or augment the brains that we inherited.  As we look to the future, the majority of human beings may not have an interest in certain potential technological enhancements, including ones that relate to surpassing gender, simply because we can’t imagine how these new conceptions fit into or are useful to our human experience.  If machines and nanotechnology become the preferred modus operandi for human enhancement over the next decade, perhaps another 50 years will produce a breed of humanity that develops an entirely new set of cognitively-driven motivations and perceptions of human identity, with or without gender roles.

Re-engineering the Mind-Body Connection – with Kyle Munkittrick

Like so many scientists and science-loving scholars, Kyle Munkittrick had an interest in science and science fiction at a young age; however, he didn’t actually consider pursuing “science as a career” until he entered NYU and found a program that allowed students to construct their own course of study.  Munkittrick took a class on the transhumanist movement, a crystallizing experience that gradually shifted his focus to the field of bioethics.  Now a bioethicist and Affiliate Scholar at the Institute for Ethics and Emerging Technologies (IEET), Munkittrick writes for various publications in the field of human enhancement and bioethics, and maintains his own blog at popbioethics.com.

In making a global transition to transhumanism, Munkittrick sees the movement in emerging technologies developed for those with disabilities as one of the greatest catalysts towards an authentic realization of a “transhumanist” reality.  “A lot of the technology that’s being built right now…a lot more attention is being put on how it can help those who aren’t able, the way you and I are.”  Individuals with physical disabilities can increasingly leverage technologies that help them better operate in a world that has not always been so accommodating.  And those who do not have disabilities are also leveraging some of these technologies on different scales; for example, the voice-command assistant Siri is now a staple on iPhones, with similar assistants on Android devices.

This same idea has applied to the realm of education since the passage of the Individuals with Disabilities Education Act (IDEA), which ensures interventions and accommodations for students with special education needs.  Educators, especially those who work with students with specific learning and other types of disabilities, use a “universal design for learning” approach, which incorporates approaches and technologies that can help all students learn to their greatest potential, with the assistance of necessary accommodations.  Speech-to-text technologies, such as Dragon NaturallySpeaking, help students with disabilities, but can also be used with benefit by students without disabilities.  The same type of universal design approach might be increasingly incorporated into emerging enhancement technologies as the industry progresses.

Neurotechnologies offer the potential for paradigm-shifting realities in the mind-body realm.  Brain-computer interface (BCI) systems offer the capability to repair and enhance human physical and mental functions.  There are two types of BCIs – invasive systems, in which electrode arrays implanted in the brain communicate with neural signals; and non-invasive systems, which intercept signals outside the head with scalp electroencephalography.  Munkittrick describes how this technology is currently being engineered with exoskeletons.  Ekso Bionics, a California-based company and pioneer in the field of robotic exoskeletons, has had success in helping paraplegics to walk again.  In 2012, the company shipped Ekso™, the first commercialized robotic exoskeleton for use in rehabilitative and medical facilities.  A related and even more ambitious goal has been set by Brazilian neuroscientist Miguel Nicolelis, who in 2010 pledged to create a brain-controlled robotic body suit that would allow a paralyzed person to step onto the field during the opening ceremony of the 2014 World Cup and, aided by an exoskeleton operated by electrodes implanted in the brain, kick a soccer ball.

The interface being developed by Dr. Nicolelis uses implanted electrodes to interface with neuronal signals.  He and his research team at Duke University are currently using monkeys as test subjects, and as of February 2013 had already raised the number of captured neuronal electric impulses from 100 to 500; using four of these electrode arrays has allowed the team to record from almost 2,000 brain neurons in an individual monkey.  Nicolelis envisions this number rising to 30,000 neurons in a prospective human patient.  Another developmental leap by Nicolelis’ team has been the addition of tactile feedback within this BCI system.  In 2011, his team demonstrated a neural prosthesis that allowed monkeys to experience an artificial sense of touch.  Nicolelis and many other scientists emphasize sensory feedback, which entails a “closed loop” brain-machine-brain interface system, as a key development in the BCI technology’s success.
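The articles cited here don’t publish Nicolelis’ actual decoding algorithms, but the core idea of turning recorded firing rates into movement commands can be illustrated with classic population-vector decoding, in which each neuron “votes” for its preferred direction in proportion to how active it is.  The neuron count, tuning angles, and firing rates below are invented for illustration only.

```python
import math

def population_vector(preferred_dirs, rates, baselines):
    """Decode a 2-D movement direction from neuronal firing rates.

    Each neuron votes for its preferred direction, weighted by how far its
    firing rate rises above baseline (population-vector decoding).
    preferred_dirs: preferred direction per neuron, in radians.
    rates, baselines: firing rates (spikes/s) per neuron.
    """
    x = sum((r - b) * math.cos(d) for d, r, b in zip(preferred_dirs, rates, baselines))
    y = sum((r - b) * math.sin(d) for d, r, b in zip(preferred_dirs, rates, baselines))
    return math.atan2(y, x)  # decoded movement angle in radians

# Four idealized motor-cortex neurons tuned to 0, 90, 180 and 270 degrees.
dirs = [0.0, math.pi / 2, math.pi, 3 * math.pi / 2]
baselines = [10.0, 10.0, 10.0, 10.0]
rates = [30.0, 25.0, 5.0, 10.0]  # rightward-tuned neuron is most active
angle = population_vector(dirs, rates, baselines)
print(round(math.degrees(angle), 1))
```

A real system records hundreds of neurons and maps their rates onto full arm kinematics, but the weighted-vote principle is the same.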

While some scientists in the field view Nicolelis’ attempts as an overly ambitious promise that throws caution to the wind, brain-machine interface is nonetheless a rapidly-evolving field.  The Brain-Machine Interface Systems Laboratory at the University of California, Berkeley, which was the granting partner behind the company Ekso Bionics, is a leader in technology that transforms “thought into action” and “sensation into perception”.  Another recent and frequently publicized case is associated with researchers at the University of Pittsburgh.  In 2012, a research team implanted 96 electrodes into the motor cortex of a tetraplegic woman.  The electrodes, able to detect the firing of neurons in the motor cortex and transmit those signals to an external processor, allowed the woman to control a robotic arm in three dimensions of translation, three dimensions of orientation, and one dimension of grasping.  As reported by the journal Nature in April 2012 and shared in a Brown University press release, similar feats were accomplished with two tetraplegic patients through the BrainGate2 collaboration of researchers at the Department of Veterans Affairs, Brown University, Massachusetts General Hospital, Harvard Medical School, and the German Aerospace Center (DLR).

There is also great interest by scientists and investors in moving brain-computer-interface products into the mainstream.  In a New York Times blog post, Nick Bilton mentions a few companies already marketing related technologies.  NeuroSky, based in San Jose, California, sells a Bluetooth-enabled headset that monitors brain waves and allows people to play concentration-based games on computers and smartphones.  Emotiv, another neuro-engineering company, sells headsets that operate based on user-trained mental commands to control customized computer applications and games.  At present, these technologies use scalp electroencephalography, which is not nearly as effective at communicating between the mind and external devices as are the more invasive implanted electrodes.
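The vendors don’t disclose how their headsets score “concentration,” but a common textbook approach is to compare signal power in different EEG frequency bands.  The sketch below assumes a synthetic one-channel signal and a plain discrete Fourier transform – it is a hypothetical illustration, not any vendor’s algorithm.

```python
import math

def band_power(signal, fs, lo, hi):
    """Crude spectral power of `signal` between lo and hi Hz via a plain DFT."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if lo <= freq <= hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += re * re + im * im
    return power

fs = 128  # samples per second
ts = [i / fs for i in range(fs)]  # one second of data
# Synthetic "EEG": strong 10 Hz alpha component plus weaker 20 Hz beta.
signal = [1.0 * math.sin(2 * math.pi * 10 * t) + 0.3 * math.sin(2 * math.pi * 20 * t) for t in ts]
alpha = band_power(signal, fs, 8, 12)    # alpha band, associated with relaxation
beta = band_power(signal, fs, 13, 30)    # beta band, associated with focus
attention = beta / (alpha + beta)        # one crude "focus" index in [0, 1]
print(attention < 0.5)                   # alpha dominates this relaxed-state signal
```

A consumer headset would additionally filter out muscle and eye-blink artifacts and smooth the index over time before feeding it to a game.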

As with any technology, there remain ethical concerns, and the potential benefits must continue to be weighed against risks.  In an article, Dr. Jose L. Contreras-Vidal, director of the Laboratory for Noninvasive Brain-Machine Interface Systems at the University of Houston, Texas, notes that questions of personality and identity may result from alterations in behavior triggered by BCI.  These are important questions not only as they relate to preserving personal identity and agency, but also as they apply to future public policy and legal issues.  While invasive BCI technologies are primarily being used to aid those with disabilities, the potential for these technologies to be developed for the general public requires a much broader and more far-reaching examination of the implications of enhanced cognition, perception, and behavior across societal and cultural boundaries.

Where Human Ethics and Legislation Meet: Less or More Progress? – With Russell Blackford

In the spirit of valuing freedom of thought in the face of scientific advancement and the expansion of human potential, how much room is there for religious debate?  What are the relevant and logical ethical arguments to consider in the face of scientific progress?  These interrelated questions underlie the continuing tension between religion, human morality, and scientific progress, a weighty point of conflict that can be traced back to early human civilizations.  Human ethics walks the line between science and religion, fused with an oscillating overlap of political involvement and influence that continues to change human perspectives on these issues over time.

In the last 20 years, this intersection of conflicting values and views has become archetypal in response to the issue of human cloning.  Russell Blackford, an Australian author and philosopher, has always been interested in scientific advancements but became particularly invested in the social and political elements of cloning following the breakthrough of Dolly the sheep in 1997.  The two types of cloning historically debated are reproductive and therapeutic cloning (i.e., cloning for research purposes).  While there is a general consensus amongst most sectors of society that reproductive cloning should be banned, Blackford draws attention to enacted laws and proposals that he believes unnecessarily target therapeutic cloning.  The initial process of therapeutic cloning is identical to that of reproductive cloning, but development of the organism is halted at an early embryonic stage (the blastocyst).  The resulting “stem cells” are capable of generating specialized cells, such as liver or brain cells, for use in scientific research.

The safety issues regarding reproductive cloning are widely known and protested.  Animal clones have suffered from genetic and other defects, and the failure rate for reproductive cloning is high.  In a 2002 National Academies report on cloning, a majority of scientists and policy-makers spoke out against human cloning due to safety and ethical concerns.  Where the ethical line should be drawn is widely debated in the U.S. and internationally.  Criminalization of nuclear transplantation (also known as somatic cell nuclear transfer, or SCNT) for both reproductive and research purposes is supported by some, including former President Bush.  Many others believe in criminalizing only reproductive cloning, as evidenced by individual states in the U.S.  The varying perspectives are clearly seen amongst the different bans enacted across countries, as illustrated in this chart summarizing world human cloning policies.

In 2002, the American Association for the Advancement of Science (AAAS) issued its statement on human cloning, in which it supported stem cell research, including the use of nuclear transplantation techniques (research or therapeutic cloning), because of the great potential health benefits.  But AAAS noted that due to “religious, ethical, and social concerns”, such research should “only proceed under close scrutiny by the federal government over both public and private sectors”.  In contrast, the United Nations in 2005 adopted the contested, nonbinding “Declaration on Human Cloning”, which expressed the need to prohibit all forms of cloning “in as much as they are incompatible with human dignity and the protection of human life.”  Many countries expressed disappointment that the declaration did not distinguish between reproductive and therapeutic cloning.

In a free and democratic society, there exists a spectrum – on one end, the belief that placing restrictions on any medical research is counterproductive and unacceptable in a free society; on the other, the argument that a democratic people have the right to work together to adopt policies, including bans, if society believes they contribute to a “better world.”  Cloning laws in the U.S. vary across the 15 states that have enacted them.  Federal laws have so far applied only to studies using federal funding; there is no federal law prohibiting reproductive or therapeutic cloning using private money.  The FDA began regulating reproductive cloning in 1993, and researchers conducting studies involving biological products are required to submit applications for review.

Blackford, a professed atheist and libertarian, has been struck by much of the public’s response to cloning humans, viewing many of the oppositional fears as irrational and some resulting laws as overly draconian.  Consequences for breaking these laws in some states and countries include prison terms, which Blackford believes criminalizes the very idea of research and experimentation.  He emphasizes that there should be carefully-drafted and implemented regulations that address real dangers based on scientific evidence and postulations.  Blackford’s views rest on the principle that an emotionally reactive approach is not acceptable in light of how a liberal democratic society “should act.”

His concerns target what he sees as unacceptable developments and considerations in how we form laws, including drawing on quasi-religious concepts and emotively distorting concepts.  In his new book Humanity Enhanced, with a slated release date in early 2014, Blackford examines his belief that there is a “crisis for liberal tolerance”, hoping to clearly express the argument that there is no “Frankensteinian crisis”.  What society should really be concerned about in the face of scientific advancement, says Blackford, is a loss of liberal principles that protect our freedom as autonomous and intelligent human beings.

But a certain level of emotional reaction and a diversified set of perspectives on human morality can serve a purpose in advancing scientific discovery.  In the July 2013 edition of the journal Science, researchers in China announced that they had found a safer and easier way to create induced pluripotent stem cells (iPSCs), which are as versatile as embryonic stem cells.  This method entails using a combination of small molecules to chemically reprogram adult tissue cells into iPSCs.  Many experts claim that this type of stem cell cannot be used to clone humans.  At the beginning of August, Japan announced its plans to begin recruiting human patients for the world’s first clinical study using iPSCs.  These cells will be genetically identical to each patient’s own cells, a method that seems to eliminate past problems with immune rejection of stem cells.

In an interconnected world full of perspectives, it seems logical that ultimate survival and betterment of humanity rests on compromise and innovation.  When the public reacts strongly, there will undoubtedly be a mix of irrational and rational, but listening to both sides with open ears may help inform ethical decisions that drive progress further.  As noted in one of many articles on the topic, Andre Oosterlinck contends that science thrives in a “climate of freedom”, but that this does not free society from social responsibility or ethical concern.

Blackford sees the need for a more inclusive perspective that takes into account all of the objective evidence before creating and putting into effect laws that impact the integrity of a science with the potential to heal and enhance human lives.  Moratoriums and debate amongst an array of parties – in this case, religious organizations, medical centers, abortion groups, ethicists, and individuals who might benefit from stem cell therapy – have led researchers to continue experimenting with alternative ways of making stem cells, for the purposes of growing tissues and organs in a manner that preserves the integrity of potential human life.

The Singularity – Cart Before the Horse?

The artificial intelligence (AI) field is full of forward thinkers; in the midst of moving ahead, some are particularly grounded in addressing the very real philosophical issues that persist in the world of AI.  Dr. Karl F. MacDorman, roboticist and researcher at Indiana University, is a “healthy skeptic”, specifically when it comes to embracing the idea of achieving an intelligence that surpasses humans’.  As Dr. MacDorman voiced in a recent interview, “I think a fundamental question is…whether we have a kind of post-human future” – certainly one of the foremost questions on the minds of scientists and followers of AI.  As Dr. MacDorman explains, the quest for immortality assumes a metaphysical position: is consciousness something that can be realized in media outside the human form?  If we duplicate every neuron in a human brain and encase it within the body of a machine, does this make the machine conscious?

If the answer to such questions is a speculative “yes”, then these ideas rest on information processing theory, which (in a nutshell) assumes that the cognitive processing of information – input and output – is all that’s necessary to achieve a level of consciousness in an entity.  Of course, humans have particular motivations in pursuing such questions.  Dr. MacDorman notes that humans, by nature, are meaning makers, and many of us look to extend our presence beyond life through some form of immortality, a concept inherent in many religions, e.g., the soul persisting in an afterlife.  Dr. MacDorman points out that even some atheists pursue a form of “everlasting life” through other modes of expression, with Freud and his writing as one example.

Dr. MacDorman describes two ends of the spectrum in thinking about immortality – one party that believes we cannot achieve immortality, because building machines with human-like intelligence is an impossible feat, and an opposite party that believes we can undoubtedly achieve immortality through machines, because human beings are simply extremely complex organisms with the ability to self-replicate.  Both positions require a leap of faith, says Dr. MacDorman.  While he may fall somewhere in the middle, he questions Ray Kurzweil’s idea of the Singularity.  This theory assumes that “we’re going to reach a point at which computers have achieved a human-level of intelligence and then from that point on…they’d be in a kind of god-like intelligence.”  Dr. MacDorman’s concerns lie primarily in the qualitative differences between machines and humans.

Computers can do many things that humans can’t do – manage the Internet, for example.  But for something like the Singularity to take place, a qualitative shift would need to occur.  At present, Dr. MacDorman believes there hasn’t been enough work done on artificial general intelligence (AGI) to understand what kind of qualitative shifts would need to take place in machines to achieve a truly human-like level of intelligence.  The computer Watson, for example, may be able to answer a question about the Gettysburg Address at light speed, but this manipulation of symbols does not necessarily signify intelligence.  Watson cannot physically manipulate objects – by picking up the Gettysburg Address, for example – or make meaning, by spontaneously recognizing its historical significance.  Ken Jennings, the trivia whiz who went up against Watson on the game show Jeopardy!, makes a case for the value of human knowledge in comparison to machines in this TED talk.
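The gap between manipulating symbols and understanding them can be made concrete with a toy sketch. This is not Watson’s actual architecture – the `FACTS` table and `answer` function below are invented purely for illustration – but it shows how a system can return the right answer while grounding nothing:

```python
# Toy illustration (hypothetical; not how Watson works): a system that
# "answers" questions by looking up stored symbols. It never perceives,
# manipulates, or understands anything the symbols denote.

FACTS = {
    ("gettysburg address", "author"): "Abraham Lincoln",
    ("gettysburg address", "year"): "1863",
}

def answer(topic: str, attribute: str) -> str:
    """Return a stored symbol; only string matching, no meaning."""
    return FACTS.get((topic.lower(), attribute.lower()), "unknown")

print(answer("Gettysburg Address", "author"))        # Abraham Lincoln
print(answer("Gettysburg Address", "significance"))  # unknown
```

The lookup succeeds or fails purely on the shape of the symbols; asking about the document’s historical significance, which requires meaning rather than matching, yields nothing.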

Dr. MacDorman poses two fundamental problems.  The first is the Symbol Grounding Problem, which asks how the symbols a system manipulates can come to carry intrinsic meaning, rather than meaning borrowed from the humans who interpret them (Loula, A., and Queiroz, J., 2008).  In AGI, there is still the necessity of finding a stable representational form from which to build a human-like intelligence.  The second is the Frame Problem: in a robot’s world, surroundings are not static, and specifying which facts an action changes and which it leaves untouched – so that the robot can adapt to such modifications – presents a string of problems.  Though much work was done in this area in the 1980s and 1990s, Dr. MacDorman believes the problem is still relevant today.
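The Frame Problem can be sketched with a minimal, hypothetical mini-domain: after each action, the system must represent not only what changed but everything that stayed the same, and in a logical formalism each such non-effect must be stated explicitly for every action/fact pair – a burden that multiplies as the world grows:

```python
# A minimal sketch of the frame problem (hypothetical mini-domain).
# Moving the robot has one real effect, but a reasoner must also be
# told everything the action does NOT change ("frame axioms").

state = {"robot_at": "room_a", "box_at": "room_a", "door_open": False}

def move_robot(state: dict, dest: str) -> dict:
    new = dict(state)       # copy: everything unchanged by default
    new["robot_at"] = dest  # the one real effect of the action
    # Frame axioms made explicit: moving does NOT move the box and
    # does NOT open or close the door. In a logical formalism, each
    # such non-effect must be written out for every action/fact pair.
    assert new["box_at"] == state["box_at"]
    assert new["door_open"] == state["door_open"]
    return new

after = move_robot(state, "room_b")
print(after["robot_at"], after["box_at"])  # room_b room_a
```

In code, copying the dictionary handles the non-effects implicitly; the difficulty MacDorman points to arises when an agent must reason about them explicitly in an open, changing world.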

Dr. MacDorman explains that there exists a tension between too much freedom, which leads to the Turing Tarpit, and systems that can perform complex tasks with human intervention but fail when encountering unaccounted-for changes in the environment.  As John Searle drew attention to in his famous “Chinese Room Argument” (1980), one thing constructed symbol systems lack is a key ingredient that many include in the recipe for intelligence: ‘intentionality’.  This intentionality is rooted in an ability to understand language, to ‘think’ and make meaning.  Searle’s argument spurred the development of many other theories, among them Brooks’s (1990) Physical Grounding Hypothesis, which asserts that machines should be built from the bottom up, using simple processes that interact with a real and complex world and form causal relationships.  This hypothesis is one of the ideas that led to the concepts of situatedness and embodiment, embraced by scientists such as Dr. Ben Goertzel in the creation of intelligent robots. Researchers Rolf Pfeifer and Matej Hoffmann at the Artificial Intelligence Laboratory at the University of Zurich, Switzerland, also make the case that we need to look beyond refining AI and revisit the nature of computation to accurately incorporate the influence of the environment.
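The bottom-up spirit of the Physical Grounding Hypothesis can be sketched roughly as follows. This is a simplified, hypothetical layered controller in the style of Brooks’s behavior-based robots, not his actual implementation: simple behaviors react directly to sensed input, and higher-priority behaviors take precedence over lower ones, with no symbolic world model or planner in between:

```python
# Rough, hypothetical sketch of a behavior-based layered controller
# in the spirit of Brooks's bottom-up approach (simplified; not his
# actual subsumption implementation).

def avoid(sensors):
    """Urgent behavior: back away from imminent obstacles."""
    if sensors["obstacle_cm"] < 10:
        return "reverse"
    return None  # defer to the next layer

def wander(sensors):
    """Default behavior: move about when nothing urgent fires."""
    return "forward"

# Priority order: earlier layers take precedence over later ones.
LAYERS = [avoid, wander]

def act(sensors):
    """Run layers in priority order; first non-None command wins."""
    for layer in LAYERS:
        command = layer(sensors)
        if command is not None:
            return command

print(act({"obstacle_cm": 5}))   # reverse
print(act({"obstacle_cm": 50}))  # forward
```

The controller never represents the world symbolically; each layer couples sensing directly to action, which is the causal, embodied grounding Brooks argued for.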

Another fascinating and relevant avenue of research looks at meaning making from a different angle: the interconnection of language systems, humans, and technology.  Underlying the theory of symbol grounding is semiotics, the study of how certain things come to represent other things to someone, a field attributed to the late C.S. Peirce.  MIT professor and Chief Media Scientist at Twitter Deb Roy has spent the last decade focusing on how people connect words to physical and social contexts.  Through the Cognitive Machines Research Group, he and his students have pursued related questions by building robotic and computational models merged with large-scale data analytics, with one goal being to create autonomous systems that learn to communicate in human-like fashion.  If we are ever to create a human level of intelligence, Dr. MacDorman emphasizes, we need to continue seeking answers to these and other fundamental questions that may further reveal the science behind the qualitative nature of intelligence.