Dr. Stefan Sorgner on the Importance and Origin of Transhuman Philosophy

After flying between myriad conferences on Ethics and Transhumanism, Dr. Stefan Sorgner caught up with me recently here at Sentient Potential to discuss “Metahumanism,” transhuman philosophy, and ethical considerations regarding human enhancement. Currently teaching medical ethics at the University of Erlangen-Nuremberg, Dr. Sorgner sits on the board of directors of the Beyond Humanism Network, in addition to his involvement with the IEET and other organizations.

What is Metahumanism?

Dr Sorgner is known as a “Metahumanist,” and I figured that one of my first topics of discussion should be unpacking that concept for my readers.

From Sorgner’s perspective, Metahumanism is a way of bridging the gap between transhumanist scholars and posthumanist scholars. The Greek term “meta,” Sorgner explains, implies both “beyond” and “in between.”

In 2007/2008, at a conference in Jena, Germany, Sorgner was able to catch up with Nick Bostrom, Julian Savulescu, and Anders Sandberg, and as their ideas percolated, Sorgner’s conception of “Metahumanism” became more clear. His entire manifesto can be found online at http://www.metahumanism.eu/index.htm.

The Philosophical Contributions to the Development of Humanity

Trained exclusively as a philosopher, in the philosophical tradition, Dr. Sorgner is very much interested in the contributions of past and present philosophical concerns to the ethical considerations of an “enhanced” humanity. Thinkers like Bacon, Plato, and Aristotle all had lines of thought relating deeply to what we might now call “bioethics” or other more modern philosophical terms – and though “older” thought may not be “better” in some objective sense, it gives us a lineage along which to trace modern thought, and adds to the tapestry of thinking on modern concerns.

I asked Dr. Sorgner about his own explorations in the works of Nietzsche, knowing that one of his papers on this topic is still talked about to this day (“Nietzsche, the Overhuman, and Transhumanism”).

He explained that the time of Nietzsche was a time of tremendous paradigm shift in modern thinking. Most of philosophy, from Plato onwards, assumed a dualist perspective, taking for granted the idea of an immaterial soul and a material body. This still conceptually continues in Kantian thinking. In most Western legal systems, only humans are granted the rights of personhood, and animals are not – as though we were the only beings who transcend the natural world by way of this kind of “soul.”

Darwin and Nietzsche moved away from this line of thinking toward the idea that we are only somewhat different from great apes or other animals… that we are just a kind of further-developed animal. In philosophical circles today, philosophers of mind continue to discuss how topics like consciousness can come about by means of evolutionary processes. Many thinkers today no longer share the perspective that humanity has an innate kind of essence that other creatures somehow lack.

In light of transhumanist thinking, human beings are natural entities which are part of the natural world – and like our ancestors a million years ago, we are works in progress, actively moving forward in our development. There is no part of the human entity which is not subject to change. There is no immaterial, floating “self” in another world which has a special kind of sanctity. We came about from evolutionary processes, which makes it seem absurd to assume that we are the “end,” that there will be no further development.

Sorgner believes that an informed, contemporary debate of these issues involves not only an up-to-date knowledge of the technological innovations that are and will be in existence, but also an understanding of the philosophical and ethical lines of thinking about how issues might be resolved or opportunities might be sought out.

Negative Freedom – Its Wonders and Dangers

Dr. Sorgner sees negative freedom as one of the core values for which he stands, and an important idea to carry forward into our future. “Negative freedom” means freedom from coercion or harm by others; it is certainly a “modern” development in the course of human history, though it is easily taken for granted by those born under stable democratic governments.

Though I am also sympathetic to the notion of negative freedom given my present human experience (with autonomy and safety as nearly undeniable facets of human fulfillment), I have little certainty that a trans- or posthuman existence wouldn’t be better without it. From my given perspective and faculties, this discernment seems impossible to make. Dr. Sorgner agrees that believing something to be important in our present circumstances isn’t any reason to rest on it in the future without critical analysis.

His optimism (though he repeatedly acknowledged the possibility of various serious risks) lies in a belief that, in general, an era of cognitive enhancement will ring in an era of moral enhancement as well – bringing us to a higher level of practical wisdom, moral insight, and action. I certainly hope that this will be the case.

He believes that there is a chance that even transhuman and posthuman individuals will not claim a kind of moral superiority to rule over “normal” humanity or mistreat the un-enhanced population. I posed that this seems hard to guess, and that if posthuman wisdom, creativity, and moral sense are to man’s present faculties what man is to a beetle, it seems dangerous to believe we’ll stand on some kind of level ground with posthuman godliness, or that we’d be treated much better than beetles are treated today. Dr. Sorgner agrees that the future is dicey, but that our best work for today is to keep informed conversation alive and vigilantly move forward toward the best future we can create.

Little is certain on my end other than the fact that much vigilance will indeed be required.

Thanks again to Dr. Stefan Sorgner for taking the time for the interview. You can find his personal website at www.sorgner.de, and learn more about his influence and works at his Wikipedia page here.


-Daniel Faggella

Ben Goertzel Interview – Humans Not the Sharpest Pencil in the Pack

Ben Goertzel is definitely in the running as one of the most brilliant people I’ve had the good fortune of meeting. He is Chairman of the Board of the OpenCog Foundation, Chief Science Officer at Aidyia Holdings – a financial prediction firm located in Hong Kong – and a prolific writer and researcher in modern artificial intelligence. I caught up with Ben after his return to Philadelphia (visiting family), following his visit to the 2045 Congress in NYC after a flight in from Hong Kong (where he resides today).

The Technologies to Push Us Further – and Timelines

Ben’s perspective encompasses many domains beyond the study of AI alone, and so I decided first to ask about his thoughts on which technologies would eventually further our human potential the most – and which seemed most likely to do so in the near future. He reflected on the fact that there are many technologies that might eventually get us there, but that individual technologies are not nearly as powerful in allowing us to “blast off” (to use his term) in our development as are technologies that permit us greater access to intelligence. “One technology is not as significant as having smarter beings who can create an endless series of new technologies.”

I agree with the sentiment, and indeed it seems that increasing our ability to think (intelligence) and our ability to feel (sentience) will go hand in hand – and that these are the essence of anything that we might refer to as morally “significant.”

With regard to specific technologies, Ben sees artificial general intelligence (or AGI) and brain-machine interfaces as the first real catalysts to bringing us beyond what is presently “human potential.” Though he is as modest in his speculations as many of my other guests (I have interviewed no one with a crystal ball), he is of the belief that artificial general intelligence will arrive before the advent of seriously “enhancing” implants – mostly due to the messiness and stickiness (no pun intended) of the ethical ramifications of developing these technologies.

Some people believe that the speculations of a “Singularity” in this century are absurd and unreasonable. Ben told me, “I’m generally of the Kurzweil-ian, Vinge-ian notion that by halfway into this century, the human mind will no longer be the sharpest pencil in the pack.” As an intellectual, this kind of conversation is exciting – but as a writer, these kinds of quotes are gold.

The Great Importance of Collaboration

Throughout our communication, Ben made it clear that the pursuit of creating AGI will involve cross-disciplinary teams and expertise from many domains, including philosophers, programmers, psychologists, mathematicians, neurologists, etc. Indeed, this is one of the reasons he sees a limitation in the efforts of a company like Google aiming to create artificial general intelligence. First, they are – by his account – working on a different problem than he is (i.e., maximizing natural language understanding and search), and second, their resources lie almost exclusively in programmers. “It doesn’t matter how much money you have, and how many people you have… if you’re working on a different problem, then you’re more likely to solve that different problem.”

Ben also speaks to the grander scope of not only collaborating on individual problems, but on the entirety of technological progress. Indeed, Ben sees all emerging technology developments as potentially building off of and supporting one another in some way. He sees how nanotechnologies might give us what we need to make a better method of brain imaging, which might give us what we need for a more functional brain-machine interface – or indeed, this better brain imaging might give us the insights to create an AGI.

An example that he brings up is the dynamic of computers versus machine tools. Better computers can design better machine tools, and better machine tools can build better computers, ad infinitum. The many domains of emerging technology have a good likelihood of doing the same.

Ben believes that on an individual level, connectedness and collaboration have their pros and cons. He harkens back to the old days of AI research, when you could sit alone in a room and work on a problem in a “pioneering way” because there was nobody to talk to about those matters. However, Ben sees the loss of that romantic era of lone-wolf research as a necessary step toward the level of increased collaboration and pooled knowledge that we will need in order to move forward to an entirely new dimension of human potential.

Our Best Path Forward

I asked Ben what he thought we might need to bear in mind in order to make this transition forward something aggregately beneficial. I am no pessimist, but I’m also not of the belief that enhancing our human potential comes without many a serious risk (from dangerous AGI to an enhancement of suffering to the empowering of greater acts of terrorism). I articulated the apparent truth that there will always be differing ideologies and philosophies, and asked how we might maintain harmony in our progress as a species.

Ben responded, “I think the very nature of philosophy will change…” He mentioned that even just a few hundred years ago, the sciences and philosophy were seen as unified, and that philosophy has since become dedicated to narrower questions. He sees a re-convergence of science and philosophy as we become able to tackle philosophical questions about the nature of consciousness, thought, and perception. We may never solve specific philosophical questions once and for all, but we’ll have the capacity to move forward on them together.

I’d like to say an additional “Thanks” to Ben for taking the time to catch up amidst his travels, and for sharing his thoughts. You can learn more about Ben’s OpenCog project here, in addition to his personal blog.

In addition you can support Ben’s newest project on building the world’s smartest robot on IndieGoGo!


-Daniel Faggella

Transhuman Intentions – On the Future of Motivation and Intentionality

In this short thought experiment, I’ll aim to explore the place of “intention” and “motivation” in a transhuman future, and to discuss some of the pros and cons seriously. Speculations around the topic of intention today are mostly unknowable and undetectable, as intention itself takes place almost entirely inside our heads (until manifested), and largely outside of our conscious awareness. In a transhuman future, this may not be the case at all (as we’ll discuss), and it’s worth pondering these possibilities as our technologies develop and potentially allow them to take shape “in real life.”

The three topics that we’ll explore will be the issues of: understanding and determining our own intentions; having agencies or other powers control, determine, or guide our intentions; and the ability to read the intentions of other people.

Determining and Understanding Our Own Intentions

Some people might argue that the understanding and determination of our intentions and motives is possible even now, given our present mortal capacities. I would argue that though it is admirable (and in the author’s opinion, desirable) to have self-knowledge and to be self-directed, even the most sagacious amongst us is unable to claim – with any certainty – the precise motives for his or her actions, never mind understand them.

Undeniably, we as humans are much more complex than any one goal might define. At present, a machine may be programmed to, say, collect paper clips, or determine the answer to math problems. As humans, rarely if ever could such a simple, “programmed” model be applied to our own behavior. We seem to seek fulfillment, significance, relationships, a grounded sense of self, etc…, and boiling these elements down to a single objective or aim seems potentially dangerous.

However, if self-concordance (http://www.progressfocusedapproach.com/self-concordance-theory/) continues to be a contributor to fulfillment in a transhuman future, then the ability to manually “calibrate” that which motivates us may allow us to more truly understand ourselves and live up to our values. This may or may not be simplistic (and who knows whether our experience is made better or worse by simpler or more complex motives for our actions), but it poses an interesting model of how we might come to understand and develop ourselves in the future.

Potential dangers arise here when we program particular motives which remain “stuck” and end up being harmful to ourselves or others. We might imagine someone whose single programmed motive is to improve as a musician, allowing him to justify even murder in order to get additional practice time (if he could be sure nobody would find out). This honed-in motivation may not allow for remorse, empathy, or grief, and so may in turn create a less rich experience for the agent – and certainly a more dangerous world for everyone living in it with him. On the other hand, adequately selected or balanced motivations may yield more consistent effort and energy from the agent in a direction which is better for him and those around him, and so this kind of self-programmed intentionality at least seems to have the potential to benefit both individual and society.

Outside Programming of Our Intentions

Things get pretty Orwellian pretty quickly when we consider the possibility of having a government or other entity program our intentions, or in any way determine our intentions for us. On the one hand, it seems reasonable to wonder whether – in a world of potentially dangerous emerging technologies – we wouldn’t all benefit from the eradication of the desire to kill, to steal, or to lust for power (as we mortals are, at present, capable of all three on a grand scale). A world where our “intentional range” is limited to types of motives that generally benefit ourselves and society may in fact be the only way we might remain a “society” at all.

In a transhuman world where biotechnology and other kinds of “enhancement” have made humans more and more capable, it seems that this capacity for destruction and conflict would indeed multiply along with our abilities. With needs, goals, and values even more varied than we see today, it seems – from one perspective – that “keeping the peace” would be nearly impossible without “uploading” to entirely virtual realities (where no other sentient life forms may be harmed), or some kind of constraints on our abilities to think of or act on ideas that could harm others.

At the same time, this poses the question: Who determines these standards? The “determiners” could very well be the wielders of power who leverage the controlled populace to do their bidding, possibly resulting in a much worse world for most.

Similarly, if the ability to determine the motives and intentions of humanity were created and used, it may result in a lack of new ideas, of problem-solving, of genius – and also of fulfillment. This would not necessarily be the case (for example, motives could be set to “fruitful innovation,” and our need for autonomy in order to attain fulfillment is itself something that may be “tweaked” in a future of this kind – so that we might still remain happy).

It seems important to be wary of any power controlling society’s minds, as this would place the power of the world entirely in the hands of a given controlling party. If “power corrupts, and absolute power corrupts absolutely,” then this control would seem to lead inevitably to humanity being taken advantage of by the will of the controlling agent or party. However, if this party were a super-intelligent AI geared towards the harmonious and flourishing fulfillment and expression of humanity, then life may very well be significantly better with either total control (and mere “felt autonomy”), or with limitations and guidelines inherently set on our inner intentions, promoting a unified good will and harmony among people.

Though disturbing, it is also important to understand that what is best for human fulfillment is not necessarily what is “morally best” in any innate way. From a classic utilitarian standpoint, “the greatest happiness principle” seems to apply to sentient intelligences in addition to human beings.

Before I get too far off track with this idea, I’ll do what I call “pull an Aristotle” and say something like: I digress; let’s continue back to the topic at hand.

Reading the Intentions of Others

Another interesting potential development in terms of furthering a transhuman notion of “intention” would be a kind of transparency of intention – the ability to detect and discern the intentions of others around you.

This ability is particularly interesting because in society today, we tend to live and interact with others in a way that conveys that even if our intentions were known, others would not be disturbed by them. I’ll be bold enough to pose that generally, we convey the appearance of the intentions that we think are most conducive to getting to our desired aims in life.

One who seeks maximal profit in a business transaction aims to put on the air of a helpful and discerning consultant rather than a salesman; being frank about wanting to make as much money from the transaction as possible simply wouldn’t be conducive to his aims. A man courting a woman will go to the movies and pay for dinner as though these are the sole ends he seeks (the immediate conveyance of his actual aims would not usually be conducive to attaining them). A leader aiming to destroy a rival will do better to attain his ends by proclaiming and appearing to stand for a specific set of values, or by providing greater value to customers.

This game of appearances keeps a semblance of trust and pleasantness in human interactions as we oft pass each other by, conveying the pleasantest well-wishing and most noble or humble intentions. It seems safe to say that if we could walk about for one day and read the genuine intentions of those around us, we would be duly disturbed.

This “reading,” however, implies that intentions are somehow detectable or discernible. It would seem that in some respects, “intentions” are ephemeral notions, composed of dominant desires, prominent goals, and ongoing inner dialogue. If we were to “read” intentions, it would seem that we would need a better or more accurate determination of the constituents of “intention” than we may hold today.

If it became the norm that intentions were readable, it would seem as though we would either become accustomed to what are now relatively disturbing intentions of those around us, or we would somehow move forward in our development towards a kind of society where malicious or detrimental intentions and thoughts are somehow ferreted out and eliminated, either by arresting individuals or by “programming” the minds of society.

Though I’ve seen neither film, the thought-reading scenario has been explored in the movie “What Women Want” starring Mel Gibson (http://en.wikipedia.org/wiki/What_Women_Want), and the persecution of future acts has been explored in the movie “Minority Report” starring Tom Cruise (http://www.imdb.com/title/tt0181689/plotsummary).


As a thought experiment, it’s been interesting to pose what increased control over “intention” might imply for a transhuman future. Based on my present democratic notions of freedom and autonomy (and present constructs of what comprises “fulfillment”), it seems as though the first scenario (the more deliberate control of our own intentions) is a potentially preferable human experience.

The notions of “being programmed” in a way that limits our intentions, or “being monitored” in a way that detects our intentions, both seem threatening to freedom, and both appear to be adequate tools for ruling and controlling society by a given ruling power (or intelligence). Though the prospect of “being programmed” seems disturbing and detestable, I would not be bold enough to say that it would be inherently “bad” in every respect.

The ideas posed above are, again, mere thought experiments, but seem to be serious ethical and societal issues if emerging technologies succeed in allowing for the “tinkering of consciousness” which I refer to as the “ultimate ethical precipice.”

The good news is, we don’t appear to be faced with these issues anytime soon. In fact, you may never know my intention for writing this article at all. 🙂


-Daniel Faggella

David Pearce Interview – On the Higher Hedonic Future

David Pearce is an Oxford philosopher and one of the world’s most prominent proponents of utilitarianism. His work and writing on transhumanism (and involvement with organizations like the IEET) have also led him to be regarded as perhaps the most recognized thinker on the attainment of utilitarian ideals through emerging technologies. In 1998, he co-founded (with Oxford philosopher Nick Bostrom) the World Transhumanist Association – which later became “Humanity+.”

After reading selections of his “Hedonistic Imperative” (abstract here) and communicating with many other members of the IEET, I reached out to David, who was kind enough to discuss his perspectives on the future of humanity here at Sentient Potential.

Countering Arguments Against a Higher Hedonistic Future

David posits that the future of all sentient life should (as an ethical imperative) and could (as a technological possibility) consist of conscious experience along a gradient of bliss, as opposed to the hodge-podge mix of “pleasant” and “painful” or “exciting” and “dull” in which it exists today. This does not necessarily imply a static state of heightened enjoyment, but a rich tapestry of positive experience such as we could not, in our present state, imagine.

In our conversation on arguments against a heightened conscious experience, David explained that many of the arguments against life as “varying gradients of bliss” tend to follow similar patterns, and in discussing them, we chatted about a similarity that many such arguments share:

Namely: That even if our level of subjective well-being could be altered, other elements of our human condition would remain the same.

  • “Happiness Requires Contrast.” One argument against a life on the bliss-gradient is the notion that happiness only exists in contrast with the “bad” in life, and that pleasure loses nearly all of its meaning in the absence of struggle or pain. This argument assumes that even if our level of subjective well-being could be permanently altered, the contrasting or “relative” nature of our happiness would function in precisely the same way it does for us now. Not only is this not necessarily the case (we can make kittens glow in the dark – why couldn’t we alter the psychological constituents of fulfillment in the future?), but David argues that the “gradients of bliss” existence could provide similar contrast and richness with none of the pain and suffering of present sentient experience.
  • “Bliss = Laziness = Bad.” Another argument against a higher hedonic set-point is the statement that humans who experienced a constant state of bliss would not be motivated to do much of anything, never mind help others or help to better the world. This argument again assumes that even if “fulfillment” could be altered, “motivation” could not – and so we hit a catch-22. David again argues that fulfillment and motivation exist on two different planes or spectrums, and that if fulfillment is adjustable towards something more preferable, then motivation is likely subject to enhancement as well (i.e., imagine a life of consistent bliss and greater volitional wield over one’s faculties to attain one’s goals).

Could Hedonism Survive an Intelligence Explosion?

I’ve written recently on the topic of the potential troubles for humanity that may result from the explosion of moral development given an impending intelligence explosion (recent article here). In a nutshell: if machines become unfathomably more intellectually capable than ourselves, how could we assume that they would maintain our ethical systems? It seems more likely that their moral conceptions would explode with their intelligence, potentially resulting in competing, changing, evolving moral systems that are incomprehensible to humans (with not all of them being “friendly” with regard to humanity, either).

In bringing this idea up to David, he maintained that even in such an explosion of intelligence and ethical ideals, the “bedrock” idea of pleasure being good and pain being bad may very well still prevail through the diverse evolutions of morality in the super-intelligent future. He stated that he could not be certain about this (of course), but that he certainly sees this “bedrock” notion as something that would seem to bode well for us hairless apes when the machines run the show, so to speak.

For both our sakes, I’ll probably hope for the same.

– – –

For more information about the work of David Pearce, visit him online at BLTC Research (dedicated to research in what David calls “paradise-engineering,” a term I’ve decidedly taken a liking to), or at his website Abolitionist.com (dedicated to the abolition of suffering).

Thanks again to David for joining me for the interview.


-Daniel Faggella

Aubrey de Grey on Ending Aging and the Human Future

“Okay, let’s get started, but first let me grab my pint,” said Dr. Aubrey de Grey, as he walked, laptop in hand, to his glass. As he sat down and took a sip, I asked, “That’s how they do it in England, huh?”, to which he responded matter-of-factly, “Yes it is.” This was the beginning of a fun interview if ever I’ve had one.

Having followed Aubrey de Grey’s work over the past two or three years, it was an honor to be able to catch up with him in person on Sentient Potential and explore some of his thoughts about the future of humanity. For those of you who are less than familiar with his work, Aubrey is the Chief Science Officer at the SENS Foundation (Strategies for Engineered Negligible Senescence), and the author of a number of books including “Ending Aging.”

Beginnings in Research

I mentioned to Aubrey that many people know him as the man out to defeat aging, but that few people (myself included) knew much of the story of how he got involved in the field in the first place. His response somewhat surprised me.

Originally, his research interests lay in leveraging artificial intelligence technologies to allow humans more leisure time – more time to use our higher faculties – by having machines “handle the things we don’t want to do… like climb into coal mines or sell hamburgers.” In the 90’s, Dr. de Grey met his wife, who happened to be a biologist, and was – at the time – working on fruit flies.

Aubrey’s own interest was piqued by biology, and as he dug into the field on his own, he was surprised to find that nearly nobody in the field was working on the problem of aging – on curing the causes of what is the biggest killer of humans on earth. The answers he got as to why people weren’t working on this problem all seemed irrational and not well thought out. This was the origin of his idea of the “pro-aging trance” (read more about it on his Wikipedia page), a kind of “deathism” that promotes the idea of death from aging as a good thing without actually thinking the problem through rationally.

In terms of overcoming barriers, Dr. de Grey sees the defeat of the pro-aging trance as primary among his concerns in helping us extend human life and end the suffering of aging.

Science Purposes, Humanitarian Purposes

Aubrey explains that in the world of research, it is assumed that there are more humanitarian researchers than there really are. Some people do research simply because they like to figure things out, and only a few scientists do research exclusively to apply it meaningfully in the world. Dr. de Grey’s own personal aspirations have always been to help humanity on the grandest scale, and so his mission (and that of SENS) is the direct application of their research to cure the problem of aging.

What Ending Aging Might Do for the Future of Humanity

I spoke with Aubrey briefly on the topic of the future of humanity, and the potential scenarios (often discussed in the worlds of transhumanism and futurism) that might involve moving our human consciousness into other substrates – giving us long-lasting silicon bodies and potentially moving our minds into computers that are more durable and reliable than our current biological grey matter. Famous proponents like Ray Kurzweil believe that by the 2030’s, mind uploading will be perfected and a preferred mode of existence (some of his predictions here).

It is Aubrey’s belief that the desire to leave our biological substrate will diminish as the “down-sides” of remaining purely biological are resolved. In other words, when we can more or less live forever in our present bodies, Aubrey believes that we will likely not wish to remove ourselves from them. The negative aspects of “being made of meat” – as he aptly put it – would be mitigated by an absence of disease and an absence of the recurring damage which is the origin of aging itself.

Staying in Touch with Aubrey and SENS

The 6th SENS conference is coming up in Cambridge, September 3rd–7th (details here at SENS.org). Anyone looking to attend can register now.

In addition, you can always contact SENS via their contact form if you have any questions about the science, events, local anti-aging related meetups, or anything else at all.

I wanted to say another big thanks to Dr. de Grey for the interview. With innumerable directions for the human future, Aubrey’s research is an approach that much of the world is eager to see progress. Only time will tell, but Aubrey believes that the first 1000 year-old person may already be alive today, possibly in his or her 50s or 60s.


-Daniel Faggella

On the Morality in a Transhuman Future, and its Repercussions for Humanity

Will robots have morals? Will the world and its sentient technologies agree on any kind of ethical “code?” Will democracy still reign when intelligent technologies run essentially all aspects of life?

Questions involving morality, ethics, and values in the future seem to me both immeasurably important, and inevitably senseless. On the one hand, our transition to transhumanism and our development of technologies that alter or create conscious experience (or drastically and irrevocably change life as we know it) represent as close to an “ultimate ethical precipice” as humanity could ever conceive. The more thinking, experiencing consciousness there is out there, the more ethics. Volcanoes weren’t as tragic when all they killed was bacteria. In fact, the few bacteria that survived probably didn’t even notice – bacteria aren’t known for noticing much.

Pompeii was tragic because it buried so many sentient beings. With billions of such beings, more ethical gravity is at stake, and if we end up occupying the solar system or known universe with an unimaginably capable and sentient super-intelligence, the stakes go up again by untold factors.

On the other hand, the possibilities of what “morality” may mean or imply in the distant future cannot possibly be known. I will argue that holding fast to a set of moral beliefs may itself be as dangerous as rapid experimentation and iteration of ethical frameworks – though both may seem viable as the initial stages of super-intelligent AI take hold.

Below I’ll aim to explore this dynamic of the usefulness and uselessness of morality in the hypothetical (or not so hypothetical) future dominated by intelligence beyond our own – with the aim of contributing to the conversation of finding a “sweet spot” for pragmatic application today.

Importance in the Now 

Where I see a tremendous kind of importance in moral thought (both in its evolution and in its common acceptance) is in regard to our immediate technological development. Given that developments in nanotechnology, genetics, brain-machine interfaces, and other fields pose the tremendous implications that they do – it seems that some kind of unified ethical principles might aid in the world’s beneficial “progress” in a direction we can agree upon as “good” (admittedly easier said than done).

In some respects, the safe and beneficial development of these technologies seems to be no different than problem-solving in any other domain. The greater the awareness and unity around issues, and the greater the allocation towards their proper resolve, the better off the result would seem to be.

As for present attempts at guiding action in the transition to transhumanism, I admire Dr. Max More’s “Principles of Extropy.” Chief among the ideas that resonate with me are: A) he vigilantly aims to find a grounding for our development without imposing or limiting, and B) he opposes dogma to the point of inviting viewpoints and challenges for better principles of guidance (which, of course, is the only genuine opposition to dogma, anyway).

I have come up with no better or clearer answer to our orientation toward the future than open-mindedness, unified efforts (ideally, it seems, this “ethical precipice” would unify or harmonize people and nations), and a vigilant pursuit of what is best. Even in its ideal form, this does not seem to exclude the strong possibility of serious conflict around these issues.

However, without some unity in terms of policies, which changes and developments come first, and even new restrictions and laws around the development and use of these technologies, it seems evident that the technologies of tomorrow have the distinct possibility of getting out of control. Of course, this is also possible even with collaboration, policy, enforcement, etc., but I will assume it to be slightly less so. Our best odds for the construction and success of a “framework” of ethical development or morality seem to involve a collaboration of experts from all realms of knowledge, including politics, science, psychology, philosophy, and beyond.

Of course, this “vigilant collaboration” only makes things more complicated – especially because it will presumably be individual human minds aiming to distill this wealth of knowledge and meaningfully implement it in the world. This, amongst other reasons, is why I have proposed elsewhere that it may be best for our first super-intelligence to be constructed for the sake of aiding in the guidance of our developments – a kind of master of “phronesis” (practical wisdom), of applied morality.

Possibilities for the Ethics of the Future

The future of “ethics” is uncertain in my mind, though I see three distinct possibilities in the future (amongst the potential for others, I am sure).

First, it may be possible for the morality of the future to be the continuation of the morality of a people, nation, or organization here on Earth. Though an agreement is likely never to be unanimous, enough of Earth’s inhabitants (particularly, those in control of the technology) might come to an agreement upon common tenets, and when / if these tenets are imbued into a super-intelligence, they may be the morality or value system to be carried forward indefinitely.

Second, the future may be a world void of morality. Either the super-intelligence(s) of the future will have no innate morality, or their goals and activities would be void of any such notions. There may be a thousand vying human value systems and a super-intelligence that cares for none of them and simply pursues a goal or heads towards an aim. The system of “values” of the machine may never attain the depth of humanity’s moral conceptions, or it may simply choose not to allocate its attention to pursuing morality – but rather some other aim (i.e., expanding into and discovering the galaxy, protecting and bettering the life of man, etc.).

It is the third option, however, that I deem to be the most likely future of what we know today to be “morality.” This third option implies an immeasurably complex and ever-changing system of priorities, beliefs, values, and understandings that humanity is incapable of grasping. Just as our thoughts on values and ethics fly high above the head of a rabbit (who cannot understand and has no notion of what “understanding” is), we should not assume that we would be able to grasp any notion of the “morality” of a super intelligence.

It is safer to assume that such an intelligence would evolve and develop its system of morality and values just as has happened with individual people, nations, and eras in the history of man – but in a machine this process would occur much more rapidly, and to much further extents than in man. Instead of ideas being passed down in stories or texts, a super-intelligence would be capable of conceiving of all previous moral thought in an instant, and in the next instant extrapolating its meaning and repercussions with a rational capacity far beyond that of present-day humanity.

It would seem that even if such an AI were programmed with a given moral framework, this, too, could change. If its hyper-vigilance in discerning new knowledge and coming to a deeper understanding were applied to its priorities and purposes – in addition to fields of study like medicine or finance – then it seems as though it may very well “change its mind” and break from the prescriptions we originally endowed it with.

Where an Ever-Shifting Moral Super-Intelligence Leaves Man

This, in my view, does not bode well for humanity’s survival. We would certainly like to program an AI with a human-friendly set of “values,” but its re-assessments and vigilant pursuit of its aims and its notion of the good would likely not take long to bring its free thoughts to “moral” ideals that may not involve the nuisance of humanity any longer. It would seem that these infinitely powerful and consuming “oscillations of thought” might at some point yield the thought that humanity ought to be either ignored or destroyed.

The arguably most disturbing notion is that these new and further moral conceptions – like the new and further scientific ones – will almost certainly be closer to “correct” than any given human notion. By combining all previous scientific knowledge, new and deep understandings and results will be drawn in every field – and countless new “fields” will emerge when the universe is placed under the “lens” of a sentience that is trillions of times more intelligent than ourselves.

If this same kind of collaboration, rationality, and all-encompassing discernment is applied to morality, it would seem difficult for us to argue a super-intelligence’s conclusions to be “wrong,” for in fact it seems inevitable that it will be “less wrong” than any limited notion that our cortexes might conceive of (ultimately, our disagreement would likely not do much for us, anyway).

I certainly don’t want to be exterminated, but it seems difficult to say that I “know” it is right that I be preserved. Might we, too, have an end like that of Socrates? Given the cup of hemlock poison to drink, it is said that he decided to drink cheerfully, rather than go against the society he chose to live in. If we’ve built machines to think, reason, and do more than we ever could – and we’ve given them permission to be in a position to destroy us – a Socratic fate doesn’t seem like so much of a stretch.

Our best “hope,” it seems, is to intend for (and ensure) an optimally benevolent super-intelligence to begin with, but where we are taken from there – we cannot possibly know. Technology has its hands full.

-Daniel Faggella

Selfless Robots – Reflecting on Hughes’ Work in “Robot Ethics”

For the past 2 or 3 weeks I’ve been digging into a lot of material I’ve found in the blog and article sections of the IEET website – and recently I stumbled onto a PDF draft of a work by Dr. James Hughes called “Compassionate AI and Selfless Robots.”

The work prompted a number of questions that I thought would be important material for future posts and future conversations in the field in general. I have firmer assumptions on some of the addressed topics than I do on others, but all seemed worthy topics for debate.

The Constituents of Consciousness

Hughes mentions the Buddhist notion of the five skandhas:

1. The body and sense organs (rūpa)
2. Sensation (vedanā)
3. Perception (samjñā)
4. Volition (samskāra)
5. Consciousness (vijñāna)

He poses: “One of the questions being explored in neuroscience, and yet to be answered by artificial intelligence research, is whether these constituents of consciousness can be disaggregated.”

In other words – can a conscious entity exist if it lacks any of the above “constituents”? My best guess would – for several reasons – be “most likely.”

In ancient times we suspected that all things were composed of elements like fire, water, air, or earth. Later we learned of cells and molecules and atoms and atomic particles.

Similarly, in ancient times we suspected a set of elements involved in consciousness. It seems safe to assume that we haven’t “nailed” the topic of consciousness yet, and that we have many, many amazing discoveries about this topic ahead. It seems unlikely that our ancient guesses will have encapsulated it all (i.e., as with the example of what the world is made of).

In addition, I would suspect that not only might a conscious entity exist without all five of the above elements, but that there are additional “components” or “constructs” of what we call consciousness which we have yet to discover.

It seems reasonable – from my present knowledge – to see humanity conceiving of a consciousness without a body, or even an awake, aware being without volition (our own volition is not certain, never mind that of other animals that we consider “conscious,” like deer or moles). It also seems reasonable that there are capacities beyond the five skandhas which could be leveraged by a conscious entity beyond mortal man. Many of these capacities I believe we are incapable of imagining at present (as the deer is incapable of imagining the joy of writing poetry).

On the Cultivation (Not Just “Programming”) of Virtues

Hughes ends this work with this paragraph: “Buddhist ethics counsels that we are not obliged to create such mind children, but that if we do, we are obligated to endow them with the capacity for this kind of growth, morality, and self understanding. We are obligated to tutor them that the nagging unpleasantness of selfish existence can be overcome through developing virtue and insight. If machine minds are, in fact, inclined to grow into super intelligence and develop godlike powers, then this is not just an ethical obligation, but also our best hope for harmonious coexistence.”

I certainly agree that they’ll (it’s funny, calling imaginary future super-intelligent machines “they”) need some kind of ethical sense in order to not destroy us or harvest our biological materials. I think that the idea of this sense being “cultivated” is important to note. Even if we are able to program a “past life” with a million moral lessons learned, a machine will still need to be able to iterate and respond to what is happening in real time in order to make sense of its moral life (if we wish for it to have one).

However, I am wary of the fact that once a machine attains anything close to god-like powers, it will not be remotely possible to cap its moral thinking to make sure that it still values “us,” or anything else for that matter. As the machine grows and changes, and iterates and processes, and aggregates data and input and engages in cognition, moral thinking, and decision making on levels we cannot imagine, it is not going to be possible to sit across a table from it and ensure that it’s still “cool” with us and not eager to use us for our parts – or simply eradicate us.

I certainly don’t have the answers either, but my hope and assumption (there we go with those assumptions again) is that conversing on these topics and understanding their importance during the construction and early workings of these machines will give us as good a chance as we can get to ensure the betterment of life through these technologies.

– – –

To read Dr. Hughes’ original article, please find the link at the top of this blog. You can find my interview with him in Episode 2: The Transition to Transhumanism.

All the best,


Dr. Dale Purves of Duke University – The Direction and Future of Neuroscience

I was very fortunate last week to be able to catch up with Dr. Dale Purves of Duke University. I first found Dr. Purves via his research website, where I stumbled upon a trove of interesting empirical answers to various conundrums of perception. Having explanations for these perceptual mind games is far more insightful than merely being surprised by them. I probably spent much more time exploring the “See for Yourself” section of Dr. Purves’ site than I should have, and if you follow the link below I think you’ll get an idea of why (the experiments in “motion” were particularly fascinating for me).

(The above image was taken from: http://www.purveslab.net/seeforyourself/)

Dr. Purves’ work over the last number of decades has focused heavily on how perception plays a part in understanding how the brain functions – with much of his best known work in the area of sight and vision.

In the above interview we talk about his personal introduction to the field of neuroscience and neurobiology, the problems with the current approaches aiming to replicate the brain in order to understand its workings, the need for a more thorough understanding of biology and evolution to understand the brain itself, as well as some potential future directions for neuroscience (and their limitations).

Those of you interested in transhumanism may particularly like the portion of our conversation relating to Dr. Purves’ perspective on how brain research is currently being funded and approached (i.e., the perspective of the Blue Brain Project and Kurzweil’s work with Google and elsewhere), in addition to the apparent failure of AI to catch up to biology in the area of vision (this portion of the conversation kicks in about 5:45 into the video above).

I wanted to give an additional thanks to Dr. Purves for taking the time for the interview here at Sentient Potential, and providing a neurobiology perspective on the issues we aim to explore here. Learn more about his work at the link above – and enjoy the interview.


Daniel Faggella

Transhuman Possibilities and the “Epitome of Freedom”

I am still of the belief that two of the most important conversations we can have with regards to the future of humanity and of sentient life in general are (a) how the transition to transhumanism could / should take place, and (b) where we would like this transition to inevitably leave us. This short work will have to do with the latter – and its emphasis will be a transhuman future ideal state that exemplifies the notion of “freedom.”

I’ll begin with identifying what “freedom” means, why it might be desirable (and why it might not be), and how it might be ideally attained in a technologically advanced future.

Freedom can be defined (www.merriam-webster.com) as “the absence of necessity, coercion, or constraint in choice or action.”

As human beings, a sense of autonomy and self-possession is essential to our well-being. It is a constituent of fulfillment based on the nature that we have been granted. We should not suppose, however, that it is inherently “good” in and of itself, though as humans the inference that “freedom = good” is easy to make, as most all of us want it, and want more of it.

It might be possible, for example, for a species to have evolved with a desire to be directed externally, to be constrained in certain ways and so gain a sense of safety and order. These inklings also exist within us as humans, and it might be supposed that if they were more prominent than our drive for freedom, we might not have noticeably less fulfilling lives (again, assuming that this drive for safety and order was more fulfilling and prominent than that for freedom and autonomy).

However, freedom and autonomy likely served a very important role in our development as a species, and continue to play a role in our transition beyond biological intelligence. We seek expansion, betterment, exploration, and for this reason we see the societies, cultures, and technological advancements of the day. We seek to gain more of what we want, and to have our own wants and pursue them.

Before delving into the “epitome of freedom” as a transhuman ideal to be striven for, I ought to address that “freedom” may not necessarily be good in and of itself (or only in certain contexts). Our human brains associate autonomy and freedom with fulfillment and well-being, but this is no innate requirement of sentient beings. We associate “dignity” with freedom, but this association may matter little outside of our notions and perspectives as humans.

However, freedom does seem to have taken us to where we are today, and we can assume that it could serve a utility in our explorations of consciousness / sentient potential (and so the future of sentient beings). Hence, it may prove useful as a transhuman ideal as it is not stagnant, and may reveal more and greater levels of potential modes of existence. If our transhuman future aims not only for proliferation of the “good” life for conscious beings (whatever that ends up meaning [let us hope it implies well-being]), but also for continued progress in exploring the possibilities of conscious experience, then the “epitome of freedom” may be a model of a transhuman future that we find desirable.

If all (originally) human consciousness were to be put in a state of relatively unthinking super-bliss, our subjective quality of life might skyrocket beyond all imagination – but “we” as previously human consciousness would not be contributing to the furthering of our own potential, or the discovery of further possibilities of capability of well-being. In a situation of unbridled freedom, human consciousness might not only control and enjoy experience, but might vigilantly find new possibilities which all sentient beings could learn from and gain from. If there was an established method for these new discoveries to be shared or proliferated (either between conscious entities or through the medium of some super-intelligence), then each free entity – even if in its own virtual world – might gain and discover new experiences / capacities / ways of being that all other intelligent life could learn from or draw from.

What the Epitome of Freedom Might Look Like

Given the potential of future technologies, certainty is the last thing we have with regards to what the world might look like, but we might “paint a picture” as many fiction writers and futurists have aptly done, in order to explore how the expression of different ideals might look and feel in the future.

If freedom is carried to its extreme, then it seems as though individual entities would wield direct control over their own feelings, their own actions, their own experiences, and so on. This kind of enhanced “free will” (a term I use tentatively, given its philosophical ramifications) would involve one experiencing just the emotions and thoughts that it might find most conducive to its development or the attainment of its desires.

In my opinion, for the epitome of freedom to be attained, virtual reality would have to be its domain – and I would argue that the human form in and of itself would very swiftly become irrelevant. Below I will explore why I have made these two hypotheses.

On Virtual Freedom Over Freedom in Present Reality

If the future of sentience does in fact involve the maintained identity and separateness of individual entities, then “freedom” could only extend so far in a real world before bordering on impinging on the “freedom” of others. Complete freedom would imply control over one’s environment and free choice to do what one would choose with it. It seems easy to understand how this might imply the threatening of the freedom of others in the same physical world.

Not to mention, the physical world has many impinging qualities that would hinder any semblance of complete freedom. Matter has qualities, light has qualities, and physical bodies (no matter how enhanced) will always have limitations. If we’d like to change an aspect of our character or emotional experience, for example, we’d have to potentially tinker with brain chemicals (more explorations of this topic in Dr. James Hughes’ article here on Buddhism and Human Enhancement). In a virtual reality, we are potentially presented not only with the freedom to extend beyond physical limitations (to transport to different times or places, to live within self-created fantasy worlds, to eliminate death and any physical risk), but we would also be granted freedom from impinging on or affecting others – and so allow for their full freedom in a separate virtual reality as well.

For this reason, it seems to make sense that (assuming our future will involve the continued existence of individual consciousnesses) we might encounter a Bostrom-like “Singleton” to rule the physical world, and a great sea of individual consciousnesses in the virtual world. The “Singleton” could keep our computational substrates safe from harm and eliminate competition or danger in the physical world, while our virtual “selves” would be capable of expressing and exploring the epitome of freedom on our own terms in a limitless virtual world of our own creation.

It should be briefly noted that this “epitome of freedom” situation need not involve lots of “little consciousnesses,” but may in fact be chosen as the way of being for a single, overarching intelligence that aims to expand indefinitely. Hence, this “Singleton” might take on this value of freedom itself.

On the Abandonment of Human Form Altogether

At present – as with many projections of the future – this thought is not particularly comforting or comfortable to contemplate, but I believe it is worth addressing as it seems relatively imminent with or without the “epitome of freedom” scenario.

Most futurist projections involve an eventual kind of “uploading” into a virtual world or a new, post-human body. Even in the projection of a virtual reality, most futurist images involve new, wonderfully capable humanoid bodies. In my opinion – particularly in virtual reality and even in our shared “reality” itself – this anthropomorphism is as preposterous as “Rosie” (the robotic maid) from the Jetsons cartoon having a maid’s apron and a big radio antenna sticking out of her robot head. It is a projection into the future given the knowledge and forms we are familiar with today.

The year 3000 will not be like a giant simulation of Sim City, with simulated humanoid bodies hustling about, going to work, and worrying about remembering milk at the grocery store or making it through a traffic jam (unless we explicitly create this kind of world as some kind of fun experiment or joke). Rather, if consciousness is freed from physical form and given millions or billions of times greater capacity to understand, create, and control… then any semblance of human physical form will become obsolete outside of remembering a human past. Though I consider myself to be tremendously grateful for life – I can imagine that a post-human super-intelligent version of my sentience would have many more experiments, experiences, and objectives outside of remembering when I used to walk on two legs, practice martial arts and enjoy eating sushi.

I very much resonate with the 2045 project, and believe it to be more relevant an effort (assuming continued technological developments) to better sentient beings than literally anything else we might focus on at present here on earth. Their eventual objective of bringing human consciousness to a level of focus and exploration of the heights of spiritual self-improvement is an ideal that I resonate with (though others may not). However, their projection of the creation of super-capable humanoid “Avatars” capable of living in physical or virtual worlds is a transition that I would see as only a very short interlude before a complete transcendence of form altogether (never mind anything remotely human). At that point of massively expanded capability, I don’t believe that we will need any kind of human form to navigate or make sense of our world, and that the projection of this form into the future most likely represents a kind of anthropomorphic bias.

I also don’t believe that our race will see significant biological evolution beyond our present state, as most predictions of such do not factor in non-biological developments (http://www.mnn.com/earth-matters/wilderness-resources/stories/projecting-human-evolution-5-traits-we-might-possess-in-t).

Potential Dangers of This “Epitome of Freedom” Scenario

As stated before, this particular idea of the “Epitome of Freedom” which we explore today implies one particular variation of a transhuman existence where individual consciousnesses are presented with virtual worlds in which to expand indefinitely and without restriction. It should be emphasized that this is not the only potential post-human existence which we might deem possible, and that this possibility is not without its own tremendous risks to sentient well-being.

First, if “we” as previously biological consciousnesses are housed in some kind of computational substrate in a “real” world, then there is a distinct danger of something happening to that computational substrate itself. If our “housing” computer is destroyed, tampered with, or contracts some form of virus, this might imply our swift elimination. For example, the entity in control of our physical substrate might determine that it could use our computing space more adequately for other purposes, and merely flush us from the system immediately, even amidst all of our God-like virtual expansion. Poof.

Second, this situation might imply the creation of a “hell” equivalent, where our infinitely experiencing and infinitely conscious “selves” become trapped in a world of tremendous pain and suffering, on such a torturous and terrible scale that it could easily outweigh all of the suffering in human history. This “virtual hell” might be created by our own mishaps in the expression of creativity and freedom in a virtual world, but we can imagine it might also be brought about via a virus or malicious being with access to our virtual world. This appears to be a real and legitimate threat, and a serious consideration for humanity as it transfers consciousness into other substrates.

In Conclusion

The “epitome of freedom” is one of countless potential worlds of the future, and implies the triumph of the value of autonomy. In the actual future we face, there may in fact be no individual conscious entities, nor may there be any innate value to “freedom” – particularly for little formerly-human consciousnesses. However, I have here explored why this possibility may be desirable for individuals and for the sentient world as a whole – as it implies the constant creation of new modes of being, new experiences and capacities, and more, which may serve some utility to the conscious community as a whole. I have also posited why it seems imperative that super-capable free agents would likely be best off in individual virtual realities – as opposed to sharing a physical world where they might do permanent damage to the environment or each other.

As to what the future will actually bring about – we cannot possibly know. However, it seems safe to say that just as man imagined his travel to the moon well before the actual flight took place – the ideals and possibilities of transhumanism laid out today will be determining factors in how the future of consciousness transpires. In the best case scenario, our greatest concern will be dealing with the boring prospect of eternal life, which to me doesn’t seem all that boring if we’re endowed with the faculties to make the most of it. Let us be vigilant, let us collaborate, and let us aim to find the best path forward.

-Daniel Faggella

Acceptance vs. Questioning, Kurzweil, Jobs and Francis Bacon

In an interview with Wired about his work building a brain at Google (http://www.wired.com/business/2013/04/kurzweil-google-ai/), Ray Kurzweil was asked about his thoughts on Steve Jobs’ notion of death as a natural part of life, and a provider of meaning and urgency (from his now-famous speech at Stanford).

“…This is what I call a deathist statement, part of a millennium-old rationalization of death as a good thing. It once seemed to make sense, because up until very recently you could not make a plausibly sound argument where life could be indefinitely extended. So religion, which emerged in prescientific times, did the next best thing, which is to say, ‘Oh, that tragic thing? That’s really a good thing.’ We rationalized that because we did have to accept it.”

If it’s your first time reading about radical life extension, the statement might come across as arrogant (when in fact I believe Ray likely stated it matter-of-factly), but it usually gets the point across. If death weren’t 100% inevitable, would we still will it to happen? For most of us, the answer is likely not a cold “Yes.”

I’m forever surprised that there are not more similarities drawn between Kurzweil and the late Sir Francis Bacon (1561-1626), the famous philosopher and promoter of science who exclaimed the following in his “New Atlantis”:

“My only earthly wish is… to stretch the deplorably narrow limits of man’s dominion over the universe to their promised bounds… [nature will be] bound into service, hounded in her wanderings and put on the rack and tortured for her secrets.”

This defeat of death itself through deliberate effort and “torturing nature for her secrets” is – in the “New Atlantis” as in Ray’s modern projects – only a small (though unquestionably important) facet of overcoming our condition and leveraging the secrets of nature.

Death is Not the Last Condition to be Questioned

If death were to remain inevitable, and our present lifespans were somehow to be the peak of human achievement, then acceptance and meaning-making would seem appropriate. There are plenty of rational, science-minded individuals who – like Steve Jobs – would find some semblance of meaning in death.

What about our intellectual capacities? At present, without any additional skull-room to fit more cortex, our intelligence might be seen as close to topped out. What about the fact that time can only be experienced moving forward? What about our sensory experience being limited to five senses?

Without any ability to change any of the above conditions, the only reasonable response would be acceptance and meaning-making. It might be comforting to think that the reality we stumbled into is the perfect one, and that it is to be embraced, accepted – and held as sacred. This “surrender” is not a perspective that I disregard as inherently “wrong” in a moral sense, but it certainly isn’t the only “right,” and it seems to yield little in the progressive betterment of our condition as people or nations.

If you’re reading this on a personal computer, it is presumably due to someone’s non-acceptance of the inaccessibility of knowledge. This is not to say that this person held no respect for their present condition, but rather that they set their mind to improving it on their own terms, vigilantly working away to yield the result – the computer you’re looking at now.

It should be noted that there were likely those who argued against the spread of the computer, believing that books would make us appreciate information more, or prevent our technological laziness. The vast majority of these people now likely use the internet daily, and a great number of them likely have iPhones.

Why? Because this technology served its purpose: it delivered value, it furthered our aims. At a certain point… it would make others faster or more connected than ourselves, and we’d have to catch up. For these reasons and others, the fear of losing the “meaning” of taking a book off of a shelf did not hold back computing.

Stepping Into the Future, Modulating Non-Acceptance

Think of how absurd it would have been to tell someone in 1869 that in 100 years, man would walk upon the moon.

Think, also, of how absurd it would have been just 50 years ago to tell someone off the street that in 50 years we’d have paralyzed people moving robotic arms to sip their coffee – using their thoughts alone.

Is it to be supposed that in the next 20, 50, or 100 years (given the massive speed of technological progress) we are to see any less of this kind of seeming absurdity? Would we not expect to see so much more in our time than in any previous generation? How could we not question all that we know – or at least know that anything we’ve ever known can, and one day must, be brought into question?

We do not know what we are capable of – and though the exploration of these reaches implies massive responsibility, it also seems a duty to continue to break free of any notions that we feel bind us or restrain us from the betterment of our condition. Of course there is danger in surpassing ourselves (hence the original myth of Atlantis http://en.wikipedia.org/wiki/Atlantis), but the tremendous possibilities of “progress” seem to outweigh the dangers of technological and scientific stagnation.

This is the perspective of questioning the inevitable, rather than settling for meaning-making on all the topics about which we can do nothing… yet. It should be noted that I hold nothing against meaning-making, or against putting a “positive frame” on a condition in order to feel better or promote action.

Because the technology did not exist, it was likely best for Jobs to accept and make meaning of death – for him it likely was inevitable in every sense of the word. Right now (assuming you read this well before life-extending technology exists), if you were lying on your deathbed it might make sense to come to the same conclusion. Let it be known, too, that Jobs questioned the accepted – and will go down in history as one who did not accept, but created the world and conditions of his vision. In some circumstances, acceptance is called for – and in others, it is not.

Jobs, after all, was one of those “questioners” and “non-accepters” who forced the personal computer into existence, along with the iPhone that now squarely fills the pockets of so many who would have initially clamored for books over computers (let it be known that I love a good book – but not enough to prevent the furthering of knowledge through mediums other than paper).

What Jobs and Kurzweil (and Bacon) seem to show us is that everything at least CAN be questioned, and that acceptance need not result in resignation. Though we’ve made meaning of death and of books, if we are vigilant we will see them for what they represent, and we will be capable of making new meaning – a process that needn’t imply an insult to our past.

If we are strong enough to detach from outdated ideas, or to mold meaning in accordance with new knowledge or possibilities (as the three men here listed have done), then we may become contributors and directors of the future in addition to curators of the past.

-Daniel Faggella