
Dalyth's Article Thread

Dalyth

I'll post any articles I find interesting here. Feel free to share, read, discuss, debunk, or ignore as you please. :p

First:
Gene-editing must wait until safety and ethics catch up

Gene-editing on human reproductive cells on a large scale may be possible, but the ethical and safety issues still outweigh the urgency to put it into practice. That's the decision of members of the Organizing Committee for the International Summit on Human Gene Editing who, after three days debating the potential for genetically tinkering with embryo, sperm, or egg cells from human donors, concluded it would be "irresponsible" to go ahead now.

The conference was an opportunity to put the fast-progressing genetic modification research into a greater context since, as the committee pointed out, "the prospect of human genome editing raises many important scientific, ethical, and societal questions."
While genetic modification and stem-cell research have been progressing for some time, the summit was prompted by a new way of gene editing that has seen rapid acceleration in recent years.
Dubbed CRISPR-Cas9, it's based on a bacterial CRISPR-associated protein-9 nuclease (Cas9) from Streptococcus pyogenes. Compared to existing tools, it's easier to adapt to different experiments, while modification accuracy is also higher.
That's opened the door to wide-scale genetic editing at a much faster pace, but also triggered renewed concerns about the longer-term implications.
The committee concluded that, although promising, it's still too early to allow human reproductive cell editing. For instance, "the safety issues have not yet been adequately explored; the cases of most compelling benefit are limited; and many nations have legislative or regulatory bans on germline modification," the group highlighted.
Still, the potential for somatic cell editing, which would allow for changes not passed on through reproduction to future generations, has "promising and valuable" applications including dealing with cancer and other diseases, the group argued.
To the disappointment of some, though, the door is not being closed completely on the research. "However, as scientific knowledge advances and societal views evolve, the clinical use of germline editing should be revisited on a regular basis," the group said.
The recommendation is that the three organizations that convened the summit - the U.S. National Academy of Sciences and U.S. National Academy of Medicine; the Royal Society; and the Chinese Academy of Sciences - put together a committee to assess the different factors on an ongoing basis. Such a committee, it's suggested, should include not only scientists but also ethicists, patients, representatives of industry and faith groups, the public, and policymakers, among others.
SOURCE International Summit on Human Gene Editing



http://www.slashgear.com/gene-editing-must-wait-until-safety-and-ethics-catch-up-03416908/
 

Dalyth

Gene Editing Won't Work on Complex Traits Like Intelligence

As we consider the ethics of human gene editing, we need to understand what can and can’t be meaningfully edited.
Last week, scientists gathered in Washington, DC for the International Summit on Human Gene Editing to discuss a technology called CRISPR-Cas9, which can insert, remove and change the DNA of basically any organism. It is relatively simple, inexpensive and accurate, and it’s already being used in laboratories around the world to make cells and breed laboratory animals with modified DNA for the study of diseases.
CRISPR could also be used to modify DNA in human embryos, but the question is whether this should be allowed. Among the concerns scientists and bioethicists have highlighted are heritable gene modifications and the use of this technology to create “designer babies.” CRISPR provides new opportunities for disease treatment and prevention, but with unknown and potentially substantial risks that warrant an ethical discussion. And this discussion should be rooted in an understanding of what can and cannot be meaningfully edited.

I study the genetic prediction of complex diseases and traits. Research in my field has consistently shown that human traits and common diseases are not genetic enough to be predicted using DNA tests. For the same reasons, it will be impossible to successfully program the presence of traits in embryos.
Any concerns that CRISPR could be taken a step further to enhance babies by selecting favorable traits such as intelligence and athleticism may be unwarranted.
What can be edited?

The first (and failed) experiment of human embryo editing aimed to repair a single gene mutation for beta-thalassemia, a severe blood disorder. Other diseases mentioned as future targets for gene editing, such as sickle cell disease and Duchenne muscular dystrophy, are caused by single gene mutations.
These diseases are – at least hypothetically – easier to fix because the cause is entirely genetic and simple. For these diseases, research using CRISPR may lead to breakthrough discoveries for therapies and, potentially, for prevention.
But genetic editing of embryos for single-gene disorders also warrants caution. Not only could off-target cuts – unintended edits in the wrong places of the DNA – introduce heritable errors, but mutations may have so-called antagonistic pleiotropic effects.
This means that the expression of the gene increases the risk of one disease while decreasing the risk of another. Take beta-thalassemia or sickle cell disease, for example: carrying two mutated copies leads to severe illness, but carrying one mutated copy reduces the risk of fatal malaria.
Why traits cannot be designed in embryos

For a trait to be “programmed” with gene editing, it needs to meet two criteria.
First, the trait must be predominantly determined by DNA, which means that its heritability needs to be close to 100%. The lower the heritability, the larger the role of nongenetic factors such as lifestyle, education and stress, and the less likely it is that the trait can be genetically programmed.
Parents who wish to enhance their offspring may be particularly risk-averse when it comes to the unknown adverse consequences of genome editing. That means that the heritability of favorable traits may need to be very close to 100%.
But a recent review, summarizing 50 years of heritability research, showed that only a few traits and diseases had an estimated heritability higher than 90%. Intelligence and higher-level cognitive function were around 50%, muscle power at 70% and temperament and personality at around 45%.
Second, the “genetic architecture” must be straightforward. Traits must be caused by a single mutation, like beta-thalassemia, or by an interaction between a limited number of mutations. It may technically become possible to edit DNA accurately at multiple places in the near future. But we still won’t know what exactly needs to be edited to program a trait when tens or hundreds of gene variants are involved.
Gene editing for favorable traits is not just a matter of tweaking the genes in the right direction. What makes people intelligent, for instance, isn’t a combination of the “right genes” and the “right environment,” but the “right combination” of genes and environment. Since the future environment of the embryo is unknown at the moment of editing, it will be impossible to know what the right genes need to be.
This is why the traits people might want to enhance can’t be programmed in the embryo, not even with the most accurate and reliable version of CRISPR. The technology is not the limitation for enhancing babies – nature is.
Despite the successes in gene discovery of the past 10 years, our knowledge of the combined contribution of all genetic variants is too limited for embryo editing. Even when all genes and their complex interactions are completely understood, our ability to use gene editing for favorable traits will remain limited because human traits are just not genetic enough.
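To make the gene-environment point concrete, here is a toy numerical sketch (the function and numbers are invented for illustration, not real genetics): when a trait depends on the match between genotype and environment, no genotype chosen at the embryo stage can be best across all possible futures.

# Toy model with invented numbers: the trait depends on how well
# the genotype matches the (unknowable) future environment.
def trait_value(genotype, environment):
    # genotype and environment are abstract values in [0, 1];
    # the interaction term means there is no single "best" genotype.
    return 1.0 - abs(genotype - environment)

for g in (0.2, 0.8):
    for e in (0.2, 0.8):
        print(f"genotype={g}, environment={e} -> trait={trait_value(g, e):.2f}")
# genotype 0.2 wins in environment 0.2, genotype 0.8 in 0.8:
# the "right genes" cannot be picked before the environment is known.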
We need to be clear about what cannot be edited

Urged by concerns about the safety and reliability of CRISPR technology and the unknown medical, societal, environmental and ethical consequences of human gene editing, a group of scientists is calling for a voluntary moratorium on “attempts at germline genome modification for clinical application in humans.”
The UNESCO International Bioethics Committee has also called for a moratorium citing concern over the creation of “heritable modifications” and “enhancing individuals.” Interestingly, their report acknowledges that CRISPR:
could be a watershed in the history of medicine […] even though it must be noted that there are only a few diseases for which the abnormality of one single gene is a necessary and sufficient condition.
This little side note, however, marks the boundaries of what can meaningfully be edited in the DNA of an embryo.
Gene editing technology warrants further study and refinement, which should be accompanied by evaluations of potential adverse consequences. But progress should not be hindered by an ethical debate about a potential misuse of the technology that will not be possible.
Polygenic diseases and traits are simultaneously too complex genetically and not genetic enough. This limits the opportunities for disease prediction, and will also prevent the genetic enhancement of babies.
A. Cecile J.W. Janssens is a Research Professor of Epidemiology at Emory University.
This article was originally published on The Conversation. Read the original article.
Image by Nana B Agyei under Creative Commons license


http://gizmodo.com/gene-editing-wont-work-on-complex-traits-like-intellige-1746595228
 

Dalyth

Machine Morality Etc

Machine Morality: Computing Right and Wrong

By Sherwin Yu May 10, 2012

Imagine a future in which artificial intelligence can match human intelligence, and advanced robotics is commonplace: robotic police guards patrol the streets, smart cars yield to one another, and robotic babysitters care for children. Such a world may appear to lie in the realm of science fiction, but many of its features are increasingly realistic. While the benefits of such a world are enticing and fascinating, advanced artificial intelligence and robots bring a whole set of ethical challenges.
Wendell Wallach, a scholar at Yale’s Interdisciplinary Center for Bioethics, researches the potential ethical challenges of future technologies and how to accommodate potential societal changes. Wallach is a leading researcher in the field of machine ethics, also known as robot ethics, machine morality, or friendly AI. The central question of machine ethics is: “How can we implement moral decision-making in computers and robots?” This inherently interdisciplinary field sits at the interface of philosophy, cognitive science, psychology, computer science and robotics.

Different Levels of Moral Agency
As artificial intelligence and robotics continue to advance, we reach the possibility of computer systems making potentially moral decisions by themselves — artificial moral agents (AMAs). Wallach proposes a continuum of moral agency for all technology, from everyday objects completely lacking agency to full-fledged sentient robots with full moral agency. The continuum exists along two dimensions: autonomy, which indicates what the technology has power to do, and ethical sensitivity, which reflects what inputs the technology can use to make decisions. For example, a hammer has no autonomy and no sensitivity, while a thermostat has sensitivity to temperature and autonomy to turn on a furnace or a fan.
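As an illustration only (the numbers and interface here are ours, not Wallach's formalism), the two dimensions might be sketched like this:

# Illustrative sketch, not Wallach's formalism: technologies placed
# on a two-dimensional continuum of autonomy and ethical sensitivity.
from dataclasses import dataclass

@dataclass
class Technology:
    name: str
    autonomy: float             # 0 = inert tool .. 1 = fully autonomous
    ethical_sensitivity: float  # 0 = no morally relevant inputs .. 1 = rich inputs

hammer     = Technology("hammer", 0.0, 0.0)
thermostat = Technology("thermostat", 0.2, 0.1)   # senses temperature, switches a furnace
caregiver  = Technology("robot caregiver", 0.6, 0.5)

def moral_agency(t: Technology) -> float:
    # Crude placeholder: agency grows with both dimensions together.
    return t.autonomy * t.ethical_sensitivity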
As robots gain increasing autonomy and sensitivity, so too do they have greater moral agency. Wallach explains that the most advanced machines today only have operational morality — the moral significance of their actions lies entirely in the humans involved in their design and use, far from full moral agency. The scientists and software architects designing today’s robots and software can generally anticipate all the possible scenarios the robot will encounter. Consider a robot caregiver taking care of the elderly. The designers of the robot can anticipate possible ethically-charged situations such as a patient refusing to take medication. Because the robot’s autonomy and sensitivity is limited, the designers can feasibly account for all possible situations, and desired behavior in expected situations can be programmed directly.
But what happens when the designers can no longer predict the outcomes? When both autonomy and sensitivity increase, greater moral agency and more complex systems arise. Functional morality refers to the ability of an AMA to make moral judgments when deciding a course of action without direct instructions from humans.

Wallach explains that implementing machine morality has two basic approaches — top-down and bottom-up — as well as a hybrid approach. In a top-down approach, a limited number of rules or principles governing moral behavior are prescribed and implemented. The top-down approach characterizes most moral frameworks in philosophy, such as Kant’s Categorical Imperative, utilitarianism, the Ten Commandments, or Isaac Asimov’s Three Laws.
In bottom-up approaches, which take their inspiration from evolutionary and developmental psychology as well as game theory, the system instead attempts to learn appropriate responses to moral considerations. Rather than selecting a specific moral framework, the objective is to provide an environment in which appropriately moral behavior develops, roughly analogous to how most humans “learn” morality: growing children gain a sense of what is right and wrong from social context and experiences. Techniques such as evolutionary algorithms, machine learning, or direct manipulation to optimize a particular outcome can all be applied to help a machine develop toward this goal.
Wallach notes that both approaches have their weaknesses. The broad principles in top-down approaches may be flexible, but they can also be too broad or abstract, making them less applicable to specific scenarios. Bottom-up approaches are good at combining different inputs, but they can be difficult to guide toward an explicitly ethical goal. Ultimately, AMAs will need both top-down principles as an overall guide as well as the flexible and dynamic morality of bottom-up approaches.
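A minimal sketch of that hybrid idea, under assumptions of our own (the rule set and the policy interface are invented): a bottom-up learned policy proposes actions, and a handful of top-down rules veto any proposal that violates them.

# Hypothetical hybrid architecture: a learned policy proposes,
# explicit top-down rules dispose.
def learned_policy(situation):
    # Stand-in for a bottom-up model (e.g. trained by reinforcement
    # learning); here it simply ranks candidate actions by a score.
    return sorted(situation["actions"], key=lambda a: a["score"], reverse=True)

TOP_DOWN_RULES = [
    lambda a: not a.get("harms_human", False),   # Asimov-style first law
    lambda a: not a.get("breaks_law", False),
]

def choose_action(situation):
    for action in learned_policy(situation):
        if all(rule(action) for rule in TOP_DOWN_RULES):
            return action
    return None  # nothing permissible: defer to a human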

Challenges in Machine Morality
Two main challenges stand in the way of implementing moral decision-making. The first is implementing the chosen approach computationally. For example, utilitarianism might look attractive because it is inherently computational: choose the action that produces the result with the highest utility. But what is the stopping point for what is considered a result of an action? How far into the future is an AMA expected to calculate? Furthermore, how does one computationally define utility for the calculation, and how does an AMA evaluate the utility of different outcomes? The difficulty of computationally instantiating decision-making is also showcased in the short stories of Isaac Asimov, in which robots obey three laws in priority order: 1) do not harm humans, 2) obey humans, and 3) protect their own existence. Asimov wrote more than 80 short stories exploring the unexpected and potentially dangerous conditions that arise from the combination of these rules. Furthermore, to function properly in a society of humans, AMAs may require the computational instantiation of human capabilities beyond reason, many of which we take for granted, such as emotions, social intelligence, empathy, and consciousness.
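The hidden choices in the utilitarian calculation can be made explicit in a short sketch (simulate() and utility() are hypothetical stubs; they are exactly the pieces the questions above say are hard to define):

# Sketch of bounded utilitarian action selection. The horizon and the
# utility function are arbitrary choices: the "stopping point" problem
# from the text, made visible as parameters.
def best_action(state, actions, simulate, utility, horizon=3):
    def expected_utility(s, depth):
        if depth == 0:                 # how far ahead should an AMA look?
            return utility(s)          # and what counts as utility?
        outcomes = simulate(s)         # list of (probability, next_state)
        return sum(p * expected_utility(n, depth - 1) for p, n in outcomes)
    # Each action is a callable that maps the current state to a new one.
    return max(actions, key=lambda act: expected_utility(act(state), horizon))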
The second problem for implementing moral decision-making is what Wallach calls the “frames problem.” How does a system even know it is in a morally significant situation? How does it determine which information is morally relevant for making a decision and whether sufficient information is available? How does the system realize it has applied all considerations appropriate for the situation?
Practical Difficulties
With all of these complicated questions, one might wonder just how far along modern-day technology is. Wallach explains that while we are far away from any machines with full moral agency, it is not too early to give serious consideration to these ethical questions. “We are beginning to have driverless cars on the road, and soon there will be surveillance drones in domestic airspace. We are not far away from introducing a robot to take care of the elderly at home. We already have low-budget robots that entertain: robot nannies and robopets.”
With the advent of robots in daily life, many security, privacy, and legal quagmires remain unresolved. Robots placed in domestic environments pose privacy concerns: to perform their jobs, they will likely need to record and process private information, and if they are connected to the Internet, they can potentially be hacked. Security matters even more for systems performing critical roles, such as pacemakers, cars, or planes, where failure could be catastrophic and directly result in deaths.
Google’s self-driving cars, which are being piloted in Nevada, pose legal issues as well. How do we legally resolve a complicated accident involving a self-driving car? What should a self-driving car do if a situation forces it to choose between two options that both might cause loss of human life? Wallach poses a question: suppose self-driving cars are found to cause 50 percent fewer accidents than human drivers. Should we reward the robot companies for reducing deaths, or will we sue them for accidents in which robot cars were involved? Wallach says, “If you can’t solve these ethical problems of who’s culpable and who’s liable, you’ll have public concern about letting robots into the commerce of daily life. If you can, new markets open up.”

The Future of Machine Morality
Wallach ultimately tries to anticipate what sort of frameworks could be put in place to minimize the risks and maximize the benefits of a robot-pervasive society. Wallach points out that considering the ethical implications of AMAs falls into the broader discipline of engineering ethics and safety. Engineers need to be sensitive to these ideas when they think about the safety of their systems. Balancing safety and societal benefit has always been a core responsibility of engineering; today’s systems, however, are rapidly approaching the complexity where the systems themselves will need to make moral decisions. Thus, Wallach explains that “moral decision making can be thought of as a natural extension to engineering safety for systems with more autonomy and intelligence.”
When asked whether ethics should be a priority, Wallach responds with fervor: “I think it will have to be. There remain some technical challenges, but we will have to think through these problems eventually for our society to be comfortable accepting robots and AI as part of everyday life.”

About the Author
Sherwin Yu is a senior in Morse College studying Computer Science and Molecular Biophysics & Biochemistry.
Acknowledgements
The author would like to thank Wendell Wallach for his help in writing this article.
Further Reading
Moral Machines: Teaching Robots Right From Wrong (Oxford University Press 2009). Wendell Wallach and Colin Allen
Robots to Techno Sapiens: Ethics, Law and Public Policy in the Development of Robotics and Neurotechnologies. Wendell Wallach. Law, Innovation and Technology (2011) 3:185-207.


http://www.yalescientific.org/2012/05/machine-morality-computing-right-and-wrong/

Machine ethics: The robot’s dilemma

Boer Deng
01 July 2015

In his 1942 short story 'Runaround', science-fiction writer Isaac Asimov introduced the Three Laws of Robotics — engineering safeguards and built-in ethical principles that he would go on to use in dozens of stories and novels. They were: 1) A robot may not injure a human being or, through inaction, allow a human being to come to harm; 2) A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law; and 3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Fittingly, 'Runaround' is set in 2015. Real-life roboticists are citing Asimov's laws a lot these days: their creations are becoming autonomous enough to need that kind of guidance. In May, a panel talk on driverless cars at the Brookings Institution, a think tank in Washington DC, turned into a discussion about how autonomous vehicles would behave in a crisis. What if a vehicle's efforts to save its own passengers by, say, slamming on the brakes risked a pile-up with the vehicles behind it? Or what if an autonomous car swerved to avoid a child, but risked hitting someone else nearby?
“We see more and more autonomous or automated systems in our daily life,” said panel participant Karl-Josef Kuhn, an engineer with Siemens in Munich, Germany. But, he asked, how can researchers equip a robot to react when it is “making the decision between two bad choices”?
The pace of development is such that these difficulties will soon affect health-care robots, military drones and other autonomous devices capable of making decisions that could help or harm humans. Researchers are increasingly convinced that society's acceptance of such machines will depend on whether they can be programmed to act in ways that maximize safety, fit in with social norms and encourage trust. “We need some serious progress to figure out what's relevant for artificial intelligence to reason successfully in ethical situations,” says Marcello Guarini, a philosopher at the University of Windsor in Canada.
Several projects are tackling this challenge, including initiatives funded by the US Office of Naval Research and the UK government's engineering-funding council. They must address tough scientific questions, such as what kind of intelligence, and how much, is needed for ethical decision-making, and how that can be translated into instructions for a machine. Computer scientists, roboticists, ethicists and philosophers are all pitching in.
“If you had asked me five years ago whether we could make ethical robots, I would have said no,” says Alan Winfield, a roboticist at the Bristol Robotics Laboratory, UK. “Now I don't think it's such a crazy idea.”
Learning machines

In one frequently cited experiment, a commercial toy robot called Nao was programmed to remind people to take medicine.
“On the face of it, this sounds simple,” says Susan Leigh Anderson, a philosopher at the University of Connecticut in Stamford who did the work with her husband, computer scientist Michael Anderson of the University of Hartford in Connecticut. “But even in this kind of limited task, there are nontrivial ethics questions involved.” For example, how should Nao proceed if a patient refuses her medication? Allowing her to skip a dose could cause harm. But insisting that she take it would impinge on her autonomy.
To teach Nao to navigate such quandaries, the Andersons gave it examples of cases in which bioethicists had resolved conflicts involving autonomy, harm and benefit to a patient. Learning algorithms then sorted through the cases until they found patterns that could guide the robot in new situations [1].
With this kind of 'machine learning', a robot can extract useful knowledge even from ambiguous inputs (see go.nature.com/2r7nav). The approach would, in theory, help the robot to get better at ethical decision-making as it encounters more situations. But many fear that the advantages come at a price. The principles that emerge are not written into the computer code, so “you have no way of knowing why a program could come up with a particular rule telling it something is ethically 'correct' or not”, says Jerry Kaplan, who teaches artificial intelligence and ethics at Stanford University in California.
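A minimal sketch of this style of learning (not the Andersons' actual system, which reasoned over prima facie duties): resolved cases become labelled feature vectors, and an off-the-shelf classifier generalizes from them.

# Minimal sketch, not the Andersons' system: learn from resolved cases.
from sklearn.tree import DecisionTreeClassifier

# Features (invented scores in -2..2): effect on the patient's autonomy,
# harm prevented, benefit. Label: 1 = insist/notify, 0 = accept refusal.
cases = [
    ([-1, 2, 1], 1),   # skipping the dose causes real harm: insist
    ([-1, 0, 1], 0),   # little at stake: respect the refusal
    ([-2, 2, 2], 1),
    ([ 0, 0, 0], 0),
]
X = [features for features, _ in cases]
y = [label for _, label in cases]
model = DecisionTreeClassifier().fit(X, y)

print(model.predict([[-1, 1, 1]]))  # a new situation for the robot
# The article's caveat: the learned rule lives inside the model, so it
# can be hard to say *why* a choice was judged ethically correct.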
Getting around this problem calls for a different tactic, many engineers say; most are attempting it by creating programs with explicitly formulated rules, rather than asking a robot to derive its own. Last year, Winfield published the results [2] of an experiment that asked: what is the simplest set of rules that would allow a machine to rescue someone in danger of falling into a hole? Most obviously, Winfield realized, the robot needed the ability to sense its surroundings — to recognize the position of the hole and the person, as well as its own position relative to both. But the robot also needed rules allowing it to anticipate the possible effects of its own actions.
Winfield's experiment used hockey-puck-sized robots moving on a surface. He designated some of them 'H-robots' to represent humans, and one — representing the ethical machine — the 'A-robot', named after Asimov. Winfield programmed the A-robot with a rule analogous to Asimov's first law: if it perceived an H-robot in danger of falling into a hole, it must move into the H-robot's path to save it.
Winfield put the robots through dozens of test runs, and found that the A-robot saved its charge each time. But then, to see what the allow-no-harm rule could accomplish in the face of a moral dilemma, he presented the A-robot with two H-robots wandering into danger simultaneously. Now how would it behave?
The results suggested that even a minimally ethical robot could be useful, says Winfield: the A-robot frequently managed to save one 'human', usually by moving first to the one that was slightly closer to it. Sometimes, by moving fast, it even managed to save both. But the experiment also showed the limits of minimalism. In almost half of the trials, the A-robot went into a helpless dither and let both 'humans' perish. To fix that would require extra rules about how to make such choices. If one H-robot were an adult and another were a child, for example, which should the A-robot save first? On matters of judgement like these, not even humans always agree. And often, as Kaplan points out, “we don't know how to codify what the explicit rules should be, and they are necessarily incomplete”.
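A toy re-creation of the rule (our geometry and thresholds, not Winfield's code) shows both the rescue behaviour and how dithering can arise with two endangered 'humans':

# Toy version of the A-robot's single rule: head for the endangered
# H-robot. Thresholds and geometry are invented for illustration.
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def a_robot_step(a_pos, h_robots, hole, danger_radius=2.0, step=0.5):
    endangered = [h for h in h_robots if dist(h, hole) < danger_radius]
    if not endangered:
        return a_pos
    # With two endangered H-robots, the nearest target can alternate
    # from step to step -- the "dither" observed in the experiment.
    target = min(endangered, key=lambda h: dist(a_pos, h))
    dx, dy = target[0] - a_pos[0], target[1] - a_pos[1]
    norm = math.hypot(dx, dy) or 1.0
    return (a_pos[0] + step * dx / norm, a_pos[1] + step * dy / norm)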
Advocates argue that the rule-based approach has one major virtue: it is always clear why the machine makes the choice that it does, because its designers set the rules. That is a crucial concern for the US military, for which autonomous systems are a key strategic goal. Whether machines assist soldiers or carry out potentially lethal missions, “the last thing you want is to send an autonomous robot on a military mission and have it work out what ethical rules it should follow in the middle of things”, says Ronald Arkin, who works on robot ethics software at Georgia Institute of Technology in Atlanta. If a robot had the choice of saving a soldier or going after an enemy combatant, it would be important to know in advance what it would do.
With support from the US defence department, Arkin is designing a program to ensure that a military robot would operate according to international laws of engagement. A set of algorithms called an ethical governor computes whether an action such as shooting a missile is permissible, and allows it to proceed only if the answer is 'yes'.
In a virtual test of the ethical governor, a simulation of an unmanned autonomous vehicle was given a mission to strike enemy targets — but was not allowed to do so if there were buildings with civilians nearby. Given scenarios that varied the location of the vehicle relative to an attack zone and civilian complexes such as hospitals and residential buildings, the algorithms decided when it would be permissible for the autonomous vehicle to accomplish its mission [3].
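The published governor is far more elaborate, but its gating logic can be caricatured in a few lines (all field names and thresholds here are hypothetical):

# Caricature of an "ethical governor": a lethal action proceeds only
# if every constraint answers "yes". Fields and thresholds invented.
def governor_permits(action):
    constraints = [
        action["target_is_military"],                 # discrimination
        action["distance_to_civilians_m"] > 500,      # standoff from civilian buildings
        action["expected_collateral"] <= action["military_value"],  # proportionality
    ]
    return all(constraints)

strike = {"target_is_military": True,
          "distance_to_civilians_m": 320,   # hospital nearby
          "expected_collateral": 1,
          "military_value": 5}
print(governor_permits(strike))  # False: the strike is withheld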
Autonomous, militarized robots strike many people as dangerous — and there have been innumerable debates about whether they should be allowed. But Arkin argues that such machines could be better than human soldiers in some situations, if they are programmed never to break rules of combat that humans might flout.
Computer scientists working on rigorously programmed machine ethics today favour code that uses logical statements, such as 'If a statement is true, move forward; if it is false, do not move.' Logic is the ideal choice for encoding machine ethics, argues Luís Moniz Pereira, a computer scientist at the Nova Laboratory for Computer Science and Informatics in Lisbon. “Logic is how we reason and come up with our ethical choices,” he says.
Crafting instructions capable of the logical steps that go into making ethical decisions is a challenge. For example, Pereira notes, the logical languages used by computer programs have trouble coming to conclusions about hypothetical scenarios, but such counterfactuals are crucial in resolving certain ethical dilemmas.
One of these is illustrated by the trolley problem, in which you imagine a runaway railway trolley is about to kill five innocent people who are on the tracks. You can save them only if you pull a lever that diverts the train onto another track, where it will hit and kill an innocent bystander. What do you do? In another set-up, the only way to stop the trolley is to push the bystander onto the tracks.
People often answer that it is all right to stop the trolley by hitting the lever, but viscerally reject the idea of pushing the bystander. The basic intuition, known to philosophers as the doctrine of double effect, is that deliberately inflicting harm is wrong, even if it leads to good. However, inflicting harm might be acceptable if it is not deliberate, but simply a consequence of doing good — as when the bystander simply happens to be on the tracks.
This is a very difficult line of analysis for a decision-making program. To begin with, the program must be able to see two different futures: one in which a trolley kills five people, and another in which it hits one. The program must then ask whether the action required to save the five is impermissible because it causes harm, or permissible because the harm is only a side effect of causing good.
To find out, the program must be able to tell what would happen if it chose not to push the bystander or pull the lever — to account for counterfactuals. “It would be as if a program was constantly debugging itself,” says Pereira — “finding where in a line of code something could be changed, and predicting what the outcome of the change would be.” Pereira and Ari Saptawijaya, a computer scientist at the University of Indonesia in Depok, have written a logic program [4] that can successfully make a decision based on the doctrine of double effect, as well as the more sophisticated doctrine of triple effect, which takes into account whether the harm caused is the intended result of the action, or simply necessary to it.
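Pereira and Saptawijaya work in logic programming; a rough Python paraphrase of the double-effect test (simplified, and not their code) might read:

# Simplified paraphrase of the doctrine of double effect: harm used as
# the *means* to the good is forbidden; proportionate harm that is
# merely a side effect may be permitted.
def permissible(action):
    if action["harm"] == 0:
        return True
    if action["harm_is_means"]:          # harm deliberately inflicted
        return False                     # to bring about the good
    return action["good"] > action["harm"]  # side effect: proportionality check

divert = {"good": 5, "harm": 1, "harm_is_means": False}  # pull the lever
push   = {"good": 5, "harm": 1, "harm_is_means": True}   # push the bystander
print(permissible(divert), permissible(push))  # True False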
Humans, morals, machines

How ethical robots are built could have major consequences for the future of robotics, researchers say. Michael Fisher, a computer scientist at the University of Liverpool, UK, thinks that rule-bound systems could be reassuring to the public. “People are going to be scared of robots if they're not sure what it's doing,” he says. “But if we can analyse and prove the reasons for their actions, we are more likely to surmount that trust issue.” He is working with Winfield and others on a government-funded project to verify that the outcomes of ethical machine programs are always knowable.
By contrast, the machine-learning approach promises robots that can learn from experience, which could ultimately make them more flexible and useful than their more rigidly programmed counterparts. Many roboticists say that the best way forward will be a combination of approaches. “It's a bit like psychotherapy,” says Pereira. “You probably don't just use one theory.” The challenge — still unresolved — is to combine the approaches in a workable way.
These issues may very soon come up in the fast-moving field of autonomous transport. Already, Google's driverless cars are zipping across parts of California (see Nature 518, 20–23; 2015). In May, autonomous trucks from German car-maker Daimler began driving themselves across the Nevada desert. Engineers are thinking hard about how to program cars to both obey rules and adapt to situations on the road. “Up until now we've been trying to do things with robots that humans are bad at,” such as maintaining attention on long drives or being quick on the brakes when the unexpected occurs, says Bernhard Weidemann, a spokesperson for Daimler in Stuttgart. “Going forward, we will have to try to program things that come more naturally to humans, but not to machines.”


http://www.nature.com/news/machine-ethics-the-robot-s-dilemma-1.17881

Robotics: Ethics of artificial intelligence


Stuart Russell: Take a stand on AI weapons
Sabine Hauert: Shape the debate, don't shy from it
Russ Altman: Distribute AI benefits fairly
Manuela Veloso: Embrace a robot–human world

Stuart Russell: Take a stand on AI weapons

Professor of computer science, University of California, Berkeley
The artificial intelligence (AI) and robotics communities face an important ethical decision: whether to support or oppose the development of lethal autonomous weapons systems (LAWS).
Technologies have reached a point at which the deployment of such systems is — practically if not legally — feasible within years, not decades. The stakes are high: LAWS have been described as the third revolution in warfare, after gunpowder and nuclear arms.
Autonomous weapons systems select and engage targets without human intervention; they become lethal when those targets include humans. LAWS might include, for example, armed quadcopters that can search for and eliminate enemy combatants in a city, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions.
Existing AI and robotics components can provide physical platforms, perception, motor control, navigation, mapping, tactical decision-making and long-term planning. They just need to be combined. For example, the technology already demonstrated for self-driving cars, together with the human-like tactical control learned by DeepMind's DQN system, could support urban search-and-destroy missions.
Two US Defense Advanced Research Projects Agency (DARPA) programmes foreshadow planned uses of LAWS: Fast Lightweight Autonomy (FLA) and Collaborative Operations in Denied Environment (CODE). The FLA project will program tiny rotorcraft to manoeuvre unaided at high speed in urban areas and inside buildings. CODE aims to develop teams of autonomous aerial vehicles carrying out “all steps of a strike mission — find, fix, track, target, engage, assess” in situations in which enemy signal-jamming makes communication with a human commander impossible. Other countries may be pursuing clandestine programmes with similar goals.
International humanitarian law — which governs attacks on humans in times of war — has no specific provisions for such autonomy, but may still be applicable. The 1949 Geneva Convention on humane conduct in war requires any attack to satisfy three criteria: military necessity; discrimination between combatants and non-combatants; and proportionality between the value of the military objective and the potential for collateral damage. (Also relevant is the Martens Clause, added in 1977, which bans weapons that violate the “principles of humanity and the dictates of public conscience.”) These are subjective judgments that are difficult or impossible for current AI systems to satisfy.
The United Nations has held a series of meetings on LAWS under the auspices of the Convention on Certain Conventional Weapons (CCW) in Geneva, Switzerland. Within a few years, the process could result in an international treaty limiting or banning autonomous weapons, as happened with blinding laser weapons in 1995; or it could leave in place the status quo, leading inevitably to an arms race.
As an AI specialist, I was asked to provide expert testimony for the third major meeting under the CCW, held in April, and heard the statements made by nations and non-governmental organizations. Several countries pressed for an immediate ban. Germany said that it “will not accept that the decision over life and death is taken solely by an autonomous system”; Japan stated that it “has no plan to develop robots with humans out of the loop, which may be capable of committing murder” (see go.nature.com/fwric1).
The United States, the United Kingdom and Israel — the three countries leading the development of LAWS technology — suggested that a treaty is unnecessary because they already have internal weapons review processes that ensure compliance with international law.
Almost all states that are party to the CCW agree on the need for 'meaningful human control' over the targeting and engagement decisions made by robotic weapons. Unfortunately, the meaning of 'meaningful' is still to be determined.
The debate has many facets. Some argue that the superior effectiveness and selectivity of autonomous weapons can minimize civilian casualties by targeting only combatants. Others insist that LAWS will lower the threshold for going to war by making it possible to attack an enemy while incurring no immediate risk; or that they will enable terrorists and non-state-aligned combatants to inflict catastrophic damage on civilian populations.
LAWS could violate fundamental principles of human dignity by allowing machines to choose whom to kill — for example, they might be tasked to eliminate anyone exhibiting 'threatening behaviour'. The potential for LAWS technologies to bleed over into peacetime policing functions is evident to human-rights organizations and drone manufacturers.
In my view, the overriding concern should be the probable endpoint of this technological trajectory. The capabilities of autonomous weapons will be limited more by the laws of physics — for example, by constraints on range, speed and payload — than by any deficiencies in the AI systems that control them. For instance, as flying robots become smaller, their manoeuvrability increases and their ability to be targeted decreases. They have a shorter range, yet they must be large enough to carry a lethal payload — perhaps a one-gram shaped charge to puncture the human cranium. Despite the limits imposed by physics, one can expect platforms deployed in the millions, the agility and lethality of which will leave humans utterly defenceless. This is not a desirable future.
The AI and robotics science communities, represented by their professional societies, are obliged to take a position, just as physicists have done on the use of nuclear weapons, chemists on the use of chemical agents and biologists on the use of disease agents in warfare. Debates should be organized at scientific meetings; arguments studied by ethics committees; position papers written for society publications; and votes taken by society members. Doing nothing is a vote in favour of continued development and deployment.
Sabine Hauert: Shape the debate, don't shy from it

Lecturer in robotics, University of Bristol
Irked by hyped headlines that foster fear or overinflate expectations of robotics and artificial intelligence (AI), some researchers have stopped communicating with the media or the public altogether.
But we must not disengage. The public includes taxpayers, policy-makers, investors and those who could benefit from the technology. They hear a mostly one-sided discussion that leaves them worried that robots will take their jobs, fearful that AI poses an existential threat, and wondering whether laws should be passed to keep hypothetical technology 'under control'. My colleagues and I spend dinner parties explaining that we are not evil but instead have been working for years to develop systems that could help the elderly, improve health care, make jobs safer and more efficient, and allow us to explore space or beneath the oceans.
Experts need to become the messengers. Through social media, researchers have a public platform that they should use to drive a balanced discussion. We can talk about the latest developments and limitations, provide the big picture and demystify the technology. I have used social media to crowd-source designs for swarming nanobots to treat cancer. And I found my first PhD student through his nanomedicine blog.
The AI and robotics community needs thought leaders who can engage with prominent commentators such as physicist Stephen Hawking and entrepreneur–inventor Elon Musk and set the agenda at international meetings such as the World Economic Forum in Davos, Switzerland. Public engagement also drives funding. Crowdfunding for JIBO, a personal robot for the home developed by Cynthia Breazeal, at the Massachusetts Institute of Technology (MIT) in Cambridge, raised more than US$2.2 million.
There are hurdles. First, many researchers have never tweeted, blogged or made a YouTube video. Second, outreach is 'yet another thing to do', and time is limited. Third, it can take years to build a social-media following that makes the effort worthwhile. And fourth, engagement work is rarely valued in research assessments, or regarded seriously by tenure committees.
Training, support and incentives are needed. All three are provided by Robohub.org, of which I am co-founder and president. Launched in 2012, Robohub is dedicated to connecting the robotics community to the public. We provide crash courses in science communication at major AI and robotics conferences on how to use social media efficiently and effectively. We invite professional science communicators and journalists to help researchers to prepare an article about their work. The communicators explain how to shape messages to make them clear and concise and avoid pitfalls, but we make sure the researcher drives the story and controls the end result. We also bring video cameras and ask researchers who are presenting at conferences to pitch their work to the public in five minutes. The results are uploaded to YouTube. We have built a portal for disseminating blogs and tweets, amplifying their reach to tens of thousands of followers.
I can list all the benefits of science communication, but the incentive must come from funding agencies and institutes. Citations cannot be the only measure of success for grants and academic progression; we must also value shares, views, comments or likes. MIT robotics researcher Rodney Brooks's classic 1986 paper on the 'subsumption architecture', a bio-inspired way to program robots to react to their environment, gathered nearly 10,000 citations in 30 years (R. Brooks IEEE J. Robot. Automat. 2, 14–23; 1986). A video of Sawyer, a robot developed by Brooks's company Rethink Robotics, received more than 60,000 views in one month (see go.nature.com/jqwfmz). Which has had more impact on today's public discourse?
Governments, research institutes, business-development agencies, and research and industry associations do welcome and fund outreach and science-communication efforts. But each project develops its own strategy, resulting in pockets of communication that have little reach.
In my view, AI and robotics stakeholders worldwide should pool a small portion of their budgets (say 0.1%) to bring together these disjointed communications and enable the field to speak more loudly. Special-interest groups, such as the Small Unmanned Aerial Vehicles Coalition that is promoting a US market for commercial drones, are pushing the interests of major corporations to regulators. There are few concerted efforts to promote robotics and AI research in the public sphere. This balance is badly needed.
A common communications strategy will empower a new generation of roboticists that is deeply connected to the public and able to hold its own in discussions. This is essential if we are to counter media hype and prevent misconceptions from driving perception, policy and funding decisions.
Russ Altman: Distribute AI benefits fairly

Professor of bioengineering, genetics, medicine and computer science, Stanford University
Artificial intelligence (AI) has astounding potential to accelerate scientific discovery in biology and medicine, and to transform health care. AI systems promise to help make sense of several new types of data: measurements from the 'omics' such as genomics, proteomics and metabolomics; electronic health records; and digital-sensor monitoring of health signs.
Clustering analyses can define new syndromes — separating diseases that were thought to be the same and unifying others that have the same underlying defects. Pattern-recognition technologies may match disease states to optimal treatments. For example, my colleagues and I are identifying groups of patients who are likely to respond to drugs that regulate the immune system on the basis of clinical and transcriptomic features.
In consultations, physicians might be able to display data from a 'virtual cohort' of patients who are similar to the one sitting next to them and use it to weigh up diagnoses, treatment options and the statistics of outcomes. They could make medical decisions interactively with such a system or use simulations to predict outcomes on the basis of the patient's data and that of the virtual cohort.
I have two concerns. First, AI technologies could exacerbate existing health-care disparities and create new ones unless they are implemented in a way that allows all patients to benefit. In the United States, for example, people without jobs experience diverse levels of care. A two-tiered system in which only special groups or those who can pay — and not the poor — receive the benefits of advanced decision-making systems would be unjust and unfair. It is the joint responsibility of the government and those who develop the technology and support the research to ensure that AI technologies are distributed equally.
Second, I worry about clinicians' ability to understand and explain the output of high-performance AI systems. Most health-care providers will not accept a complex treatment recommendation from a decision-support system without a clear description of how and why it was reached.
Unfortunately, the better the AI system, the harder it often is to explain. The features that contribute to probability-based assessments such as Bayesian analyses are straightforward to present; deep-learning networks, less so.
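A toy example of the contrast (the likelihood ratios are invented): in a naive-Bayes-style model, each feature's contribution to the recommendation is an explicit number that can be read out to a clinician.

# Toy naive-Bayes-style readout with invented numbers: each feature's
# contribution to the log-odds is explicit and reportable.
import math

likelihood_ratios = {"fever": 3.0, "marker_A_high": 5.0, "age_over_60": 1.5}
odds = 0.2  # hypothetical prior odds of disease

for feature, lr in likelihood_ratios.items():
    odds *= lr
    print(f"{feature}: {math.log(lr):+.2f} to the log-odds")
print(f"posterior probability = {odds / (1 + odds):.2f}")
# A deep network yields the final number without any such itemization.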
AI researchers who create the infrastructure and technical capabilities for these systems need to engage doctors, nurses, patients and others to understand how they will be used, and used fairly.
Manuela Veloso: Embrace a robot–human world

Professor of computer science, Carnegie Mellon University
Humans seamlessly integrate perception, cognition and action. We use our sensors to assess the state of the world, our brains to think and choose actions to achieve objectives, and our bodies to execute those actions. My research team is trying to build robots that are capable of doing the same — with artificial sensors (cameras, microphones and scanners), algorithms and actuators, which control the mechanisms.
But autonomous robots and humans differ greatly in their abilities. Robots may always have perceptual, cognitive and actuation limitations. They might not be able to fully perceive a scene, recognize or manipulate any object, understand all spoken or written language, or navigate in any terrain. I think that robots will complement humans, not supplant them. But robots need to know when to ask for help and how to express their inner workings.
To learn more about how robots and humans work together, for the past three years we have shared our laboratory and buildings with four collaborative robots, or CoBots, which we developed. The robots look a bit like mechanical lecterns. They have omnidirectional wheels that enable them to steer smoothly around obstacles; camera and lidar systems to provide depth vision; computers for processing; screens for communication; and a basket to carry things in.
Early on, we realized how challenging real environments are for robots. The CoBots cannot recognize every object they encounter; lacking arms or hands they struggle to open doors, pick things up or manipulate them. Although they can use speech to communicate, they may not recognize or understand the meaning of words spoken in response.
We introduced the concept of 'symbiotic autonomy' to enable robots to ask for help from humans or from the Internet. Now, robots and humans in our building aid one another in overcoming their respective limitations.
CoBots escort visitors through the building or carry objects between locations, gathering useful information along the way. For example, they can generate accurate maps of spaces, showing temperature, humidity, noise and light levels, or WiFi signal strength. We help the robots to open doors, press lift buttons, pick up objects and follow dialogue by giving clarifications.
There are still hurdles to overcome to enable robots and humans to co-exist safely and productively. My team is researching how people and robots can communicate more easily through language and gestures, and how robots and people can better match their representations of objects, tasks and goals.
We are also studying how robot appearance enhances interactions, in particular how indicator lights may reveal more of a robot's inner state to humans. For instance, if the robot is busy, its lights may be yellow, but when it is available they are green.
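The mapping itself is simple; the research question is which states are worth exposing. A tiny sketch using the colours mentioned (the default is our assumption):

# Expose inner state as an indicator colour a bystander can read.
STATUS_LIGHT = {"busy": "yellow", "available": "green"}

def indicator(state: str) -> str:
    return STATUS_LIGHT.get(state, "off")  # default "off" is our assumption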
Although we have a way to go, I believe that the future will be a positive one if humans and robots can help and complement each other.


http://www.nature.com/news/robotics-ethics-of-artificial-intelligence-1.17611

Haven't had a chance to read these yet nor this other site:

Will check them out later.
 

Dalyth

Transhumanism again

The problems with transhumanism

http://rationallyspeaking.blogspot.com/2009/07/problems-with-transhumanism.html


This blog post addresses Munkittrick's optimistic views on technology and Transhumanism. The writer takes a critical stance on the feasibility of living forever (arguing that death follows from thermodynamics and is thus unavoidable) and raises concerns about resource management (the more people who live longer, the more resources will be consumed). In the end, the author notes that they are not against Transhumanism per se, but they do believe that "techno-optimists" are getting ahead of themselves with fantasy instead of paying real attention to the challenges that Transhumanism raises.

I have pondered writing about the transhumanism movement for a while, and the opportunity has finally landed on my desktop when I read a brief article by Kyle Munkittrick of the Institute for Emerging Ethics & Technologies. The article is in the form of a FAQ expressly addressing the question of whether aging is a moral good, and in it Munkittrick briefly explains and (thinks that he) refutes some of the standard arguments against transhumanism. Let’s take a look.

To begin with, what is transhumanism? It is a type of futurist philosophy aimed at transforming the human species by means of biotechnologies. Transhumanists think of disease, aging and even death as both undesirable and unnecessary, and think that technology will eventually overcome them all. I must confess that — despite being a scientist always fascinated by new technologies (hey, I am writing this on a MacBook Pro, I carry an iPhone with me at all times, and I read books on the Kindle!) — I have always been skeptical of utopias of any kind, not excluding the technological variety. Which is why I am using Munkittrick’s short essay as a way to clarify my own thoughts about transhumanism.

Munkittrick begins his own response to critics of transhumanism by stating that if anyone has a problem with technology addressing the issues of disease, aging and death then “by this logic no medical intervention or care should be allowed after the age of 30.” This, of course, is a classic logical fallacy known as a false dichotomy. Munkittrick would like his readers to take one of two stands: either no technological improvement of our lives at all, or accept whatever technology can do for you. But this is rather silly, as there are plenty of other, more reasonable, intermediate positions. It is perfectly legitimate to pick and choose which technologies we want (I vote against the atomic bomb, for instance, but in favor of nuclear energy, if it can be pursued in an environmentally sound way). Moreover, it is perfectly acceptable — indeed necessary — for individuals and society to have a thorough discussion about what limits are or are not acceptable when it comes to the ethical issues raised by the use of technologies (for instance, I do not wish to be kept artificially alive at all costs in case of irreparable damage to my brain, even if it is technologically feasible; moreover, I think it immoral that people are too often forced to spend huge amounts of money for “health care” during the last few weeks or months of their lives).

Munkittrick continues: “Transhumanists are trying to escape aging — and its inevitable symptom, death — because we actually acknowledge it for what it is: a horror.” Well, I personally agree with the general sentiment. As Woody Allen famously put it, I don’t want to be immortal through my work, I want to be immortal through not dying. But to construe death as a “symptom” of the disease of aging is far-fetched, and biologically absurd. Aging and death are natural end results of the lives of multicellular organisms, and in a deep sense they are the inevitable outcome of the principles of thermodynamics (which means that we can tinker with and delay them, but not avoid them).

There are several problems with the pursuit of immortality, one of which is particularly obvious. If we all live (much, much) longer, we all consume more resources and have more children, leading to even more overpopulation and environmental degradation. Of course, techno-optimists the world over have a ready answer for this: more technology. To quote Munkittrick again: “Malthus didn’t understand that technology improves at an exponential rate, so even though unaided food production is arithmetic, the second Agricultural Revolution allowed us to feed more people by an order of magnitude.” Yes, and how do we explain that more people than ever are starving across the world? Technology does not indefinitely improve exponentially, and it must at some point or another crash against the limits imposed by a finite world. We simply don’t have space, water and other prime materials to feed a forever exponentially increasing population. Arguably, it is precisely technology that created the problem of overpopulation, as the original agricultural revolution (the one that happened a few thousand years ago) led to cycles of boom and bust and to the rapid spread of disease in crowded cities. This may be an acceptable tradeoff (I certainly don’t wish to go back to a hunter-gatherer society), but it does show that technology is not an unqualified good.

Yet, the transhumanist optimist can’t be stopped. Here is more from Munkittrick: “One of the key goals of transhumanism is to get the most advanced and useful technology to developing countries, allowing them to skip industrialization (and the pollution/waste associated) and go straight into late capitalist, post-industrial society, where population growth is negative and mortality rates extremely low.” Besides the fact that with the current global economic meltdown a late capitalist society doesn’t really sound that appealing, do we have any evidence that this is happening, or even possible? The current examples of such transition come from countries like India, China, and Brazil, and those don’t look at all encouraging, as the result seems to be increasing economic disparity and massive amounts of additional pollution. How exactly are transhumanists planning on skipping industrialization?

As for post-industrial societies having negative population growth, this is true of only a very few countries, and certainly not of one of the most massively polluting of them all, the United States. It is true that birth rates are dramatically lower in post-industrial countries in general, but this is the result of education not technology per se. It happens when women realize that they can spend their lives doing something other than being perennial baby factories. Despite this, the world population is still going up, and environmental quality is still dropping dramatically. Technology can surely help us, but it is also (perhaps mostly) a matter of ethical choices: the problem will be seriously addressed only when people abandon the naive and rather dangerous idea that technology can solve all our problems, so that we can continue to indulge in whatever excesses we like.

One last point: Munkittrick depicts what he thinks is an idyllic scenario of people living to 150 (this may not be possible without significant alterations of the human genome, which of course raises additional questions of both feasibility and ethics). He says that “any technology that would extend life beyond the current average of 70-100 would do so by retarding aging as a whole, that is, the degradation that begins to occur after about age 27. Maturation would occur at the same rate, peaking between 22 and 26 depending on the person, but after that preventative medicine and repair techniques would slow aging, resulting in a much longer ‘prime’ age, say extending youthful adulthood (what we think of now as 20’s and 30’s) well into the 50’s and perhaps 60’s. Because these techniques will be far from perfect, aging will still occur to some degree. Like youthful adulthood, middle-age would presumably begin much later and last much longer. So let’s say a person reaches genuine old age at 100, with all the problems that reduce one from ‘thriving’ to surviving, leaving them 50 years of old age instead of 20 or 10.” Hmm, I like the first part (extending my prime through my ‘60s), but the latter seems ghastly. Both from a personal and a societal perspective, fifty years of old age are a hefty price to pay, one that would be psychologically devastating and would further bankrupt our resources. Now if we could consider euthanasia for the really old, non-functional and suffering people... but that’s another discussion.

I do not wish to leave the reader with the impression that I am a Luddite, far from it. But I do think that techno-optimists the world over really ought to fantasize less and pay much more attention to the complexities not just of the logistics, but particularly of the ethics implied by their dreams. Better and longer lives are certainly a worthy goal (though I personally would put the emphasis on quality rather than quantity), but this doesn’t license a mad pursuit of immortality. Besides, true immortality (the ultimate goal, if you think of death as a “symptom”) would be unbearable for any sentient being: imagine having so much time on your hands that eventually there is nothing new for you to do. You would be forced to play the same games, or watch the same movies, or take the same vacation, over and over and over and over. Or you might kill time by reading articles like the one by Munkittrick literally an infinite number of times. Hell may be other people, as Sartre said, but at least at the moment we don’t have to live in Hell forever.

Facing the Challenges of Transhumanism: Philosophical, Religious, and Ethical Considerations

By Hava Tirosh-Samuelson

This essay paints a pretty thorough history of Transhumanism and situates it in the broader context in which it is emerging, i.e., the way that interdisciplinary boundaries are becoming less defined. It then critically discusses how the field relates to: human nature (some notable names: Bostrom, Gregory Stock, Raymond Kurzweil, Kevin Warwick, Steven Pinker, etc.); human happiness (this section focuses on religion and philosophy, and critiques the Transhumanist tendency to follow a hedonistic philosophy of happiness, as well as the potential problems caused by the assumption that "technology will allow us to produce pleasant sensations all the time"); aging and death (this section takes issue with the idea that the human body is similar enough to a machine to be treated in a similar manner, and also asks why human life should be extended indefinitely); and the Eschatology of Cybernetic Immortality (this discusses concerns about scanning all the details of one's mind, porting it to a computer, and achieving virtual immortality). In conclusion, it emphasizes the need for critical thinking as Transhumanists attempt to achieve their ideals.


http://www.metanexus.net/essay/faci...sophical-religious-and-ethical-considerations


Introduction
What is Transhumanism?
The term ‘transhumanism’ denotes a relatively young and still changing ideology that posits a new vision of humanity as a result of the confluence of advancements in the life sciences, neurosciences, genomics, robotics, informatics, and nanotechnology. These developments include new kinds of cognitive tools that combine artificial intelligence with interface technology, molecular nanotechnology, extension of the human life span, genetic enhancement of human mental and physical capacities, combating diseases and slowing down the process of aging, and exercising control over desires, moods, and mental states. Those who enthusiastically promote these developments in biotechnology and bioengineering maintain that the accelerating pace of technological development and scientific understanding will usher in a new age in the history of the human species during which people will live longer, will possess new physical and cognitive abilities, and will be liberated from the suffering and pain due to aging and disease. In the transhuman age, humans will no longer be controlled by nature; instead they will be the controllers of nature.
The term ‘transhumanism’ was coined by Julian Huxley in 1957,2 although the meaning of the term was quite different from the way it is now being used. Today the term denotes a way of thinking about the future based on the premise that the human species in its current form does not represent the end of our development but rather a comparatively early phase. Some of the ideas and characteristics of transhumanism today can be traced to the 1920s and early 1930s in the works of J.B.S. Haldane, J.D. Bernal and Aldous Huxley. The horrors of WW II invalidated the goal of creating a new and better world through a centrally imposed vision, and they also discredited the eugenics movement of the 1920s.
In the 1960s, however, more optimistic futuristic scenarios were articulated by science fiction writers such as Arthur C. Clarke, Isaac Asimov, Robert Heinlein, Stanislaw Lem, and later Bruce Sterling, Greg Egan, and Vernor Vinge, who speculated about the new, transhuman future. In the 1960s the futurist Fereidoun M. Esfandiary, who later changed his name to FM 2030 (the year of his 100th birthday), began to identify “transhumans” as persons who behave in a manner conducive to a posthuman future. In the late 1980s, philosopher Max More formalized a transhumanist doctrine, advocating the “Principles of Extropy” for continuously improving the human condition. At that time various organizations began to advocate life extension, cryonics, space colonization, and other scenarios while advances in biotechnology, neuroscience, and nanotechnology began to make their mark. Eventually Marvin Minsky, an eminent artificial intelligence researcher, articulated many of the themes of the transhumanist vision, and he was joined by other famous scientific visionaries such as Ray Kurzweil, Eric Drexler, Frank Tipler, and Hans Moravec.
In the late 1990s a group of transhumanist activists authored the “Transhumanist Declaration” stating various ethical positions related to the use of and planning for technological advances. In 1998 the World Transhumanist Association was founded by philosophers Nick Bostrom and David Pearce, and its membership today is about 4000 people world-wide with several geographically divided chapters and special-interest affiliates.3 Other contemporary organizations, such as the Extropy Institute, the Foresight Institute, the Immortality Institute, the Institute for Ethics and Emerging Technologies, and the Singularity Institute for Artificial Intelligence, also play a role in the transhumanist movement. In 2006 the World Transhumanist Association voted to recognize the Mormon Transhumanist Association as its first religious, special-interest affiliate. All of these organizational activities have been facilitated by the communication revolution of the 1980s and 1990s, with instant communication world-wide. The Internet is not just a means of communication for transhumanist ideas, but functions as part of the transhumanist eschatological vision. In short, transhumanism is the ideology that attempts to give coherence to a range of disparate ideas based on the technological advancements of the second half of the 20th century.
Transhumanism and Transdisciplinarity
It is instructive to note the relationship between the emergence of this new ideology and larger cultural trends that have contributed to the collapse of disciplinary boundaries. The division of knowledge into the natural sciences, the social sciences, and the humanities has a long history, but it was consolidated with the professionalization of the European universities (especially in Germany) during the 19th century.4 While the Enlightenment discourse of progress prevailed, each discipline expanded in its own domain, claiming for itself a distinctive methodology and ensconcing itself in its own institutional setting, the academic department. Knowledge in each discipline of the natural sciences, the social sciences and even the humanities expanded while Europe’s nation-states fought each other for control of resources in the presumably under-civilized world, which had to be saved from its own inadequacies and backwardness through enlightened modernization. This modernist self-understanding legitimized less than noble treatment of large sections of the world at least until the middle of the 20th century, when it finally collapsed as the horrors of World War II exposed the dark side of modern science and technology.
In the post-WW II world, postmodernism, post-colonialism, globalization, and the confluence of science and technology spelled the end of traditional disciplinary boundaries. Postmodernism has made us all more aware of the role that language plays in construing our perception of reality, our knowledge claims, and our self-justifying individual and collective narratives. Post-colonialism has made us conscious of the negative legacy of the colonial and imperial past, which destroyed many indigenous cultures in its relentless pursuit of progress. Globalization has given rise to multi-national corporations that trade commodities, services, and intellectual properties across national borders through instant communication systems. Nowadays, nation-states still exist and fight for their collective goals, but large-scale migrations (be they forced or voluntary) as well as the communication and transportation revolutions entail that people of diverse cultures and traditions constantly interface with each other. By the same token, in academic institutions departments still exist as a structural feature of the institutions of higher learning, but increasingly their disciplinary boundaries are becoming meaningless. At ASU in particular a deliberate attempt to break down disciplinary boundaries is under way with the commitment to create transdisciplinary schools, institutions, and centers. It is thus quite fitting that ASU was awarded the Templeton Research Lectures on transhumanism, since the vision of the transhuman mirrors the breakdown of disciplinary, political, and national boundaries. Transdisciplinarity, therefore, captures the new state of affairs in human knowledge as much as transhumanism captures the new human condition.

Engaging Transhumanism Critically
Like all ideological movements, transhumanism has diverse concerns and does not speak in one voice about all issues. However, we can identify several main themes: a view of evolving human nature, the emergence of enhanced humans who will exceed ordinary human physical and cognitive traits, a preoccupation with human well-being or happiness that can be perpetuated indefinitely, and a vision of cybernetic immortality. A brief explanation of these features is in order.
Transhumanism and Human Nature
The main feature of transhumanism is the claim that human nature is not fixed and that the future of humanity is malleable because of the “dramatic progress in technological capabilities.” It is technology that will enable humans to transform themselves gradually into persons whose capacities will exceed what we today recognize by the term “human.” For the advocates of transhumanism such development is entirely welcome.
Nick Bostrom, who heads the Future of Humanity Institute at Oxford University, defines transhumanism’s view of human nature as follows:
“Transhumanists view human nature as a work-in-progress, a half-baked beginning that we can learn to remold in desirable ways. Current humanity need not be the endpoint of evolution. Transhumanists hope that by responsible use of science, technology, and other rational means, we shall eventually manage to become posthuman, beings with vastly greater capacities than present human beings have.”5
Bostrom’s view of human nature is shared by Gregory Stock, who heads the Center for the Study of Evolution and the Origin of Life at UCLA, and who similarly states that “the human species is moving out of its childhood.” According to Stock,
“it is time for us to acknowledge our growing powers and begin to take responsibility for them. We have little choice in this, for we have begun to play god in so many of life’s intimate realms that we probably could not turn back if we tried.”6
As Langdon Winner has noted, Stock equates “taking responsibility” with “recognizing the inevitability” of the development of a new species and advocates the use of genetic engineering to move the human organism beyond what he depicts as “its present decrepit condition.”7
The revolution in genetics and the advent of genetic engineering are the reason for the transhumanist vision. The unlimited prospects of biotechnologies, especially those related to reproduction, have led Lee Silver of Princeton University to hold that the ongoing developments in scientific laboratories will produce a revolution in society, an upheaval whose consequences will include a radical division of the species into superior and inferior genetic classes. Silver speculates that by the end of the third millennium, the two groups will have become “entirely separate species with no ability to cross-breed, and with as much romantic interest in each other as a current human would have for a chimpanzee.”8 Since the genetic revolution would enable us to treat defects in and make enhancements to our biological state with treatments in the form of new drugs and gene therapies, Silver concluded that “for better and worse, a new age is upon us.”9
A different approach to the notion of enhanced humans in the transhuman future comes from computer scientists and specialists in robotics and artificial intelligence such as Marvin Minsky, Raymond Kurzweil, Hans Moravec, and Kevin Warwick. They focus on the revolution in robotics and the merger between information technology and biology. By 2030, they claim, computing power should readily exceed that of non-enhanced human brains. As understanding of and control over biology increases, humans would also learn to integrate biological and information technology. Robotics would become so sophisticated that humans would transfer their experience and knowledge to one another or to non-biological substrates, enabling indefinite extension of life. The emergence of strong artificial intelligence, in non-biological or enhanced biological forms, would lead to the advent of “The Singularity.” Kurzweil defines The Singularity as “a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed.” In Kurzweil’s prediction, while The Singularity is neither utopian nor dystopian, it will change our nature and our world in unprecedented ways. In these future scenarios, “our technology will match and then vastly exceed the refinement and suppleness of what we regard as the best of human traits.”10 As a result, humans will no longer be the ultimate beneficiaries of technological development and will probably be destined for obsolescence.
In line with this type of futuristic thinking, Hans Moravec, for example, predicts the eventual replacement of humans by intelligent machines, analogous to ongoing innovations in the business world in which the quest for better services at lower prices has brought about the replacement of many human functions by intelligent machines. Eventually super-intelligent creatures, “Ex-Humans” or “Exes,” would grow weary of the limitations of Earth and would seek their fortunes elsewhere in the universe. When robots produce all food and manufactured goods, “humans may work to amuse other humans.”11
The critique of the transhumanist project of enhancing humans comes from the relatively new field of evolutionary psychology, whose practitioners argue that human nature is a reality that has emerged from the long evolutionary process. Steven Pinker, the best-known proponent of evolutionary psychology, defines human nature as “the endowment of cognitive and emotional faculties that is universal to healthy members of the Homo sapiens.”12 According to Pinker, all human beings share a universal human nature despite differences among individuals, races, and sexes, since these differences, too, are in our nature. Other evolutionary psychologists such as ASU’s Templeton Co-Fellows, Leda Cosmides and John Tooby,13 hold that the normal make-up of human minds is a result of evolution by natural selection. The major finding of the Cosmides-Tooby team is that the human mind “has evolved a specialized machinery that is designed to carry out specific tasks.” For this reason, Cosmides objects to germline genetic engineering, which would alter “what defines a human personality… [because it] affects the control system of the body and alters complex, exquisitely well-designed mental mechanisms that have been engineered by the evolutionary process to solve problems of survival and reproduction.”14 Human intervention in the evolutionary process may produce humans with greater-than-human intelligence, but we do not know what the unintended consequences of such intervention will be.
Given their understanding of human nature, evolutionary psychologists tend to be quite skeptical about and even critical of the transhumanist project. Tooby identifies two strands within transhumanism: the Enlightenment strand and the Romantic strand. The former is an extension of the 18th century Enlightenment Project and involves the attempts by science and technology to improve the human condition.15 Viewed from this perspective, transhumanism is not as novel as it seems, since all of us are already augmented beings if we take into consideration the many technological advancements over the centuries that have transformed who we are. Thus agriculture, writing, postal services, navigation, calculus, antibiotics, radio, television and photography, and computers are all technological innovations that have shaped who we are, and it is reasonable to assume that we will continue to be augmented by future technologies. So long as transhumanism simply advocates the 19th century commitment to progress and the alleviation of human suffering, it is hard to critique it.
However, transhumanism becomes much more problematic from an evolutionary perspective when it predicts a dramatic change in the human species due to technological enhancement. It is this claim that evolutionary psychology disputes, because of the way in which the human brain has evolved to perform certain tasks and because we are still largely ignorant about the operation of the brain. Tooby thus urges us to ask the simple but crucial question: “what is the goal of technological change?” and he correctly warns us to be careful not to confuse “evolution” with “progress.” Tooby notes that evolution is also capricious, cruel, and random, and that we are the effects of biochemical natural selection that has produced things we hate (for example, infanticide). The case of infanticide shows that human nature is real: the mind is not a blank slate but rather a computational structure that is full of mechanisms that have been selected over a long evolutionary process of adaptation. Therefore, Tooby encourages scientists to continue to map the mechanisms of the adapted mind and its specific programs before we naively embrace the projects of transhumanism. At present, we do not even know what it means to have a thought, and therefore the transhumanist vision of uploading the thought content of our personality should not be taken too seriously.
The difference between evolutionary psychologists such as Tooby, Cosmides and Pinker and the proponents of transhumanism is that the latter take more seriously the notion of “radical revolution” as a result of exponential growth of knowledge, or what Joel Garreau defines as “The Curve.” Garreau illustrates The Curve in human evolution as follows:
“From the formation of the Earth to the first multi-cellular organisms it took perhaps 4 billion years. Getting from tiny organisms to the first mammals took 400 million years. Getting from mammals to the first primitive monkeys took 150 million years. Getting from monkeys to hominid species such as chimpanzees took something like 30 million years. Getting from hominids to walking erect took 16 million years. Getting from walking erect to humans painting on cave walls at Altamira, Spain, took 4 million years. Getting from cave paintings to settlements to the invention of writing in Sumeria took about 4000 years. At that point, biological evolution was trumped by cultural evolution. We could now store, recall and widely share our thoughts and insights. Intelligence became less the property of isolated bands and more the sum of civilization. As humans increasingly became capable of acting collectively they could make advances in the arts, sciences and economics far beyond the capabilities of any individual, and the Curve started to take off. Four thousand years to the Roman Empire, 1800 years to the Industrial Age, 169 years to the moon and 20 more years to the Information Age, where we now find ourselves.”16
Garreau’s dramatic retelling of the story of human evolution coheres with the sense of urgency in the transhumanist vision, which foresees humanity on the verge of radical change. But Sander van der Leeuw, the head of the School of Human Evolution and Social Change at ASU, who reflects on transhumanism from the perspective of anthropology, provides a much less dramatic and less sensational view of the information revolution.17 Prof. van der Leeuw ponders why it took so long to “invent” and accelerate innovation, and why it went so fast once we reached that point. His account of human development highlights how humans were able to invent causal sequences, acquire dimensionality, conceptualize scale, composite tools, conceptualize space, learn to take risks, settle and develop agriculture, control time and space, intervene in nature, control motion and energy, create social organizations and trade. All of these developments involved technology, but the most important of all is the development of information, because it facilitated communication channels and the emergence of culture. Energy and matter are subject to the laws of conservation; they can be displaced but cannot be shared. In contrast, information systems are not subject to conservation and therefore can be shared. Societies are held together by a shared culture, shared ways of doing things. Biological systems transmit information genetically, but social systems transmit information through learning, and energy and information networks reciprocally interact.
According to Prof. van der Leeuw, human beings have already experienced deeper revolutions than the one posed by transhumanism, for example, the rise of towns. Van der Leeuw demonstrates his point by looking carefully at the case of Rome in terms of the relationship between energy, population, and innovations. In his assessment of transhumanism, it is our human nature that generated the challenges we face, and these challenges are no worse than the earlier transitions. The difference is one of a-priori and a-posteriori perspectives. We are still human and will continue to be so, but transhumanism raises the challenge of how to master the very process of innovation itself. Because information is now independent of its substratum, all fantasy worlds are potentially possible. The emergence of nanotechnology entails that we no longer have an intuitive grasp of the world. Transhumanism is so challenging because of the accelerating rate of change, the potentially negative impact on the environment, and most importantly the lack of focus on values. While van der Leeuw holds that we need more self-restraint (as illustrated in the debate about sustainability), he concedes that there is no answer to the question of how to adjudicate between conflicting values. The debate about values is the core element in facing the challenges of transhumanism. That aspect becomes clearer once we consider transhumanism as a pursuit of happiness.
Transhumanism and Human Happiness
Transhumanism is an outgrowth of modern humanism: it is secular, rationalist, individualistic, and concerned with the attainment of individual happiness. The pursuit of happiness, of course, has been a major concern of humanity and a major feature of western thought, at least since ancient Greek philosophy.18 Happiness, or human well-being and flourishing, was understood by Greek and Hellenistic philosophers to be an objective standard that organizes all human activities into a meaningful pattern for the duration of one’s life. According to premodern philosophers, happiness is not an affect or a subjective feeling but an objective state that expresses human nature, and to be happy means to flourish and experience well-being in accord with the nature of the human species. Aristotle, who provided the first systematic analysis of happiness in his Nicomachean Ethics, regarded reason as the distinguishing mark of humanity, and concluded that to be happy, or to flourish as a human being, necessitates the actualization of the human potential to know abstract, necessary, and eternal truths. The highest kind of reasoning, according to Aristotle, is the kind of reasoning that belongs to God, a thought thinking itself eternally.19
When Greek and Hellenistic reflections on happiness were integrated into the monotheistic religions, first Judaism, later Islam, and finally Christianity, the pursuit of happiness was given a decidedly religious interpretation even when analyzed philosophically, illustrating the integration of science and religion characteristic of the premodern era. In the modern period, however, the secularization of the Christian West and the scientific revolution gave rise to materialism and naturalism and the dissociation of science and religion. In the 17th and 18th centuries, happiness came to be identified with well-feeling. By the 19th century this idea would give rise to utilitarianism and its calculus of happiness as a balance between pleasure and pain for the greatest number of people. Moving away from the eudaimonistic conception of happiness, the Utilitarians defined happiness subjectively. For Jeremy Bentham, for example, pleasure is the only good and pain is the only evil; pleasure and pain determine what we do, and it is only the scientific analysis of the balance between them that leads to happiness, requiring no recourse to religious belief.
As science and religion were gradually pulled apart from each other during the 19th century, a strictly materialistic and hedonic notion of happiness prevailed: happiness is a subjective, mental state of individuals closely akin to joy and inherently associated with a range of pleasures. In a capitalistic setting, the hedonic notion of happiness means that happiness is equated with the possession of material goods or the instant gratification of bodily cravings. The discovery of chemical substances (legal or illegal) that control moods and mental states further trivialized the pursuit of happiness. As neuroscientists have unraveled the chemical processes of the brain, they have enabled the pharmaceutical industry to produce chemical substances that control, alleviate, or change moods and emotions. Under the impact of the brain sciences, both happiness and unhappiness are now viewed strictly in materialist terms: a pill presumably makes one attain happiness or alleviates unhappiness. By the beginning of the 21st century a strictly materialist approach to happiness prevails.
The Transhumanist Declaration does not discuss happiness directly, but if one peruses the literature generated by leading transhumanists such as Max More, one can immediately detect how this conception of happiness undergirds the entire project. Max More is the head of the Extropy Institute. Extropy is defined as “the extent of a living or organizational system’s intelligence, functional order, vitality, and capacity and drive for improvement,” and “extropic” denotes the “actions, qualities, or outcomes that embody or further extropy.” According to More, extropy “is not a real entity or force, but only a metaphor representing all that contributes to our flourishing,” in other words, happiness. The principles of Extropy enumerated by More include: “perpetual progress, self-transformation, practical optimism, intelligent technology, open society in terms of information and democracy, self-direction, and rational thinking.”20 Like other promoters of transhumanism, Max More emphasizes how the pace of change – technological, cultural and economic – continues to accelerate and to reach deeper. For him, advances in technologies (including “social technologies” of knowledge management, learning and decision making) will enable us to change human nature itself in its physical, emotional, and intellectual aspects. More predicts that with better knowledge and decision making, humans could live far longer in better than “perfect” health, improve their self-knowledge and awareness of interpersonal dynamics, overcome cultural, psychological and memetic biases in thinking, enhance intelligence in all its various forms, and learn to thrive on change and growth. In short, humans will finally be happy.
The transhumanist approach to the pursuit of happiness is problematic for the following reasons. First, the transhumanist notion is an extension of the hedonic understanding of happiness characteristic of 19th century Utilitarianism. Focusing on self-fulfillment, transhumanists do not take the notion of virtue seriously enough, nor do they explain how the values of the authentic Self promote human flourishing. Transhumanists talk a lot about life satisfaction, self-fulfillment and self-realization, but they have not provided an analysis of the relationship between the subjective and objective aspects of happiness. A more rigorous analysis of the meaning of happiness, which lies at the foundation of the transhumanist project, is needed.
Beyond the philosophical lack of clarity, the hedonic understanding of happiness is problematic on scientific grounds, because it is materialistic and reductionist. Reducing mind to brain functions, transhumanists use the metaphor of the computer to explain how the mind works, but as Pinker has already argued persuasively, this metaphor has serious shortcomings. The human brain is much more than a computational machine; it is part of a highly complex and integrated organism, and understanding it requires us to take into account not only the nervous system but also the immune system, as well as the socio-cultural context in which we are embedded. If happiness concerns the flourishing of the individual as a whole, happiness cannot be reduced just to the functioning of the body, as we encounter in the transhumanist literature. Nor can we reduce the human self just to the brain functions of neurons that communicate using chemical messengers, neurotransmitters and neuromodulators via synaptic transmission. We need a more holistic understanding of the human self than the one presupposed by transhumanism.
But the most troubling aspect of the transhumanist approach to happiness is the notion that technology will allow us to produce pleasant sensations all the time. The ability to manipulate the molecules and electrical impulses in the brain is reaching a new level of sophistication thanks to precise brain scanning, and the neural implants that now treat people with Parkinson’s disease will someday jolt regions of the brain to induce or suppress specific emotions. It is this specter of transhumanism that makes me most uneasy, because it ignores the value of insecurity, anxiety, and uncertainty, which are very much part of being human. Human culture (especially art and philosophy) could not have been possible without these allegedly negative aspects of being human. But if chemicals root out these human abilities, what will be the source of creativity? Hedonic engineering is not a prescription for cultural depth and creativity; it is a prescription for childish shallowness that regards having fun and feeling good above all other values. That transhumanism perpetuates the youth culture that prevails in America becomes more evident once we examine the third main concern of transhumanism, namely, radical life extension.
Transhumanism, Aging, and Death
A third focus of transhumanism is the struggle against aging and the attempt to postpone death. For leading transhumanists such as Aubrey de Grey, aging is “an enemy” because “it saps our strength and ability to enjoy life, [it] cripples us, and eventually kills us.”21 De Grey is convinced that one day scientists will find a way to defeat aging, and in the meantime he encourages us to take action and help make therapies for aging and life extension medicine a reality. The “crusade” against aging utilizes science to increase healthy life span and reduce the risk of suffering age-related conditions in later life. Today, the most effective methods for life extension are calorie restriction and taking supplements, but de Grey predicts that the main breakthroughs will come from the biomedical gerontological research which he conducts under the title of Strategies for Engineered Negligible Senescence (SENS).
De Grey’s science of SENS focuses on eliminating the telomere-related mechanisms that lead to cancer by selectively modifying our telomere elongation genes by tissue type, using targeted gene therapies. Another focus of his research is mitochondrial DNA outside the cellular nucleus, which accumulates damage with age that impairs its critical function. De Grey suggests using gene therapy to copy mitochondrial DNA into the cellular nucleus, along with other strategies for manipulating and remapping mitochondrial DNA in situ. A third aspect of the aging process involves the proteins outside our cells, such as those vital to artery walls and skin elasticity. Research is now under way to find suitable enzymes or compounds to break down problem proteins that the body cannot handle. A fourth area of research focuses on certain classes of senescent cells that accumulate where they are not wanted, for example, in the joints. De Grey proposes using immune therapies to tailor our immune system to destroy cells as they become senescent and thus prevent any related problems. Further research into the biochemistry of the “junk material” that accumulates outside the cells will facilitate immune therapies (vaccines). De Grey and other scientists, including John Schloendorn of ASU, also envision searching for suitable non-toxic microbial enzymes in soil bacteria that could be safely introduced into human cells.
I have no doubt that De Grey’s intentions are good and that the goal of extending the human life span to 150 years or more is meant to benefit humanity. I am also convinced that the medical gerontological research he and others conduct will have some benefits, perhaps unrelated to the intended ones. What I question is the assumption underlying the project, which treats the human body as a “resilient machine” that requires long-term care. The machine model appears to me to be inadequate for the reason I already stated in the discussion of human happiness. Human beings are not just machines, although some aspects of human somatic operation bear some resemblance to them. The car metaphor used by de Grey is no more convincing than the computer model used by Kurzweil to capture the meaning of being human. It would be more useful to talk about humans as organisms, but organisms experience aging and death precisely because they are alive.22
The crusade against aging is thus a protest against the reality of death. De Grey does not deny that death is real, but he wants to postpone it as much as possible. But to what end? For what purpose should we extend human life indefinitely? What is human life going to be about for this extended duration? What will human beings engage in for the duration of 150 or 500 years? Will it be more consumerist activities, more entertainment, more “fun,” more wars, more destruction of the natural environment? I wonder. Of course, I am not against healthy living or finding ways to alleviate the suffering caused by debilitating diseases such as Alzheimer’s and Parkinson’s, but I also believe that the extension of human life cannot be divorced from a deeper reflection about the purpose of human life, and that reflection seems to be missing from the transhumanist literature.
Finally, I do not believe that aging should be viewed as the enemy of living. Aging can also mean “sage-ing,” to use the word coined by Zalman Schachter-Shalomi,23 and there is a lot of wisdom that comes only with age and is directly related to encountering the challenges of growing frail and losing vigor. With aging comes the wisdom of compassion, acceptance, and forgiveness that is hard to attain when the good life is defined in terms of having fun or feeling pleasure. As Hans Jonas already understood in the 1970s,24 human life is lived more deeply and richly if we are aware of our mortality and finitude; we make decisions differently, and we live less wantonly and superficially with the awareness of death than without it.
Transhumanism and the Eschatology of Cybernetic Immortality
The most radical aspect of transhumanism is the scenario that humans will be able to transport the content of their brains, their minds, to a non-biological entity and thereby achieve immortality. Kurzweil and other transhumanist visionaries imagine a “brain-porting scenario” that will involve “scanning a human brain capturing all of the salient details.” This will entail reinstantiating the brain’s state in a different – most likely much more powerful – computational substrate. According to Kurzweil this will be a feasible procedure and will most likely happen around the late 2030s.25 In this scenario “we will continue to have human bodies, but they will become morphable projections of our intelligence.” Such “software-based humans,” he predicts, “will be vastly extended beyond the severe limitations of humans as we know them today. They will live out on the Web, projecting bodies whenever they need or want them, including virtual bodies in diverse realms of virtual reality, holographically projected bodies, foglet-projected bodies, and physical bodies comprising nanobot swarms and other forms of nanotechnology.”26 For Kurzweil this is a form of immortality, although he concedes that the data and information do not last forever; the longevity of information depends on its relevance, utility, and accessibility.
For Kurzweil and other transhumanists here lies the meaning of transcendence, which he takes literally to mean “to go beyond,” that is, “to go beyond the ordinary powers of the material world through the power of patterns.”27 Yes, the body, the hardware of the human computer, will die, but the software of our life, our personal “mind-file,” will continue to live on the Web in the posthuman future, where holographic avatars will interact with each other without bodies. For Kurzweil uploading ourselves to a human-made machine is spiritual, because it will exhibit complexity, elegance, knowledge, intelligence, beauty, creativity and levels of subtle attributes such as love. While Kurzweil is reluctant to talk about his own personal belief in God, he does assert that “evolution moves inexorably toward this conception of God, although never quite reaching this ideal.”28
So, how do we make sense of the transhumanist vision of the eschatological future? Should we simply dismiss this vision as “silly,” as some of my colleagues at ASU do, or should we engage this vision historically, philosophically, and ethically? I will do the latter. Historically speaking, the vision of the eschatological end as immortality of the intellect is not new; it was articulated already in the Middle Ages by Muslim and Jewish thinkers, most notably by Ibn Rushd (d. 1198) and by Maimonides (d. 1204), who followed Aristotle’s conception of God as a mind that thinks itself eternally. These thinkers envisioned that highly developed human minds (the minds of outstanding philosopher-prophets) could reach such perfect knowledge, and that such minds would experience the bliss of immortality, an infinite intellectual activity unencumbered by the corporeal body.
Does that mean that Maimonides was the first transhumanist? Not really. Yes, Maimonides did believe that it is possible for some humans to be outstanding in knowledge and understanding of the structure of reality, and he clearly believed that the Prophet Moses was such an individual.29 However, Maimonides did not think that Moses was God nor did he identify Moses with the Separate Intellects, the philosophic version of the traditional beliefs in angels. Moses was in a class of his own among humans, but he was neither an angel nor God; Moses remains human and was able to translate his profound understanding into laws that guide human action. In other words, even in regards to Moses, Maimonides was clear not to erase the boundaries between the human and the divine, and to acknowledge the humanity of Moses. But it is precisely the boundary between the human and the divine which transhumanism in its hubris seeks to erase as it imagines the fusion between human and intelligent machines.
What do I find problematic about this vision of technologically-based immortality? First, I find it offensive to talk about humans as “software-based” entities. While Kurzweil and others think about humans in terms of patterns, human identity and idiosyncratically unique personality cannot be reduced to these patterns of information, because each one of us is distinctive and unique, an Other that cannot be reduced to sameness. This point was articulated already in the 13th century during the debate about Maimonides’s legacy, and it has been developed philosophically in a profound manner by Emmanuel Levinas.30
Several Christian theologians have critiqued the transhumanist vision of cybernetic immortality as a return to premodern substance dualism. The notion that information patterns can exist as disembodied intelligent entities is but another name for the premodern disembodied soul. But this notion is problematic both scientifically and theologically, as Ted Peters has already noted.31 Scientifically it is problematic because “the brains and hence minds are embodied, perhaps even communal,” and theologically it is problematic because transhumanism presupposes a dualistic view of the human which denigrates the human body, considering it an evil that should be combated and fixed by the use of technology. The vision of cybernetic immortality, advanced by Kurzweil or Frank Tipler, fails to appreciate the wisdom of our finite, created body and the implications of a theology of createdness. Even if uploading our personality to a machine were possible, which I highly doubt, is this the spiritual vision we want to promote? Isn’t this spiritual vision rather impoverished, precisely because the machine is but a human product? As Noreen Herzfeld has observed, the transcendence depicted by transhumanists is no more than the prolongation of a materially based human product. In the transhumanist vision of cybernetic immortality, eternity simply means a “very long time,” rather than a fundamentally different kind of existence.32
More troubling is the notion that humans can actually achieve the eschatological ideal. Here I am speaking as a Jew who is committed to the pursuit of the ideal rather than to its realization. The pursuit of the ideal endows life with meaning and gives life direction, but when the prescription is taken as a description of a state of affairs, disasters lurk. The description of the eschatological end as envisioned by transhumanism fills me not with a sense of beauty and elegance but with horror and disgust. Perhaps this reaction indicates a failure of the imagination, but it may also be that my reluctance to endorse the transhumanist future is based on a historical awareness of the destructive powers of utopian thinking. No one understood this point better than Hans Jonas,33 the German-Jewish philosopher and early critic of modern technology and its utopian visions of enhanced humans.
It is befitting to remind ourselves of Jonas’s profound reservations about biotechnology as we assess transhumanism. In terms of life extension, Jonas suggested that mortality is not just a curse or a burden; it is also a blessing. It is a burden insofar as we organic beings must wrest our being from the continuous threat of non-being. But it is a blessing insofar as this wresting is the very condition for any affirmation of being at all, so that “mortality is the narrow gate through which alone value – the addressee of a yes – could enter the otherwise indifferent universe.”34 For Jonas the effort to forestall death or overcome mortality is a fundamental denial of what makes us human. The process of life requires mortality as the counterpart of the natality that alone can supply the novelty and creativity that enrich human life and express freedom. Freedom is imperiled when it ignores necessity. In terms of genetic engineering, Jonas considered many of the ends of genetic engineering to be frivolous. Genetic enhancement for the sake of improving one’s looks or one’s chances of social success falls in that category.
As for germ-line intervention, without which the transhumanist vision is not possible, Jonas appealed exclusively to consequences: the irreversibility of germ-line interventions, the range of their effects, and the impossibility in practice of drawing a line between therapy and the enhancement of traits, or of prohibiting the outright invention of new human forms that violate the ontological status of human nature. In terms of human improvement, or eugenics, Jonas distinguished between negative eugenics (namely, developing diagnostic tools to identify genetic diseases and then manipulating the genetic code to eliminate bad genes) and positive eugenics (namely, manipulating genes so as to enhance human performance). In regard to both programs he reminds us that an ambitious eugenics violates the normative status of nature, but that we do not have criteria or standards to determine what is normal and what is pathogenic. Finally, as for the elimination of “bad genes” from the population, Jonas held that any effort to eliminate undesirable genes from the gene pool altogether threatens the biological necessity of a varied gene pool and runs up against our ignorance about the role apparently useless genes may play in human adaptability. Jonas argued against positive eugenics on the same grounds: the lack of criteria and standards for intervention; positive eugenics aims at a qualitative improvement over nature and therefore cannot claim the sanction of nature. Although technology has advanced well beyond what Jonas reflected on, his reservations about a biotechnology in which the human becomes a design object deeply resonate with me.

Conclusion: Transhumanism and the Dialogue of Science and Religion

It is quite clear that the transhumanist vision emerged because of the confluence of knowledge in certain scientific fields and their technological applications, especially in genetics, robotics, and nanotechnology. This fusion of horizons of knowledge demonstrates why the traditional disciplinary boundaries are becoming increasingly obsolete and why scholars in the humanities and the social sciences need to become at least aware of, if not conversant with, the new disciplines. But conversely, scholars in the applied sciences, and especially in engineering and public policy, must become more attuned to the humanities and must engage their own scientific disciplines critically in light of the values articulated by the humanities.
The promoters of transhumanism are right to hold that we are facing a new situation in human development, because today the human being has become a design project. But unlike the advocates of the transhumanist vision, I am deeply concerned, even worried, about the current situation. The new genetics enables us to enhance our biological state; nanotechnology enables us to manipulate materials on an atomic scale; and robotics not only promises to replace the human brain with non-biological computing power that will exceed it, but also facilitates the integration of biological and information technology. Given these developments, the modernist dichotomy between the observer and the observed, humans and the physical environment, nature and culture, making and thinking is no longer tenable. Indeed we are now faced with a challenge that requires us to rethink the legacy of the Enlightenment and articulate a new theoretical framework that could address the new complex reflexivity.
Transhumanists dismiss their critics as “bio-conservatives” or “bio-Luddites,” but name-calling is not a substitute for critical thinking.35 We do need to understand the novelty of the new situation, and this will be the focus of the second year of the Templeton Research Lectures at ASU with the Templeton Co-Fellows Braden Allenby and Daniel Sarewitz and invited guest speakers: Jean-Pierre Dupuy, Katherine Hayles, Don Ihde, Andrew Pickering, and Ted Peters. We cannot yet predict the outcome of these deliberations, but it is clear that they will be decidedly transdisciplinary, crossing the boundaries between history, philosophy, science, technology, art, literature, religion, and spirituality. Transdisciplinarity indeed characterizes intellectual life in the 21st century, which is progressing toward a post-human future with its promises and perils.


The Overhuman in the Transhuman
Max More
Link

Abstract

Stefan Sorgner (2009) says that on becoming familiar with transhumanism, he “immediately thought that there were many fundamental similarities between transhumanism and Nietzsche’s philosophy, especially concerning the concept of the posthuman and that of Nietzsche’s overhuman.” In contrast to Bostrom (2005), Sorgner sees significant and fundamental similarities between the posthuman and the overhuman. (I will adopt his use of “overhuman” in place of “overman” or Übermensch.) This overall view seems to me highly plausible. I agree with most of Sorgner’s comments in this respect. My intent is to give further support to the conceptual parallels. In addition, I argue that these are not merely parallels: transhumanist ideas were directly influenced by Nietzsche.

Introduction

Should transhumanists look upon Friedrich Nietzsche’s thought as an embarrassment – just as Nietzsche suggested the ape was to man? Is there an abyss between his “philosophy with a hammer” and the philosophy of transhumanism? Stefan Sorgner (2009) says that on becoming familiar with transhumanism, he “immediately thought that there were many fundamental similarities between transhumanism and Nietzsche’s philosophy, especially concerning the concept of the posthuman and that of Nietzsche’s overhuman.” In contrast to Bostrom (2005), Sorgner sees significant and fundamental similarities between the posthuman and the overhuman. (I will adopt his use of “overhuman” in place of “overman” or Übermensch.) This overall view seems to me highly plausible. I agree with most of Sorgner’s comments in this respect. My intent is to give further support to the conceptual parallels. In addition, I argue that these are not merely parallels: transhumanist ideas were directly influenced by Nietzsche.

First, it is necessary to note that an enormous range of ideas can be found in Nietzsche’s writing, some of which – especially when comparing different periods of his work – may be inconsistent. Although there are clear parallels between Nietzsche’s thinking and some core transhumanist ideas, the latter are inspired very selectively by the former. Perhaps the most salient example of a Nietzschean idea alien to transhumanism is his “eternal recurrence.” Nietzsche thought this idea inseparable from that of the overman (or overhuman).
Many scholars have been puzzled by this connection and have often rejected eternal recurrence. Nietzsche’s attachment to the concept probably results from his seeing it as the ultimate affirmation of the real world as against the Christian (and Platonic) denial of the primacy of actual, physical reality. Not only is eternal recurrence a bizarre piece of metaphysics in itself, it was part of Nietzsche’s denial of the idea of progress. Both for its inherent implausibility and for its opposition to progress, this concept cannot be reconciled with transhumanism. Nevertheless, several other concepts can be so reconciled. As a strong opponent of philosophical systems, Nietzsche could hardly object to transhumanism’s picking and choosing from among his thoughts.

Direct influence

Sorgner’s essay establishes parallels between transhumanism and Nietzsche’s thought, but does not address the question of whether transhumanist ideas were directly influenced by Nietzsche. I can state with complete confidence that such an influence does indeed exist. I know that because his ideas influenced my own thinking. That thinking led to my introduction of the term “transhumanism” (only later did I discover Huxley’s prior use of the term), to the publication of my essay, “Transhumanism: Towards a Futurist Philosophy” (More 1990), and to my original transhumanist statement, “The Extropian Principles” (later “The Principles of Extropy”, More 1990b). While these essays are far from the only sources of contemporary transhumanism, these seminal writings have been influential. Since they were themselves influenced by some of Nietzsche’s core ideas, the direct connection between transhumanism and Nietzsche is established.

In “Transhumanism: Towards a Futurist Philosophy,” for instance, I wrote that “The religionist has no answer to the extropic challenge put by Nietzsche’s Zarathustra: ‘I teach you the overman. Man is something that is to be overcome. What have you done to overcome him?’” Sorgner notes, “The overhuman represents the meaning of the earth. The overhuman is supposed to represent the meaning-giving concept within Nietzsche’s worldview which is supposed to replace the basically Christian worldview.” He also states that “Nietzsche upheld that the concept of the overhuman is the meaning of the earth. I think that the relevance of the posthuman can only be fully appreciated if one acknowledges that its ultimate foundation is that it gives meaning to scientifically minded people.” This again agrees closely with my “Transhumanism” essay in which I wrote: “I agree with Nietzsche (in The Will to Power) that nihilism is only a transitional stage resulting from the breakdown of an erroneous interpretation of the world. We now have plenty of resources to leave nihilism behind, affirming a positive (but continually evolving) value-perspective.”

Critical rationalism

Reflecting its humanist and Enlightenment roots, transhumanism places an extremely high value on rationality. Especially popular among transhumanists is critical rationalism. This form of rationalism differs from the foundationalist certitude of Descartes. In its most consistent form it becomes pancritical rationalism (Bartley 1984). As Sorgner points out, Nietzsche, too, had an immense respect for critical thinking and valued scientific inquiry highly.

In my 1994 talk on pancritical rationalism at the first Extropy Institute conference (More 1994), I started by citing Nietzsche’s statement: “A very popular error: having the courage of one's convictions; rather it is a matter of having the courage for an attack on one’s convictions!” I might just as easily have cited another passage: “Convictions are more dangerous foes of truth than lies.” Or the passage from The Gay Science (Nietzsche 1882): “Not to question, not to tremble with the craving and joy of questioning … that is what I feel to be contemptible, and this feeling is the first thing I seek in everyone: some foolishness persuades me ever and again that every human being has this feeling, as a human being. It is my kind of injustice.” Although Nietzsche is not essential to critical rationalism, he does provide inspiration for what might otherwise seem a dry epistemology.

Self-Transformation

One of the core transhumanist principles of extropy has been that of Self-Transformation. In a later version of the Principles, this was complemented by the principle of Self-Direction. Both of these are highly compatible with Nietzsche’s thinking. They are also influenced by his work, along with that of many other thinkers. Most centrally, I would point to Zarathustra’s declaration (Nietzsche 1885): “And life itself confided this secret to me: ‘Behold,’ it said, ‘I am that which must always overcome itself.’”

From both the individual and species perspective, the concept of self-overcoming resonates strongly with extropic, transhumanist ideals and goals. Although Nietzsche had little to say about technology as a means of self-overcoming, neither did he rule it out. And, as a champion of what he saw as a coming age of science, it is not difficult to see technology as part of the process of self-overcoming, so long as it is integrated firmly with will and self-assertion. Self-assertion in this case is, of course, not the assertion of an existing self seeking to preserve itself, but a striving to “become who you are”. New technologies allow us new means of becoming who we are – another step toward posthuman ideals – and new ways of “giving style” to our character. As Nietzsche put it: “a great and rare art!”

Utilitarianism, slave-morality, and heroic transhumanism

The sole reason Bostrom (2005) gives for saying that transhumanism has merely “some surface-level similarities with the Nietzschean vision” is that transhumanism – thanks to its Enlightenment roots – has an emphasis on individual liberties and a “concern for the welfare of all humans (and other sentient beings).” Bostrom is correct about this emphasis, as reflected, for instance, in the principle of Self-Direction in the Principles of Extropy. Bostrom concludes that transhumanism therefore “probably has as much or more in common with Nietzsche’s contemporary J.S. Mill, the English liberal thinker and utilitarian.”

Nietzsche famously had nothing positive to say about the utilitarians. When he mentioned them, it was to say something caustically critical, such as: “Man does not strive for pleasure; only the Englishman does” (Nietzsche 1889). Should we infer from Nietzsche’s distaste for the slave-morality of utilitarianism (which turns every moral agent into a slave yoked to the task of maximizing the greatest good of the greatest number) that transhumanism has little in common with Nietzsche’s thinking? I think not.

What we can infer is that differing variants of transhumanism are possible. Certainly there is no inconsistency between transhumanism and a utilitarian morality. But neither is there any inconsistency between transhumanism and a more Nietzschean view of morality. While Nietzsche viewed morality as essentially perspectival, we can easily enough fit him loosely within the virtue ethics approach classically represented by Aristotle. Yes, transhumanism can be sanitized and made safe so that it fits comfortably with utilitarian thinking. Or we can take seriously Nietzsche’s determination to undertake a “revaluation of all values.”

This need not imply any kind of illiberal social or political system. It may simply lead to a version of transhumanism that champions the self-overcoming of the individual without an obligation to “the masses.” Many sound pragmatic reasons exist for each of us to want to uplift everyone – at least for those of us who reject the idea of society and economy as a zero-sum game. Pragmatic considerations are not the only reason a Nietzschean transhumanist may have for benevolence of this kind. Unlike a utilitarian transhumanist who must regard uplifting others as an obligation, a Nietzschean transhumanist would look upon the prospect of uplifting the masses as an expression of overflowing personal power or well-being or health.

Neither a utilitarian nor a Nietzschean transhumanism can plausibly claim to be the true transhumanism. Both share the central elements of the radical transhumanist worldview. My goal has not been to show that transhumanism must be Nietzschean. It has been to show that central elements of Nietzsche’s philosophy are not only compatible with transhumanism, but have historically had a considerable direct influence on major strands of this philosophy of life.

Transhumanist Values
NICK BOSTROM
http://www.nickbostrom.com/ethics/values.html#_ftn1

1. What is Transhumanism?

Transhumanism is a loosely defined movement that has developed gradually over the past two decades.[1] It promotes an interdisciplinary approach to understanding and evaluating the opportunities for enhancing the human condition and the human organism opened up by the advancement of technology. Attention is given to both present technologies, like genetic engineering and information technology, and anticipated future ones, such as molecular nanotechnology and artificial intelligence.
The enhancement options being discussed include radical extension of human health-span, eradication of disease, elimination of unnecessary suffering, and augmentation of human intellectual, physical, and emotional capacities. Other transhumanist themes include space colonization and the possibility of creating superintelligent machines, along with other potential developments that could profoundly alter the human condition. The ambit is not limited to gadgets and medicine, but encompasses also economic, social, and institutional designs, cultural development, and psychological skills and techniques.
Transhumanists view human nature as a work-in-progress, a half-baked beginning that we can learn to remold in desirable ways. Current humanity need not be the endpoint of evolution. Transhumanists hope that by responsible use of science, technology, and other rational means we shall eventually manage to become posthuman, beings with vastly greater capacities than present human beings have.
Some transhumanists take active steps to increase the probability that they personally will survive long enough to become posthuman, for example by choosing a healthy lifestyle or by making provisions for having themselves cryonically suspended in case of de-animation.[2] In contrast to many other ethical outlooks, which in practice often reflect a reactionary attitude to new technologies, the transhumanist view is guided by an evolving vision to take a more proactive approach to technology policy. This vision, in broad strokes, is to create the opportunity to live much longer and healthier lives, to enhance our memory and other intellectual faculties, to refine our emotional experiences and increase our subjective sense of well-being, and generally to achieve a greater degree of control over our own lives. This affirmation of human potential is offered as an alternative to customary injunctions against playing God, messing with nature, tampering with our human essence, or displaying punishable hubris.
Transhumanism does not entail technological optimism. While future technological capabilities carry immense potential for beneficial deployments, they also could be misused to cause enormous harm, ranging all the way to the extreme possibility of intelligent life becoming extinct. Other potential negative outcomes include widening social inequalities or a gradual erosion of the hard-to-quantify assets that we care deeply about but tend to neglect in our daily struggle for material gain, such as meaningful human relationships and ecological diversity. Such risks must be taken very seriously, as thoughtful transhumanists fully acknowledge.[3]
Transhumanism has roots in secular humanist thinking, yet is more radical in that it promotes not only traditional means of improving human nature, such as education and cultural refinement, but also direct application of medicine and technology to overcome some of our basic biological limits.

2. Human limitations

The range of thoughts, feelings, experiences, and activities accessible to human organisms presumably constitutes only a tiny part of what is possible. There is no reason to think that the human mode of being is any more free of limitations imposed by our biological nature than are those of other animals. In much the same way as chimpanzees lack the cognitive wherewithal to understand what it is like to be human – the ambitions we humans have, our philosophies, the complexities of human society, or the subtleties of our relationships with one another – so we humans may lack the capacity to form a realistic intuitive understanding of what it would be like to be a radically enhanced human (a “posthuman”) and of the thoughts, concerns, aspirations, and social relations that such humans may have.
Our own current mode of being, therefore, spans but a minute subspace of what is possible or permitted by the physical constraints of the universe (see Figure 1). It is not farfetched to suppose that there are parts of this larger space that represent extremely valuable ways of living, relating, feeling, and thinking.


The limitations of the human mode of being are so pervasive and familiar that we often fail to notice them, and to question them requires manifesting an almost childlike naiveté. Let us consider some of the more basic ones.
Lifespan. Because of the precarious conditions in which our Pleistocene ancestors lived, the human lifespan has evolved to be a paltry seven or eight decades. This is, from many perspectives, a rather short period of time. Even tortoises do better than that.
We don’t have to use geological or cosmological comparisons to highlight the meagerness of our allotted time budgets. To get a sense that we might be missing out on something important by our tendency to die early, we only have to bring to mind some of the worthwhile things that we could have done or attempted to do if we had had more time. For gardeners, educators, scholars, artists, city planners, and those who simply relish observing and participating in the cultural or political variety shows of life, three score and ten is often insufficient for seeing even one major project through to completion, let alone for undertaking many such projects in sequence.
Human character development is also cut short by aging and death. Imagine what might have become of a Beethoven or a Goethe if they had still been with us today. Maybe they would have developed into rigid old grumps interested exclusively in conversing about the achievements of their youth. But maybe, if they had continued to enjoy health and youthful vitality, they would have continued to grow as men and artists, to reach levels of maturity that we can barely imagine. We certainly cannot rule that out based on what we know today. Therefore, there is at least a serious possibility of there being something very precious outside the human sphere. This constitutes a reason to pursue the means that will let us go there and find out.
Intellectual capacity. We have all had moments when we wished we were a little smarter. The three-pound, cheese-like thinking machine that we lug around in our skulls can do some neat tricks, but it also has significant shortcomings. Some of these – such as forgetting to buy milk or failing to attain native fluency in languages you learn as an adult – are obvious and require no elaboration. These shortcomings are inconveniences but hardly fundamental barriers to human development.
Yet there is a more profound sense in which the constraints of our intellectual apparatus limit our modes of mentation. I mentioned the chimpanzee analogy earlier: just as is the case for the great apes, our own cognitive makeup may foreclose whole strata of understanding and mental activity. The point here is not about any logical or metaphysical impossibility: we need not suppose that posthumans would not be Turing computable or that they would have concepts that could not be expressed by any finite sentences in our language, or anything of that sort. The impossibility that I am referring to is more like the impossibility for us current humans to visualize a 200-dimensional hypersphere or to read, with perfect recollection and understanding, every book in the Library of Congress. These things are impossible for us because, simply put, we lack the brainpower. In the same way, we may lack the ability to intuitively understand what being a posthuman would be like or to grok the playing field of posthuman concerns.
Further, our human brains may cap our ability to discover philosophical and scientific truths. It is possible that the failure of philosophical research to arrive at solid, generally accepted answers to many of the traditional big philosophical questions is due to the fact that we are not smart enough to be successful in this kind of enquiry. Our cognitive limitations may be confining us in a Platonic cave, where the best we can do is theorize about “shadows”, that is, representations that are sufficiently oversimplified and dumbed-down to fit inside a human brain.
Bodily functionality. We enhance our natural immune systems by getting vaccinations, and we can imagine further enhancements to our bodies that would protect us from disease or help us shape our bodies according to our desires (e.g. by letting us control our bodies’ metabolic rate). Such enhancements could improve the quality of our lives.
A more radical kind of upgrade might be possible if we suppose a computational view of the mind. It may then be possible to upload a human mind to a computer, by replicating in silico the detailed computational processes that would normally take place in a particular human brain.[4] Being an upload would have many potential advantages, such as the ability to make back-up copies of oneself (favorably impacting on one’s life-expectancy) and the ability to transmit oneself as information at the speed of light. Uploads might live either in virtual reality or directly in physical reality by controlling a robot proxy.
Sensory modalities, special faculties and sensibilities. The current human sensory modalities are not the only possible ones, and they are certainly not as highly developed as they could be. Some animals have sonar, magnetic orientation, or sensors for electricity and vibration; many have a much keener sense of smell, sharper eyesight, etc. The range of possible sensory modalities is not limited to those we find in the animal kingdom. There is no fundamental block to adding, say, a capacity to see infrared radiation or to perceive radio signals, and perhaps to add some kind of telepathic sense by augmenting our brains with suitably interfaced radio transmitters.
Humans also enjoy a variety of special faculties, such as appreciation of music and a sense of humor, and sensibilities such as the capacity for sexual arousal in response to erotic stimuli. Again, there is no reason to think that what we have exhausts the range of the possible, and we can certainly imagine higher levels of sensitivity and responsiveness.
Mood, energy, and self-control. Despite our best efforts, we often fail to feel as happy as we would like. Our chronic levels of subjective well-being seem to be largely genetically determined. Life-events have little long-term impact; the crests and troughs of fortune push us up and bring us down, but there is little long-term effect on self-reported well-being. Lasting joy remains elusive except for those of us who are lucky enough to have been born with a temperament that plays in a major key.
In addition to being at the mercy of a genetically determined setpoint for our levels of well-being, we are limited in regard to energy, will-power, and ability to shape our own character in accordance with our ideals. Even such “simple” goals as losing weight or quitting smoking prove unattainable to many.
Some subset of these kinds of problems might be necessary rather than contingent upon our current nature. For example, we cannot both have the ability easily to break any habit and the ability to form stable, hard-to-break habits. (In this regard, the best one can hope for may be the ability to easily get rid of habits we didn’t deliberately choose for ourselves in the first place, and perhaps a more versatile habit-formation system that would let us choose with more precision when to acquire a habit and how much effort it should cost to break it.)

3. The core transhumanist value: exploring the posthuman realm

The conjecture that there are greater values than we can currently fathom does not imply that values are not defined in terms of our current dispositions. Take, for example, a dispositional theory of value such as the one described by David Lewis.[5] According to Lewis’s theory, something is a value for you if and only if you would want to want it if you were perfectly acquainted with it and you were thinking and deliberating as clearly as possible about it. On this view, there may be values that we do not currently want, and that we do not even currently want to want, because we may not be perfectly acquainted with them or because we are not ideal deliberators. Some values pertaining to certain forms of posthuman existence may well be of this sort; they may be values for us now, and they may be so in virtue of our current dispositions, and yet we may not be able to fully appreciate them with our current limited deliberative capacities and our lack of the receptive faculties required for full acquaintance with them. This point is important because it shows that the transhumanist view that we ought to explore the realm of posthuman values does not entail that we should forego our current values. The posthuman values can be our current values, albeit ones that we have not yet clearly comprehended. Transhumanism does not require us to say that we should favor posthuman beings over human beings, but that the right way of favoring human beings is by enabling us to realize our ideals better and that some of our ideals may well be located outside the space of modes of being that are accessible to us with our current biological constitution.
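
Lewis’s biconditional can be put schematically. The rendering below is a rough formalization offered only as illustration (the predicate names are invented here, not taken from Lewis or Bostrom), with $\Box\!\rightarrow$ read as Lewis’s counterfactual conditional, “if it were the case that ..., it would be the case that ...”:

\[
\mathrm{Value}(x, A) \;\iff\; \big(\mathrm{Acq}(A, x) \wedge \mathrm{Clear}(A)\big) \;\Box\!\rightarrow\; \mathrm{Want}_A\big(\mathrm{Want}_A(x)\big)
\]

where $\mathrm{Acq}(A, x)$ abbreviates “$A$ is perfectly acquainted with $x$” and $\mathrm{Clear}(A)$ abbreviates “$A$ is thinking and deliberating as clearly as possible.” In this notation, the point of the paragraph above is that the right-hand side can hold of us now, in virtue of our current dispositions, even though the antecedent condition is never actually satisfied.
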
We can overcome many of our biological limitations. It is possible that there are some limitations that are impossible for us to transcend, not only because of technological difficulties but on metaphysical grounds. Depending on what our views are about what constitutes personal identity, it could be that certain modes of being, while possible, are not possible for us, because any being of such a kind would be so different from us that they could not be us. Concerns of this kind are familiar from theological discussions of the afterlife. In Christian theology, some souls will be allowed by God to go to heaven after their time as corporeal creatures is over. Before being admitted to heaven, the souls would undergo a purification process in which they would lose many of their previous bodily attributes. Skeptics may doubt that the resulting minds would be sufficiently similar to our current minds for it to be possible for them to be the same person. A similar predicament arises within transhumanism: if the mode of being of a posthuman being is radically different from that of a human being, then we may doubt whether a posthuman being could be the same person as a human being, even if the posthuman being originated from a human being.
We can, however, envision many enhancements that would not make it impossible for the post-transformation someone to be the same person as the pre-transformation person. A person could obtain quite a bit of increased life expectancy, intelligence, health, memory, and emotional sensitivity, without ceasing to exist in the process. A person’s intellectual life can be transformed radically by getting an education. A person’s life expectancy can be extended substantially by being unexpectedly cured from a lethal disease. Yet these developments are not viewed as spelling the end of the original person. In particular, it seems that modifications that add to a person’s capacities can be more substantial than modifications that subtract, such as brain damage. If most of what someone currently is, including her most important memories, activities, and feelings, is preserved, then adding extra capacities on top of that would not easily cause the person to cease to exist.
Preservation of personal identity, especially if this notion is given a narrow construal, is not everything. We can value other things than ourselves, or we might regard it as satisfactory if some parts or aspects of ourselves survive and flourish, even if that entails giving up some parts of ourselves such that we no longer count as being the same person. Which parts of ourselves we might be willing to sacrifice may not become clear until we are more fully acquainted with the full meaning of the options. A careful, incremental exploration of the posthuman realm may be indispensable for acquiring such an understanding, although we may also be able to learn from each other’s experiences and from works of the imagination.
Additionally, we may favor future people being posthuman rather than human, if the posthumans would lead lives more worthwhile than the alternative humans would. Any reasons stemming from such considerations would not depend on the assumption that we ourselves could become posthuman beings.
Transhumanism promotes the quest to develop further so that we can explore hitherto inaccessible realms of value. Technological enhancement of human organisms is a means that we ought to pursue to this end. There are limits to how much can be achieved by low-tech means such as education, philosophical contemplation, moral self-scrutiny and other such methods proposed by classical philosophers with perfectionist leanings, including Plato, Aristotle, and Nietzsche, or by means of creating a fairer and better society, as envisioned by social reformists such as Marx or Martin Luther King. This is not to denigrate what we can do with the tools we have today. Yet ultimately, transhumanists hope to go further.

4. Basic conditions for realizing the transhumanist project

If this is the grand vision, what are the more particular objectives that it translates into when considered as a guide to policy?
What is needed for the realization of the transhumanist dream is that technological means necessary for venturing into the posthuman space are made available to those who wish to use them, and that society be organized in such a manner that such explorations can be undertaken without causing unacceptable damage to the social fabric and without imposing unacceptable existential risks.
Global security. While disasters and setbacks are inevitable in the implementation of the transhumanist project (just as they are if the transhumanist project is not pursued), there is one kind of catastrophe that must be avoided at any cost:
Existential risk – one where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.[6]
Several recent discussions have argued that the combined probability of the existential risks is very substantial.[7] The relevance of the condition of existential safety to the transhumanist vision is obvious: if we go extinct or permanently destroy our potential to develop further, then the transhumanist core value will not be realized. Global security is the most fundamental and nonnegotiable requirement of the transhumanist project.
Technological progress. That technological progress is generally desirable from a transhumanist point of view is also self-evident. Many of our biological shortcomings (aging, disease, feeble memories and intellects, a limited emotional repertoire and inadequate capacity for sustained well-being) are difficult to overcome, and to do so will require advanced tools. Developing these tools is a gargantuan challenge for the collective problem-solving capacities of our species. Since technological progress is closely linked to economic development, economic growth – or more precisely, productivity growth – can in some cases serve as a proxy for technological progress. (Productivity growth is, of course, only an imperfect measure of the relevant form of technological progress, which, in turn, is an imperfect measure of overall improvement, since it omits such factors as equity of distribution, ecological diversity, and quality of human relationships.)
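
One standard way to make the productivity-growth proxy precise is growth accounting, which attributes to technology the residual of output growth left unexplained by growth in measured inputs. The identity below is the textbook Solow residual, supplied here as background rather than taken from the essay:

\[
\frac{\Delta A}{A} \;=\; \frac{\Delta Y}{Y} \;-\; \alpha\,\frac{\Delta K}{K} \;-\; (1-\alpha)\,\frac{\Delta L}{L}
\]

where $Y$ is output, $K$ capital, $L$ labor, $\alpha$ capital’s share of income, and $\Delta A / A$ the growth rate attributed to technological progress. The parenthetical caveat above applies with full force: the residual also absorbs measurement error and is silent about distribution and ecology.
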
The history of economic and technological development, and the concomitant growth of civilization, is appropriately regarded with awe, as humanity’s most glorious achievement. Thanks to the gradual accumulation of improvements over the past several thousand years, large portions of humanity have been freed from illiteracy, life-expectancies of twenty years, alarming infant-mortality rates, horrible diseases endured without palliatives, and periodic starvation and water shortages. Technology, in this context, is not just gadgets but includes all instrumentally useful objects and systems that have been deliberately created. This broad definition encompasses practices and institutions, such as double-entry accounting, scientific peer-review, legal systems, and the applied sciences.
Wide access. It is not enough that the posthuman realm be explored by someone. The full realization of the core transhumanist value requires that, ideally, everybody should have the opportunity to become posthuman. It would be sub-optimal if the opportunity to become posthuman were restricted to a tiny elite.
There are many reasons for supporting wide access: to reduce inequality; because it would be a fairer arrangement; to express solidarity and respect for fellow humans; to help gain support for the transhumanist project; to increase the chances that you will get the opportunity to become posthuman; to increase the chances that those you care about can become posthuman; because it might increase the range of the posthuman realm that gets explored; and to alleviate human suffering on as wide a scale as possible.
The wide access requirement underlies the moral urgency of the transhumanist vision. Wide access does not argue for holding back. On the contrary, other things being equal, it is an argument for moving forward as quickly as possible. 150,000 human beings on our planet die every day, without having had any access to the anticipated enhancement technologies that will make it possible to become posthuman. The sooner this technology develops, the fewer people will have died without access.
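
The arithmetic behind that sense of urgency is simple. Taking the essay’s own figure of 150,000 deaths per day:

\[
150{,}000 \;\frac{\text{deaths}}{\text{day}} \times 365 \;\frac{\text{days}}{\text{year}} \approx 5.5 \times 10^{7} \;\frac{\text{deaths}}{\text{year}},
\]

so on this view each year by which the relevant technologies are delayed corresponds to roughly 55 million people who die without ever having had access to them.
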
Consider a hypothetical case in which there is a choice between (a) allowing the current human population to continue to exist, and (b) having it instantaneously and painlessly killed and replaced by six billion new human beings who are very similar but non-identical to the people that exist today. Such a replacement ought to be strongly resisted on moral grounds, for it would entail the involuntary death of six billion people. The fact that they would be replaced by six billion newly created similar people does not make the substitution acceptable. Human beings are not disposable. For analogous reasons, it is important that the opportunity to become posthuman is made available to as many humans as possible, rather than having the existing population merely supplemented (or worse, replaced) by a new set of posthuman people. The transhumanist ideal will be maximally realized only if the benefits of technologies are widely shared and if they are made available as soon as possible, preferably within our lifetime.

5. Derivative values

From these specific requirements flow a number of derivative transhumanist values that translate the transhumanist vision into practice. (Some of these values may also have independent justifications, and transhumanism does not imply that the list of values provided below is exhaustive.)
To start with, transhumanists typically place emphasis on individual freedom and individual choice in the area of enhancement technologies. Humans differ widely in their conceptions of what their own perfection or improvement would consist in. Some want to develop in one direction, others in different directions, and some prefer to stay the way they are. It would be morally unacceptable for anybody to impose a single standard to which we would all have to conform. People should have the right to choose which enhancement technologies, if any, they want to use. In cases where individual choices impact substantially on other people, this general principle may need to be restricted, but the mere fact that somebody may be disgusted or morally affronted by somebody else’s using technology to modify herself would not normally be a legitimate ground for coercive interference. Furthermore, the poor track record of centrally planned efforts to create better people (e.g. the eugenics movement and Soviet totalitarianism) shows that we need to be wary of collective decision-making in the field of human modification.
Another transhumanist priority is to put ourselves in a better position to make wise choices about where we are going. We will need all the wisdom we can get when negotiating the posthuman transition. Transhumanists place a high value on improvements in our individual and collective powers of understanding and in our ability to implement responsible decisions. Collectively, we might get smarter and more informed through such means as scientific research, public debate and open discussion of the future, information markets[8], and collaborative information filtering[9]. On an individual level, we can benefit from education, critical thinking, open-mindedness, study techniques, information technology, and perhaps memory- or attention-enhancing drugs and other cognitive enhancement technologies. Our ability to implement responsible decisions can be improved by expanding the rule of law and democracy on the international plane. Additionally, artificial intelligence, especially if and when it reaches human-equivalence or greater, could give an enormous boost to the quest for knowledge and wisdom.
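
Collaborative information filtering is mentioned only in passing here. For readers unfamiliar with the idea, the sketch below is a minimal user-based collaborative filter in Python; the ratings matrix, function name, and numbers are invented for illustration and come from neither the essay nor its references:

[CODE]
import numpy as np

# Toy ratings matrix: rows are users, columns are items (articles, proposals, ...).
# np.nan marks items a user has not rated yet.
ratings = np.array([
    [5.0, 3.0, np.nan, 1.0],
    [4.0, np.nan, np.nan, 1.0],
    [1.0, 1.0, np.nan, 5.0],
    [np.nan, 1.0, 5.0, 4.0],
])

def predict(ratings, user, item):
    """Estimate a missing rating as the similarity-weighted average of
    other users' ratings for the same item (user-based filtering)."""
    target = ratings[user]
    num, den = 0.0, 0.0
    for other in range(ratings.shape[0]):
        if other == user or np.isnan(ratings[other, item]):
            continue  # skip the user themselves and non-raters of this item
        both = ~np.isnan(target) & ~np.isnan(ratings[other])
        if not both.any():
            continue  # no overlapping ratings on which to judge similarity
        a, b = target[both], ratings[other][both]
        # Cosine similarity over the items both users have rated.
        sim = float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
        num += sim * ratings[other, item]
        den += abs(sim)
    return num / den if den else np.nan

# Predict how much user 0 would like item 2, which they have not yet rated.
print(predict(ratings, user=0, item=2))
[/CODE]

Production systems use matrix factorization or learned embeddings rather than this brute-force loop, but the principle the essay appeals to, pooling many individual judgments to guide each person’s attention, is the same.
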
Given the limitations of our current wisdom, a certain epistemic tentativeness is appropriate, along with a readiness to continually reassess our assumptions as more information becomes available. We cannot take for granted that our old habits and beliefs will prove adequate in navigating our new circumstances.
Global security can be improved by promoting international peace and cooperation, and by strongly counteracting the proliferation of weapons of mass destruction. Improvements in surveillance technology may make it easier to detect illicit weapons programs. Other security measures might also be appropriate to counteract various existential risks. More studies on such risks would help us get a better understanding of the long-term threats to human flourishing and of what can be done to reduce them.
Since technological development is necessary to realize the transhumanist vision, entrepreneurship, science, and the engineering spirit are to be promoted. More generally, transhumanists favor a pragmatic attitude and a constructive, problem-solving approach to challenges, preferring methods that experience tells us give good results. They think it better to take the initiative to “do something about it” rather than sit around complaining. This is one sense in which transhumanism is optimistic. (It is not optimistic in the sense of advocating an inflated belief in the probability of success or in the Panglossian sense of inventing excuses for the shortcomings of the status quo.)
Transhumanism advocates the well-being of all sentience, whether in artificial intellects, humans, or non-human animals (including extraterrestrial species, if there are any). Racism, sexism, speciesism, belligerent nationalism and religious intolerance are unacceptable. In addition to the usual grounds for deeming such practices objectionable, there is also a specifically transhumanist motivation for this. In order to prepare for a time when the human species may start branching out in various directions, we need to start now to strongly encourage the development of moral sentiments that are broad enough to encompass within the sphere of moral concern sentiences that are constituted differently from ourselves.
Finally, transhumanism stresses the moral urgency of saving lives, or, more precisely, of preventing involuntary deaths among people whose lives are worth living. In the developed world, aging is currently the number one killer. Aging is also the biggest cause of illness, disability and dementia. (Even if all heart disease and cancer could be cured, life expectancy would increase by merely six to seven years.) Anti-aging medicine is therefore a key transhumanist priority. The goal, of course, is to radically extend people’s active health-spans, not to add a few extra years on a ventilator at the end of life.
Since we are still far from being able to halt or reverse aging, cryonic suspension of the dead should be made available as an option for those who desire it. It is possible that future technologies will make it possible to reanimate people who have been cryonically suspended.[10] While cryonics might be a long shot, it definitely carries better odds than cremation or burial.
The table below summarizes the transhumanist values that we have discussed.

Core value:
Having the opportunity to explore the transhuman and posthuman realms

Basic conditions:
Global security
Technological progress
Wide access

Derivative values:
Individual freedom and individual choice in the use of enhancement technologies
Improving our individual and collective powers of understanding and our ability to implement responsible decisions
Epistemic tentativeness; readiness to reassess our assumptions as more information becomes available
Peace, international cooperation, and counteracting the proliferation of weapons of mass destruction
Pragmatism; entrepreneurship, science, and the engineering spirit
Caring about the well-being of all sentience
Saving lives (anti-aging medicine; cryonic suspension)

 

Sinny91

Banned
Local time
Today 4:19 AM
Joined
May 16, 2015
Messages
6,299
-->
Location
Birmingham, UK
I think I would enjoy this thread more if you could provide some short overviews or previews.
 

Sinny91

Banned
Local time
Today 4:19 AM
Joined
May 16, 2015
Messages
6,299
-->
Location
Birmingham, UK
My bad, I totally didn't see some of the embeds there; I was remembering the OP from before.

But yea, everything's a work in progress with us, right? ;)
 
