By CR on Thursday, 28 June 2018
Category: Race, Culture, Nation

Dietrich on Strong AI and Human Replacement, by Chris Knight

     While surfing the net in my full-body rubber suit, I came across this article by philosopher Eric Dietrich, who argues the following:

“Recently on the History Channel, artificial intelligence (AI) was singled out, with much wringing of hands, as one of the seven possible causes of the end of human life. I will argue that this wringing of hands is quite inappropriate: the best thing that could happen to humans, and to the rest of life on planet Earth, would be for us to develop intelligent machines and then usher in our own extinction.”
https://philosophynow.org/issues/61/After_The_Humans_Are_Gone

     No matter what is proposed, there is always a philosopher to champion it, unless it is some prescribed, sacred, politically correct doctrine, in which case they will be as silent as mice hiding from a hungry cat. Anyway, let’s look at the philosopher’s arguments, since arguments are their big thing. Dietrich’s argument against humanity is that humans commit evil acts; child abuse, rape, and environmental destruction are his examples. I was hoping that he might plug in “racism,” but that list will do. However, he does not then go on to show that these negatives vastly outweigh the positives, which he also accepts exist, such as art and science. In fact, even on the child abuse issue, his argument fails, since the great majority of children are not abused. Most women, at least in homogeneous societies, are not raped, though that is changing by design. Nevertheless, don’t let any of this get in the way of proposing that AI replace humans!

“Humankind shouldn’t just go extinct. There are things about us worth preserving: art and science to name two. Some might think that these good parts of humanity justify our continued existence. This conclusion no doubt used to be warranted, before AI became a real possibility. But now it no longer is. If we could implement the better angels of our nature in machines, then morally we should; and then we should exit, stage left. So let’s build a race of machines – Homo sapiens 2.0 – that incorporate only what is good about humanity, that do not feel any evolutionary tug to commit certain evils against others of their own kind, and that let the rest of the world live in peace. And then let us – the humans – exit, leaving behind a planet populated with nice machines, who, while not perfect angels, will nevertheless be a vast moral improvement over us. One way to do this would be to implement in the machines our best moral theories, in such a way that the machines do not draw invidious distinctions for example.

“These best theories see morality as comprising universal truths, applying fairly to all beings. One such truth is that it is normally wrong to harm another being. (I say ‘normally’ because even in a better, machine society, it is likely there will be bad or defective machines, and these must be dealt with.) What are the prospects for building such a race of robots? They seem moderately good to me. The theories and technologies for building a human-level robot seriously elude us at the present, but we already have, I think, the correct foundational theory – computationalism (I have argued for this many times in various places; see Dietrich, Thinking Computers and Virtual Persons, and Dietrich and Markman, Cognitive Dynamics). Assuming that computationalism is correct, then it is only a matter of time before we figure out which algorithms govern the human mind. Once we know this, we could, with careful diligence, remove at least some of the parts responsible for us behaving abominably. Then after building such a race of machines, perhaps we could bow out with some dignity – with the thought that we had finally done the best we could do.”

     Details on the bowing out, though, are conveniently omitted. In all good philosophy fairy tales there is also a bad guy, and the counter-argument he considers is this:

“With such a hard-nosed view of their world and their place in it, the machines won’t feel any angst, nor awe and wonder. Lacking these states (it is not that they can’t feel awe and wonder, it is that they don’t), they will not be driven to do art and science. They will not take risks. Since they can’t be cowards, they won’t be heroes. Something incalculably important will be lost, therefore, if we replace ourselves with these machines. No matter how good they are, no matter how much better for the other life on planet Earth, if we engineer these creatures and then embrace our own extinction, we will be extinguishing something profound, beautiful, and important.”

     He then sets out to refute that. But there is a better argument against this whole scheme: flawed humans are attempting to bootstrap their superiors. Who says that this will work? Surely it is more probable than not that, if humanity is as flawed as Dietrich says it is, its ultimate creation could be even worse! Perhaps an evil robot like Ultron, as seen in the movie Avengers: Age of Ultron, would be created: one hostile to all life, which sets out to destroy everything, even plants and micro-organisms.

     Clearly, this transhuman agenda is flawed because it rests on a blind faith in the goodness of technology. There is no reason whatsoever to believe that this faith is justified. In fact, perhaps inconsistently, Dietrich has a book with V. G. Hardcastle, Sisyphus’s Boulder: Consciousness and the Limits of the Knowable (2005), which argues for intrinsic limits to human knowledge as part of the human cognitive condition. I have not read the book, and may not understand it, since there is so much that I do not understand, but this would seem to raise grave doubts about transhumanism from the foundations up.
