
Being an Archive of the Obscure Neural Firings Burning Down the Jelly-Pink Cobwebbed Library of Doom that is The Mind of Quentin S. Crisp

Thursday, December 29, 2005

The Soft Machine

I have just written an e-mail to New Scientist magazine. The text is as follows:

Dear New Scientist,
I would like to offer some comment on Ray Kurzweil's article 'Human 2.0' in the September 24th issue of New Scientist (p. 32).

Mr Kurzweil describes a future in which human beings will 'merge' with the technology of artificial intelligence, having brains and bodies enhanced by nanobots. He posits this future as quite close: specifically, he mentions the 2020s, the 2030s and the 2040s. Towards the end of the article he wisely makes the reservation that "[t]his is not a utopian vision".

I would call that an understatement. As a person who has to share the same world with Mr Kurzweil, I would have to describe the scenario he puts forward as a nightmare, or, to try and use more 'objective' language, a positive dystopia.

Mr Kurzweil further states that "some commentators have questioned whether we would still be human after such dramatic changes. These observers may define the concept of human as being based on our limitations, but I prefer to define us as the species that seeks - and succeeds - in going beyond our limitations."

I would say that the scenario Mr Kurzweil puts forward is dehumanising not because it shows us exceeding our limitations, but for precisely the opposite reason: it ties human destiny forever to the thread of a single type of technology, thereby limiting us further. We all know how IQ tests may be culturally biased. How much more so the software of 'artificial intelligence'. The limitations themselves may prove to be 'exponential' - one of Mr Kurzweil's favourite words, it seems - in their implications.

The very title of the article gives a clue to the kinds of limitations we might face. If we are all to be permeated with software, how can we be free of the values of those manufacturing the software? We know already of Mr Gates' monopolising practices in the field of software, and how inefficient Windows and Word are. Must we have such corporate forms of corruption actually built into us?

Mr Kurzweil argues that this future is inevitable and that we must make the best of it. It seems to me that people with a vested interest in a certain future often argue that it is inevitable in order to demoralise all resistance to it. I cannot help but wonder whether that is the case here.

Yours,
Quentin S. Crisp
Comments:
"...how can we be free of the values of those manufacturing the software."

A part of us would be as rigged as the elections. What would the rest of us do?
 