Links
- STOP ESSO
- The One and Only Bumbishop
- Hertzan Chimera's Blog
- Strange Attractor
- Wishful Thinking
- The Momus Website
- Horror Quarterly
- Fractal Domains
- .alfie.'s Blog
- Maria's Fractal Gallery
- Roy Orbison in Cling Film
- Iraq Body Count
- Undo Global Warming
- Bettie Page - We're Not Worthy!
- For the Discerning Gentleman
- All You Ever Wanted to Know About Cephalopods
- The Greatest Band that Never Were - The Dead Bell
- The Heart of Things
- Weed, Wine and Caffeine
- Division Day
- Signs of the Times
Archives
- 05/01/2004 - 06/01/2004
- 06/01/2004 - 07/01/2004
- 07/01/2004 - 08/01/2004
- 08/01/2004 - 09/01/2004
- 09/01/2004 - 10/01/2004
- 10/01/2004 - 11/01/2004
- 11/01/2004 - 12/01/2004
- 12/01/2004 - 01/01/2005
- 01/01/2005 - 02/01/2005
- 02/01/2005 - 03/01/2005
- 03/01/2005 - 04/01/2005
- 05/01/2005 - 06/01/2005
- 06/01/2005 - 07/01/2005
- 07/01/2005 - 08/01/2005
- 08/01/2005 - 09/01/2005
- 09/01/2005 - 10/01/2005
- 10/01/2005 - 11/01/2005
- 11/01/2005 - 12/01/2005
- 12/01/2005 - 01/01/2006
- 01/01/2006 - 02/01/2006
- 02/01/2006 - 03/01/2006
- 09/01/2006 - 10/01/2006
- 10/01/2006 - 11/01/2006
- 11/01/2006 - 12/01/2006
- 12/01/2006 - 01/01/2007
- 01/01/2007 - 02/01/2007
- 02/01/2007 - 03/01/2007
- 03/01/2007 - 04/01/2007
- 04/01/2007 - 05/01/2007
- 05/01/2007 - 06/01/2007
- 06/01/2007 - 07/01/2007
- 07/01/2007 - 08/01/2007
- 08/01/2007 - 09/01/2007
- 09/01/2007 - 10/01/2007
- 10/01/2007 - 11/01/2007
- 11/01/2007 - 12/01/2007
- 12/01/2007 - 01/01/2008
- 01/01/2008 - 02/01/2008
- 02/01/2008 - 03/01/2008
- 03/01/2008 - 04/01/2008
- 04/01/2008 - 05/01/2008
- 07/01/2008 - 08/01/2008
Being an Archive of the Obscure Neural Firings Burning Down the Jelly-Pink Cobwebbed Library of Doom that is The Mind of Quentin S. Crisp
Thursday, December 29, 2005
The Soft Machine
I have just written an e-mail to New Scientist magazine. The text is as follows:
Dear New Scientist,
I would like to offer some comment on Ray Kurzweil's article in the September 24th issue of New Scientist, 'Human 2.0' (Sep 24th, 2005, p. 32).
Mr Kurzweil describes a future in which human beings will 'merge' with the technology of artificial intelligence, having brains and bodies enhanced by nanobots. He posits this future as quite close; specifically, he mentions the 2020s, the 2030s and the 2040s. Towards the end of the article he wisely makes the reservation that "[t]his is not a utopian vision".
I would call that an understatement. As a person who has to share the same world with Mr Kurzweil, I would have to describe the scenario he puts forward as a nightmare, or, to try and use more 'objective' language, a positive dystopia.
Mr Kurzweil further states that, "some commentators have questioned whether we would still be human after such dramatic changes. These observers may define the concept of human as being based on our limitations, but I prefer to define us as the species that seeks - and succeeds - in going beyond our limitations."
I would say that the scenario Mr Kurzweil puts forward is dehumanising not because it shows us exceeding our limitations, but for precisely the opposite reason: it ties human destiny forever to the thread of a single type of technology, therefore limiting us further. We all know how IQ tests may be culturally biased. How much more so the software of 'artificial intelligence'. The limitations themselves may prove to be 'exponential' - one of Mr Kurzweil's favourite words, it seems - in their implications.
The very title of the article gives a clue to the kinds of limitations we might face. If we are all to be permeated with software, how can we be free of the values of those manufacturing the software? We know already of Mr Gates' monopolising practices in the field of software, and how inefficient Windows and Word are. Must we have such corporate forms of corruption actually built into us?
Mr Kurzweil argues that this future is inevitable and that we must make the best of it. It seems to me that people often argue that a certain future is inevitable in order to demoralise all resistance against it when they have a vested interest. I cannot help but wonder whether that is the case here.
Yours,
Quentin S. Crisp
Comments:
"...how can we be free of the values of those manufacturing the software."
A part of us would be as rigged as the elections. What would the rest of us do?