In our last post, we explored projections by futurist
Raymond Kurzweil that advances in computer technology, genetics, nanotechnology and robotics will, by 2050, enable humans to essentially become superintelligent human/machine entities capable of immortality.
Now we're going to posit some questions about those projections.
First, there are plenty of other "futurists" who dispute Kurzweil's vision. He takes them on, often convincingly, but there's still clearly plenty of room for debate. In 1950 there were highly intelligent visionaries who thought we'd be a lot more advanced by 2000 than we are, with flying cars, Martian colonies, robotic helpers and so forth.
We, of course, have made huge scientific and technological strides since 1900 and 1950. We've practically doubled human life expectancy since 1900. Imagine if we doubled it again by 2100--humans would regularly live to 150, with many approaching 200. The Curmudgeon would have a good chance of still being around in 2100. Will we have great, great, great, great grandchildren? Will Mrs. Curmudgeon still love us?
Other scientists believe it would be difficult to extend human life spans beyond those of today's oldest human, but that we could see a much larger percentage of people live to those extended lifespans. That would mean large numbers of people living to 110-115 years old, and the Curmudgeon perhaps making it to 2070 or so, to see how well Kurzweil's predictions fare.
On the other hand, we can't get anywhere any faster today than we could in 1950. Back then, you could drive a car at 70 mph--with a lot less congestion--and by 1960 you could take a jet plane at 600 mph. Both were a lot more expensive back then, however--now just about anyone in the western world can own an automobile or afford a jet plane flight. But we don't have supersonic flight (although we did for a while), and the plain fact is that you can't physically get anywhere today faster than you could 50 years ago. (In Europe and Asia you can go a lot faster by train than in 1950; alas, not here in the U.S.)
But then maybe we don't need to go so far anymore, as communications technology has made huge strides, connecting the world like never before. By 2020 we should be able to have easy, affordable video communications worldwide. Maybe even in 3-D.
In any event, we can easily envision a future in which the things that are familiar to us now are better: robotic cars that zip around computer-controlled highways at 150 mph; medical treatments that banish most diseases and postpone aging; nanobots that clean up pollution; household robots that do all the things we don't want to do.
But what happens when we reach two milestones that Kurzweil reasonably projects: (1) the ability, through nanotechnology, to create self-assembling nanobots capable of creating anything we can imagine by assembling atoms and molecules from scratch; and (2) artificial intelligence that is more intelligent than human intelligence?
Both pose grave risks. As Kurzweil discusses in chilling detail, it would be possible to create a nanobot capable of replicating itself, that could, in a matter of hours, consume all the organic matter on earth, turning everything into a "grey goo." (We won't go into the details, but if you want to know more, click
here.) Kurzweil believes we'll be able to create defenses against this doomsday scenario. Let's assume we can. Will this really be great for everybody? We'll discuss that in a minute, but first let's look at the artificial intelligence angle.
The danger with artificial intelligence that is smarter than us (what Kurzweil calls "strong AI") is that it, too, could self-replicate and ultimately decide it has no use for humans. Kurzweil thinks we'll work that out, too, by fusing human and machine intelligence.
Our problem is this: if we succeed in creating nanobots and robots capable of solving all our problems and doing everything for us, what are humans going to do? Kurzweil never addresses this. His answer appears to be that humans will fuse with machine intelligence, live forever, create virtual worlds and enjoy virtual life while pondering ever bigger thoughts.
(Just an aside, but when we're 1000 years old, are we still going to be nagging our 970-year-old children about their life choices? "What are you doing with that cyborg, you know she's just taking advantage of you!")
Again, our problem is this: which humans? The way things work on our current earth is that some humans--the rich ones--acquire technology first, while others lag way behind. Are the superintelligent, immortal machinohumans going to have a use for the rest of the humans? Will android robots take humans as pets, or keep them in zoos? After all, that's what WE do with the lesser animals.
The world will certainly look different. If nanobots can manufacture food, we won't need farmers. If robots can perform all the services humans now perform--only better--we won't need humans for too many jobs. Maybe not for any. So, how does our economy work at that point?
Another problem, of course, is the way humans like to use technology against each other. Will disaffected humans use older generation--but still lethal--robots and nanobots against the wealthier ones? Will religious zealots--ever more threatened by the advances of technology--unleash a doomsday scenario to fulfill their own sick prophecies?
When we make advances in technology, they bring on new dangers. Sure, we think it's great when we can take out a "terrorist" in Pakistan with a remotely fired missile on a drone aircraft. But we won't think it's such a great technology when some terrorist uses a drone to fire a missile--or drop a crude bomb--into the White House or some other important institution. Yet, having invented that technology, we'll have to defend against it.
Kurzweil's view, ultimately, is utopian. The question is whether humans really want a utopia. There is some truth to the scene in the movie "The Matrix" where Agent Smith describes what happened when the machines tried to program a perfect virtual reality world for humans: no one liked it, so they had to go back and create a world with conflict, despair, depression and so forth.
And while kids can be a pain in the butt (we have one doing his best right now), who wants a world without kids?
Maybe it's just that 2010 Curmudgeon can't understand what 2030 Curmudgeon, and 2050 Curmudgeon, will have come to accept as the norm--looking back on 2010 as a still primitive era. We hope we're still around to post our thoughts then--or beam them into your machine-enhanced heads.