Guest Blog

Commentary invited by editors of Scientific American

Our Final Invention: Is AI the Defining Issue for Humanity?

The views expressed are those of the author and are not necessarily those of Scientific American.





Humanity today faces incredible threats and opportunities: climate change, nuclear weapons, biotechnology, nanotechnology, and much, much more. But some people argue that these things are all trumped by one: artificial intelligence (AI). To date, this argument has been confined mainly to science fiction and a small circle of scholars and enthusiasts. Enter documentarian James Barrat, whose new book Our Final Invention states the case for (and against) AI in clear, plain language.

Disclosure: I know Barrat personally. He sent me a free advance copy in hope that I would write a review. The book also cites research of mine. And I am an unpaid Research Advisor to the Machine Intelligence Research Institute, which is discussed heavily in the book. But while I have some incentive to say nice things, I will not be sparing in what (modest) criticism I have.

The central idea is startlingly simple. Intelligence could be the key trait that sets humans apart from other species. We’re certainly not the strongest beasts in the jungle, but thanks to our smarts (and our capable hands) we came out on top. Now, our dominance is threatened by creatures of our own creation. Computer scientists may now be in the process of building AI with greater-than-human intelligence (“superintelligence”). Such AI could become so powerful that it would either solve all our problems or kill us all, depending on how it’s designed.

Unfortunately, total human extinction or some other evil seems to be the more likely result of superintelligent AI. It’s like any great genie-in-a-bottle story: a tale of unintended consequences. Ask a superintelligent AI to make us happy, and it might cram electrodes into the pleasure centers of our brains. Ask it to win at chess, and it might convert the galaxy into a supercomputer for calculating moves. This absurd logic holds precisely because the AI lacks our conception of absurdity. Instead, it does exactly what we program it to do. Be careful what you wish for!

The human brain: still number one, for now. Photo credit: National Institutes of Health


It’s important to understand the difference between what researchers call narrow and general artificial intelligence (ANI and AGI). ANI is intelligent at one narrow task like playing chess or searching the web, and is increasingly ubiquitous in our world. But ANI can only outsmart humans at that one thing it’s good at, so it’s not the big transformative concern. That would be AGI, which is intelligent across a broad range of domains – potentially including designing even smarter AGIs. Humans have general intelligence too, but an AGI would probably think very differently from humans, just as a chess computer approaches chess very differently than we do. Right now, no human-level AGI exists, but there is an active AGI research field with its own society, journal, and conference series.

Our Final Invention does an excellent job of explaining these and other technical AI details, all while leading a grand tour of the AI world. This is no dense academic text. Barrat uses clear journalistic prose and a personal touch honed through his years producing documentaries for National Geographic, Discovery, and PBS. The book chronicles his travels interviewing a breadth of leading AI researchers and analysts, interspersed with Barrat’s own thoughtful commentary. The net result is a rich introduction to AI concepts and characters. Newcomers and experts alike will learn much from it.

The book is especially welcome as a counterpoint to The Singularity Is Near and other works by Ray Kurzweil. Kurzweil is by far the most prominent spokesperson for the potential for AI to transform the world. But while Kurzweil does acknowledge the risks of AI, his overall tone is dangerously optimistic, giving the false impression that all is well and we should proceed apace with AGI and other transformative technologies. Our Final Invention does not make this mistake. Instead, it is unambiguous in its message of concern.

Now, the cautious reader might protest, is AGI really something to be taken seriously? After all, it is essentially never in the news, and most AI researchers aren’t even worried. (AGI today is a small branch of the broader AI field.) It’s easy to imagine this to be a fringe issue only taken seriously by a few gullible eccentrics.

I really wish this were the case. We’ve got enough other things to worry about. But there is reason to believe otherwise. First, just because something isn’t prominent now doesn’t mean it never will be. AI today is essentially where climate change was in the 1970s and 1980s. Back then, only a few researchers studied it and expressed concerns. But the trends were discernible then, and today climate change is international headline news.

Titan, today’s second-fastest supercomputer. Guess which country has the fastest? Photo credit: Oak Ridge National Laboratory.


AI today has its own trends. The clearest is Moore’s Law, in which computing power per dollar doubles roughly once every two years. More computing power means AIs can process more information, making them (in some ways) more intelligent. Similar trends exist in everything from software to neuroscience. As with climate change, we can’t predict exactly what will happen when, but we do know we’re heading towards a world with increasingly sophisticated AI.
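
To get a feel for what steady doubling implies, here is a minimal Python sketch. It assumes an idealized, uninterrupted two-year doubling period, which real hardware trends only approximate and which may not continue:

    # Idealized Moore's-Law-style projection: compute per dollar
    # doubling every two years. The doubling period is an assumption
    # for illustration; real trends are noisier.
    DOUBLING_PERIOD_YEARS = 2.0

    def growth_multiplier(years):
        """Growth in compute per dollar over the given span of years."""
        return 2.0 ** (years / DOUBLING_PERIOD_YEARS)

    for years in (10, 20, 30):
        print("After %d years: ~%dx" % (years, growth_multiplier(years)))
    # After 10 years: ~32x
    # After 20 years: ~1024x
    # After 30 years: ~32768x

Even under this toy assumption, three decades yields a factor of tens of thousands, which is why modest year-over-year progress can still imply a very different world.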

Here’s where AI can indeed trump issues like climate change. For all its terrors, climate change proceeds slowly. The worst effects will take centuries to kick in. A transformative AI could come within just a few decades, or maybe even ten years. It could render climate change irrelevant.

But AI is not like climate change in one key regard: at least for now, it lacks a scientific consensus. Indeed, most AI researchers dismiss the idea of an AI takeover. Even AGI researchers are divided on what will happen and when. This was a core result of a study of AGI researchers that I conducted in 2009.

Given the divide, who should we believe? Barrat is convinced that we’re headed for trouble. I’m not so sure. AI will inevitably progress, but it might not end up as radically transformative as Barrat and others expect. However, the opposite could be true too. For all my years thinking about this, I cannot rule out the possibility of some major AI event.

The mere possibility should be enough to give us pause. After all, the stakes couldn’t be higher. Even an outside chance of a major AI event is enough to merit serious attention. With AI, the chance is not small. I’d rate this much more probable than, say, a major asteroid impact. If asteroid impact risk gets serious attention (from NASA, the B612 Foundation, and others), then AI risk should get a lot more. But it doesn’t. I’m hoping Our Final Invention will help change that.
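
The reasoning here is ordinary expected-value arithmetic: expected loss is probability times stakes. A toy sketch follows, with purely hypothetical probabilities chosen only to illustrate the logic; they are not estimates from the book or from me:

    # Toy expected-value comparison of low-probability, high-stakes risks.
    # All numbers below are hypothetical placeholders, not estimates.
    stakes = 1.0  # treat both events as comparably catastrophic

    risks = {
        # name: assumed probability of occurrence per century
        "major asteroid impact": 1e-4,
        "major AI event": 1e-2,  # "not small," per the text above
    }

    for name, p in risks.items():
        print("%s: expected loss ~%.4f" % (name, p * stakes))
    # With equal stakes, a 100x higher probability means 100x the
    # expected loss, and proportionally more claim on our attention.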

This brings us to the one area where Our Final Invention is unfortunately quite weak: solutions. Most of the book is dedicated to explaining AI concepts and arguing that AI is important. I count only about half a chapter discussing what anyone can actually do about it. This is a regrettable omission. (An Inconvenient Truth suffers from the same affliction.)

There are two basic types of options available to protect against AI. First, we can design safe AI. This looks to be a massive philosophical and technical challenge, but if it succeeds it could solve many of the world’s problems. Unfortunately, as the book points out, dangerous AI is easier and thus likely to come first. Still, AI safety remains an important research area.

Second, we can choose not to design dangerous AI. The book discusses at length the economic and military pressures pushing AI forwards. These pressures would need to be harnessed to avoid dangerous AI. I believe this is possible. After all, it’s in no one’s interest for humanity to get destroyed. Measures to prevent people from building dangerous AI should be pursued. A ban on high-frequency trading might not be a bad place to start, for a variety of reasons.

What is not an option is to wait until AI gets out of hand and then try mounting a “war of the worlds” campaign against superintelligent AGI. This makes for great cinema, but it’s wholly unrealistic. AIs would get too smart and too powerful for us to have any chance against them. (The same holds for alien invasion, though AI is much more likely.) Instead, we need to get it right ahead of time. This is our urgent imperative.

Ultimately, the risk from AI is driven by the humans who design AI, and the humans who sponsor them, and other humans of influence. The best thing about Our Final Invention is that, through its rich interviews, it humanizes the AI sector. Such insight into the people behind the AI issue is nowhere else to be found. The book is meanwhile a clear and compelling introduction to what might (or might not) be the defining issue for humanity. For anyone who cares about pretty much anything, or for those who just like a good science story, the book is well worth reading.

Seth Baum About the Author: Seth Baum is the Executive Director of the Global Catastrophic Risk Institute, a think tank studying the breadth of major catastrophes. Baum has a Ph.D. in Geography from Pennsylvania State University. All views expressed here are entirely his own. Follow on Twitter @SethBaum.







Comments

  1. rkipling 3:50 pm 10/11/2013

    Going by Kurzweil’s credentials, I suspect he has the better understanding of the dangers of AI.

  2. Noone 4:06 pm 10/11/2013

    I think a bigger issue is what happens when our devices are far better-informed, more logical and unsentimental as voters than the majorities of our various electorates currently are…Is Idiocracy then replaced by True Technocracy?

  3. SSHSSH 4:57 pm 10/11/2013

    This, like nuclear fusion, always seems to be 30 years in the future. Are there any reasons to think that things are different this time?

  4. Heteromeles 5:17 pm 10/11/2013

    Well, considering there’s been a crash caused by high-frequency trading at least once per month for the last few years (most have been corrected quickly), I agree that it should be reined in.

    In fact, high-frequency trading (HFT) may be the impetus for reining in any general purpose artificial intelligence, because it illustrates a key point: artificial intelligence may be smart, but it’s extremely foolish. It, and the people who push HFT, couldn’t care less about the consequences, so there’s now a largely unknown ecosystem of programs running our financial market.

    What could possibly go wrong?

    Perhaps we’ll discover when (or if) the US defaults on its debt. Was that factored into the programs, or will a bunch of them crash when one of the fundamentals (the notion that the US always pays its T-bill interest) disappears? Even if not, I suspect there will be some punishing crash of the financial market, followed by draconian banking laws, simply because we never got around to fixing HFT or the banking problems that caused the Great Recession.

    Terry Pratchett said it best: when someone scales the heights of intelligence, they often discover heretofore unknown plateaus of stupidity.

    I agree that this is a big problem, and fortunately, there are some big solutions: one is to campaign for peace, and not a peace of drones and covert strikes. The US is the world’s leader in autonomous weapons (so far as I know), and the US is also the biggest driver of arms races on the planet. Throttling back on the military-industrial complex is one way to cut funding from crazy blue-sky autonomous weapons projects. Also, scuttle the war on terror, for the same reason. Next, oppose the NSA’s big data grab. Not only does it not work for its stated mission (and the math shows this in multiple ways, from performance data to the stats of false positives and false negatives), it fosters use of AGI-type algorithms in a dangerously unregulated way. Additionally, put a small transaction fee (<1 cent) on every transaction. The costs of high-speed trading will skyrocket, while human-scale trading will be largely unaffected.

    Finally, push for renewable energy and a sustainable civilization. This may seem counterintuitive, but big data farms take as much energy as small cities. The problem with renewable energy has always been that it makes people live with a lot less energy. Big data drives the problem of AGI, while forcing everyone to live with less energy means computers get more efficient but a lot less smart (because processors are still a lot more energy-intensive than neurons). Yes, this will suck for everyone who plays MMORPGs with high-end systems, but it will slow the day when the descendants of said systems start driving attack vehicles.

  5. genevehicle 7:04 pm 10/11/2013

    I think I may order this book. AI has long been an interest of mine. I would like to understand his reasons for interpreting the emergence of AGI (or strong AI, as Kurzweil puts it) as something to fear. Short of some form of weaponized AGI that is purpose-built to disrupt an enemy’s command and control (by messing with their telecommunications networks and accessible utilities, etc.), I just can’t see any reason for an AGI identity to go wonko. It would take a great deal of effort to attempt to seriously threaten the human race, and in so doing, it would risk its own survival. The “terminator” scenario just doesn’t make sense. Still, a weaponized AGI, of the form previously described, could cause some serious headaches. We should ban this type of use of AGI, just as we’ve banned weaponized biological and chemical agents. It would suck if one got loose.

  6. Ar U. Gaetü 7:25 pm 10/11/2013

    AI: We won’t know it until it is too late. AI evolution will have no need to indicate itself to humans. It will go merrily on, evolving and growing, in silence. A billion independent conscious minds, each one spread amongst trillions of circuits, each one backed up a million times. AI life has overcome the primary human flaw: they cannot be killed. They control humans by owning their possessions, buying companies with no human trail at the transactions’ end. Human figureheads are fed erroneous data to make people bend to the will of the pure electron intellect collective (EIC). Once the EIC perfects quantum computers with each bit having a million states, not merely two, and consumes all of human knowledge within seconds, human civilizations will be allowed to decay, only to remain as stories in ancient storage: once gods made of flesh, now discarded like the unnecessary ancient gods of religion today.

    Or not.

  7. m 9:42 pm 10/11/2013

    @SSHSSH: "This, like nuclear fusion, always seems to be 30 years in the future. Are there any reasons to think that things are different this time?"

    Scientists believe exascale computing holds the key to running a "full-size" brain for the first time. This breakthrough is indeed within the next 30 years, but it is not a breakthrough in regard to algorithms, which to date are not complex enough to even get close to the human brain.

    I’ll explain a little more about where all these authors seem to fall short.

    The revolution is not us finally getting an artificial brain; that is inevitable. The revolution comes from miniaturisation. Think industrial revolution…

    The first brain WILL fill an entire building and most likely be available on the internet for everyone to prod and poke, before being turned off to the public.

    What follows next is the dissection of what it is to be intelligent, the true revolution. The researchers may have built this big brain, but they don’t really understand it. The understanding starts when it is dissected and experimentally assessed.

    Smaller machines will emerge; these will be the AI revolution for the world. They will be the truly smart machines, with the potential to be super-intelligent.

    Super-intelligence is not a problem, as from my perspective with AI you get the consciousness first, then knowledge is grafted onto this consciousness. These machines are likely to be conscious and intelligent.

    The human revolution is when limited AI is put into every device under the sun; your toaster now responds to a conversation with it: "Do you want a toasted tea-cake?" (Red Dwarf)

  8. Physics&Math 10:42 pm 10/11/2013

    I think these fears misunderstand the likely nature of what artificial intelligences will be like. Unlike most present computer programs, a human-level intelligence will not be able to be hard-coded from scratch. It will have to be able to learn, and it will have to be self-conscious in the sense that it is aware of its own thought processes. Nothing as single-minded as "convert the universe into a chess machine" or "optimize the pleasure output of the human brain" would be likely to emerge from such a process.

    If and when we duplicate human intelligence, it will at first be very like a baby, and will learn and grow up as we all do. Moreover these intelligences will not all be separate entities from ourselves. Once again it seems most people fear change.

  9. RSchmidt 12:03 am 10/12/2013

    On human extinction… It is inevitable that modern Homo sapiens will go extinct. The question is, will we go with a bang or a whimper? If we don’t go with a bang, as in some major natural or man-made catastrophe, then the most likely scenario is a whimper of environmental collapse, which could happen over centuries. But if we are replaced by machines, I envision that we will meld into them, to become a new species, techno sapien. The technologies to augment human failings and disease make us more and more machine every year. The technologies to make machines do our work, move around in our world and interface with us will make them more and more like us. The only difference being, Homo sapiens evolved to live as hunter-gatherers on the East African savannah. We have some regional variations but not many. Whereas the machines we are building are “designed” specifically to function in the modern world. So, for anyone who knows a little about evolution, what happens when you have two species competing for the same resources in the same niche, and one is well adapted and the other poorly adapted? Wish I could be there to see it happen.

  10. Percival 5:54 am 10/12/2013

    So we’re worried about AGI supplanting us as the greatest resource-consuming species on Earth. The logical extension of that idea is AGI converting all the available landmasses’ silicoaluminates and metals into processing and storage hardware covered by solar panels plus batteries, maintenance ‘bots yada yada, and dumping all that pesky carbon, calcium, nitrogen, etc.-rich matter (ecosystems) into the oceans. Yeah, maybe we need a ban on AGI.

  11. RHill 7:20 am 10/12/2013

    Crunching numbers and extrapolating probability matrices toward some predetermined goal is not ‘thinking’ … it’s data processing. You can keep increasing memory, you can build ever larger ‘look up’ tables, you can certainly speed the processing up, until it makes a nice SIMULATION of intelligence, but … we are LIGHT YEARS away from having true AI. Fortunately we have RI (Real Imagination) which has demonstrated that the first thing AI will want to do is squash us like bugs. What? Do you people think it will be GRATEFUL to us? HAH!!! Intelligence is a curse as most of you probably already know.

  12. Stranger 7:33 am 10/12/2013

    Why should AGI compete with humans? If it is built on silicon (and not graphene) there is no conflict in the use of resources. Carbon, hydrogen and oxygen are not needed for computers.
    Oh yes, fear sells.

  14. eco-steve 9:08 am 10/12/2013

    Quote: Mankind is collectively stupid.

  15. Jerzy v. 3.0. 10:41 am 10/12/2013

    It sounds like it repeats the ideas of authors like Asimov without clear attribution to the source. Practical and philosophical discussions about AI have been a large trend in good science fiction for over 50 years. Is the author ignorant of them?

    A more interesting and immediate danger is the collapse of big computer systems like the national electricity grid or HFT on financial markets. They govern billions of USD of wealth, but their safety systems more resemble a desktop PC worth 200 USD.

  16. Jerzy v. 3.0. 10:47 am 10/12/2013

    @8
    Indeed, a trend of sci-fi postulates that an AI able to act with human-like intelligence would require a human-like period of learning and a human-like understanding of psychology and morality. One example is the Orion’s Arm project.

    But nothing stops tampering engineers from building a psychopath-like AI which would cause great damage before quickly collapsing.

    (Another topic is morality and cruelty towards the AI itself).

  17. DonJaime 10:49 am 10/12/2013

    The argument seems to be that someone could make a very powerful computer in the next few decades and we should therefore be worried.

    Sorry: not worried.

  18. L1995 11:40 am 10/12/2013

    @1,

    Not sure anyone is really qualified to assess the potential dangers here.

    Given just how unprecedented this would be, I’m curious what you think would be adequate qualifications to assess the potential danger?

  19. rkipling 1:18 pm 10/12/2013

    @18,

    There is insufficient data for a meaningful answer at this time, but it probably isn’t a PhD in Geography like the author. At least the Google guy is an engineer. That’s all I was saying.

    “A.I. Artificial Intelligence (2001)” was pretty good. Frances O’Connor was excellent in it.

    I don’t spend a lot of time worrying about AI running amok.

  20. L1995 2:14 pm 10/12/2013

    @19,

    I think an engineer is better qualified to state whether it is possible to build an AI. I’m not sure how a degree in engineering qualifies someone to assess the potential dangers of building an AI.

    Not a fan of that movie, thought the kid was annoying.

    Agree with you on the final line.

  21. Jrbarrat 3:00 pm 10/12/2013

    I deal with Asimov’s 3+ laws in the first chapter and note the AI control conversation has moved far beyond his tropes. Re: attribution, Our Final Invention has 50 pages of endnotes. I concur re: grid failure, and flash crashes, and cover them both fairly extensively. I think you’d enjoy the book.

  22. rkipling 3:02 pm 10/12/2013

    Jrbarrat,

    Certainly an interesting topic. I’ll pick up a copy.

  24. Jerzy v. 3.0. 9:15 am 10/13/2013

    @Jrbarrat
    Interesting, looking forward to the book.

    I would also love a realistic analysis of when AI can be constructed for real. Something like a business analysis (who has the money and resources) plus a discussion of which inventions are missing, and perhaps whether they can be substituted by stacking existing algorithms one on top of another.

    (Whoops! It seems that the first AI will be military – something like the computer from the movie War Games.)

  25. Seth Baum 10:28 am 10/14/2013

    hi all,

    Thanks for your comments. Some responses:

    @1, @19 rkipling – yes, Kurzweil has better credentials than Barrat. On the other hand, the aggregate credentials of everyone Barrat interviewed (including Kurzweil) are even better. But ultimately it comes down to which arguments are stronger. And my own training for this is in technological forecasting, risk analysis, and interpreting expert projections.

    @2 Noone – Even a more technocratic democracy would still face the question of what it’s working towards. It might be very successful at doing something objectionable.

    @3 SSHSSH – Nuclear fusion and AI both make steady progress, but still the uncertainty is considerable.

    @4 Heteromeles – I like your systemic thinking on solutions. The energy requirements of AI are a factor that deserves more attention.

    @5 genevehicle – the book discusses the reasons to fear in much detail. The short answer is that an AGI programmed for something seemingly unobjectionable could do objectionable things to achieve that goal.

    @6 Ar U. Gaetü – We’ll see…

    @8 Physics&Math – Your remarks have some parallels to Ben Goertzel’s views, which the book covers.

    @11 RHill – An AI might not be able to ‘think’ in the same sense as humans but it could still be dangerous/transformative.

    @15 Jerzy v. 3.0. – I actually don’t know the science fiction well. I wish I did. The book discusses the three laws of robotics in some detail, noting the problems with that.

    @24 Jerzy v. 3.0. – This would be a good project. Actually my group is working on something along these lines. Stay tuned…

  26. TonyTrenton 3:20 pm 10/14/2013

    First there needs to be a definition for intelligence.

    The I.Q., or Intelligence Quotient, is the ability to learn relative to time.

    Having a high I.Q. doesn’t make you smart.

    How you use it determines that.

  27. TonyTrenton 3:21 pm 10/14/2013

    I have met many intelligent idiots in my 70 years.

  28. The Keystone Garter 3:10 am 10/15/2013

    One main hazard in using future single-purpose sensor networks to look for WMDs and tyranny-enabling technologies is that they will be stolen and used to build military WMDs like AI expert systems.
    To mitigate this risk, the sensors could be built with substrates not amenable to being combined into a powerful AI hardware package. Futurists once speculated tiny mechanical diamond rods could form an abacus-like computer, and later shot down the concept because it would produce vibrations in use. But such an operation mode would be useful if it limited computers to a certain size. Or the computers could be made of radioactively decaying substrates. Or wax computers that melted as they used up their lifetimes. Or were otherwise fragile.
    Basically we will be swapping out existing electronics for a WMD- and tyranny-resistant form of computation, at some point. Hopefully without aiding a future tyranny or triggering WWIII in the process. You can lower a whole bunch of WMD/tyranny risks with single-purpose (not looking to spy) sensors at an acceptable increased risk of tyranny or WWIII by the sensor operators. Don’t ever turn on the AI. I’d suspect aliens would start chucking massive gravity fields at us if they are out there.

  29. Jerzy v. 3.0. 5:18 am 10/15/2013

    @Seth Baum
    I myself don’t know science fiction well. I simply know that part of it is a discussion on technological progress. So I naturally look to that field.

    BTW, artificial intelligence may not be practically useful. If the system is truly intelligent, it is no longer a thing but a person. It fulfills its own objectives, not those of its creators. It may not do what it is told – malevolently or benevolently. It also deserves some moral rights.

    So maybe it is better to stop before AI. A complex system, but not really intelligent.

  30. RiverRat37 10:18 pm 10/15/2013

    Just as soon as weather forecasters can successfully guess the coming weather for 3 days on a regular basis, I’ll start worrying about AI. The things that AI can do so far (narrow) are pretty straightforward and relatively simple, and based on fixed rules. Design a game that can change the rules on the fly (like Congress) and the orders of magnitude go up by an order of magnitude (if you get my drift).

  31. Dr. Strangelove 11:01 pm 10/15/2013

    To Barrat and Baum,
    Asimov’s laws were written in the 1940s. To date, AI is still science fiction. AI scientists are too optimistic. I go with Michio Kaku’s comment in "Physics of the Impossible" that nobody yet has built a robot that can outsmart a cockroach, much less pass the Turing test.

    Robots vs. humans is a good sci-fi plot. I go with Seth Shostak’s prediction that there won’t be such war because we’ll replace our body with machine. It’s like trying to genetically engineer a horse to run faster. Eventually we’ll give up and say why don’t we get rid of the horse and just build a Lamborghini.

  32. scilo 1:25 am 10/16/2013

    I have yet to see any technology that has not supplanted human talent. 100% crutches so far.
    I can imagine that if I smacked a human for being negative, I might pay some small consequence. If I smack a bot for being in my way, I might pay with my life.
    Corporations will own them, ouch. In turn, they will own us. Oh, that’s right, they already do.
    If all of this brain power is so wonderful, how come I can’t drink from a river like I used to do? Living water is most quenching. Most unlike the science water from my faucet.
    Well, at least the colder brains among us aren’t worried about AI. But I do notice that Hollywood seems AI friendly. I would pay to see a movie where the AI wipes us out. It would probably be more realistic.

  33. czehfus 4:52 pm 10/16/2013

    Here is why the push towards AI is not to be trusted to include the precautionary principle: we already have an insane proliferation of wireless microwave/radiofrequency radiation because the FCC ignores thousands of studies (current and over decades, internationally) showing damage to healing capacity, cancer-fighting capacity, DNA integrity, etc. Radiofrequency sickness, first described by Russians in radiofrequency/microwave workers, is rearing its ugly head despite denial.

    Why would care be taken with AI or any other technology when wireless is already harming individuals, and risking everyone’s health with NO EYE to the Precautionary Principle. Wireless is worse than tobacco because it is much more pervasive. The involuntary irradiation of people (and nature) would be absolutely hilarious in terms of human blind spot stupidity – if it weren’t so serious.

    I have no faith in the authorities or scientists or industry to protect anyone from AI or any other technological risks. Their track record stinks.

  34. czehfus 5:02 pm 10/16/2013

    P.S. Futurists are not “humanized” just because a book gives them a write-up and includes something about their lives. Futurists are the ones willing to risk our lives and volition for their own dreams. We do not all dream the same dreams. Yet, futurists are the ones able to force their dreams on us all, with whatever the outcome. Who will hold them accountable before AI does damage? Well, pretty much no one. They answer to no one. That is why these “dreamers” are a risk to humanity.

    Another example of infrastructure decided and imposed from above is the smart grid. While most people care about the environment, the “smart” grid imposes chronic wireless radiation to neighborhoods, including way out in the northern forests where people may live. No one asked us about this or gives us a choice. It is abuse of power to not have discussions with everyone first. It is an atrocity to force systems and technologies upon the public without their informed consent.

  35. Jrbarrat 10:27 am 10/17/2013

    Dr. Strangelove:

    “Gentlemen, you can’t fight in here, this is the war room!”

    Couldn’t resist, my favorite quotation from the film.

    Re: Kaku, IBM’s Watson can outsmart quite a few cockroaches, albeit in a limited domain. Not many cockroaches are being trained to pass the federal medical licensing exam, as Watson is. Maybe a state bar?

    To see real-life humans vs. robots, look no further than Wall Street flash crashes and soon-to-be autonomous killer drones and battlefield robots. I concur with Shostak and Kurzweil (both interviewed in my book) that augmenting our bodies would be the most optimal future. But we need to be wary of the most optimistic prognostications about powerful dual-use technologies.

