T**R
A lot of banter, summary of sci movies
The title looked good, and so did the cover. Some of the reviewers are impressive. I work in large IT systems, AI, robotics, and devices. I found this book to be too much "What if this happened..." Watch the movies I, Robot, Transformers, Terminator, and 1984, then summarize them and you have this book. The book's point is that we have to watch out for the time (now?) when humans create an intelligent system that dwarfs us and takes over. I didn't read anything new that most sci-fi movies haven't covered related to technology ethics. However, the way this is written, there is so much what-if and speculation that reading it becomes tiresome. Sorry, didn't enjoy the book.
E**L
AI Is a Lie
Artificial general intelligence is a ghost story. Why would such an intrinsically human thing we can't define but call "intelligence" emerge in machines? This fiction stems from an intrinsic belief that improvements to technology move us along some kind of "spectrum of intelligence." There's no basis for it. Artificial intelligence is a fraudulent hoax — or in the best cases it's a hyped-up buzzword that confuses and deceives. The much better, more precise term would instead usually be machine learning, which is genuinely powerful and everyone ought to be excited about it. It's time for the term AI to be "terminated"!
Eric Siegel, Ph.D.
Author, Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die
B**C
75 pages of valid content & 325 pages of blatantly verbose treatments of simplistic tenets
Hand-wavy high-level descriptions filled with unsubstantiated "axioms" and a thin layer of mathematical logic that was laughably glued together. The book had about 75 pages of valid content and 325 pages of blatantly verbose treatments of simplistic tenets. I have to say the editor should have taken out the glib generalities that were a substitute for thought and substance. In general it was embarrassingly overblown and overly speculative, without a clue as to how modern software and semiconductor computer architecture development is conducted. A big disappointment; you should not have to fight to find bits of content that are worthy of your time. There was quite a bit of content pulled directly from several movies: WarGames, Westworld, 2001: A Space Odyssey, The Matrix, Eagle Eye, Saturn 3, The Terminator, I, Robot, and, most egregious, the Star Trek episode "Spock's Brain," which was actually voted the worst episode in all of Star Trek (reference the 2016 Star Trek convention in Las Vegas). Too bad there was not enough material lifted from Colossus: The Forbin Project (1970).
E**E
A good overview of issues in an age of AI
I picked this book up because I have a kid at Caltech majoring in AI programming and machine learning. He seems to see only upside: no real concerns about 99% of the population being put out of work, and what I believe is inadequate apprehension about what could go wrong. Mom is a huge fan of Stephen Hawking, and he was more than a bit apprehensive about the potential problems with self-learning machines. Most of the books and articles I have read on the topic are cursory or naive. Nick Bostrom's book is fairly comprehensive and in depth. I am enjoying it as much as an excellent read in philosophy of science as I am for his expanding the boundaries of the conversation, indeed, broaching it in many areas. I honestly do not know whether he says everything which needs to be said, but he has clearly thought it through and done a good deal of exploring, consulting, conversing, collaborating. It is far and away the best book I have read on the topic (though there are some good pieces in MIT Technology Review as well).

This is a book which is important and timely. We must seriously consider and weigh the potential for harm as well as good before creating a monster. While there may be areas which he has missed, I feel that when I read about a brute-force approach to building human-level AI by recreating a brain at the quantum level using Schrödinger's equation, the man is clearly pushing the boundaries. If nothing else it is a very good start to an important conversation.

I picked this up because I was considering sending a copy to my son, but read it first because he is a busy guy and chooses his side reading carefully. There are books and articles I might mention or even recommend, and others I tell him not to waste his time on; this is one I will be sending him (though I would be very, very surprised if someone at Caltech did not broach ...all of what is contained here). I will let him determine if it is redundant.
It is well written and thorough, and also very approachable. He says in the prologue that overly technical sections may be skipped without sacrificing any meaning. I have not encountered one I needed to skip, and have, in fact, very much enjoyed the level of discourse. Read it if you are in the field, to make sure you are covering all the bases. Read it if you are a scientist, philosopher, or engineer, to enjoy some very good writing. Read it if you are just encountering AI and want to quickly get up to speed on the issues. It is not only a book I would recommend, but have, to anyone who would listen ;)
I**F
Philosophical theorizing about the possible risks of some possible kinds of AI
The first chapter is an interesting, concise history of AI. The following chapters, though... I have to say that if anything, Bostrom's writing reminds me of theology. It's not lacking in rigor or references. Bostrom seems highly intelligent and well-read. The problem (for me) is rather that the main premise he starts with is one that I find less than credible. Most of the book boils down to "Let's assume that there exists a superintelligence that can basically do whatever it wants, within the limits of the laws of physics. With this assumption in place, let's then explore what consequences this could have in areas X, Y, and Z." The best Bostrom can muster in defense of his premise that superintelligence will (likely) be realized (sometime in the future) is the results of various surveys of AI researchers about when they think human-level AI and superintelligence will be achieved. These surveys don't yield any specific answer as to when human-level AI will be attained (it's not reported), and Bostrom is evasive as to what his own view is. However, Bostrom seems to think that if you don't commit to any particular timeline on this question, you can assume that at some point human-level AI will be attained. Now, once human-level AI is achieved, it'll be but a short step to superintelligence, says Bostrom. His argument as to why this transition period should be short is not too convincing. We are basically told that the newly developed human-level AI will soon engineer itself (don't ask exactly how) to be so smart that it can do stuff we can't even begin to comprehend (don't ask how we can know this), so there's really no point in trying to think about it in much detail. The AI Lord works in mysterious ways!
With these foundations laid down, Bostrom can then start his speculative tour de force that goes through various "existential risk" scenarios and the possibilities of preventing or mitigating them, the economics of AI/robot societies, and various ethical issues relating to AI. I found the chapters on risks and AI societies to be pure sci-fi, with even less realism than "assume spherical cows". The chapters on ethics and value acquisition did, however, contain some interesting discussion.

All in all, throughout the book I had an uneasy feeling that the author is trying to trick me with a philosophical sleight of hand. I don't doubt Bostrom's skills with probability calculations or formalizations, but the principle "garbage in, garbage out" applies to such tools also. If one starts with implausible premises and assumptions, one will likely end up with implausible conclusions, no matter how rigorously the math is applied. Bostrom himself is very aware that his work isn't taken seriously in many quarters, and at the end of the book, he spends some time trying to justify it. He makes some self-congratulatory remarks to assure sympathetic readers that they are really smart, smarter than their critics (e.g. "[a]necdotally, it appears those currently seriously interested in the control problem are disproportionately sampled from one extreme end of the intelligence distribution" [p. 376]), suggests that his own pet project is the best way forward in philosophy and should be favored over other approaches ("We could postpone work on some of the eternal questions for a little while [...] in order to focus our own attention on a more pressing challenge: increasing the chance that we will actually have competent successors" [p. 315]), and ultimately claims that "reduction of existential risk" is humanity's principal moral priority (p. 320).
Whereas most people would probably think that concern for the competence of our successors would push us towards making sure that the education we provide is both of high quality and widely available and that our currently existing and future children are well fed and taken care of, and that concern for existential risk would push us to fund action against poverty, disease, and environmental degradation, Bostrom and his buddies at their "extreme end of the intelligence distribution" think this money would be better spent funding fellowships for philosophers and AI researchers working on the "control problem". Because, if you really think about it, what are millions of actual human lives cut short by hunger or disease or social disarray, when in some possible future the lives of 10^58 human emulations could be at stake? That the very idea of these emulations currently exists only in Bostrom's publications is no reason to ignore the enormous weight they should have in our moral reasoning!

Despite the criticism I've given above, the book isn't necessarily an uninteresting read. As a work of speculative futurology (is there any other kind?) or informed armchair philosophy of technology, it's not bad. But if you're looking for an evaluation of the possibilities and risks of AI that starts from our current state of knowledge — no magic allowed! — then this is definitely not the book for you.
J**N
Superintelligence Explained, but Not for Dummies
It was persistent recommendation through listening to Sam Harris' fine podcasts that eventually convinced me to read this book. Nick Bostrom spells out the dangers we potentially face from a rogue, or uncontrolled, superintelligence unequivocally: we're doomed, probably.

This is a detailed and interesting book, though 35% of it is footnotes, bibliography, and index. This should be a warning that it is not solely, or even primarily, aimed at soft-science readers. Interestingly, a working knowledge of philosophy is more valuable in unpacking the most utility from this book than is knowledge of computer programming or science. But then you are not going to get a book on the existential threat of Thomas the Tank Engine from a Professor in the Faculty of Philosophy at Oxford University. A good understanding of economic theory would also help any reader.

Bostrom lays out in detail the two main paths to machine superintelligence, whole brain emulation and seed AI, and then looks at the transition that would take place from smart narrow computing to super-computing and high machine intelligence. At times the book is repetitive and keeps making the same point in slightly different scenarios. It was almost as if he was cutting and shunting set phrases and terminology into slightly different ideas.

Overall it is an interesting and thought-provoking book at whatever level the reader interacts with it, though the text would have been improved by more concrete examples so the reader can better flesh out the theories. "Everything is vague to a degree you do not realise till you have tried to make it precise," the book quotes.
R**C
A seminal book. Challenging but readable, the urgency is real.
A clear, compelling review of the state of the art, potential pitfalls, and ways of approaching the immensely difficult task of maximising the chance that we'll all enjoy the arrival of a superintelligence. An important book showcasing the work we collectively need to do BEFORE the fact. Given the enormity of what will likely be a one-time event, this is the position against which anyone involved in the development of AI must justify their approach, whether or not they are bound by the Official Secrets Act.

The one area in which I feel Nick Bostrom's sense of balance wavers is in extrapolating humanity's galactic endowment into an unlimited and eternal capture of the universe's bounty. As Robert Zubrin lays out in his book Entering Space: Creating a Space-Faring Civilization, it is highly unlikely that there are no interstellar species in the Milky Way: if/when we (or our AI offspring!) develop that far, we will most likely join a club. The Abolition of Sadness, a recent novella by Walter Balerno, is a tightly drawn, focused sci-fi whodunit showcasing exactly Nick Bostrom's point. Once you start, it pulls you in and down, as characters develop and certainties melt: when the end comes, the end has already happened...
J**Z
Great info but too detailed and overly technical, not for the general public
You can tell the author has done thorough work in studying and compiling all his thoughts and facts, but unfortunately the book ends up being really difficult to navigate for people who have a general interest in AI but lack an IT background and the appetite for all the technical details.
J**R
Worrying stuff but seriously good read!
This is very worrying stuff; it seems it should be in the realms of science fiction, and that's why it's so troubling! But it's going on all around us, at pace. A really excellent discussion of the whole issue of artificial and human intelligence. Highly recommended.