Superintelligence

2014

Swedish philosopher Nick Bostrom argues that if machine brains surpass human brains in general intelligence, the resulting superintelligence could replace humans as the dominant life form on Earth. In his book Superintelligence, Bostrom examines the existential risks that would emerge from artificial intelligence overpowering humanity.

Philosopher David Chalmers has also argued that artificial intelligence is very likely to achieve superintelligence. Chalmers holds that the human brain is a mechanical system, so scientists and engineers should be able to replicate it by assembling the right physical materials. Alternatively, they could use computational simulations of evolution, the same process that produced human intelligence, to evolve a human-like intelligence. And unlike a biological brain, an artificial intelligence could then keep improving in ways of its own.
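
To make the evolutionary route concrete, here is a minimal sketch of the kind of selection loop Chalmers has in mind; this is my own toy example, and the bit-string genome, fitness function, and parameters are all arbitrary assumptions, not anything from Chalmers.

```python
import random

# Toy evolutionary loop (illustrative sketch only): a population of
# bit-string "genomes" mutates, and the fittest survive each generation.
TARGET = [1] * 20  # hypothetical "fit" genome the population evolves toward

def fitness(genome):
    # Number of positions that match the target genome.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Flip each bit independently with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    # Keep the fittest half; refill with mutated copies of survivors.
    survivors = population[:25]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(25)]

print(f"generation {generation}: best fitness {fitness(population[0])}/{len(TARGET)}")
```

Selection plus mutation is enough to climb toward the target; Chalmers's conjecture is that the same loop, run over vastly richer genomes and environments, could produce intelligence.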

A strong artificial intelligence could also improve itself, a capability known as “recursive self-improvement.” Because the system would keep getting better at the very task of improving, its capability could escalate dramatically into a superintelligence, a scenario known as an “intelligence explosion.” Such a system would surpass human intelligence and produce discoveries and knowledge of its own.
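
As a loose illustration of why “improving at improving” escalates so sharply, here is a toy growth model; it is my own sketch, not Bostrom’s, and the growth rule and parameter values are arbitrary assumptions.

```python
# Toy model of recursive self-improvement (illustrative only): each
# cycle's growth rate scales with current capability, so the system
# gets better at getting better.

def intelligence_explosion(capability=1.0, base_rate=0.1, cycles=15):
    """Return the capability level after each self-improvement cycle."""
    history = [capability]
    for _ in range(cycles):
        # More capable systems improve themselves at a higher rate.
        capability *= 1 + base_rate * capability
        history.append(capability)
    return history

for cycle, level in enumerate(intelligence_explosion()):
    print(f"cycle {cycle:2d}: capability {level:10.1f}")
```

Capability crawls for the first ten cycles and then jumps by orders of magnitude per cycle; that accelerating shape, not the particular numbers, is the point of the intelligence-explosion argument.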

Some computer components already greatly outpace their biological counterparts. Bostrom notes, “Biological neurons operate at a peak speed of about 200 Hz, a full seven orders of magnitude slower than a modern microprocessor (~2 GHz).” Neurons also transmit spike signals along axons at no more than 120 m/s, “whereas existing electronic processing cores can communicate optically at the speed of light.” A “speed superintelligence” of this kind would do everything a human mind does, only much faster, by running on computer circuitry rather than neurons. These raw hardware advantages are evidence that a superintelligence is physically possible.
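
Bostrom’s “seven orders of magnitude” is just the ratio of the two clock speeds; as a quick arithmetic check:

$$\frac{2\ \text{GHz}}{200\ \text{Hz}} = \frac{2 \times 10^{9}\ \text{Hz}}{2 \times 10^{2}\ \text{Hz}} = 10^{7}$$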

Computers can also scale up their computational capacity, so their “brain” could become much larger than any human brain. Bostrom further speculates about a collective superintelligence: many separate reasoning systems that communicate and coordinate so effectively that, acting together, they greatly surpass the sum of their parts.

The quality of reasoning itself can improve, too. Humans already outperform non-human animals in faculties such as long-term planning and language use; a machine intelligence could outperform humans in the same qualitative way.

Bostrom describes catastrophic outcomes in which a superintelligence leads to the extinction of humanity. He also weighs objections to this reasoning, though: humans may have the capacity to shape and control the outcome in their favor.
