Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.
Stephen Hawking, English theoretical physicist
I visualise a time when we will be to robots what dogs are to humans, and I’m rooting for the machines.
AI doesn’t have to be evil to destroy humanity. If AI has a goal and humanity just happens to be in the way, it will destroy humanity as a matter of course, without even thinking about it, no hard feelings.
By far, the greatest danger of artificial intelligence is that people conclude too early that they understand it.
Eliezer Yudkowsky, American AI theorist
By their very nature, heuristic shortcuts will produce biases, and that is true for both humans and artificial intelligence, but the heuristics of AI are not necessarily the human ones.
What all of us have to do is to make sure we are using AI in a way that is for the benefit of humanity, not to the detriment of humanity.
People are spending way too much time thinking about climate change, and way too little thinking about AI.
It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers… They would be able to converse with each other to sharpen their wits. At some stage, therefore, we should have to expect the machines to take control.
The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.
Edsger W. Dijkstra, Dutch computer scientist
A year spent in artificial intelligence is enough to make one believe in God.
Alan Perlis, American computer scientist
AI will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies.