Watch our interview with Quintin Pope on YouTube
The astonishing rise of large language models like ChatGPT has transformed the conversation around AI and prompted many people to take AI safety far more seriously.
Believe it or not, human evolution is one of the main data points AI safetyists use in making their arguments. The basic claim is that, once upon a time, human beings evolved a general intelligence. When the cognitive powers of our distant ancestors reached a certain point, they "broke alignment" with natural selection: they could pursue things they wanted in ways that didn't produce more offspring in the next generation (contraception is the classic example).
The fear is that AI systems will do something similar in the not-too-distant future. Once they pass an invisible threshold in capability, they will be able to pursue an alien (and possibly dangerous) set of goals that looks very little like what we wanted them to do, and the world will never look the same.
Quintin is a computer science graduate student who has studied AI alignment very carefully, and he doesn't buy this story. He doesn't believe natural selection offers any real clues about the behaviors of future AI systems, and in this interview, he tells me why.
Though the conversation is highly technical, it's also an important one. It could have major implications for AI policy, AI research, and how we prepare for the arrival of advanced artificial agents.
If you enjoy this interview, please help us grow by subscribing to the podcast and sharing it with your friends!