TechScape: AI is feared to be apocalyptic or touted as world-changing – maybe it’s neither
Too much discourse focuses on whether AIs are the end of society or the end of human suffering - I'm more interested in the middle ground
What if AI doesn't fundamentally reshape civilisation?
This week, I spoke to Geoffrey Hinton, the English psychologist-turned-computer scientist whose work on neural networks in the 1980s set the stage for the explosion in AI capabilities over the past decade. Hinton wanted to speak in order to deliver a message to the world: he is afraid of the technology he helped create.
"You need to imagine something more intelligent than us by the same difference that we're more intelligent than a frog. And it's going to learn from the web, it's going to have read every single book that's ever been written on how to manipulate people, and also seen it in practice."
He now thinks the crunch will come in the next five to 20 years. "But I wouldn't rule out a year or two. And I still wouldn't rule out 100 years - it's just that my confidence that this wasn't coming for quite a while has been shaken by the realisation that biological intelligence and digital intelligence are very different, and digital intelligence is probably much better."
A document from a Google engineer leaked online said the company had done "a lot of looking over our shoulders at OpenAI", referring to the developer of the ChatGPT chatbot.
"The uncomfortable truth is, we aren't positioned to win this arms race and neither is OpenAI. While we've been squabbling, a third faction has been quietly eating our lunch," the engineer wrote.
"Giant models are slowing us down. In the long run, the best models are the ones which can be iterated upon quickly. We should make small variants more than an afterthought, now that we know what is possible in the <20B parameter regime."