How Google's 'Don't Be Evil' Motto Has Evolved For the AI Age
In a special report for CBS News' 60 Minutes, Google CEO Sundar Pichai shares his concerns about artificial intelligence and why the company is choosing not to release advanced models of its AI chatbot. From the report: When Google filed for its initial public offering in 2004, its founders wrote that the company's guiding principle, "Don't be evil," was meant to help ensure it did good things for the world, even if it had to forgo some short-term gains. The phrase remains in Google's code of conduct. Pichai told 60 Minutes he is being responsible by not releasing advanced models of Bard, in part, so society can get acclimated to the technology and the company can develop further safety layers. One of the things that keeps Pichai up at night, he told 60 Minutes, is Google's AI technology being deployed in harmful ways. Google's chatbot, Bard, has built-in safety filters to help combat the threat of malevolent users. Pichai said the company will need to constantly update the system's algorithms to combat disinformation campaigns and detect deepfakes, computer-generated images that appear to be real. As Pichai noted in his 60 Minutes interview, consumer AI technology is in its infancy, and he believes now is the right time for governments to get involved. "There has to be regulation. You're going to need laws ... there have to be consequences for creating deep fake videos which cause harm to society," Pichai said. "Anybody who has worked with AI for a while ... realize[s] this is something so different and so deep that we would need societal regulations to think about how to adapt." That adaptation is already happening around us, driven by technology that Pichai believes "will be more capable than anything we've ever seen before." Soon it will be up to society to decide how it's used and whether to abide by Alphabet's code of conduct and "Do the right thing."