OpenAI CTO Says AI Systems Should 'Absolutely' Be Regulated
Slashdot reader wiredmikey writes: Mira Murati, CTO of ChatGPT creator OpenAI, says artificial general intelligence (AGI) systems should "absolutely" be regulated. In a recent interview, Murati said the company is constantly talking with governments, regulators, and other organizations to agree on some level of standards. "We've done some work on that in the past couple of years with large language model developers in aligning on some basic safety standards for deployment of these models," Murati said. "But I think a lot more needs to happen. Government regulators should certainly be very involved."

Murati specifically discussed OpenAI's approach to AGI with "human-level capability":

OpenAI's specific vision around it is to build it safely and figure out how to build it in a way that's aligned with human intentions, so that the AI systems are doing the things that we want them to do, and that it maximally benefits as many people out there as possible, ideally everyone.

Q: Is there a path between products like GPT-4 and AGI?

A: We're far from the point of having a safe, reliable, aligned AGI system. Our path to getting there has a couple of important vectors. From a research standpoint, we're trying to build systems that have a robust understanding of the world similarly to how we do as humans. Systems like GPT-3 initially were trained only on text data, but our world is not only made of text, so we have images as well, and then we started introducing other modalities. The other angle has been scaling these systems to increase their generality. With GPT-4, we're dealing with a much more capable system, specifically from the angle of reasoning about things. This capability is key. If the model is smart enough to understand an ambiguous direction or a high-level direction, then you can figure out how to make it follow this direction. But if it doesn't even understand that high-level goal or high-level direction, it's much harder to align it. It's not enough to build this technology in a vacuum in a lab. We really need this contact with reality, with the real world, to see where are the weaknesses, where are the breakage points, and try to do so in a way that's controlled and low risk and get as much feedback as possible.

Q: What safety measures do you take?

A: We think about interventions at each stage. We redact certain data from the initial training on the model. With DALL-E, we wanted to reduce harmful bias issues we were seeing... In the model training, with ChatGPT in particular, we did reinforcement learning with human feedback to help the model get more aligned with human preferences. Basically what we're trying to do is amplify what's considered good behavior and then de-amplify what's considered bad behavior.

One final quote from the interview: "Designing safety mechanisms in complex systems is hard... The safety mechanisms and coordination mechanisms in these AI systems and any complex technological system [are] difficult and require a lot of thought, exploration and coordination among players."
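Murati's "amplify good behavior, de-amplify bad behavior" description of reinforcement learning from human feedback can be made concrete with a toy sketch. The snippet below is not OpenAI's code or pipeline; the canned responses, reward scores, and the beta parameter are invented purely for illustration of the reweighting idea she describes.

```python
# A minimal, self-contained sketch (assumed, illustrative only) of the core
# RLHF idea: scores from a reward model trained on human preferences are used
# to amplify preferred behavior and de-amplify dispreferred behavior.

import math

# Toy "policy": a probability for each of a handful of canned responses.
responses = ["helpful answer", "evasive answer", "harmful answer"]
policy = {r: 1.0 / len(responses) for r in responses}

# Stand-in for a reward model learned from human comparisons: labelers
# consistently prefer the helpful response over the others (assumed values).
reward = {"helpful answer": 1.0, "evasive answer": 0.0, "harmful answer": -1.0}

# Reinforcement step: reweight the policy by exponentiated reward, which
# raises the probability of high-reward behavior and lowers the rest.
beta = 1.0  # how strongly to follow the reward signal (assumed value)
unnormalized = {r: policy[r] * math.exp(beta * reward[r]) for r in responses}
total = sum(unnormalized.values())
policy = {r: w / total for r, w in unnormalized.items()}

for r, p in policy.items():
    print(f"{r}: {p:.2f}")
# The helpful response's probability rises and the harmful one's falls --
# the "amplify good / de-amplify bad" loop, in miniature.
```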
Read more of this story at Slashdot.