OpenAI upgrades its natural language AI coder Codex and kicks off private beta
OpenAI has already made some big changes to Codex, the AI-powered coding assistant the company announced last month. The system now accepts commands in plain English and outputs live, working code, letting someone build a game or web app without so much as naming a variable. A few lucky coders (and, one assumes, non-coders) will be able to kick the tires on this new Codex API in a free private beta.
Codex is best thought of as OpenAI's versatile language engine, GPT-3, but trained only on code instead of ordinary written material. That lets it do things like complete lines of code or entire sections, but when it was announced it wasn't really something a non-coder would be able to easily interact with.
That's changed with this new API, which interprets ordinary, everyday requests like "make the ball bounce off the sides of the screen" or "download that data using the public API and sort it by date" and puts out working code in one of a dozen languages.
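To give a sense of what that looks like, here's a rough sketch of the kind of code a request like "make the ball bounce off the sides of the screen" tends to produce: a standard canvas animation loop that reverses the ball's velocity at each edge. This is our own illustration rather than Codex's actual output, and every name in it (the canvas id, the ball's fields) is an assumption.

```typescript
// Illustrative sketch only, not Codex's actual output.
// Assumes an HTML page with <canvas id="game" width="640" height="480">.
const canvas = document.getElementById("game") as HTMLCanvasElement;
const ctx = canvas.getContext("2d")!;

const ball = { x: 100, y: 100, vx: 3, vy: 2, radius: 15 };

function step() {
  ball.x += ball.vx;
  ball.y += ball.vy;

  // Bounce off the sides: reverse velocity when the ball touches an edge.
  if (ball.x - ball.radius < 0 || ball.x + ball.radius > canvas.width) ball.vx *= -1;
  if (ball.y - ball.radius < 0 || ball.y + ball.radius > canvas.height) ball.vy *= -1;

  ctx.clearRect(0, 0, canvas.width, canvas.height);
  ctx.beginPath();
  ctx.arc(ball.x, ball.y, ball.radius, 0, Math.PI * 2);
  ctx.fill();

  requestAnimationFrame(step);
}

requestAnimationFrame(step);
```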
I was treated to a live demo in which OpenAI co-founders Greg Brockman (CTO) and Wojciech Zaremba (Codex lead) built a simple game from scratch and explained what was going on behind the curtain.
"Programming is about having a vision and dividing it into chunks, then actually making code for those pieces," Brockman explained. The intention with Codex is to let coders spend more time on the first part than the second. After all, a huge amount of code duplicates or outright copies what others have done before; coding can be creative, of course, but no one is going to exercise their imagination on basic chores like deploying a web server to test a bit of code. Brockman did just that with a simple line: "create a web page that says that" or some such.
A second later there were a dozen lines of JavaScript doing just that, in a totally standard way.
"This is the worst part of programming," said Brockman. "I've written this kind of code probably a couple dozen times, and I always forget exactly how it works. I don't know these APIs, and I don't have to. You can just do the same things easier, with less keystrokes or interactions."
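For reference, the "totally standard way" in question looks roughly like the minimal Node.js server below. This is a sketch of the boilerplate Brockman was describing, not the code Codex produced in the demo.

```typescript
// Sketch of the kind of boilerplate being described, not the demo's actual output.
// A minimal Node.js server that answers every request with a short HTML page.
import { createServer } from "http";

const server = createServer((_req, res) => {
  res.writeHead(200, { "Content-Type": "text/html" });
  res.end("<html><body><h1>Hello from Codex</h1></body></html>");
});

server.listen(3000, () => {
  console.log("Listening on http://localhost:3000");
});
```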
Because Codex is trained on basically all the public code on GitHub, among other repositories, it's aware of all the standard practices: the 50 or 100 times someone has included a web server, keyboard controls, or object manipulations and animations in their code. And because the natural language side has all of GPT-3's usual understanding, it gets that when you say "make it smaller and crop it" and then "have its horizontal position controlled by the left and right arrow keys," you're referring to the same "it."
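Here's an illustration of what that second instruction might resolve to: a keydown listener nudging the same object's horizontal position. Again, this is our own sketch rather than the demo's output, and the `ball` object is a stand-in for whatever Codex had already defined.

```typescript
// Illustrative sketch: the "it" from the earlier instructions resolved to one object,
// whose horizontal position the arrow keys now control.
const ball = { x: 100, y: 100, radius: 10 }; // stand-in for the object Codex kept referring to
const speed = 5;

window.addEventListener("keydown", (event: KeyboardEvent) => {
  if (event.key === "ArrowLeft") ball.x -= speed;
  if (event.key === "ArrowRight") ball.x += speed;
});
```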
It also keeps its own work in mind, several kilobytes' worth of coding context, so it knows the naming conventions it must stick to, the bounds and behaviors already established, and other information the user's earlier instructions implied.
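A plausible (and entirely hypothetical) way to picture that session state: thread everything Codex has written so far back in as context with each new instruction. The `generateCode` function below is a placeholder, not a documented endpoint of the beta API.

```typescript
// Hypothetical sketch of how a session might carry prior output forward as context.
// `generateCode` stands in for whatever call the beta API actually exposes; it is
// an assumption, not a documented interface.
async function generateCode(prompt: string): Promise<string> {
  // ...call the Codex API here (details depend on the beta)...
  return "";
}

let sessionCode = ""; // everything Codex has written so far: a few kilobytes of context

async function instruct(instruction: string): Promise<string> {
  // Send prior code plus the new plain-English request, so names and conventions carry over.
  const prompt = `${sessionCode}\n// ${instruction}\n`;
  const newCode = await generateCode(prompt);
  sessionCode += "\n" + newCode;
  return newCode;
}
```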
It's also aware of generalities embedded in the code corpus. For instance, when Brockman told it to "make the boulder fall from the sky," the system didn't ask what "the sky" is, even though it hadn't been defined on the largely blank canvas. Not only did it have the boulder fall from the top of the screen, but the falling speed accelerated the way a real object's would, because that was its best guess at what "falling" and "sky" mean, drawn from other uses and context.
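Concretely, that behavior amounts to something like the sketch below: the object starts above the visible area and its downward speed grows each frame. This is our illustration of the described behavior, not the demo's code, and the numbers and names are assumptions.

```typescript
// Illustrative sketch (not the demo's output): "fall from the sky" interpreted as
// starting above the visible area and accelerating downward, the way real objects fall.
const boulder = { y: -50, vy: 0 };   // starts off-screen, above the top edge ("the sky")
const gravity = 0.5;                 // acceleration added each frame
const groundY = 480;                 // bottom of a 480px-tall canvas

function fall() {
  boulder.vy += gravity;             // speed increases every frame, like real falling
  boulder.y += boulder.vy;

  if (boulder.y < groundY) {
    requestAnimationFrame(fall);
  }
}

requestAnimationFrame(fall);
```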
"We think it provides a new way to interact with existing software," said Zaremba, who built a limited version of this for his PhD thesis years ago, as they demoed a Codex plug-in for Microsoft Word. Automation exists for many tasks in word processors, of course, but what if you hit a weird formatting issue and want to fix 100 different instances? Type "make all the text the same size and font, and make double spaces single" and it'll do just that, snipping out stray styles and picking the size and font most likely to be considered "normal." Then type "make all the headings 24-point and bold" and it zooms through doing that, and so on.
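The double-spacing fix, stripped of everything Word-specific, boils down to a transformation like the one below. The real plug-in presumably drives Word's own APIs, which this sketch makes no attempt to reproduce; it just illustrates the kind of edit being asked for.

```typescript
// Greatly simplified sketch of the described cleanup, applied to plain text rather
// than a real Word document.
function normalizeSpacing(text: string): string {
  return text.replace(/ {2,}/g, " "); // collapse runs of spaces: "make double spaces single"
}

console.log(normalizeSpacing("Make  all   the text  the same size."));
// -> "Make all the text the same size."
```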
It's worth noting that this sort of thing is convenient for many, but crucial for people who can't perform these tasks manually, for instance because of a disability. If you're operating your word processor with voice commands or a joystick, being able to carry out complex tasks like the above is extremely helpful. A blind coder, like anyone else, can patch together a standard public test server, but the process of skimming Stack Overflow, grabbing the best snippet, checking the syntax, changing the relevant variables and so on will almost certainly take longer.
And for those working within syntax and conventions handed down from on high, Codex can be made to follow them simply by exposing the model to the relevant documentation. Codex can also convert and port code from one language to another, much the same way a translation engine turns Spanish into French.
Brockman said that, as with GPT-3, they are only scratching the surface of what's possible and are hoping to be surprised by what developers come up with (after all, OpenAI didn't predict AI Dungeon). The beta will be private, like GPT-3's, but devs can apply and describe their project, and the Codex team will review applications for inclusion. Eventually the API will be a paid public one, but the timing and pricing are still to be determined.