AI and Moore’s Law: It’s the Chips, Stupid
Sorry I've been away: time flies when you are not having fun. But now I'm back.
Moore's Law, which began with an offhand observation by the late Intel co-founder Gordon Moore that transistor densities on silicon substrates were doubling every 18 months, has over the intervening 60+ years been both borne out and transformed from a lithography technical feature into an economic law. It's getting harder to etch ever-thinner lines, so we've taken as a culture to emphasizing the cost part of Moore's Law: chips drop in price by 50 percent on an area basis (dollars per acre of silicon) every 18 months. We accomplish this economic effect through a variety of techniques including multiple cores, System-on-Chip design, and unified memory - anything to keep prices going down.
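To see how powerful that economic version of the law is, here's a quick back-of-the-envelope check - a sketch that simply assumes the 50-percent-every-18-months figure above holds for a decade:

```python
# The economic form of Moore's Law stated above: cost per unit area of
# silicon halves every 18 months. A quick compounding check over ten years.
halvings = 10 * 12 / 18            # about 6.7 halvings in ten years
relative_cost = 0.5 ** halvings
print(f"{relative_cost:.3f}")      # ~0.010, i.e. roughly 1% of the starting cost
```

In other words, hold the law together for ten years and the same acre of silicon costs about a penny on the dollar.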
I predict that Generative Artificial Intelligence is going to go a long way toward keeping Moore's Law in force and the way this is going to happen says a lot about the chip business, global economics, and Artificial Intelligence, itself.
Let's take these points in reverse order. First, Generative AI products like ChatGPT are astoundingly expensive to build. GPT-4 reportedly cost $100+ million to build, mainly in cloud computing resources. Yes, this was primarily Microsoft paying itself and so maybe the economics are a bit suspect, but the actual calculations took tens of thousands of GPUs running for months and that can't be denied. Nor can it be denied that building GPT-5 will cost even more.
Some people think this economic argument is wrong, that Large Language Models comparable to ChatGPT can be built using Open Source software for only a few hundred or a few thousand dollars. Yes and no.
Competitive-yet-inexpensive LLMs built at such low cost have nearly all started with Meta's (Facebook's) LLaMA (Large Language Model Meta AI), which has effectively become Open Source now that both the code and the associated parameter weights - a big deal in fine-tuning language models - have been released to the wild. It's not clear how much of this Meta actually intended to do, but this genie is out of its bottle, to great effect in the AI research community.
But GPT-5 will still cost $1+ billion and even ChatGPT, itself, is costing about $1 million per day just to run. That's $300+ million per year to run old code.
So the current el cheapo AI research frenzy is likely to subside as LLaMA ages into obsolescence and has to be replaced by something more expensive, putting Google, Microsoft, and OpenAI back in control. Understand, too, that these big, established companies like the idea of LLMs costing so much to build, because that makes it harder for startups to disrupt them. It's a form of restraint of trade, though not an illegal one.
But before then - and even after then in certain vertical markets - there is a lot to learn and a lot of business to be done using these smaller models, which can be used to build truly professional language models - something GPT-4 and ChatGPT definitely are not.
GPT-4 and ChatGPT are general purpose models - supposedly useful for pretty much anything. But that means that when you are asking ChatGPT for legal advice, for example, you are asking it to imitate a lawyer. While ChatGPT may be able to pass the bar exam, so did my cousin Chad, who I assure you is an idiot.
If you are reading this I'll bet you are smarter than your lawyer.
This means there is an opportunity for vertical LLMs trained on different data - real data from industries like medicine and auto mechanics. Whoever owns this data will own these markets.
What will make these models both better and cheaper is that they can be built from a LLaMA base, because most of that underlying data doesn't have to change over time for the model to still fix your car, and the added Machine Learning won't come from crap found on the Internet, but rather from the service manuals actually used to train mechanics and fix cars.
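To make that concrete, here is a minimal sketch of what fine-tuning an open LLaMA-style base on a stack of service manuals might look like, using the Hugging Face transformers, datasets, and peft libraries. The checkpoint name and the data file are placeholders, not anything a particular vendor actually ships:

```python
# Sketch: bolt small LoRA adapters onto an open LLaMA-style base model and
# fine-tune on a plain-text corpus of service manuals (hypothetical file).
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

base = "huggyllama/llama-7b"                 # placeholder open checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token    # LLaMA tokenizers lack a pad token
model = AutoModelForCausalLM.from_pretrained(base)

# Train only small low-rank adapters instead of all the base weights.
model = get_peft_model(model, LoraConfig(task_type="CAUSAL_LM", r=8,
                                         lora_alpha=16,
                                         target_modules=["q_proj", "v_proj"]))

data = load_dataset("text", data_files="service_manuals.txt")["train"]
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                remove_columns=["text"])

Trainer(
    model=model,
    args=TrainingArguments(output_dir="vertical-llm", num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```

The point of the adapter approach is that the expensive base model stays frozen; only the thin vertical layer on top gets trained, which is why these models can be cheap.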
We are approaching a time when LLMs won't have to imitate mechanics and nurses because they will be trained like mechanics and nurses.
Bloomberg has already done this for investment advice using its unique database of historical financial information.
With an average of 50 billion nodes, these vertical models will cost only about five percent as much to run as OpenAI's one-trillion-node GPT-4.
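That five percent figure is just the ratio of the two model sizes, assuming running cost scales roughly with node (parameter) count and taking the widely reported, unofficial one-trillion estimate for GPT-4:

```python
# Back-of-the-envelope version of the claim above.
vertical_nodes = 50e9     # a 50-billion-node vertical model
gpt4_nodes = 1e12         # widely reported estimate, not an official figure
print(f"{vertical_nodes / gpt4_nodes:.0%}")   # -> 5%
```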
But what does this have to do with semiconductors and Moore's Law? Chip design is very similar to fixing cars in that there is a very limited amount of Machine Learning data required (think of logic cells as language words). It's a small vocabulary (the auto repair section at the public library is just a few shelves of books). And EVEN BETTER THAN AUTO REPAIR, the semiconductor industry has well-developed simulation tools for testing logic before it is actually built.
So it ought to be pretty simple to apply AI to chip design, building custom chip-design models that iterate against existing simulators to refine new designs that actually have a pretty good chance of being novel.
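Schematically, that loop might look like the sketch below. Both propose_design() and run_simulation() are hypothetical stand-ins for a chip-design LLM and an industry logic simulator, not real APIs; the dummy bodies just let the loop run:

```python
# Sketch of a generate-simulate-refine loop for chip design.
import random

def propose_design(prompt: str) -> str:
    # Placeholder for a chip-design LLM call (hypothetical).
    return f"netlist derived from: {prompt[:40]}"

def run_simulation(netlist: str) -> float:
    # Placeholder for a logic simulator score, higher is better (hypothetical);
    # a random number here just keeps the example runnable.
    return random.random()

def refine(spec: str, rounds: int = 10) -> str:
    best_netlist, best_score = "", float("-inf")
    prompt = spec
    for _ in range(rounds):
        candidate = propose_design(prompt)
        score = run_simulation(candidate)
        if score > best_score:
            best_netlist, best_score = candidate, score
        # Feed the simulator's verdict back into the next prompt.
        prompt = f"{spec}\nPrevious attempt scored {score:.2f}; improve it."
    return best_netlist

print(refine("32-bit multiplier, minimize area at 2 GHz"))
```

The simulator is what makes this tractable: every candidate design gets scored cheaply in software before anyone spends money on a mask set.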
And who will be the first to leverage this chip AI? China.
The USA is doing its best to freeze China out of semiconductor development, denying access to advanced manufacturing tools, for example. But China is arguably the world's #2 country for AI research and can use that advantage to make up some of the difference.
Look for fabless AI chip startups to spring up around Chinese universities and for the Chinese Communist Party to put lots of money into this very cost-effective work. Because even if it's used just to slim down and improve existing designs, that's another generation of chips China might otherwise not have had at all.