Apple wants AI to run directly on its hardware instead of in the cloud
by Financial Times, from Ars Technica
The iPhone 15 Pro. (credit: Apple)
Apple's latest research about running large language models on smartphones offers the clearest signal yet that the iPhone maker plans to catch up with its Silicon Valley rivals in generative artificial intelligence.
The paper, entitled "LLM in a Flash," offers "a solution to a current computational bottleneck," its researchers write.
Its approach "paves the way for effective inference of LLMs on devices with limited memory," they said. Inference refers to how large language models, which power apps like ChatGPT, generate responses to users' queries. Chatbots and LLMs normally run in vast data centers with much greater computing power than an iPhone.