Article 6HADQ Apple wants AI to run directly on its hardware instead of in the cloud


by Financial Times, from Ars Technica - All content (#6HADQ)
[Image: The iPhone 15 Pro. (credit: Apple)]

Apple's latest research about running large language models on smartphones offers the clearest signal yet that the iPhone maker plans to catch up with its Silicon Valley rivals in generative artificial intelligence.

The paper, entitled "LLM in a Flash," offers "a solution to a current computational bottleneck," its researchers write.

Its approach "paves the way for effective inference of LLMs on devices with limited memory," they said. Inference refers to how large language models, the large data repositories that power apps like ChatGPT, respond to users' queries. Chatbots and LLMs normally run in vast data centers with much greater computing power than an iPhone.
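The memory bottleneck the paper targets is easy to see with back-of-envelope arithmetic: the weights of a modern LLM alone can exceed a phone's entire RAM. A minimal sketch, using illustrative figures (a 7-billion-parameter model and the iPhone 15 Pro's 8 GB of RAM) that are not drawn from the paper itself:

```python
# Rough estimate of the memory needed just to hold an LLM's weights,
# illustrating why on-device inference is constrained by limited memory.

def weight_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    """Gigabytes required to store the model weights alone
    (ignores activations, KV cache, and runtime overhead)."""
    return params_billions * 1e9 * bytes_per_param / 1e9

# A 7B-parameter model in 16-bit precision (2 bytes per parameter):
fp16_gb = weight_memory_gb(7, 2.0)   # 14.0 GB -- exceeds a phone's 8 GB RAM
# The same model quantized to 4 bits (0.5 bytes per parameter):
int4_gb = weight_memory_gb(7, 0.5)   # 3.5 GB -- plausible to fit on-device

print(f"fp16: {fp16_gb} GB, int4: {int4_gb} GB")
```

This is only an illustration of the constraint; the paper's actual contribution concerns streaming weights from flash storage rather than quantization.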

