Post-transformer inference: 224× compression of Llama-70B with improved accuracy

From Hacker News (#7224V)