Article 6A4BT Big Language Models Can Train Small and Cheap Language Models

by
Brian Wang
from NextBigFuture.com on (#6A4BT)
Large language models can train smaller language models and quickly bring them up to speed. Stanford researchers trained Alpaca 7B, a model fine-tuned from the LLaMA 7B model on 52K instruction-following demonstrations generated using GPT-3.5. In their preliminary evaluation of single-turn instruction following, Alpaca behaves qualitatively similarly to OpenAI's text-davinci-003, while being surprisingly small ...
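The approach above rests on supervised fine-tuning over instruction-following demonstrations. As a minimal sketch, assuming demonstrations stored as instruction/output pairs and a prompt template paraphrased from the Alpaca release (the exact wording and field names here are illustrative), each demonstration can be converted into a prompt/completion training example like this:

```python
# Sketch: turning instruction-following demonstrations into
# supervised fine-tuning examples. The template text is an assumption
# paraphrased from the Alpaca release, not a verbatim copy.

PROMPT_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

def build_example(demo: dict) -> dict:
    """Convert one {instruction, output} demonstration into a
    (prompt, completion) pair for supervised fine-tuning."""
    prompt = PROMPT_TEMPLATE.format(instruction=demo["instruction"])
    return {"prompt": prompt, "completion": demo["output"]}

# Hypothetical demonstration in the style of the 52K dataset.
demos = [
    {"instruction": "List three primary colors.",
     "output": "Red, yellow, and blue."},
]
examples = [build_example(d) for d in demos]
```

The resulting (prompt, completion) pairs would then be fed to a standard causal-language-model fine-tuning loop; the large model's role is simply to generate the 52K demonstrations cheaply.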

External Content
Source RSS or Atom Feed
Feed Location http://feeds.feedburner.com/blogspot/advancednano
Feed Title NextBigFuture.com
Feed Link https://www.nextbigfuture.com/