OpenAI Strawberry LLM Reasoning Needs More Compute and Energy for Inference
by Brian Wang from NextBigFuture.com
Jim Fan is one of Nvidia's senior AI researchers. The shift he describes could require many orders of magnitude more compute and energy for inference in order to handle the improved reasoning of the OpenAI Strawberry (QStar) approach. This could mean far more powerful and energy-intensive chips will be needed to run inference. Nextbigfuture had previously analyzed ...
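To put a rough scale on what "orders of magnitude more inference compute" could mean in energy terms, here is a minimal back-of-envelope sketch in Python. The baseline token count, per-token energy figure, and token multipliers are all illustrative assumptions for this sketch, not figures from Jim Fan, OpenAI, or Nextbigfuture.

```python
# Rough, illustrative back-of-envelope sketch of how per-query inference
# energy could grow if a reasoning model generates far more tokens per
# answer. All numbers below are assumptions for illustration only.

BASELINE_TOKENS_PER_QUERY = 500   # assumed tokens for a direct, non-reasoning answer
ENERGY_PER_TOKEN_JOULES = 0.3     # assumed inference energy per generated token


def query_energy_joules(reasoning_multiplier: float) -> float:
    """Energy per query if reasoning inflates generated tokens by a multiplier."""
    tokens = BASELINE_TOKENS_PER_QUERY * reasoning_multiplier
    return tokens * ENERGY_PER_TOKEN_JOULES


# Compare a plain answer with reasoning that is 100x and 10,000x more
# token-hungry -- the "orders of magnitude" range discussed above.
for multiplier in (1, 100, 10_000):
    joules = query_energy_joules(multiplier)
    print(f"{multiplier:>6}x tokens -> {joules:,.0f} J per query "
          f"({joules / 3600:,.2f} Wh)")
```

Under these toy assumptions, per-query energy grows from a few hundredths of a watt-hour to roughly 0.4 kWh at a 10,000x token multiplier, which illustrates why this kind of shift would push demand toward more powerful and more energy-intensive inference chips.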