AMD Unveils Its First Small Language Model, AMD-135M
Arthur T Knackerbracket has processed the following story:
As AMD flexes its muscles in the AI game, it is not only introducing new hardware but is betting on software too, trying to hit new market segments not already dominated by Nvidia.
Thus, AMD has unveiled its first small language model, AMD-135M, which belongs to the Llama family and is aimed at private business deployments. It is unclear whether the new model has anything to do with the company's recent acquisition of Silo AI (the deal still has to be finalized and cleared by various authorities, so probably not), but this is a clear step toward addressing the needs of specific customers with a model pre-trained by AMD, using AMD hardware for inference.
AMD's models are fast mainly because they use so-called speculative decoding. Speculative decoding introduces a smaller 'draft model' that generates multiple candidate tokens in a single forward pass. The tokens are then passed to a larger, more accurate 'target model' that verifies or corrects them. On the one hand, this approach allows multiple tokens to be generated per step; on the other hand, it comes at the cost of power due to increased data transactions.
[...] AMD believes that further optimizations can lead to even better performance. Yet, since the company only shares benchmark numbers from its previous-generation GPUs, we can only imagine what its current-generation (MI300X) and next-generation (MI325X) accelerators could do.
Read more of this story at SoylentNews.