
Novel AI Benchmark Tests the Speed of Executing AI Models

by
Damien Fisher
from Techreport on (#6EPX9)

There has been a dramatic boom in artificial intelligence-powered tools and language models as A-list firms compete to lead the emerging technology. Amid that competition, AI has found utility in diverse industries and is rapidly expanding across various economic sectors.

Recently, two tech firms posted impressive results in an AI benchmark test designed to measure how quickly hardware can run AI models.

MLCommons Disclosed AI Benchmark Results

A September 11 report revealed that MLCommons, the group that creates AI benchmarks, has published new test results. The tests measure how quickly high-end hardware can execute AI models.

According to the report, MLCommons conducted the benchmark tests using a large language model with 6 billion parameters. The newly developed benchmark, part of the MLPerf suite, focuses on the data-processing side of AI known as inference.

In this context, inference means generating predictions or outputs from an already trained AI model. This capability is a core part of generative AI (GenAI) tools.
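
For a concrete sense of what inference looks like in practice, the short Python sketch below uses the open-source Hugging Face transformers library with the small gpt2 model purely for illustration; it is not the MLPerf benchmark harness or the roughly 6-billion-parameter model MLCommons tested, just a minimal example of a trained model turning a prompt into new output.

    from transformers import pipeline

    # Illustrative only: load a small pretrained language model.
    # (gpt2 is used here because it is tiny; the MLPerf test described
    # above relies on a much larger ~6-billion-parameter model.)
    generator = pipeline("text-generation", model="gpt2")

    # Inference is this step: the already-trained model produces output
    # from a prompt, with no further training taking place.
    output = generator("AI benchmarks measure", max_new_tokens=20)
    print(output[0]["generated_text"])

Benchmarks like MLPerf time how quickly hardware can perform this step at scale, which is why inference speed has become a key point of comparison between chip makers.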

It bears mentioning that the MLPerf tests give a valuable overview of the performance and speed of high-end AI hardware. They play a major role in enhancing the AI ecosystem and making it more powerful and efficient.

Interestingly, Nvidia Corp's AI chip recorded the highest performance during the tests, with Intel Corp's AI semiconductor trailing closely behind. It is worth mentioning that Nvidia is a top supplier of chips used to develop AI models, which has allowed it to dominate the market for training AI models.

Moreover, following its strong showing in the MLPerf benchmark test, Nvidia's hardware has proven capable of quickly executing a variety of workloads, positioning the company as a leader in the AI inference market as well.

Nvidia is a California-based tech firm and a leading producer of graphics processing units, AI chips, and other integrated circuits for PCs.

Intel's AI Chip Is 10% Slower Than Nvidia's AI Chip

Meanwhile, the report revealed that Intel, which came second in the benchmark test, relied on its Gaudi2 chips for its result. The Gaudi2 chips were developed by Habana Labs, an AI processor company Intel acquired.

Intel's AI chip is about 10% slower than Nvidia's in processing speed, but the company claims its Gaudi2 chip is more cost-effective. That said, Intel has yet to disclose the exact price of its chips, and Nvidia has likewise withheld specific pricing information.

The company continues to improve its chips' efficiency despite Nvidia's strong performance in the benchmark test. It also recently rolled out upgraded software intended to double the performance it previously recorded in the MLPerf test.

This shows the company is strongly committed to growth and intends to maintain its standing in the tech market. A rival firm, Alphabet, Google's parent company, which launched a custom-built AI chip in August, has also previewed the performance of its chip's latest version.

