Feed anandtech

Link https://anandtech.com/
Feed https://anandtech.com/rss/
Updated 2024-04-29 00:30
Sponsored Post: OPPO's MariSilicon X Imaging NPU Amps Up Night Video for New Find X5 Smartphones
To bring digital-camera image quality to its new smartphones, even in challenging captures such as high-contrast, low-light, and motion scenes, global consumer electronics and mobile communications company OPPO designed the new MariSilicon X imaging NPU (Neural Processing Unit) chip, rather than looking for or developing an alternative to established mobile-device CPUs. MariSilicon X combines neural processing hardware with an ISP (Image Signal Processor) and a multi-tier memory subsystem in a dedicated component that sits between the smartphone's cameras and its CPU. This lets MariSilicon X run machine learning and AI algorithms many times faster, and with a fraction of the energy, of previous approaches. The result: jaw-dropping computational photography improvements, including superior night and low-light videos with crisp detail and better color reproduction. OPPO is premiering MariSilicon X and the imaging benefits it brings in its brand-new Find X5 Series smartphones.
NVIDIA Hopper GPU Architecture and H100 Accelerator Announced: Working Smarter and Harder
Depending on your point of view, the last two years have either gone by very slowly, or very quickly. While the COVID pandemic never seemed to end – and technically still hasn't – the last two years have whizzed by for the tech industry, and especially for NVIDIA. The company launched its Ampere GPU architecture just two years ago at GTC 2020, and after selling more of their chips than ever before, now in 2022 it's already time to introduce the next architecture. So without further ado, let's talk about the Hopper architecture, which will underpin the next generation of NVIDIA server GPUs.

As has become a ritual for NVIDIA, the company is using its spring GTC event to launch its next-generation GPU architecture. Introduced just two years ago, Ampere has been NVIDIA's most successful server GPU architecture to date, with over $10B in data center sales in just the last year. And yet NVIDIA has little time to rest on their laurels, as the growth and profitability of the server accelerator market mean that there are more competitors than ever before aiming to take a piece of NVIDIA's market for themselves. To that end, NVIDIA is ready (and eager) to use their biggest show of the year to talk about their next-generation architecture, as well as the first products that will implement it.

Taking NVIDIA into the next generation of server GPUs is the Hopper architecture. Named after computer science pioneer Grace Hopper, the Hopper architecture is a very significant, but also very NVIDIA, update to the company's ongoing family of GPU architectures. With the company's efforts now solidly bifurcated into server and consumer GPU configurations, Hopper is NVIDIA doubling down on everything the company does well, and then building it even bigger than ever before.
The NVIDIA GTC Spring 2022 Keynote Live Blog (Starts at 8:00am PT/15:00 UTC)
Please join us at 8:00am PT (15:00 UTC) for our live blog coverage of NVIDIA's spring GTC keynote address. The traditional kick-off to the show – be it physical or virtual – NVIDIA's annual spring keynote is a showcase for NVIDIA's vision for the next 12 to 24 months across all of their segments, from graphics to AI to automotive. Along with a slew of product announcements, the presentation, delivered by CEO (and James Halliday LARPer) Jensen Huang, always contains a few surprises.

Looking at NVIDIA's sizable product stack, with the company's Ampere-based A100 server accelerators about to hit two years old, NVIDIA is arguably due for a major server GPU refresh. Meanwhile, there's also the matter of NVIDIA's in-development Armv9 "Grace" CPUs, which were first announced last year. And of course, the latest developments in NVIDIA's efforts to make self-driving cars a market reality.
AMD Releases Instinct MI210 Accelerator: CDNA 2 On a PCIe Card
With both GDC and GTC going on this week, this is a big time for GPUs of all sorts. And today, AMD wants to get in on the game as well, with the release of the PCIe version of their MI200 accelerator family, the MI210.

First unveiled alongside the MI250 and MI250X back in November, when AMD initially launched the Instinct MI200 family, the MI210 is the third and final member of AMD's latest generation of GPU-based accelerators. Bringing the CDNA 2 architecture to a PCIe card, the MI210 is aimed at customers who are after the MI200 family's HPC and machine learning performance, but need it in a standardized form factor for mainstream servers. Overall, the MI210 is being launched widely today as part of AMD moving the entire MI200 product stack to general availability for OEM customers.

AMD Instinct Accelerators
                        MI250           MI210           MI100           MI50
Compute Units           2 x 104         104             120             60
Matrix Cores            2 x 416         416             480             N/A
Boost Clock             1700MHz         1700MHz         1502MHz         1725MHz
FP64 Vector             45.3 TFLOPS     22.6 TFLOPS     11.5 TFLOPS     6.6 TFLOPS
FP32 Vector             45.3 TFLOPS     22.6 TFLOPS     23.1 TFLOPS     13.3 TFLOPS
FP64 Matrix             90.5 TFLOPS     45.3 TFLOPS     11.5 TFLOPS     6.6 TFLOPS
FP32 Matrix             90.5 TFLOPS     45.3 TFLOPS     46.1 TFLOPS     13.3 TFLOPS
FP16 Matrix             362 TFLOPS      181 TFLOPS      184.6 TFLOPS    26.5 TFLOPS
INT8 Matrix             362.1 TOPS      181 TOPS        184.6 TOPS      N/A
Memory Clock            3.2 Gbps HBM2E  3.2 Gbps HBM2E  2.4 Gbps HBM2   2.0 Gbps GDDR6
Memory Bus Width        8192-bit        4096-bit        4096-bit        4096-bit
Memory Bandwidth        3.2TBps         1.6TBps         1.23TBps        1.02TBps
VRAM                    128GB           64GB            32GB            16GB
ECC                     Yes (Full)      Yes (Full)      Yes (Full)      Yes (Full)
Infinity Fabric Links   6               3               3               N/A
CPU Coherency           No              N/A             N/A             N/A
TDP                     560W            300W            300W            300W
Manufacturing Process   TSMC N6         TSMC N6         TSMC 7nm        TSMC 7nm
Transistor Count        2 x 29.1B       29.1B           25.6B           13.2B
Architecture            CDNA 2          CDNA 2          CDNA (1)        Vega
GPU                     2 x CDNA 2 GCD
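The vector throughput and memory bandwidth figures in the table fall out of simple arithmetic on the shader counts, clocks, and memory interface. As a quick sanity check – a sketch assuming 64 stream processors per CU, FMA counted as 2 FLOPs per clock, and CDNA 2's full-rate FP64, all per AMD's published architecture details rather than stated in this article:

```python
# Back-of-the-envelope check of the MI200 family's rated numbers.
# Assumes 64 stream processors per CU and FMA = 2 FLOPs/clock (CDNA 2
# runs FP64 vector math at full rate, so FP64 and FP32 match).

def vector_tflops(compute_units, boost_clock_ghz, sp_per_cu=64, flops_per_clock=2):
    """Peak vector throughput in TFLOPS."""
    return compute_units * sp_per_cu * flops_per_clock * boost_clock_ghz / 1000

def mem_bandwidth_tbps(gbps_per_pin, bus_width_bits):
    """Peak memory bandwidth in TB/s."""
    return gbps_per_pin * bus_width_bits / 8 / 1000

print(f"MI210 FP64/FP32 vector: {vector_tflops(104, 1.7):.1f} TFLOPS")      # 22.6
print(f"MI250 FP64/FP32 vector: {vector_tflops(2 * 104, 1.7):.1f} TFLOPS")  # 45.3
print(f"MI210 memory bandwidth: {mem_bandwidth_tbps(3.2, 4096):.1f} TB/s")  # 1.6
```

Both results line up with the table, and doubling the CU count for the MI250's two dies doubles the figure, as expected.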
AMD Releases Milan-X CPUs With 3D V-Cache: EPYC 7003 Up to 64 Cores and 768 MB L3 Cache
There's been a lot of focus on how both Intel and AMD are planning for the future by packaging their dies to increase overall performance and mitigate higher manufacturing costs. For AMD, that next step has been V-Cache, an additional L3 cache (SRAM) chiplet that is 3D die-stacked on top of an existing Zen 3 chiplet, tripling the total amount of L3 cache available. Now AMD's V-Cache technology is finally becoming available to the mass market, as AMD's EPYC 7003X "Milan-X" server CPUs have reached general availability.

As first announced late last year, AMD is bringing its 3D V-Cache technology to the enterprise market through Milan-X, an advanced variant of its current-generation, 3rd Gen Milan-based EPYC 7003 processors. AMD is launching four new processors ranging from 16 cores to 64 cores, all of them with Zen 3 cores and 768 MB of stacked L3 3D V-Cache.
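The 768 MB figure follows directly from tripling each chiplet's L3 and multiplying across a full Milan package. A quick sketch – the per-CCD cache sizes and the 8-CCD package layout come from AMD's Zen 3 specifications, not from this article:

```python
# How Milan-X reaches 768 MB of L3: each Zen 3 CCD's planar 32 MB L3 is
# tripled by stacking a 64 MB V-Cache die on top of it, and a full Milan
# package carries 8 CCDs (figures per AMD's Zen 3 / Milan specifications).

base_l3_per_ccd_mb = 32   # planar L3 on a Zen 3 chiplet
vcache_per_ccd_mb = 64    # stacked SRAM die
ccds_per_package = 8

l3_per_ccd_mb = base_l3_per_ccd_mb + vcache_per_ccd_mb  # 96 MB, 3x the base
total_l3_mb = l3_per_ccd_mb * ccds_per_package          # 768 MB
print(l3_per_ccd_mb, total_l3_mb)  # 96 768
```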
Cincoze DS-1300 Industrial PC Review: Xeon-Powered Do-it-All
Industrial PCs are meant for 24x7 deployment in a wide range of environments. This brings in a host of requirements such as a wide operating temperature range, ruggedness, regulatory compliance, support for specific I/O types, etc. Most industrial PCs are passively cooled, with the absence of moving parts contributing to better reliability. In certain cases, processing power requirements and space constraints make it necessary to include active cooling. Today, we are looking at a high-end industrial PC from Cincoze - the DS-1300, featuring a Comet Lake-based Xeon CPU and a discrete GPU. Read on for a detailed look at the features and performance profile of the flagship member of the Cincoze DS-1300 series.
Mushkin Redline VORTEX PCIe 4.0 NVMe SSD Launched: Affordable Flagship
Mushkin's lineup of PCIe 4.0 SSDs has largely remained a Phison affair. The Delta series was based on the Phison E16 and the Gamma on the Phison E18. Recently, the company launched a new series of PCIe 4.0 SSDs - the Redline VORTEX. The key here seems to be the usage of a new SSD controller - the Innogrit Rainier IG5236. It appears to be taking over the flagship mantle from the Gamma - besting it in both read and write random access IOPS, and also sequential read speeds. However, unlike the Delta and Gamma, which came to the market in 1TB, 2TB, and 4TB flavors, the Redline VORTEX series has three capacity points - 512GB, 1TB, and 2TB. Detailed specifications are provided in the table below.

Mushkin Redline VORTEX SSD Specifications
Capacity                512 GB          1024 GB         2048 GB
Controller              Innogrit IG5236
NAND Flash              ?? 3D TLC NAND
Form-Factor, Interface  Single-Sided M.2-2280, PCIe 4.0 x4, NVMe 1.4
DRAM                    512 MB DDR4     1 GB DDR4       2 GB DDR4
Sequential Read         6750 MB/s       7430 MB/s       7415 MB/s
Sequential Write        2635 MB/s       5300 MB/s       6800 MB/s
Random Read IOPS        200K            390K            730K
Random Write IOPS       645K            1085K           1500K
SLC Caching             Yes
TCG Opal Encryption     No
Warranty                5 years
Write Endurance         250 TBW
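One pattern worth noting in the specifications: the DRAM allocation scales 1:1 with capacity, matching the ratio DRAM-equipped SSDs typically use to hold the flash translation layer's logical-to-physical mapping tables. This is a general industry observation, not something Mushkin states:

```python
# The Redline VORTEX's DRAM sizing follows the common ~1 GB-of-DRAM-per-1 TB-of-NAND
# rule of thumb for FTL mapping tables. Values taken from the spec table above.

dram_per_capacity = {512: 0.5, 1024: 1.0, 2048: 2.0}  # NAND GB -> DRAM GB

for nand_gb, dram_gb in dram_per_capacity.items():
    print(f"{nand_gb} GB NAND / {dram_gb} GB DRAM = 1:{nand_gb / dram_gb:.0f}")
```

Each capacity point works out to the same 1:1024 ratio, i.e. roughly 1 GB of DRAM per 1 TB of NAND.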
AMD Teases FSR 2.0: Temporal Upscaling Tech for Games Coming in Q2
Alongside their spring driver update, AMD this morning is also unveiling the first nugget of information about the next generation of their FidelityFX Super Resolution (FSR) technology. Dubbed FSR 2.0, the next generation of AMD’s upscaling technology will be taking the logical leap into adding temporal data, giving FSR more data to work with, and thus improving its ability to generate details. And, while AMD is being coy with details for today’s early teaser, at a high level this technology should put AMD much closer to competing with NVIDIA’s temporal-based DLSS 2.0 upscaling technology, as well as Intel’s forthcoming XeSS upscaling tech.
AMD Releases Adrenalin Software Spring 2022 Update: Adds RSR Upscaling and More
AMD this morning is releasing the awaited spring update to their AMD Software Adrenalin Edition suite for their GPUs. First unveiled back in January as part of AMD’s keynote address, the spring update (22.3.1) introduces a few quality-of-life features for AMD’s software stack, and is being headlined by the first release of AMD’s FSR 1.0-based Radeon Super Resolution (RSR) technology for driver-based game upscaling.
AMD's Ryzen 7 5800X3D Launches April 20th, Plus 6 New Low & Mid-Range Ryzen Chips
Since the launch of AMD's Zen 3-powered Ryzen 5000 desktop processors in late 2020, the company's retail desktop chip offerings have been rather static. With AMD facing heavy demand for products on multiple fronts – from CPUs to GPUs to console APUs – and all during an unprecedented chip crunch, the company has held back on expanding its desktop offerings. Instead, AMD has focused its limited TSMC 7nm wafer allocations on trying to keep up with demand for some of its most important (and highest-margin) products, such as server CPUs, laptop chips, and high-end desktop CPUs.

However, as the chip crunch has ever-so-slightly abated, AMD is now turning its attention back to the desktop space to finally focus on fleshing out its desktop processor product lineups. We saw our first glimpse of that last week with the announcement of the long-awaited Threadripper Pro 5000 series for workstations. And now this week the company is announcing the launch dates of several new Ryzen desktop processors, including the much-awaited Ryzen 7 5800X3D with V-Cache, as well as 6 new low-to-mid-range Ryzen SKUs for the retail market.
The ADATA XPG Cybercore 1300W PSU Review: Advanced From the Start
In today's review, we are taking a look at XPG's latest creation, the Cybercore power supply series. The Cybercore PSU is based on a whole new power supply platform and boasts a massive power output for its proportions, all while being built exclusively with premium components.
The Intel W680 Chipset Overview: Alder Lake Workstations Get ECC Memory and Overclocking Support
Earlier this month, Intel quietly launched its W680 chipset, the company's workstation-focused chipset for its 12th Gen Core (Alder Lake) processors. Unlike the current generation of consumer desktop chipsets such as Z690, H670, B660, and H610, the W680 adds the capability to use ECC DRAM, in both DDR5 and DDR4 variants. At present, there haven't been many W680 motherboard announcements, although a couple of vendors, including ASRock Industrial and Supermicro, have a few options listed. So we're giving you the lowdown on W680: what it has to offer, and what technologies it brings for users looking to build a workstation-class desktop with Intel's latest Alder Lake architecture.
Interview with Intel’s Raja Koduri: Zettascale or ZettaFLOP? Metaverse what?
We currently live in a sea of buzzwords. Whether it's something to catch the eye when scrolling through our news feed, or a company wanting to latch its product onto the word of the day, the quintessential buzzword gets lodged in your brain and it's hard to get out. Two that have broken through the barn doors in the technology community lately have been 'Zettascale' and 'Metaverse'. Cue a collective groan while we wait for them to stop being buzzwords and turn into something tangible. That's my goal today while speaking to Raja Koduri, Intel's SVP and GM of Accelerated Computing.

What makes buzzwords like Zettascale and Metaverse so egregious right now is that they're referring to one of our potential futures. To break it down: Zettascale is about creating 1000x the current level of compute in the latter half of the decade, to take advantage of the high demand for computational resources from both consumers and businesses, and especially machine learning; Metaverse is something about more immersive experiences and leveling up the future of interaction, but is about as well defined as a PHP variable.

The main element that combines the two is computer hardware, coupled with computer software. That's why I reached out to Intel to ask for an interview with Raja Koduri, SVP and GM, whose role is to manage both angles for the company on the way towards a Zettascale future and a Metaverse experience. One of the goals of this interview was to cut through the miasma of marketing fluff and understand exactly what Intel means by these two phrases, and whether they're relevant enough to the company to be built into those future roadmaps (to no one's surprise, they are – but we're finding out how).
Apple Announces M1 Ultra: Combining Two M1 Maxes For Workstation Performance
As part of Apple's spring "Peek Performance" product event this morning, Apple unveiled the fourth and final member of the M1 family of Apple Silicon SoCs, the M1 Ultra. Aimed squarely at desktops – specifically, Apple's new Mac Studio – the M1 Ultra finds Apple once again upping the ante in terms of SoC performance for both CPU and GPU workloads. And in the process, Apple has thrown the industry a fresh curveball by not just combining two M1 Max dies into a single chip package, but by making the two dies present themselves as a single, monolithic GPU, marking yet another first for the chipmaking industry.

Back when Apple announced the M1 Pro and the already ridiculously powerful M1 Max last fall, we figured Apple was done with M1 chips. After all, how would you even top a single 432mm2 chip that's already pushing the limits of manufacturability on TSMC's N5 process? Well, as the answer turns out to be, Apple can do one better – or perhaps it would be more accurate to say two times better. For the company's final and ultimate M1 chip design, the M1 Ultra, Apple has bonded two M1 Max dies together on a single chip, with all of the performance benefits that doubling their hardware would entail.

The net result is a chip that, without a doubt, manages to be one of the most interesting designs I've ever seen for a consumer SoC. As we'll touch upon in our analysis, the M1 Ultra is not quite like any other consumer chip currently on the market. And while the double-die strategy benefits sprawling multi-threaded CPU and GPU workloads far more than it does single-threaded tasks – an area where Apple is already starting to fall behind – in the process they're breaking new ground on the GPU front. By enabling the M1 Ultra's two dies to transparently present themselves as a single GPU, Apple has kicked off a new technology race for placing multi-die GPUs in high-end consumer and workstation hardware.
The Apple "Peek Performance" Event Live Blog (Starts at 10am PT/18:00 UTC)
Join us a bit later today for Apple's spring product launch event, which for this year is being called "Peek Performance". The presentation kicks off at 10am Pacific (18:00 UTC) and should be packed with a barrage of Apple product announcements.

In previous years these events have covered new Macs, iPads, and even iPhones, and this year should be much the same. So it should be interesting to see what Apple has in store, especially as the company continues its multi-year transition of the Mac from x86 CPUs to its own Arm-based Apple Silicon chips.

Join us at 10am PT for more details!
AMD Announces Ryzen Threadripper Pro 5000 WX-Series: Zen 3 For OEM Workstations
In 2020, AMD released a new series of workstation-focused processors under its Threadripper umbrella, aptly named the Threadripper Pro series. These chips were essentially true workstation versions of AMD's EPYC server processors, offering the same massive core counts and high memory bandwidth as AMD's high-performance server platform. By introducing Threadripper Pro, AMD carved out an explicit processor family for high-performance workstations, a task that was previously awkwardly juggled by the older Threadripper and EPYC processors.

Now, just under two years since the release of the original Threadripper Pro 3000 series, AMD is upgrading that lineup with the announcement of the new Threadripper Pro 5000 series. Based on AMD's Zen 3 architecture, the newest Threadripper Pro chips are designed to up the ante once more in terms of performance, taking advantage of Zen 3's higher IPC as well as higher clockspeeds. Altogether, AMD is releasing five new SKUs, ranging from 12c/24t to 64c/128t, which, combined with support for 8 channels of DDR4 across the entire lineup, will offer a mix of chips for both CPU-hungry and bandwidth-hungry compute tasks.
The ASUS Vivobook Pro 15 OLED Review: For The Creator In All Of Us
ASUS has been building notebooks for the creator market for several years now, and today we are looking at the Vivobook Pro 15 OLED. The name kind of gives away the special feature of this device, but the inclusion of a 15.6-inch OLED display adds a major punch to the offering. Although OLED has not taken over the PC industry like it has the smartphone world, there is really nothing like the stunning contrast ratio OLED provides, as well as the wider color gamut most OLED devices support.
ASRock Industrial's NUC1200 BOX Series Brings Alder Lake to UCFF Systems
Intel recently updated its low-power processor lineup with the Alder Lake U- and P-series 12th Gen Core mobile SKUs. With support for a range of TDPs up to 28W, these allow ultra-compact form-factor (UCFF) PC manufacturers to update their traditional NUC clones. Similar to the Tiger Lake generation, ASRock Industrial is again at the forefront, launching the NUC1200 BOX Series within a few days of Intel's announcement.

The new NUC1200 BOX Series retains the chassis design and form-factor of the NUC1100 BOX Series. The NUC BOX-1165G7 left a favorable impression in our hands-on review, and the NUC1200 BOX Series seems to be carrying over all those aspects. The company is launching three models in this series - NUC BOX-1260P, NUC BOX-1240P, and NUC BOX-1220P. The specifications are summarized in the table below.

ASRock Industrial NUC1200 BOX (Alder Lake-P) Lineup
Model       NUC BOX-1260P       NUC BOX-1240P       NUC BOX-1220P
CPU         Intel Core i7-1260P
The Intel Core i3-12300 Review: Quad-Core Alder Lake Shines
Just over a month ago, Intel pulled the trigger on the rest of its 12th generation "Alder Lake" Core desktop processors, adding no fewer than 22 new chips. This significantly fleshed out the Alder Lake family, adding in the mid-range and low-end chips that weren't part of Intel's original, high-end focused launch. Combined with the launch of the rest of the 600 series chipsets, this finally opened the door to building cheaper and lower-powered Alder Lake systems.

Diving right in, today we're taking a look at Intel's Core i3-12300 processor, the most powerful of the new i3s. Like the entire Alder Lake i3 series, the i3-12300 features four P-cores, and it is aimed at the entry-level and budget desktop market. With prices being driven higher on many components and AMD's high-value offerings dominating the lower end of the market, it's time to see if Intel can compete in the budget desktop market and offer value in a segment that currently needs it.
SPEC Adds Linux Edition of SPECviewperf 2020 v3.0 Benchmark
The SPEC Graphics Performance Characterization (SPECgpc) group updated the Windows version of its workstation GPU benchmark suite - SPECviewperf 2020 - twice last year. The intent of the benchmark is to replay GPU workload traces from real-world professional applications (Maya for media and entertainment; Catia, Creo, NX, and Solidworks for CAD/CAM; OpendTect for the energy industry; and the Tuvok visualization library for rendering medical images). Version 3.0, released in December 2021, updated the Solidworks viewset to better reflect the OpenGL API calls in the latest version of the software. Version 2.0 had enabled selective downloading of the viewsets.

While the Windows version of the benchmark has been through three versions, the Linux community was left out, having to rely on the SPECviewperf 13 release from almost a decade ago. That is changing today with the availability of the Linux edition of SPECviewperf 2020 v3.0. The benchmark updates the viewsets with traces from the latest versions of the relevant applications and also updates the models to match the Windows version. Since the benchmark's wrapper framework (even for the Windows version) is based on Node-Webkit (now NW.js), the creation of a Linux edition mainly had to deal with the actual viewset processing. Automation and results processing are identical between the Windows and Linux versions.

Unlike SPECviewperf 13 Linux Edition, which was distributed as a compressed tar archive, the SPECviewperf 2020 v3.0 Linux Edition is a .deb package. The benchmark requires Canonical Ubuntu Linux 20.04, 16GB or more of RAM, and 80GB of fixed disk drive space. Viewsets are processed at two resolutions - 1080p and 4K - with 1080p being the minimum. The GPU drivers are required to support OpenGL 4.5, and the GPU itself needs a minimum of 2GB of VRAM.

The benchmark is available for download free of cost for everyone other than vendors of computers and related products / services who are not members of SPEC/GWPG. Such vendors can purchase a license for $2500.

Linux has a much greater market share in the workstation segment compared to consumer desktops. It is heartening to see SPECgpc finally replace the aging Linux edition of SPECviewperf 13. The latest viewsets and models in the SPECviewperf 2020 v3.0 Linux Edition bring it on par with the benchmarking capabilities of the Windows edition.
Universal Chiplet Interconnect Express (UCIe) Announced: Setting Standards For The Chiplet Ecosystem
If there has been one prominent, industry-wide trend in chip design over the past half-decade or so, it has been the growing use of chiplets. The tiny dies have become an increasingly common feature as chip makers look to them to address everything from chip manufacturing costs to the overall scalability of a design. Be it simply splitting up a formerly monolithic CPU into a few pieces, or going to the extreme with 47 chiplets on a single package, chiplets are already playing a big part in chip design today, and chip makers have made it clear that their role is only going to grow in the future.

In the meantime, after over 5 years of serious, high-volume use, chiplets and the technologies underpinning them seem to finally be reaching an inflection point in terms of design. Chip makers have developed a much better idea of what chiplets are (and are not) good for, packaging suppliers have refined the ultra-precise methods needed to place chiplets, and engineering teams have ironed out the communications protocols used to have chiplets talk amongst each other. In short, chiplets are no longer experimental designs that need to be proven, but instead have become proven designs that chip makers can rely on. And with that increasing reliance on chiplet technology comes the need for design roadmaps and stability – the need for design standards.

To that end, today Intel, AMD, Arm, and all three leading-edge foundries are coming together to announce that they are forming a new and open standard for chiplet interconnects, aptly named Universal Chiplet Interconnect Express, or UCIe. Taking significant inspiration from the very successful PCI-Express playbook, with UCIe the involved firms are creating a standard for connecting chiplets, with the goal of having a single set of standards that not only simplifies the process for all involved, but leads the way towards full interoperability between chiplets from different manufacturers, allowing chip makers to mix-and-match chiplets as they see fit. In other words, the goal is to make a complete and compatible ecosystem out of chiplets, much like today's ecosystem for PCIe-based expansion cards.
Lenovo Announces The ThinkPad X13s Laptop, Powered By Snapdragon 8cx Gen 3
During the MWC 2022 trade show in Barcelona, Lenovo unveiled the first laptop powered by Qualcomm's new Snapdragon 8cx Gen 3 chip, the ThinkPad X13s. Using a passively-cooled design, Lenovo is claiming that the ThinkPad X13s has a long battery life with up to 28 hours of video playback, boasts plenty of wireless connectivity – including support for 5G mmWave and Wi-Fi 6E – and is all housed in a 90% recycled magnesium chassis.

Over the last couple of months, we've dedicated a number of column inches to Qualcomm's latest Snapdragon 8cx Gen 3. It uses four Arm Cortex-X1 prime cores at 3.0 GHz and four smaller Cortex-A78 efficiency cores operating at 2.4 GHz, and it also includes the company's latest Adreno graphics.

The biggest challenge for Qualcomm with the Snapdragon 8cx Gen 3 and the Windows on Arm project has been application compatibility. Qualcomm has been working closely with Microsoft and software vendors to allow its Arm-based processors to work with x86 apps, and last year's launch of Windows 11 added x86-64 application compatibility as well. So these days it's less a matter of what will work on WoA laptops, and more about how quickly x86 applications will run.
AMD's Ryzen 9 6900HS Rembrandt Benchmarked: Zen3+ Power and Performance Scaling
Earlier this year, AMD announced an update to its mobile processor line that we weren’t expecting quite so soon. The company updated its Ryzen 5000 Mobile processors, which are based around Zen3 and Vega cores, to Ryzen 6000 Mobile, which use Zen3+ and RDNA2 cores. The jump from Vega to RDNA2 on the graphics side was an element we had been expecting at some point, but the emergence of a Zen3+ core was very intriguing. AMD gave us a small pre-brief, saying that the core is very similar to Zen3, but with ~50 new power management features and techniques inside. With the first laptops based on these chips now shipping, we were sent one of the flagship models for a quick test.
Bitspower Quietly Launches its First Ever Air CPU Cooler, The Phantom
Better known for its custom water cooling components, Bitspower has released its first-ever air-based CPU cooler, the Phantom. Designed for the entry-level market, the Phantom includes a single 120 mm RGB-enabled cooling fan and supports Intel's latest LGA1700 desktop socket as well as AMD's AM4 socket.

With a height of 158 mm, the Phantom is compatible with most desktop cases. It is constructed of aluminum, with four copper heat pipes attaching the large aluminum fin stack to the cold plate. To aid in heat dissipation, it uses a single 120 mm cooling fan which includes RGB lighting for users looking to add a bit of flair to their system. The included fan has a maximum speed of 1800 RPM, an airflow rating that Bitspower claims reaches 80 CFM, and a maximum noise level of 34 dBA.
The GIGABYTE Z690 Aorus Master Mobo Review: 10GbE Rounds Out A Premium Board
The latest motherboard to grace our test bench is the GIGABYTE Z690 Aorus Master, which hails from the company's Aorus gaming series and sits just one step below its Aorus Xtreme models. Some of its most notable features include 10 Gigabit Ethernet and Wi-Fi 6E networking, USB 3.2 G2x2 connectivity, and plenty of storage capacity, consisting of five M.2 slots and six SATA ports. The Z690 Aorus Master also boasts support for DDR5-6400 memory and an impressive 20-phase power delivery designed for overclockers looking to squeeze out extra performance. Does the GIGABYTE Z690 Aorus Master have enough going for it to justify the $470 price tag? We aim to find out in our latest Z690 motherboard review.
The Intel NUC12 Extreme Dragon Canyon Preview: Desktop Alder Lake Impresses in SFF Avatar
Intel kick-started a form-factor revolution in the early 2010s with the introduction of the ultra-compact NUCs. The systems were meant to be an alternative to the tower desktops used in many applications where the size, shape, and capabilities of such systems were mostly unwarranted. The success of the NUCs enabled Intel to start reimagining the build of systems used in a wider range of settings.

More recently, the introduction of the Skull Canyon NUC in 2016 was Intel's first effort to make a gaming-focused SFF PC. And the desktop-focused Compute Elements (essentially, a motherboard in a PCIe card form-factor) launched in early 2020 meant that full-blown gaming desktops could credibly come under the NUC banner. In the second half of 2020, the Ghost Canyon NUC9 – the first NUC Extreme – made a splash in the market with support for a user-replaceable discrete GPU. Ghost Canyon was extremely impressive, but the restrictions on the dGPU size and high-end pricing were dampeners. Intel made some amends with the NUC11 Extreme (Beast Canyon), using a special Tiger Lake SKU with a 65W TDP and comparatively competitive pricing.

The introduction of Alder Lake and its desktop-first focus has enabled Intel to prepare a new flagship in the NUC Extreme lineup barely 6 months after the launch of the Beast Canyon NUC. The new Dragon Canyon platform was briefly teased at the 2022 CES, with the promise of a Q1 launch. Intel is keeping its word with the launch of a number of NUC12 Compute Elements and NUC12 Extreme Kit SKUs. Today, we are taking a detailed look at what the NUC12 Extreme brings to the table, particularly in comparison to the NUC11 Extreme. The recent introduction of Windows 11 means that our benchmark comparison set is currently limited - today's preview does not deal with any systems other than the NUC12 Extreme and NUC11 Extreme.
Intel Launches Alder Lake U and P Series Processors: Ultraportable Laptops Coming In March
Following the January launch of Intel's first Alder Lake-based 12th Gen Core mobile processors, the Alder Lake-H family, Intel this morning is following that up with the formal launch of the rest of its mobile product stack. Designed to fill out the lower-power portion of Intel's product stack for smaller thin-and-light laptops, today the company is launching the 28 watt Alder Lake-P series processors, as well as the 15 watt and 9 watt Alder Lake-U series processors. Laptops based on both processor sub-families are set to become available in March, where they will be competing against rival AMD's recently launched Ryzen 6000 Mobile series.

Technically, today's announcement from Intel is largely a redux in terms of information. The company announced the Alder Lake P and U series alongside the H series chips back at CES, though in a blink-and-you'll-miss-it fashion, as the bulk of Intel's efforts were focused around the more imminent H series. But now that the H series launch has passed and the first U/P series laptops are about to hit the market, Intel is giving its lower-power processors their moment in the sun.

Along with reiterating the specifications of the U/P series processors, including clockspeeds, core counts, and integrated GPU configurations, today's announcement also offers some new concrete details on the overall platform. In particular, we now have confirmation of which I/O options are included for the various low-power chip configurations, as well as the number of USB ports and PCIe lanes available. As well, Intel is also offering a full update on its Evo design program, outlining the updated requirements for Alder Lake Evo laptops.
Netgear Launches WAX630E AX7800 Wi-Fi 6E Access Point for SMBs
Netgear has been building up a portfolio of software-defined networking (SDN) products over the last few years. The introduction of the cloud-based Insight management feature to their lineup of SMB products has made their routers, switches, and access points appeal to a wider customer base. SMB-focused versions of leading technologies often lag their consumer counterparts by a year or so, and updates to flagship offerings are often spaced apart. However, Netgear's Wi-Fi 6 WAX630 (introduced in June 2021) is receiving a Wi-Fi 6E upgrade / companion today in the form of the WAX630E.

The Qualcomm-based WAX630E is one of the first reasonably-priced Wi-Fi 6E APs for SMBs and micro-businesses. Many announced and leaked offerings in the Wi-Fi 6E space have been delayed (with FCC certification being a stumbling block), allowing Netgear's offering to become available for purchase ahead of the competition. The advantages of Wi-Fi 6E have been covered in multiple articles previously - including the original Wi-Fi Alliance announcement coverage, and Netgear's first Wi-Fi 6E product in the consumer space last year. The availability of a much wider interference-free spectrum in 6 GHz - up to seven usable 160 MHz channels - means that consumer experience is bound to be much better. The downside is the reduced range due to power limitations (APs at 30 dBm max., and clients at 24 dBm max.).

On the technical front, the WAX630E replaces the third 4x4 5 GHz band of the WAX630 with a 2x2 6 GHz radio. The 2.4 GHz radio configuration has also been reduced to 2x2 in the WAX630E from the 4x4 in the WAX630. These updates have allowed Netgear to price the WAX630E very competitively at $350 - a reasonable premium of $20 over the WAX630's $330. The reduced number of radios also allows the WAX630E to sport lower maximum power consumption compared to the WAX630 (27.6W vs. 30.1W).
However, the availability of 160 MHz channels enables the AP to be marketed as an AX7800 device (600 Mbps for the 2x2 2.4 GHz, 4800 Mbps for the 4x4 5 GHz, and 2400 Mbps for the 2x2 6 GHz radios). The AP has a 3000 sq. ft. coverage area. The WAX630E retains the same physical footprint as the WAX630, though it is slightly heavier. The AP supports PoE++ over the 2.5GbE LAN port (an extra 1Gbps LAN port is also available). Netgear recommends the Insight-managed MS510TXUP NBASE-T PoE++ switch for use with the WAX630E. The introduction of the new AP provides a wider range of options for Netgear's customers, whose needs may vary with respect to the number and types of clients to support, speeds, and pricing.

Netgear believes that cloud-based management of Wi-Fi 6E APs will be a must in the long run. Outdoor APs will eventually need a solution for ensuring that they do not interfere with other licensed 6 GHz band users. The FCC mandates that real-time geolocation database cross-checks be performed by APs, which might be a challenge without regular updates to the database. While the WAX630E is not affected by this (due to its indoor nature), cloud-based management / a real-time connection to a database maintained by the vendor might become essential moving forward. Netgear's Insight provides IT administrators with a centralized dashboard for multi-location installations, allowing for both local and remote management. Netgear is touting scalability and easy-to-use multi-location support as advantages over solutions requiring local controllers.

The WAX630E is available for pre-order today - $350 for the PoE variant, and $370 for the one with a power adapter included. The units are expected to ship early next month.
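For readers curious where the AX7800 figure and the seven-channel count come from, the marketing class is simply the sum of each radio's maximum PHY rate, and the channel count falls out of the 6 GHz band's width. A minimal sketch (the per-stream rates used here are the usual 802.11ax maxima for the given channel widths, an illustrative assumption rather than figures from Netgear's spec sheet):

```python
# How an "AX7800" marketing class is derived: sum the maximum PHY rates
# of each radio. (streams, per-stream max rate in Mbps) per radio:
radios = {
    "2.4 GHz (2x2, 40 MHz)": (2, 300),   # 2 x 300  =  600 Mbps
    "5 GHz (4x4, 160 MHz)":  (4, 1200),  # 4 x 1200 = 4800 Mbps
    "6 GHz (2x2, 160 MHz)":  (2, 1200),  # 2 x 1200 = 2400 Mbps
}
total = sum(streams * rate for streams, rate in radios.values())
print(f"AX{total}")  # AX7800

# And the "seven usable 160 MHz channels": the 6 GHz band spans
# 5925-7125 MHz, i.e. 1200 MHz of contiguous spectrum.
usable_160mhz = (7125 - 5925) // 160
print(usable_160mhz)  # 7
```

Real-world throughput will of course be far below these PHY-rate sums, but this is how the class numbers on the box are computed.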
EVGA Unleashes Z690 Dark K|NGP|N Edition Motherboard: Alder Lake Goes Extreme
Although these long, cold, and dark nights are starting to come to an end, EVGA has launched its darkest and most devilish desktop motherboard to date, the EVGA Z690 Dark K|NGP|N Edition. Designed in collaboration with legendary extreme overclocker Vince 'K|NGP|N' Lucido, the Z690 Dark K|NGP|N Edition boasts an impressive feature set including support for DDR5-6600 memory, three PCIe 4.0 x4 M.2 slots, eight SATA ports, and a large 21-phase power delivery to push Intel's Alder Lake to the extreme.

Built around Intel's high-end Z690 chipset, the EVGA Z690 Dark K|NGP|N Edition isn't a conventional motherboard by any stretch of the imagination. It is based on the E-ATX form factor and has interesting design characteristics, including a transposed LGA1700 socket that allows extreme overclockers to mount LN2 pots more easily.

To make the board more robust, EVGA includes a large black metal backplate on the rear of the board to reinforce the PCB. The board also uses right-angled connectors, including two 8-pin 12V ATX CPU power inputs and a 24-pin ATX motherboard power input recessed into a handy cutout in the PCB, designed to make cable management easier. It also includes an impressive accessories pack that features an EVGA flat test bench plate like the one we saw in our previous review of the EVGA Z590 Dark motherboard.

Looking at the feature set, the Z690 Dark K|NGP|N Edition includes two full-length PCIe 5.0 slots that can operate at x16 and x8/x8, with three PCIe 4.0 x4 M.2 slots that sit in between the PCIe slots and underneath a large black finned 'Dark' branded heatsink. There are eight SATA ports for conventional storage and optical drives, six of which are from the chipset with RAID 0, 1, 5, and 10 support, and two that come via an ASMedia ASM1061 SATA controller.

Even though the Z690 Dark has a solid feature set for enthusiasts, the real focus by EVGA with this model is on extreme overclocking.
This includes a large 21-phase power delivery cooled by an active heatsink with two fans. The board also has a 10-layer PCB and contains an overclocker's toolkit in the top right-hand corner that consists of dual two-digit LED debuggers, a power button, a reset button, DIP switches to disable PCIe slots, and a slow mode switch. There's also a ProbeIt header where users can monitor voltages in real-time from various components on the board, such as the CPU and power inputs.
From There to Here, and Beyond
I’ll be the first to admit, I had no history with AnandTech before I joined. It was sheer chance, meeting one of the writers at an overclocking event, that led me to first become a reader, then a writer, and eventually to a career in journalism. If you’re new to AnandTech then welcome! It’s been my home for over a decade, where we’ve always had the goal of pushing the boundaries for all things technical and engineering-related. For all the old hands - I know many of you work at the companies we report on around the industry, and we’ve been forever glad for your continued support and interactions. Long may it continue, especially in an industry that is slowly consolidating around a few key players, in both technology and publishing – for as long as the audience demands it, AnandTech will aim to provide.

Personally, I was always into computers, but it was overclocking that got me into hardware. Not just getting more frames in my games, but actual competitive overclocking, trying to get the best scores in the world. People liken it to Formula 1 or car tuning, when in reality it feels like drag racing – 8 hours of preparation for a 10-second quarter mile. Studying chemistry at the time, there seemed on the surface to be not much more than a little overlap, except for a desire to learn more about what I was doing, the why, and how it all worked. That oblivious-yet-determined manner led to Rajinder Gill, senior motherboard editor at the time, suggesting that Anand bring me on as a freelancer back in 2010. Initially covering news, I transitioned into Rajinder’s role rather quickly after he left, and starting from the Sandy Bridge launch in early 2011, I spent the next five years reviewing motherboards at AnandTech as my day job after finishing my PhD. I still look back on my first proper motherboard review, the ASRock P67 Extreme4, with rose-tinted spectacles.
It was a great board for the time, and I still have it in my collection.

That’s what got me to AnandTech, and after 11 years I feel the need for a change, so I have decided to take up a new position in the industry. When Anand left in 2014, after 18 years at the helm, I was still quite green in my role and didn’t really take his words to heart at the time. Looking back at them today, I see a lot of parallels, even though I’ve never sat in that senior role. Since Anand left, I have been promoted to Senior CPU Editor, and Ryan Smith has taken the Editor-in-Chief role with grace and poise – he’s consistently talked me down from a ledge when this industry has piled on and all I’ve wanted to do is lash out! It was Ryan who brought me on as a full-time employee, and who helped navigate AnandTech through two acquisitions, to where the brand currently sits today with Future. Despite being (roughly) the same age, Ryan has been a mentor and a director for a lot of the content I’ve written, for which I’m very thankful. I hope he knows how much it has meant over the years.

I’ve really enjoyed working at AnandTech. I love getting my teeth into the latest technical details, and getting advance briefings from the researchers never ceases to be a great pleasure of mine. It doesn’t matter whether that’s for an upcoming product, attending technical IEEE conferences, or a Hot Chips talk, or seeing inside the secret R&D room at Computex. In a lot of ways, my academic experience has overlapped with my coverage in ways I would never have predicted - we're on the cusp of finding out how we need More Than Moore's Law in the modern era. My travel in 2019 topped 200,000 miles, which doesn’t really bother me in the slightest, as I’ve been able to meet and discuss with key industry movers and shakers. A crowning moment was talking AMD into giving its 64-core Threadripper a better price the evening before the announcement. Or biting one of Intel’s 10nm wafers.
Being able to travel around and visit companies has shown me just how many amazing people and stories there are in our industry, and it’s a shame there aren’t enough hours in the day to focus on them all, as I know a lot of you would want to hear about them. I hope I've also been able to bring a little bit of humor and fun to my content too.

If there’s one thing that has remained through all that time, it’s the dedication of AnandTech’s writers to provide as many detailed technical write-ups as we can. Over the years I’ve worked with some incredible talent, especially Andrei, and I’ve managed individuals that I’ve seen improve leaps and bounds, especially Gavin, who now leads our motherboard coverage. Big shoutouts go to the rest of the team over the years: Ryan, Brett, Ganesh, Billy, Kristian, Tracy, Anton, Joe, Matt, Matt, Josh, Nate, Rajinder, Gary, Virginia, and Howard. You’ve all meant a lot to me in so many different ways. Then there’s also the audience, who have always provided copious feedback, either here, on social media, or through our email conversations. Please don’t stop giving all of us constructive criticism on how to do our jobs better, regardless of where we are or who we work for.

As for me, I’m finding new ventures: a mixture of behind-the-scenes and public-facing opportunities, as well as continued consulting, but still within this tech industry that we love to analyze. I’ll still be that loud voice on Twitter, criticizing every financial disclosure and presentation, and if you’re interested in what I’m doing next, I’m likely to announce my future roles over there or on LinkedIn in due course. While today is my final day at AnandTech, don't be surprised if my name pops up again here over the next week or two, as I’ve prepared some content in advance, including our AMD Rembrandt review and an interview with Raja Koduri.
Stay tuned for those.

To all of the readers over the years, thank you so much for this opportunity. I couldn’t have done it without you. I hope that you’ll continue to give all the AnandTech writers the support you have shown me.

~Ian
AnandTech Interview with Dr. Ann Kelleher: EVP and GM of Intel’s Technology Development
It’s somewhat of an understatement to say that Intel’s future roadmap on its process node development is one of the most aggressive in the history of semiconductor design. The company is promising to pump out process nodes quicker than we’ve ever seen, despite having gone through a recent development struggle. Even with CEO Pat Gelsinger promising more than ever before, it’s up to Intel’s Technology Development (TD) team to pick up the ball and run with it in innovative ways to make that happen. In charge of it all is Dr. Ann Kelleher, EVP and GM of Intel’s Technology Development, and on the back of some strong announcements last year we reached out for the chance to interview her regarding Intel’s strategy.
Intel Discloses Multi-Generation Xeon Scalable Roadmap: New E-Core Only Xeons in 2024
It’s no secret that Intel’s enterprise processor platform has been stretched in recent generations. Compared to the competition, Intel is chasing its multi-die strategy while relying on a manufacturing platform that hasn’t offered the best in the market. That being said, Intel is quoting more shipments of its latest Xeon products in December than AMD shipped in all of 2021, and the company is launching the next-generation Sapphire Rapids Xeon Scalable platform later in 2022. The roadmap beyond Sapphire Rapids has been kept somewhat under wraps, with minor leaks here and there, but today Intel is lifting the lid on it.
Intel Goes Full XPU: Falcon Shores to Combine x86 and Xe For Supercomputers
One of Intel’s more interesting initiatives over the past few years has been XPU – the idea of using a variety of compute architectures in order to best meet the execution needs of a single workload. In practice, this has led to Intel developing everything from CPUs and GPUs to more specialty hardware like FPGAs and VPUs. All of this hardware, in turn, is overseen at the software level by Intel’s oneAPI software stack, which is designed to abstract away many of the hardware differences to allow easier multi-architecture development.

Intel has always indicated that their XPU initiative was just a beginning, and as part of today’s annual investor meeting, Intel is finally disclosing the next step in the evolution of the XPU concept with a new project codenamed Falcon Shores. Aimed at the supercomputing/HPC market, Falcon Shores is a new processor architecture that will combine x86 CPU and Xe GPU hardware into a single Xeon socket chip. And when it is released in 2024, Intel is expecting it to offer better than 5x the performance-per-watt and 5x the memory capacity of their current platforms.

At a very high level, Falcon Shores appears to be an HPC-grade APU/SoC/XPU for servers. While Intel is offering only the barest of details at this time, the company is being upfront in that they are combining x86 CPU and Xe GPU hardware into a single chip, with an eye on leveraging the synergy between the two. And, given the mention of advanced packaging technologies, it’s a safe bet that Intel has something more complex than a monolithic die planned, be it separate CPU/GPU tiles, HBM memory (as with Sapphire Rapids), or something else entirely.

Diving a bit deeper, while integrating discrete components often pays dividends over the long run, the nature of the announcement strongly indicates that there’s more to Intel’s plan here than just integrating a CPU and GPU into a single chip (something they already do today in consumer parts).
Rather, the presentation from Raja Koduri, Intel’s SVP and GM of the Accelerated Computing Systems and Graphics (AXG) Group, makes it clear that Intel is looking to go after the market for HPC users with absolutely massive datasets – the kind that can’t easily fit into the relatively limited memory capacity of a discrete GPU.

A singular chip, in comparison, would be much better prepared to work from large pools of DDR memory without having to (relatively) slowly shuffle data in and out of VRAM, which remains a drawback of discrete GPUs today. In those cases, even with high-speed interfaces like NVLink and AMD’s Infinity Fabric, the latency and bandwidth penalties of going between the CPU and GPU remain quite high compared to the speed at which HPC-class processors can actually manipulate data, so making that link as short as physically possible can potentially offer performance and energy savings.

Meanwhile, Intel is also touting Falcon Shores as offering a flexible ratio between x86 and Xe cores. The devil is in the details here, but at a high level it sounds like the company is looking at offering multiple SKUs with different numbers of cores – likely enabled by varying the number of x86 and Xe tiles.

From a hardware perspective then, Intel seems to be planning to throw most of their next-generation technologies at Falcon Shores, which is fitting for its supercomputing target market. The chip is slated to be built on an “angstrom era process”, which given the 2024 date is likely Intel’s 20A process. And along with future x86/Xe cores, it will also incorporate what Intel is calling “extreme bandwidth shared memory”.

With all of that tech underpinning Falcon Shores, Intel is currently projecting a 5x increase over their current-generation products in several metrics. This includes a 5x increase in performance-per-watt, a 5x increase in compute density for a single (Xeon) socket, a 5x increase in memory capacity, and a 5x increase in memory bandwidth.
In short, the company has high expectations for the performance of Falcon Shores, which is fitting given the highly competitive HPC market it’s slated for.

And perhaps most interestingly of all, to get that performance Intel isn’t just tackling things from the raw hardware throughput side of matters. The Falcon Shores announcement also mentions that developers will have access to a "vastly simplified GPU programming model" for the chip, indicating that Intel isn’t just slapping some Xe cores into the chip and calling it a day. Just what this entails remains to be seen, but simplifying GPU programming remains a major goal in the GPU computing industry, especially for heterogeneous processors that combine CPU and GPU processing. Making it easier to program these high-throughput chips not only makes them more accessible to developers, but reducing/eliminating synchronization and data preparation requirements can also go a long way towards improving performance.

Like everything else being announced as part of today’s investor meeting, this announcement is more of a teaser for Intel. So expect to hear a lot more about Falcon Shores over the next couple of years as Intel continues their work to bring it to market.
Intel’s Arctic Sound-M Server Accelerator To Land Mid-2022 With Hardware AV1 Encoding
Rounding out Intel’s direct GPU-related announcements from this morning as part of the company’s annual investor meeting, Intel has confirmed that the company is also getting ready to deliver a more traditional GPU-based accelerator card for server use a bit later this year. Dubbed Arctic Sound-M, the forthcoming accelerator is being aimed in particular at the media encoding and analytics market, with Intel planning to take full advantage of what should be the first server accelerator with hardware AV1 video encoding. Arctic Sound-M is expected to launch in the middle of this year.

The announcement of Arctic Sound-M follows a hectic, and ultimately sidetracked, set of plans for Intel’s original GPU server hardware. The company initially commissioned their Xe-HP series of GPUs to anchor the traditional server market, but Xe-HP was canceled in November of last year. Intel didn’t give up on the server market, but outside of the unique Ponte Vecchio design for the HPC market, they did back away from using quite so much dedicated server silicon.

In the place of those original products, which were codenamed Arctic Sound, Intel is instead coming to market with Arctic Sound-M. Given the investor-focused nature of today’s presentation, Intel is not publishing much in the way of technical details for their forthcoming server accelerator, but we can infer from their teaser videos that this is an Alchemist (Xe-HPG) part, as we can clearly see the larger Alchemist die mounted on a single-slot card in Intel’s teaser video. This is consistent with the Xe-HP cancellation announcement, as at the time, Intel’s GPU frontman Raja Koduri indicated that we’d see server products based on Xe-HPG instead.

Arctic Sound-M, in turn, is being positioned as a server accelerator card for the media market, with Intel calling it a “media and analytics supercomputer”.
Accordingly, Intel is placing especially heavy emphasis on the media processing capabilities of the card, both in regards to total throughput and in codecs supported. In particular, Intel expects that Arctic Sound-M will be the first accelerator card released with hardware AV1 encoding support, giving the company an edge with bandwidth-sensitive customers who are ready to use the next-generation video codec.

Interestingly, this also implies that hardware AV1 encoding is a native feature of (at least) the large Alchemist die. Though given the potential value of the first hardware AV1 encoder, it remains to be seen whether Intel will enable it on their consumer Arc cards, or leave it restricted to their server card.

Meanwhile, in terms of compute performance for media analytics/AI inference, Intel is quoting a figure of 150 TOPS for INT8. There aren’t a ton of great comparisons here in terms of competing hardware, but the closest comparison in terms of card size and use cases would be NVIDIA’s A2 accelerator, where, on paper, the Arctic Sound-M would deliver almost 4x the inference performance. It goes without saying that the proof is in the pudding for a new product like Intel’s GPUs, but if they can deliver on these performance figures, then Arctic Sound-M would be able to safely occupy a very specific niche in the larger server accelerator marketplace.

Past that, like the rest of Intel’s Arc(tic) products, expect to hear more details a bit later this year.
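As a quick sanity check on the inference comparison above, the quoted numbers can be turned around to see what they imply about the competition. This back-of-the-envelope sketch uses only Intel's own figures (150 TOPS, "almost 4x"); the implied A2 number is a derivation, not an official NVIDIA spec:

```python
# Back-derive the implied competitor figure from Intel's claims:
# Arctic Sound-M is quoted at 150 INT8 TOPS and "almost 4x" NVIDIA's A2.
arctic_sound_m_tops = 150
claimed_ratio = 4                      # "almost 4x", per Intel's slide

# If the ratio were exactly 4x, the A2 would sit at this INT8 figure;
# "almost 4x" implies the real number is slightly above it.
implied_a2_tops = arctic_sound_m_tops / claimed_ratio
print(implied_a2_tops)  # 37.5
```

That lands in the high-30s TOPS range, which is at least plausible for a low-profile inference card of the A2's class.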
Intel Meteor Lake Client Processors to use Arc Graphics Chiplets
Continuing with this morning’s spate of Intel news coming from Intel’s annual investor meeting, we also have some new information on Intel’s forthcoming Meteor Lake processors, courtesy of this morning’s graphics presentation. Intel’s 2023 client processor platform, Meteor Lake was previously confirmed by the company to use a chiplet/tile approach. Now the company is offering a bit more detail on their tile approach, confirming that Meteor Lake will use a separate graphics tile, and offering the first visual mock-up of what this tiled approach will look like.

First revealed back in March of 2021, Meteor Lake is Intel’s client platform that will follow Raptor Lake – the latter of which is Alder Lake’s successor. In other words, we’re looking at Intel’s plans for their client platform two generations down the line. Among the handful of details revealed so far about Meteor Lake, we know that it will take a tiled approach, and that the compute tile will be built on the Intel 4 process, the company’s first EUV-based process.

Now, thanks to this morning’s investor presentation, we have our first look at the graphics side of Meteor Lake. For Intel’s 2023/2024 platform, Intel isn’t just offering a compute tile separate from an IO/SoC tile; graphics will be its own tile as well. And that graphics tile, in turn, will be based on Intel’s Arc graphics technologies – presumably the Battlemage architecture.

In describing the significance of this change to Intel’s investor audience, GPU frontman Raja Koduri underscored that the tiled approach will enable Intel to offer performance more along the lines of traditional discrete GPUs while retaining the power efficiency of traditional integrated GPUs. More pragmatically, Battlemage should also be a significant step up from Intel’s existing Xe-LP integrated GPU architecture in terms of features, offering at least the full DirectX 12 Ultimate (FL 12_2) feature set in an integrated GPU.
Per this schedule, Intel will be roughly a year and a half to two years behind arch-rival AMD in terms of integrated graphics feature sets, as AMD’s brand-new Ryzen 6000 “Rembrandt” APUs are launching today with a DX12U-capable GPU architecture.

Past that, we’re expecting that Intel may have a bit more information on Meteor Lake this afternoon, as the company will deliver its client (Core) and server (Xeon) updates to investors as part of their live session later today. Of particular interest will be whether Intel embraces the tiled approach for the entire Meteor Lake family, or if they’ll hit a crossover point where they’ll want to produce a more traditional monolithic chip for the lower-end portion of the product stack. The Foveros technology being used to package Meteor Lake is cutting-edge, and cutting-edge tech often has cost drawbacks.
Intel Arc Update: Alchemist Laptops Q1, Desktops Q2; 4mil GPUs Total for 2022
As part of Intel’s annual investor meeting taking place today, Raja Koduri, Intel’s SVP and GM of the Accelerated Computing Systems and Graphics (AXG) Group, delivered an update to investors on the state of Intel’s GPU and accelerator group, including some fresh news on the state of Intel’s first generation of Arc graphics products. Among other things, the GPU frontman confirmed that while Intel will indeed ship the first Arc mobile products in the current quarter, desktop products will not come until Q2. Meanwhile, in the first disclosure of chip volumes, Intel is now projecting that they’ll ship 4mil+ Arc GPUs this year.

In terms of timing, today’s disclosure confirms some earlier suspicions that developed following Intel’s CES 2022 presentation: that the company would get its mobile Arc products out before their desktop products. Desktop products will now follow in the second quarter of this year, a couple of months behind the mobile parts. And finally, workstation products, which Intel has previously hinted at, are on their way and will land in Q3.

The pre-recorded presentation from Koduri does not offer any further details as to why Intel has their much-awaited Arc Alchemist architecture-based desktop products trailing their mobile products by a quarter. We know from previous announcements that the Alchemist family is comprised of two GPUs, so it may be that Intel is farther ahead on manufacturing and delivering the smaller of the two GPUs, which would be best suited for laptops. Alternatively, the company may be opting to focus on laptops first since it would allow them to start with OEM devices, and then expand into the more complex add-in board market a bit later. In any case, it’s a notable departure from the traditional top-to-bottom, desktop-then-laptop style launches that current GPU titans NVIDIA and AMD have favored.
And this means that eager enthusiasts looking for an apples-to-apples look at how Intel’s first high-end GPU architecture fares are going to be waiting a bit longer than initially expected.

Meanwhile, between mobile, desktop, and workstation products, Intel is expecting to ship over 4 million units/GPUs for 2022. To put this in some kind of context, Jon Peddie Research estimates that the GPU AIB industry shipped 12.7 million boards in Q3’21. Which, pending Q4 numbers being released, would put the industry at over 40 million discrete boards shipped altogether over 2021. And while this is a bit of an apples-to-oranges comparison since Intel is counting both AIB desktop and mobile products, it does underscore the overall low volume of Alchemist chips that Intel is expecting to sell this year. Assuming AMD and NVIDIA deliver as many chips in 2022 as they did in 2021, Intel will be adding at most another 10% to the overall volume of GPUs sold.

On the whole, this isn’t too surprising given both current manufacturing constraints and Intel’s newcomer status. The company is using TSMC’s N6 process to fab their Alchemist GPUs, and TSMC remains capacity constrained during the current chip crunch; so how many wafers Intel could hope to get was always going to be limited. Meanwhile, as a relative newcomer to the discrete GPU space – a market that has been an NVIDIA/AMD duopoly for most of the past two decades – Intel doesn’t have the customer inertia that comes from offering decades of products. So even if Alchemist products perform very well relative to the competition, the company still needs time to grow into AMD and NVIDIA-sized shoes and to win over the relatively conservative OEM base.

Celestial Architecture Under Development: Targeting Ultra-Enthusiast Market

Along with the update on Alchemist, Koduri’s presentation also offered a very (very) brief update on Celestial, Intel’s third Arc architecture.
Celestial is now under development, and at this point Intel is expecting it to be their first product to address the ultra-enthusiast market (i.e. the performance crown). GPUs based on the Celestial architecture are expected in the “2024+” timeframe; which is to say that this far out, Intel doesn’t seem to know for sure if they’ll be 2024 or 2025 products.

Covering the gap between Alchemist and Celestial will be Battlemage, the second of Intel’s Arc GPU architectures. Battlemage now has a 2023-2024 release date, with Intel expecting the architecture to improve performance over Alchemist to the point where Battlemage will be competitive in the enthusiast GPU market – but not quite reaching the ultra-enthusiasts that Celestial will.

Finally, by virtue of this disclosure, it would seem that Battlemage will be the first Arc GPU architecture to make it into Intel’s CPUs. The company has it slated to be implemented as a tile on Meteor Lake CPUs, making this the crossover point where Intel’s current Xe-LP GPU architecture finally gets retired in favor of a newer GPU architecture.
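The volume comparison from the Alchemist discussion above can be sketched numerically. Note that the full-year industry figure is a crude extrapolation from JPR's published Q3'21 number, since Q4'21 had not been released at the time:

```python
# Rough share estimate: Intel's projected 2022 Arc shipments vs. the
# discrete GPU board market, extrapolating JPR's Q3'21 figure to a year.
q3_2021_boards_m = 12.7                    # JPR estimate, millions of AIBs
est_2021_total_m = q3_2021_boards_m * 4    # crude full-year extrapolation (~50M)

intel_2022_arc_m = 4.0                     # Intel's projected Arc GPUs for 2022

# If AMD/NVIDIA volumes hold steady in 2022, Intel adds roughly this
# fraction on top of the market's existing volume:
added_share = intel_2022_arc_m / est_2021_total_m
print(f"~{added_share:.0%} added volume")
```

That works out to under 10%, consistent with the "at most another 10%" ceiling quoted in the article (the true figure depends on how the Q4'21 numbers land and on Intel's mobile/desktop split).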
Crucial Ballistix Memory Goes End-of-Life, Micron Realigns its DRAM Strategy
Underscoring the fast-paced nature of the computer hardware market, Micron this week has decided to discontinue all of its current Crucial Ballistix memory products. The move to end-of-life (EOL) these products covers the entire Ballistix lineup, including the vanilla Ballistix, Ballistix MAX, and Ballistix MAX RGB series. Word of this change comes from a press announcement from Micron, Crucial's parent company, and marks the impending end of the line for its popular consumer-focused line-up.

Over the years, I have personally used many of Crucial's Ballistix series memory kits for different builds, even back as far as the days of its DDR2-800 kits with bold and stylish gold heatsinks. The latest Ballistix series for DDR4 mixed things up with a whole host of different color schemes such as white and black, and even integrated RGB heat spreaders designed to offer users varying levels of customizability. It seems those days are now set to come to an end, as Micron has decided to call time on the popular series designed for enthusiasts and gamers.

Despite there being no officially stated reason from Micron for the decision to cut its popular and premier consumer-focused Ballistix series from its arsenal, the press release does state, "The company will intensify its focus on the development of Micron's DDR5 client and server product roadmap, along with the expansion of the Crucial memory and storage product portfolio".

It should be noted that Micron and Crucial never advertised or mentioned the Ballistix brand during the market's transition from DDR4 to DDR5 memory.
It seems that the decision wasn't a spur-of-the-moment one, and that Micron – which is one of the three main DRAM manufacturers globally, along with SK Hynix and Samsung – is looking to turn its attention to satisfying the growing demand from its server and client customers.

Finally, it should be noted that the memory discontinuation doesn't affect Crucial's consumer storage products, such as the Crucial P5 and P2 NVMe M.2 storage drives, or Crucial's X8 and X6 portable SSDs. It looks as if Crucial will still keep its toes in the consumer sector for storage, at least for now. Still, the glory days of its Ballistix series will be no more, and users can expect to see the brand die out entirely as DDR4 memory is phased out of desktop platforms in the coming years.
Ampere Goes Quantum: Get Your Qubits in the Cloud
When we talk about quantum computing, the focus is always on the ‘quantum’ part of the solution. Alongside those qubits is often a set of control circuitry, as well as classical computing power to help make sense of what the quantum bits do – in this instance, classical computing is our typical day-to-day x86 or Arm (or other) hardware with ones and zeros, rather than the wave functions of quantum computing. Of course, the drive for working quantum computers has been a tough slog, and to be honest, I’m not 100% convinced it’s going to happen, but that doesn’t mean that companies in the industry aren’t working together on a solution. In this instance, we recently spoke with Rigetti, a quantum computing company working with Ampere Computing – maker of the Arm-based Altra cloud processors – on a hybrid quantum/classical solution for the cloud, planned for 2023.
A Visit to Intel’s D1X Fab: Next Generation EUV Process Nodes
On a recent trip to the US, I decided to spend some time criss-crossing the nation for a couple of industry events, as well as visiting friends and peers. One of those stops was at Intel’s D1X fab in Hillsboro, Oregon, one of the company’s leading-edge facilities used for both production and development. It’s very rare to get time in a fab as a member of the press – in my ten years covering the industry, I’m lucky to say this was my second visit, which is usually two more than most get. As you can imagine, everything had to be pre-planned and pre-approved, but Intel managed to fit me into their schedule.
Intel to Acquire Tower Semiconductor for $5.4B To Expand IFS Capabilities
Continuing their recent spending spree in expanding their foundry capabilities, Intel this morning has announced that it has struck a deal to acquire specialty foundry Tower Semiconductor for $5.4 billion. If approved by shareholders and regulatory authorities, the deal would see Intel significantly expand its own contract foundry capabilities, acquiring not only Tower’s various fabs and specialty production lines, but also the company’s long-running experience in operating contract foundries.

The proposed deal marks the latest venture from Intel designed to bolster Intel Foundry Services’ (IFS) production capabilities. In the last month and a half alone, Intel has announced plans to build a $20B fab complex in Ohio that will, in part, be used to fab chips for IFS, as well as a $1B fund to support companies building new and critical technologies for the overall foundry ecosystem. The Tower Semiconductor acquisition, in turn, is yet another piece of the puzzle for IFS, fleshing out Intel’s foundry capabilities for more exotic products.

As a specialty foundry, the Israel-based Tower Semiconductor is best known for its analog offerings, as well as its other specialized process lines. Among the chip types produced by Tower are MEMS, RF CMOS, BiCMOS, CMOS image sensors, silicon-germanium transistors, and power management chips. Essentially, Tower makes most of the exotic chip types that logic-focused Intel does not – so much so that Intel has been a Tower customer since long before today’s deal was announced. All of which is why Intel wants the firm and its capabilities: to boost IFS’s ability to make chips for customers who aren’t after a straight ASIC processor.

The proposed acquisition would also see Intel pick up ownership of (or access to) the 8 foundry facilities that Tower uses. This includes the Tower-owned 150mm and 200mm fabs in Israel and two 200mm fabs in the US.
Meanwhile, Tower also has majority ownership in two 200mm fabs and a 300mm fab in Japan, as well as a future 300mm facility in Italy that will be shared with ST Microelectronics. As is typical for analog and other specialty processes, where density is not a critical factor (if not a detriment), all of these fabs are based around mature process nodes, ranging from 1000nm down to 65nm – a stark contrast to Intel’s leading-edge logic fabs.

Along with Tower’s manufacturing technology, the proposed deal would also see Intel pick up Tower’s expertise in the contract foundry business, which is something the historically insular Intel lacks. On top of their fab services, Tower also offers its customers electronic design automation and design services using a range of IP, all of which will be folded into IFS’s expanded offerings as part of the deal. And although the company has already brought on executives and other personnel with contract fab experience in past hirings, this would be the single largest infusion of foundry talent for IFS to date.

All told, Intel currently expects the deal to take around 12 months to close, with the company paying $5.4 billion in cash from its balance sheet for Tower Semiconductor shares. Though approved by both the Intel and Tower Semiconductor boards, Tower’s stockholders will still need to approve the deal. Intel will also need regulatory approval from multiple governments in order to close the deal, to which the company isn’t expecting much objection, given the complementary nature of the two companies’ foundry offerings. Still, as the last week alone has proven, regulatory approval for multi-billion dollar acquisitions is not always guaranteed.
Update: AMD’s Acquisition of Xilinx Receives Regulatory Go, Deal Closes At $49B
Update 02/14: AMD has sent out a brief statement this morning announcing that the Xilinx deal has formally closed; AMD has now fully acquired Xilinx. The final value of the deal, based on AMD's stock price at closing, was $49B – which, according to AMD, makes it the largest semiconductor acquisition in history.

Original Story, 02/10: Although it’s taken a bit longer than planned, AMD’s acquisition of Xilinx has finally cleared the last regulatory hurdles. With the expiration of the mandatory HSR waiting period in the United States, AMD and Xilinx now have all of the necessary regulatory approval to close the deal, and AMD expects to complete its roughly $53 billion acquisition of the FPGA maker on or around February 14, 2 business days from now.

Having previously received approval from Chinese regulators late last month, the final step in AMD’s acquisition of Xilinx has been waiting out the mandatory Hart-Scott-Rodino (HSR) Act waiting period, which gives US regulators time to review the deal and take further action if necessary. That waiting period ended yesterday, February 9, with no action taken by the US, meaning that the US will not be moving to block the deal, and giving AMD and Xilinx the green light to close on it.

With all the necessary approvals acquired, AMD and Xilinx are now moving quickly to finally consummate the acquisition. AMD expects to complete that process in two more business days, putting the closure of the deal on (or around) February 14 – which, fittingly enough, is Valentine’s Day.

16 months in the making, AMD’s acquisition of Xilinx is the biggest acquisition ever for the company. The all-stock transaction was valued at $35 billion at the time the deal was announced, offering 1.7234 shares of AMD stock for each Xilinx share.
Since then, AMD’s stock price has increased by almost 51% to $125/share, which will put the final price tag on the deal at close to $53 billion – almost a third of AMD’s entire market capitalization, underscoring the importance of this deal to AMD. Once the deal closes, Xilinx’s current stockholders will find themselves owning roughly 26% of AMD, while AMD’s existing stockholders hold the remaining 74%.

Having rebounded from their darkest days last decade, AMD has since shifted into looking at how to further grow the company, both by increasing its market share in its traditional products like CPUs and GPUs, and by expanding into new markets entirely. In particular, AMD has turned its eye towards expanding its presence in the data center market, which has seen strong and sustained growth for virtually everyone involved.

With AMD’s recent growth in the enterprise space with its Zen-based EPYC processor lines, a natural evolution would be to bring high-performance compute and adaptable logic under one roof – precisely the conclusion that Intel also came to several years ago. To that end, the high-performance FPGA market, as well as SmartNICs, adaptive SoCs, and other configurable logic driven by FPGAs, represents a promising avenue for future growth for AMD – and one they were willing to pay significantly for.

Overall, this marks the second major industry acquisition to be resolved this week. While NVIDIA’s takeover of Arm was shut down, AMD’s acquisition of Xilinx will close out the week on a happier ending. Ultimately, both deals underscore just how lucrative the market is for data center-class processors, and to what lengths chipmakers will go to secure a piece of that growing market.
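The deal math above can be cross-checked with some back-of-the-envelope arithmetic, using only the figures quoted in the story (the 1.7234 exchange ratio, AMD's ~$125 share price, and the ~51% rise since announcement). This is an illustrative sketch, not AMD's or Xilinx's own accounting:

```python
# Back-of-the-envelope check on the all-stock AMD/Xilinx deal value,
# using only figures quoted in the article. Illustrative arithmetic only.
EXCHANGE_RATIO = 1.7234        # AMD shares issued per Xilinx share
AMD_PRICE = 125.0              # USD/share, AMD's price near closing
DEAL_VALUE_AT_ANNOUNCE = 35e9  # USD, deal value when announced
PRICE_GAIN = 1.51              # AMD stock up ~51% since announcement

# Each Xilinx share converts into 1.7234 AMD shares
value_per_xilinx_share = EXCHANGE_RATIO * AMD_PRICE

# An all-stock deal's value scales directly with the acquirer's share price
deal_value_now = DEAL_VALUE_AT_ANNOUNCE * PRICE_GAIN

print(f"Implied value per Xilinx share: ${value_per_xilinx_share:,.2f}")
print(f"Implied total deal value: ${deal_value_now / 1e9:.1f}B")
```

Scaling the $35B announcement-day value by the ~51% rise in AMD's stock lands at roughly $53B, matching the figure in the story.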
AnandTech Interview with Miguel Nunes: Senior Director for PCs, Qualcomm
During this time of supply crunch, there is more focus than ever on the PC and laptop markets – every little detail gets scrutinized, from which models have which features to how these companies are still updating their portfolios every year despite all the high demand. One of the secondary players in the laptop space is Qualcomm, with its Windows on Snapdragon partnerships bringing Windows to Snapdragon-powered laptops with x86 virtualization, a big bump in battery life, and better connectivity. The big item on Qualcomm’s horizon in this space is its 2023 product line, using CPU cores built by its Nuvia acquisition. At Tech Summit in December 2021, we spoke to Qualcomm’s Miguel Nunes, VP and Senior Director of Product Management for Mobile Computing, who leads the company’s laptop efforts, to see what’s coming down the pipe, and what measures Qualcomm is taking to bring a competitive product to market.
Hands-On With The Huawei P50 Pro: The 2022 Flagship with a Snapdragon 888 Option
For those of us outside the US, Huawei has maintained its presence in a number of markets, in which it has grown its sales over the last decade. Even without access to Google Services or TSMC, the company has been producing hardware and smartphones as it pivots to a new strategy. To lead off 2022, that strategy starts with the Huawei P50 Pro, the next generation of its flagship photography smartphone. The P series from Huawei has often been the lead device for new cameras and new features to attract creators, and the model we have today is a new twist in the Huawei story: our unit comes with a flagship Qualcomm Snapdragon chip inside.
NVIDIA-Arm Acquisition Officially Nixed, SoftBank to IPO Arm Instead
NVIDIA’s year-and-a-half-long effort to acquire Arm has come to an end this morning, as NVIDIA and Arm owner SoftBank have announced that the two companies are officially calling off the acquisition. Citing the current lack of regulatory approval of the deal and the multiple investigations that have been opened up into it, NVIDIA and SoftBank are giving up on their acquisition efforts, as the two firms no longer believe it will be possible to receive the necessary regulatory approvals needed to close the deal. In lieu of being able to sell Arm to NVIDIA (or seemingly anyone else), SoftBank is announcing that it will instead be taking Arm public.

First announced back in September of 2020, SoftBank and NVIDIA unveiled what was at the time a $40 billion deal to have NVIDIA acquire the widely popular IP firm. And though the two companies expected some regulatory headwind given the size of the deal and the importance of Arm’s IP to the broader technology ecosystem – Arm’s IP is in many chips in one form or another – SoftBank and NVIDIA still expected to eventually win regulatory approval.

However, after 17 months, it has become increasingly clear that government regulators were not apt to approve the deal. Even with concessions being made by NVIDIA, European Union regulators ended up opening an investigation into the acquisition, Chinese regulators held off on approving the deal, and US regulators moved to outright block it. Concerns raised by regulators centered around NVIDIA gaining an unfair advantage over other companies who use Arm’s IP, both by controlling the direction of its development and by their position affording NVIDIA unique access to insights about what products Arm customers were developing – some of which would include products designed to compete with NVIDIA’s own wares.
Ultimately, regulators have shown a strong interest in retaining a competitive landscape for chips, with the belief that such a landscape wouldn’t be possible if Arm were owned by a chip designer such as NVIDIA.

As a result of these regulatory hurdles, NVIDIA and SoftBank have formally called off the acquisition, and the situation between the two companies is effectively returning to the status quo. According to NVIDIA, the company will be retaining its 20-year Arm license, which will allow it to continue developing and selling chips based around Arm IP and the Arm CPU architecture. Meanwhile, SoftBank has received a $1.25 billion breakup fee from NVIDIA as a contractual consequence of the acquisition not going through.

In lieu of selling Arm to NVIDIA, SoftBank is now going to be preparing to take Arm public. According to the investment group, it intends to IPO the company by the end of its next fiscal year, which ends on March 31, 2023 – essentially giving SoftBank a bit over a year to get the IPO organized. Meanwhile, according to Reuters, SoftBank’s CEO Masayoshi Son has indicated that the IPO will take place in the United States, most likely on the Nasdaq.

Once that IPO is completed, it will mark the second time that Arm has been a public company. Arm was a publicly-held company prior to the SoftBank acquisition in 2016, when SoftBank purchased the company for roughly $32 billion. And while it’s still too early to tell what Arm will be valued at the second time around, it goes without saying that SoftBank would like to turn a profit on the deal, which is why NVIDIA’s $40 billion offer was so enticing.
Still, even with the popularity and ubiquity of Arm’s IP across the technology ecosystem, it’s not clear at this time whether SoftBank will be able to get something close to what it spent on Arm – in which case the investment firm is likely to end up taking a loss on the Arm acquisition.

Finally, the cancellation of the acquisition is also bringing some important changes to Arm itself. Simon Segars, Arm’s long-time CEO and a major proponent of the acquisition, has stepped down from his position effective immediately. In his place, the Arm board of directors has already met and appointed Arm insider Rene Haas to the CEO position. Haas has been with Arm since 2013, and he has been president of the Arm IP Products Group since 2017.

Arm’s news release doesn’t offer any official insight into why Arm is changing CEOs at such a pivotal time. But with the collapse of the acquisition, Arm and SoftBank may be looking for a different kind of leader to take the company public over the next year.

Sources: NVIDIA, Arm
The Noctua NH-P1 Passive CPU Cooler Review: Silent Giant
In today's review, we are taking a look at a truly innovative cooler by Noctua, the NH-P1. The NH-P1 is a CPU cooler of colossal proportions, designed from the ground up with passive (fanless) operation in mind. Can a modern CPU operate seamlessly without a cooling fan? Noctua is here to prove that it can.
Western Digital Introduces WD_BLACK SN770: A DRAM-less PCIe 4.0 M.2 NVMe SSD
The initial wave of PCIe 4.0 NVMe SSDs put the emphasis on raw benchmark numbers, with power consumption remaining an afterthought. The targeting of high-end desktop platforms ensured that it was not much of a concern. However, with the rise of notebook and mini-PC platforms supporting PCIe 4.0, power consumption and thermal performance have become important aspects. With the gaming segment tending to be the most obvious beneficiary of PCIe 4.0 in the consumer market, speeds could also not be sacrificed much in this pursuit.

DRAM-less SSDs tend to be more power-efficient and also cost less, while delivering slightly worse performance numbers and consistency in general. There are multiple DRAM-less SSD controllers in the market, such as the Phison E19T (used in the WD_BLACK SN750 SE and, likely, the Micron 2450 series as well), the Silicon Motion SM2267XT (used in the ADATA XPG GAMMIX S50 Lite), and the Innogrit IG5220 (used in the ADATA XPG ATOM 50). While performance tends to vary a bit with the NAND being used, drives based on the Phison E19T and the Silicon Motion SM2267XT tend to top out around 3.9 GBps, while the Innogrit IG5220 reaches around 5 GBps.

Western Digital is throwing its hat into this ring today with the launch of the WD_BLACK SN770, powered by its own in-house DRAM-less SSD controller – the SanDisk 20-82-10081. It wrests the performance crown in this segment with read speeds of up to 5150 MBps. The company is also claiming a 20% improvement in power efficiency over the WD_BLACK SN750 SE.
The WD_BLACK SN770 will be available in four capacities ranging from 250GB to 2TB, with the complete specifications summarized in the table below.

Western Digital WD_BLACK SN770 SSD Specifications
Capacity:                250 GB       | 500 GB       | 1 TB         | 2 TB
Model:                   WDS250G3X0E  | WDS500G3X0E  | WDS100T3X0E  | WDS200T3X0E
Controller:              SanDisk 20-82-10081 (all capacities)
NAND Flash:              BiCS 5 112L 3D TLC NAND (?)
Form-Factor, Interface:  Single-Sided M.2-2280, PCIe 4.0 x4, NVMe 1.4
DRAM:                    N/A
Sequential Read:         4000 MB/s    | 5000 MB/s    | 5150 MB/s (1 TB and 2 TB)
Sequential Write:        2000 MB/s    | 4000 MB/s    | 4900 MB/s    | 4850 MB/s
Random Read IOPS:        240K         | 460K         | 740K         | 650K
Random Write IOPS:       470K         | 800K (500 GB and up)
Avg. Power Consumption:  ? W
Max. Power Consumption:  ? W (R)
AMD Reports Q4 2021 and FY 2021 Earnings: Turning Silicon Into Gold
As the full-year 2021 earnings season rolls along, the next major chip maker out of the gate is AMD, which has been enjoying a very positive trajectory in revenue and profits over the past few years. The company has continued to build upon the success of its Zen architecture-based CPUs and APUs in both the client and server spaces, as well as a full year’s revenue from the APUs powering the hard-to-find PlayStation 5 and Xbox Series X|S. As a result, these products have propelled AMD to another record quarter and another record year, as the company continues to hit revenue records while recording some sizable profits in the process.

For the fourth quarter of 2021, AMD reported $4.8B in revenue, a 49% jump over the same quarter a year ago. As a result, Q4’2021 was (yet again) AMD’s best quarter ever, built on the back of strong sales across the entire company. Meanwhile, due to last year’s unusual, one-off gain related to an income tax valuation allowance, AMD’s GAAP net income did dip on a year-over-year basis, to $974M. Excluding that allowance, AMD’s quarterly non-GAAP net income was up 77% year-over-year, an even bigger jump than what we saw in Q4’20.

AMD’s continued growth and overall success has also boosted the company’s gross margin to 50%, marking the first time since at least the turn of the century that AMD has crossed the 50% mark. Besides underscoring the overall profitability of AMD’s operations, gross margins are also a good indicator of the health of a company; and for a fabless semiconductor firm, 50% is a very good number to beat indeed.
AMD is now within 5 percentage points of Intel’s gross margins – a feat that at one time seemed impossible, and one that highlights AMD’s ascent to a top-tier chip firm.

AMD Q4 2021 Financial Results (GAAP)
                     Q4'2021   Q4'2020   Q3'2021   Y/Y      Q/Q
Revenue              $4.8B     $3.2B     $4.3B     +49%     +12%
Gross Margin         50%       45%       48%       +5.6pp   +1.9pp
Operating Income     $1.2B     $570M     $948M     +112%    +27%
Net Income           $974M     $1781M*   $923M     -45%     +6%
Earnings Per Share   $0.80     $1.45     $0.75     -45%     +7%

As for AMD’s full-year earnings, the company has been having great quarters all year now, so unsurprisingly this is reflected in their full-year results. Overall, for 2021 AMD booked $16.4B in revenue, which was an increase of 68% over 2020 and, of course, sets a new record for the company. AMD’s gross margin for the year was 48%, up 3.7 percentage points from FY2020, reflecting how AMD’s gross margins have been on the rise throughout the entire year.

All of this has played out nicely for AMD’s profitability as well. For the year, AMD booked $3.2 billion in net income, and unlike 2020, there are no one-off tax valuations inflating those numbers. Amusingly, even with that $1.3B valuation for 2020, AMD still beat their 2020 net income by a wide margin this year, bringing home $672M more. Or, to look at things on a non-GAAP basis, net income was up 118% in a year, more than doubling 2020’s figures. Suffice it to say, the chip crunch has been very kind to AMD’s bottom line in the past year.

AMD FY 2021 Financial Results (GAAP)
                     FY 2021   FY 2020   FY 2019   Y/Y
Revenue              $16.4B    $9.8B     $6.7B     +68%
Gross Margin         48%       45%       43%       +3.7pp
Operating Income     $3.6B     $1369M    $631M     +166%
Net Income           $3.2B     $2490M*   $341M     +27%
Earnings Per Share   $2.57     $2.06     $0.30     +25%

Moving on to individual reporting segments, 2021 was a year where all of AMD’s business units were seemingly firing on all cylinders. Client CPUs, GPUs, server CPUs, game consoles; 2021 will go down as the year where nobody could get enough of AMD’s silicon.

For Q4’21, AMD’s Computing and Graphics segment booked $2.6B in revenue, a 32% improvement over the year-ago quarter.
According to the company, both Ryzen and Radeon sales have done very well here, with both product lines seeing further sales growth. On the CPU/APU front, average sale prices were up on both a yearly and quarterly basis, reflecting the fact that higher-priced products are making up a larger share of AMD’s processor sales. And while AMD doesn’t offer a specific percentage breakdown, the company is reporting that notebook sales were once again the leading factor in AMD’s Ryzen revenue growth, coming on the back of strong demand for higher-margin premium notebooks. And, based on overall growth in the number of processors sold, AMD believes that they’ve increased their market share (by revenue) for what would be the seventh straight quarter.

Meanwhile, on the GPU front, AMD is reporting that graphics revenue has doubled on a year-over-year basis. According to the company, GPU ASPs are up on a year-over-year basis as well, though interestingly, they’re actually down on a quarterly basis due to what the company is attributing to the product mix – which in turn is presumably the ramp-up and launch of their first Navi 24-based products, such as the RX 6500 XT. AMD’s prepared remarks don’t include any mention of cryptocurrency, but it goes without saying that for the last year AMD has encountered little trouble in selling virtually every GPU it can get fabbed.

Finally, AMD also folds its data center/enterprise GPU sales under the C&G segment. There, AMD is reporting that revenue has more than doubled on a YoY basis, thanks to last year’s launch of the Instinct MI200 accelerator family.
Unfortunately, AMD doesn’t offer any unit or revenue breakouts here to give a better idea of what data center shipments are like, or how much of those sales were MI250X accelerators for the Frontier supercomputer.

AMD Q4 2021 Reporting Segments
                                        Q4'2021   Q4'2020   Q3'2021
Computing and Graphics
  Revenue                               $2584M    $1960M    $2398M
  Operating Income                      $566M     $420M     $513M
Enterprise, Embedded and Semi-Custom
  Revenue                               $2242M    $1284M    $1915M
  Operating Income                      $762M     $243M     $542M

Meanwhile, AMD’s Enterprise, Embedded and Semi-Custom segment booked $2.2B in revenue for the quarter. The 75% year-over-year increase in revenue was driven by both improved EPYC sales as well as higher semi-custom sales.

As is usually the case, AMD doesn’t break apart EPYC and semi-custom sales figures, but the company is noting that data center, server, and cloud revenue – essentially everything EPYC except HPC – all more than doubled versus the year-ago quarter. All of which propelled AMD to doubling EPYC sales versus Q4’20, setting new records in the process. AMD has also noted that they’ve shipped their first V-Cache-enabled EPYC CPUs (Milan-X) to Microsoft, which is using them in an upcoming Azure instance type.

As for semi-custom sales, AMD is riding a wave of unprecedented demand for game consoles that has Sony and Microsoft taking every console APU they can get. Furthermore, despite this going on for the last year and a half, AMD still expects semi-custom revenue to grow further in 2022 on the back of continued orders from console makers.

With all of that said, however, as AMD’s revenues have increased, so have their costs. For both the Client and Enterprise segments, the company is reporting that operating income growth has been partially offset by higher operating expenses. This encompasses both higher wafer prices from TSMC, as well as higher costs for things such as shipping.
AMD can more than absorb the hit, of course, but it’s a reflection of how AMD has needed to spend more in order to secure wafers and supplies on an ongoing basis.

Looking forward, AMD is (understandably) once again expecting a very promising first quarter of 2022 and beyond. AMD has enjoyed significant revenue and market share growth over the past few years, and the company’s official forecasts are for that to continue into 2022. And, especially in the midst of the current and ongoing chip crunch, so long as demand holds, silicon may as well be gold for as valuable as it is to some of AMD’s customers.

To that end, AMD is officially projecting revenue growth of 31% for 2022, which would bring AMD to around $21.5B in sales. Given how AMD’s 2021 guidance played out, this is likely once again conservative, though it is noteworthy in that it’s a bit less growth than AMD was projecting at this point a year ago for 2021. More interesting, perhaps, is that AMD expects its non-GAAP gross margin for the year to land at around 51%, which, even if it’s also a conservative estimate, would still be a big accomplishment for AMD.

Driving this growth will be a new slate of products for many of AMD’s important product lines. Along with ramping deliveries of Milan-X EPYC processors, AMD is also slated to deliver its Genoa EPYC processors, based on AMD’s Zen 4 CPU architecture, later this year. Zen 4 will also be making its appearance in Ryzen processors in H2’22, and in the meantime AMD has just launched its Zen 3+ based Ryzen 6000 APUs for laptops. Finally, GPUs based on AMD’s forthcoming RDNA 3 architecture remain on the roadmap to launch later this year as well.
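The guidance figures above are internally consistent: applying the projected 31% growth to FY2021's $16.4B in revenue reproduces the ~$21.5B number. A quick illustrative sketch (using only the figures quoted above; AMD's actual guidance is approximate and subject to revision):

```python
# Sanity check on AMD's 2022 revenue guidance, using figures from the article.
# Illustrative arithmetic only, not AMD's own forecasting model.
fy2021_revenue_b = 16.4  # FY2021 revenue in $B (from the results above)
guided_growth = 0.31     # AMD's official 2022 revenue growth projection

fy2022_projection_b = fy2021_revenue_b * (1 + guided_growth)
print(f"Projected FY2022 revenue: ~${fy2022_projection_b:.1f}B")
```

16.4 × 1.31 ≈ 21.5, matching the ~$21.5B sales figure AMD is projecting.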
Interview with Alex Katouzian, Qualcomm SVP: Talking Snapdragon, Microsoft, Nuvia, and Discrete Graphics
Two forces are driving the current technology market: insatiable demand for hardware, and the supply chain shortages making it difficult to produce enough in quantity to fulfil every order. Even with these two forces in action, companies have to push on and develop next-generation technologies, as no competitor wants to rest on its laurels. That includes Qualcomm, and as part of the Tech Summit in late 2021, I sat down with Alex Katouzian, Qualcomm’s GM of Mobile, Compute, and Infrastructure, to talk about the issues faced in 2021, the outlook for 2022, where the key relationships lie, and where innovation is headed when it comes to smartphones and PCs.
Intel Reports Q4 2021 and FY 2021 Earnings: Ending 2021 On A High Note
Kicking off yet another earnings season, we once again start with Intel. The reigning 800 lb gorilla of the chipmaking world is reporting its Q4 2021 and full-year financial results, closing the book on an eventful 2021 for the company. The first full year of the pandemic has seen Intel once again set revenue records, making this the sixth record year in a row, but it’s also clear that headwinds are approaching for the company, both with respect to shifts in product demand and the sizable investments needed to build the next generation of leading-edge fabs.
Launching This Week: NVIDIA's GeForce RTX 3050 - Ampere For Low-End Gaming
First announced as part of NVIDIA’s CES 2022 presentation, the company’s new GeForce RTX 3050 desktop video card is finally rolling out to retailers this month. The low-end video card is being positioned to round out the bottom of NVIDIA’s product stack, offering a modern, Ampere-based video card for a more entry-level market. All of this comes as the PC video card market is still in chaos due to a combination of the chip crunch and crypto miner demand, so any additional cards are most welcome – and likely to be snapped up rather quickly, even at an MSRP of $249 (and higher).