Feed anandtech

Link https://anandtech.com/
Feed https://anandtech.com/rss/
Updated 2024-04-29 00:30
Samsung Announces First LPDDR5X at 8.5Gbps
After the publication of the LPDDR5X memory standard earlier this summer, Samsung has become the first vendor to announce modules based on the new technology. The LPDDR5X standard will start out at speeds of 8533Mbps, a 33% increase over current-generation LPDDR5 products running at 6400Mbps.
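As a quick sanity check of that headline figure, here is a minimal sketch of the arithmetic; the per-pin data rates are the only inputs, both taken from the announcement.

    # Back-of-the-envelope check of the quoted LPDDR5X per-pin speed-up.
    lpddr5_rate = 6400   # Mbps per pin, current-generation LPDDR5
    lpddr5x_rate = 8533  # Mbps per pin, announced LPDDR5X

    increase = (lpddr5x_rate / lpddr5_rate - 1) * 100
    print(f"Per-pin data-rate increase: {increase:.1f}%")  # ~33.3%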
AMD Confirms Milan-X with 768 MB L3 Cache: Coming in Q1 2022
As an industry, we are slowly moving into an era where how we package small pieces of silicon together is just as important as the silicon itself. New ways to connect all that silicon include placing dies side by side, stacking them on top of each other, and all sorts of fancy interconnects that preserve the benefits of chiplet designs while building on them. Today, AMD is showcasing its next packaging uplift: stacked L3 cache on its Zen 3 chiplets, bumping each chiplet from 32 MiB to 96 MiB. This announcement, however, is targeting its large EPYC enterprise processors.
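To see how the headline 768 MB figure falls out of that per-chiplet bump, here is a minimal sketch; the eight-chiplet count is an assumption based on existing 64-core EPYC packages rather than something stated above.

    # How the 768 MB headline figure follows from the per-chiplet cache bump.
    l3_per_chiplet_mib = 96   # MiB of stacked L3 per Zen 3 chiplet (up from 32 MiB)
    chiplets = 8              # assumed CCD count for a top-end EPYC package

    total_l3_mib = l3_per_chiplet_mib * chiplets
    print(f"Total L3 per socket: {total_l3_mib} MiB")  # 768 MiB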
AMD Announces Instinct MI200 Accelerator Family: Taking Servers to Exascale and Beyond
AMD today is formally unveiling their AMD Instinct MI200 family of server accelerators. Based on AMD’s new CDNA 2 architecture, the MI200 family is the capstone of AMD’s server GPU plans for the last half-decade. By combining their GPU architectural experience with the latest manufacturing technology from TSMC and home-grown technologies such as their chip-to-chip Infinity Fabric, AMD has put together their most potent server GPUs yet. And with MI200 parts already shipping to the US Department of Energy as part of the Frontier exascale supercomputer contract, AMD is hoping that success will open up new avenues into the server market for the company.
AMD Gives Details on EPYC Zen4: Genoa and Bergamo, up to 96 and 128 Cores
Since AMD’s relaunch into high-performance x86 processor design, one of the fundamental targets for the company has been to be a competitive force in the data center. By having a competitive product that customers can trust, the goal has always been to target what the customer wants, and subsequently grow market share and revenue. Since the launch of 3rd Generation EPYC, AMD has been growing its enterprise revenue at a good pace; however, questions invariably turn to what the roadmap might hold. In the past, AMD has disclosed that its 4th Generation EPYC, known as Genoa, would be coming in 2022 with Zen 4 cores built on TSMC 5nm. Today, AMD is expanding the Zen 4 family with another segment of cloud-optimized processors called Bergamo.
VIA To Offload Parts of x86 Subsidiary Centaur to Intel For $125 Million
As part of their third quarter earnings release, VIA Technologies has announced this morning that the company is entering into an unusual agreement with Intel to offload parts of VIA’s x86 R&D subsidiary, Centaur Technology. Under the terms of the murky deal, Intel will be paying Centaur $125 million to pick up part of the engineering staff – or, as the announcement from VIA more peculiarly puts it, to “recruit some of Centaur's employees to join Intel.” Despite the hefty 9-digit price tag, the deal makes no mention of Centaur’s business, designs, or patents, nor has an expected closing date been announced.

A subsidiary of VIA since 1999, the Austin-based Centaur is responsible for developing x86 core designs for other parts of VIA, as well as developing its own ancillary IP such as deep learning accelerators. Via Centaur, VIA Technologies is the largely aloof third member of the x86 triumvirate, joining Intel and AMD as the three x86 license holders. Centaur’s designs have never seen widescale adoption to the extent that AMD’s or Intel’s have, but the company has remained a presence in the x86 market since the 90s, spending the vast majority of that time under VIA.

Centaur’s most recent development was the CNS x86 core, which the company announced in late 2019. Aimed at server-class workloads, the processor design is said to offer Haswell-like general CPU performance, combined with AVX-512 support (executed over 2 rounds via a 256-bit SIMD). CNS, in turn, would be combined into a product Centaur called CHA, which added fabric and I/O, as well as an integrated proprietary deep learning accelerator. The first silicon based on CHA was originally expected in the second half of 2020, but at this point we haven’t heard anything (though that’s not unusual for VIA).

As for the deal at hand, VIA’s announcement leaves more questions than answers. The official announcement from VIA comes with very few details other than the price tag and the information that Intel is essentially paying Centaur for the right to try to recruit staff members to join Intel. Despite being the buyer in this deal – and buyers typically being the ones to announce acquisitions – Intel has not said anything about the deal from their end.

We’ve reached out to both Intel and Centaur for more information, but we’re not expecting to hear from them until later this morning given the significant time zone gap between Taiwan and the US. Update: We've since heard from both Intel and Centaur. Intel is confirming the deal, but without providing additional details. Meanwhile Centaur has no comment.

In the meantime, local media reports are equally puzzling; language barriers aside, apparently even the local press isn’t being given much in the way of concrete details. Nonetheless, local media such as United Daily News is reporting that the Intel deal is indeed not a wholesale sale of Centaur’s team, and that VIA is retaining the Centaur business. So what Intel is getting out of this that’s worth $125 million is, for the moment, a mystery.

Adding an extra wrinkle to matters, the Centaur website has been partially scrubbed. Active as recently as the end of last week, the site’s contents have been replaced with an “under construction” message.
In which case it would seem that, even if VIA is retaining Centaur and its IP, the company no longer has a need for a public face for the group.

Meanwhile, given the overall lack of details, news of the acquisition raises a number of questions about the future of VIA’s x86 efforts, as well as just what Intel is getting out of this. If VIA isn’t selling the Centaur business, then does that mean they’re retaining their x86 license? And if Intel isn’t getting any IP, then what do they need with Centaur’s engineering staff? Does Intel want to make their own take on the CNS x86 core?

Overall, it’s not too surprising to see Intel make a play for the far-flung third member of the x86 ecosystem, especially as the combination of AMD and Arm-based processors is proving to be stiff competition for Intel, dampening the need for a third x86 vendor. Still, this isn’t what we envisioned for Intel buying out Centaur.

As always, we’ll have more details on this bizarre story as they become available.
The Intel 12th Gen Core i9-12900K Review: Hybrid Performance Brings Hybrid Complexity
Today marks the official retail availability of Intel’s 12th Generation Core processors, starting with the overclockable versions this side of the New Year, and the rest in 2022. These new processors are the first widescale launch of a hybrid processor design for mainstream Windows-based desktops using the underlying x86 architecture: Intel has created two types of core, a performance core and an efficiency core, to work together and provide the best of performance and low power in a singular package. This hybrid design and new platform, however, have a number of rocks in the river to navigate: adapting Windows 10, Windows 11, and all sorts of software to work properly, but also the introduction of DDR5 at a time when DDR5 is still not widely available. There are so many potential pitfalls for this product, and we’re testing the flagship Core i9-12900K in a few key areas to see how it tackles them.
Google's Tensor inside of Pixel 6, Pixel 6 Pro: A Look into Performance & Efficiency
Today, we’re taking an in-depth look at Google's Tensor SoC, the chip powering their new Pixel 6 family of phones. The Tensor is Google's first custom-designed SoC, and incorporates a mix of custom Google IP blocks – such as the new edgeTPU – as well as some off-the-shelf blocks from Samsung. Google designed the Tensor SoC to give them better performance, particularly in machine learning workloads, which Google is increasingly favoring.

So sit back as we dive into Google's first SoC and document what exactly it’s composed of, showcase the differences and similarities between it and other SoCs in the market, and come to a better understanding of what kind of IP Google has integrated into the chip to make it unique and warrant calling it a Google SoC.
Updated: Intel Cans Xe-HP Server GPU Products, Shifts Focus To Xe-HPC and Xe-HPG
Update 11/01: In an additional tweet posted over the weekend by Raja Koduri, the Intel GPU frontman confirmed that Intel will be bringing products based on their Xe-HPG architecture to the server market.
Bringing Geek Back: Q&A with Intel CEO Pat Gelsinger
One of the overriding key themes of Pat Gelsinger’s ten-month tenure at Intel has been the eponymous will to ‘bring geek back’ to the company, implying a return to Intel’s competitive past which relied on the expertise of its engineers to develop market-leading products. During this time, Pat has showcased Intel’s IDM 2.0 strategy, leveraging internal production, external production, and an update to Intel’s foundry offering, making it a cornerstone of Intel’s next decade of growth. The first major launch of this decade happened this week, at Intel’s Innovation event, with the announcement of 12th Gen Core, as well as updates to Intel’s software strategy up and down the company.

After the event, Intel invited several media and an analyst or two onto a group session with CEO Pat, along with CTO Greg Lavender, a recent CTO hire coming from Pat’s old stomping ground at VMWare. In light of the announcements made at Intel Innovation, as well as the financial quarterly results released just the week prior, and the state of the semiconductor supply globally, everyone had Intel at the forefront of their minds, ready to ask for details on Intel’s plan.
OWC Envoy Pro Elektron Rugged IP67 Portable SSD Review
The market for portable SSDs has expanded significantly over the past few years. With USB 3.2 Gen 2 (10 Gbps) becoming the de-facto standard for USB ports even in entry-level systems, external storage devices using the interface have flooded the market.

OWC has established itself as a vendor of computing peripherals and upgrade components (primarily for the Apple market) over the last 30 years. Their portable SSD lineup, under the Envoy brand, includes both Thunderbolt and USB-C offerings. The Envoy Pro EX Thunderbolt 3 and the Envoy Pro EX USB-C coupled leading performance numbers with a sleek and stylish industrial design. Late last year, the company introduced the OWC Envoy Pro Elektron - a portable flash drive similar to the Envoy Pro EX USB-C in performance, albeit in a much smaller form-factor.

Read on for our hands-on review of the Envoy Pro Elektron to check out how it fares in our updated test suite for direct-attached storage devices.
European Union Regulators Open Probe Into NVIDIA-Arm Acquisition
Following an extended period of regulatory uncertainty regarding NVIDIA’s planned acquisition of Arm, the European Union's executive branch, the European Commission, has announced that it has opened a formal probe into the deal. Citing concerns about competition and the importance of Arm’s IP, the Commission has kicked off a 90 day review process for the merger to determine if those concerns are warranted, and thus whether the merger should be modified or blocked entirely. Given the 90 day window, the Commission has until March 15 of 2022 to publish a decision.

At a high level, the EC’s concerns hinge around the fact that Arm is an IP supplier for both NVIDIA and its competitors, which has led the EC to be concerned about whether NVIDIA would use its ownership of Arm to limit or otherwise degrade competitors’ access to Arm’s IP. This is seen as an especially concerning scenario given the breadth of device categories that Arm chips are in – everything from toasters to datacenters. As well, the EC will also be examining whether the merger could lead to NVIDIA prioritizing the R&D of IP that NVIDIA makes heavy use of (e.g. datacenter CPUs) to the detriment of other types of IP that are used by other customers.

It is worth noting that this is going to be a slightly different kind of review than usual for the EC. Since NVIDIA and Arm aren’t competitors – something even the EC notes – this isn’t a typical competitive merger. Instead, the investigation is going to be all about the downstream effects of a major supplier also becoming a competitor.

Overall, the need for a review is not terribly surprising. Given the scope of the $40 billion deal, the number of Arm customers (pretty much everyone), and the number of countries involved (pretty much everyone again), there was always a good chance that the deal would be investigated by one or more nations. Still, the EC’s investigation means that, even if approved, the deal will almost certainly not close by March as previously planned.

"Semiconductors are everywhere in products and devices that we use every day as well as in infrastructure such as datacentres. Whilst Arm and NVIDIA do not directly compete, Arm's IP is an important input in products competing with those of NVIDIA, for example in datacentres, automotive and in Internet of Things. Our analysis shows that the acquisition of Arm by NVIDIA could lead to restricted or degraded access to Arm's IP, with distortive effects in many markets where semiconductors are used. Our investigation aims to ensure that companies active in Europe continue having effective access to the technology that is necessary to produce state-of-the-art semiconductor products at competitive prices."
Intel's Aurora Supercomputer Now Expected to Exceed 2 ExaFLOPS Performance
As part of Intel’s 2021 Innovation event, the company offered a brief update on the Aurora supercomputer, which Intel is building for Argonne National Laboratory. The first of the US’s two under-construction exascale supercomputers, Aurora and its critical processors are finally coming together, allowing Intel to narrow its performance projections. As it turns out, the 1-and-change exaFLOPS system is going to be more like a 2 exaFLOPS system – Aurora’s performance is coming in high enough that Intel now expects the system to exceed 2 exaFLOPS of double precision compute performance.

Planned to be the first of the US’s two public exascale systems, the Aurora supercomputer has been through a tumultuous development process. The contract was initially awarded to Intel and Cray back in 2015 for a pre-exascale system based on Intel’s Xeon Phi accelerators, a plan that went out the window when Intel discontinued Xeon Phi development. In its place, the Aurora contract was renegotiated to become an exascale system based on a combination of Intel’s Xeon CPUs and what became their Ponte Vecchio Xe-HPC GPUs. Since then, Intel has been working down to the wire on getting the necessary silicon built in order to make a delivery window that’s already shifted from 2020 to 2021 to 2022(ish), going as far as fabbing parts of Ponte Vecchio on rival TSMC’s 5nm process.

But there is finally light at the end of the tunnel, it would seem. As Intel pushes to complete the system, its performance is coming in ahead of expectations. According to the chip company, they now expect that the assembled supercomputer will be able to deliver over 2 exaFLOPS of double precision (FP64) performance. The system previously didn’t have a specific performance figure attached to it, beyond the fact that it would be over 1 exaFLOPS in FP64 throughput.

This higher performance figure for Aurora comes courtesy of Ponte Vecchio, which according to CEO Pat Gelsinger is overdelivering on performance. Gelsinger hasn’t gone into additional detail on how Ponte Vecchio is overdelivering, but given that IPC and overall efficiency tend to be relatively easy to nail down during simulations, the most likely candidate here is that Ponte Vecchio is clocking higher than Intel’s previous projections. Ponte Vecchio is one of the first HPC chips (and the first Intel GPU) built on TSMC’s N5 process, so there have been a lot of unknowns going into this project.

For Intel, this is no doubt a welcome bit of good luck for a project that has seen many hurdles. The repeated delays have already allowed rival AMD to get the honors of delivering the first exascale system with Frontier, which is currently being installed and is expected to offer 1.5 exaFLOPS in performance. So while Intel no longer gets to be first, once Aurora does come online next year, it will be the faster of the two systems.
Intel 12th Gen Core Alder Lake for Desktops: Top SKUs Only, Coming November 4th
Over the past few months, Intel has been drip-feeding information about its next-generation processor family. Alder Lake, commercially known as Intel’s 12th Generation Core architecture, is officially being announced today for a November 4 launch. Alder Lake combines Intel’s latest generation high-performance cores with new high-efficiency cores in a new hybrid design, along with updates to Windows 11 to improve performance with the new heterogeneous layout. Only the six high-performance K and KF processor variants are coming this side of the New Year, with the rest due in Q1. We have specifications, details, and insights ahead of the product reviews on November 4.
AMD Reports Q3 2021 Earnings: Records All Around
Continuing our earnings season coverage for Q3’21, today we have the yin to Intel’s yang, AMD. The number-two x86 chip and discrete GPU maker has been enjoying explosive growth ever since AMD kicked off its renaissance of sorts a couple of years ago, and that trend has continued unabated – AMD is now pulling in more revenue in a single quarter than they did in all of 2016. Consequently, AMD has been setting various records for several quarters now, and their latest quarter is no exception, with AMD setting new high water marks for revenue and profitability.

For the third quarter of 2021, AMD reported $4.3B in revenue, a massive 54% jump over the year-ago quarter, when the company made just $2.8B in a then-record quarter. That makes Q3’21 both the best Q3 and the best quarter ever for the company, continuing a trend that has seen the company’s revenue grow for the last 6 quarters straight – and this despite a pandemic and seasonal fluctuations.

As always, AMD’s growing revenues have paid off handsomely for the company’s profitability. For the quarter, the company booked $923M in net income – coming within striking distance of their first $1B-in-profit quarter. This is a 137% increase over the year-ago quarter, underscoring how AMD’s profitability has been growing even faster than their rapidly rising revenues. Helping AMD out has been a strong gross margin, which has held at 48% over the last two quarters.

AMD Q3 2021 Financial Results (GAAP)
                      Q3'2021    Q3'2020    Q2'2021    Y/Y
Revenue               $4.3B      $2.8B      $3.45B     +54%
Gross Margin          48%        44%        48%        +4pp
Operating Income      $948M      $449M      $831M      +111%
Net Income            $923M      $390M      $710M      +137%
Earnings Per Share    $0.75      $0.32      $0.58      +134%

Breaking down AMD’s results by segment, we start with Computing and Graphics, which encompasses their desktop and notebook CPU sales, as well as their GPU sales. That division booked $2.4B in revenue for the quarter, $731M (44%) more than in Q3 2020. Accordingly, the segment’s operating income is up quite a bit as well, going from $384M a year ago to $513M this year. Though, in a mild surprise, it is down on a quarterly basis, which AMD is ascribing to higher operating expenses.

As always, AMD doesn’t provide a detailed breakout of information from this segment, but they have provided some selective information on revenue and average selling prices (ASPs). Overall, client CPU sales have remained strong; client CPU ASPs are up on both a quarterly and yearly basis, indicating that AMD has been selling a larger share of high-end (high-margin) parts – or as AMD likes to call it, a “richer mix of Ryzen processor sales”. For their earnings release AMD isn’t offering much commentary on laptop versus desktop sales, but it’s noteworthy that the bulk of the company’s new consumer product releases in the quarter were desktop-focused, with the Radeon RX 6600 XT and Ryzen 5000G-series APUs.

Speaking of GPUs, AMD’s graphics and compute processor business is booming as well. As with CPUs, ASPs for AMD’s GPU business are up on both a yearly and quarterly basis, with graphics revenue more than doubling over the year-ago quarter. According to the company this is being driven by both high-end Radeon sales as well as AMD Instinct sales, with data center graphics revenue more than doubling on both a yearly and quarterly basis.
AMD began shipping their first CDNA2-based accelerators in Q2, so for Q3 AMD has been enjoying that ramp-up as they ship out the high-margin chips for the Frontier supercomputer.

AMD Q3 2021 Reporting Segments
                                        Q3'2021    Q3'2020    Q2'2021
Computing and Graphics
  Revenue                               $2398M     $1667M     $2250M
  Operating Income                      $513M      $384M      $526M
Enterprise, Embedded and Semi-Custom
  Revenue                               $1915M     $1134M     $1600M
  Operating Income                      $542M      $141M      $398M

Moving on, AMD’s Enterprise, Embedded, and Semi-Custom segment has yet again experienced a quarter of rapid growth, thanks to the success of AMD’s EPYC processors and demand for the 9th generation consoles. This segment of the company booked $1.9B in revenue, $781M (69%) more than what they pulled in for Q3’20, and 20% ahead of an already impressive Q2’21. The gap between the CG and EESC groups has also further closed – the latter is now only behind AMD’s leading group by $483M in revenue.

And while AMD intentionally doesn’t separate server sales from console sales in their reporting here, the company has confirmed that both are up. AMD’s Milan server CPUs, which launched earlier this year, have become the majority of AMD’s server revenue, pushing them to their 6th straight quarter of record server processor revenue. And semi-custom revenue – which is primarily the game consoles – is up not only on a yearly basis, but on a quarterly basis as well, with AMD confirming that they have been able to further expand their console APU production.

Looking forward, AMD’s expectations for the fourth quarter and for the rest of the year have been bumped up yet again. For Q4 the company expects to book a record $4.5B (+/- $100M) in revenue, which if it comes to pass will be 41% growth over Q4’20. AMD is also projecting a 49.5% gross margin for Q4, which, if they exceed it even slightly, would be enough to push them to their first 50% gross margin quarter in company history. Meanwhile AMD’s full year 2021 projection now stands at a 65% year-over-year increase in revenue versus their $9.8B FY2020, which is 5 percentage points higher than their forecast from the end of Q2.

As for AMD’s ongoing Xilinx acquisition, while the company doesn’t have any major updates on the subject, they are confirming that they’re making “good progress” towards securing the necessary regulatory approvals. To that end, they are reiterating that it remains on track to close by the end of this year.

Finally, taking a break from growing the company by 50% every year, AMD is scheduled to hold their AMD Accelerated Data Center Premiere event on Monday, November 8. While AMD isn’t giving up too much information in advance, the company is confirming that we’ll hear more about their CDNA2 accelerator architecture, which, along with powering the current Frontier supercomputer, will be going into their next generation Radeon Instinct products. As well, the company will also be delivering news on their EPYC server processors, which were just recently updated back in March with the launch of the 3rd generation Milan parts. As always, AnandTech will be virtually there, covering AMD’s announcements in detail, so be sure to drop by for that.
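For context, here is a minimal sketch of what that full-year guidance implies in absolute terms; the 65% growth figure and the $9.8B FY2020 base are from the release, and the resulting total is just arithmetic, not an AMD-stated number.

    # Rough implied FY2021 revenue from AMD's updated full-year guidance.
    fy2020_revenue_b = 9.8   # $B, reported FY2020 revenue
    guided_growth = 0.65     # 65% year-over-year growth guidance

    implied_fy2021_b = fy2020_revenue_b * (1 + guided_growth)
    print(f"Implied FY2021 revenue: ~${implied_fy2021_b:.1f}B")  # ~$16.2B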
AnandTech Interviews Mike Clark, AMD’s Chief Architect of Zen
AMD is calling this time of the year its ‘5 Years of Zen’ celebration, noting that back in 2016 it was starting to give the press the first taste of its new microarchitecture which, in hindsight, ultimately saved the company. How exactly Zen came to fruition has been slyly hidden from view all these years, with some of the key people popping up from time to time: Jim Keller, Mike Clark, and Suzanne Plummer hitting the headlines more often than most. But when AMD started to disclose details about the design, it was Mike Clark front and center in front of those slides. At the time I remember asking him for all the details, and now, as part of the 5 Years of Zen messaging, AMD has offered Mike up for a formal interview on the topic.
Kingston KC3000 PCIe 4.0 NVMe Flagship SSD Hits Retail
Kingston had previewed their 2021 flagship PCIe 4.0 x4 M.2 NVMe SSD (codename "Ghost Tree") at CES earlier this year. Not much was divulged at the time other than the use of the Phison E18 controller. The product is hitting retail shelves today as the KC3000. The M.2 2280 SSD will be available in four capacities ranging from 512GB to 4TB. Kingston also provided us with detailed specifications.

Kingston KC3000 SSD Specifications
  Capacities: 512 GB / 1024 GB / 2048 GB / 4096 GB
  Controller: Phison E18
  NAND Flash: Micron 176L 3D TLC NAND
  Form-Factor, Interface: Single-Sided or Double-Sided M.2-2280, PCIe 4.0 x4, NVMe 1.4
  DRAM: 512 MB / 1 GB / 2 GB / 4 GB DDR4
  Sequential Read: 7000 MB/s
  Sequential Write: 3900 / 6000 / 7000 MB/s (capacity-dependent)
  Random Read IOPS: 450K / 900K / 1M (capacity-dependent)
  Random Write IOPS: 900K / 1M (capacity-dependent)
  Avg. Power Consumption: 0.34 W / 0.33 W / 0.36 W
  Max. Power Consumption: 2.7 W (R)
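The 7000 MB/s rated sequential read sits close to the practical ceiling of a PCIe 4.0 x4 link. The sketch below is a rough check of that ceiling from the PCIe 4.0 signaling parameters, not a Kingston-quoted figure; real-world throughput will be lower still due to protocol overhead.

    # Rough theoretical bandwidth of a PCIe 4.0 x4 link.
    transfer_rate_gt = 16.0          # GT/s per lane for PCIe 4.0
    encoding_efficiency = 128 / 130  # 128b/130b line encoding
    lanes = 4

    gb_per_s = transfer_rate_gt * encoding_efficiency / 8 * lanes
    print(f"PCIe 4.0 x4 raw bandwidth: ~{gb_per_s:.2f} GB/s")  # ~7.88 GB/s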
Apple's M1 Pro, M1 Max SoCs Investigated: New Performance and Efficiency Heights
Last week, Apple unveiled their new generation MacBook Pro laptop series, a new range of flagship devices that bring significant updates to the company’s professional and power-user oriented user-base. The new devices particularly differentiate themselves in that they’re now powered by two new additional entries in Apple’s own silicon line-up, the M1 Pro and the M1 Max. We covered the initial reveal in last week’s overview article of the two new chips, and today we’re getting the first glimpses of the performance we can expect to see from the new silicon.
The ASRock X570S PG Riptide Motherboard Review: A Wave of PCIe 4.0 Support on a Budget
At Computex 2021, AMD and its vendors officially unveiled a new series of AM4 motherboards for Ryzen 5000 processors. The new X570S chipset is, from a technical standpoint, not that different from the original version launched back in 2019. The main user-facing difference is that all of the X570S models feature a passively cooled chipset. Some vendors have opted to refresh existing models, while others are releasing completely new variants, such as the ASRock X570S PG Riptide we are reviewing today. Aimed at the budget end of AMD's flagship chipset, the X570S PG Riptide features a Killer-based 2.5 GbE controller, dual PCIe 4.0 x4 M.2 slots, and support for up to 128 GB of DDR4-5000.
Intel Reports Q3 2021 Earnings: Client Down, Data Center and IoT Up
Kicking off another earnings season, Intel is once again leading the pack of semiconductor companies in reporting their earnings for the most recent quarter. As the company gets ready to go into the holiday quarter, it is coming off what’s largely been a quiet quarter for the chip maker, as Intel didn’t launch any major products in Q3. Instead, Intel’s most recent quarter has been driven by ongoing sales of existing products, with most of Intel’s business segments seeing broad recoveries or other forms of growth over the last year.

For the third quarter of 2021, Intel reported $19.2B in revenue, a $900M improvement over the year-ago quarter. Intel’s profitability has also continued to grow – even faster than overall revenues – with Intel booking $6.8B in net income for the quarter, dwarfing Q3’2020’s “mere” $4.3B. Unsurprisingly, that net income growth has been fueled in part by higher gross margins; Intel’s overall gross margin for the quarter was 56%, up nearly 3 percentage points from last year.

Intel Q3 2021 Financial Results (GAAP)
                                        Q3'2021    Q2'2021    Q3'2020
Revenue                                 $19.2B     $19.7B     $18.3B
Operating Income                        $5.2B      $5.7B      $5.1B
Net Income                              $6.8B      $5.1B      $4.3B
Gross Margin                            56.0%      53.3%      53.1%
Client Computing Group Revenue          $9.7B      -4%        -2%
Data Center Group Revenue               $6.5B      flat       +10%
Internet of Things Group Revenue        $1.0B      +2%        +54%
Mobileye Revenue                        $326M      flat       +39%
Non-Volatile Memory Solutions Group     $1.1B      flat       -4%
Programmable Solutions Group            $478M      -2%        +16%

Breaking things down by Intel’s individual business groups, most of Intel’s groups have enjoyed significant growth over the year-ago quarter. The only groups not to report gains are Intel’s Client Computing Group (though this is their largest group) and their Non-Volatile Memory Solutions Group, which Intel is in the process of selling to SK Hynix.

Starting with the CCG then, Intel’s core group is unfortunately also the only one struggling to grow right now. With $9.7B in revenue, it’s down just 2% from Q3’2020, but that’s something that stands out when Intel’s other groups are doing so well. Further breaking down the numbers, platform revenue overall is actually up 2% on the year, but non-platform revenue – “adjacencies” as Intel terms them, such as their modem and wireless communications product lines – is down significantly. On the whole this isn’t too surprising, since Intel is in the process of winding down its modem business anyhow as part of that sale to Apple, but it’s an extra drag that Intel could do without.

The bigger thorn in Intel’s side at the moment, according to the company, is the ongoing chip crunch, which has limited laptop sales. With Intel’s OEM partners unable to source enough components to build as many laptops as they’d like, it has the knock-on effect of reducing their CPU orders, even though Intel itself doesn’t seem to be having production issues. The upshot, at least, is that desktop sales are up significantly versus the year-ago quarter, and that average selling prices (ASPs) for both desktop and notebook chips are up.

Meanwhile, Intel’s Data Center Group is enjoying a recovery in enterprise spending, pushing revenues higher. DCG’s revenue grew 10% year-over-year, with both sales volume and ASPs increasing by several percent on the back of their Ice Lake Xeon processors.
A bit more surprising here is that Intel believes they could be doing even better if not for the chip crunch; higher margin products like servers are typically not impacted as much by these sorts of shortages, since server makers have the means to pay for priority.

Unfortunately, unlike Q2, Intel isn’t providing quarter-over-quarter (i.e. versus the previous quarter) figures in their earnings presentation. So while overall DCG revenue is flat on a quarterly basis, it sounds like Intel hasn’t really recovered from the hit they took in Q2. Meanwhile, commentary on Intel's earnings call suggests that sales of the largest (XCC) Ice Lake Xeons have been softer than Intel first expected, which has kept ASP growth down in an otherwise DCG-centric quarter.

The third quarter was also kind to Intel’s IoT groups and their Programmable Solutions Group. All three groups are up by double-digit percentages on a YoY basis, particularly the Internet of Things Group (IoTG), which is up 54%. According to Intel, that IoTG growth is largely due to businesses recovering from the pandemic, with a similar story for the Mobileye group thanks to automotive production having ramped back up versus its 2020 lows.

Otherwise, Intel’s final group, the Non-Volatile Memory Solutions Group, was the other declining group for the quarter. At this point Intel has officially excised the group’s figures from their non-GAAP reporting, and while they’re still required to report those figures in GAAP reports, they aren’t further commenting on a business that will soon no longer be theirs.

Finally, tucked inside Intel’s presentation deck is an interesting note: Intel Foundry Services (IFS) has shipped its first revenue wafers. Intel is, of course, betting heavily on IFS becoming a cornerstone of its overall chip-making business in the future as part of its IDM 2.0 strategy, so shipping customers’ chips for revenue is an important first step in that process. Intel has laid out a very aggressive process roadmap leading up to 20A in 2024, and IFS’s success will hinge on whether they can hit those manufacturing technology targets.

For Intel, Q3’2021 was overall a decent quarter for the company – though what’s decent is relative. With the DCG, IoTG, and Mobileye groups all setting revenue records for the quarter (and for IoTG, overall records), Intel continues to grow. On the flip side, however, Intel missed their own revenue projections for the quarter by around $100M, so in that respect they’ve come in below where they intended to be. And judging from the 7% drop in the stock price during after-hours trading, investors are taking note.

Looking forward, Intel is going into the all-important Q4 holiday sales period, typically their biggest quarter of the year. At this point the company is projecting that it will book $18.3B in non-GAAP revenue (excluding NSG), which would be a decline of 5% versus Q4’2020. Similarly, the company is expecting gross margins to come back down a bit, forecasting a 53.5% margin for the quarter. On the product front, Q4 will see the launch of the company’s Alder Lake family of processors, though initial CPU launches and their relatively low volumes tend not to move the needle too much.

On that note, Intel’s Innovation event is scheduled to take place next week, on the 27th and 28th. The two day event is a successor-of-sorts to Intel’s IDF program, and we should find out more about the Alder Lake architecture and Intel’s specific product plans at that time.
Intel Reaffirms: Our Discrete GPUs Will Be On Shelves in Q1 2022
Today is when Intel does its third-quarter 2021 financial disclosures, and there’s one little tidbit in the earnings presentation about its upcoming new discrete GPU offerings. The earnings are usually a chance to wave the flag of innovation about what’s to come, and this time around Intel is confirming that its first-generation discrete graphics with the Xe-HPG architecture will be on shelves in Q1 2022.

Intel has slowly been disclosing the features of its discrete gaming graphics offerings. Earlier this year, the company announced the branding for its next-gen graphics, called Arc, and with that the first four generations of products: Alchemist, Battlemage, Celestial, and Druid. It’s easy to see that we’re going ABCD here. Technically, at that disclosure in August 2021, Intel did state that Alchemist would be coming in Q1; the reaffirmation of the date today in the financial disclosures indicates that they’re staying as close to this date as possible.

Intel has previously confirmed that Alchemist will be fully DirectX 12 Ultimate compliant – meaning that alongside ray tracing, it will offer variable-rate shading, mesh shaders, and sampler feedback. This will make it comparable in core graphics features to current-generation AMD and NVIDIA hardware. Although it has taken a few years now to come to fruition, Intel has made it clear for a while now that the company intends to become a viable third player in the discrete graphics space. Intel’s odyssey, as previous marketing efforts have dubbed it, has been driven primarily by developing the Xe family of GPU microarchitectures, as well as the GPUs based on those architectures. Xe-LP was the first out the door last year, as part of the Tiger Lake family of CPUs and the DG1 discrete GPU. Other Xe family architectures include Xe-HP for servers and Xe-HPC for supercomputers and other high-performance compute environments.

The fundamental building block of Alchemist is the Xe Core, and for manufacturing, Intel is turning to TSMC’s N6 process. Given Intel’s Q1’22 release timeframe, Intel’s Alchemist GPUs will almost certainly be the most advanced consumer GPUs on the market with respect to manufacturing technology. Alchemist will be going up against AMD’s Navi 2x chips built on N7, and NVIDIA’s Ampere GA10x chips built on Samsung 8LPP. That said, as AMD can attest to, there’s more to being competitive in the consumer GPU market than just having a better process node. In conjunction with the use of TSMC’s N6 process, Intel is reporting that they’ve improved both their power efficiency (performance-per-watt) and their clockspeeds at a given voltage by 50% compared to Xe-LP. Note that this is the sum total of all of their improvements – process, logic, circuit, and architecture – so it’s not clear how much of this comes from the jump to TSMC N6 from Intel 10SF, and how much comes from other optimizations.

Exactly what performance level and pricing Intel will be pitching its discrete graphics at is currently unknown. The Q1 launch window puts CES (held the first week of January) as a good spot to say something more.
SK Hynix Announces Its First HBM3 Memory: 24GB Stacks, Clocked at up to 6.4Gbps
Though the formal specification has yet to be ratified by JEDEC, the memory industry as a whole is already gearing up for the upcoming launch of the next generation of High Bandwidth Memory, HBM3. Following announcements earlier this summer from controller IP vendors like Synopsys and Rambus, this morning SK Hynix is announcing that it has finished development of its HBM3 memory technology – and, according to the company, becoming the first memory vendor to do so. With controller IP and now the memory itself nearing or at completion, the stage is being set for formal ratification of the standard, and eventually for HBM3-equipped devices to start rolling out later in 2022.

Overall, the relatively lightweight press release from SK Hynix is roughly equal parts technical details and boasting. While there are only 3 memory vendors producing HBM – Samsung, SK Hynix, and Micron – it’s still a technically competitive field due to the challenges involved in making deep-stacked and TSV-connected high-speed memory work, and thus there’s a fair bit of pride in being first. At the same time, HBM commands significant price premiums even with its high production costs, so memory vendors are also eager to be first to market to cash in on their technologies.

In any case, both IP and memory vendors have taken to announcing some of their HBM wares even before the relevant specifications have been announced. We saw both parties get an early start with HBM2E, and now once again with HBM3. This leaves some of the details of HBM3 shrouded in a bit of mystery – mainly that we don’t know what the final, official bandwidth rates are going to be – but announcements like SK Hynix’s help narrow things down. Still, these sorts of early announcements should be taken with a small grain of salt, as memory vendors are fond of quoting in-lab data rates that may be faster than what the spec itself defines (e.g. SK Hynix’s HBM2E).

Getting into the technical details, according to SK Hynix their HBM3 memory will be able to run as fast as 6.4Gbps/pin. This would be double the data rate of today’s HBM2E, which formally tops out at 3.2Gbps/pin, or 78% faster than the company's off-spec 3.6Gbps/pin HBM2E SKUs. SK Hynix’s announcement also indirectly confirms that the basic bus widths for HBM3 remain unchanged, meaning that a single stack of memory is 1024-bits wide. At Hynix’s claimed data rates, this means a single stack of HBM3 will be able to deliver 819GB/second worth of memory bandwidth.

SK Hynix HBM Memory Comparison
                                HBM3          HBM2E         HBM2
Max Capacity                    24 GB         16 GB         8 GB
Max Bandwidth Per Pin           6.4 Gb/s      3.6 Gb/s      2.0 Gb/s
Number of DRAM ICs per Stack    12            8             8
Effective Bus Width             1024-bit      1024-bit      1024-bit
Voltage                         ?             1.2 V         1.2 V
Bandwidth per Stack             819.2 GB/s    460.8 GB/s    256 GB/s

SK Hynix will be offering their memory in two capacities: 16GB and 24GB. These align with 8-Hi and 12-Hi stacks respectively, and mean that, at least for SK Hynix, their first generation of HBM3 memory is still the same density as their latest-generation HBM2E memory. This means that device vendors looking to increase their total memory capacities for their next-generation parts (e.g. AMD and NVIDIA) will need to use memory with 12 dies/layers, up from the 8 layer stacks they typically use today.

What will be interesting to see in the final version of the HBM3 specification is whether JEDEC sets any height limits for 12-Hi stacks of HBM3. The group punted on the matter with HBM2E, where 8-Hi stacks had a maximum height but 12-Hi stacks did not.
That in turn impeded the adoption of 12-Hi stacked HBM2E, since it wasn’t guaranteed to fit in the same space as 8-Hi stacks – or indeed any common size at all.

On that matter, the SK Hynix press release notably calls out the efforts the company put into minimizing the size of their 12-Hi (24GB) HBM3 stacks. According to the company, the dies used in a 12-Hi stack – and apparently just the 12-Hi stack – have been ground to a thickness of just 30 micrometers, minimizing their thickness and allowing SK Hynix to properly place them within the sizable stack. Minimizing stack height is beneficial regardless of standards, but if this means that HBM3 will require 12-Hi stacks to be shorter – and ideally, the same height as 8-Hi stacks for physical compatibility purposes – then all the better for customers, who would be able to more easily offer products with multiple memory capacities.

Past that, the press release also confirms that one of HBM’s core features, integrated ECC support, will be returning. The standard has offered ECC since the very beginning, allowing device manufacturers to get ECC memory “for free”, as opposed to having to lay down extra chips with (G)DDR or using soft-ECC methods.

Finally, it looks like SK Hynix will be going after the same general customer base for HBM3 as they already are for HBM2E. That is to say high-end server products, where the additional bandwidth of HBM3 is essential, as is the density. HBM has of course made a name for itself in server GPUs such as NVIDIA’s A100 and AMD’s MI100, but it’s also frequently tapped for high-end machine learning accelerators, and even networking gear.

We’ll have more on this story in the near future once JEDEC formally approves the HBM3 standard. In the meantime, it’s sounding like the first HBM3 products should begin landing in customers’ hands in the later part of next year.
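As a quick check of the per-stack bandwidth figure quoted above, here is a minimal sketch of the arithmetic; the 1024-bit stack width and 6.4 Gbps/pin data rate are the only inputs, both taken from the announcement.

    # Back-of-the-envelope HBM3 per-stack bandwidth from the quoted figures.
    bus_width_bits = 1024   # bits per HBM stack interface
    data_rate_gbps = 6.4    # Gb/s per pin, per SK Hynix's announcement

    bandwidth_gb_s = bus_width_bits * data_rate_gbps / 8
    print(f"Bandwidth per stack: {bandwidth_gb_s:.1f} GB/s")  # 819.2 GB/s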
The Huawei MateBook 16 Review, Powered by AMD Ryzen 7 5800H: Ecosystem Plus
Having very recently reviewed the MateBook X Pro 2021 (13.9-inch), I was offered a last-minute chance by Huawei's local PR in the UK to examine the newest addition to their laptop portfolio. The Huawei MateBook 16, on paper at least, comes across as a workhorse machine designed for the office and life on the go. A powerful CPU that can go into a high-performance mode when plugged in, and sip power when it needs to. No discrete graphics to get in the way, and a massive 84 Wh battery designed for an all-day workflow. It comes with a color-accurate, large 3:2 display, and with direct screen share with a Huawei smartphone/tablet/monitor, there’s a lot of potential if you buy into the ecosystem. The question remains – is it any good?
Google Announces Pixel 6, Pixel 6 Pro: The New Real Flagship Pixels
Today, after many weeks, even months, of leaks and teasers, Google has finally announced the new Pixel 6 and Pixel 6 Pro – their new flagship line-up of phones for 2021, carrying them over into next year. The two phones have been teased on numerous occasions and have probably one of the worst leak records of any phone ever, so today’s event revealed few unknowns. Yet Google still manages to put on the table a pair of very interesting phones, if not the most interesting Pixel phones the company has ever released.
The Arm DevSummit 2021 Keynote Live Blog: 8am PT (15:00 UTC)
This week seems to be Arm's week across the tech industry. Following yesterday's Arm SoC announcements from Apple, today sees Arm kick off their 2021 developer summit, aptly named DevSummit. As always, the show is opening up with a keynote delivered by Arm CEO Simon Segars, who will be using the opportunity to lay out Arm's vision of the future.

Arm chips are already in everything from toasters to PCs – and Arm isn't stopping there. So be sure to join us at 8am PT (15:00 UTC) for our live blog coverage of Arm's keynote.
Apple Announces M1 Pro & M1 Max: Giant New Arm SoCs with All-Out Performance
Today’s Apple Mac keynote has been very eventful, with the company announcing a new line-up of MacBook Pro devices, powered by two different new SoCs in Apple’s silicon line-up: the new M1 Pro and the M1 Max.

The M1 Pro and Max both follow up on last year’s M1, Apple’s first generation of Mac silicon that ushered in the beginning of Apple’s journey to replace x86-based chips with their own in-house designs. The M1 has been widely successful for Apple, showcasing fantastic performance at never-before-seen power efficiency in the laptop market. Although the M1 was fast, it was still a somewhat smaller SoC – one that also powers devices such as the iPad Pro line-up – with a correspondingly lower TDP, and so it naturally still lost out to larger, more power-hungry chips from the competition.

Today’s two new chips look to change that situation, with Apple going all-out for performance: more CPU cores, more GPU cores, much more silicon investment, and Apple now also increasing their power budget far past anything they’ve ever done in the smartphone or tablet space.
The Apple 2021 Fall Mac Event Live Blog 10am PT (17:00 UTC)
Following last month’s announcement event for Apple’s newest iPhone and iPad line-ups, today we’re seeing Apple hold its second fall event, where we expect the company to talk about all things Mac. Last year’s event was a historic one, with Apple introducing the M1 chip and the new Macs powered by it, marking the company’s move away from x86 chips from Intel and instead taking its future into its own hands with its own custom Arm silicon. This year, we’re expecting more chips and more devices, with even more performance to be released. Stay tuned as we cover tonight’s show.
TSMC Roadmap Update: 3nm in Q1 2023, 3nm Enhanced in 2024, 2nm in 2025
TSMC has introduced a brand-new manufacturing technology roughly every two years over the past decade. Yet as the complexity of developing new fabrication processes compounds, it is getting increasingly difficult to maintain such a cadence. The company has previously acknowledged that it will start producing chips using its N3 (3 nm) node about four months later than the industry is used to (i.e., Q2), and in a recent conference call with analysts, TSMC revealed additional details about its latest process technology roadmap, focusing on their N3, N3E, and N2 (2 nm) technologies.

N3 in 2023

TSMC's N3 technology will provide full node scaling compared to N5, so its adopters will get all of the performance (10% - 15%), power (-25% ~ -30%), and area (1.7x higher density for logic) enhancements that they have come to expect from a new node in this day and age. But these advantages will come at a cost. The fabrication process will rely extensively on extreme ultraviolet (EUV) lithography, and while the exact number of EUV layers is unknown, it will be a greater number than the 14 used in N5. The extreme complexity of the technology will further add to the number of process steps – bringing it to well over 1000 – which will further increase cycle times.

As a result, while mass production of the first chips using TSMC's N3 node will begin in the second half of 2022, the company will only be shipping them to an undisclosed client for revenue in the first quarter of 2023. Many observers, however, expected these chips to ship in late 2022.

"N3 risk production is scheduled in 2021, and production will start in second half of 2022," said C.C. Wei, CEO of TSMC. "So second half of 2022 will be our mass production, but you can expect that revenue will be seen in first quarter of 2023 because it takes long — it takes cycle time to have all those wafer out."

N3E in 2024

Traditionally, TSMC offers performance-enhanced and application-specific process technologies based on its leading-edge nodes several quarters after their introduction. With N3, the company will be changing its tactics somewhat, and will introduce a node called N3E, which can be considered an enhanced version of N3.

This process node will introduce an improved process window with performance, power, and yield enhancements. It is unclear whether N3 meets TSMC's expectations for PPA and yield, but the very fact that the foundry is talking about improving yields indicates that there is a way to improve them beyond traditional yield-boosting methods.

"We also introduced N3E as an extension of our N3 family," said Wei. "N3E will feature improved manufacturing process window with better performance, power and yield. Volume production of N3E is scheduled for one year after N3."

TSMC has not commented on whether N3E will be compatible with N3's design rules, design infrastructure, and IPs. Meanwhile, since N3E will serve customers a year after N3 (i.e., in 2024), there will be quite some time for chip designers to prepare for the new node.

N2 in 2025

TSMC's N2 fabrication process has largely been a mystery so far. The company has confirmed that it was considering gate-all-around field-effect transistors (GAAFETs) for this node, but has never said that the decision was final. Furthermore, it has never previously disclosed a schedule for N2.

But as N2 gets closer, TSMC is slowly locking down some additional details. In particular, the company is now formally confirming that the N2 node is scheduled for 2025.
Though they are not elaborating on whether this means HVM in 2025, or shipments in 2025.

"I can share with you that in our 2-nm technology, the density and performance will be the most competitive in 2025," said Wei.
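For a rough sense of what the quoted 1.7x logic density improvement means in area terms, here is a minimal sketch using only that figure; real blocks mix logic, SRAM, and analog, which scale differently, so this is illustrative rather than a TSMC claim.

    # Rough area implication of the quoted 1.7x logic density gain for N3 vs N5.
    density_gain = 1.7   # N3 logic density relative to N5, per TSMC

    relative_area = 1 / density_gain
    print(f"An N5 logic block shrinks to ~{relative_area:.0%} of its area on N3")  # ~59%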
TSMC to Build Japan's Most Advanced Semiconductor Fab
Fabs are well-known for being an expensive business to be in, so any time a new fab is slated for construction, it tends to be a big deal – especially amidst the current chip crunch. To that end, TSMC this week has announced plans to build a new, semi-specialized fab in Japan to meet the needs of its local customers. The semiconductor manufacturing facility will focus on mature and specialty fabrication technologies that are used to make chips with long lifecycles for automakers and consumer electronics. The fab will be Japan's most advanced fab for logic when it becomes operational in late 2024, and if the rumors about planned investments are correct, it could also be Japan's largest fab for logic chips.

"After conducting due diligence, we announce our intention to build a specialty technology fab in Japan, subject to our board of directors approval," announced CC Wei, chief executive officer of TSMC, during a conference call with investors and financial analysts. "We have received a strong commitment to support this project from both our customers and the Japanese government."

Comes Online in Late 2024

TSMC's fab in Japan will process 300-mm wafers using a variety of specialty and mature nodes, including a number of 28 nm technologies as well as the 22ULP process for ultra-low-power devices. These nodes are not used to make leading-edge ASICs and SoCs, but they are widely used by the automotive and consumer electronics industries and will continue to be used for years to come, not only for existing chips but for upcoming solutions as well.

"This fab will utilize 20 nm to 28 nm technology for semiconductor wafer fabrication," Wei added. "Fab construction is scheduled to begin in 2022 and production is targeted to begin in late 2024, further details will be provided subject to the board approval."

While TSMC disclosed the specialized nature of the fab, its schedule, and the fact that it has gained support from clients and the Japanese government, the company is not revealing anything beyond that. In fact, while it confirmed that the cost of the semiconductor production facility is not included in its $100 billion three-year CapEx plan, it refused to give any estimates about its planned investments in the project.

Meanwhile, there are many things that make this fab special for TSMC, Japan, and the industry.

The Most Advanced Logic Fab in Japan

It was late 2005, AMD and Intel had started to ship their first dual-core processors, and the CPU frequency battle was officially over. Intel was getting ready to introduce its first 65nm chips in early 2006, when all of a sudden Panasonic said that it had started volume production of the world's first application processors using a 65 nm technology, which it co-developed with Renesas, putting Panasonic a couple of months ahead of mighty Intel. In mid-2007, Panasonic again beat Intel to the punch by several months with its 45 nm fabrication process.

But with their 32 nm node, Panasonic was 9 – 10 months behind Intel. And while the company did a half-node shrink of this process, it ultimately pulled the plug on 22nm, following other Japanese conglomerates that had opted out of the process technology race even earlier. By now, all Japanese automotive and electronics companies outsource their advanced chips to foundries, who in turn build the majority of them outside of Japan.

By bringing a 22ULP/28nm-capable fab to Japan, TSMC will not only bring advanced logic manufacturing back to the country, but the facility would also amount to the most advanced logic fab in Japan.
TSMC is also constructing an R&D center in Japan and cooperates with the University of Tokyo on various matters, so its presence in the country is growing, which is good news for the local semiconductor industry.

Previously TSMC concentrated its fabs and R&D facilities in Taiwan, but it looks like its rapid growth, fueled by surging demand for semiconductors as well as geopolitical matters, is compelling the foundry to diversify its production and R&D locations.

What is particularly interesting is that, according to a Nikkei report, the Japanese production facility will be co-funded by TSMC, the Japanese government, and Sony. This marks another major strategy shift for TSMC, which tends to fully own its fabs. In fact, if the Nikkei report is to be believed, the whole project will cost around $7 billion (though it is not said whether this is the cost of the first phase of the fab, or a potential multi-year investment).

To put the number into context, SMIC recently announced plans to spend around $8.87 billion on a fab with a planned capacity of around 100,000 300-mm wafer starts per month (WSPM). TSMC's facility will presumably cost less and will be built in a country with higher operating costs, so it may well not be a GigaFab-level facility (which have capacities of ~100K WSPM). But still, we are talking about a sizable fab that could have a capacity of tens of thousands of wafer starts per month, which would make it Japan's biggest 300-mm logic facility ever. Just for comparison, the former Panasonic fab in Uozu (now controlled by Tower Semiconductor and Nuvoton) has a capacity of around 8,000 WSPM.

TSMC has not formally confirmed any numbers about its Japanese fab, but the company tends to build rather large production facilities that can be expanded if needed. Meanwhile, a fab in Japan that will serve the needs of local automotive and electronics conglomerates promises to help them avoid shortages of chips in the future. This would also leave TSMC free to assign its 28nm Taiwanese and Chinese production lines to other applications, including PCs, which is important for the whole industry.
G.Skill Unveils Premium Trident Z5 and Z5 RGB DDR5 Memory, Up To DDR5-6400 CL36
With memory manufacturers clamoring to push out DDR5 in time for the upcoming launch of Intel's Alder Lake processors, G.Skill has unveiled its latest premium Trident Z5 kits. The latest Trident kits are based on Samsung's new DDR5 memory chips and range in speed from DDR5-5600 to DDR5-6400, with latencies of either CL36 or CL40. Meanwhile, G.Skill has also opted to use this opportunity to undertake a complete design overhaul from its previous DDR4 memory, with a fresh new look and plenty of integrated RGB.

G.Skill Trident Z5 DDR5 Memory Specifications
Speed        Latencies      Voltage    Capacity
DDR5-6400    36-36-36-76
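As a rough illustration of what those timings mean in absolute terms, the sketch below converts the DDR5-6400 CL36 figures from the table into an absolute first-word CAS latency; only those two numbers are taken from the announcement.

    # Convert CAS latency from clock cycles to nanoseconds for DDR5-6400 CL36.
    transfer_rate_mts = 6400   # MT/s (data rate)
    cas_cycles = 36            # CL, in memory clock cycles

    # The memory clock is half the transfer rate (DDR), so one cycle = 2000 / MT/s ns.
    cas_ns = cas_cycles * 2000 / transfer_rate_mts
    print(f"Absolute CAS latency: {cas_ns:.2f} ns")  # 11.25 ns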
The EVGA Z590 Dark Motherboard Review: For Extreme Enthusiasts
Getting the most out of Intel's Core i9-11900K primarily relies on two main factors: premium cooling for the chip itself, and a solid motherboard acting as the foundation. And while motherboard manufacturers such as EVGA can't do anything about the former, they have quite a bit of experience with the latter.Today we're taking a look at EVGA's Z590 Dark motherboard, which is putting EVGA's experience to the test as one of a small handful of LGA1200 motherboards geared for extreme overclocking. A niche market within a niche market, few people really have the need (or the means) to overclock a processor within an inch of its life. But for those that do, EVGA has developed a well-earned reputation with its Dark series boards for pulling out all of the stops in helping overclockers get the most out of their chips. And even for the rest of us who will never see a Rocket Lake chip pass 6GHz, it's interesting to see just what it takes with regard to motherboard design and construction to get the job done.
The Be Quiet! Pure Loop 280mm AIO Cooler Review: Quiet Without Compromise
Today we're taking our first look at German manufacturer Be Quiet's all-in-one (AIO) CPU liquid coolers, with a review of their Pure Loop 280mm cooler. True to their design ethos, Be Quiet! has built the Pure Loop to operate with as little noise as is reasonably possible, making for a record-quiet cooler that also hits a great balance between overall performance, an elegant appearance, and price.
AMD Launches Radeon RX 6600: More Mainstream Gaming For $329
AMD this morning is once again expanding its Radeon RX 6000 family of video cards, this time with the addition of a second, cheaper mainstream offering: the Radeon RX 6600. Being announced and launched today, the Radeon RX 6600 is aimed at the mainstream 1080p gaming market, serving as a cheaper alternative to AMD’s already-released Radeon RX 6600 XT. Based on the same Navi 23 GPU as its sibling, the Radeon RX 6600 comes with 28 CUs’ worth of graphics hardware, 8GB of GDDR6 VRAM, and a 32MB Infinity Cache, with prices starting at $329.
Netgear Updates Orbi Lineup with RBKE960 Wi-Fi 6E Quad-Band Mesh System
Mesh networking kits / Wi-Fi systems have become quite popular over the last few years. Despite competition from startups such as eero (now part of Amazon) and Plume (with forced subscriptions), as well as big companies like Google (Google Wi-Fi and Nest Wi-Fi), Netgear's Orbi continues to enjoy popularity in the market. Orbi's use of a dedicated backhaul provides a tangible benefit over other Wi-Fi systems using shared backhauls. However, the costs associated with the additional radio have meant that the Orbi Wi-Fi systems have always carried a premium compared to the average market offerings in the space.
Netgear introduced their first Wi-Fi 6E router - the Nighthawk RAXE500 - at the 2021 CES. Priced at $600, the router utilized a Broadcom platform (BCM4908 network processing SoC + BCM46384 4-stream 802.11a/n/ac/ax radio). Today, the company is updating the Orbi lineup with a Wi-Fi 6E offering belonging to the AXE11000 class. Based on Qualcomm's Networking Pro Series 1610 platform (which integrates the IPQ8074 WiSoC and QCN9074 radios), the company is touting its RBKE960 Orbi series as the world's first quad-band Wi-Fi 6E mesh system.
Netgear's high-end Orbi kits have traditionally been tri-band solutions, with a second 5 GHz channel as a dedicated backhaul. With Wi-Fi 6E, a tri-band solution is mandated - 2.4 GHz, 5 GHz, and 6 GHz support are all needed for certification. The 6 GHz band, as discussed previously, opens up multiple 160 MHz channels that are free of interference. The RBKE960 series supports the three mandated bands and also retains a dedicated 5 GHz backhaul, making it a quad-band solution with combined Wi-Fi speeds of up to 10.8 Gbps across the four bands.
Netgear has opted to retain 5 GHz for the backhaul in order to maximize range. While the 6 GHz band is interference-free, power restrictions prevent communication in those channels from having as much range as the existing 5 GHz ones. Having a dedicated backhaul ensures that all the 'fronthaul' channels are available for client devices (shared backhauls result in a 50% reduction in the speeds available to client devices for each additional node / satellite). The benefits of Wi-Fi 6E and what consumers can expect from the 6 GHz band have already been covered in detail in our Nighthawk RAXE500 launch piece. The Orbi RBKE960 series supports up to seven 160 MHz channels, allowing for interference-free operation even in dense apartments with multiple neighbors.
The RBKE960 supports 16 Wi-Fi streams, making for an extremely complex antenna design. Netgear has made improvements based on past experience to the extent that the new Orbi RBKE960 performs better than the Orbi RBK850 even for 5 GHz communication (the larger size of the unit also plays a part in this).
In terms of hardware features, the router sports a 10G WAN port, 3x 1GbE ports, and 1x 2.5GBASE-T port. The satellite doesn't have the WAN port, but retains the other three. The 2.5GBASE-T port can be used to create an Ethernet backhaul between the router and the satellite. On the software side, the new Orbi creates four separate Wi-Fi networks for different use-cases.
The reduced range in the 6 GHz band means that large homes might require multiple satellites to blanket the whole area with 6 GHz coverage. Installation and management are handled via the Orbi app.
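To illustrate why the dedicated backhaul matters, here is a minimal Python sketch of the halving effect described above: on a shared-backhaul system, each additional wireless hop re-uses the same radio for relay traffic and roughly halves client throughput, whereas a dedicated backhaul (ideally) preserves the fronthaul rate. The link rate used is a made-up round number, not an Orbi specification.

```python
# Illustrative model of shared vs. dedicated backhaul throughput.
# Each wireless hop on a shared backhaul re-transmits client traffic
# on the same radio, roughly halving usable throughput per hop.

def shared_backhaul_mbps(link_rate_mbps: float, hops: int) -> float:
    return link_rate_mbps / (2 ** hops)

def dedicated_backhaul_mbps(link_rate_mbps: float, hops: int) -> float:
    # A separate backhaul radio carries the relay traffic, so the
    # fronthaul rate is (ideally) preserved regardless of hop count.
    return link_rate_mbps

for hops in range(3):
    print(f"{hops} hop(s): shared = {shared_backhaul_mbps(1200, hops):.0f} Mbps, "
          f"dedicated = {dedicated_backhaul_mbps(1200, hops):.0f} Mbps")
```

This is obviously a simplification (real-world airtime, interference, and channel width all matter), but it captures why Netgear keeps paying the silicon cost of a fourth radio.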
Netgear also includes the NETGEAR Armor cyber-security suite with integrated parental controls - some features in Armor are subscription-based.Netgear is also introducing an 'Orbi Black Edition' available exclusively on Netgear's own website. With the RAXE500 setting the stage with its $600 price point, it is no surprise that the RBSE960 satellite costs the same (trading the WAN port and other features for an extra 4x4 radio). A kit with a router and a single satellite (RBKE962) is priced at $1100, while the RBKE963 (an additional satellite) bumps up the price tag to $1500. With home Wi-Fi becoming indispensable thanks to the work-from-home trend among other things, Netgear believes consumers will be ready to fork out what is essentially the price of a high-end smartphone or notebook for a reliable and future-proof Wi-Fi solution.
SanDisk Professional G-DRIVE SSD and ArmorLock SSD Review
Western Digital introduced the SanDisk Professional branding in May 2021 for its G-Technology products targeting the content-capture market. The company has taken the opportunity to update some of the hardware in the process of transitioning from G-Technology to the new branding. The G-DRIVE family represents the lineup of single-disk direct-attached storage units from SanDisk Professional. Today's review takes a look at the G-DRIVE SSD and G-DRIVE ArmorLock SSD - two bus-powered portable SSDs with a USB 3.2 Gen 2 interface that target very different use-cases.
Apple's iPhone 13 Series Screen Power, Battery Life Report - Long Lasting Devices
Following last week's preview of the new iPhone 13 series' A15 chip, which impressed us tremendously with its efficiency gains, we promised to follow up with a closer look at the new phones' battery life, and at how the new display generation and screen efficiency tie in with the SoC efficiency and increased battery capacities this generation.
The EVGA X570 Dark Motherboard Review: A Dark Beast For Ryzen
Quite a few of the motherboards we have reviewed over the last month have been aimed at enthusiasts with a penchant for extreme overclocking. Today's review focuses on the EVGA X570 Dark, which is more than the usual desktop AM4 motherboard. It's EVGA's first entry into the market for AMD's Ryzen processors, focusing on performance and overclocking more than most other X570/X570S boards currently available. Some of the EVGA X570 Dark's most notable features include two memory slots with support for DDR4-4800, dual PCIe 4.0 x4 M.2, eight SATA ports, dual 2.5 GbE, and support for Wi-Fi 6. Is EVGA, until now an Intel- and NVIDIA-only affair, enough to tempt you to the 'DARK' side? Time to take a look and see if the X570 Dark has enough about it to justify the combination of an unconventional design and a premium price tag.
Seagate Updates Game Drive SSD for Xbox with New Look and Internals
Seagate has been maintaining a line of Xbox-certified external SSDs since late 2016. The current Game Drive for Xbox SSD is based on the Seagate Fast SSD's internals and industrial design. With the USB 3.2 Gen 1 (BarraCuda) Fast SSD reaching EOL (that market segment has since moved on to USB 3.2 Gen 2), the time has now come for Seagate to revamp the internals of the Game Drive for Xbox SSD and give it a new look.
The company is introducing a new Game Drive for Xbox SSD that takes inspiration from the currently available HDD equivalent - sporting a sleek all-black look with a green LED bar. The 96mm x 50mm x 11mm bus-powered portable SSD weighs just 51g, and sports a USB 3.2 Gen 1 micro-B interface. It is compatible with the Xbox Series X, Series S, and all generations of the Xbox One. The package includes a 46cm USB 3.0 cable (micro-B to Type-A).
Seagate is planning to launch only a single SKU of the new product - a 1TB version (STLD1000400) for $170. Interestingly, the 3-year warranty is also accompanied by data-loss protection using Seagate's Rescue Data Recovery Services. Availability is slated for later this month, well in time for the holiday season.
The article will be updated with additional information once we hear back from Seagate regarding details of the SSD's internals.
The Ampere Altra Max Review: Pushing it to 128 Cores per Socket
Following last year's 80-core Altra, Ampere is now delivering the new Altra Max server processor with up to 128 cores, double that of the competition, with a focus on cloud and hyperscale deployments. We've had a look at the new chip.
Samsung Foundry: 2nm Silicon in 2025
One of the key semiconductor technologies beyond 3D FinFET transistors is the Gate-All-Around (GAA) transistor, which promises to help extend the push toward higher performance and lower power in processors and other components. Samsung has long said that its first-generation GAA technology will align with its ‘3nm’ nodes, in the form of its 3GAE and 3GAP processes. As part of the Samsung Foundry Forum today, the company offered more insight into the timeline for the rollout, as well as talk of its 2nm process.
Samsung Foundry’s New 17nm Node: 17LPV brings FinFET to 28nm
While most discussion about chip manufacturing focuses on the leading edge (the blazingly fast and complex side of the industry), demand for ‘legacy’ process technologies is also higher than ever, and by volume they are a much bigger business than the latest and greatest. These legacy processes form the backbone of most modern electronics, so being able to offer equivalent technology at lower cost/power is often a win-win for manufacturers and chip designers alike. To that end, Samsung is announcing a new 17nm process node, designed for customers still using a planar 28nm process but who want to take advantage of 14nm FinFET technology.
Samsung Foundry to Almost Double Output by 2026
It’s hard not to notice that we’re in the middle of a semiconductor crunch right now. Factories are running at full steam, but pinch points in the supply chain are causing chaos and bottlenecks: whether that’s a shortage of packaging materials, shipping costs that have increased 10x, or additional tariffs, industries that rely on semiconductors are left waiting for supply and then paying over expected prices. Nonetheless, everything that is made is being sold, and so all of the big foundries are driving more investment into their supply chain ecosystems as well as raw manufacturing, and Samsung is no different.
The Microsoft Surface Laptop Studio Review: Dynamic Design
Microsoft’s Surface team has produced some amazing designs over the years, often focusing on convertible devices to highlight the adaptability of Windows. That being said, over the last several years the design team has been largely held in check, as Microsoft has opted to focus on further refining its existing convertible designs. Thankfully, for 2021 the team is back to innovation as well as refinement with its latest device, the Surface Laptop Studio. With its dynamic woven hinge, the Laptop Studio is a true convertible device, as well as the spiritual successor to the now-defunct Surface Book.
What to Expect with Windows 11: A Day One Hands-On
Tomorrow, Microsoft is officially launching Windows 11, the next installment of the operating system that underpins the majority of PCs in use today. Windows 10 has an install base of over 1 billion devices, and Windows 11 arrives in a much different place than its predecessor did. After the much-maligned Windows 8, there was a sense of urgency and necessity that ushered Windows 10 into the world. Windows 11, on the other hand, comes into a market where most people are happy with Windows 10. So it raises the question: why now?
ASUS GeForce RTX 3070 Noctua Edition Announced
One thing Noctua is famed for, other than its high-end design and engineering team delivering top-quality air-cooling products, is its brown/beige color scheme. Some users detest the off-key, non-conventional color, which rarely goes with anything else inside their PC, and shun Noctua for that reason; others swear by the design. ASUS has gone one step further by teaming up with Noctua to create an NVIDIA GeForce RTX 3070 graphics card. The ASUS GeForce RTX 3070 Noctua Edition features two NF-A12x25 PWM cooling fans with a semi-passive design and aims to be one of the coolest and quietest air-cooled RTX 3070 cards on the market.
ASUS x Noctua: The Start of Something Bigger?
In August, Twitter user @KOMACHI_ENSAKA spotted that an ASUS and Noctua collaboration might be in the works via a listing on the Eurasian Economic Commission (EEC) website. Putting the rumors to rest, ASUS and Noctua have announced their collaboration today, presenting the ASUS GeForce RTX 3070 Noctua Edition along with a slightly higher-clocked OC version.
The most striking thing visually about the ASUS GeForce RTX 3070 Noctua Edition models is the pair of beige Noctua NF-A12x25 PWM cooling fans, attached to a custom heatsink design with multiple fins and heat pipes behind them. Noctua and ASUS have opted for an extensive cooling design: the RTX 3070 Noctua Edition and its OC version take up 4.3 slots' worth of space and measure in at 12.2 inches in length.
Both the regular and OC models operate with a semi-passive design, which means that whenever the temperature drops below 50°C, the fans will switch off. Both models have standard I/O, including dual HDMI 2.1 and three DisplayPort 1.4a video outputs. Power is provided by a pair of 8-pin PCIe power inputs, and ASUS recommends that users install a 750 W power supply or greater.
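For readers curious how a semi-passive (zero-RPM) policy like this typically behaves, the Python sketch below models it with a stop threshold and a small hysteresis band so the fans don't rapidly toggle around the cutoff. The 50°C stop point comes from the announcement above; the restart point and the hysteresis behavior are assumptions for illustration, not ASUS or Noctua firmware values.

```python
# Minimal sketch of a semi-passive (zero-RPM) fan policy.
# FAN_STOP_C matches the 50°C figure quoted above; FAN_START_C and the
# hysteresis behavior are illustrative assumptions, not firmware values.

FAN_STOP_C = 50      # fans switch off below this temperature
FAN_START_C = 55     # hypothetical restart point, above the stop point

def update_fan_state(temp_c: float, fans_running: bool) -> bool:
    """Return the new fan state for the given GPU temperature."""
    if fans_running and temp_c < FAN_STOP_C:
        return False                 # cool enough: go passive
    if not fans_running and temp_c >= FAN_START_C:
        return True                  # warmed up: spin the fans back up
    return fans_running              # otherwise keep the current state

state = False
for t in [42, 48, 53, 56, 60, 52, 49]:
    state = update_fan_state(t, state)
    print(f"{t}°C -> fans {'on' if state else 'off'}")
```

The gap between the stop and start points is what keeps the fans from audibly cycling on and off when the GPU hovers right around the threshold under light loads.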
ASRock Rack Lists WRX80D8-2T Motherboard For Ryzen Threadripper Pro
ASRock Rack has listed a new motherboard on its website supporting AMD's latest Ryzen Threadripper Pro 3000WX series of processors. The ASRock Rack WRX80D8-2T is currently under 'preliminary' status and features eight memory slots, seven full-length PCIe 4.0 x16 slots, twelve SATA ports, and support for two PCIe 4.0 x4 M.2 drives. It also includes dual 10 GbE and an ASPEED BMC controller with a dedicated management LAN port and D-Sub video output.
In terms of design, the ASRock Rack WRX80D8-2T follows a basic green design with blue memory slots and black PCIe slots and power connectors. Surrounding a transposed sTRX4 (WRX80) socket are eight memory slots with support for up to 2TB of capacity, with ECC and non-ECC UDIMM, RDIMM, LRDIMM, and RDIMM 3DS memory types supported. Providing power to the motherboard is a 24-pin 12 V ATX power input, while CPU power comes from a pair of 8-pin 12 V ATX CPU power inputs, all of which are located in the top right-hand corner.
Dominating the lower half of the board are seven full-length PCIe 4.0 x16 slots, which use 112 of the 120 PCIe lanes supported by the Zen 2-based Ryzen Threadripper Pro 3000WX processors. Focusing on storage, the WRX80D8-2T supports twelve SATA ports from the WRX80 chipset: four regular 7-pin SATA ports, plus two OCuLink ports that can each be used either for U.2 storage at PCIe 4.0 x4 or for an additional four SATA ports apiece. Other storage options include two PCIe 4.0 x4 M.2 slots with support for form factors up to M.2 22110. Cooling options consist of seven 6-pin fan headers.
On the rear panel are two USB 3.2 G1 Type-A ports, along with a dedicated Realtek RTL8211E Gigabit management LAN port and a D-Sub video output powered by an ASPEED AST2500 BMC controller, which adds IPMI support. Users looking to add more USB ports can do so via front panel headers, including one USB 3.2 G2 Type-C header and one USB 3.2 G1 Type-A header for an additional two ports. Networking consists of two RJ45 ports powered by an Intel X550-AT2 10 GbE controller. Finishing off the rear panel are a serial port and a small UID identification LED button.
At the time of writing, we don't have any information on either the pricing or availability of the ASRock Rack WRX80D8-2T.
Source: ASRock Rack
The Apple A15 SoC Performance Review: Faster & More Efficient
In preparation for our full iPhone device review, we’re having a dedicated look at the new A15 SoC from Apple – following quite vague performance claims, how does the new chip stand up against its predecessor & competition?
Western Digital Updates WD Blue Series with SN570 DRAM-less NVMe SSD
Western Digital is unveiling its latest addition to the WD Blue family today - the SN570 NVMe SSD. A DRAM-less PCIe 3.0 x4 drive, it brings performance improvements over the current lead product in the line, the SN550. In order to better appeal to the content creator market, WD is also bundling a free month of membership to Adobe Creative Cloud.
Similar to the SN550, the SN570 is available in three capacities - 250GB, 500GB, and 1TB. All drives are single-sided, come with a 5-year warranty, and carry a 0.3 DWPD rating. The key performance improvement over the SN550 is the increase in sequential read speeds from 2400 MBps to 3500 MBps. Though Western Digital wouldn't officially confirm it, we believe this is likely due to the move from BiCS 4 96L 3D TLC to BiCS 5 112L 3D TLC. We did obtain confirmation that these drives are set to be equipped with 3D TLC over their complete lifetime, and will not move to QLC.
Western Digital SN570 SSD Specifications
Capacity                250 GB       500 GB       1 TB
Controller              WD In-House?
NAND Flash              Western Digital / Kioxia BiCS 5 112L 3D TLC NAND?
Form-Factor, Interface  Single-Sided M.2-2280, PCIe 3.0 x4, NVMe 1.4
Sequential Read         3300 MB/s    3500 MB/s    3500 MB/s
Sequential Write        1200 MB/s    2300 MB/s    3000 MB/s
Random Read IOPS        190K         360K         460K
Random Write IOPS       210K         390K         450K
SLC Caching             Yes
TCG Opal Encryption     No
Warranty                5 years
Write Endurance         150 TBW
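As a quick check on how the 0.3 DWPD rating relates to TBW figures over the 5-year warranty, here is a small Python sketch. The per-capacity TBW values it produces are approximations derived from the DWPD number, not official specifications (the table above only lists a 150 TBW figure).

```python
# Converting DWPD (drive writes per day) into approximate TBW
# (terabytes written) over the 5-year warranty period.
# These are derived estimates, not Western Digital's rated figures.

WARRANTY_YEARS = 5
DWPD = 0.3

for capacity_tb in (0.25, 0.5, 1.0):
    tbw = DWPD * capacity_tb * 365 * WARRANTY_YEARS
    print(f"{capacity_tb:>4} TB drive at {DWPD} DWPD ~= {tbw:.0f} TBW")
```

For the 250GB model this works out to roughly 137 TBW, which lines up reasonably well with the 150 TBW endurance figure quoted in the specification table.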
ASUS PN50 mini-PC Review: A Zen 2 Business NUC
Ultra-compact form-factor (UCFF) machines have been one of the major drivers in the resurgence of the PC market. The trend was kickstarted by Intel's NUCs in the early 2010s. These PCs have usually relied on low-power processors with compelling performance-per-watt metrics. AMD was largely absent from this market until the introduction of the Ryzen processors. While ASRock Industrial was one of the first to release a UCFF mini-PC based on the first-generation Ryzen embedded processors, multiple OEMs have since lined up to use the second-generation AMD processors in their own high-performance mini-PC lineups. Today, we are looking at the performance and value proposition of the ASUS PN50 - a high-end UCFF system based on the AMD Ryzen 7 4800U SoC.
An Interview with Intel Lab’s Mike Davies: The Next Generation of Neuromorphic Research
As part of the launch of the new Loihi 2 chip, built on a pre-production version of Intel’s 4 process node, the Intel Labs team behind its neuromorphic efforts reached out for a chance to speak to Mike Davies, the Director of the project. It is perhaps no shock that Intel’s neuromorphic efforts have been on my radar for a number of years: as a new paradigm of computing compared to the traditional von Neumann architecture, one that is meant to mimic brains and take advantage of such designs, if it works well it has the potential to shake up specific areas of the industry, as well as Intel’s bottom line. Also, given that we’ve never really covered neuromorphic computing in any serious detail here on AnandTech, it would be a great opportunity to get details on this area of research, as well as the newest hardware, direct from the source.
Intel Rolls Out New Loihi 2 Neuromorphic Chip: Built on Early Intel 4 Process
We’ve been keeping light tabs on Intel’s Neuromorphic efforts ever since it launched its first dedicated 14nm silicon for Neuromorphic Computing, called Loihi, back in early 2018. In an interview with Intel Lab’s Director Dr. Richard Uhlig back in March 2021, I asked about the development of the hardware, and when we might see a second generation. Today is that day, and the group is announcing Loihi 2, a substantial upgrade over the first generation that addresses a lot of the low-hanging fruit from the first design. What is perhaps just as interesting is the process node used: Intel is communicating that Loihi 2 is being built, in silicon today, using a pre-production version of Intel’s first EUV process node, Intel 4.