Feed anandtech

Link https://anandtech.com/
Feed https://anandtech.com/rss/
Updated 2024-04-24 23:00
AMD Announces Ryzen 7000 Reveal Livestream for August 29th
In a brief press release sent out this morning, AMD has announced that they will be delivering their eagerly anticipated Ryzen 7000 unveiling later this month as a live stream. In an event dubbed “together we advance_PCs”, AMD will be discussing the forthcoming Ryzen 7000 series processors as well as the underlying Zen 4 architecture and associated AM5 platform – laying the groundwork ahead of AMD’s planned fall launch for the Ryzen 7000 platform. The event is set to kick off on August 29 at 7pm ET (23:00 UTC), with CEO Dr. Lisa Su and CTO Mark Papermaster slated to present.

AMD first unveiled their Ryzen 7000 platform and branding back at Computex 2022, offering quite a few high-level details on the forthcoming consumer processor platform while stating it would be launching in the fall. The new CPU family will feature up to 16 Zen 4 cores using TSMC's optimized 5 nm manufacturing process for the Core Complex Die (CCD), and TSMC’s 6nm process for the I/O Die (IOD). AMD has not disclosed a great deal about the Zen 4 architecture itself, though their Computex presentation indicated we should expect a several percent increase in IPC, along with a further several percent increase in peak clockspeeds, allowing for a 15%+ increase in single-threaded performance.

The Ryzen 7000 series is also notable for being the first of AMD’s chiplet-based CPUs to integrate a GPU – in this case embedding it in the IOD. The modest GPU allows AMD’s CPUs to supply their own graphics, eliminating the need for a discrete GPU just to boot a system while, we expect, providing enough performance for basic desktop work.
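As a rough illustration of how the IPC and clockspeed gains compound into that single-threaded figure, here is a quick sketch; the 8% and 7% values are hypothetical stand-ins for AMD's unspecified "several percent" improvements, not disclosed numbers:

```python
# Hypothetical illustration: single-threaded performance scales roughly with
# the product of the IPC gain and the clockspeed gain. The specific values
# below are assumptions, not AMD-disclosed figures.
ipc_gain = 1.08       # assumed "several percent" IPC improvement
clock_gain = 1.07     # assumed "several percent" peak clockspeed improvement

st_uplift = ipc_gain * clock_gain - 1
print(f"Combined single-threaded uplift: {st_uplift:.1%}")  # ~15.6%
```

Compounding two single-digit gains this way is how a "15%+" single-threaded claim can fall out of two individually modest improvements.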
The AlphaCool Eisbaer Aurora 360 AIO Cooler Review: Improving on Expandable CPU Cooling
Today, we are taking a look at the updated version of the Alphacool Eisbaer AIO CPU cooler, the Eisbaer Aurora. For its second-generation product, Alphacool has gone through the Eisbaer design and improved every single part of this cooler, from the pump to the radiator and everything in-between. Combined with a modular design that allows additional blocks to be attached to this otherwise closed-loop cooler, Alphacool has a unique and powerful CPU cooler on its hands – albeit one that carries a price premium to match.
ASRock Industrial NUC BOX-1260P and 4X4 BOX-5800U Review: Alder Lake-P and Cezanne UCFF Faceoff
The past few years have seen Intel and AMD delivering new processors in a staggered manner. In the sub-45W category, Intel's incumbency has allowed it to deliver products for both the notebook and ultra-compact form factor (UCFF) within a few months of each other. On the other hand, AMD's focus has been on the high-margin notebook market, with the chips filtering down to the desktop market a year or so down the road. In this context, AMD's Cezanne (most SKUs based on the Zen 3 microarchitecture) and Intel's Tiger Lake went head-to-head last year in the notebook market, while Rembrandt (based on Zen3+) and Alder Lake-P are tussling it out this year. In the desktop space, Cezanne-based mini-PCs started making an appearance a few months back, coinciding with the first wave of Alder Lake-P systems. ASRock Industrial launched the NUC BOX-1200 series (Alder Lake-P) and the 4X4 BOX-5000 series (Cezanne) within a few weeks of each other. The company sent over the flagship models in both lineups for review, giving us a chance to evaluate the performance and value proposition of the NUC BOX-1260P and 4X4 BOX-5800U. Read on to find out how Alder Lake-P and Cezanne stack up against each other in the mini-PC space, along with a look into what helps ASRock Industrial introduce mini-PCs based on the latest processors well ahead of its competitors.
UCIe Consortium Incorporates, Adds NVIDIA and Alibaba As Members
Among the groups with a presence at this year’s Flash Memory Summit is the UCIe Consortium, the recently formed group responsible for the Universal Chiplet Interconnect Express (UCIe) standard. First unveiled back in March, the UCIe Consortium is looking to establish a universal standard for connecting chiplets in future chip designs, allowing chip builders to mix-and-match chiplets from different companies. At the time of the March announcement, the group was looking for additional members as it prepared to formally incorporate, and for FMS they’re offering a brief update on their progress.

First off, the group has now become officially incorporated. And while this is largely a matter of paperwork for the group, it’s nonetheless an important step, as it properly establishes them as a formal consortium. Among other things, this has allowed the group to launch their work groups for developing future versions of the standard, as well as to offer initial intellectual property rights (IPR) protections for members.

More significant, however, is the makeup of the incorporated UCIe board. While UCIe was initially formed with 10 members – a veritable who’s who of many of the big players in the chip industry – there were a couple of notable absences. The incorporated board, in turn, has picked up two more members who have bowed to the peer (to peer) pressure: NVIDIA and Alibaba.

NVIDIA for its part has already announced that it would support UCIe in future products (even if it’s still pushing customers to use NVLink), so their addition to the board is not unexpected. Still, it brings on board what’s essentially the final major chip vendor, firmly establishing support for UCIe across all of the ecosystem’s big players.
Meanwhile, like Meta and Google Cloud, Alibaba represents another hyperscaler joining the group, and will presumably be taking full advantage of UCIe in developing chips for its datacenters and cloud computing services.

Overall, according to the Consortium, the group is now up to 60 members total. And they are still looking to add more through events like FMS as they roll on towards getting UCIe 1.0 implemented in production chiplets.
SK hynix Announces 238 Layer NAND - Mass Production To Start In H1'2023
As the 2022 Flash Memory Summit continues, SK hynix is the latest vendor to announce their next generation of NAND flash memory at the show, showcasing for the first time the company’s forthcoming 238 layer TLC NAND, which promises both improved density/capacity and improved bandwidth. At 238 layers, SK hynix has, at least for the moment, secured bragging rights for the greatest number of layers in a TLC NAND die – though with mass production not set to begin until 2023, it’s going to be a while until the company’s newest NAND shows up in retail products.

Following closely on the heels of Micron’s 232L TLC NAND announcement last week, SK hynix is upping the ante ever so slightly with a 238 layer design. Though the difference in layer counts is largely inconsequential when you’re talking about NAND dies with 200+ layers to begin with, in the highly competitive flash memory industry it gives SK hynix bragging rights on layer counts, breaking the previous stalemate between them, Samsung, and Micron at 176L.

From a technical perspective, SK hynix’s 238L NAND further builds upon the basic design of their 176L NAND. So we’re once again looking at a string stacked design, with SK hynix using a pair of 119 layer decks, up from 88 layers in the previous generation. This makes SK hynix the third flash memory vendor to master building decks over 100 layers tall, and is what’s enabling them to produce a 238L NAND design that holds the line at two decks.

SK hynix’s NAND decks continue to be built with their charge-trap, CMOS under Array (CuA) architecture, which sees the bulk of the NAND’s logic placed under the NAND memory cells. According to the company, their initial 512Gbit TLC part has a die size of 35.58 mm², which works out to a density of roughly 14.39 Gbit/mm². That’s a 35% improvement in density over their previous-generation 176L TLC NAND die at equivalent capacities.
Notably, this does mean that SK hynix will be ever so slightly trailing Micron’s 232L NAND despite their total layer count advantage, as Micron claims they’ve hit a density of 14.6 Gbit/mm² on their 1Tbit dies.

SK hynix 3D TLC NAND Flash Memory
                      238L          176L
Layers                238           176
Decks                 2 (x119)      2 (x88)
Die Capacity          512 Gbit      512 Gbit
Die Size              35.58 mm²     ~47.4 mm²
Density (Gbit/mm²)    ~14.39        10.8
I/O Speed             2.4 GT/s
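The density figures follow directly from die capacity over die area; a quick sketch of the arithmetic, using SK hynix's quoted numbers:

```python
# Density = die capacity / die area, using SK hynix's quoted figures.
die_capacity_gbit = 512      # 512 Gbit TLC die
die_size_mm2 = 35.58         # 238L die size in mm^2

density_238l = die_capacity_gbit / die_size_mm2
print(f"238L density: {density_238l:.2f} Gbit/mm^2")  # ~14.39

density_176l = 10.8          # previous-generation (176L) figure
improvement = density_238l / density_176l - 1
print(f"Generational improvement: {improvement:.0%}")  # ~33%, near the quoted 35%
```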
Solidigm Announces P41 Plus SSD: Taking Another Shot at QLC With Cache Tiering
Although Intel is no longer directly in the SSD market these days, their SSD team and related technologies continue to live on under the SK hynix umbrella as Solidigm. Since their initial formation at the very end of 2021, Solidigm has been in the process of reestablishing their footing, continuing to sell and support Intel’s previous SSD portfolio while continuing development of their next generation of SSDs. On the enterprise side of matters this recently culminated in the launch of their new D7 SSDs. Meanwhile on the consumer side of matters, today at Flash Memory Summit the company is announcing their first post-Intel consumer SSD, the Solidigm P41 Plus.

The P41 Plus is, at a high level, the successor to Intel’s 670p SSD, the company’s second-generation QLC-based SSD. And based on that description alone, a third-generation QLC drive from Solidigm is something that few AnandTech readers would find remarkable. QLC makes for cheap high(ish) capacity SSDs, which OEMs love, while computing enthusiasts are decidedly less enthusiastic about them. But then the P41 Plus isn’t just a traditional QLC drive.

One of the more interesting ventures out of Intel’s time as a client SSD manufacturer was the company’s forays into cache tiering. Whether it was using flash memory as a hard drive cache, using 3D XPoint as a hard drive cache, or even using 3D XPoint as a flash memory cache, Intel tried several ways to speed up the performance of slower storage devices in a cost-effective manner. And while Intel’s specific solutions never really caught on, Intel’s core belief that some kind of caching is necessary proved correct, as all modern TLC and QLC SSDs come with pseudo-SLC caches for improved burst write performance.

While they are divorced from Intel these days, Solidigm is picking up right where Intel left off, continuing to experiment with cache tiering.
Coming from the same group that developed Intel’s mixed 3D XPoint/QLC drives such as the Optane Memory H20, Solidigm no longer has access to Intel’s 3D XPoint memory (and soon, neither will Intel). But they do have access to flash memory. So for their first solo consumer drive as a stand-alone subsidiary, Solidigm is taking a fresh stab at cache tiering, expanding the role of the pSLC cache to serve as both a write cache and a read cache.
Compute Express Link (CXL) 3.0 Announced: Doubled Speeds and Flexible Fabrics
While it’s technically still the new kid on the block, the Compute Express Link (CXL) standard for host-to-device connectivity has quickly taken hold in the server market. Designed to offer a rich I/O feature set built on top of the existing PCI-Express standards – most notably cache-coherency between devices – CXL is being prepared for use in everything from better connecting CPUs to accelerators in servers, to being able to attach DRAM and non-volatile storage over what’s physically still a PCIe interface. It’s an ambitious and yet widely-backed roadmap that in three short years has made CXL the de facto advanced device interconnect standard, leading to rival standards Gen-Z, CCIX, and as of yesterday, OpenCAPI, all dropping out of the race.

And while the CXL Consortium is taking a quick victory lap this week after winning the interconnect wars, there is much more work to be done by the consortium and its members. On the product front the first x86 CPUs with CXL are just barely shipping – largely depending on what you want to call the limbo state that Intel’s Sapphire Rapids chips are in – and on the functionality front, device vendors are asking for more bandwidth and more features than were in the original 1.x releases of CXL. Winning the interconnect wars makes CXL the king of interconnects, but in the process, it means that CXL needs to be able to address some of the more complex use cases that rival standards were being designed for.

To that end, at Flash Memory Summit 2022 this week, the CXL Consortium is announcing the next full version of the CXL standard, CXL 3.0. Following up on the 2.0 standard, which was released at the tail-end of 2020 and introduced features such as memory pooling and CXL switches, CXL 3.0 focuses on major improvements in a couple of critical areas for the interconnect. The first of these is the physical side, where CXL is doubling its per-lane throughput to 64 GT/second.
Meanwhile, on the logical side of matters, CXL 3.0 is greatly expanding the logical capabilities of the standard, allowing for complex connection topologies and fabrics, as well as more flexible memory sharing and memory access modes within a group of CXL devices.
Phison and Seagate Announce X1 SSD Platform: U.3 PCIe 4.0 x4 with 128L eTLC
Phison and Seagate have been collaborating on SSDs since 2017 in the client as well as SMB/SME space. In April 2022, they announced a partnership to develop and distribute enterprise NVMe SSDs. At the Flash Memory Summit this week, the results of the collaboration are being announced in the form of the X1 SSD platform - a U.3 PCIe 4.0 x4 NVMe SSD that is backwards compatible with U.2 slots.

The X1 SSD utilizes a new Phison controller exclusive to Seagate - the E20. It integrates two ARM Cortex-R5 cores along with multiple co-processors that accelerate SSD management tasks. Phison is touting the improvement in random read IOPS (claims of up to 30% faster than the competition in its class) as a key driver for its fit in AI training and application servers servicing thousands of clients. The key specifications of the X1 SSD platform are summarized in the table below. The performance numbers quoted are for the 1DWPD 3.84TB model.

Seagate / Phison X1 SSD Platform
Capacities    1.92 TB, 3.84 TB, 7.68 TB, 15.36 TB (1DWPD models)
OpenCAPI to Fold into CXL - CXL Set to Become Dominant CPU Interconnect Standard
With the 2022 Flash Memory Summit taking place this week, not only is there a slew of solid-state storage announcements in the pipe over the coming days, but the show is also increasingly a popular venue for discussing I/O and interconnect developments as well. Kicking things off on that front, this afternoon the OpenCAPI and CXL consortiums are issuing a joint announcement that the two groups will be joining forces, with the OpenCAPI standard and the consortium’s assets being transferred to the CXL consortium. With this integration, CXL is set to become the dominant CPU-to-device interconnect standard, as virtually all major manufacturers are now backing the standard, and competing standards have bowed out of the race and been absorbed by CXL.

Pre-dating CXL by a few years, OpenCAPI was one of the earlier standards for a cache-coherent CPU interconnect. The standard, backed by AMD, Xilinx, and IBM, among others, was an extension of IBM’s existing Coherent Accelerator Processor Interface (CAPI) technology, opening it up to the rest of the industry and placing its control under an industry consortium. In the last six years, OpenCAPI has seen a modest amount of use, most notably being implemented in IBM’s POWER9 processor family. Like similar CPU-to-device interconnect standards, OpenCAPI was essentially an application extension on top of existing high speed I/O standards, adding things like cache-coherency and faster (lower latency) access modes so that CPUs and accelerators could work together more closely despite their physical disaggregation.

But, as one of several competing standards tackling this problem, OpenCAPI never quite caught fire in the industry. Born from IBM, it counted IBM as its biggest user at a time when IBM’s share of the server space was in decline. And even consortium members on the rise, such as AMD, ended up skipping the technology, leveraging their own Infinity Fabric architecture for AMD server CPU/GPU connectivity, for example.
This has left OpenCAPI without a strong champion – and without a sizable userbase to keep things moving forward. Ultimately, the desire of the wider industry to consolidate behind a single interconnect standard – for the sake of both manufacturers and customers - has brought the interconnect wars to a head. And with Compute Express Link (CXL) quickly becoming the clear winner, the OpenCAPI consortium is becoming the latest interconnect standards body to bow out and become absorbed by CXL.

Under the terms of the proposed deal – pending approval by the necessary parties – the OpenCAPI consortium’s assets and standards will be transferred to the CXL consortium. This would include all of the relevant technology from OpenCAPI, as well as the group’s lesser-known Open Memory Interface (OMI) standard, which allowed for attaching DRAM to a system over OpenCAPI’s physical bus. In essence, the CXL consortium would be absorbing OpenCAPI; and while they won’t be continuing its development for obvious reasons, the transfer means that any useful technologies from OpenCAPI could be integrated into future versions of CXL, strengthening the overall ecosystem.

With the sublimation of OpenCAPI into CXL, this leaves the Intel-backed standard as the dominant interconnect standard – and the de facto standard for the industry going forward. The competing Gen-Z standard was similarly absorbed into CXL earlier this year, and the CCIX standard has been left behind, with its major backers joining the CXL consortium in recent years. So even with the first CXL-enabled CPUs not shipping quite yet, at this point CXL has cleared the neighborhood, as it were, becoming the sole remaining server CPU interconnect standard for everything from accelerator I/O (CXL.io) to memory expansion over the PCIe bus.
Akasa AK-ENU3M2-07 USB 3.2 Gen 2x2 SSD Enclosure Review: 20Gbps with Excellent Thermals
Storage bridges have become a ubiquitous part of today's computing ecosystems. The bridges may be external or internal, with the former enabling a range of direct-attached storage (DAS) units. These may range from thumb drives using a UFD controller to full-blown RAID towers carrying Infiniband and Thunderbolt links. From a bus-powered DAS viewpoint, Thunderbolt has been restricted to premium devices, but the variants of USB 3.2 have emerged as mass-market high-performance alternatives. USB 3.2 Gen 2x2 enables the highest performance class (up to 20 Gbps) in USB devices without resorting to PCIe tunneling. The key challenges for enclosures and portable SSDs supporting 20Gbps speeds include handling power consumption and managing thermals. Today's review takes a look at the relevant performance characteristics of Akasa's AK-ENU3M2-07 - a USB 3.2 Gen 2x2 enclosure for M.2 NVMe SSDs.
The Intel Core i9-12900KS Review: The Best of Intel's Alder Lake, and the Hottest
As far as top-tier CPU SKUs go, Intel's Core i9-12900KS processor stands in noticeably sharp contrast to the launch of AMD's Ryzen 7 5800X3D processor with 96 MB of 3D V-Cache. Whereas AMD's over-the-top chip was positioned as the world's fastest gaming processor, for their fastest chip, Intel has kept their focus on trying to beat the competition across the board and across every workload.

As the final 12th Generation Core (Alder Lake) desktop offering from Intel, the Core i9-12900KS is unambiguously designed to be the powerful one. It's a "special edition" processor, meaning that it's a low-volume, high-priced chip aimed at customers who need or want the fastest thing possible, damn the price or the power consumption.

It's a strategy that Intel has employed a couple of times now – most notably with the Coffee Lake-generation i9-9900KS – and which has been relatively successful for Intel. And to be sure, the market for such a top-end chip is rather small, but the overall mindshare impact of having the fastest chip on the market is huge. So, with Intel looking to put some distance between itself and AMD's successful Ryzen 5000 family of chips, Intel has put together what is meant to be the final (and fastest) word in Alder Lake CPU performance, shipping a chip with peak (turbo) clockspeeds ramped up to 5.5GHz for its all-important performance cores.

For today's review we're putting Alder Lake's fastest to the test, both against Intel's other chips and AMD's flagships. Does this clockspeed-boosted 12900K stand out from the crowd? And are the tradeoffs involved in hitting 5.5GHz worth it for what Intel is positioning as the fastest processor in the world? Let's find out.
Intel To Wind Down Optane Memory Business - 3D XPoint Storage Tech Reaches Its End
It appears that the end may be in sight for Intel’s beleaguered Optane memory business. Tucked inside a brutal Q2’2022 earnings release for the company (more on that a bit later today) is a very curious statement in a section talking about non-GAAP adjustments: In Q2 2022, we initiated the winding down of our Intel Optane memory business. As well, Intel’s earnings report also notes that the company is taking a $559 Million “Optane inventory impairment” charge this quarter.

Taking these items at face value, it would seem that Intel is preparing to shut down its Optane memory business and development of associated 3D XPoint technology. To be sure, there is a high degree of nuance here around the Optane name and product lines – which is why we’re looking for clarification from Intel – as Intel has several Optane products, including “Optane memory,” “Optane persistent memory,” and “Optane SSDs.” Nonetheless, within Intel’s previous earnings releases and other financial documents, the complete Optane business unit has traditionally been referred to as their “Optane memory business,” so it would appear that Intel is indeed winding down the complete Optane business unit, and not just the Optane Memory product.

Update: 6:40pm ET
Following our request, Intel has sent out a short statement on the Optane wind-down. While not offering much in the way of further details on Intel's exit, it does confirm that Intel is indeed exiting the entire Optane business.
Silicon Motion Announces SM8366 PCIe 5.0 x4 NVMe Controller and MonTitan SSD Solutions Platform for Enterprise Storage
In the lead up to the Flash Memory Summit next week, many vendors have started announcing their new products. Today, Silicon Motion is unveiling their first set of enterprise-focused PCIe 5.0 NVMe SSD controllers. These controllers are embedded in a flexible turnkey solutions platform encompassing different EDSFF standards. A follow-up to the SM8266 introduced in November 2020, the SM8366 and SM8308 belong to Silicon Motion's third-generation enterprise NVMe controller family.

Silicon Motion's Third-Generation Enterprise SSD Controllers
                  SM8366                              SM8308
Host Interface    PCIe 5.0 x4 / x2 (dual-port x2+x2 or x1+x1 capable)
NAND Interface    16ch, 2400 MT/s                     8ch, 2400 MT/s
DRAM              2x 40-bit DDR4-3200 / DDR5-4800
Micron’s 232 Layer NAND Now Shipping: 1Tbit, 6-Plane Dies With 50% More I/O Bandwidth
Ahead of next week’s Flash Memory Summit, Micron this morning is announcing that their next-generation 232 layer NAND has begun shipping. The sixth generation of Micron’s 3D NAND technology, 232L is slated to offer both improved bandwidth and larger die sizes – most notably, introducing Micron’s first 1Tbit TLC NAND dies, which at this point are the densest in the industry. According to the company, the new NAND is already shipping to customers and in Crucial SSD products in limited volumes, with further volume ramping to take place later in the year.

Micron first announced their 232L NAND back in May during their Investor Day event, revealing that the NAND would be available this year, and that the company intended to ramp up production by the end of the year. And while that yield ramp is still ongoing, Micron’s Singapore fab is already capable of producing enough of the new NAND that Micron is comfortable in announcing it is shipping, albeit clearly in limited quantities.

From a technical perspective, Micron’s 232L NAND further builds upon the basic design elements Micron honed in their previous, 176L generation. So we’re once again looking at a string stacked design, with Micron using a pair of 116 layer decks, up from 88 layers in the previous generation. The 116 layer decks, in turn, are notable as this is the first time Micron has been able to produce a single deck over 100 layers, a feat previously limited to Samsung. This has allowed Micron to produce cutting-edge NAND with just two decks, something that may not be possible for much longer as companies push toward designs with over 300 total layers.

Micron’s NAND decks continue to be built with their charge-trap, CMOS under Array (CuA) architecture, which sees the bulk of the NAND’s logic placed under the NAND memory cells. Micron has long cited this as giving them an ongoing advantage in NAND density, and that’s once again on show for their 232L NAND.
According to the company, they’ve achieved a density of 14.6 Gbit/mm², which is about 43% denser than their 176L NAND. And, according to Micron, anywhere between 35% and 100% denser than competing TLC products.
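Those quoted figures imply a couple of numbers Micron did not state directly; a hedged back-of-the-envelope sketch (the derived values are our arithmetic, not Micron's disclosures):

```python
# Back-of-the-envelope figures implied by Micron's quoted numbers.
# The derived values are inferences, not Micron-disclosed specifications.
density_232l = 14.6          # Gbit/mm^2, quoted by Micron
gen_gain = 1.43              # "about 43% denser than their 176L NAND"

implied_176l = density_232l / gen_gain
print(f"Implied 176L density: {implied_176l:.1f} Gbit/mm^2")  # ~10.2

die_capacity_gbit = 1024     # 1 Tbit TLC die
implied_die_area = die_capacity_gbit / density_232l
print(f"Implied 1Tbit die area: {implied_die_area:.0f} mm^2")  # ~70
```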
Intel NUC11TNBi5 and Akasa Newton TN Fanless Case Review: Silencing the Tiger
Intel's Tiger Lake-based NUCs have been shipping for well over a year now. Supply chain challenges have been impacting availability of different models in different regions, but that has not prevented Intel's partners from delivering complementary products. Akasa, a well-known manufacturer of thermal solutions for computing systems targeting varied markets, has been maintaining a lineup of passively-cooled cases for Intel's NUCs since 2013. We had reviewed the Akasa Turing chassis for the Bean Canyon NUC a couple of years back, and had come away pleasantly surprised. Today's review presents results from our comprehensive evaluation of the Tiger Canyon NUC (NUC11TNKi5). It also describes the process for installing the kit's board (NUC11TNBi5) in the Akasa Newton TN chassis before analyzing its thermal performance characteristics. Read on to find out whether Akasa has managed to replicate the success of the Turing - Bean Canyon combination.
TSMC and ASML: Demand for Chips Remains Strong, But Getting Fab Tools Is Hard
TSMC's revenue this year is going to set an all-time record for the company, thanks to high demand for chips as well as increased prices that its customers are willing to pay for its services. While the company admits that demand for chips aimed at consumer devices is slowing, demand for 5G, AI, HPC, and automotive chips remains steady. In fact, TSMC's main problem at present is getting more fab equipment, as ASML and other tool firms are reporting that demand for semiconductor production tools significantly exceeds supply.

Last week TSMC posted its financial results for the second quarter of 2022. The company's revenue hit a record $18.2 billion, which was a year-over-year increase of 43.5%. The company revealed that while its sales were up 55% and 65.3% in April and May (respectively), its revenue in June was 'only' up 18.5% YoY, which indicates a slowdown in sales growth.

UPDATE 5:40 PM ET: As some of our colleagues have pointed out, TSMC does not use dry lithography for its leading-edge process technologies due to overlay capabilities of older tools as well as their performance. The story has been corrected accordingly.

Demand for Client Devices Slowing

"Due to the softening device momentum in smartphone, PC and consumer end market segments, we observe the supply chain is already taking action and expect inventory level to reduce throughout the second half 2022," said C.C. Wei, chief executive of TSMC, at the company's earnings conference call.

While we can only speculate on this, it looks like some of TSMC's customers reduced their orders for client-oriented chips after Russia started a full-scale war against Ukraine in late February. TSMC charges/recognizes revenue when it delivers chips/wafers to a client. The production cycle for chips on modern process technologies is well over 60 days, depending on complexity and the number of layers: N16 is ~60 days, N7 is 90+ days, and N5 is probably well over 100 days. These nodes account for 65% of TSMC's revenue.
So, if clients started to wind down orders in March and April as they anticipated increasing inflation and uncertainty among end users, the effect would first be seen in June, which is what can be observed in TSMC's reports.

TSMC admits that demand for client-oriented chips is softening, but demand for chips designed to support 5G, AI, and HPC applications still exceeds the company's ability to supply.

"While we observe softness in consumer end market segments, other end market segments such as data center and automotive-related remain steady," said Wei. "We are able to reallocate our capacity to support these areas. Despite the ongoing inventory correction, our customers' demand continues to exceed our ability to supply. We expect our capacity to remain tight throughout 2022 and our full year growth to be mid-30% in U.S. dollar terms."

Advanced Nodes to Remain Growth Drivers, Expansions Getting Tougher

Over half of TSMC's revenue (51%) comes from chips made using its advanced fabrication technologies (N7 and thinner nodes), which is not particularly surprising as TSMC is one of only two contract foundries that offer such sophisticated manufacturing processes to clients.

These technologies will be among TSMC's main growth drivers in the coming years, especially as more customers adopt N7 and more advanced technologies. But more N7/N6 and N5/N4 orders mean that TSMC will need to build more capacity for these nodes, as well as more capacity for N3 and subsequent nodes, which is why the company estimated that its CapEx this year would reach $40 billion – $44 billion.

"With the successful ramp of N5, N4P, N4X, and the upcoming ramp-up of N3, we will expand our customer product portfolio and increase our addressable market," said the head of TSMC. "The macroeconomic uncertainty may persist into 2023, our technology leadership will continue to advance and support our growth.
[…] We believe the fundamental structural growth trajectory in the long-term semiconductor demand remains firmly in place."

The world's No. 1 contract maker of semiconductors also urges customers to migrate from older nodes to 28nm and specialty technologies, as this will ensure capacity availability (TSMC plans to expand capacity for 28 nm and specialty nodes by 50% by 2025) and enable denser designs, potentially with more features.

Building additional leading-edge, 28 nm, and specialty capacity not only requires massive investments; TSMC also needs to procure additional semiconductor production tools. Whether TSMC is building capacity for its brand-new N3 node or for 28nm/specialty technologies, the company needs all kinds of lithography machines. An N3-capable fab needs dry litho tools, immersion litho scanners, and EUV-capable equipment. Without the required number of dry and immersion scanners, etching, deposition, resist removal, inspection, and many other tools (which do not necessarily come from ASML), an advanced EUV machine on its own will be useless. Meanwhile, lithography tools are not the only machinery that a fab needs.

Apparently, demand for fab equipment is so high that TSMC will not be able to spend its CapEx budget this year, and some purchases related to advanced (N7 and thinner) and mature nodes will be delayed into 2023. As a result, TSMC's CapEx this year will be at the lower end of the company's prediction (around $40 billion) not because it does not want to invest, but because it cannot invest in tools that are not available.

"Our suppliers have been facing greater challenges in their supply chains, which are extending tool delivery lead times for both advanced and mature nodes," said Wei.
"As a result, we expect some of our CapEx this year to be pushed out into 2023."

ASML Confirms Record Quarterly Bookings

Meanwhile, ASML, the world's largest producer of lithography tools, this week posted Q2 2022 revenue of €5.431 billion, a 53% increase year-over-year. During the second quarter, the company supplied (recognized revenue on) a total of 91 new lithography systems (up from 59 in Q2 2021), 12 of which were EUV systems (up from 3 in Q2 2021).

What is perhaps more important is that ASML's net bookings for new systems totaled €8.461 billion during the quarter, meaning the company's bookings exceeded its quarterly sales. ASML's backlog now totals €33 billion and spans multiple years to come, which is yet another confirmation that it is extremely hard for companies like TSMC to get new tools. The backlog for DUV machines now stands at around 600 units, and the order lead time for a new DUV scanner is about two years; the backlog for EUV tools is well over 100 machines. Meanwhile, ASML says the purchase-order lead time metric is not entirely relevant, since the company faces supply chain and production capacity constraints of its own: both ASML and its partners have to build additional capacity (which takes time) before recently ordered tools can be delivered.

For the whole of 2022, ASML expects to ship 55 extreme ultraviolet (EUV) lithography scanners but recognize revenue on only 40 EUV systems, valued at €6.40 billion (€160/$140 million per machine). The other 15 EUV machines will be so-called fast shipments – a shipment process that skips some of the testing at ASML's factory; final testing and formal acceptance are instead performed at the customer site, which is why revenue recognition gets deferred. The company also intends to supply 240 deep ultraviolet (DUV) litho tools this year. 
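The revenue-recognition split described above is simple arithmetic; here is a quick sketch using ASML's stated figures. Note the deferred-revenue estimate assumes a uniform average selling price across all EUV systems, which is an approximation:

```python
# ASML 2022 EUV guidance, using the figures quoted above.
euv_shipments_2022 = 55          # total EUV scanners expected to ship
euv_recognized_units = 40        # systems with revenue recognized in 2022
euv_recognized_revenue = 6.40e9  # EUR, for those 40 systems

# Average selling price per recognized EUV system (~EUR 160M).
asp_per_euv = euv_recognized_revenue / euv_recognized_units

# Fast shipments defer revenue recognition into later quarters;
# assuming the same ASP, the deferred amount is roughly:
fast_shipments = euv_shipments_2022 - euv_recognized_units
deferred_revenue = fast_shipments * asp_per_euv

print(f"ASP per EUV system: EUR {asp_per_euv / 1e6:.0f}M")
print(f"Fast shipments: {fast_shipments}, deferring ~EUR {deferred_revenue / 1e9:.1f}B")
```

In other words, roughly €2.4 billion of EUV revenue slides into later quarters purely because of how fast shipments are accounted for.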
ASML expects its production capacity to total 60 EUV scanners and 375+ DUV tools in 2023.

Summary

While demand for chips aimed at client/consumer devices is softening due to rising inflation and geopolitical uncertainty, global megatrends like 5G, AI, HPC, and autonomous vehicles are still in place, and these require loads of advanced system-on-chips, specialty processors, and not-so-advanced parts like sensors. Therefore, TSMC is confident of strong demand for chips in the coming years.

But there is a problem with meeting that demand, as TSMC is not the only company expanding its manufacturing capacity. ASML's backlog now includes over 100 EUV scanners and around 600 DUV scanners, and it will take years for the company to ship these machines. As a result, TSMC is struggling to obtain the tools it needs to build additional capacity. It is unclear whether the company has enough capacity to meet all of the potential demand from its largest customers on N3, N4, and N5 nodes (Apple, MediaTek, AMD, NVIDIA, etc.), but, ultimately, tool shortages will affect all of its process technologies.
Samsung Portable SSD T7 Shield Review: Flagship PSSD Gets IP65 Avatar
Samsung's lineup of portable SSDs has enjoyed tremendous success, starting with the T1 back in 2015. The company has been regularly updating their PSSD lineup with the evolution of different high-speed interfaces as well as NAND flash technology. Earlier this year, Samsung launched the Portable SSD T7 Shield, a follow-up to the Portable SSD T7 (Touch) introduced in early 2020. Samsung is mainly advertising the ruggedness / IP65 rating of the T7 Shield as a selling point over the regular Portable SSD T7 and T7 Touch. Today's review takes a look at the performance and value proposition of the Portable SSD T7 Shield. Our detailed analysis reveals another trick that Samsung has up their sleeve, which makes the T7 Shield a worthy successor to the Portable SSD T7 family.
The Montech Century Gold 650W PSU Review: The New Kid Starts Out Strong
In today's review, we are taking a look at the Century Gold 650W, an 80Plus Gold certified power supply by Montech. Montech is a Taiwanese company that was founded just six years ago, yet it has already managed to penetrate the international PC power & cooling market with some surprisingly premium products.
Western Digital 22TB WD Gold, Red Pro, and Purple HDDs Hit Retail
Western Digital's 'What's Next' event back in May 2022 saw the announcement of its 22TB platform based on ePMR and OptiNAND (with ArmorCache). At the event, WD indicated that the 22TB 10-platter drives would make their market appearance under different product categories: Ultrastar DC HC570 for data centers and enterprises, WD Gold for enterprises, SMEs, and SMBs, WD Red Pro for SMB and SME NAS systems, and WD Purple Pro for surveillance network video recorders.

Today, WD is announcing retail availability of these models along with technical details. All drives have a 3.5" form-factor and sport a SATA 6 Gbps interface. The drives are equipped with a 512MB cache and have a 7200 rpm spindle speed. The acoustics rating for all of them is the same too: 20 dBA at idle and 32 dBA for the average seek.

Western Digital 2022 22TB Hard Drives - Metrics of Interest

                                      WD Gold      WD Red Pro   WD Purple Pro
Rated Workload (TB/yr)                550          300          550
Max. Sustained Transfer Rate (MBps)   291          265          265
Rated Load / Unload Cycles            600K         600K         600K
Unrecoverable Read Errors             1 in 10E15   1 in 10E13   1 in 10E15
MTBF (Hours)                          2.5M         1M           2.5M
Power (Idle / Active) (W)             5.7 / 9.3    3.4 / 6.8    5.6 / 6.9
Warranty (Years)                      5            5            5
Pricing                               $600         $600         $600

Based on the above specifications, it is clear that Western Digital has taken the ePMR / OptiNAND / triple-stage actuator platform and tweaked the firmware to cater to different market segments. The Red Pro CMR drive comes with NASware 3.0 firmware, which includes features such as adjusting parameters based on the integrated multi-axis shock sensor, maintaining balance using dual-plane balance control technology, and TLER (Time-Limited Error Recovery) configuration for compatibility with various NAS systems. As is customary for the Red Pro family, the new 22TB drives are recommended for use in systems with up to 24 bays.

The WD Purple Pro drives, meant for network video recorders, have firmware tweaked for continuous sequential writes to multiple drive regions simultaneously. 
WD indicates the drive is capable of handling up to 64 concurrent HD stream recordings at 3.25 Mbps, and up to 32 streams for machine learning / object detection tasks. Similar to NASware 3.0 in the Red Pro, the custom firmware has its own moniker: AllFrame AI.

The WD Gold is the flagship in today's retail launch announcement. Its firmware is tweaked for the highest possible performance, without focusing on the active and idle power numbers. The ArmorCache feature is specifically turned on in the WD Gold 22TB model only. (July 27, 2022 Update: WD reached out to inform us that the ArmorCache feature is turned on in all three models covered here, as well as the Ultrastar DC 22TB and 26TB models, and that the oversight in their product briefs would be fixed soon.)

Interestingly, all three drives have been launched with the same MSRP of $600. Even though the WD Red Pro has a much lower workload rating, peak performance number, and reliability metric (MTBF), it appears that WD believes the market will be willing to pay a premium for its lower power consumption numbers.

The overall push with these high-capacity hard drives is one of TCO. The ability to reduce the physical footprint of storage servers for the same capacity can result in significant savings in allied IT costs related to power, cooling, and rack space. Moving forward, WD can hopefully address the plateauing of access speeds relative to capacity (using dual actuators or some similar technology). That would make high-capacity HDDs attractive to home consumers / prosumers who may be rightly worried about long RAID rebuild times.
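The rebuild-time concern in the last paragraph is easy to quantify. Even in the best case, where a drive sustains its rated maximum transfer rate end to end (a generous assumption; real rebuilds run slower), a single full-disk pass over 22TB takes the better part of a day:

```python
# Best-case full-disk pass time for the 22TB drives, using the rated
# maximum sustained transfer rates from the table above.
CAPACITY_BYTES = 22e12  # HDD vendors use decimal terabytes

rates_mbps = {"WD Gold": 291, "WD Red Pro": 265, "WD Purple Pro": 265}

hours = {}
for model, rate in rates_mbps.items():
    seconds = CAPACITY_BYTES / (rate * 1e6)  # bytes / (bytes per second)
    hours[model] = seconds / 3600
    print(f"{model}: ~{hours[model]:.1f} hours for one full-disk pass")
```

A real RAID rebuild adds parity computation and contention from ongoing array traffic on top of this floor, which is why multi-day rebuilds on arrays of drives this large are a realistic worry.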
Intel Atlas Canyon (NUC11ATKPE) and GEEKOM MiniAir 11 UCFF PCs Review: Desktop Jasper Lake Impresses
Intel's low-power Tremont microarchitecture has powered a range of products - from the short-lived Lakefield, to Elkhart Lake in the embedded space, and finally, Jasper Lake in the client computing area. A steady stream of notebooks and motherboards / mini-PCs based on Jasper Lake has become available since the introduction of the series in early 2021. Given their pricing, ultra-compact form-factor (UCFF) machines based on Jasper Lake offer attractive entry-level options in the NUC domain. With a range of SKUs specified for power consumption numbers ranging from 4.8W up to 25W, the product series lends itself to designs that can be either actively or passively cooled. Last week, we took a look at two fanless Jasper Lake PCs. Today's review looks at two actively cooled Jasper Lake UCFF systems - Intel's flagship Atlas Canyon NUC (NUC11ATKPE) and GEEKOM's MiniAir 11.
Kingston DataTraveler Max UFD Series Review: New Type-A Thumb Drive Retains NVMe Performance
Kingston's new products in the portable flash-based external storage space have met with good market reception over the last year or so. Two products in particular - the Kingston XS2000 and the DataTraveler Max - continue to remain unique in the market with no other comparable products being widely available. The Kingston DataTraveler Max USB flash drive (UFD) was introduced in August 2021. It advertised 1GBps-class speeds, low power consumption, and a Type-C interface - all in a thumb drive form-factor. Today, Kingston is expanding the DT Max series with three new drives - all sporting a USB 3.2 Gen 2 Type-A interface. Read on for a detailed look at the performance and characteristics of the new DTMAXA drives.
Jasper Lake Fanless Showdown: ECS LIVA Z3 and ZOTAC ZBOX CI331 nano UCFF PCs Review
Intel's Jasper Lake series of products based on the Tremont microarchitecture was launched in early 2021. Since then, we have seen a steady stream of notebooks and motherboards / mini-PCs based on those processors introduced in the market. Ultra-compact form-factor (UCFF) machines based on the Atom series offer attractive entry-level options in the NUC domain, and their low-power nature also lends itself to passively cooled designs.

Today, we are looking at two different fanless Jasper Lake UCFF PCs - the ECS LIVA Z3 and the ZOTAC ZBOX CI331 nano. Our investigation into the performance and behavior of the two PCs revealed some interesting insights into fanless system designs. Read on for a detailed look at what 6W Jasper Lake SKUs can deliver for traditional PC workloads, along with some guidelines on what to look for in passively cooled systems.
Samsung MUF-256DA USB-C Flash Drive Review: Thumb-Sized Performance Consistency
Portable SSDs have seen great demand over the last few years. Advancements in flash technology and controllers have resulted in compact drives delivering blistering speeds. These advancements have also had their effect on the ubiquitous thumb drive. Samsung's MUF-256DA is a compact USB Type-C flash drive available in capacities ranging from 64GB to 256GB. Today's review attempts to figure out how this USB flash drive (UFD) with a native flash controller stacks up against other native-controller offerings in the market.
The AmazonBasics Aurora Vista 1500 UPS Review: Passable Power
AmazonBasics is a private label owned by Amazon. The subsidiary was founded back in 2009 and initially offered only basic products, such as cables and office consumables, but more and more products are being added under the AmazonBasics label every day. Today, Amazon retails thousands of products under the AmazonBasics label, ranging from paperclips to living room sets. The only common point amongst all of these products is that they are very aggressively priced, usually selling for significantly less than any competitive product from an established brand.

In this review, we are taking a look at a very popular low-cost UPS that Amazon distributes under the AmazonBasics label, the AmazonBasics Aurora Vista 1500VA. Much like its name suggests, it is a very basic design with minimal features, yet it is very aggressively priced. Taking the renowned Amazon customer service into account, it seems like an amazing deal for this kind of output.
The AMD Ryzen 7 5800X3D Review: 96 MB of L3 3D V-Cache Designed For Gamers
The level of competition in the desktop CPU market has rarely been as intense as it has been over the last couple of years. When AMD brought its Ryzen processors to market, it forced Intel to reply, and both have consistently battled in multiple areas, including core count, IPC performance, frequency, and ultimate performance. The constant race to improve products, stay ahead of the competition, and meet customers' changing needs has also sent the two companies off the beaten path at times, developing even wilder technologies in search of that competitive edge.

In the case of AMD, one such development effort has culminated in 3D V-Cache packaging technology, which stacks a layer of L3 cache on top of the existing CCD's L3 cache. While additional cache is beneficial to performance, large quantities of SRAM are, well, large, so AMD has been working on how to place more L3 cache on a CPU chiplet without blowing out the die size altogether. The end result of that work is the stacked V-Cache technology, which allows the additional cache to be separately fabbed and then carefully placed on top of a chip to be used as part of a processor.

For the consumer market, AMD's first V-Cache equipped product is the Ryzen 7 5800X3D. Pitched as the fastest gaming processor on the market today, AMD's unique chip offers eight cores/sixteen threads of processing power and a whopping 96 MB of L3 cache onboard. Essentially building on top of the already established Ryzen 7 5800X processor, AMD's aim is for the additional L3 cache on the 5800X3D to take gaming performance to the next level – all for around $100 more than the 5800X.

With AMD's new gaming chip in hand, we've put the Ryzen 7 5800X3D through our CPU suite and gaming tests to see if it is as good as AMD claims it is.
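The headline cache figure is simply the CCD's native L3 plus the stacked die, per AMD's disclosures:

```python
# Where the 5800X3D's 96 MB of L3 comes from (figures per AMD).
native_l3_mb = 32   # the same L3 found on a standard Ryzen 7 5800X CCD
stacked_l3_mb = 64  # the 3D V-Cache die bonded on top of the CCD

total_l3_mb = native_l3_mb + stacked_l3_mb
print(f"Total L3: {total_l3_mb} MB")  # 32 + 64 = 96

# Shared across the chip's eight cores:
cores = 8
print(f"Shared L3 per core: {total_l3_mb / cores} MB")
```

That works out to triple the L3 of the vanilla 5800X, which is the whole premise behind the chip's gaming pitch: more of the working set stays on-die.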
Samsung Starts 3nm Production: The Gate-All-Around (GAAFET) Era Begins
Capping off a multi-year development process, Samsung's foundry group sends word this morning that the company has officially kicked off production on its initial 3nm chip production line. Samsung's 3nm process is the industry's first commercial production node using gate-all-around transistor (GAAFET) technology, marking a major milestone for the field of silicon lithography and potentially giving Samsung a major boost in its efforts to compete with TSMC.

The relatively spartan announcement from Samsung, which comes on the final day of Q2, confirms that Samsung has begun production of chips on a GAAFET-enabled 3nm production line. The company is not disclosing the specific version of the node used here, but based on previous Samsung roadmaps, this is undoubtedly Samsung's initial 3GAE process – essentially, Samsung's earliest process node within a family. According to Samsung, the line will initially be used to produce chips for "high performance, low power computing", with mobile processors to come later. Samsung's early process nodes are traditionally reserved for the company's internal use, so while Samsung isn't announcing any specific 3nm chips today, it's only a matter of time until we see a 3nm SoC announced from Samsung LSI.

Samsung has, for the most part, been quiet about its progress on 3nm/GAAFET this year. The last significant news we heard from the company on the matter was several months ago at its Foundry Forum event, where the company reiterated plans to get 3GAE into production by the end of 2022. Given the previous silence and the cutting-edge nature of the technology, there had been more than a little concern that 3GAE would be delayed past 2022 – adding to delays that had already pushed the tech out of its original 2021 launch window – but with today's announcement Samsung seems to want to put that to rest.

With that said, the devil is in the details with these announcements, especially as to what's said versus not said. 
Taken literally, today's announcement from Samsung notably does not include any mention of "high volume" manufacturing, which is the traditional milestone for when a process node is ready for commercial use. So by merely saying 3nm is in production, Samsung's announcement leaves the company with a fair bit of wiggle room with regards to just how many chips it's capable of producing – and at what yields. The company was producing test chips back in 2021, so the matter is more nuanced than just firing up the fab; the line between PR and productization is fuzzy, to say the least.

Still, today's announcement is a major moment for Samsung, which has been working on 3nm/GAAFET technology since before 2019, when it initially announced the technology. Samsung's specific flavor of GAA transistor technology is Multi Bridge Channel FET (MBCFET), which is a nanosheet-based implementation. Nanosheet-based FETs are extremely customizable, and the width of the nanosheet is a key metric in defining the power and performance characteristics: the greater the width, the higher the performance (at higher power). As a result, transistor designs that focus on low power can use smaller nanosheets, while logic that requires higher performance can go for wider sheets.

Along with today's production announcement, Samsung has also offered some updated size and performance figures comparing 3GAE to older nodes. Officially, 3GAE can offer 45% reduced power consumption or 23% improved performance compared to Samsung's 5nm process (the company doesn't state which flavor), with an overall reduction in feature size of 16%. These figures are notably different from Samsung's previous (2019) figures, which compared the tech to Samsung's 7LPP node. 
Given the change in baselines, it's not clear at this point whether 3GAE is living up to Samsung's initial claims, or if the company has had to back off a bit for the initial version of its 3nm technology.

What is clear, however, is that Samsung has more significant improvements in mind for the second iteration of 3nm, which we know as 3GAP(lus). According to today's press release, Samsung is expecting a 50% power reduction or a 30% performance improvement versus the same 5nm baseline, with a much greater 35% area reduction. Today's announcement doesn't offer a date for 3GAP, but per previous roadmaps, it is expected to land around a year after 3GAE. 3GAP is also when we expect to see Samsung open the door to outside customers, though given the harsh competitive environment, nothing should be taken for granted.
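A note on reading these figures: area reductions translate into density gains nonlinearly, since density scales with the inverse of the remaining area. A quick conversion of Samsung's stated numbers:

```python
# Converting Samsung's quoted area reductions into relative logic
# density: density scales as the inverse of remaining area.
def density_gain(area_reduction_pct):
    remaining_area = 1.0 - area_reduction_pct / 100.0
    return 1.0 / remaining_area

# 3GAE: 16% area reduction vs. Samsung's 5nm baseline -> ~1.19x density
print(f"3GAE vs 5nm: {density_gain(16):.2f}x density")
# 3GAP: 35% area reduction vs. the same baseline -> ~1.54x density
print(f"3GAP vs 5nm: {density_gain(35):.2f}x density")
```

So while 16% and 35% sound like a roughly 2x difference in scaling, in density terms the gap between 3GAE and 3GAP is closer to 1.19x versus 1.54x over the 5nm baseline.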
ASRock Releases Raptor Lake BIOS Updates For Its 600 Series Motherboards
While Intel has yet to officially announce its next (13th) generation of Core processors, this isn't stopping motherboard manufacturers from releasing products for them. Always eager to slide ahead of the competition and spur on new sales towards the latter half of a platform's lifecycle, mobo makers are already releasing BIOSes that support Intel's future chips – parts which, officially speaking, don't even exist (yet).

Leading this charge is ASRock, which today has released a wave of new BIOSes for its 600 series motherboards designed to support Intel's next generation of processors. The wave of BIOS updates covers the vast majority of its first-generation LGA1700 motherboards, including Z690, H670, B660, and H610 models.

At present, Intel has not officially announced its Raptor Lake processors, which are set to be Intel's next generation of desktop processors. That makes this an interesting move from ASRock: users with an ASRock 600-series motherboard can update their firmware now and not have to worry about doing so once the next-gen chips arrive.

The update can be installed through ASRock's EZ Update utility or, on models with BIOS Flashback, from a USB stick with the firmware file on it. However, it remains to be seen whether pre-existing 600-series boards will be updated at the retail/distribution level before the launch of Raptor Lake.
TSMC: N2 To Start With Just GAAFETs, Add Backside Power Delivery Later
When TSMC initially introduced its N2 (2 nm class) process technology earlier this month, the company outlined how the new node would be built on the back of two new cutting-edge fab techniques: gate-all-around transistors and backside power rails. But, as we've since learned from last week's EU symposium, TSMC's plans are a bit more nuanced than first announced. Unlike some of its rivals, TSMC will not be implementing both technologies in the initial version of its N2 node. Instead, the first iteration of N2 will only implement gate-all-around transistors, with backside power delivery to come in a later version of the node.

So far, TSMC has mentioned two distinctive features of N2: nanosheet gate-all-around (GAA) transistors and backside power rails. GAA transistors have a key advantage over FinFETs: they solve many of the challenges associated with leakage current, since a GAAFET's channels are horizontal and surrounded by gates on all four sides. Backside power rails, meanwhile, enable improved power delivery to the transistors, which increases performance and lowers power consumption.

But, as it turns out, TSMC is not planning to start with both nanosheet GAA transistors and backside power rails in the initial generation of its N2 process technology. As disclosed by the company last week at its EU symposium, the first generation of N2 will only feature gate-all-around transistors. Backside power delivery, on the other hand, will come later with more advanced implementations of N2.

At this point the company hasn't said much as to why it's not rolling out backside power delivery as part of its initial N2 node. 
But, in discussing the bifurcation, TSMC has noted that backside power delivery will ultimately add additional process steps, which the company is seemingly looking to avoid on its first try with GAAFETs.

The lack of backside power delivery in the original version of the N2 fabrication technology perhaps explains the rather moderate performance improvement of N2 compared to the N3E node. While a 10% to 15% performance improvement at the same power and complexity does not seem impressive for high-performance computing (CPUs, accelerators, etc.), a 25% to 30% power reduction at the same speed and complexity is very good for mobile applications.

Advertised PPA Improvements of New Process Technologies
TSMC to Customers: It's Time to Stop Using Older Nodes and Move to 28nm
We tend to discuss leading-edge nodes and the most advanced chips made using them, but there are thousands of chip designs developed years ago that are made using what are now mature process technologies, and these are still widely employed by the industry. On the execution side of matters, those chips still do their jobs as perfectly as the day the first chip was fabbed, which is why product manufacturers keep building more and more devices using them. But on the manufacturing side of matters there's a hard bottleneck to further growth: all of the capacity for old nodes that will ever be built has been built – and no more is coming. As a result, TSMC has recently begun strongly encouraging its customers on its oldest (and least dense) nodes to migrate some of their mature designs to its 28 nm-class process technologies.

Nowadays TSMC earns around 25% of its revenue by making hundreds of millions of chips using 40 nm and larger nodes. For other foundries, the share of revenue earned on mature process technologies is higher: UMC gets 80% of its revenue from 40 nm and larger nodes, whereas 81.4% of SMIC's revenue comes from outdated processes. Mature nodes are cheap, have high yields, and offer sufficient performance for simplistic devices like power management ICs (PMICs). But the cheap wafer prices for these nodes come from the fact that they were once, long ago, leading-edge nodes themselves, and that their construction costs were paid off by the high prices that a cutting-edge process can fetch. Which is to say that there isn't the profitability (or even the equipment) to build new capacity for such old nodes.

This is why TSMC's plan to expand production capacity for mature and specialized nodes by 50% is focused on 28nm-capable fabs. As the final (viable) generation of TSMC's classic, pre-FinFET manufacturing processes, 28nm is being positioned as the new sweet spot for producing simple, low-cost chips. 
And, in an effort to consolidate production of these chips around fewer and more widely available/expandable production lines, TSMC would like to get customers using old nodes onto the 28nm generation.

"We are not currently [expanding capacity for] the 40 nm node," said Kevin Zhang, senior vice president of business development at TSMC. "You build a fab, [the] fab will not come online [until] two or three years from now. So, you really need to think about where the future product is going, not where the product is today."

While TSMC's 28nm nodes are still subject to the same general cost trends as chip fabs on the whole – in that they're more complex and expensive on a per-wafer basis than even older nodes – TSMC is looking to convert customers over to 28nm by balancing that out against the much greater number of chips per wafer the smaller node affords. Therefore, while companies will have to pay more per wafer, they also stand to get more in terms of total chips. And none of this takes into account the potential ancillary benefits of a newer node, such as reduced power consumption and potentially greater clockspeed (performance) headroom.

"So, lots of customers' product today is at, let's say, 40 nm or even older, 65 nm," said Zhang. "They are moving to [more] advanced nodes. 20/28 nm is going to be a very important node to support future specialty [technologies]. […] We are working with customers to accelerate [their transition]. […] I think the customer [is] going to get a benefit – an economic benefit, a scaling benefit, you have a better power consumption. But they've already got a chip that works, [so] why [move]? Then you could [ask] why we do advanced technology. 
[It's] just the nature [of things]: you go to the next node, you get better performance and better power, and overall you get a system-level benefit."

In addition to multiple 28nm nodes designed for various client applications, TSMC is expanding its lineup of specialty 28nm and 22nm (22ULP, 22ULL) process technologies to address a variety of chip types that currently rely on various outdated technologies. As with the overall shift to 28nm, TSMC is looking to corral customers into using the newer, higher-density process nodes. And, if not 28nm/22nm, then customers also have the option of transitioning to even more capable FinFET-based nodes, which are part of TSMC's N16/N12 family (e.g., N12e for IoT).
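The pay-more-per-wafer, pay-less-per-die economics Zhang describes can be sketched with toy numbers. The die sizes, the 2x density factor, and the wafer-cost ratio below are illustrative assumptions, not TSMC figures; the point is only that a higher wafer price can still yield a lower cost per chip:

```python
import math

# Illustrative sketch of the 40nm -> 28nm migration economics.
# All specific numbers here are assumptions for the sake of argument.
WAFER_DIAMETER_MM = 300

def gross_dies_per_wafer(die_area_mm2):
    """Crude estimate ignoring edge loss and scribe lines."""
    wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2
    return wafer_area / die_area_mm2

die_40nm_mm2 = 50.0               # hypothetical PMIC-class die on 40nm
die_28nm_mm2 = die_40nm_mm2 / 2   # assume ~2x logic density at 28nm

dies_40 = gross_dies_per_wafer(die_40nm_mm2)
dies_28 = gross_dies_per_wafer(die_28nm_mm2)

# Even if a 28nm wafer costs, say, 1.5x a 40nm wafer (assumed ratio),
# the cost per die can still fall:
wafer_cost_ratio = 1.5
cost_per_die_ratio = wafer_cost_ratio / (dies_28 / dies_40)
print(f"28nm dies per wafer: {dies_28 / dies_40:.1f}x the 40nm count")
print(f"28nm cost per die: {cost_per_die_ratio:.2f}x the 40nm cost")
```

Under these assumed numbers, a 50% pricier wafer still delivers each die at 75% of the old cost, before counting the power and performance benefits of the newer node.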
As HPC Chip Sizes Grow, So Does the Need For 1kW+ Chip Cooling
One trend in the high performance computing (HPC) space that is becoming increasingly clear is that power consumption per chip and per rack unit is not going to stop at the limits of air cooling. As supercomputers and other high performance systems have already hit – and in some cases exceeded – these limits, power requirements and power densities have continued to scale up. And based on the news from TSMC's recent annual technology symposium, we should expect this trend to continue as TSMC lays the groundwork for even denser chip configurations.

The problem at hand is not a new one: transistor power consumption isn't scaling down nearly as quickly as transistor sizes. And as chipmakers are not about to leave performance on the table (and fail to deliver semi-annual increases for their customers), in the HPC space power per transistor is quickly growing. As an additional wrinkle, chiplets are paving the way towards constructing chips with even more silicon than traditional reticle limits allow, which is good for performance and latency, but even more problematic for cooling.

Enabling this kind of silicon and power growth have been modern technologies like TSMC's CoWoS and InFO, which allow chipmakers to build integrated multi-chiplet system-in-packages (SiPs) with as much as double the amount of silicon otherwise allowed by TSMC's reticle limits. By 2024, advancements in TSMC's CoWoS packaging technology will enable building even larger multi-chiplet SiPs, with TSMC anticipating stitching together upwards of four reticle-sized chiplets. This will enable tremendous levels of complexity (over 300 billion transistors per SiP is a possibility that TSMC and its partners are looking at) and performance, but naturally at the cost of formidable power consumption and heat generation.

Already, flagship products like NVIDIA's H100 accelerator module require upwards of 700W of power for peak performance. 
So the prospect of multiple GH100-sized chiplets on a single product is raising eyebrows – and power budgets. TSMC envisions that several years down the road there will be multi-chiplet SiPs with a power consumption of around 1000W or even higher, creating a cooling challenge.

At 700W, the H100 already requires liquid cooling, and the story is much the same for the chiplet-based Ponte Vecchio from Intel and AMD's Instinct MI250X. But even traditional liquid cooling has its limits. By the time chips reach a cumulative 1 kW, TSMC envisions that datacenters will need to use immersion liquid cooling systems for such extreme AI and HPC processors. Immersion liquid cooling, in turn, will require rearchitecting datacenters themselves, which will be a major change in design and a major challenge in continuity.

The short-term challenges aside, once datacenters are set up for immersion liquid cooling, they will be ready for even hotter chips. Liquid immersion cooling has a lot of potential for handling large cooling loads, which is one reason why Intel is investing heavily in this technology in an attempt to make it more mainstream.

In addition to immersion liquid cooling, there is another technology that can be used to cool down ultra-hot chips: on-chip water cooling. Last year TSMC revealed that it had experimented with on-chip water cooling and said that even 2.6 kW SiPs could be cooled using this technology. But of course, on-chip water cooling is an extremely expensive technology in itself, which will drive the costs of those extreme AI and HPC solutions to unprecedented levels.

Nonetheless, while the future isn't set in stone, it has seemingly been cast in silicon. TSMC's chipmaking clients have customers willing to pay top dollar for these ultra-high-performance solutions (think operators of hyperscale cloud datacenters), even with the high costs and technical complexity that entails. 
Which, to bring things back to where we started, is why TSMC has been developing its CoWoS and InFO packaging processes in the first place – because there are customers ready and eager to break the reticle limit via chiplet technology. We're already seeing some of this today with products like Cerebras' massive Wafer Scale Engine processor, and via large chiplets, TSMC is preparing to make smaller (but still reticle-breaking) designs more accessible to its wider customer base.

Such extreme requirements for performance, packaging, and cooling not only push producers of semiconductors, servers, and cooling systems to their limits, but also require modifications to cloud datacenters. If massive SiPs for AI and HPC workloads do indeed become widespread, cloud datacenters will look completely different in the coming years.
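To put the rack-level implications in perspective, here is a rough heat-load sketch. Only the ~700W (H100 today) and ~1kW (future SiP) figures come from the discussion above; the accelerator count, server count, and host overhead are hypothetical round numbers chosen for illustration:

```python
# Rough rack-level heat math with illustrative assumptions: the
# accelerator and server counts and the host overhead are hypothetical;
# only the ~700W and ~1kW per-module figures come from the article.
accelerators_per_server = 8
servers_per_rack = 4
host_overhead_w = 2000  # assumed CPUs, NICs, fans, etc. per server

def rack_heat_kw(accel_power_w):
    per_server_w = accelerators_per_server * accel_power_w + host_overhead_w
    return servers_per_rack * per_server_w / 1000.0

print(f"Today (~700W modules): {rack_heat_kw(700):.1f} kW per rack")
print(f"Future (~1kW SiPs):    {rack_heat_kw(1000):.1f} kW per rack")
```

Even with these conservative assumptions, a single rack lands in the tens of kilowatts, well beyond what conventional air-cooled datacenter rows are typically provisioned for, which is exactly why immersion cooling keeps coming up.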
The Gigabyte UD1000GM PG5 1000W PSU Review: Prelude to ATX 3.0
In today's review, we are taking a look at the first-ever PSU released with the new 12VHPWR connector, the GIGABYTE UD1000GM PG5. Although the unit is not ATX v3.0 compliant, GIGABYTE upgraded one of their currently available platforms to provide for a single 600W video card connector in an effort to entice early adopters.
AMD Updates Ryzen Embedded Series, R2000 Series With up to Four Cores and Eight Threads
One area of AMD's portfolio that perhaps doesn't garner the same level of attention as its desktop, mobile, and server products is its embedded business. In early 2020, AMD unveiled its Ryzen Embedded R1000 platform for the commercial and industrial sectors and the ever-growing IoT market, with low-powered processors designed for low-profile systems to satisfy the mid-range of the market. At Embedded World 2022 in Nuremberg, Germany, AMD has announced its next generation of Ryzen Embedded SoCs, the R2000 series. Offering four different SKUs ranging from 2C/4T up to 4C/8T – double the core count of the previous generation – AMD claims that the R2000 series features up to 81% higher CPU and graphics performance. Compared to the previous generation (R1000), the AMD Ryzen Embedded R2000 series now has double the core count, with a generational swing from Zen to the more efficient and higher-performing Zen+ cores. All four SKUs announced feature a configurable TDP, with the top SKU, the R2544, operating at between 35 and 54 W. More in line with the lower power target of these SoCs, the bottom SKU (R2312) has a configurable TDP of between 12 and 35 W.
AMD Ryzen Embedded R2000-Series APUs
Lenovo ThinkStation P360 Ultra Melds Desktop Alder Lake and NVIDIA Professional Graphics
Over the last decade or so, advancements in CPU and GPU architectures have combined extremely well with the relentless march of Moore's Law on the silicon front. Together, these have resulted in hand-held devices with more computing power than the huge, power-hungry machines from the turn of the century. On the desktop front, small form-factor (SFF) machines are now becoming a viable option for demanding professional use-cases. CAD, modeling, and simulation capabilities that required big iron servers or massive tower workstations just a few years back can now be served by compact systems. Workstation notebooks integrating top-end mobile CPUs and professional graphics solutions from AMD (FirePro) or NVIDIA (Quadro Mobile / RTX Professional) have been around since the early 2000s. The advent of UCFF and SFF PCs has slowly brought these notebook platforms to the desktop. Zotac was one of the early players in this market, and continues to introduce new products in the Zotac ZBOX Q Series. The company has two distinct lines - one with a notebook CPU and a professional mobile GPU (with a 2.65L volume), and another with a workstation CPU (Xeons up to 80W) and a professional mobile GPU (with a 5.85L volume). Today, Lenovo is also entering the SFF workstation PC market with its ThinkStation P360 Ultra models. The company already has tiny workstations that do not include support for discrete GPUs, and the new Ultra systems fix that. Featuring desktop Alder Lake with an Intel W680 chipset (allowing for an ECC RAM option), these systems also optionally support discrete graphics cards - up to the NVIDIA RTX A5000 Mobile. Four SODIMM slots allow for up to 128GB of ECC or non-ECC DDR5-4000 memory. Two PCIe Gen 4 x4 M.2 slots and a SATA III port behind a 2.5" drive slot are also available, with RAID possible for the M.2 SSDs.
Depending on the choice of CPU and GPU, Lenovo plans to equip the system with one of three 89% efficiency external power adapters: 170W, 230W, or 300W.
Gallery: Lenovo ThinkStation P360 Ultra
The front panel has a USB 3.2 Gen 2 Type-A port and two Thunderbolt 4 Type-C ports, as well as a combo audio jack. The vanilla iGPU version has four USB 3.2 Gen 2 Type-A ports, three DisplayPort 1.4 ports, and two RJ-45 LAN ports (1x 2.5 GbE and 1x 1 GbE). On the WLAN front, the non-vPro option is the Wi-Fi 6 AX201, while the vPro one is the Wi-Fi 6E AX211. In addition to the PCIe 4.0 x16 expansion slot for the discrete GPU, the system also includes support for a PCIe 3.0 x4 card such as the Intel I350-T2 dual-port Gigabit Ethernet Adapter. With dimensions of 87mm x 223mm x 202mm, the whole package comes in at 3.92L. In order to cram this functionality into such a chassis, Lenovo has employed a custom dual-sided motherboard with a unique cooling solution, as indicated in the teardown picture above. A blower fan is placed above the two M.2 slots to ensure that the PCIe Gen 4 M.2 SSDs can operate without any thermal issues. As is usual for Lenovo's business / professional-oriented PCs, these systems are tested to military-grade requirements and come with ISV certifications from companies such as Autodesk, ANSYS, Dassault, PTC, Siemens, etc. Pricing starts at $1299 for the base model without a discrete GPU. The ThinkStation P360 Ultra joins Lenovo's already-announced P360 Tiny and P360 Tower models. The P360 Tiny doesn't support powerful discrete GPUs (capable of handling workstation workloads), while the P360 Tower goes overboard with support for 3.5" drives, up to four PCIe expansion cards, and a 750W PSU. Most workstation use-cases can get by without all those bells and whistles. Additional options for the end consumer are always welcome, and that is where the P360 Ultra comes into play.
TSMC to Expand Capacity for Mature and Specialty Nodes by 50%
TSMC this afternoon disclosed that it will expand its production capacity for mature and specialized nodes by about 50% by 2025. The plan includes building numerous new fabs in Taiwan, Japan, and China. The move will further intensify competition between TSMC and contract chipmakers such as GlobalFoundries, UMC, and SMIC. When we talk about silicon lithography here at AnandTech, we mostly cover leading-edge nodes used to produce advanced CPUs, GPUs, and mobile SoCs, as these are the devices that drive progress forward. But there are hundreds of device types made on mature or specialized process technologies that are used alongside those sophisticated processors, or that power emerging smart devices; these have a significant impact on our daily lives and have gained importance in recent years. Demand for various computing and smart devices has exploded so much in recent years that it provoked a global chip supply crisis, which in turn has impacted the automotive, consumer electronics, PC, and numerous adjacent industries. Modern smartphones, smart home appliances, and PCs already use dozens of chips and sensors, and the number (and complexity) of these chips is only increasing. These parts use more advanced specialty nodes, which is one of the reasons why companies like TSMC will have to expand their production capacity for otherwise "old" nodes to meet growing demand in the coming years. But there is another market that is about to explode: smart cars. Cars already use hundreds of chips, and the semiconductor content of vehicles is growing. There are estimates that several years down the road the number of chips per car will be about 1,500 units – and someone will have to make them.
This is why TSMC rivals GlobalFoundries and SMIC have been increasing investments in new capacity over the last couple of years. TSMC, which has one of the largest CapEx budgets in the semiconductor industry (challenged only by Samsung's), has in recent years been relatively quiet about its mature and specialty node production plans. But at its 2022 TSMC Technology Symposium, the company formally outlined those plans. The company is investing in four new facilities for mature and specialty nodes:
TSMC Unveils N2 Process Node: Nanosheet-based GAAFETs Bring Significant Benefits In 2025
At its 2022 Technology Symposium, TSMC formally unveiled its N2 (2 nm class) fabrication technology, which is slated to go into production sometime in 2025 and will be TSMC's first node to use nanosheet-based gate-all-around field-effect transistors (GAAFETs). The new node will enable chip designers to significantly reduce the power consumption of their products, though the speed and transistor density improvements seem considerably less tangible. TSMC's N2 is a brand-new platform that extensively uses EUV lithography and introduces GAAFETs (which TSMC calls nanosheet transistors) as well as backside power delivery. The new gate-all-around transistor structure promises well-publicized advantages, such as greatly reduced leakage current (now that the gate surrounds all four sides of the channel) as well as the ability to adjust channel width to increase performance or lower power consumption. As for the backside power rail, it is designed to enable better power delivery to transistors, offering a solution to the problem of increasing resistances in the back-end-of-line (BEOL). The new power delivery scheme is slated to increase transistor performance and lower power consumption. From a feature set standpoint, TSMC's N2 looks like a very promising technology. As for actual numbers, TSMC promises that N2 will allow chip designers to increase performance by 10% to 15% at the same power and transistor count, or reduce power consumption at the same frequency and complexity by 25% to 30%, all while increasing chip density by over 1.1x compared to the N3E node.
Advertised PPA Improvements of New Process Technologies
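Those advertised numbers can be folded into a single perf-per-watt comparison. A minimal sketch, taking TSMC's ranges at their upper ends; the function and corner names are ours, not TSMC's:

```python
# Folding TSMC's advertised N2-vs-N3E numbers into perf-per-watt.
# Function and variable names are illustrative; the percentages are
# TSMC's advertised ranges, taken at their upper ends.

def perf_per_watt(rel_perf: float, rel_power: float) -> float:
    """Relative efficiency given relative performance and power vs. N3E."""
    return rel_perf / rel_power

# Spend the node gain on speed: +15% performance at the same power.
speed_corner = perf_per_watt(1.15, 1.00)   # 1.15x efficiency
# Spend it on power: the same performance at 30% less power.
power_corner = perf_per_watt(1.00, 0.70)   # ~1.43x efficiency
```

The asymmetry is why the power-reduction figure is the headline number: trading all of the gain for lower power yields a notably larger efficiency improvement than trading it for speed.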
TSMC Readies Five 3nm Process Technologies, Adds FinFlex For Design Flexibility
Taiwan Semiconductor Manufacturing Co. on Thursday kicked off its 2022 TSMC Technology Symposium, where the company traditionally shares its process technology roadmaps as well as its future expansion plans. One of the key things that TSMC is announcing today is its leading-edge nodes belonging to the N3 (3 nm class) and N2 (2 nm class) families, which will be used to make advanced CPUs, GPUs, and SoCs in the coming years.
N3: Five Nodes Over the Next Three Years
As fabrication processes get more complex, their pathfinding, research, and development times get stretched out as well, so we no longer see a brand-new node emerging every two years from TSMC and other foundries. With N3, TSMC's node introduction cadence is going to expand to around 2.5 years, and with N2, it will stretch to around three years. This means that TSMC will need to offer enhanced versions of N3 in order to meet the needs of customers who are still looking for performance-per-watt improvements and transistor density bumps every year or so. Another reason why TSMC and its customers need multiple versions of N3 is that the foundry's N2 relies on all-new gate-all-around field-effect transistors (GAA FETs) implemented using nanosheets, which are expected to come with higher costs, new design methodologies, new IP, and many other changes. While developers of bleeding-edge chips will be quick to jump to N2, many of TSMC's more rank-and-file customers will stick with various N3 technologies for years to come. At its TSMC Technology Symposium 2022, the foundry talked about four N3-derived fabrication processes (for a total of five 3 nm-class nodes) — N3E, N3P, N3S, and N3X — set to be introduced over the coming years. These N3 variants are slated to deliver improved process windows, higher performance, increased transistor densities, and augmented voltages for ultra-high-performance applications.
All of these technologies will support FinFlex, a TSMC "secret sauce" feature that greatly enhances design flexibility and allows chip designers to precisely optimize performance, power consumption, and costs.
Advertised PPA Improvements of New Process Technologies
The ASUS ROG Maximus Z690 Hero Motherboard Review: A Solid Option For Alder Lake
Over the last six months since Intel launched its 12th Gen Core series of processors, we've looked at several Alder Lake desktop CPUs and seen how competitive they are from top to bottom - not just in performance but price too. To harness the power of Alder Lake, however, there are many options in terms of Z690 motherboards, and today we're taking a look at one of ASUS's more premium models, the ROG Maximus Z690 Hero. They say hard times don't create heroes, but ASUS has been creating Hero boards for many years with good results. Equipped with plenty of top-tier features such as Thunderbolt 4, Intel's Wi-Fi 6E CNVi, and support for up to DDR5-6400 memory, the board has enough to make it a solid choice for gamers and enthusiasts. It's time to see how the Z690 Hero stacks up against the competition and whether it can sparkle in a very competitive LGA1700 market.
Intel 4 Process Node In Detail: 2x Density Scaling, 20% Improved Performance
Taking place this week is the IEEE’s annual VLSI Symposium, one of the industry’s major events for disclosing and discussing new chip manufacturing techniques. One of the most anticipated presentations scheduled this year is from Intel, who is at the show to outline the physical and performance characteristics of their upcoming Intel 4 process, which will be used for products set to be released in 2023. The development of the Intel 4 process represents a critical milestone for Intel, as it’s the first Intel process to incorporate EUV, and it’s the first process to move past their troubled 10nm node – making it Intel’s first chance to get back on track to re-attaining fab supremacy. Intel is scheduled to deliver their Intel 4 presentation on Tuesday, in a talk/paper entitled “Intel 4 CMOS Technology Featuring Advanced FinFET Transistors optimized for High Density and High-Performance Computing”. But this morning, ahead of the show, they are publishing the paper and all of its relevant figures, giving us our first look at what kind of geometries Intel is attaining, as well as some more information about the materials being used.
AMD's Desktop CPU Roadmap: 2024 Brings Zen 5-based "Granite Ridge"
As part of AMD's Financial Analyst Day 2022, the company has provided us with a look at its desktop client CPU roadmap through 2024. As we already know, AMD's latest 5 nm chips, the Ryzen 7000 family, are expected to launch in Fall 2022 (later this year), but the big news is that AMD has confirmed its Zen 5 architecture will be coming to client desktops sometime before the end of 2024 as AMD's "Granite Ridge" chips. At Computex 2022, during AMD's keynote presented by CEO Dr. Lisa Su, AMD unveiled its Zen 4 core architecture using TSMC's 5 nm process node. Despite not announcing specific SKUs during that event, AMD did unveil some expected performance metrics for the desktop Ryzen 7000 release. These include 1 MB of L2 cache per core, double the L2 cache per core of Zen 3, and a 15%+ uplift in single-threaded performance.
AMD 3D V-Cache Coming to Ryzen 7000 and Beyond
One key thing to note is that AMD's updated client CPU roadmap highlights more of what to expect from its Zen 4 core, which is built on TSMC's 5 nm node. AMD is expecting 8-10% IPC gains over Zen 3, on top of their previously announced clockspeed gains. As a result, the company is expecting single-threaded performance to improve by at least 15%, and by even more for multi-threaded workloads. Meanwhile, AMD's 3D V-Cache packaging technology will also come to client desktop Zen 4. AMD is holding any further information close to its chest, but the current roadmap makes it clear that we should, at a minimum, expect a successor to the Ryzen 7 5800X3D.
AMD Zen 5 For Client Desktop: Granite Ridge
The updated AMD client CPU roadmap through 2024 also gives us a time frame for when we can expect its next-generation Zen 5 cores.
Built on what AMD is terming an "advanced node" (so either 4 nm or 3 nm), Zen 5 for client desktops will be Granite Ridge. At two years out, AMD isn't offering any further details than what they've said about the overall Zen 5 architecture thus far. So while we know that Zen 5 will involve a significant reworking of AMD's CPU architecture with a focus on the front end and issue width, AMD isn't sharing anything about the Granite Ridge family or related platform in particular. So sockets, chipsets, etc. are all up in the air. But for now, AMD's full focus is on the Zen 4-based Ryzen 7000 family. Set to launch this fall, 2022 should end on a high note for the company.
Updated AMD Notebook Roadmap: Zen 4 on 4nm in 2023, Zen 5 By End of 2024
As we've come to expect during AMD's Financial Analyst Day (FAD), we usually get small announcements about big things coming in the future. This includes updated product roadmaps for different segments such as desktop, server, graphics, and mobile. In AMD's latest notebook roadmap, stretching out to 2024, AMD has unveiled that its mobile Zen 4 core (Phoenix Point) will be available sometime in 2023, with Zen 5 for mobile on an unspecified node expected to land sometime by the end of 2024. The updated AMD notebook roadmap through 2024 highlights two already-available mobile processors, the Zen 3-based Ryzen 5000 series with Vega integrated graphics and the latest Ryzen 6000 series based on Zen 3+ with the newest RDNA 2 mobile graphics capabilities. But there's more due to be announced starting in 2023.
From Rembrandt Rises a Phoenix: Zen 4 Mobile AKA Phoenix Point
What's new on the updated AMD mobile roadmap is the successor to Rembrandt (Ryzen 6000), which AMD has codenamed Phoenix Point. Phoenix Point will be based on AMD's upcoming Zen 4 core architecture and will be built using TSMC's 4 nm process node. According to the roadmap, AMD's Zen 4 Phoenix Point mobile processors will use an Artificial Intelligence Engine (AIE) and AMD's upcoming next-generation RDNA 3 integrated graphics.
Also Announced: Zen 5 Mobile Codenamed Strix Point
Also on the AMD notebook roadmap is the announcement of its Zen 5-based platform on an unspecified manufacturing process, codenamed Strix Point. While details on Strix Point are minimal, AMD does state that Strix Point will use AMD's unreleased RDNA 3+ graphics technology, likely a refreshed and perhaps more power-efficient RDNA 3 variant. Also listed within the roadmap slide alongside Phoenix Point and Strix Point is the Artificial Intelligence Engine (AIE), a type of accelerator more commonly found in mobile phones.
The AI Engine, or AIE, will allow AMD to spec its products based on tiling with an adaptive interconnect, though the company hasn't unveiled much more about how it intends to incorporate AIE into its notebook portfolio. We know that it is part of AMD's XDNA adaptive architecture IP, which comes from its acquisition of Xilinx. We will likely learn more about AMD's Zen 4-based Phoenix Point in the coming months, as a release sometime in 2023 is expected. As for Strix Point, which will use the as-yet-unannounced Zen 5 microarchitecture, we're likely to hear more sometime next year.
AMD Announces Genoa-X: 4th Gen EPYC with Up to 96 Zen 4 Cores and 1GB L3 V-Cache
As AMD makes strides in snatching market share with its high-performance x86 processor designs in the server market, it has announced some of its upcoming 4th generation EPYC families, expected sometime in 2023. The focus here is its technical computing and database-focused family codenamed Genoa-X, the direct successor to AMD's Milan-X EPYC line-up, following on from Genoa, which launches later this year in Q4. Essentially the V-Cache-enabled version of AMD's Genoa EPYC CPUs, Genoa-X will include up to 96 Zen 4 cores and 1GB (or more) of L3 cache per socket. We know that Genoa-X will use the latest SP5 socket (LGA6096) and will feature twelve memory channels, just like the regular Genoa platform, which is set to debut in Q4 2022. This means that the new SP5 platform will support Genoa, Genoa-X, Bergamo, and Siena, although it is unclear whether users upgrading from Genoa to Genoa-X will need a new LGA6096 motherboard or whether the upgrade will be enabled with a firmware update. As the successor to Milan-X, Genoa-X is designed to slot into the same user segment, with AMD pitching it at customers who have workloads that uniquely benefit from oversized L3 caches – that is, workloads that can predominantly fit in those caches. That includes technical computing workloads (CAM, etc.) as well as databases. We expect to hear more about Genoa-X and any specific features it will bring to the 4th Gen EPYC platform in the future. AMD Genoa-X is scheduled for release sometime in 2023.
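As a rough sanity check on how "1GB (or more) of L3 per socket" could add up, here is a hedged sketch that assumes Genoa-X mirrors Milan-X's per-CCD stacking; AMD has not confirmed Genoa-X's exact cache layout:

```python
# How "1 GB or more of L3 per socket" can add up. The per-CCD breakdown
# below mirrors Milan-X (32 MB base L3 + 64 MB stacked V-Cache per CCD)
# and is an assumption; AMD has not confirmed Genoa-X's exact layout.

BASE_L3_PER_CCD_MB = 32      # on-die L3 per 8-core Zen 4 CCD (assumed)
STACKED_L3_PER_CCD_MB = 64   # 3D V-Cache die per CCD (assumed)
CCDS_PER_SOCKET = 12         # 12 CCDs x 8 cores = 96 Zen 4 cores

total_l3_mb = CCDS_PER_SOCKET * (BASE_L3_PER_CCD_MB + STACKED_L3_PER_CCD_MB)
print(total_l3_mb)           # 1152 MB, i.e. just over 1 GB per socket
```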
AMD Unveils Siena, A Lower Cost EPYC Family With Up to 64 Zen 4 Cores
As part of AMD's Financial Analyst Day 2022, AMD unveiled an updated server CPU roadmap up to and including 2024. Nestled within that roadmap is the Siena series: much like Genoa-X, the Siena family from the 4th gen EPYC series is expected to land sometime in 2023, following Genoa (due Q4 2022) and Bergamo (due 1H 2023). While roadmaps only give a glimpse of what is expected, they are used internally to plot and plan specific product groups and keep them on track for release. The AMD Siena family of 4th generation EPYC processors is slightly different from Genoa and Genoa-X, because Siena is primarily designed for the edge and telecommunications industries. Siena will feature up to 64 Zen 4 cores, and AMD states it will be a lower-cost platform in comparison to Genoa, Genoa-X, and Bergamo, all of which will be based on AMD's Zen 4 core architecture and TSMC's 5 nm and even more highly optimized 4 nm process nodes. AMD's Siena family of EPYC 7004 products will likely be compatible with the SP5 platform that launches alongside Genoa in Q4 2022. SP5 features support for twelve channels of DDR5 memory and PCIe 5.0 lanes, but it is unclear how AMD intends to package its Siena family in terms of die layout, or whether it will feature a cut-down feature set to make it more affordable. We expect AMD to unveil more about Siena soon, and AMD states that Siena will be coming sometime in 2023.
AMD Updated EPYC Roadmap: 5th Gen EPYC "Turin" Announced, Coming by End of 2024
As part of AMD's Financial Analyst Day 2022, AMD has provided updates to its server CPU roadmap going into 2024. The biggest announcement is that AMD is already planning the next-next-gen core for its successful EPYC family: the 5th generation EPYC series, which has been assigned the codename Turin. Other key announcements include various segmentations of its expected EPYC 7004 portfolio, including Genoa, Bergamo, Genoa-X, and Siena. Following the launch of AMD's 2nd generation EPYC products (codenamed Rome) back in August 2019 and the release of the updated EPYC 7003 processors, including both Milan and Milan-X, the next generation of EPYC 7004 processors, codenamed Genoa, is expected to launch in Q4 2022. Genoa will feature up to 96 Zen 4 cores based on TSMC's 5 nm process node, with the new SP5 platform bringing support for 12-channel memory, PCIe 5.0, and memory expansion with Compute Express Link (CXL). While Genoa will benefit from up to 96 Zen 4 cores and will be released towards the end of the year in Q4, AMD also announced Bergamo, which will be available in the first half of 2023, with Genoa-X and Siena also arriving sometime in 2023. AMD's Genoa-X will feature up to 96 Zen 4 cores based on TSMC's 5 nm manufacturing node, with up to 1 GB of L3 cache per socket. AMD's Siena will predominantly be targeted as a lower-cost platform and will feature up to 64 Zen 4 cores with optimized performance per watt, making it more affordable for the edge and telco markets.
AMD Unveils 5th Gen EPYC (Turin)
Perhaps the most significant announcement on AMD's server CPU roadmap going into 2024 is the plan to bring its 5th generation of EPYC processors, codenamed Turin, to market sometime before the end of 2024.
As expected, AMD hasn't shed much light on the Turin family of processors, but we expect it to be named the EPYC 7005 platform, following AMD's current EPYC naming scheme. We know that the Zen 5 cores will come in a 4 nm version (likely TSMC, but not confirmed) and a 3 nm version, as highlighted in AMD's CPU core roadmap through 2024. AMD also states there will be three variants of the Zen 5 core in its CPU roadmap: Zen 5, Zen 5 with 3D V-Cache, and Zen 5c. From the latest roadmap highlighting AMD's EPYC products, we know that AMD's 5th generation of EPYC processors is expected to launch sometime before the end of 2024.
AMD: Combining CDNA 3 and Zen 4 for MI300 Data Center APU in 2023
Alongside their Zen CPU architecture and RDNA client GPU architecture updates, AMD this afternoon is also updating their roadmap for their CDNA server GPU architecture and related Instinct products. And while CPUs and client GPUs are arguably on a rather straightforward path for the next two years, AMD intends to shake up its server GPU offerings in a big way. Let’s start with AMD’s server GPU architectural roadmap. Following AMD’s current CDNA 2 architecture, which is being used in the MI200 series Instinct accelerators, will be CDNA 3. And unlike AMD’s other roadmaps, the company isn’t offering a two-year view here. Instead, the server GPU roadmap only goes out one year – to 2023 – with AMD’s next server GPU architecture set to launch next year. Our first look at CDNA 3 comes with quite a bit of detail. With a 2023 launch, AMD isn’t holding back on information quite as much as they do elsewhere. As a result, they’re divulging information on everything from the architecture to some basic details about one of the products CDNA 3 will go into – a data center APU made of CPU and GPU chiplets.
AMD’s 2022-2024 Client GPU Roadmap: RDNA 3 This Year, RDNA 4 Lands in 2024
Among the slew of announcements from AMD today around their 2022 Financial Analyst Day, the company is offering an update to their client GPU (RDNA) roadmap. Like the company’s Zen CPU architecture roadmap, AMD has been keeping a two-year horizon here, essentially showing what’s out, what’s about to come out, and what’s going to be coming out in a year or two. That means today’s update gives us our first glance at what will follow RDNA 3, which itself was announced back in 2020. With AMD riding a wave of success with their current RDNA 2 architecture products (the Radeon RX 6000 family), the company is looking to keep up that momentum as they shift towards the launch of products based on their forthcoming RDNA 3 architecture. And while today’s roadmap update from AMD is a high-level one, it nonetheless offers us the most detailed look yet into what AMD has in store for their Radeon products later this year.
AMD RDNA 3/Navi 3X GPU Update: 50% Better Perf-Per-Watt, Using Chiplets For First Time
Continuing our coverage of AMD's 2022 Financial Analyst Day, we have the matter of AMD's forthcoming RDNA 3 GPU architecture and the Navi 3X GPUs that will be built upon it. Up until now, AMD has been fairly quiet about what to expect from RDNA 3, but as RDNA 2 approaches its second birthday and the first RDNA 3 products are slated to launch this year, AMD is offering some of the first significant details on the GPU architecture. First and foremost, let’s talk about performance. The Navi 3X family, to be built on a 5nm process (TSMC’s, no doubt), is targeting a greater-than-50% performance-per-watt uplift versus RDNA 2. This is a significant uplift, and similar to what AMD saw moving from RDNA (1) to RDNA 2. And while such a claim from AMD would have seemed ostentatious two years ago, RDNA 2 has given AMD’s GPU teams a significant amount of renewed credibility. Thankfully for AMD, unlike the 1-to-2 transition, they don’t have to find a way to come up with a 50% uplift based on architecture and DVFS optimizations alone. The 5nm process means that Navi 3X is getting a full node’s improvement over the TSMC N7/N6-based Navi 2X GPU family. As a result, AMD will see a significant efficiency improvement from that alone. But with that said, these days a single node jump on its own can’t deliver a 50% perf-per-watt improvement (RIP Dennard scaling). So there are several architecture improvements planned for RDNA 3. These include the next generation of AMD’s on-die Infinity Cache, and what AMD is terming an optimized graphics pipeline. According to the company, the GPU compute unit (CU) is also being rearchitected, though to what degree remains to be seen. But the biggest news of all on this front is that, confirming a year’s worth of rumors and several patent applications, AMD will be using chiplets with RDNA 3.
To what degree, AMD isn’t saying, but the implication is that at least one GPU tier (as we know it) is moving from a monolithic GPU to a chiplet-style design using multiple smaller chips. Chiplets are in some respects the holy grail of GPU construction, because they give GPU designers options for scaling up GPUs past today’s die size (reticle) and yield limits. That said, it’s also a holy grail because the immense amount of data that must be passed between different parts of a GPU (on the order of terabytes per second) is very hard to move between chips – and very necessary to move if you want a multi-chip GPU to be able to present itself as a single device. We’ve seen Apple tackle the task by essentially bridging two M1 SoCs together, but it’s never been done with a high-performance GPU before. Notably, AMD calls this an “advanced” chiplet design. That moniker tends to get thrown around when a chip is being packaged using some kind of advanced, high-density interconnect such as EMIB, which differentiates it from simpler designs such as Zen 2/3 chiplets, which merely route their signals through the organic packaging without any enhanced technologies. So while we’re eagerly awaiting further details of what AMD is doing here, it wouldn’t at all be surprising to find out that AMD is using a form of Local Si Interconnect (LSI) technology (such as the Elevated Fanout Bridge used for the MI200 family of accelerators) to directly and closely bridge two RDNA 3 chiplets. At this point, AMD isn’t going into any more detail on the architecture or the Navi 3X GPUs. Today is a teaser and roadmap update for the analyst market, not an announcement of what we can only assume will be the Radeon RX 7000 family of video cards. Nonetheless, with the first RDNA 3 products slated to launch later this year, a more formal announcement cannot be too far away. So we’re looking forward to hearing more about what stands to be a major shake-up in the nature of GPU design and fabrication.
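As a back-of-envelope illustration of what a greater-than-50% perf-per-watt uplift buys, here is a short sketch; the 300 W board power is a hypothetical RDNA 2 baseline of our own choosing, not an announced spec:

```python
# What a ">50% perf-per-watt" generational uplift implies, as rough math.
# The 300 W board power is a hypothetical RDNA 2 baseline, not a spec.

def next_gen_power(prev_power_w: float, perf_target: float,
                   ppw_gain: float = 1.5) -> float:
    """Board power needed for `perf_target` x the old card's performance,
    given a perf-per-watt multiplier of `ppw_gain`."""
    return prev_power_w * perf_target / ppw_gain

print(next_gen_power(300, 1.0))  # matching performance at 200.0 W
print(next_gen_power(300, 1.5))  # 1.5x the performance in the same 300.0 W
```

In other words, the same uplift can be cashed out as a much cooler card at equal performance, or as 50% more performance in an unchanged power envelope.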
AMD Zen Architecture Roadmap: Zen 5 in 2024 With All-New Microarchitecture
Today is AMD’s Financial Analyst Day, the company’s semi-annual, analyst-focused gathering. While the primary purpose of the event is for AMD to reach out to investors, analysts, and others to demonstrate the performance of the company and why they should continue to invest in the company, FAD has also become AMD’s de-facto product roadmap event. After all, how can you wisely invest in AMD if you don’t know what’s coming next? As a result, the half-day series of presentations is full of small nuggets of information about products and plans across the company. Everything here is high-level – don’t expect AMD to hand out the Zen 4 transistor floorplan – but it’s easily our best look at AMD’s product plans for the next couple of years. Kicking off FAD 2022 with what’s always AMD’s most interesting update is the Zen architecture roadmap. The cornerstone of AMD’s recovery and resurgence into a competitive and capable player in the x86 processor space, the Zen architecture is the basis of everything from AMD’s smallest embedded CPUs to their largest enterprise chips. So what’s coming down the pipe over the next couple of years is a very big deal for AMD, and the industry as a whole.
AMD Zen 4 Update: 8% to 10% IPC Uplift, 25% More Perf-Per-Watt, V-Cache Chips Coming
As part of AMD’s 2022 Financial Analyst Day, the company is offering a short, high-level update on their forthcoming Zen 4 CPU architecture. This information is being divulged as part of the company’s larger Zen architecture roadmap, which today is being extended to announce Zen 5 for 2024. The biggest news here is that AMD is, for the first time, disclosing their IPC expectations for the new architecture. Addressing some post-Computex questions around IPC expectations, AMD is revealing that they expect Zen 4 to offer an 8-10% IPC uplift over Zen 3. The initial Computex announcement and demo seemed to imply that most of AMD’s performance gains were from clockspeed improvements, so AMD is working to respond to that without showing too much of their hand months out from the product launches. This makes up a good chunk of AMD’s overall >15% expected improvement in single-threaded performance, which was previously disclosed at Computex and essentially remains unchanged. That said, AMD is strongly emphasizing the “greater than” aspect of that performance estimate. At this point AMD can’t get overly specific since they haven’t locked down final clockspeeds, but as we’ve seen with their Computex demos, peak clockspeeds of 5.5GHz (or more) are currently on the table for Zen 4. AMD is also talking a bit more about power and efficiency expectations today. At this point, AMD is projecting a >25% increase in performance-per-watt for Zen 4 over Zen 3 (based on desktop 16-core chips running Cinebench). Meanwhile, the overall performance improvement stands at >35%, no doubt taking advantage of both the greater per-thread performance of the architecture and AMD’s previously disclosed higher TDPs (which are especially handy for uncorking more performance in MT workloads). And yes, these are terrible graphs. Finally, AMD is confirming that there will be V-Cache-equipped Zen 4 SKUs within their processor lineup.
No specific SKUs are being announced today, but AMD is reiterating that V-Cache was not just a one-off experiment for the company, and that they will be employing the die stacked L3 cache on some Zen 4 chips as well.
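For readers who want to sanity-check how these figures fit together: single-threaded performance scales roughly as IPC × frequency, so fractional gains combine multiplicatively, and the gap between the >35% performance claim and the >25% perf-per-watt claim implies a power increase. A minimal Python sketch of that arithmetic, assuming a 4.9 GHz Zen 3 boost clock (the Ryzen 9 5950X) as the baseline, which is our assumption rather than an AMD-stated figure:

```python
# Sketch of how AMD's disclosed Zen 4 figures relate to one another.
# perf ≈ IPC × frequency, so fractional gains combine multiplicatively;
# likewise, overall perf gain vs. perf-per-watt gain implies a power change.
# The 4.9 GHz Zen 3 baseline (Ryzen 9 5950X boost) is an assumption.

def combined_uplift(ipc_gain: float, clock_gain: float) -> float:
    """Fractional single-thread perf gain from fractional IPC and clock gains."""
    return (1 + ipc_gain) * (1 + clock_gain) - 1

def implied_power_change(perf_gain: float, perf_per_watt_gain: float) -> float:
    """Fractional power change implied by a perf gain and a perf/W gain."""
    return (1 + perf_gain) / (1 + perf_per_watt_gain) - 1

ipc = 0.09             # midpoint of AMD's 8-10% IPC claim
clock = 5.5 / 4.9 - 1  # demoed 5.5 GHz vs. assumed 4.9 GHz Zen 3 baseline
print(f"single-thread uplift: {combined_uplift(ipc, clock):.1%}")

# >35% MT perf at >25% perf/W implies roughly 8% more power drawn
print(f"implied power change: {implied_power_change(0.35, 0.25):.1%}")
```

Under these assumptions the combined uplift lands comfortably above AMD's >15% figure, which is consistent with the company's emphasis on the "greater than" part of that estimate.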
Supermicro SYS-E100-12T-H Review: Fanless Tiger Lake for Embedded Applications
Compact passively-cooled systems find application in a wide variety of market segments, including industrial automation, IoT gateways, and digital signage. These systems are meant to be deployed for 24x7 operation in challenging environmental conditions. Supermicro has a number of systems targeting this market under their Embedded/IoT category. Their SuperServer E100 product line uses motherboards in the 3.5" SBC form factor. In particular, the E100-12T lineup is built around embedded Tiger Lake-U SoCs to create powerful, yet compact and fanless, systems. Today's review takes a look at the top end of this line: the SYS-E100-12T-H, based on the Intel Core i7-1185GRE embedded processor.
Apple Announces M2 SoC: Apple Silicon for Macs Updated For 2022
Though primarily a software-focused event, Apple’s WWDC keynotes are often the stage for an interesting hardware announcement or two as well, and this year Apple did not disappoint. At the company’s biggest Mac-related keynote of the year, Apple unveiled the M2, their second-generation Apple Silicon SoC for the Mac (and iPad) platform. Touting modest performance gains over the original M1 SoC of around 18% for multithreaded CPU workloads and 35% in peak GPU workloads, the M2 is Apple’s first chance to iterate on their Mac SoC to incorporate updated technologies, as well as to refresh their lower-tier laptops in the face of recent updates from their competitors.

With the M1 Ultra, the king of the M1 SoCs, not even 3 months behind them, Apple hasn’t wasted any time in preparing their second generation of Apple Silicon SoCs. To that end, the company has prepared what is the first (and undoubtedly not the last) of a new family of SoCs with the Apple Silicon M2. Designed to replace the M1 within Apple’s product lineup, the M2 SoC is initially rolling out in refreshes of the 13-inch MacBook Pro, as well as the MacBook Air, which is getting a pretty hefty redesign of its own in the process.

The launch of the M2 also gives us our first real glimpse into how Apple is going to handle updates within the Apple Silicon ecosystem. With the iPhone family, Apple has kept to a yearly cadence for A-series SoC updates; conversely, the traditional PC ecosystem has lately been on something closer to a two-year cadence. The M2 splits this down the middle, arriving about a year and a half after the M1, though in terms of architecture it looks closer to a yearly A-series SoC update.