AnandTech

Link http://www.anandtech.com/
Feed http://anandtech.com/rss/
Copyright Copyright 2017 AnandTech
Updated 2017-03-28 00:15
Intel Launches Optane Memory M.2 Cache SSDs For Consumer Market
Last week, Intel officially launched their first Optane product, the SSD DC P4800X enterprise drive. This week, 3D XPoint memory comes to the client and consumer market in the form of the Intel Optane Memory product, a low-capacity M.2 NVMe SSD intended for use as a cache drive for systems using a mechanical hard drive for primary storage.

The Intel Optane Memory SSD uses one or two single-die packages of 3D XPoint non-volatile memory to provide capacities of 16GB or 32GB. The controller gets away with a much smaller package than most SSDs (especially PCIe SSDs) since it only supports two PCIe 3.0 lanes and does not have an external DRAM interface. Because only two PCIe lanes are used by the drive, it is keyed to support M.2 type B and M slots. This keying is usually used for M.2 SATA SSDs, while M.2 PCIe SSDs typically use only the M key position to support four PCIe lanes. The Optane Memory SSD will not function in an M.2 slot that provides only SATA connectivity. Contrary to some early leaks, the Optane Memory SSD uses the M.2 2280 card size instead of one of the shorter lengths. This makes for one of the least-crowded M.2 PCBs on the market, even with all of the components on the top side.

The very low capacity of the Optane Memory drives limits their usability as traditional SSDs. Intel intends for the drive to be used with the caching capabilities of their Rapid Storage Technology drivers. Intel first introduced SSD caching with their Smart Response Technology in 2011. The basics of Optane Memory caching are mostly the same, but under the hood Intel has tweaked the caching algorithms to better suit 3D XPoint memory's performance and flexibility advantages over flash memory. Optane Memory caching is currently only supported on Windows 10 64-bit, and only for the boot volume. Booting from a cached volume requires that the chipset's storage controller be in RAID mode rather than AHCI mode, so that the cache drive will not be accessible as a standard NVMe drive and is instead remapped to only be accessible to Intel's drivers through the storage controller. This NVMe remapping feature was first added to the Skylake-generation 100-series chipsets, but boot firmware support will only be found on Kaby Lake-generation 200-series motherboards, and Intel's drivers are expected to only permit Optane Memory caching with Kaby Lake processors.

Intel Optane Memory Specifications
  Capacity:                 16 GB | 32 GB
  Form Factor:              M.2 2280, single-sided
  Interface:                PCIe 3.0 x2 NVMe
  Controller:               Intel (unnamed)
  Memory:                   128Gb 20nm Intel 3D XPoint
  Typical Read Latency:     6 µs
  Typical Write Latency:    16 µs
  Random Read (4 KB, QD4):  300k IOPS
  Random Write (4 KB, QD4): 70k IOPS
  Sequential Read (QD4):    1200 MB/s
  Sequential Write (QD4):   280 MB/s
  Endurance:                100 GB/day
  Power Consumption:        3.5 W (active), 0.9-1.2 W (idle)
  MSRP:                     $44 (16 GB) | $77 (32 GB)
  Release Date:             April 24

Intel has published some specifications for the Optane Memory drive's performance on its own. The performance specifications are the same for both capacities, suggesting that the controller has only a single channel interface to the 3D XPoint memory. The read performance is extremely good given the limitation of only one or two memory devices for the controller to work with, but the write throughput is quite limited. Read and write latency are very good thanks to the inherent performance advantage of 3D XPoint memory over flash. Endurance is rated at just 100GB of writes per day for both the 16GB and 32GB models.
While this does correspond to 3-6 DWPD and is far higher than what consumer-grade flash-based SSDs offer, 3D XPoint memory was supposed to have vastly higher write endurance than flash, and neither of the Optane products announced so far is specified for game-changing endurance. Power consumption is rated at 3.5W during active use, so heat shouldn't be a problem, but the idle power of 0.9-1.2W is a bit high for laptop use, especially given that there will also be a hard drive drawing power.

Intel's vision is for Optane Memory-equipped systems to offer a compelling performance advantage over hard drive-only systems for a price well below an all-flash configuration of equal capacity. The 16GB Optane Memory drive will retail for $44 while the 32GB version will be $77. As flash memory has declined in price over the years, it has gotten much easier to purchase SSDs that are large enough for ordinary use: 256GB-class SSDs start at around the same price as the 32GB Optane Memory drive, and 512GB-class drives cost about the same as the combination of a 2TB hard drive and the 32GB Optane Memory. The Optane Memory products are squeezing into a relatively small niche: limited budgets that require a lot of storage and want the benefit of solid-state performance without paying the full price of a boot SSD. Intel notes that Optane Memory caching can be used in front of hybrid drives and SATA SSDs, but the performance benefit will be smaller and these configurations are not expected to be common or cost effective.

The Optane Memory SSDs are now available for pre-order and are scheduled to ship on April 24. Pre-built systems equipped with Optane Memory should be available around the same time. Enthusiasts with large budgets will want to wait until later this year for Optane SSDs with sufficient capacity to use as primary storage. True DIMM-based 3D XPoint memory products are on the roadmap for next year.
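As a quick sanity check on the endurance figure, the short Python sketch below converts Intel's 100 GB/day rating into drive writes per day for each capacity; the results line up with the 3-6 DWPD range mentioned above.

# Convert Intel's 100 GB/day endurance rating into drive writes per day (DWPD)
# for each Optane Memory capacity, using the figures from the spec table above.
ENDURANCE_GB_PER_DAY = 100

for capacity_gb in (16, 32):
    dwpd = ENDURANCE_GB_PER_DAY / capacity_gb
    print(f"{capacity_gb} GB model: {dwpd:.2f} drive writes per day")

# Prints:
# 16 GB model: 6.25 drive writes per day
# 32 GB model: 3.12 drive writes per day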
Quick Look: Comparing Vulkan & DX12 API Overhead on 3DMark
Earlier this week the crew over at Futuremark released a major update to their API Overhead testing tool, which is built into the larger 3DMark testing suite. The API Overhead tool, first rolled out in 2015, is a relatively straightforward test that throws an increasingly large number of draw calls at a system to see how many calls the system can sustain. The primary purpose of the tool is to show off the vast improvement in draw call performance afforded by modern, low-level APIs that can efficiently spread their work over multiple threads, as opposed to classic APIs like DirectX 11, which are essentially single-threaded and have a high degree of overhead within that sole thread.

The latest iteration of the API Overhead test, now up to version 1.5, has added support for Vulkan, making it one of the first feature-level benchmarks to add support for the API. Khronos’s take on a low-level API – and a descendant of sorts of Mantle – Vulkan has been available now for just a bit over a year. However, outside of a very successful outing with Doom, in the PC realm it has been flying somewhat under the radar, as few other games have (meaningfully) implemented support for the API thus far. By the end of 2017 we should be seeing wider support for the API, but for the moment it’s still in the process of finding its footing among PC developers.

In any case, like OpenGL versus Direct3D 9/10/11 before it, there’s a lot of curiosity (and arguments) over which API is better. Now that Futuremark is supporting the API for their Overhead test, let’s take a quick look at how the two APIs compare here, and whether one API offers lower overhead than the other.

Test System
  CPU:          Intel Core i7-4960X @ 4.2GHz
  Motherboard:  ASRock Fatal1ty X79 Professional
  Power Supply: Corsair AX1200i
  Hard Disk:    Samsung SSD 840 EVO (750GB)
  Memory:       G.Skill RipjawZ DDR3-1866 4 x 8GB (9-10-9-26)
  Case:         NZXT Phantom 630 Windowed Edition
  Monitor:      Asus PQ321
  Video Cards:  NVIDIA GeForce GTX 1080 Ti Founders Edition
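For readers unfamiliar with how a draw-call overhead test of this sort operates, here is a minimal Python sketch of the general methodology: keep increasing the number of draw calls per frame until the frame rate falls below a 30 fps floor, and report the best sustained draw-call rate seen before that point. It is purely illustrative; Futuremark's actual test submits real rendering work through DirectX 11/12 and Vulkan, and simulate_frame below is a hypothetical stand-in for that work.

import time

# Arbitrary, made-up cost per draw call; stands in for real API submission work.
COST_PER_CALL_US = 2.0

def simulate_frame(draw_calls):
    """Hypothetical stand-in for issuing draw_calls draw calls through a graphics API."""
    time.sleep(draw_calls * COST_PER_CALL_US / 1_000_000)

def max_sustained_draw_calls(target_fps=30, step=5_000):
    """Raise the per-frame draw call count until the frame rate drops below
    target_fps, then report the draw-call throughput of the last passing step."""
    draw_calls = step
    best_calls_per_second = 0.0
    while True:
        start = time.perf_counter()
        simulate_frame(draw_calls)
        frame_time = time.perf_counter() - start
        fps = 1.0 / frame_time
        if target_fps > fps:
            return best_calls_per_second
        best_calls_per_second = draw_calls * fps
        draw_calls += step

if __name__ == "__main__":
    print(f"Sustained draw calls per second: {max_sustained_draw_calls():,.0f}")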
Dell’s 32-inch 8K UP3218K Display Now For Sale: Check Your Wallet
Back at CES in January, Dell announced the next step in personal screen resolution advancements. The recent rise of ‘4K’ (or more accurately, Ultra-HD at 3840x2160) monitors has shown that there is a demand for high-resolution interfaces beyond a smartphone. Back when UHD monitors in a 16:9 format launched en masse, prices were high ($3500-5000+) and stocks were limited – I remember testing the Sharp 32-inch 4K display at a vendor in Taiwan back in 2013, in one of the first pieces to test 4K/UHD gaming. The fact that this was the only UHD monitor that GIGABYTE had in their HQ was a testament to how new the technology was. Now, 24-inch UHD displays can be had for as little as $350. We may see history repeat itself with 8K monitors, starting today.

As always, the first Dell monitors off the production line are designed to be high-end professional monitors. The UP3218K goes in at higher-than-average specifications, such as a 1300:1 contrast ratio and 400 nits brightness, but also offers 100% AdobeRGB, 100% sRGB and 98% DCI-P3 coverage. The UP3218K is part of Dell’s UltraSharp range, which means we should expect the monitor to be color calibrated out of the box to within a given dE value, typically dE < 3.
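To put the resolution jump in context, here is a short calculation assuming the customary 7680x4320 pixel count for an 8K panel and a 31.5-inch diagonal for a "32-inch" class monitor (both assumptions on our part, since Dell's full specification table is not reproduced here), compared against the Ultra-HD monitors discussed above.

# Compare an assumed 7680x4320 8K panel against Ultra-HD (3840x2160),
# and estimate pixel density for an assumed 31.5-inch diagonal.
uhd_pixels = 3840 * 2160
eightk_pixels = 7680 * 4320
print(f"UHD: {uhd_pixels / 1e6:.1f} MP, 8K: {eightk_pixels / 1e6:.1f} MP "
      f"({eightk_pixels // uhd_pixels}x the pixels)")

diagonal_inches = 31.5
diagonal_pixels = (7680**2 + 4320**2) ** 0.5
print(f"Approximate pixel density: {diagonal_pixels / diagonal_inches:.0f} PPI")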
The Riotoro Onyx Power Supply Review: 650W & 750W Tested
In this review we take a look at Riotoro’s first attempt to enter the PSU market: the Onyx 650W and Onyx 750W. Both Onyx units are subtle, semi-modular PSUs with an 80Plus Bronze efficiency certification, developed to deliver good overall performance and reliability at competitive prices.
CORSAIR ONE Gaming PC Released
First announced in February, the new Corsair ONE pre-built gaming PC is now shipping. The Corsair ONE is the first ready-to-run system from the manufacturer that has mostly been known for their PC components and peripherals. Selling and supporting entire systems is a new venture for Corsair, but the design and capabilities of the Corsair ONE are a good fit for the company's product lineup.

The Corsair ONE uses a custom case form factor that is a shallow-depth mini tower, but all of the major components inside use standard PC form factors: mini-ITX motherboard, SFX power supply, 2.5" SSDs, and support for graphics cards up to 11" long with two or three slot cooling solutions. Naturally, many of those components are either existing Corsair parts or special editions made for the Corsair ONE. The total volume of the case is around 12L and the exterior is mostly black aluminum.

The system's cooling is provided by a single ML140 exhaust fan at the top, and intake is through the side panels. The right side intake is occupied by the radiator for the CPU's closed-loop water cooler. The left side intake vent opens directly onto the air-cooled graphics card in the base model, while the top Corsair ONE includes a second water cooler for the GPU. Neither radiator has any fans of its own, as the exhaust fan at the top of the case provides most of the air flow. The power supply uses semi-passive cooling with its own fan, and the system as a whole emits around 20dB at idle.

Gallery: CORSAIR ONE

In order to allow the graphics card to be positioned behind the motherboard and facing its own air intake, the Corsair ONE chassis provides the necessary cables to route the PCIe lanes to the graphics card, and pass-through video connections to ports on the back and one HDMI port on the front that is intended for VR displays. The power supply is mounted in the top of the right side of the case and also makes use of a short pass-through cable to the plug on the back of the machine. Because both side panels are used as air intakes, the Corsair ONE can only operate in a vertical orientation and cannot be operated with either side directly against any obstructing surface.

The top vent and fan are removable without tools, but the two side panels with the radiators must be unscrewed at the top and are hinged at the bottom. While Corsair cases are usually quite easy to work in, further disassembly of the Corsair ONE gets tricky, as usability has been sacrificed to save space.

Corsair ONE PC Specifications
  Model:        Corsair ONE | Corsair ONE PRO | Corsair ONE PRO (web store only)
  CPU:          Core i7-7700 (ONE) | Core i7-7700K (both PRO models)
  GPU:          air-cooled GeForce GTX 1070 (ONE) | water-cooled GeForce GTX 1080 (both PRO models)
  DRAM:         16GB DDR4-2400
  Motherboard:  mini-ITX, Z270 chipset
  Storage:      240GB SSD + 1TB HDD | 480GB SSD + 2TB HDD | 960GB SSD
  PSU:          custom edition of Corsair SF600: SFX, 80+ Gold with semi-passive cooling
  Warranty:     2 years
  MSRP:         $1799 | $2299 | $2399

The base model Corsair ONE comes standard with an Intel Core i7-7700 processor in a Z270 motherboard with 16GB of DDR4-2400 RAM. The base graphics card is an air-cooled NVIDIA GeForce GTX 1070. The Corsair ONE PRO model upgrades to a Core i7-7700K processor and an MSI GeForce GTX 1080 AERO 8G OC with Corsair's custom water cooler. Storage is either a combination of a SATA SSD and a 2.5" hard drive, or a single larger SATA SSD.

Stylistically, the Corsair ONE is less ostentatious than many gamer-oriented products.
The front face of the case includes aqua blue accent lighting that can be controlled or entirely disabled through Corsair Link software, but it's single-color rather than full RGB lighting. Even with the lighting off, the Corsair ONE doesn't easily blend in with typical office or living room furnishings, but the relatively small size and all-black color scheme make it fairly unobtrusive.

The software pre-installed on the Corsair ONE is minimal: Windows 10 Home with all the necessary drivers, Corsair's CUE customization tool, and installers for popular digital game distribution platforms including Steam, Origin, Uplay and GoG Galaxy.

Corsair will be selling the Corsair ONE PC through major electronics retailers as well as directly through their online store. Support will be handled in-house by Corsair's expanded support department, which now includes specialists for the Corsair ONE. The system comes with a two-year warranty, and aftermarket upgrades performed by the consumer will void that warranty, but Corsair will also be partnering with retailers to provide in-warranty aftermarket upgrades.
The Qualcomm Snapdragon 835 Performance Preview
Qualcomm's Snapdragon 835 gets a CPU transplant, an updated GPU, a 1Gbps LTE modem, and other updates. We were able to see the new SoC in action and collect some preliminary performance data during a brief testing session at Qualcomm's headquarters in San Diego. How does it compare to Snapdragon 820 and the Kirin 960?
Capsule Review: Logitech MK850 Performance Wireless Keyboard and Mouse Combo
We have been reviewing Logitech's multi-device wireless input peripherals for a few years now. The Logitech K480 was launched in 2014, and the Logitech K780 keyboard and M720 Triathlon mouse were launched last year as separate products. The success of these input peripherals has prompted Logitech to introduce the MK850 Performance combo - a full-length multi-device wireless keyboard along with the wireless M720 Triathlon mouse. The combo is made more powerful by the DuoLink feature - using the keyboard to alter the functionality of the mouse buttons and gestures.

Gallery: The Logitech MK850 Performance Wireless Keyboard and Mouse Combo

The Logitech MK850 Performance combo comes with a single Unifying USB receiver. Logitech also bundles a USB extender for use-cases where the placement of the spare USB port in the PC might result in spotty wireless performance. The combo can be paired with up to three different host devices and can easily switch between them. The hosts can be Android, iOS, Mac or Windows devices, and the keys on the keyboard are automatically re-mapped depending on the host OS. Both the mouse and the keyboard have explicit on/off buttons in order to conserve battery life. The keyboard needs two AAA batteries, while the mouse needs a single AA battery. Logitech claims battery life of two years for the mouse and three years for the keyboard.

Gallery: The K850 Performance Wireless Keyboard

Gallery: M720 Triathlon Mouse

We had talked about how the K780 keyboard addressed the shortcomings of the K480 keyboard. The keyboard component of the MK850 adds a number of interesting features.
Apple Announces 2017 iPad 9.7-Inch: Entry Level iPad now at $329
With spring finally upon us, Apple this morning is going about some spring cleaning of the iPad family. The iPad Air 2 and the iPad Mini 2 have been discontinued, making way for a new entry-level iPad: the simply named iPad 9.7-Inch. This latest iPad is a bit of an unusual twist on the usual Apple fare; it’s not really a successor to the iPad Air 2, and from a features perspective it’s essentially a kitbash of a few different Apple products. Nonetheless, at $329 it’s also the lowest Apple has ever priced a 9.7” iPad; retailer sales aside, Apple hasn’t offered this size below $399 before. As a result, the new 9.7” iPad is likely to make an impact, even in the softening market for tablets.

Apple 9.7-Inch iPad Family
  Apple iPad 9.7" (2017) | Apple iPad Air 2
NVIDIA Releases 378.92 WHQL Driver: Mass Effect and Dolby Vision for Games
This week, in a relatively brief driver update, NVIDIA has released their Game Ready driver for the newly released Mass Effect: Andromeda.

Continuing with the 378 branch, 378.92 adds official Game Ready support for the game. This includes the usual driver optimizations as well as an SLI profile for EA's soon-to-be blockbuster game. Alongside Mass Effect, we are also getting Game Ready support for Rock Band VR, and updated SLI profiles for Dead Rising 4 and Deus Ex: Breach.

However, buried in this driver release is a second noteworthy and interesting feature addition: NVIDIA is also enabling Dolby Vision support for games. Dolby's competing HDR transport standard is arguably a higher-fidelity standard than HDR10 thanks to aspects such as its 12bpc color modes; however its proprietary nature has limited its adoption versus HDR10. And while we're still early in the days of HDR PC monitors – the first HDR10 monitors don't even ship until later this month – Dolby Vision hasn't been on the PC monitor roadmap at all.

In any case, the addition of Dolby Vision appears to be aimed at use with TVs, where high-end models do offer Dolby Vision support. So those users out there who own the necessary TV, an NVIDIA card, and Mass Effect: Andromeda (the sole Dolby Vision-enabled PC game) should be in for a treat.

Anyone interested can download the updated drivers through GeForce Experience or on the NVIDIA driver download page. More information on this update and further issues can be found in the 378.92 release notes.
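To put a number on the "12bpc" point above, the snippet below counts the tonal steps available per color channel at 10 bits per component (the HDR10 baseline) versus the 12 bits per component that Dolby Vision can carry.

# Tonal steps per color channel at each bit depth.
for name, bits in (("HDR10 (10bpc)", 10), ("Dolby Vision (12bpc)", 12)):
    print(f"{name}: {2**bits:,} steps per channel")

# Prints:
# HDR10 (10bpc): 1,024 steps per channel
# Dolby Vision (12bpc): 4,096 steps per channel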
ARM Launches DynamIQ: big.Little to Eight Cores Per Cluster
Most users delving into SoCs know about ARM core designs over the years. Initially we had single CPUs, then paired CPUs and then quad-core processors, using early ARM cores to help drive performance. In October 2011, ARM introduced big.Little – the ability to use two different ARM cores in the same design, typically pairing a two- or four-core high-performance cluster with a two- or four-core high-efficiency cluster. From this we have offshoots, like MediaTek’s tri-cluster design, or just wide core-mesh designs such as Cavium’s ThunderX. As the tide of progress washes against the shore, ARM is today announcing the next step on the sandy beach with DynamIQ.

The underlying theme with DynamIQ is heterogeneous scalability. Those two words hide a lot of ecosystem jargon, but as ARM predicts that another 100 billion ARM chips will be sold in the next five years, they pin key areas such as automotive, artificial intelligence and machine learning at the interesting end of that growth. As a result, performance, efficiency, scalability, and latency are all going to be key metrics moving forward that DynamIQ aims to facilitate.

The first stage of DynamIQ is a larger cluster paradigm - up to eight cores per cluster. But in a twist, there can be a variable core design within a cluster: those eight cores could be different cores entirely, from different ARM Cortex-A families in different configurations.

Many questions come up here, such as how the cache hierarchy will allow threads to migrate between cores within a cluster (perhaps similar to how threads migrate between clusters on big.Little today), even when cores have different cache arrangements. ARM did not yet go into that level of detail, however we were told that more information will be provided in the coming months.

Each variable core-configuration cluster will be part of a new fabric, which uses additional power-saving modes and aims to provide much lower latency. The underlying design also allows each core to be controlled independently for voltage and frequency, as well as sleep states. Based on the slide diagrams, various other IP blocks, such as accelerators, should be able to be plugged into this fabric and benefit from that low latency. ARM noted that elements such as safety-critical automotive decisions can benefit from this.

One of the focus areas of ARM’s presentation was redundancy. The new fabric will allow a seemingly unlimited number of clusters to be used, such that if one cluster fails (or an accelerator fails) the others might take its place. That being said, the sort of redundancy that some of the customers of ARM chips might require is fail-over in the event of physical damage, such as a vehicle retaining control if there are more than two ‘brains’ on board and an impact disables one of them. It will be interesting to see if ARM’s vision for DynamIQ extends to that level of redundancy at the SoC level, or if it will be up to ARM’s partners to develop it on top of DynamIQ.

Along with the new fabric, ARM stated that a new memory sub-system design is in place to assist with the compute capabilities, however nothing specific was mentioned. Along the lines of additional compute, ARM did state that new dedicated processor instructions (such as limited-precision math) for artificial intelligence and machine learning will be integrated into a variant of the ARMv8 architecture. We’re unsure if this is an extension of ARMv8.2-A, which introduced half-precision for data processing, or a new version.
ARMv8.2-A also adds RAS features and memory model enhancements, which coincides with the ‘new memory sub-system design’ mentioned earlier. When asked about which cores can use DynamIQ, ARM stated that new cores would be required. Future cores will be ARMv8.2-A compliant and will be able to be part of DynamIQ.

ARM’s presentation focused mainly on DynamIQ for new and upcoming technologies, such as AI, automotive and mixed reality, although it was clear that DynamIQ can be used with other existing edge-case use models, such as tablets and smartphones. This will depend on how ARM supports current core designs in the market (such as updates to A53, A72 and A73) or whether DynamIQ requires separate ARM licenses. We fully expect that any new cores announced from this point on will support the technology, in the same way that current ARM cores support big.Little.

So here’s some conjecture. A future tablet SoC uses DynamIQ, and consists of two high-powered cores, four mid-range cores, and two low-power cores, without a dual-cluster / big.Little design. Either that, or all three types of cores sit on different clusters altogether using the new topology. Actually, the latter sounds more feasible from a silicon design standpoint, as well as for software management. That being said, the spec sheet of any future design using DynamIQ will now have to list the cores in each cluster. ARM did state that it should be fairly easy to control which cores are processing which instruction streams in order to get either the best power or the best efficiency as needed.

ARM states that more information is to come over the next few months.

Gallery: ARM DynamIQ Slides
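To make the cluster conjecture above a bit more concrete, here is a purely illustrative Python sketch of a variable core-configuration DynamIQ cluster as a data structure: up to eight cores, potentially of different Cortex-A types, each with its own independently controlled frequency and sleep state. The class and field names are invented for illustration and do not correspond to any ARM software interface.

from dataclasses import dataclass

@dataclass
class Core:
    # Illustrative only: models one core slot in a DynamIQ cluster, with
    # independently controllable frequency and sleep state per core.
    core_type: str          # e.g. a hypothetical big, mid, or little Cortex-A design
    freq_mhz: int
    asleep: bool = False

MAX_CORES_PER_CLUSTER = 8   # DynamIQ allows up to eight cores per cluster

def make_cluster(cores):
    if len(cores) > MAX_CORES_PER_CLUSTER:
        raise ValueError("DynamIQ clusters top out at eight cores")
    return cores

# The conjectured tablet SoC from the text: two high-powered cores,
# four mid-range cores, and two low-power cores in a single cluster.
cluster = make_cluster(
    [Core("big", 2600) for _ in range(2)]
    + [Core("mid", 2000) for _ in range(4)]
    + [Core("little", 1400, asleep=True) for _ in range(2)]
)

for i, core in enumerate(cluster):
    state = "sleeping" if core.asleep else f"{core.freq_mhz} MHz"
    print(f"core {i}: {core.core_type:6} {state}")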
CPU Buyer's Guide: Q1 2017
In our series of Buyer's Guides, here’s the latest update to our recommended CPUs list. All numbers in the text are updated to reflect pricing at the time of writing (3/19). Numbers in graphs reflect MSRP.

CPU Buyer's Guide: Q1 2017

As we move through to the second quarter of the year, we have had two major updates to the desktop CPU landscape. First, Intel launched its seventh generation of Core-based CPUs, known under the Kaby Lake name, using a refined version of their 14nm process that allowed for some frequency gains over the previous-generation Skylake microarchitecture. Then AMD made their biggest CPU launch in five years, with a renewed attack on high-performance computing with the Ryzen 7 family. After spending so many years fighting at 32nm and 28nm on Bulldozer-based microarchitectures, AMD introduced not only a new core design but also a new process (GlobalFoundries' 14nm) with FinFET-based technology. Both of these launches have drastically shaken up our recommended CPU lists moving into spring and summer.

For all the information about Intel's Kaby Lake and AMD's Ryzen, our deep-dive reviews are open to all readers and we highly encourage enthusiastic users to give them a once-over, to understand how the hardware performs and why.

The AMD Zen and Ryzen 7 Review: A Deep Dive on 1800X, 1700X and 1700
The Tesoro Excalibur SE Spectrum Review: A Mechanical Keyboard with Gateron Optical Switches
The popularity of mechanical keyboards and the saturation of the market with a myriad of products has led manufacturers to seek out new features and technologies. In this review, we take a look at Tesoro’s newest product, the Excalibur SE Spectrum, which makes use of their newly developed optical key switches.
Qualcomm Announces 205 Mobile Platform: Entry-Level LTE for India & Emerging Markets
This morning in New Delhi, Qualcomm is taking the wraps off of their latest entry-level, high-volume SoC, the Qualcomm 205 SoC. The cornerstone of the Qualcomm 205 Mobile Platform – and keeping in mind last week’s renaming of Qualcomm’s product stack – the 205 Platform is being launched as Qualcomm’s new entry-level SoC for emerging markets. Qualcomm’s focus on this latest 200-series SoC is bringing an LTE-enabled SoC down to its lowest price point yet, ultimately aiming to get LTE into the sub-$50 mass-market feature phones that are popular in these markets.

Qualcomm's 200-Series SoC Lineup
  SoC:                    205 | 208 | 210
  CPU:                    2 x ARM Cortex-A7 @ 1.1GHz | 2 x ARM Cortex-A7 @ 1.1GHz | 4 x ARM Cortex-A7 @ 1.1GHz
  GPU:                    Adreno 304 (all)
  Memory Interface:       32-bit LPDDR2/3 (all)
  Integrated Modem:       Snapdragon X5 (LTE Category 4, HSPA+, DS-DA, VoLTE) | Gobi 3G (HSPA+, DS-DA) | Snapdragon X5 (LTE Category 4, HSPA+, DS-DA, VoLTE)
  Integrated WiFi:        802.11n 1-stream (all)
  Manufacturing Process:  28nm LP (all)
  Video Encode/Decode:    Decode: 720p30
Intel Introduces Optane SSD DC P4800X With 3D XPoint Memory
A year and a half after first publicly unveiling their new 3D XPoint non-volatile memory technology, Intel is launching the first product incorporating the new memory. The Intel Optane SSD DC P4800X is an enterprise PCIe 3.0 x4 NVMe SSD that Intel promises will be the most responsive data center SSD, with lower latency than all of the fastest NAND flash based competitors. After months of touting 3D XPoint memory primarily with rough order-of-magnitude claims about its performance, endurance and cost relative to DRAM and NAND flash, and after some unexplained delays, Intel is finally providing some concrete specifications and pricing for a complete SSD that is shipping today. The information is more limited than we're accustomed to for their NAND flash SSDs, and Intel still isn't confirming anything about the materials or exact operating principle of the 3D XPoint memory cell.

Current computer system architectures are based around the use of DRAM as working memory and NAND flash for fast storage. 3D XPoint memory falls between the two technologies on most important metrics, so Optane SSDs bring a new dimension of complication to a server architect's task. For most enterprise use cases, the most enticing feature of Optane SSDs over NAND SSDs is the former's higher performance, especially reduced latencies. Aside from the gains from switching to the NVMe protocol, the latency offered by NAND flash based SSDs has been mostly stagnant or has even regressed with the disappearance of SLC NAND from the market, even as throughput and capacity have grown with every generation.

The Intel Optane SSD DC P4800X is rated for a typical read or write latency under 10µs, compared to tens of microseconds for the best NAND flash based SSDs, and the roughly 4µs minimum imposed by PCIe and NVMe transaction overhead. More impressive is how little latency degrades under less than ideal conditions. Queue depth 1 random reads are rated to remain below 30µs even while the drive is simultaneously accepting 2GB/s of sustained random writes (about 500k IOPS). Intel even specifies Quality of Service (QoS) standards for latency at the 99.999th percentile, with even QD16 random writes staying almost entirely below 200µs. A consequence of the low latency is that the P4800X can deliver full throughput at lower queue depths: the P4800X is rated to deliver maximum IOPS at QD16, while flash-based SSDs are specified for queue depths of at least 32. Unlike flash memory, the read and write performance of 3D XPoint memory is roughly equal, and this is reflected in Intel's specifications for the P4800X.

Conspicuously missing from the performance specifications is sequential throughput. The P4800X can already use more than half of the available PCIe bandwidth with a completely random I/O workload. Rather than reassure us that the P4800X can do even better with larger transfer sizes, Intel suggests that being overly concerned with sequential transfer speeds is a sign that you should be shopping for their 3D NAND SSDs instead. They'll offer plenty of throughput for a far lower price.

Intel's 3D XPoint memory is being manufactured as a 128Gb (16GB) die, slightly behind the trend for NAND flash capacities. As a result, the Optane SSD DC P4800X will start with a 375GB model, to be followed later this year by 750GB and 1.5TB models. The top-performing enterprise SSDs currently tend to be multi-TB drives.
Intel has shared very few details about the new controller they've developed for the P4800X, but they have disclosed that the 375GB model uses seven channels with four dies per channel, for a total of 28 chips and a raw capacity of 448GB. Fourteen packages of 3D XPoint memory are visible on the back side of the drive in the photographs Intel has released, suggesting that fourteen more packages are hiding under the heatsink and that the 375GB add-in card model is using single-die packages. The controller implements a high-performance all-hardware read path that does not involve the drive's firmware, and while the exact stride of memory accesses is not known, a single 4kB read will be spread across all seven channels.

3D XPoint memory can be read or written with byte granularity and modifications can be written in place, so it is free from the worst internal fragmentation and write amplification challenges that are caused by the large page sizes and huge erase block sizes of NAND flash. This means that further overprovisioning beyond the drive's native amount will have minimal impact on performance, and that the performance of a full drive should not suffer severely the way flash based SSDs do. However, some amount of spare area is still required for error correction and other metadata, and for a pool of spare blocks to replace failed or defective blocks. The write endurance of 3D XPoint memory is not infinite, so wear leveling is still required, but it is a much simpler process that requires much less spare area.

The Intel Optane SSD DC P4800X has a write endurance rating of 30 Drive Writes Per Day (DWPD), and Intel is hopeful that future products can offer even higher ratings once 3D XPoint memory has more broadly proven its reliability. Today's limited-release 375GB models have a three-year warranty for a total write endurance rating of 12.3 PB; once the product line is expanded to broad availability of the full range of capacities in the second half of this year, the warranty period will be five years.

Intel is offering the 375GB P4800X in the PCIe add-in card form factor with an MSRP of $1520, starting today with a limited early-ship program. In Q2 a 375GB U.2 model will ship, as well as a 750GB add-in card. In the second half of the year the rest of the capacity and form factor options will be available, but prices and exact release dates for those models have not been announced. At just over $4/GB, the P4800X seems to fall much closer to DRAM than NAND in price, though to be fair the enterprise SSDs it will compete against are all well over $1/GB and the largest DDR4 DIMMs are around $10/GB.

Intel Optane SSD DC P4800X Specifications
  Capacity:                            375 GB | 750 GB | 1.5 TB
  Form Factor:                         PCIe HHHL or 2.5" 15mm U.2
  Interface:                           PCIe 3.0 x4 NVMe
  Controller:                          Intel (unnamed)
  Memory:                              128Gb 20nm Intel 3D XPoint
  Typical Latency (R/W):               under 10 µs
  Random Read (4 KB) IOPS (QD16):      550k | TBA | TBA
  Random Read 99.999% Latency (QD1):   60 µs | TBA | TBA
  Random Read 99.999% Latency (QD16):  150 µs | TBA | TBA
  Random Write (4 KB) IOPS (QD16):     500k | TBA | TBA
  Random Write 99.999% Latency (QD1):  100 µs | TBA | TBA
  Random Write 99.999% Latency (QD16): 200 µs | TBA | TBA
  Endurance:                           30 DWPD
  Warranty:                            5 years (3 years during the early limited release)
  MSRP:                                $1520 | TBA | TBA
  Release Date:                        March 19 (HHHL)
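The capacity and endurance figures above are easy to sanity-check. The short sketch below reproduces the raw-versus-usable capacity math for the 375GB model (seven channels of four 16GB dies) and the total-bytes-written figure implied by 30 drive writes per day over the three-year early-release warranty.

# Raw vs. usable capacity for the 375GB Optane SSD DC P4800X
channels = 7
dies_per_channel = 4
die_capacity_gb = 16          # one 128Gb 3D XPoint die
raw_gb = channels * dies_per_channel * die_capacity_gb
usable_gb = 375
print(f"Raw capacity: {raw_gb} GB, usable: {usable_gb} GB "
      f"({(raw_gb - usable_gb) / raw_gb:.0%} held back)")

# Endurance implied by 30 DWPD over the 3-year early-release warranty
dwpd = 30
years = 3
total_writes_pb = dwpd * usable_gb * 365 * years / 1e6
print(f"Total write endurance: ~{total_writes_pb:.1f} PB")   # ~12.3 PB, as quoted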
Bosch and NVIDIA Team Up for Xavier-Based Self-Driving Systems for Mass Market Cars
Bosch and NVIDIA on Thursday announced plans to co-develop self-driving systems for mass-market vehicles. The solutions will use NVIDIA’s next-generation SoC, codenamed Xavier, as well as the company’s AI-related IP. Meanwhile, Bosch will offer its expertise in car electronics as well as auto navigation.

Typically, automakers mention self-driving cars in the context of premium and commercial vehicles, but it is pretty obvious that, given the opportunity, self-driving is a technology that will be a part of the vast majority of cars available in the next decade and onwards. Bosch and NVIDIA are working on an autopilot platform for mass-market vehicles that will not cost as much as people think, and will be able to be widespread. To build the systems, the two companies will use NVIDIA’s upcoming Drive PX platform based on the Xavier system-on-chip, a next-gen Tegra processor set to be mass-produced sometime in 2018 or 2019.

Bosch and NVIDIA did not disclose too many details about their upcoming self-driving systems, but indicated that they are targeting Level 4 autonomous capabilities, in which a car can drive on its own without any human intervention. To enable Level 4 autonomy, NVIDIA will offer its Xavier SoC featuring eight general-purpose in-house-designed custom ARMv8-A cores, a GPU based on the Volta architecture with 512 stream processors, hardware-based encoders/decoders for video streams with up to 7680×4320 resolution, and various I/O capabilities.

From a performance point of view, Xavier is now expected to hit 30 Deep Learning Tera-Ops (DL TOPS, a metric for measuring 8-bit integer operations), which is 50% higher than NVIDIA’s Drive PX 2, the platform currently used by various automakers to build their autopilot systems (e.g., Tesla Motors uses the Drive PX 2 for various vehicles). NVIDIA's goal is to deliver this at 30 W, for an efficiency ratio of 1 DL TOPS per watt. This is a rather low level of power consumption given the fact that the chip is expected to be produced using TSMC’s 16 nm FinFET+ process technology, the same that is used to make the Tegra (Parker) SoC of the Drive PX 2.

The developers say that the next-gen Xavier-based Drive PX will be able to fuse data from multiple sensors (cameras, lidar, radar, ultrasonic, etc.) and that its compute performance will be enough to run deep neural nets to sense surroundings, understand the environment, predict the behavior and position of other objects, and ensure the safety of the driver in real time. Given that the upcoming Drive PX will be more powerful than the Drive PX 2, it is clear that it will be able to better satisfy the demands of automakers. In fact, since we are talking about a completely autonomous self-driving system, the more compute efficiency NVIDIA can get from Xavier the better.

Speaking of the SoC, it is highly likely that the combination of its performance, power and level of integration is what attracted Bosch to the platform. One chip with moderate power consumption means that Bosch engineers will be able to design relatively compact and reasonably priced systems for self-driving and then help automakers integrate them into their vehicles.

Unfortunately, we do not know which car brands will use the autopilot systems co-developed by Bosch and NVIDIA.
Bosch supplies auto electronics to many carmakers, including PSA, which owns the Peugeot, Citroën and Opel brands.

Neither Bosch nor NVIDIA has given any indication of when they expect actual cars featuring their autopilot systems to hit the roads. But since NVIDIA plans to start sampling Xavier in late 2017 and then mass-produce it in 2018 or 2019, it is logical to expect the first commercial applications based on the SoC to become available sometime in the 2020s, after the (extensive) validation and certification period required for an automotive system.
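The efficiency claim is straightforward to check against the figures NVIDIA has quoted: 30 DL TOPS in a 30 W envelope, and a 50% uplift over the current Drive PX 2. The small calculation below simply works out what those stated numbers imply.

# Efficiency and relative-performance math from the stated Xavier figures.
xavier_dl_tops = 30
xavier_power_w = 30
print(f"Xavier efficiency: {xavier_dl_tops / xavier_power_w:.1f} DL TOPS per watt")

# A 50% uplift over Drive PX 2 implies the current platform sits around:
drive_px2_dl_tops = xavier_dl_tops / 1.5
print(f"Implied Drive PX 2 throughput: {drive_px2_dl_tops:.0f} DL TOPS")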
Tag Heuer Connected Modular 45: Atom Z3400, Android Wear 2.0, Starts at $1650
Tag Heuer last week announced its new-generation smartwatch, co-developed with Google and Intel. The new Connected Modular 45 timepiece uses an Intel SoC, runs Google’s Android Wear 2.0, and is listed with 'expanded functionality'. Tag Heuer will also offer a variety of customization options for the new smartwatch and aim to address different market segments with the new product. Furthermore, the watchmaker says that the Connected Modular 45 design could easily fit a mechanical module and be converted into a regular timepiece.

Tag Heuer, Google and Intel formally introduced their first-gen connected smartwatch in late 2015. The wristwatch was the first device of the kind for Tag Heuer and for Intel, and so it was largely a test vehicle for both of them. As it turned out, the Tag Heuer Connected was considered a success by its developers, and with the second generation they decided to install a more capable computing platform and a better display, and to introduce customizable design options. The use of Google Android Wear 2.0 should expand the overall functionality of the new smartwatch, in order to offer more features.

Tag Heuer will offer different configurations of the Connected Modular 45: 11 standard versions available at retail and additional configurations upon request. Each timepiece consists of three key elements which users can mix and match: the watch module, the lugs, and the strap. All watch modules are made of grade 5 titanium with a sand-blasted satin finish (of a chosen color), but users can choose bezels of different colors made of ceramic, gold, aluminum, titanium, and even covered with diamonds. The lugs can match the bezels and thus can be made of aluminum, titanium, ceramic and so on. Finally, the manufacturer will offer a variety of straps featuring different colors (black, brown, red, green, etc.) made of calfskin, rubber, ceramic or titanium.

The central piece of the Connected Modular 45 is, of course, the watch module. The latter is based on the Intel Atom Z3400-series SoC (Merrifield, two Silvermont cores, 1 MB cache) equipped with 512 MB of LPDDR3 memory (down from 1 GB in the previous-gen model) and 4 GB of NAND flash memory. The device comes with a wireless module featuring Wi-Fi 802.11b/g/n, Bluetooth 4.1, GPS and NFC, as well as a host of sensors, including an accelerometer, a gyroscope, a tilt-detection sensor and an ambient light sensor. In addition, the module has a water-resistant microphone and a vibration/haptics engine, but no speaker. The most notable upgrade of the new Tag Heuer smartwatch is the new 1.39” AMOLED display with a 400×400 resolution and 287 PPI, which is higher than many competing wearable devices. The display is covered with 2.5-mm sapphire glass, just like many Swiss-made watches. As for the battery, the manufacturer states that it has a capacity of 410 mAh and claims it can last for up to 25 hours.

Tag Heuer Connected Modular 45
  Processor: Intel Atom Z3400-series
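Tag Heuer's 287 PPI figure makes sense if the 1.39-inch measurement is taken as the diameter of the round 400×400 panel (an assumption on our part); the quick calculation below shows that reading and, for contrast, what a square panel measured on the diagonal would work out to.

# Pixel density of the 1.39-inch, 400x400 AMOLED panel.
# Assumption: 1.39 inches is the diameter of a round panel, so 400 pixels
# span that distance directly.
pixels_across = 400
diameter_in = 1.39
print(f"Round-panel reading: {pixels_across / diameter_in:.0f} PPI")   # ~288, matching the quoted 287 PPI

# For contrast, a square 400x400 panel with a 1.39-inch diagonal would be:
diagonal_pixels = (400**2 + 400**2) ** 0.5
print(f"Square-panel reading: {diagonal_pixels / diameter_in:.0f} PPI")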
Samsung Shows Off A Z-SSD: With New Z-NAND
As the sort of person that can get addicted to deep technology discussions about the latest thing, without due care and attention I could easily fall into the pit of storage related technologies. From the storage bits through to software defined cache hierarchy, there is so much to learn and to talk about. Over the last two years, unless you were living under a rock, it would have been hard to miss the level of attention that Intel's 3D XPoint technology (a co-venture with Micron) has been getting. Billed as a significant disruption to the storage market, and claiming an intersection between DRAM and SSDs as a form of non-volatile storage, many column inches have been devoted to the potential uses of 3D XPoint. Despite all this talk, and promises that Intel's Super 7 partners are well under way with qualifying the hardware in their datacenters, we are yet to actually see it come to market - or even be actively demonstrated in any sizeable volume at a trade show. We're expecting more information this year, but while everyone is waiting, Samsung has snuck up behind everyone with their new Z-SSD product line.

The Z-SSD line was announced back at Flash Memory Summit, although details were scant. This was a PCIe NVMe storage technology using Samsung's new 'Z-NAND', which was aimed at the intersection between DRAM and SSDs (sounds like 3D XPoint?). Z-NAND is ultimately still baked in as NAND, although designed differently to provide better NAND characteristics. We still don't know the exact way this happens - some analysts have pointed to this being 3D NAND/V-NAND running in SLC mode, given some of the performance metrics, but this is still unknown.

At Cloud Expo Europe, Samsung had a Z-SSD on display and started talking numbers, if not the technology itself. The first drive for select customers to qualify will be 800GB in a half-height PCIe 3.0 x4 card. Sequential R/W will be up to 3.2 GBps, with Random R/W up to 750K/160K IOPS. Latency (presumably read latency) will be 70% lower than current NVMe drives, partially due to the new NAND but also a new controller, which we might hear about during Samsung's next tech day later this year. We are under the impression that the Z-NAND will also have high endurance, especially if it comes down to fewer bits per cell than current NAND offerings, but at this point it is hard to tell.

Initial reports indicated that Samsung was preparing 1TB, 2TB and 4TB drives under the Z-SSD banner. At present only the 800GB is on the table, which if we take into account overprovisioning might just be the 1TB drive anyway. Nothing was said about other capacities or features, except that the customers Samsung is currently dealing with are very interested in getting their hands on the first drives.
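As a rough cross-check of those numbers, the sketch below converts the quoted random IOPS figures into bandwidth, assuming the customary 4 KB transfer size for random I/O ratings (Samsung did not state the block size, so treat that as an assumption on our part).

# Bandwidth implied by the quoted random IOPS figures, assuming 4 KB transfers
# (the block size was not stated by Samsung).
BLOCK_KB = 4
for label, iops in (("random read", 750_000), ("random write", 160_000)):
    gb_per_s = iops * BLOCK_KB / 1e6
    print(f"{label}: ~{gb_per_s:.1f} GB/s at {BLOCK_KB} KB transfers")

# For comparison, the quoted sequential throughput is up to 3.2 GB/s.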
Best Laptops: Q1 2017
It's once again time to take a look at the laptop market, and as with every quarterly update, there are always some changes to discuss. First, Intel released its quad-core mobile Kaby Lake chips at CES 2017, meaning most larger laptops have now made the switch to the latest and greatest 7th generation Core processors, and NVIDIA also released their GP107 mobile GPUs at CES, meaning we finally have a nice upgrade from the GTX 960M and GTX 965M class devices, with the GTX 1050 and GTX 1050 Ti rounding out the middle of their lineup. Only the lowest-end GPUs such as the GT 940M don't yet have a Pascal update.

NVIDIA Laptop GPU Specification Comparison
                          GTX 1060 | GTX 1050 Ti | GTX 1050 | GTX 960M | GTX 950M
  CUDA Cores:             1280 | 768 | 640 | 640 | 640
  Texture Units:          80 | 48 | 40 | 40 | 40
  ROPs:                   48 | 32 | 16 | 16 | 16
  Core Clock:             1404MHz | 1493MHz | 1354MHz | 1097MHz | 914MHz
  Boost Clock:            1670MHz | 1620MHz | 1493MHz | Undefined | Undefined
  Memory Clock:           8Gbps GDDR5 | 7Gbps GDDR5 | 7Gbps GDDR5 | 5Gbps GDDR5 | 5Gbps GDDR5
  Memory Bus Width:       192-bit | 128-bit | 128-bit | 128-bit | 128-bit
  VRAM:                   6GB | 2GB/4GB | 2GB/4GB | 2GB/4GB | 2GB/4GB
  FP64:                   1/32 (all)
  GPU:                    GP106 | GP107 | GP107 | GM107 | GM107
  Transistor Count:       4.4B | 3.3B | 3.3B | 1.87B | 1.87B
  Manufacturing Process:  TSMC 16nm | Samsung 14nm | Samsung 14nm | TSMC 28nm | TSMC 28nm
  Launch Date:            08/16/2016 | 01/03/2017 | 01/03/2017 | 03/12/2015 | 03/12/2015

In addition, we've had a chance to test several more machines, and there's been the constant evolution of devices, so it is good to check on the state of the market.

Low Cost Laptops

Most of the excitement in the PC market comes at the high end, but not everyone has the budget for a several-thousand-dollar Ultrabook. The best low-cost laptops are the ones that hit the right balance of features for the money, and while that sounds obvious, oftentimes there can be some serious usability issues with devices that sell for less than $300. The competition here is between the Chromebook and the low-cost Windows PC, and while we've not seen enough Chromebooks lately to really make any judgements there, we do have a new entry on the Windows side.

Chuwi LapBook 14.1

We've just completed our review of this new entry from Chuwi, which is a company based in Shenzhen, China. They have been in business since 2004, and they offer several tablets, laptops, PCs, and accessories. The Chuwi LapBook 14.1 offers great value for the money. It's a 14-inch notebook, offering thin display bezels and a 1920x1080 IPS display, which is a rare thing at its price point. It's powered by the latest Intel Apollo Lake platform, with a quad-core Celeron based on the latest Atom Goldmont cores. It has 4 GB of memory and 64 GB of eMMC, which is much nicer than the 32 GB models usually offered around this price range. If you're looking for a laptop and don't have a lot to spend, this is a great place to start.

Chuwi is offering AnandTech readers a $24 discount on Amazon as well (good until 03/23/2017 at 10PM PDT). Apply this code at checkout: L9OQJYE4

Buy Chuwi LapBook 14.1 on Amazon.com

Lenovo ThinkPad 11e Yoga

The latest generation of the Lenovo ThinkPad 11e Yoga offers some nice upgrades over its non-flexible cousin, the 11e. Besides the obvious addition of the word Yoga to the model name, it offers the 360° hinge that has made the Yoga lineup so successful across all of Lenovo's products. The ThinkPad 11e is built for education, which means it's designed not to break easily, so this should be a strong and durable model. One of the other nice upgrades over the non-Yoga version of this laptop is that the convertible comes with an IPS display, albeit in the same 11-inch 1366x768 configuration as the non-Yoga.
That's going to dramatically improve the display usability, and the Yoga version isn't really that much more money. You can find this with up to a Core i3-6100U if you need more performance than Atom, but most models for sale are going to be based on the Braswell platform, so don't expect amazing performance. They do offer SSDs as the only storage option, with 128/192/256 GB models. That's a big jump over eMMC in terms of performance, and the extra space is always nice. It's not the nicest-looking laptop around, but it's a good buy for the price.

Buy Lenovo ThinkPad 11e Yoga N3150 on Amazon.com

Ultrabooks

Ultrabooks have moved the laptop forward, with sleek and thin designs that still feature good performance with the Core i-U series processors, and even thinner and lighter models are available with the Core m-Y series models. The definition has expanded somewhat over the years, but a good Ultrabook will have at least a 1920x1080 IPS display, SSD storage, and over eight hours of battery life, with many of them over ten now. If I were to recommend an everyday notebook, it would be an Ultrabook. The traditional laptop form factor is less compromised for notebook tasks than most of the 2-in-1 designs, and there are some great choices now.

HP Spectre

HP launched a new entrant in the Ultrabook category with the “world’s thinnest laptop”, which they are calling the Spectre. It’s not quite the lightest, but 2.45 lbs is a very low weight, and the design is stunning. Kaby Lake U-series Core processors are available with 8 GB of memory, and HP has gone with PCIe storage in 256 or 512 GB offerings. The display is a 1920x1080 IPS model at 13.3 inches. The very thin design has precluded the use of USB-A, but the Spectre does have three USB-C ports, with two of them capable of Thunderbolt 3. The Spectre is just 10.4 mm thick, yet despite this they have still included a keyboard with a solid 1.3 mm of travel. The Spectre starts at $1169.99, which is a lot, but it’s a stunner.

Buy HP Spectre 13t i7-7500U 8GB 512GB on Amazon.com

Dell XPS 13

The reigning Ultrabook on the best-of lists is generally the Dell XPS 13. The Infinity Display makes it stand apart, with very thin bezels packing a large display into a small chassis. The downside of this is the webcam, which is mounted at the bottom of the display and might make this a non-starter for people who do a lot of video chat, but despite this, Dell has crafted a great machine here. Dell has recently updated this to a Kaby Lake processor, up to the Core i7-7500U. The outgoing model did offer Iris graphics on the i7 version, but not right away, so we’ll see if Dell brings back this option once the Iris Kaby Lake processors are available. They’ve also switched from Broadcom NICs to Killer, because Broadcom is exiting the market. They now quote up to 22 hours of battery life on the 1080p model thanks to better efficiency with Kaby Lake as well as a 60 Wh battery, up from 56 Wh last year. I love the aluminum outside with the black carbon fibre weave on the keyboard deck, and the black keys make the backlighting stand out with great contrast. The XPS 13 starts at $799 for the i3 model.

Buy the XPS 13 on Dell.com

ASUS UX330UA

I loved the ASUS UX305CA for its thin, light, and fanless design, as well as the excellent price. With the UX305CA currently on hiatus, it is pretty easy to step up to the UX330UA, which now features Kaby Lake. Unlike the UX305CA, which had Core M, the UX330UA has the 15-Watt U-series processors, specifically the Core i5-7200U Kaby Lake CPU.
It also includes 8 GB of RAM and a 256 GB SSD, along with a 1920x1080 13.3-inch display. ASUS has done a great job adding USB-C connectors to their systems without excluding the older ports, and this model is no exception, with HDMI and an SD card reader in addition to the USB ports. One crucial upgrade from the UX305CA is that this model has a newer, nicer keyboard, and it includes backlighting, which was sorely missed on the older laptop. The laptop is just 13.55 mm (0.53 inches) thick, and weighs 1.2 kg (2.65 lbs). The best part is you get all of this for a very reasonable price of $699 as of this writing. Considering most Ultrabooks ship with less storage and cost more, this is a strong price from ASUS.

Buy ASUS ZenBook UX330UA on Amazon.com

Razer Blade Stealth

Razer has also updated the Stealth with Kaby Lake, and even more importantly they’ve increased the battery capacity as well. The Razer Blade Stealth is a fantastic notebook that was hindered by its battery life, and the new model should offer at least a bit longer time away from the mains. This CNC-aluminum notebook mimics the larger Razer Blade 14 in appearance, yet is very thin and light. Razer has added some new models to the Stealth lineup, with a new i5/8GB/128GB model starting at $100 less than the original, which means they are now available starting from $899. It's a big jump to the i7/16GB/512GB model now, though, since you also jump to the UHD display from the QHD in the base model, which is too bad, since the QHD is the better option in my opinion. Razer went with an Adobe RGB panel for the UHD model, but they don't have an sRGB mode for it, so it ends up blowing out all the colors on the display. It’s the only laptop on this list to feature per-key RGB backlighting on the keyboard, allowing some pretty nifty looks. It can be connected to the Razer Core external graphics dock with a single Thunderbolt 3 cable as well, which is going to offer a massive boost in gaming performance when docked. I really like what Razer is doing in this market, and their pricing is very competitive.

Buy the Razer Blade Stealth on Razerzone.com

MacBook

The MacBook isn’t for everyone, with limited ports and a few other caveats, but for an ultraportable PC running macOS, this is hard to ignore. Apple hasn't refreshed this lately, so it's still on the Skylake Core m series of CPUs. It’s incredibly light and thin, and although not everyone is sold on the butterfly-switch keyboard, Apple clearly is, since they’ve moved to it on the larger MacBook Pros as well. The display is great, and Apple continues to buck the trend and use 16:10 aspect ratio displays. The biggest controversy is the single USB-C port, which is also the charging port, but despite this the Retina display and fanless design make it a great portable laptop if you need a Mac.

Buy MacBook Core m3 256GB on Amazon.com

Convertibles

As much as I love an Ultrabook when I need a true laptop experience, there are some great convertible devices out there too which can serve multiple roles. They may not be the best laptop and they may not be the best tablet, but they can generally handle either chore well enough.

Microsoft Surface Pro 4

The best convertible is the Surface Pro 4. This 12.3-inch tablet has basically created the 2-in-1 tablet market, with many competitors now creating similar devices, from Dell to Google and Apple. The Surface Pro 4 certainly sets the bar high compared to the other Windows-based devices, and with its legacy software support, is highly productive.
All the changes from the Surface Pro 3 to the Surface Pro 4 are subtle, with a slightly larger display in the same chassis size, higher resolution, and Skylake processors, but there are new features too, like the lightning-fast Windows Hello facial recognition camera. Possibly the best new feature is an accessory, with the new Type Cover offering edge-to-edge keys and a much larger glass trackpad, meaning the Surface Pro 4 can double as a laptop much better than any previous model could. Starting with the Core m3 processor, the Surface Pro 4 starts at $899, but the more popular Core i5 version with 8 GB of memory and 256 GB of storage costs $1199 without the Type Cover. It’s not the most inexpensive 2-in-1, but it’s a leader in this category. Other companies have come into this market, often for less money, but it's tough to beat the build quality, and fantastic display, of the Surface Pro 4. I do expect this to be updated in the near future, but there's no official word from Microsoft yet.

Buy Microsoft Surface Pro 4 Core i5/8GB/256GB on Amazon.com

Microsoft Surface Book

Software issues plagued the Surface Book at launch, but Microsoft seems to have sorted them all out. The Surface Book is now easily recommended as a great 2-in-1 if you need something that’s more of a laptop than a tablet. The 13.5-inch 3:2 display with its 3000x2000 resolution is one of the best displays on a laptop, with a sharp resolution and great contrast. Performance is solid too with either a Core i5-6300U or Core i7-6600U, and you can also get discrete NVIDIA graphics with a custom GT 940M. It’s not a gaming powerhouse, but the NVIDIA option is pretty much double the integrated performance. The all-magnesium body gives the Surface Book a great look and feel, and the keyboard and trackpad are some of the best on any Ultrabook as well. The Surface Book is not perfect though; the device is heavier than traditional Ultrabooks and the weight balance makes it feel heavier than it is. Also, there’s the price, which starts at $1349 and goes all the way up to $3199 for a Core i7 with 16 GB of memory, 1 TB of SSD storage, and the dGPU. Still, it’s got solid performance, good battery life, and a great detachable tablet. Recently Microsoft refreshed the Surface Book with a new “Surface Book with Performance Base”, which is a terrible name, but the new model features a much more powerful GPU in the NVIDIA GTX 965M, as well as a larger battery.

Buy Microsoft Surface Book i5/8GB/128GB on Amazon.com

Lenovo Yoga 910

Lenovo pretty much invented the flip-around convertible with their Yoga series, and the latest Yoga 910 takes it all to the next level. It features Kaby Lake processors, up to the Core i7-7500U, along with up to 16 GB of memory, and it keeps the fantastic watch-band hinge introduced on the Yoga 3 Pro. The big upgrade this year is the new displays, with edge-to-edge panels similar to the XPS 13. They’ve increased the panel size from 13.3” to 13.9” and offer both a 1920x1080 IPS panel and a 3840x2160 IPS panel. I would assume this means the RGBW subpixel arrangement is also gone, which should help out a lot on color accuracy and contrast. It is available in three colors and starts at $1299.

Buy the Lenovo Yoga 910 on Lenovo.com

Large Laptops

For some people, a 13.3-inch or 14-inch laptop is just too small. Maybe they need more performance, and the quad-core chips and better discrete GPUs of larger laptops are necessary. Maybe they just like the larger display.
There are some great large form factor laptops available too.Dell XPS 15Dell took the winning formula with the XPS 13 and applied it to their larger XPS 15, and the result is a great looking laptop, which has a 15.6-inch display in a smaller than normal chassis. The latest XPS 15 9560 offers quad-core Kaby Lake CPUs, along with the latest NVIDIA GTX 1050 graphics, which is a big jump in performance over what’s available in any Ultrabook. You can get a UHD display with 100% of the Adobe RGB gamut as well, although the battery life takes a big hit with that many pixels, so the base 1920x1080 offering may be better suited to those that need a bit more time away from the power outlet. The keyboard and trackpad are both excellent, just like the XPS 13, and it features the same styling cues. The XPS 15 starts at $999.Buy Dell XPS 15 9560 i7-7700HQ 16GB 512GB GTX 1050 on Amazon.comApple MacBook Pro 15Apple has kept the same Retina display resolution for the newest MacBook Pro, but improved the color gamut to cover the P3 color space instead of just sRGB. They’ve slimmed the 15-inch model down a lot, making it only four pounds, and they’ve embraced the next generation of IO with USB-C and Thunderbolt 3. Unfortunately, they’ve completely abandoned the USB-A ports though, so be prepared to grab USB-C versions of any peripherals you may need.The 15-inch MacBook Pro launched with Skylake quad-core CPUs, and features an AMD Polaris GPU that can drive up to six displays, or two of the new 5K displays that were announced alongside it, in addition to the laptop panel. Combined with the low profile and weight, the latest generation MacBook Pro packs a lot of performance into relatively little space.Apple has moved to the butterfly switch keyboard on this model as well, and they’ve added a touch bar instead of the function keys. I’ll reserve judgement on that for the time being, as Ryan is still wrapping up our full review, but it's definitely a major change. It’s early days though and Apple always has great developer support for these sorts of things. The MacBook Pro was in desperate need of a refresh, and although they didn’t hit on everyone’s wants, if you’ve been in the market for a new macOS device and the MacBook wasn’t performant enough for you, the new MacBook Pro 15 is the best you can get at the moment.Buy Apple MacBook Pro 15 on Amazon.com
Qualcomm Tweaks Snapdragon Brand: No Longer a Processor, Instead a Platform
While all eyes are on Qualcomm for the impending release of devices containing their high-end Snapdragon 835 SoC, this morning the company has a slightly different kind of announcement to make. After nearly a decade since the launch of the Snapdragon brand, Qualcomm is undergoing a brand redesign of sorts ahead of their next-generation product launches. Starting today, Snapdragon is no longer a processor; instead Snapdragon is a platform.More formally, Qualcomm will no longer be referring to Snapdragon as the “Snapdragon Processor”, but rather the “Qualcomm Snapdragon Mobile Platform”. Meanwhile at the bottom end of the product stack, the Snapdragon 200 series are getting ejected from the Snapdragon family entirely; they will now simply be part of the “Qualcomm Mobile Platform” family.This rejiggering of brand names is, in all seriousness, exactly as weird as it sounds. But Qualcomm has some reasonably thought-out logic behind it.In the US and abroad, Qualcomm has been promoting the Snapdragon brand in various forms for several years now, and they’ve actually had a fair bit of success at it. Snapdragon may not be a household name, but it’s likely to be better known than Qualcomm. So there is a certain degree of emphasis on making sure Qualcomm doesn’t get overtaken by their own brand name.But the bigger shift here – and the real meat of the story – is from a processor to a platform. To be clear, the hardware isn’t changing; a Snapdragon SoC is still a Snapdragon SoC, and the Snapdragon brand continues to refer to the hardware and its supporting bits.However Qualcomm wants to emphasize that a Snapdragon SoC is more than its CPU. It is a collection of various bits and bobs: a CPU, a GPU, a DSP, a cellular modem, RF transceivers, not to mention the various pieces of software and drivers that Qualcomm develops for their SoCs. Consequently, Qualcomm feels that “platform” is a better all-encompassing word of what they do than “processor”.And they’re not wrong, at least to an extent. While we have various kinds of processors (CPUs, GPUs, etc), “processor” is first and foremost thought of as a CPU. This is a low-level liability for a company that is definitely in competition with Intel, and yet their flagship product is a full-on System-on-Chip rather than discrete components like a CPU with integrated GPU.Furthermore while Qualcomm develops their own semi-custom CPU (Kryo), what really sets them apart from even other SoC vendors are the fully custom non-CPU bits like the modem and GPU. At the end of the day, Qualcomm wants to get more attention and focus on the hardware blocks they have developed and believe give them the greatest edge over the competition. And if they can better differentiate what they do from Intel, all the better.The risk for Qualcomm, besides any potential derailment of the Snapdragon brand, is that “platform” is badly overused across the tech industry these days. Windows is a platform, Twitter is a platform, Steam is a platform. Whereas “processor” was a generic term for a specific part of a computer, “platform” is a generic term for just about any kind of computing environment. So while platform is probably a better fit for an SoC, it’s definitely also more generic.Finally, let’s talk about the 200 series of SoCs, which are now no longer Snapdragon, leaving them as the “Qualcomm Mobile Platform”. While Qualcomm is taking this action at the same time as the above platform rename, the rationale is a bit different. 
Qualcomm is looking to solidify the Snapdragon brand as a brand for high-end processors, and as a budget SoC line powered by ARM’s Cortex-A7 CPU cores and 802.11n networking, the 200 series definitely doesn’t fall under that umbrella. The fact that Qualcomm is not branding the 200 series as something else does, on the surface, feel a bit odd, but at the same time it wouldn’t make much sense to put money and energy behind promoting a low-end brand.In any case, by removing the 200 series from the Snapdragon brand, Qualcomm will be throwing out the lowest performing member of the family. Which, if all goes according to plan, will make it easier for Qualcomm to better communicate that Snapdragon is a high-end brand.
Seagate Announces Enterprise Capacity 12 TB HDD: 2nd-Gen Helium-Filled Hard Drives
Seagate has introduced its second generation of helium-filled HDDs. These drives are aimed at capacity-demanding enterprise and cloud applications, and store up to 12 TB of data. The new drive uses eight platters, which is more than the first-generation model, but its power consumption remains below typical air-filled HDDs. The new capacity point from Seagate should enable customers to increase the amount of data they store per standard rack by 20% when compared to previous-gen models.The Seagate Enterprise Capacity v7 3.5-inch HDDs are based on the company’s seventh-gen enterprise-class platform, with multiple features designed to reduce the number of errors, as well as reducing the vibration impact on internal components and improving the security and the endurance of the device. Traditionally such drives have more robust mounting mechanisms for internal components anyway, such as the motor, and various vibration and environmental sensors to guarantee predictable performance and reduce risks. In addition, the new HDDs support PowerChoice technology that helps to manage idle power consumption. The new PowerBalance tech enables operators of datacenters to balance power consumption and IOPS performance of the hard drives. When compared to the previous-gen Enterprise Capacity HDDs, the new ones support RSA2048-signed firmware with a secure download and diagnostics (SD&D) feature that prevents unauthorized access, modification or installation of tampered firmware.The new Enterprise Capacity v7 3.5-inch 12 TB HDD has eight perpendicular magnetic recording (PMR) platters, each with a 1.5 TB capacity. This comes with 16 heads, and the drive rotates at 7200 RPM. Cache is listed as 256 MB for each drive. Due to higher areal density and some other optimizations, the new-gen enterprise HDDs have up to a 261 MB/s maximum sustained transfer rate, which is a little bit higher than the helium-filled drives introduced last year. The random write performance of the new drives is also slightly higher when compared to that of their predecessors (it's still worth noting that 400 IOPS is behind that of even entry-level SSDs by orders of magnitude). Moreover, despite the addition of a platter, the maximum operating power of the new Seagate Enterprise Capacity Helium HDDs seems to be similar when compared to that of the first-gen helium hard drives, at around 8 W - 9 W (see the table for details). At the same time, the average idle power consumption of the new HDDs is slightly higher when compared to that of their predecessors.Comparison of Seagate's Helium-Filled HDDs: Seagate Enterprise Capacity v6
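The headline figures are easy to sanity-check from the numbers above. Here is a minimal back-of-the-envelope sketch in Python, under the assumption (not stated by Seagate here) that the previous-generation helium drives topped out at 10 TB and that the number of drive bays per rack stays constant:

```python
# Rough sanity check of Seagate's capacity figures (illustrative only).
platters = 8
capacity_per_platter_tb = 1.5
drive_capacity_tb = platters * capacity_per_platter_tb          # 12.0 TB
print(f"Per-drive capacity: {drive_capacity_tb:.1f} TB")

# Rack-density claim: +20% vs. the previous generation.
# Assumption: prior-gen helium drives were 10 TB; bays per rack held constant.
prev_gen_tb = 10.0
gain = drive_capacity_tb / prev_gen_tb - 1
print(f"Capacity gain per rack: {gain:.0%}")                     # 20%
```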
GIGABYTE GB-BKi7HA-7500 Kaby Lake BRIX Review
The emergence of power-efficient high-performance processors has created a bright spot in the desktop PC market. The ultra-compact form factor (UCFF) heralded by the Intel NUCs has experienced rapid growth over the past few years. GIGABYTE, with their BRIX lineup, was one of the first vendors to introduce NUC clones. They went beyond the traditional Intel models and provided plenty of choices to the end users. GIGABYTE has also kept up with Intel's release cadence and updated the BRIX lineup after the launch of new U-series CPUs. Today, we are taking a look at the GB-BKi7HA-7500 - a BRIX based on the Kaby Lake Core i7-7500U, with support for a 2.5" drive, and sporting an ASMedia bridge chip for USB 3.1 Gen 2 support.
AMD Releases Radeon Software ReLive Crimson Edition 17.3.2
Continuing their momentum in update frequency, we have another driver release from AMD. This latest update is a minor "point update" for the month, and it includes only two fixes and game support. However, that game is the highly anticipated Mass Effect: Andromeda.In our short list of fixes today, we have a fix for texture corruption in The Division seen while running DX12, and an oddly specific fix for texture flickering and black screens found in For Honor when performing task switching in 4-way CrossFire system configurations. AMD's last driver update was only Monday of last week, so I expect more of that lengthy “Known Issues” list to be worked through in due time.Likely more important for many, support for Mass Effect: Andromeda comes bundled in with this update as well. AMD is claiming performance gains of up to 12% on a Radeon RX 480 with an i7-6700K and 8GB of DDR4-2666 when compared with Radeon Software Crimson ReLive Edition 17.3.1 (the footnote claims that rise is from 53.7 to 60.1 FPS). Meanwhile, in addition to officially supporting Mass Effect, and likely contributing to any performance gains, AMD has added an Optimized Tessellation Profile for the game.As always, those interested in reading more or installing the updated hotfix drivers for AMD’s desktop, mobile, and integrated GPUs can find them either under the driver update section in Radeon Settings or on AMD's Radeon Software Crimson ReLive Edition download page.
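The footnoted frame rates line up with the headline claim; a quick check of the quoted Mass Effect: Andromeda numbers:

```python
# Percentage gain implied by AMD's footnote (53.7 -> 60.1 FPS on an RX 480).
before_fps = 53.7
after_fps = 60.1
gain = (after_fps - before_fps) / before_fps
print(f"Speedup: {gain:.1%}")   # ~11.9%, i.e. the advertised "up to 12%"
```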
AMD Announces Ryzen 5 Lineup: Hex-Core from $219, Available April 11th
As part of our initial Ryzen 7 review, AMD also teased the presence of two more elements to the Ryzen lineup, specifically Ryzen 5 and Ryzen 3, both aiming at a lower cost market and allowing AMD to sell some of the silicon that didn’t quite make it to the Ryzen 7 lineup. Today is the official announcement for Ryzen 5, featuring four processors in hex-core and quad-core formats, all with Simultaneous Multi-Threading (SMT) and all using the same AM4 platform as Ryzen 7.Ryzen 5Whereas Ryzen 7 was AMD’s main attack on high-performance x86 and a shot across the bow against Intel’s high-end desktop platform, Ryzen 5 is targeted more at mainstream users. The goal here is that where Intel has four cores with no hyperthreading, AMD can provide six cores with SMT, effectively offering three times as many threads for the same price and potentially smashing any multithreaded workload.Without further ado, here is where the Ryzen families stand:AMD Ryzen 7 SKUsCores/
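The "three times as many threads" positioning is simple arithmetic; a tiny sketch comparing a hex-core SMT part against a quad-core chip without Hyper-Threading:

```python
# Thread count comparison behind AMD's positioning claim.
ryzen5_threads = 6 * 2    # six cores, two threads each with SMT
intel_threads  = 4 * 1    # four cores, no Hyper-Threading
print(ryzen5_threads, intel_threads, ryzen5_threads // intel_threads)  # 12 4 3
```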
Spreadtrum SC9861G-IA: An Intel Atom Octocore Smartphone SoC on 14nm with LTE
Last year Intel decided to cease development of its smartphone SoCs and focus instead on microprocessors for other devices, LTE and 5G modems, and various IoT solutions. While we weren't expecting a new x86 SoC in the space, Intel did not specify that this would be the case: the agreements with third-party SoC developers such as Spreadtrum and Rockchip were still in place. Despite this, we were surprised to hear that at MWC 2017, Intel’s partner Spreadtrum introduced a brand new application processor for high-end handsets, featuring Intel’s 2015 Airmont cores (as seen in Cherry Trail) and made using Intel’s 14 nm process technology.The Spreadtrum SC9861G-IA SoC features eight of Intel’s Airmont cores running at up to 2 GHz, paired with Imagination Technologies' PowerVR GT7200 GPU. Also integrated is Spreadtrum's own 5-mode LTE Cat 7 modem (up to 300 Mbps download, up to 100 Mbps upload). The SoC also integrates an ISP that supports up to two 13 MP camera sensors, a dedicated sensor hub, and hardware-based decoders/encoders for HEVC and other popular video codecs that support up to 3840×2160 resolution. The display controller can handle resolutions up to 2560×1600.Spreadtrum's 8-Core Airmont SoC (SC9861G-IA): CPU Cores: 8 × Intel Airmont at up to 2 GHz; GPU: PowerVR GT7200; Imaging Capabilities: Up to 26 MP,
AppliedMicro's X-Gene 3 SoC Begins Sampling: A Step in ARM's 2017 Server Ambitions
There has been a lot of recent movement in the ARM server SoC space, with three major players. The third player, AppliedMicro, has been acquired by MACOM. MACOM has announced that the third-generation 16-nanometer FinFET Server-on-a-Chip (SoC) solution, X-Gene 3, is sampling to "lead customers". Despite all the products so far on ARMv8, the server world continues to mature and to move forward.The AppliedMicro X-Gene 3Back in 2015, we reviewed the 40 nm 8-core X-Gene 1 (2.4 GHz, 45W), which found a home in HP's Moonshot servers. Performance-wise the SoC was on par with the Atom C2750 (8 cores @ 2 GHz), but consumed twice as much power, which led to an overall negative conclusion in our review. The power consumption issue was understandable: it was baked on a very old 40 nm process. But the performance was rather underwhelming, as we expected more from a 4-issue superscalar processor at 2.4 GHz. The Atom core, by comparison, was only a dual-issue design and offered similar performance at a lower frequency.Moving forward, we got the X-Gene 2. This was a refresh of the first design, but built on 28 nm. It was still at 2.4 GHz, but with a lower power consumption (35 W TDP) and a smaller die size of around 100 mm². Despite the relatively lackluster CPU performance, the overall efficiency increase meant that the X-Gene 2 did find a home in several appliances where CPU performance was not the top priority, such as switches and storage devices.MACOM, the new owners of the X-Gene IP, claim that the new X-Gene 3 is a totally different beast. The main performance claim is that it should be >6 times faster in SPECintRate than X-Gene 1 or 2. That performance increase is mostly because the new SoC has 4 times as many cores: 32 rather than 8. Besides the 32 ARMv8-A 64-bit cores in X-Gene 3, it will also include eight ECC-capable DDR4-2667 memory channels, supporting up to 16 DIMMs (max. 1 TB), and 42 PCIe Gen 3.0 lanes.MACOM's reference X-Gene 3 platform has everything working at near full speed: all 32 cores are functional and run as fast as 3.3 GHz. The SoC design provides 32 MB of L3 cache through a coherent network, which we are told runs 'at full speed'. PCIe, USB and integrated SATA ports all work at full speed as well. Memory is initially limited to 2400 MT/s instead of 2667 MT/s, but considering that the current memory market only offers buffered DDR4 DIMMs at 2400, that is not an immediate issue.That set of specifications is impressive, but if the X-Gene 3 really wants to be a "Cloud SoC", performance has to be competitive. We look forward to testing.The ARM CompetitionThe other two players are Cavium and Qualcomm.Cavium has been on a buying spree as of late, acquiring Broadcom's Vulcan IP and also QLogic, a network/storage vendor. If Cavium can inject all that IP into its ThunderX server SoC line, its next generation could be a very powerful contender.Qualcomm will have its 48-core Centriq 2400 SoC ready by the second half of this year, and it will run Windows Server.Predicted Performance Analysis: Xeon-D AlternativeThe only performance figures for X-Gene 3 we have seen so far are the ones found in a Linley Group white paper.
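Separately, MACOM's >6x SPECint_rate claim can be roughly decomposed from the figures above. The sketch below naively assumes throughput scales linearly with core count and clock speed (it ignores memory bandwidth, uncore changes and SPECrate scaling effects), so treat it as illustrative only:

```python
# Rough decomposition of the ">6x SPECint_rate vs. X-Gene 1/2" claim (illustrative).
core_scaling = 32 / 8          # 4x more cores
freq_scaling = 3.3 / 2.4       # ~1.375x higher clock than X-Gene 1/2
claimed      = 6.0
# Residual improvement that would have to come from per-core IPC / uncore changes,
# under the naive assumption of linear scaling with cores and clocks.
residual = claimed / (core_scaling * freq_scaling)
print(f"Implied per-core (IPC-level) gain: ~{residual:.2f}x")   # ~1.09x
```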
HiSilicon Kirin 960: A Closer Look at Performance and Power
HiSilicon looks to build on the Kirin 950’s success by adopting ARM’s latest A73 CPU cores and Mali-G71 GPU for the Kirin 960. Still manufactured on a 16nm FinFET process, the Kirin 960 also packs twice as many GPU cores as its previous SoC. Let’s take a closer look at how these changes affect performance and power consumption.
MWC 2017: AGM Preparing IP68 Rated Snapdragon 835 Smartphone with 8GB DRAM
AGM may not be a household name for the vast majority of people living in Europe and the U.S., but it is a well-known maker of rugged phones that sells its products in many countries around the world. The company’s lineup currently includes six IP68-rated models, and in order to attract the attention of the mass market, AGM is preparing its new X2 model that weds a rugged design with the latest technology. At MWC 2017 this year, the company announced its new flagship that will be based on Qualcomm’s Snapdragon 835 and will rival leading-edge smartphones from big makers.The Old X1To put this into context, AGM’s current X1 flagship is based on the Qualcomm Snapdragon 617 (eight ARM Cortex-A53 cores, Adreno 405, X8 LTE, etc.). The X1 is equipped with up to 4 GB of DDR3 RAM, up to 64 GB of eMMC NAND, two 13 MP back-facing cameras, a 5400 mAh battery and comes with a 5.5” FHD AMOLED display featuring Gorilla Glass 3. The phone is IP68-rated against dust and immersion in water (over 1 m depth, with other conditions specified by the manufacturer), yet it looks considerably neater than typical rugged designs. While AGM’s phones come in rugged enclosures and can survive situations where other handsets might fail, none of them are MIL-STD 810G-graded.Gallery: AGM X1The New X2While the AGM X1 is positioned by the manufacturer as an affordable rugged phone for extreme sports and other outdoor activities, it is definitely not a phone from the premier league. In the coming months (in mid-2017) AGM plans to introduce its X2, which will be positioned as a premium smartphone and will feature Qualcomm’s latest Snapdragon 835 SoC (10nm, eight new Kryo cores, Adreno 540, X16 LTE, LPDDR4X, etc.). This is along with 8 GB of DRAM, 256 GB of NAND, two cameras, a ~6000 mAh battery (there will also be the AGM X2 Pro with a 10,000 mAh battery), an omni-bearing ambient sensor and so on. Based on official images of the AGM X2 (originally published by AGM and AndroidHeadlines), it is possible that the phone has four antennae and thus supports 4x4 MIMO, one of the three features required for Gigabit LTE.The AGM X2 will be one of the first IP68-rated Snapdragon 835-based smartphones with a rugged design. Meanwhile, for AGM, this will be its debut in the market of premium smartphones that compete against Apple’s iPhones or Samsung’s Galaxy S-series. The price of the AGM X2 is unknown, but it will likely vary significantly depending on the store and the region. For example, the AGM X1 can be bought for $260 in China or for over $480 in the U.S.
MWC 2017: Netgear Nighthawk M1 Coming to Europe in Mid-2017, But
Earlier this year Netgear introduced its Nighthawk M1 router, which is powered by Qualcomm’s X16 LTE modem and is the first Gigabit LTE router on the market. Right now, the device is available on Telstra’s 4GX LTE network in Australia, but the router made a surprise appearance at the MWC 2017 show and it will actually hit the market in Europe later this year. There is a catch however: there will not be a lot of Gigabit LTE deployments because of technical challenges.The Netgear Nighthawk M1 is based on Qualcomm’s Snapdragon X16 LTE modem (paired with Qualcomm’s WTR5975 RF transceiver) that uses 4×4 MIMO, three-carrier aggregation (3CA) and 256QAM modulation to download data at up to 1 Gbps (in select areas), as well as 64QAM and 2CA to upload data at up to 150 Mbps. The Nighthawk M1 router is designed for those who need to set up an ultra-fast mobile broadband connection but do not want an incoming physical data connection. The router is equipped with Qualcomm’s 2×2 802.11 b/g/n/ac Wi-Fi solution that can connect up to 20 devices simultaneously, using the 2.4 GHz and 5 GHz frequencies concurrently. Generally speaking, the Nighthawk M1 is aimed at mobile workgroups who need a high-speed Internet connection where there is no broadband. In Australia, there are areas where Telstra’s 4GX LTE network is available, whereas regular broadband is not, so the device makes a lot of sense there.Netgear’s Nighthawk M1 router is clearly one of the flagship products offered by the company. Nonetheless, it was still a bit surprising to see the device at the MWC (given its current Australia-exclusive status). When asked about availability in Europe, a representative of the company said that the Nighthawk M1 is coming to Europe this summer and will be available from multiple operators. What this means is that a number of operators from Europe will be ready to deploy Gigabit LTE later this year. Netgear did not say which operators, which geographies, or what pricing to expect right now, because that will depend entirely on the operators that are going to offer the Nighthawk M1 with certain service packages.While there are Gigabit LTE deployments coming, do not expect them to be widespread on 4G networks in the next couple of years. To enable Gigabit LTE, devices and operators have to support 4×4 MIMO, carrier aggregation (CA) and 256QAM modulation. It is not particularly easy to enable 4×4 MIMO and 256QAM modulation because of interference. In fact, far from all networks today even use 64QAM. Moreover, operators have to have enough spectrum and backhaul bandwidth to transfer all the data. Thus, to offer Gigabit LTE, operators have to upgrade their infrastructure both in terms of base stations and backhaul. Some operators may be reluctant to upgrade networks to Gigabit LTE because right now there are not a lot of announced devices featuring the technology, and not all operators have enough customers who need the tech and are prepared to pay for routers like the Nighthawk M1. Despite this, wireless Gigabit networks are coming, first with 4G/LTE in select areas, and then with 5G sometime from 2020 onwards.Even if there are not a lot of Gigabit LTE deployments across Europe this year, the Netgear Nighthawk M1 will still have enough advantages to attract customers seeking a high-end mobile router that can work for up to 24 hours on a charge (it comes with a 5040 mAh battery). In Australia, the Nighthawk M1 is available for less than $300 from Telstra, but we know nothing about the price in Europe.
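To see why all three features have to land at once, here is a very rough scaling estimate starting from the familiar ~150 Mbps of a single 20 MHz carrier with 2×2 MIMO and 64QAM (the LTE Category 4 ballpark). It ignores control overhead and the exact stream configuration operators actually deploy, so it is illustrative rather than a link-budget calculation:

```python
# Very rough illustration of how Gigabit LTE is reached (not a link-budget calculation).
base_rate_mbps = 150          # one 20 MHz carrier, 2x2 MIMO, 64QAM (LTE Cat 4 ballpark)
carriers       = 3            # three-carrier aggregation (3CA)
mimo_factor    = 4 / 2        # 4x4 MIMO vs. 2x2
mod_factor     = 8 / 6        # 256QAM (8 bits/symbol) vs. 64QAM (6 bits/symbol)

peak = base_rate_mbps * carriers * mimo_factor * mod_factor
print(f"Idealized downlink peak: {peak:.0f} Mbps")   # ~1200 Mbps before overhead
```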
GDC 2017 Roundup: VR for All - Pico Neo CV, Tobii, & HTC
Now that I’ve wrapped up the major GDC product launches, I want to spend a bit of time talking about the rest of GDC.The annual show has always been a big draw for game developers and hardware companies alike, and since the end of the Great Recession that process has only accelerated. But without a doubt the fastest growth in terms of developer and vendor presence at the show has been VR. GDC 2016’s VR sessions exceeded any and all expectations – the show management had to scramble to move them to larger spaces because the attendance was so high – and it took all of half a year for VR to become its own stand-alone show as well with the GDC spinoff VRDC. Suffice it to say, the amount of attention being paid and resources being invested in VR is very significant, both for software and hardware developers.So for GDC 2017, I spent an afternoon on the expo floor dedicated to VR meetings, to see what new hardware was on display. While a common theme throughout is that everyone is still looking for the killer app of VR – both in terms of hardware design and the actual must-have game/application – it’s clear that there’s a lot of progress being made for future VR headsets, and that developers aren’t afraid to experiment in the workshop and show off those experiments to the public.Pico Neo CV – Stand-alone VR Headset with Inside-Out TrackingThe first stop was Pico Interactive’s booth, where the company was showing off their Pico Neo CV headset. Pico is one of several companies developing headsets around Qualcomm’s Snapdragon SoCs, for whom VR has become a priority. Already a major force in high-end smartphones, Qualcomm believes that the Snapdragon is a great fit for VR given the mix of portability and high-performance required. As a result the company has gone all-in on VR, dedicating quite a bit of engineering and marketing resources towards helping their customers develop VR headsets and bring them to market.Pico headset, in turn, is one of several headsets in development based around a Snapdragon processor. However more than just being a stand-alone headset for the purposes of on-board processing, arguably Pico’s big claim to fame in the world of VR development is their inside-out position tracking, which is designed to do one better than current VR headsets. Whereas setups like the Samsung Gear VR and various Cardboard headsets primarily rely on inertial tracking, the Pico Neo CV can do true inside-out tracking, fixing itself relative to the outside world on an absolute basis.The advantage to positional tracking is that it allows much greater accuracy, which in turn allows for greater freedom of movement than inertial tracking. You can actually do a lot by interpolating accelerometer and gyroscope data from Inertial tracking, and as a result it’s generally satisfactory for rotation – think 360 degree videos and fixed-position gaming such as Gunjack – but it is ultimately limiting in what can be done with interactive experiences where errors add up. The drawback to absolute tracking is that it normally takes an external camera or beacon of some kind – such as the Vive Lighthouse system – which in the case of stand-alone, untethered headsets is antithetical to their portability.The solution then, as several companies like Pico are playing with, is inside-out tracking. 
In the case of the Pico Neo CV, the company combines the usual gyroscope and accelerometer data with a camera looking at the outside world, using computer vision processing to extract the user’s position relative to the rest of the world. Computer vision is a fairly straightforward solution to the problem – witness the number of self-driving cars and other projects using CV for similar purposes thanks to the explosion in deep learning – but it’s made all the more interesting on a headset given the processing requirements.While the company won’t be shipping the headset until later this year, they already have a prototype up and running, inside-out tracking and all. In my hands-on time with the headset, the positioning of the Neo seemed very accurate; the demo software always reacted to my head position as I felt it should across all six degrees of freedom, and pulling off the headset to check my actual position revealed that I was positioned where (and facing where) I should be. It’s an experience that in principle is no different than using external tracking, but then that’s the point of inside-out tracking: it is meant to be the same thing, but without the external gear.That said, like the first generation of PC headsets, I suspect the Pico Neo CV is going to be a transitional product as the hardware further improves. The camera-based tracking system only updates at 20Hz, meaning there’s 50ms between position updates. Without getting deeper into the headset I’m not sure what the actual input lag is, but the low refresh rate is noticeable if you turn your head quickly. In my experience it’s not nauseating in any way, but like some of the other drawbacks of first-generation VR headsets, there’s clear room for improvement. The headset display itself operates at 90Hz, so it’s a matter of getting tracking operating at the same frequency.Part of the catch, I suspect, is processing power. The Pico Neo CV is based around a Snapdragon 820 SoC, which, although powerful by SoC standards, is now splitting its time between rendering in VR and processing the additional tracking information. Future SoCs are going to go a long way towards helping with this problem.Looking at the rest of the headset, Pico has clearly set out to develop something better than the vast array of cellphone-powered VR experiences out there. Pico has combined their tracking gear and the 820 with a pair of 1.5K displays, so the total pixel count – and resulting DPI – is a lot higher than on a Cardboard or Daydream setup. Along with built-in audio, the Pico Neo CV has everything needed for stand-alone mobile-caliber VR gaming.Pico hasn’t yet announced a precise launch date for the headset, but they expect to start selling it later this year. As one of the first serious efforts at a stand-alone Snapdragon-based headset, it should be interesting to see where these kinds of devices fall into the market, and just how much more Pico can improve the inside-out tracking before the headset’s launch.Tobii – VR Eye TrackingGoing from the inside looking out, let’s talk about the inside looking even further inside. One of the technologies various companies have been investigating for second-generation VR headsets is eye tracking. Besides enabling a more immersive experience, eye tracking could also potentially change how VR rendering works by allowing foveated rendering. 
By using eye tracking to keep tabs on what direction a user is looking, foveated rendering would allow games to efficiently render in a non-uniform fashion, rendering at full quality only where a user is looking, and rendering at a lower level of quality outside of that focus area.But to get there you first need to be able to accurately track users’ eyes, and that’s where Tobii comes in. The company, which focuses on eye tracking for gaming and other applications, has already made a name for itself with their eye-tracking cameras, which are available both stand-alone and integrated into some laptops and displays. The use of external eye tracking has proven a bit gimmicky, but the technology is sound, and VR stands to be a much more useful application.To that end, the company was at GDC showcasing a modified HTC Vive headset with their eye tracking technology installed. The company’s demo was primarily focused on how eye tracking can improve the gaming experience, both as an input method and as a way to add life to avatars, and true to their claims, it worked. The eye tracking implementation in the company’s modified headset was very rapid, to the point that it didn’t feel like it was operating any slower than the headset tracking. And while it took some practice to get used to – it’s a bit jarring at first that where you look actively matters – once I got used to it, it worked very well.But from a technical perspective, perhaps the most impressive part was just how well the company had integrated the eye tracking hardware into the headset itself. While the external cameras were by no means big to begin with, I was surprised just how easily it fit into the prototype. Adding eye tracking did not make the headset feel significantly heavier, and the sensors easily fit into the already limited free space inside the headset. From a hardware perspective, this very much felt like a technology that was already at a point where it could get integrated into a commercial headset tomorrow.Consequently, if Tobii’s technology (or similar eye tracking tech) shows up in second-generation VR headsets, I would not be the least-bit surprised. While I’m not sold on the gaming aspects of the tech – it’s neat, not must-have – it’s the kind of thing where I expect developers would need some time to play with it to really figure out if it’s useful and just what the best use cases are. Otherwise the big use case here is going to be foveated rendering, which is likely going to prove critical for higher resolution VR headsets. The latter is outside of Tobii’s hands, but offering a good eye tracking experience is the first and most important part in making that happen.HTC Vive – Hand on with the Deluxe Audio Strap & TrackerMy final stop for the afternoon was HTC’s private demo room, where the company was showing off some new games and other software technology being developed for the Vive. We’re at least a year too early for second-generation headsets, so the company wasn’t showing off anything new in that respect, but they did have on-hand their new Deluxe Audio Strap and the Tracker device for third-party peripherals. Both of these devices have been previously announced, but this is the first time I’ve had a chance to actually use them.The Deluxe Audio Strap is an interesting device. Despite its plain-sounding name, it’s a lot more than just an audio solution for the headphone-free Vive. In adding earphones, HTC went and radically altered the entire strapping mechanism for the headset. 
As a result the Deluxe Audio Strap not only rectifies one of the competitive drawbacks of the Vive – it requires a pair of headphones/earbuds on top of everything else – but it also greatly improves the fit of the headset. The latter has always been of particular interest to me; the original Vive strap system just never fit my admittedly oversized head very well. So improving this would go a long way towards making the Vive more comfortable to wear over a long period of time.Coming from the original strap system, I’ve found the difference rather pronounced. With the Deluxe Audio Strap installed, the Vive is not only easier to adjust, but it feels a lot more secure as well. The former comes thanks to a small dial (a “sizing dial”) on the back of the harness, which replaces the use of Velcro straps along the sides of the headset. Now you can just turn the dial to adjust the fit of the headset, which is easy enough to do both wearing the headset and with it off. Combined with some other general fitting tweaks HTC has made to the strap, and it feels like the strap they should have had for the headset’s launch last year.Meanwhile the new earphones are similarly impressive. Relative to the Rift HTC has gone with something a little bigger and a little more versatile. The drivers HTC are using are larger than those used in the Rift’s earphones, and should give it a bit more kick in the bass, though that’s something that would need to be tested. The fit of the earphones is also very good; the ratchet mechanism keeps them pushed towards the ears, while it’s easy enough to flip one or both earphones out to hear the world around you (or in my case, the engineer giving the presentation). While I doubt most Vive owners will want to buy the new strap solely on the basis of audio since they already have headphones or another solution, combined with the new strap system, it’s a very compelling offering.Also on display was the Vive Tracker. The external widget is designed to be used with the Vive’s Lighthouse system, allowing for Lighthouse tracking to be added to third party objects. The tracker itself does look a bit weird, owing to its need to match the pitted appearance of the Vive headset that the Lighthouse system is meant to work with, but it does its job well. Besides the obvious use case of third party controllers – which could prove interesting for developers since it’s just the Tracker and not the entire controller being tracked – HTC was also using it for more unusual applications such as attaching it to a camera to allow accurately superimposing recorded footage (i.e. unsuspecting editors) over the rendered game itself.The Deluxe Audio Strap is available immediately for developers and other commercial firms who are buying the Vive Business Edition. Otherwise larger-scale consumer sales will start a bit later this year; HTC is pricing it at $99.99 and pre-orders start on May 2, while HTC will begin shipping it in June. Meanwhile the Tracker will go on sale to developers on the 27 of this month, also for $99.99.Gallery: GDC 2017 Roundup: VR for All - Pico Neo CV, Tobii, & HTC
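To put a number on why the foveated rendering approach described in the Tobii section above is so attractive for higher-resolution headsets, here is an illustrative pixel-budget estimate. The render-target size, fovea radius and peripheral shading rate are all assumptions for the sketch, not figures from Tobii or any headset vendor:

```python
# Illustrative pixel-budget estimate for gaze-centered foveated rendering.
# All numbers below are assumptions for the sketch, not vendor figures.
import math

width, height = 2160, 1200          # assumed combined per-eye-pair render target
fovea_radius_frac = 0.20            # full-quality circle, as a fraction of screen height
periphery_shading = 1 / 4           # periphery shaded at a quarter of full rate

total_px = width * height
fovea_px = math.pi * (fovea_radius_frac * height) ** 2
periphery_px = total_px - fovea_px

effective = fovea_px + periphery_px * periphery_shading
print(f"Shading work vs. uniform rendering: {effective / total_px:.0%}")   # ~30%
```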
MWC 2017: Oppo Demonstrates 5X Optical Zoom for Smartphones
This year at MWC, Oppo showed off a smartphone prototype that used a new implementation of dual cameras to offer a 5X optical zoom. The company did not reveal anything about its actual plans to use it in products, nor did it reveal the cost of the implementation, but it is likely that it will reach the market sometime in the future.Imaging capabilities of smartphones have been evolving rapidly since the introduction of the first handsets with cameras. Throughout the history of camera phones, manufacturers have developed new lens packs, new CMOS sensors and increasingly capable ISPs (image signal processors) in order to improve the capability and/or quality of images. For a while, a number of makers tended to install higher-resolution sensors simply because the 'megapixel number' was easier to explain than the quality of optics or advanced ISPs. A lot has changed in recent years as various smartphone makers have invested in high-end lenses (co-developed with Carl Zeiss, Leica, etc.), developed their own SoCs/ISPs for image processing, and pursued other potential differentiators in a cramped smartphone ecosystem.So at MWC 2017, multiple smartphone manufacturers demonstrated products with dual back-facing sensors (RGB+RGB or RGB+IR) to further improve their photography acumen. One of those was Oppo, using the two sensors to build a portable camera system with a 5X optical zoom in a very different configuration to what we have seen before.Optical zoom is not anything new for smartphones, but Oppo’s approach is a little bit different compared to that used by other makers. The 5X dual camera optical zoom from Oppo relies on two image sensors:
MWC 2017: Panasonic Demonstrates Store Window as a Transparent Screen
At Mobile World Congress this year, Panasonic demonstrated a glass panel that can be turned into a display in an instant. The solution relies on a thin film between the sheets of glass that can quickly change its properties when electricity is supplied, allowing a rear projector to focus and provide an image. The system is currently aimed at retailers that want to attract more attention to their stores and shelves. The company says that the first deployments of the technology are expected this spring.There are typically two ways for stores to attract the attention of those passing by: either put something interesting in the shop window, or replace the window with LCD screens that showcase something appealing. The new solution that Panasonic is showing blends traditional showcases and displays, enabling store owners to have both. The technology behind the solution appears to be relatively simple: Panasonic takes two sheets of glass and puts a special light-control film between them.The film is matte and can be used to display images that are projected onto it using a conventional off-the-shelf projector. But when electricity is applied to the film, it becomes transparent. Similar switchable glass technologies are in frequent use, applying a potential difference across two electrodes embedded in the glass, with the larger particles in the electrolyte between them self-assembling in the presence of an electric charge to allow light to pass through. This ends up being a natural extension of what Panasonic has shown at other recent events regarding large glass projection display technology.At MWC 2017, Panasonic showed a booth with a mannequin wearing a red dress, a pair of black shoes, and a green handbag. The lens of the projector was camouflaged to blend in with the environment. Once the film is “switched”, the 1×2 meter window can be used as a screen, and this is where Panasonic demonstrated a video with a model wearing that exact red dress (albeit with red shoes). The manufacturer says that the resolution of the display depends entirely on the resolution of the projector, but the density of the non-transparent particles as well as the placement of the projector have an effect on the quality too. Meanwhile, since the videos are displayed using a projector, it should not be too hard for stores to set everything up for transparent screens.Panasonic does not reveal the tech behind its smart glass, and as there are multiple types of film that can change their properties when electricity is applied, it is difficult to guess which one is used without an official announcement. What is important here is that the glass can either be a screen or completely transparent. So, unless you stack several panels together, the window will be either a window or a display at any one time, which limits the number of applications that can use the tech.At present, a 1×2 meter wall (XC-CSG01G) is the maximum size of Panasonic’s “transparent screen”, so if someone wants a larger wall, they have to use several panels and projectors in sync. The total cost for a single 1×2 meter display with a control box (XC-CSC01G-A1) like this will be around $3000-$4000 according to a Panasonic rep at the booth (it was not clear whether this includes the projector – it does not sound like it does – and that price is minus a support contract). Panasonic states that the company already has customers interested in these products who are basically ready to accept delivery. 
The high price of Panasonic’s transparent screen glass reflects not only its capabilities but also the fact that everything has to be rugged and work properly across different weather and temperature conditions. Panasonic plans to start selling its “transparent screens” in Japan first and then look for customers in other parts of the world as well.
Intel to Acquire Mobileye for $15 Billion
In an interesting announcement today, Intel and Mobileye have entered into an agreement whereby Intel will commence a tender offer for all issued and outstanding ordinary shares of Mobileye. At $63.54 per share, this will equate to a value of approximately $15 billion.Mobileye is currently one of a number of competitors actively pursuing the visual computing space, and the top item on that agenda is automotive. We’ve seen Mobileye announcements over the last few years, with relationships with car manufacturers on the road to fully autonomous vehicles. Intel clearly wants a piece of that action, alongside its own movement into automotive as well as the cloud computing required for various automotive tasks.Intel estimates the vehicle systems, data, and services market for automotive to have a value of around $70 billion by 2030, including everything from the edge through backhaul into the cloud. This includes predictions such as 4TB of data per vehicle per day being generated, which is going to require planning in infrastructure. Intel’s expertise in elements such as the RealSense technology and high-performance general compute will be an interesting match to Mobileye’s portfolio.
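That 4 TB-per-vehicle-per-day prediction translates into a surprisingly large sustained data rate, which is presumably why Intel keeps pointing at infrastructure planning. A quick conversion, assuming decimal terabytes and the data spread evenly across a 24-hour day (which understates the peak rate during actual driving):

```python
# What 4 TB of data per vehicle per day means as a sustained rate (illustrative).
bytes_per_day = 4e12                      # 4 TB, decimal terabytes
seconds_per_day = 24 * 3600
sustained_bps = bytes_per_day * 8 / seconds_per_day
print(f"Sustained rate: {sustained_bps / 1e6:.0f} Mbit/s")   # ~370 Mbit/s around the clock
```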
Infineon Shows Off Future of eSIM Cards: <1.65 mm2 using 14 nm FinFET
At MWC this year, Infineon showcased a lineup of its current and upcoming embedded SIM products. The company demonstrated not only the industry-standard MFF2 eSIM chip, but also considerably smaller ICs designed for future miniature devices (many of which may not even exist yet as a category) as well as M2M (machine to machine) applications. It is noteworthy that to manufacture an eSIM the size of a match head, Infineon uses GlobalFoundries' 14LPP process technology, taking advantage of leading-edge lithography to bring the size of a simple device down.The first SIM cards were introduced in 1991 along with the world’s first GSM network operated by Radiolinja in Finland (now the company is called Elisa). Back then, mobile phones were so bulky that a card the size of a credit card (1FF) could fit inside. Eventually, handsets got smaller, Mini-SIMs (2FF) replaced full-sized SIMs, and then Micro-SIM (3FF) and Nano-SIM (4FF) cards took over. While mobile phones have evolved considerably in terms of feature set in the last 25 years, the function of the SIM card has remained the same: it stores an integrated circuit card identifier (ICCID), an international mobile subscriber identity (IMSI), a location area identity, an authentication key (Ki, this part actually requires a basic 16- or 32-bit compute unit) as well as a phone book and some SMS messages.By today’s standards, the amount of data that each SIM card stores is so tiny that its physical dimensions are simply not justified. Even Nano-SIMs are too large for applications like a smartwatch, and this is where embedded SIMs come into play: their form factor is considerably smaller, they can be used with various operators (which makes them more flexible in general) and some such cards have an expanded feature set (e.g., hardware crypto-processors). Today, there is one internationally recognized form factor for eSIMs, the MFF2, which is used inside devices like Samsung’s Gear series smartwatches with GSM/3G connectivity. If we actually take a look inside the Gear S2 smartwatch, we will notice that the eSIM is actually one of the largest components and its functionality is disproportionate to its dimensions.At MWC 2017 Infineon is demonstrating two more eSIM implementations, which have not been standardized (yet?), but which are already used inside millions of devices.The first one, when packaged, has dimensions of 2.5×2.7×0.5 mm, which essentially means that it has no packaging at all. This IC is produced using a mature 65 nm process technology, which means that it is very cheap. The second eSIM implementation that Infineon demonstrates is actually even tinier: its dimensions when fully packaged and ready to use are just 1.5×1.1×0.37 mm. The IC is made using 14LPP process technology by GlobalFoundries, and the foundry charges the chip developer accordingly. Using a leading-edge process technology to make eSIM cards is not common, but the approach enables developers of various devices to take advantage of the smallest cards possible (other advantages of such cards are low voltages and power consumption).It remains to be seen when the industry formally adopts eSIM standards smaller than the MFF2, but the dimensions of the eSIMs that Infineon is demonstrating clearly indicate that there are ways to make these cards smaller. Moreover, companies that are not afraid of using proprietary/non-standard form factors are already using the offerings from Infineon. 
It is up for debate whether using a leading-edge process technology for making eSIMs makes sense in general (after all, far from all devices require the tiny dimensions and expanded functionality of such eSIMs, crypto-processors included), but with 10 million non-standard eSIMs shipped to date it is obvious that there are mass-market devices that can absorb such chips even at potentially premium pricing.
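Putting the quoted package sizes side by side shows how much board area the 14LPP part saves. The MFF2 footprint below is an assumption (the standard package is commonly listed at roughly 6 × 5 mm) rather than a figure from the article:

```python
# Footprint comparison of the eSIM packages mentioned above (areas in mm^2).
packages = {
    "MFF2 (assumed ~6 x 5 mm)":        6.0 * 5.0,    # assumption, not from the article
    "Infineon 65 nm (2.5 x 2.7 mm)":   2.5 * 2.7,
    "Infineon 14LPP (1.5 x 1.1 mm)":   1.5 * 1.1,
}
smallest = min(packages.values())
for name, area in packages.items():
    print(f"{name}: {area:.2f} mm^2  ({area / smallest:.1f}x the smallest)")
```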
SD Association Announces UHS-III (up to 624 MB/s), A2 Class, LV Signaling
The SD Association has made three important announcements in the course of the past couple of weeks. First is the introduction of its UHS-III bus that increases the potential maximum throughput of SD cards. Second is the announcement of the Application Performance Class 2 standard, for devices that meet a new set of criteria relating to IOPS. Third is a new Low-Voltage Signaling specification (LVS) that has the potential to reduce the complexity and power consumption of future applications featuring SD memory cards. Both the A2 and the LVS are part of the SD 6.0 specification.UHS-III: Up to 624 MB/sAs 4K, 8K and 360° content become more widespread, the performance requirements for storage on cameras and similar devices are ever increasing. To support them, the SDA is introducing a new UHS-III interface bus that increases potential read/write bandwidth up to 624 MB/s (double that of UHS-II). The UHS-III high-speed interface signals are assigned to the second row of SD card pins that are also present on UHS-II cards, which means that the upcoming UHS-III cards will be backward compatible with UHS-II and UHS devices as well as any other SD hosts.Comparison of UHS Bus Performance: UHS-I: 50 - 104 MB/s; UHS-II: 156 MB/s full duplex
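A concrete way to read those bus speeds is the time needed to move a full card's worth of footage. The sketch below is bus-limited and uses a hypothetical 256 GB card as the example; real cards will be slower, since the NAND rather than the interface usually limits sustained transfers, and the 312 MB/s figure for UHS-II is the half-duplex rate implied by "double that of UHS-II":

```python
# Time to transfer 256 GB over each UHS bus at its headline rate (bus-limited, illustrative).
card_gb = 256
for bus, rate_mb_s in [("UHS-I", 104), ("UHS-II", 312), ("UHS-III", 624)]:
    seconds = card_gb * 1000 / rate_mb_s
    print(f"{bus:8s} {rate_mb_s:4d} MB/s -> {seconds / 60:.1f} minutes")
```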
Sony Demonstrates Concept Xperia Ear Headphones and Xperia Touch Android Projector at MWC
As we move through our MWC meeting writeup backlog this week, one of the interesting developments we saw was from Sony. Apart from new smartphones, Sony showed two interesting devices at MWC 2017: the Xperia Ear Open-Style Concept as well as the Xperia Touch Android projector. Both devices use a number of Sony’s proprietary technologies, and Sony states that their usage models differ from what we expect to see from today’s devices. The open-style headphone is officially a concept device, with Sony wanting feedback, whereas the latter is a product that is about to ship.The Xperia Ear Open-Style Concept: Headphones That Let You Listen to the Outside WorldSony showcased its wireless stereo headphones, called the Xperia Ear Open-Style Concept, on the show floor. The headphones enable users to listen to music and receive notifications from their apps, while also hearing sounds from the outside world. Sony’s reasoning is that people wearing regular headphones may not be aware of what is happening in the periphery – they may not know that a car or something heavier is incoming – and they cannot be warned because their ears are busy.Sony does not go too deep into explaining how the Xperia Ear Open-Style concept device works, saying only that it has two spatial acoustic conductors and driver units transmitting sound to the ear canal. Based on the look of two of Sony's prototypes (one was demonstrated at MWC, another was shown in images from Sony's labs), the driver units seem to be rather large and it is unclear whether the company can make them considerably smaller. Moreover, keep in mind that everything is wireless, which adds its own complexities (e.g., power consumption and the stability of the connection to the audio source).Sony compares its Xperia Ear Open-Style to its Xperia Ear headset, so we are talking about a device that connects to an Android smartphone and supports Sony’s Agent assistant. Based on Sony’s and our own pictures, the Xperia Ear Open-Style is rather huge, which may indicate that it has more compute in there than just for audio, perhaps teasing other functionality. If the concept goes into production, it will likely be a bit smaller, but exact dimensions are something that even Sony most probably does not know right now.Gallery: Sony Xperia Ear Open-Style Concept at MWC 2017The Xperia Touch: Android Apps Outside the PhoneAndroid apps run on Android-based smartphones, tablets or Chromebooks, which makes it hard to use them collaboratively unless you happen to own one of those 32-inch table(t)s. Sony wants to change that with its Xperia Touch projector. The device not only projects images but also senses interactions with them, making any flat surface a 23” touchscreen. It can also be used to project onto an 80” wall. Sony says that this is a useful tool for collaborative family entertainment, but it could also be used for collaborative work or in public venues such as cafés.The Sony Xperia Touch is a fully fledged Google Android 7.0-based computer, built around an unspecified SoC and equipped with 3 GB of LPDDR3, 32 GB of eMMC storage, sensors (e-compass, GPS, ambient light, barometer, temperature, humidity, and human detection that also recognizes certain gestures), communication capabilities (802.11ac, Bluetooth 4.2, NFC, USB Type-C, HDMI), a microphone that can be used for voice commands, stereo speakers, as well as a battery (which lasts for one hour at half brightness). 
The display system uses a laser diode-based 0.37" SXRD LCD shutter projection with a 1366x768 resolution, 100 nits brightness and a 4000:1 contrast ratio. To detect what users do, the Xperia Touch uses an IR sensor and Sony Exmor RS RGB sensors that capture images at 60 fps.Sony originally introduced its touch-sensing projector at MWC 2016 a year ago, but did not share any details about availability and pricing because it was a prototype back then. In one year, the device has evolved into a commercial product that Sony plans to start selling this spring in select markets in Europe for €1599. Given the relatively low resolution, the lack of a significant library of consumer-grade software designed with the Xperia Touch in mind, a rather short battery life and the high price, the projector is hardly aimed at a mainstream audience at this point. For the time being, this product will be aimed at companies and individuals who already have ideas about how to use it.Gallery: Sony Xperia Touch Projector Hands-On at MWC 2017
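The quoted resolution and projection sizes imply a fairly coarse pixel density, which helps explain the "relatively low resolution" caveat; a quick calculation:

```python
# Approximate pixel density of the Xperia Touch's projected image.
import math
w_px, h_px = 1366, 768
diag_px = math.hypot(w_px, h_px)
for diag_in in (23, 80):                      # 23" table mode and 80" wall mode
    print(f'{diag_in}" projection: ~{diag_px / diag_in:.0f} PPI')
```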
The Chuwi LapBook 14.1 Review: Redefining Affordable
In this industry, it is all too easy to focus only on the high end of the PC market. Manufacturers want to show off their best side, and often provide samples of high-end, high-expense devices more than their other offerings. While these devices are certainly exciting, and can set the bar for how products should perform, this leaves a definite gap in coverage of the other end of the market. When Chinese manufacturer Chuwi reached out with an opportunity to take a look at the Chuwi LapBook 14.1, it was a great chance to see how this market has evolved over the last several years, and to see how another manufacturer tackles the inescapable compromises at this end of the market. The Chuwi LapBook 14.1 offers a lot of computer for the money.
The NVIDIA GeForce GTX 1080 Ti Founder's Edition Review: Bigger Pascal for Better Performance
Unveiled last week at GDC and launching tomorrow is the GeForce GTX 1080 Ti. Based on NVIDIA’s GP102 GPU – aka Bigger Pascal – the job of GTX 1080 Ti is to serve as a mid-cycle refresh of the GeForce 10 series. Like the GTX 980 Ti and GTX 780 Ti before it, that means taking advantage of improved manufacturing yields and reduced costs to push out a bigger, more powerful GPU to drive this year’s flagship video card. And, for NVIDIA and their well-executed dominance of the high-end video card market, it’s a chance to run up the score even more.
Microsoft Details Project Olympus Open Compute Standard
Today, at the 2017 Open Compute Project U.S. Summit, Microsoft unveiled some significant announcements around their hyperscale cloud hardware design, which they first announced in November as Project Olympus. With the explosion of growth in cloud computing, Microsoft is hoping to reduce the costs of their Azure expansion by creating universal platforms in collaboration with the Open Compute Project. Project Olympus is more than just a server standard though. It consists of a universal motherboard, power supplies, 1U and 2U server chassis, power distribution, and more. Microsoft isn’t the first company to want to go down this road, and it makes a lot of sense to cut costs by creating standards when you are buying equipment on the level of Azure.The company made several big announcements today, with the first one coming somewhat as a surprise, but when you read between the lines, it makes a lot of sense. Microsoft is partnering with Qualcomm and Cavium to bring ARM based servers to Azure. This is a pretty big shift for the company, since they have focused more on x86 computing, and changing to a new ISA is never a small task, so Microsoft is clearly serious about this move.Microsoft Distinguished Engineer Leendert van Doorn expanded on why the company is exploring this option in a blog post today. Clearly ARM has made some progress in the server world over the last few years, and Microsoft feels it’s the right time to bring some of that capability to their own datacenters. I think one of the key takeaways is that Microsoft wants to shape the hardware capabilities to the workload, and with an open platform like ARM, this can make a lot of sense for certain workloads. He points out that search and indexing, storage, databases, big data, and machine learning are all named as potential workloads, and in cloud computing, these are all significant in their own right.Qualcomm Centriq 2400 PlatformMicrosoft already has a version of Windows Server running on ARM, and they’ve announced that both of their partners will be demonstrating this internal-use port of Windows Server: first Qualcomm, with their Centriq 2400 processor featuring 48 cores built on Samsung’s 10nm FinFET process. Cavium will be running it on their second generation ThunderX2 platform. Our own Johan De Gelas did a thorough investigation of the original ThunderX platform in June 2016 and it is certainly worth a read. The takeaways were that Cavium needed to do a lot of work on power management, and they had some big performance bottlenecks, so they offered inferior performance per watt compared to a Xeon D, but better-than-advertised single-threaded performance, with SPEC 2006 results only 1/3 of the Xeon's rather than the 1/5 that was advertised. If Cavium has fixed some of the issues, especially power consumption, the new ThunderX2 might be a compelling solution for specific tasks.Cavium ThunderX2 PlatformThat is really the kicker though. The ARM platform, if properly executed, should be a good solution for some specific tasks, especially if Microsoft can work with the platform makers to shape the hardware to fit those tasks while staying more general purpose than an ASIC, but at this time it’s unlikely to be a serious threat to Intel’s monopoly on the datacenter. Intel has a pretty sizeable advantage in IPC, and especially on single-threaded workloads, so x86 isn’t going anywhere yet. 
What really matters is how Qualcomm and Cavium can execute on their platforms, and where they price them, since the end goal for Microsoft with this change is certainly, at least to some extent, to put pressure on Intel’s pricing for its datacenter equipment.

Back on the x86 side, Microsoft had announcements as well. AMD will be collaborating with Microsoft to bring Naples, its new server processor based on the “Zen” architecture, to Project Olympus. Although much of the news today has been around the ARM announcement, this is arguably the bigger play. Ryzen has already shown it is very competitive with Core, and Naples could be very strong competition for Xeon. We’ll have to wait for the launch to know for sure.

Microsoft didn’t abandon Intel either, announcing close collaboration with Intel as well. This will cover not only Intel’s general-purpose CPUs, but also Intel’s FPGA accelerators and Nervana support. Microsoft already has FPGAs in Azure, so adding them to Project Olympus is a no-brainer.

Microsoft also announced a partnership with NVIDIA today, bringing the HGX-1 hyperscale GPU accelerator to Project Olympus. HGX-1 is targeted at AI cloud computing, which is certainly an area of tremendous growth. Each HGX-1 will be powered by eight NVIDIA Tesla P100 GPUs, each with 3584 stream processors based on the GP100 chip, and a new switching design based on NVIDIA NVLink and PCIe which allows a CPU to connect to any number of GPUs dynamically. NVIDIA states the HGX-1 provides up to 100x faster deep learning performance compared to CPU-based servers.

This is a pretty substantial update for Project Olympus, and it looks to be an incredibly modular platform. Anyone reading this as Microsoft dropping Intel for ARM in Azure is misunderstanding the goal. Looking at the platform as a whole, it is abundantly clear that Microsoft wants a platform that can be designed to work with any workload while still offering optimal performance and efficiency. Some tasks will run best on ARM, some on x86, while GPUs will be leveraged for performance gains where possible and FPGAs will be utilized for other tasks. When you look at computing on the scale of something like Azure, it only makes sense to dedicate hardware to specific workloads, since you’ll certainly have enough different workloads to make the initial effort worthwhile, and that isn’t always the case when looking at small business, medium business, or even most enterprise workloads.

Source: Microsoft Azure Blog
Best SSDs: Q1 2017
The industry-wide NAND flash shortage has not abated, so there's little good news for consumers since the holiday edition of this guide. The best deals are a few cents per GB worse than they were during the holiday season. Older SSD models are being withdrawn from the market and current models are often out of stock. At CES we noticed a pattern of companies being ready to launch new models and capacities, but many of them are holding off until they can launch with sensible pricing and volume.

The situation should improve later this year when the next generation of 3D NAND hits the market. With 64 layers or more and up to 512Gb per die for TLC parts, we should finally see 3D NAND from all four major manufacturers making its way into retail SSDs. In the near term, however, there's not much hope for improvement in prices and available drive capacities.

As always, the prices shown are merely a snapshot at the time of writing. We make no attempt to predict when or where the best discounts will be. Instead, this guide should be treated as a baseline against which deals can be compared. All of the drives recommended here are models we have tested in at least one capacity or form factor, but in many cases we have not tested every capacity and form factor. For drives not mentioned in this guide, our SSD Bench database can provide performance information and comparisons.

Premium SATA drives: Samsung 850 PRO

The SanDisk Extreme Pro has all but disappeared from the market, leaving the Samsung 850 PRO as the undisputed king of the SATA SSD market. No other consumer SATA SSD can match the 850 PRO's combination of performance and a ten-year warranty. For now, the only other SATA SSDs with 3D MLC NAND are ADATA's SU900 and XPG SX950, both based on Micron's 3D MLC. Those SSDs offer slightly higher endurance ratings but warranty periods of only 5 and 6 years, and we don't expect their performance to beat the 850 PRO.

Even the slowest PCIe SSD will outperform the Samsung 850 PRO in most ordinary usage scenarios, and some of those PCIe SSDs are cheaper than the 850 PRO. There are several SATA SSDs that offer performance close to the 850 PRO for a substantially lower price, most notably the Samsung 850 EVO. The appeal of the 850 PRO is far narrower than it was when this product first launched. If a new competitor does not emerge for this segment, we may retire this recommendation category entirely, as it no longer serves any common consumer use case.

Buy Samsung 850 Pro (512GB) on Amazon.com

                   250/256GB         500/512GB         1TB               2TB
Samsung 850 PRO    $139.99 (55¢/GB)  $237.76 (46¢/GB)  $448.99 (44¢/GB)  $854.97 (42¢/GB)
Samsung 850 EVO    $93.99 (38¢/GB)   $169.99 (34¢/GB)  $324.99 (32¢/GB)  $689.00 (34¢/GB)

Value & Mainstream SATA: Crucial MX300, Mushkin Reactor 1TB

The value segment of the SSD market is where drives sacrifice performance and endurance to reach the lowest possible prices. Since SSD prices have tended to drop across the entire market, it is almost always possible to spend just a little more money to get a significant performance boost. The mid-range segment is a battleground between TLC drives with high enough performance, and any MLC drives that can get the price down without sacrificing their inherent performance advantage over TLC.

The Crucial MX300 continues to be one of the most affordable SSDs on the market. Its combination of Micron 3D TLC and a great Marvell controller allows the MX300 to deliver performance that is a clear step up from the cheapest planar TLC SSDs, and the MX300's power consumption is surprisingly low.
MLC SSDs and the Samsung 850 EVO still perform much better under heavy sustained workloads, but the MX300 is good enough for most ordinary use.

Buy Crucial MX300 1TB on Amazon.com

                   250-275GB        500-525GB         1000-1050GB       2TB
Mushkin Reactor    $89.99 (35¢/GB)  $169.99 (33¢/GB)  $246.99 (24¢/GB)
Samsung 850 EVO    $93.99 (38¢/GB)  $169.99 (34¢/GB)  $324.99 (32¢/GB)  $689.00 (34¢/GB)
Crucial MX300      $94.99 (35¢/GB)  $149.99 (29¢/GB)  $252.08 (24¢/GB)  $549.99 (27¢/GB)

Standard & M.2 PCIe: Intel SSD 600p and Samsung 960 EVO

As it did in the SATA SSD market with the Samsung 850 EVO, Samsung has shown with the 960 EVO that the combination of 3D TLC and a great controller can hold its own against most MLC-based competitors. Now that it is widely available, we think the 960 EVO offers a good balance of affordability and performance for the PCIe SSD segment.

The Intel SSD 600p is the slowest PCIe SSD on the market, but also the cheapest by far. With pricing comparable to the Samsung 850 EVO, the Intel SSD 600p offers real-world performance that exceeds any SATA SSD. It won't hold up very well under very heavy sustained workloads, but its performance on ordinary desktop workloads is the reason we're not recommending the Samsung 850 EVO as a mid-range/mainstream SATA option.

Buy Samsung 960 EVO 500GB on Amazon.com

                   128GB            250-256GB         500-512GB         1TB                2TB
Samsung 960 EVO                     $129.99 (52¢/GB)  $249.99 (50¢/GB)  $477.99 (48¢/GB)
Samsung 960 Pro                                       $327.99 (64¢/GB)  $629.99 (62¢/GB)   $1299.99 (63¢/GB)
Intel SSD 600p     $64.00 (50¢/GB)  $99.99 (39¢/GB)   $179.99 (35¢/GB)  $349.00 (34¢/GB)

M.2 SATA: Samsung 850 EVO and Crucial MX300

M.2 has replaced mSATA as the small form factor of choice, and new product lines no longer include mSATA variants. The selection of M.2 SATA SSDs is far more limited than 2.5" drives, but there are enough options to cover a reasonable range of prices and performance levels. The Samsung 850 EVO is the high-performance M.2 SATA drive of choice, and anyone wanting more performance should look to M.2 PCIe SSDs. The Crucial MX300 covers the low end of the market and carries only a slight premium over its 2.5" counterpart. ADATA and Western Digital offer M.2 versions of their latest entry-level SSDs, but they currently don't offer the value of the MX300.

Buy Crucial MX300 275GB M.2 on Amazon.com

                     250-275GB        500-525GB         1TB
Samsung 850 EVO M.2  $97.99 (39¢/GB)  $167.99 (34¢/GB)  $354.95 (35¢/GB)
Crucial MX300 M.2    $89.99 (33¢/GB)  $149.99 (29¢/GB)  $279.99 (27¢/GB)
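Since every recommendation in this guide is framed in cents per gigabyte, here is a minimal Python helper showing how those figures are derived. The prices are the snapshot values from the tables above; treating the marketing (decimal) capacity as the divisor is an assumption, but it reproduces the quoted numbers.

```python
# Hypothetical helper to sanity-check the ¢/GB figures quoted in the tables
# above. Prices and capacities come from the guide; the marketing (decimal)
# capacity is assumed, since that is what the quoted figures imply.

def cents_per_gb(price_usd: float, capacity_gb: int) -> float:
    """Return the cost in US cents per (decimal) gigabyte."""
    return price_usd * 100 / capacity_gb

drives = {
    "Samsung 850 PRO 512GB": (237.76, 512),
    "Samsung 850 EVO 500GB": (169.99, 500),
    "Crucial MX300 525GB":   (149.99, 525),
    "Intel SSD 600p 512GB":  (179.99, 512),
}

for name, (price, cap) in drives.items():
    print(f"{name}: {cents_per_gb(price, cap):.0f}¢/GB")
```

Running this reproduces the 46¢, 34¢, 29¢, and 35¢ per GB figures listed above, which is a quick way to compare any street price you find against the guide's baseline.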
Imagination Announces PowerVR Furian GPU Architecture: The Next Generation of PowerVR
Taking place today is Imagination Technologies’ annual tech summit in Santa Clara, California. The company’s annual summit is always a venue for major Imagination news, and this year that’s particularly the case. As the cornerstone of this year’s summit, Imagination is announcing their next PowerVR GPU architecture: Furian.

Furian marks the first new GPU architecture out of Imagination in almost 7 years. Rogue, the company’s first OpenGL ES 3.x-capable architecture, was first announced in 2010 and has become the cornerstone of Imagination’s entire GPU lineup, from wearables to high-end devices. In the intervening years, Imagination has made a number of smaller updates and optimizations to the architecture, leading to the 6, 7, and 8 series of PowerVR GPU designs. Now the company is undertaking a more radical revision to their architecture in the form of Furian, which, like Rogue before it, will ultimately become the cornerstone of their GPU families.

I’ll have a deeper dive into Furian next week, but for today’s launch I want to hit the highlights of the new architecture and what Imagination’s goals are for the GPUs that will be derived from it. On that note, Imagination is not announcing any specific GPU designs today; while close partners already have beta RTL designs, the final designs and the announcement of those designs will come later in the year. But as the mobile industry is a bit more open in terms of design information due to the heavy use of IP licensing and the long design windows, it makes sense for Imagination to start talking this architecture up now, so that developers know what’s coming down the pipe.

Initially, Furian will co-exist alongside Rogue designs. The initial designs for Furian will be high-end designs, which means that Rogue will continue to cover customers’ needs for lower-power and area-efficient designs. In particular, the various XE designs will still be around for some time to come as Imagination’s leading design for area efficiency. XE will eventually be replaced by Furian, but this could potentially be some years down the line due to a mix of design priorities, cost, and the fact that new architecture features can hurt the area efficiency of a design.

The ultimate goal of Furian is of course to improve power and performance, both on an energy efficiency (perf-per-milliwatt) and area efficiency (perf-per-mm²) basis. In fact it’s interesting that despite the first Furian designs being high-end designs, Imagination is still first and foremost promoting area efficiency with Furian. Compared to a similarly sized and clocked Series7XT Plus (Rogue), Imagination is stating that a Furian design would offer 35% better shader performance and 80% better fill rate (though the company’s presentation doesn’t make it clear if this is texel or pixel), with an ultimate performance gain of a rather incredible 70-90%.

From an architectural standpoint Furian is not a new architecture designed from the ground up, but rather a significant advancement over what Imagination has already done with Rogue. We’re still looking at a Tile Based Deferred Rendering system of course – the bread and butter of Imagination’s GPU technology – with Imagination taking what they’ve learned from Rogue to significantly rework blocks at every level for better performance, greater capabilities, or better scaling.
In fact the latter is a big point for the company, as this architecture ultimately needs to replace Rogue and last for a number of years, meaning a high degree of scalability is required. To do that, Imagination has essentially re-engineered their layout and data flow for Furian – more hierarchical, with less of a focus on a central hub – in order to ensure they can further scale up in future designs.

And as you’d expect for a new architecture, Imagination has made several changes under the hood at the ALU cluster level – the heart of the GPU – in order to improve their GPU designs. The biggest change here – and I dare say the most conventional – is that the company has significantly altered their ALU pipeline design. Whereas a full Rogue ALU cluster would contain 16 pipelines, each composed of a number of ALUs of various sizes capable of issuing MADs (multiply + add), Furian takes things in a wider, less flexible direction.

For Furian, a single pipeline drops the second MAD ALU for a simpler MUL ALU. This means that the ALUs in a pipeline are unbalanced – the ALUs aren’t equal in capability, and you need to come up with a MUL to fill the second ALU. The advantage of a pair of matching MAD ALUs is that the resulting architecture is conceptually clean and simple. The problem is that relative to simpler MUL or ADD ALUs, MAD ALUs are bigger, more power hungry, and put more pressure on the register file.

Ultimately Imagination found that they were having a hard time filling the second MAD on Rogue, and a MUL, while not as capable, could cover a lot of those use cases while being simpler. The net effect is that the second MUL will likely be used less than the MAD, but it will pay for itself in size and power.

Meanwhile, as mentioned before, Imagination is also expanding the size of a cluster, from 16 pipelines to 32 pipelines. Rogue’s native wavefront size was 32 to begin with – executing half a wavefront over 2 cycles – so this isn’t as big a change as it first appears, since the actual thread granularity doesn’t change. However with one cluster for 32 pipelines instead of two clusters for 32 pipelines, this cuts down on the amount of control logic overhead. At the same time, presumably in anticipation of Furian designs having fewer clusters than comparable Rogue designs, Imagination has increased the performance of the texture unit, going from 4 bilinear samples/clock on Rogue to 8 bilinear samples/clock on Furian.

At a higher level, the compute capabilities of Furian will easily exceed those of Rogue. The architecture is designed to be OpenCL 2.x capable (conformance results pending), and there will be variations that are fully cache/memory coherent for heterogeneous processing. On that note, while it’s not outright HSA compliant, Furian adopts many of the hardware conventions of HSA, so it should behave like other heterogeneous solutions.

Though for more specialized workloads, in an interesting change Furian will be able to accommodate additional “function-specific” pipelines in customized GPU designs. How this is done is ultimately up to the customer – Imagination just provides the IP and reference RTLs – but the plumbing is there for customers to either add new functional hardware at the block level, or go as low level as the shader processor unit itself to add this hardware.
The most obvious candidate here would be ray tracing hardware derived from Imagination’s Wizard architecture, but ultimately it’s up to the customers integrating the GPU IP and how much work they want to put in to add additional blocks.

Finally, once Imagination is shipping final Furian designs, the company will be courting customers and downstream users in all of the major high-end markets for embedded GPUs. Besides the obvious high-end phones and GPUs where they’ll go head-to-head with the likes of ARM’s Bifrost architecture, Imagination will also be going after the automotive market, the VR market, and thanks to the compute improvements, the rapidly growing deep learning market. The nature of IP licensing means that end-users are a couple of layers down, so Imagination first needs to court direct customers to build SoCs tailored to these applications, but the capability is there in the IP should customers demand it.

As for when we’ll see Furian designs in consumer hardware, that too is ultimately up to customers. Final Furian RTL designs will not be released to customers until sometime in the middle of this year, which is also why Imagination has not yet announced any specific GPU designs. As a result the lag time between announcement and implementation will be longer than past announcements, where the company was already announcing GPU designs with final RTL. Customers could potentially have Furian-equipped silicon ready towards the end of 2018 if they rush, but the bulk of the first-generation Furian products will likely be in 2019.

Gallery: PowerVR Furian Press Deck
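Going back to the ALU pipeline changes described above, here is a minimal back-of-the-envelope sketch in Python of peak per-cluster ALU throughput. The FLOP accounting (a MAD counted as 2 FLOPs, a MUL as 1) and the assumption that every ALU issues every clock are simplifications of our own for illustration, not Imagination's figures; as noted above, real-world utilization, area, and power are exactly what the MAD-to-MUL swap is about.

```python
# Rough illustration of the Rogue -> Furian pipeline change. The FLOP
# accounting (MAD = 2 FLOPs, MUL = 1 FLOP) and full issue every clock are
# simplifying assumptions, not Imagination's published numbers.

def cluster_flops_per_clock(pipelines: int, mads: int, muls: int) -> int:
    """Peak FP32 FLOPs per clock for one ALU cluster."""
    return pipelines * (mads * 2 + muls * 1)

rogue  = cluster_flops_per_clock(pipelines=16, mads=2, muls=0)   # 64
furian = cluster_flops_per_clock(pipelines=32, mads=1, muls=1)   # 96

print(f"Rogue cluster:  {rogue} FP32 FLOPs/clock (peak)")
print(f"Furian cluster: {furian} FP32 FLOPs/clock (peak)")
```

Note that a "similarly sized" Furian design would likely contain fewer (but wider) clusters than its Rogue counterpart, so this per-cluster figure is only a rough proxy for Imagination's 35% shader performance claim.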
The Western Digital Black PCIe SSD (512GB) Review
After acquiring SanDisk and introducing WD Green and WD Blue SSDs, Western Digital has now, to no one's surprise, introduced a WD Black SSD in the form of an M.2 PCIe drive. Western Digital and SanDisk are relatively late to market with their first consumer PCIe SSD, but they've taken the time to refine the product. The WD Black PCIe SSD is an entry-level NVMe drive using TLC NAND and priced below the top SATA SSDs. It offers substantially better performance than the Intel SSD 600p for a modest price increase.
Everspin Announces New MRAM Products And Partnerships
Magnetoresistive RAM manufacturer Everspin has announced their first MRAM-based storage products and issued two other press releases about recent accomplishments. Until now, Everspin's business model has been to sell discrete MRAM components, but they're now introducing an NVMe SSD based on their MRAM. Everspin's MRAM is one of the highest-performing and most durable non-volatile memory technologies on the market today, but its density and capacity fall far short of NAND flash, 3D XPoint, and even DRAM. As a result, use of MRAM has largely been confined to embedded systems and industrial computing that need consistent performance and high reliability but have very modest capacity requirements. MRAM has also seen some use as a non-volatile cache or configuration memory in some storage array controllers. The new nvNITRO family of MRAM drives is intended to be used as a storage accelerator: a high-IOPS, low-latency write cache or transaction log, with performance exceeding that of any single-controller drive based on NAND flash.

Everspin's current generation of spin-torque MRAM has a capacity of 256Mb per die with a DDR3 interface (albeit with very different timings from the JEDEC standard for DRAM). The initial nvNITRO products will use 32 or 64 MRAM chips to offer capacities of 1GB or 2GB on a PCIe 3.0 x8 card. MRAM has high enough endurance that the nvNITRO does not need to perform any wear leveling, which allows for a drastically simpler controller design and means performance does not degrade over time or as the drive is filled up—the nvNITRO does not need any large spare area or overprovisioning. Read and write performance are also nearly identical, while flash memory suffers from much slower writes than reads, which forces flash-based SSDs to buffer and combine writes in order to offer good performance. Everspin did not have complete performance specifications available at the time of writing, but the numbers they did offer are very impressive: 6µs overall latency for 4kB transfers (compared to 20µs for the Intel SSD DC P3700), and 1.5M IOPS (4kB) at QD32 (compared to 1.2M IOPS read/200k IOPS write for the HGST Ultrastar SN260). The nvNITRO does rely somewhat on higher queue depths to deliver full performance, but it is still able to deliver over 1M IOPS at QD16, around 800k IOPS at QD8, and QD1 performance is around 175k IOPS read/150k IOPS write. MRAM supports fine-grained access, so the nvNITRO performs well even with small transfer sizes: Everspin has hit 2.2M IOPS for 512B transfers, although that is not an official performance specification or measurement from the final product.

As part of today's announcements, Everspin is introducing MRAM support for Xilinx UltraScale FPGAs in the form of scripts for Xilinx's Memory Interface Generator tool. This will allow customers to integrate MRAM into their designs as easily as they would use SDRAM or SRAM. The nvNITRO drives are a demonstration of this capability, as the SSD controller is implemented on a Xilinx FPGA: the FPGA provides the PCIe upstream link as a standard feature, the memory controller is built from Everspin's new MRAM interface support, and Everspin has developed a custom NVMe implementation to take advantage of the low latency and simple management afforded by MRAM. Everspin claims a 30% performance advantage over an unspecified NVRAM drive based on battery-backed DRAM, and attributes it primarily to their lightweight NVMe protocol implementation.
In addition to NVMe, the nvNITRO can be configured to allow all or part of the memory to be directly accessible for memory-mapped IO, bypassing the protocol overhead of NVMe.

The initial version of the nvNITRO is built with an off-the-shelf FPGA development board and mounts the MRAM on a pair of SO-DIMMs. Later this year Everspin will introduce new, denser versions on a custom PCIe card, as well as M.2 drives and 2.5" U.2 drives using a 15mm height to accommodate two stacked PCBs. By the end of the year, Everspin will be shipping their next-generation 1Gb ST-MRAM with a DDR4 interface, and the nvNITRO will use that to expand to capacities of up to 16GB in the PCIe half-height half-length card form factor, 8GB in 2.5" U.2, and at least 512MB for M.2.

Everspin has not announced pricing for the nvNITRO products. The first generation nvNITRO products are currently sampling to select customers and will be for sale in the second quarter of this year, primarily through storage vendors and system integrators as a pre-installed option.

New Design Win For Current MRAM

Everspin is also announcing another design win for their older field-switched MRAM technology. JAG Jakob Ltd is adopting Everspin's 16Mb MRAM parts for their PdiCS process control systems, with MRAM serving as both working memory and code storage. These systems have extremely strict uptime requirements, hard realtime performance requirements, and service lifetimes of up to 20 years; there are very few memory technologies on the market that can satisfy all of those requirements. Everspin will continue to develop their line of MRAM devices that compete against SRAM and NOR flash even as their higher-capacity offerings adopt DRAM-like interfaces.
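For context on the nvNITRO's headline figures quoted above, here is a tiny Python sketch converting the quoted IOPS numbers into bandwidth. The transfer sizes come from the announcement; expressing the results in decimal GB/s is our own choice for readability.

```python
# Convert the quoted nvNITRO IOPS figures into bandwidth for context.
# Transfer sizes follow the announcement; decimal GB/s is assumed.

def iops_to_gbps(iops: float, transfer_bytes: int) -> float:
    return iops * transfer_bytes / 1e9

print(f"1.5M IOPS @ 4 kB  : {iops_to_gbps(1.5e6, 4096):.1f} GB/s")   # ~6.1 GB/s
print(f"2.2M IOPS @ 512 B : {iops_to_gbps(2.2e6, 512):.1f} GB/s")    # ~1.1 GB/s
print(f"175k IOPS @ QD1   : {iops_to_gbps(175e3, 4096):.2f} GB/s")   # ~0.72 GB/s
```

Unsurprisingly for a 1–2GB accelerator, the appeal here is the latency and small-transfer behavior rather than raw sequential throughput, which fits the write-cache and transaction-log use cases Everspin is targeting.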
NVIDIA Announces Jetson TX2: Parker Comes To NVIDIA’s Embedded System Kit
For a few years now, NVIDIA has been offering their line of Jetson embedded system kits. Originally launched using Tegra K1 in 2014, the first Jetson was designed to be a dev kit for groups looking to build their own Tegra-based devices from scratch. What NVIDIA surprisingly found instead was that groups would use the Jetson board as-is and build their devices around it. This unexpected market led NVIDIA to pivot a bit on what Jetson would be, resulting in the second-generation Jetson TX1, a proper embedded system board that can be used for both development purposes and production devices.

This relaunched Jetson came at an interesting time for NVIDIA, right when their fortunes in neural networking/deep learning took off in earnest. Though the Jetson TX1 and underlying Tegra X1 SoC lack the power needed for high-performance use cases – these are after all based on an SoC designed for mobile applications – they have enough power for lower-performance inferencing. As a result, the Jetson TX1 has become an important part of NVIDIA’s neural networking triad, offering their GPU architecture and its various benefits for devices doing inferencing at the “edge” of a system.

Now, about a year and a half after the launch of the Jetson TX1, NVIDIA is giving the Jetson platform a significant update in the form of the Jetson TX2. This updated Jetson is not as radical a change as the TX1 before it – NVIDIA seems to have found a good place in terms of form factor and the platform’s core feature set – but NVIDIA is looking to take what worked with TX1 and further ramp up the performance of the platform.

The big change here is the upgrade to NVIDIA’s newest-generation Parker SoC. While Parker never made it into third-party mobile designs, NVIDIA has been leveraging it internally for the Drive system and other projects, and now it will finally become the heart of the Jetson platform as well. Relative to the Tegra X1 in the previous Jetson, Parker is a bigger and better version of the SoC. The GPU architecture is upgraded to NVIDIA’s latest-generation Pascal architecture, and on the CPU side NVIDIA adds a pair of Denver 2 CPU cores to the existing quad-core Cortex-A57 cluster. Equally important, Parker finally goes back to a 128-bit memory bus, greatly boosting the memory bandwidth available to the SoC. The resulting SoC is fabbed on TSMC’s 16nm FinFET process, giving NVIDIA a much-welcomed improvement in power efficiency.

Paired with Parker on the Jetson TX2 as supporting hardware is 8GB of LPDDR4-3733 DRAM, a 32GB eMMC flash module, a 2x2 802.11ac + Bluetooth wireless radio, and a Gigabit Ethernet controller. The resulting board is still 50mm x 87mm in size, with NVIDIA intending it to be drop-in compatible with the Jetson TX1.

Given these upgrades to the core hardware, unsurprisingly NVIDIA’s primary marketing angle with the Jetson TX2 is its performance relative to the TX1. In a bit of a departure from the TX1, NVIDIA is canonizing two performance modes on the TX2: Max-Q and Max-P. Max-Q is the company’s name for TX2’s energy-efficiency mode; at 7.5W, this mode clocks the Parker SoC for efficiency over performance – essentially placing it right before the bend in the power/performance curve – with NVIDIA claiming that this mode offers 2x the energy efficiency of the Jetson TX1. In this mode, TX2 should have similar performance to TX1 in the latter's max performance mode.

Meanwhile the board’s Max-P mode is its maximum performance mode.
In this mode NVIDIA sets the board TDP to 15W, allowing the TX2 to hit higher performance at the cost of some energy efficiency. NVIDIA claims that Max-P offers up to 2x the performance of the Jetson TX1, though as GPU clockspeeds aren't double TX1's, it's going to be a bit more sensitive on an application-by-application basis.

NVIDIA Jetson TX2 Performance Modes
                       Max-Q     Max-P              Max Clocks
GPU Frequency          854MHz    1122MHz            1302MHz
Cortex-A57 Frequency   1.2GHz    Stand-Alone: 2GHz
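As a rough way to see how the two modes relate, the Python sketch below normalizes NVIDIA's claims to a Jetson TX1 running at its maximum performance mode. The TX1 board power used as the baseline (roughly 15 W) is an assumption for illustration, not an NVIDIA specification, and the performance figures are simply NVIDIA's own "similar to TX1" and "up to 2x TX1" claims.

```python
# Hypothetical illustration of NVIDIA's Max-Q / Max-P framing. Performance is
# relative to a Jetson TX1 at max performance (=1.0); the TX1 board power
# used as the baseline (~15 W) is an assumption, not an NVIDIA spec.

modes = {
    #               (relative perf, board power in W)
    "TX1 max perf": (1.0, 15.0),   # assumed baseline
    "TX2 Max-Q":    (1.0, 7.5),    # ~TX1 performance at half the power
    "TX2 Max-P":    (2.0, 15.0),   # up to 2x TX1 performance
}

for name, (perf, watts) in modes.items():
    print(f"{name:13s} perf={perf:.1f}x  power={watts:4.1f} W  perf/W={perf/watts:.3f}")
```

Under those assumptions both TX2 modes work out to roughly twice the TX1's performance per watt, which lines up with NVIDIA's 2x efficiency claim for Max-Q.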
AMD Prepares 32-Core Naples CPUs for 1P and 2P Servers: Coming in Q2
For users keeping track of AMD’s rollout of its new Zen microarchitecture, stage one was the launch of Ryzen, its new desktop-oriented product line, last week. Stage three is the APU launch, focusing mainly on mobile parts. In the middle is stage two, Naples, and arguably the meatier element of AMD’s Zen story.

A lot of fuss has been made about Ryzen and Zen, with AMD’s re-launch back into high-performance x86. If you go by column inches, the consumer-focused Ryzen platform is the one most talked about and, many would argue, the most important. In our interview with Dr. Lisa Su, CEO of AMD, the launch of Ryzen was a big hurdle in that journey. However, in the next sentence, Dr. Su lists Naples as another big hurdle, and if you decide to spend some time with one of the regular technology industry analysts, they will tell you that Naples is where AMD’s biggest chunk of the pie is. Enterprise is where the money is.

So while the consumer product line gets columns, the enterprise product line gets profits and high margins. Launching an enterprise product that gains even a few points of market share from the very large blue incumbent can add billions of dollars to the bottom line, as well as provide some innovation now that there are two big players on the field. One could argue there are three players if you consider that ARM holds a few niche areas; however, one of the big barriers to ARM adoption, aside from the lack of a high-performance single core, is the transition from x86 to ARM instruction sets, requiring a rewrite of code. If AMD can rejoin as a big player in x86 enterprise, it puts a damper on some of ARM’s ambitions and aims to take a sizeable chunk out of Intel.

With today’s announcement, AMD is setting the scene for its upcoming Naples platform. Naples will not be the official name of the product line, and as we discussed with Dr. Su, Opteron is one option being debated internally at AMD as the product name. Nonetheless, Naples builds on Ryzen, using the same core design but implementing it in a big way.

The top-end Naples processor will have a total of 32 cores, with simultaneous multi-threading (SMT), to give a total of 64 threads. This will be paired with eight channels of DDR4 memory, up to two DIMMs per channel for a total of 16 DIMMs, and altogether a single CPU will support 128 PCIe 3.0 lanes. Naples also qualifies as a system-on-a-chip (SoC), with a measure of internal IO for storage, USB and other things, and thus may be offered without a chipset.

Naples will be offered as either a single processor platform (1P) or a dual processor platform (2P). In dual processor mode, and thus a system with 64 cores and 128 threads, each processor will use 64 of its PCIe lanes as a communication bus between the processors as part of AMD’s Infinity Fabric. The Infinity Fabric uses a custom protocol over these lanes, but bandwidth is designed to be on the order of PCIe. As each CPU uses 64 PCIe lanes to talk to the other, this leaves each CPU with 64 lanes for the rest of the system, again totaling 128 PCIe 3.0 lanes.

On the memory side, with eight channels and two DIMMs per channel, AMD is stating that they officially support up to 2TB of DRAM per socket, making 4TB in a single server. The total memory bandwidth available to a single CPU clocks in at 170 GB/s.

While not specifically mentioned in the announcement today, we do know that Naples is not a single monolithic die on the order of 500mm² or more. Naples uses four of AMD’s Zeppelin dies (the Ryzen dies) in a single package.
With each Zeppelin die coming in at 195.2mm², a monolithic equivalent would mean a total of around 780mm² of silicon and around 19.2 billion transistors – far bigger than anything GlobalFoundries has ever produced, let alone tried at 14nm. During our interview with Dr. Su, we postulated that multi-die packages would be the way forward on future process nodes given the difficulty of creating these large, imposing dies, and the response from Dr. Su indicated that this was a prominent direction to go in.

Each die provides two memory channels, which brings us up to eight channels in total. However, each die only has 16 PCIe 3.0 lanes (24 if you want to count PCH/NVMe), meaning that some form of mux/demux, PCIe switch, or accelerated interface is being used. This could be extra silicon on the package, given that AMD has so far taken the approach of a single die variant of its Zen design.

Note that we’ve seen multi-die packages before in previous products from both AMD and Intel. Despite both companies playing with multi-die or 2.5D technology (AMD with Fury, Intel with EMIB), we are led to believe that these CPUs are similar to previous multi-chip designs, but with Infinity Fabric running between them. At what bandwidth, we do not know at this point. It is also pertinent to note that there is a lot of talk going around about the strength of AMD's Infinity Fabric, as well as how threads are manipulated within a silicon die itself, having two core complexes of four cores each. This is something we are investigating on the consumer side, but it will likely be very relevant on the enterprise side as well.

In the land of benchmark numbers we can’t verify (yet), AMD showed demonstrations at the recent Ryzen Tech Day. The main demonstration was a sparse matrix calculation on a 3D dataset for seismic analysis. In this test, solving a 15-diagonal matrix of 1 billion samples took 35 seconds on an Intel machine vs 18 seconds on an AMD machine (both machines using 44 cores and DDR4-1866). When allowed to use its full 64 cores and DDR4-2400 memory, AMD shaved another four seconds off. Again, we can’t verify these results, and it’s a single data point, but a diagonal matrix solver would be a suitable representation for an enterprise workload. We were told that the clock frequencies for each chip were at stock, however AMD did say that the Naples clocks were not yet finalized.

What we don’t know are power numbers, frequencies, processor lists, pricing, partners, segmentation, and all the meaty stuff. We expect AMD to offer a strong attack on the 1P/2P server markets, which is where 99% of the enterprise is focused, particularly where high-performance virtualization or storage is needed. How Naples migrates into the workstation space is an unknown, but I hope it does. We’re working with AMD to secure samples for Johan and me in advance of the Q2 launch.

Gallery: AMD Naples Slide Deck
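As a quick sanity check on the platform numbers above, the minimal Python sketch below recomputes the headline figures. The DDR4 data rate is an assumption on our part: AMD quoted 170 GB/s without stating a speed, and that figure lines up with DDR4-2667 across eight channels.

```python
# Back-of-the-envelope check on the quoted Naples platform figures.
# The DDR4 data rate is an assumption; AMD did not state which speed
# the 170 GB/s per-socket figure corresponds to.

cores, smt         = 32, 2
channels, dimms_ch = 8, 2
pcie_per_cpu       = 128
inter_socket_lanes = 64              # per CPU, used for Infinity Fabric in 2P

threads  = cores * smt                                  # 64 threads per socket
dimms    = channels * dimms_ch                          # 16 DIMMs per socket
lanes_2p = 2 * (pcie_per_cpu - inter_socket_lanes)      # 128 usable lanes in 2P

ddr4_mts = 2667                                         # assumed data rate (MT/s)
bw_gbs   = channels * ddr4_mts * 8 / 1000               # 8 bytes per channel per transfer

print(f"threads per socket : {threads}")
print(f"DIMMs per socket   : {dimms}")
print(f"PCIe lanes in 2P   : {lanes_2p}")
print(f"memory bandwidth   : {bw_gbs:.1f} GB/s per socket")   # ~170 GB/s
```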
How To Get Ryzen Working on Windows 7 x64
Officially, AMD does not support Ryzen CPUs on Windows 7. Given that Microsoft has essentially ended support for the OS, this is the type of response we expect from AMD – Intel has also stopped officially supporting Windows 7 on its newest platforms. 'Official' is a general term: some special customers may receive extended lifetime support, and drivers currently out in the ecosystem may still work on the platforms. Official support refers to driver updates and perhaps security updates, but there's nothing to stop you trying to install the OS on either platform.

For clarification, we did not converse with AMD in writing this piece. AMD's formal position on Windows 7 on Ryzen is that it is unsupported, and as a result they will not provide support around it. There may also be other methods to install an unsupported OS, however here are a few solutions.

The Main Issue: USB Support

For installing Windows 7, the issues typically revolve around USB support. When there's a mouse/keyboard plugged in, everything else after that is typically simple to configure (installing drivers, etc). However, starting with the 100-series chipsets on the Intel side and the AM4 motherboards on the AMD side, this can be an issue. When the CD or USB stick is being used to install the OS, the image needs USB drivers in order to activate a mouse or keyboard to navigate the install menus. This is the primary process that fails on both platforms and acts as a barrier to installation.

General Solution: Use a PS/2 Keyboard, if the motherboard has a PS/2 port

By default, on most systems, the way to guarantee the presence of a mouse pointer or keyboard activity during installation is to hook up a PS/2 keyboard. I've never known an installation to fail to recognize a PS/2 peripheral, so this is often the best bet. However, PS/2 as a connectivity standard is nearly dead (sometimes new keyboards will offer dual connectivity, like one of my Rosewill mechanical keyboards), with fewer motherboards supporting it, so it increasingly falls to USB as the backup.
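The excerpt above ends with the PS/2 workaround. Another commonly used route (not described in the text above, so treat it purely as a hedged sketch) is to inject the motherboard vendor's Windows 7 USB drivers into the installer's boot image with Microsoft's DISM tool before writing the install media. All paths below are hypothetical placeholders, and the approach assumes the board vendor actually supplies Windows 7 USB drivers.

```python
# Hedged sketch (not from the article): inject vendor-supplied Windows 7 USB
# drivers into the Windows Setup image (boot.wim, index 2) using DISM.
# Requires an elevated prompt on an existing Windows machine; all paths are
# hypothetical placeholders.
import subprocess

WIM     = r"C:\win7\sources\boot.wim"   # copied from the install media
MOUNT   = r"C:\wim-mount"               # empty directory to mount the image
DRIVERS = r"C:\usb-drivers"             # extracted vendor Win7 USB drivers

def dism(*args: str) -> None:
    """Run dism.exe with the given arguments, raising on failure."""
    subprocess.run(["dism", *args], check=True)

dism("/Mount-Wim", f"/WimFile:{WIM}", "/Index:2", f"/MountDir:{MOUNT}")
dism(f"/Image:{MOUNT}", "/Add-Driver", f"/Driver:{DRIVERS}", "/Recurse")
dism("/Unmount-Wim", f"/MountDir:{MOUNT}", "/Commit")
# Repeat for install.wim if the drivers should also be present post-install.
```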
The Corsair Gaming K95 RGB Platinum Mechanical Keyboard Review
In this review we are having a look at the recently released Corsair Gaming K95 RGB Platinum, a mechanical gaming keyboard. It is a hybrid between the K70 RGB and K95 RGB models, with more features and a brand new CUE software package, designed to leave no gamer unsatisfied (for a price).
Playing With Power: A Look At Nintendo Switch Power Consumption
Last week was of course the launch of Nintendo’s eagerly anticipated Switch console. The company’s latest handheld console, the Switch is a bit of an odd duck in pretty much every way. It departs from Nintendo’s traditional and well-established clamshell design in favor of a larger tablet, and under the hood Nintendo has stepped away from their typical highly-custom low-power SoC in favor of a rather powerful Tegra design from NVIDIA. Given that the 3DS was essentially an ARMv6 + OpenGL ES 1.x device, I can’t overstate just how significant of a jump this is under the hood in going to the ARMv8 + OpenGL ES 3.2/Vulkan class Tegra SoC. Nintendo has essentially jumped forward 10 years in mobile technology in a single generation.Playing with a launch-day Switch a bit, there's not much testing we can do since the console is so locked down. But one area where I've had some success is on power consumption testing. This is also an area where the Switch is a bit of an odd duck, leading to some confusion around the Web judging from some of the comment posts I’ve seen elsewhere. USB Type-C has been shipping in devices for a couple of years now, so it’s hardly a new standard, but given the slow upgrade cycle of PCs and smartphones it still isn’t an interface that the majority of consumers out there have dealt with. Furthermore due to its use case as a game console, the Switch is unlike any other USB Type-C device out there (more on this in a second). So I opted to spend some time profiling the device’s power consumption, in order to shed some light on what to expect.
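For readers who want to reproduce this kind of profiling at home, here is a minimal, hypothetical Python helper. It assumes an inline USB-C power meter that logs comma-separated samples of elapsed seconds, volts, and amps with no header row; that format is an assumption of ours rather than any specific meter's output.

```python
# Hypothetical helper for USB-C power profiling: compute average draw from a
# CSV log of (seconds, volts, amps) samples. The log format is assumed.
import csv

def average_power_w(path: str) -> float:
    """Average power in watts over the whole capture."""
    watts = []
    with open(path, newline="") as f:
        for row in csv.reader(f):
            if len(row) != 3:            # skip blank or malformed lines
                continue
            _, volts, amps = (float(x) for x in row)
            watts.append(volts * amps)   # instantaneous power = V * I
    return sum(watts) / len(watts)

# Example with a hypothetical capture file:
# print(f"Docked, idle: {average_power_w('switch_dock_idle.csv'):.2f} W")
```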
The 2016 Razer Blade Pro Review
When I first heard about Razer, they were a company that strictly made gaming peripherals. I mostly associate them with their DeathAdder mouse, with the version from 2010 still being one of the best mice I've ever used. Razer has also made audio equipment like gaming headsets for quite some time, as well as a line of gaming keyboards. As time went on, some of these products gained features that were unique to Razer, such as the use of Razer-designed mechanical switches in their gaming keyboards, and RGB backlighting in various products with the Chroma branding.

Razer has made a number of attempts to move beyond the world of gaming peripherals. Some have been more successful than others. For example, some gamers may remember the Razer Edge Pro, the gaming tablet that never seemed to catch on with consumers. Razer also made a fitness band called the Nabu, but it also appears to have missed the mark and has seen some pretty heavy discounts in recent times. With Razer's recent purchase of NextBit, many have begun to speculate on whether Razer plans to move into the mobile industry.

While it would be fun to speculate on Razer's plans for the future, they do have one area beyond peripherals that has been an undisputed success. Their line of laptops, which started with the unveiling of the original Razer Blade in 2011, have shown that it's possible to build gaming laptops without the bulky plastic bodies and poor quality displays that traditionally characterized high-performance laptops from other vendors. As time has gone on, Razer has iterated on the original Razer Blade, and introduced both a smaller model in the form of the Razer Blade Stealth, and a larger model known as the Razer Blade Pro. That latter model is the laptop I'll be looking at today. Read on for the full AnandTech review of the Razer Blade Pro.
Meizu Unveils Super mCharge: Fast Charging At 55W
Meizu unveiled a new fast-charging technology—called Super mCharge—at MWC 2017 that’s capable of fully charging a 3000 mAh battery in just 20 minutes. Rapid charging has grown from novelty to highly desirable feature in a short period of time, with it being particularly popular in China, Meizu’s home market.

Great Scott!

While not powerful enough to send a DeLorean back to the future, the 55W rating for Super mCharge (11V, 5A) is significantly higher than anything we’ve yet seen. For comparison, Motorola’s TurboPower is rated for 28.5W, and Qualcomm’s Quick Charge 3.0 hits 18W.

Meizu is using a charge pump, a type of DC to DC converter that uses an external circuit to control the connection of capacitors to the input voltage. By disconnecting the capacitor from the source via a switch and reconfiguring the circuit with additional switches, the charge pump’s output voltage can be raised or lowered relative to the input. Keeping the capacitors small and the switching frequency high improves efficiency. Meizu is claiming 98% efficiency for its design, and while charge pumps are known for high efficiency, this seems a little high at first glance.

For Super mCharge, Meizu is dividing the input voltage in half, which doubles the output current. To accommodate the current increase, Meizu is pairing its new fast-charging circuit with a new lithium-based 3000 mAh battery made with “advanced manufacturing processes” that can handle 4x the current of previous batteries. This new battery is said to retain 80% of its original charge capacity after 800 complete charge cycles, where a charge cycle is defined as any possible sequence that ultimately goes from 100% to 0% to 100%. This rating is actually at the high end of the scale, with most fast-charging methods rated for 500 cycles or a little more. Battery life is likely improved by keeping temperature in check; Meizu claims that battery temperature does not exceed 38 °C (100 °F), a full 6 °C less than a competing solution in its testing.

Super mCharge includes voltage, current, and temperature monitoring for battery health and safety. Because the USB Type-C cable conducts more than 3A of current, it includes an E-mark IC (electronically marked safety chip) on one connector.

Meizu did not say when we’ll see Super mCharge in a shipping device, but I would not be surprised to see it later this year.
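To see how the numbers above hang together, here is a small Python sketch. The nominal cell voltage (3.85 V) and a perfectly flat charge rate are our own simplifying assumptions; real charging tapers off near full, which is part of why the adapter can deliver noticeably more energy over 20 minutes than the battery actually stores.

```python
# Back-of-the-envelope check on Meizu's Super mCharge figures. The nominal
# cell voltage and constant-rate charging are assumptions for illustration.

adapter_w    = 11 * 5              # 55 W at the USB Type-C connector (11 V, 5 A)
battery_mah  = 3000
charge_hours = 20 / 60
nominal_v    = 3.85                # assumed nominal Li-ion cell voltage

# Average current into the battery if 3000 mAh is delivered in 20 minutes.
avg_battery_current_a = (battery_mah / 1000) / charge_hours          # ~9 A

# The charge pump halves the 11 V input and doubles the 5 A input current.
pump_out_v, pump_out_a = 11 / 2, 5 * 2                                # ~5.5 V, 10 A

# Energy stored vs. energy the adapter can deliver in that window.
stored_wh    = (battery_mah / 1000) * nominal_v                       # ~11.6 Wh
delivered_wh = adapter_w * 0.98 * charge_hours                        # ~18 Wh at 98% efficiency

print(f"avg battery current ≈ {avg_battery_current_a:.1f} A")
print(f"charge pump output  ≈ {pump_out_v:.1f} V / {pump_out_a:.0f} A")
print(f"stored ≈ {stored_wh:.1f} Wh vs. up to {delivered_wh:.1f} Wh delivered")
```

The roughly 10 A the pump can supply comfortably covers the roughly 9 A average that the 20-minute claim implies, so the headline figure is at least internally consistent.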
The AMD Zen and Ryzen 7 Review: A Deep Dive on 1800X, 1700X and 1700
For over two years the collective AMD vs Intel personal computer battle has been sitting on the edge of its seat. Back in 2014, when AMD first announced it was pursuing an all-new microarchitecture, old hands recalled the days before the Core microarchitecture became the dominant force in modern personal computing, when the battle between AMD and Intel was fun to be a part of and users were happy that the competition led to innovation. Through the various press release cycles from AMD stemming from that original Zen announcement, the industry has been whipped into a frenzy waiting to see if AMD, through rehiring guru Jim Keller and laying the foundations of a wide and deep processor team for the next decade, can hold the incumbent to account. With AMD’s first use of a 14nm FinFET node on CPUs, today is the day Zen hits the shelves and benchmark results can be published: Game On!