Feed anandtech AnandTech

Link http://www.anandtech.com/
Feed http://anandtech.com/rss/
Copyright Copyright 2019 AnandTech
Updated 2019-03-20 02:15
Apple Upgrades iMac and iMac Pro: More Cores, More Graphics, More Memory
Apple has introduced its updated iMac all-in-one desktop computers, which use Intel's latest-generation processors with up to eight cores plus AMD's latest Pro graphics, as well as an updated iMac Pro equipped with more memory and a faster GPU. Since Apple upgrades its iMac product line only every couple of years or so, the company has every right to claim that its top-of-the-range AIO PCs are now up to twice as fast as their predecessors.

The new 21.5-inch and 27-inch Apple iMac AIO desktops come in the same sleek chassis as their predecessors and use the same 4K and 5K display panels featuring the P3 color gamut and 500 nits brightness. The systems are offered with Intel's latest Core processors paired with up to 32 GB of DDR4-2666 memory, SSD storage or hybrid Fusion Drive storage (comprising NAND flash used for caching plus a mechanical HDD), and a discrete AMD Radeon Pro GPU. Optionally, customers can equip their new iMacs with Intel's eight-core Core i9 as well as AMD's Radeon Pro Vega 48 GPU with 8 GB of memory.

Since the new Apple iMac AIO desktops inherit quite a lot from their predecessors, they feature the same set of I/O capabilities, including an 802.11ac Wi-Fi + Bluetooth adapter, a GbE port, two Thunderbolt 3 connectors, four USB 3.1 Gen 2 ports, an SDXC card reader, a 3.5-mm audio jack, built-in speakers, and a webcam.
SilverStone EP14: A Miniature USB-C Hub with HDMI, USB-A, 100 W Power
With hundreds of different USB Type-C adapters and docks on the market, manufacturers are trying hard to make theirs more attractive. To that end, they now tend to design rather interesting products addressing focused use cases. SilverStone has introduced its new compact USB-C dock that has three USB-A ports, a display output, and can pass through up to 100 W of power to charge a laptop and/or devices connected to the USB-A ports, a rare feature for small docks.
Google Announces Stadia: A Game Streaming Service
Today at GDC, Google announced its new video game streaming service, called Stadia. This builds on news from earlier this year that AMD was powering Project Stream (as the service was then called) with Radeon Pro GPUs; Google is a primary partner using AMD's next-generation CPUs and GPUs.

Stadia is being advertised as the central community for gamers, creators, and developers. The idea is that people can play a wide array of games regardless of the hardware at hand. Back in October, Google debuted the technology by showcasing a top-end AAA gaming title running at 60 FPS. Google wants a single place where gamers and YouTube creators can get together – no current gaming platform, according to Google, does this.

Ultimately Google wants to stream straight to the Chrome browser. Google worked with leading publishers and developers to help build the system infrastructure, and it is one of a few companies with enough content delivery networks around the world to ensure that frame rates are kept high with very low latency.

Users will be able to watch a video about a game, instantly hit 'Play Now', and start playing in under five seconds without any download or lag. The idea is that a single code base can be enjoyed on any screen. At launch, desktops, laptops, TVs, tablets, and phones will be supported. With Stadia, the datacenter is the platform: no hardware acceleration is required on the device, and the experience can be transferred between devices, such as from a Chromebook to a smartphone.

One of the highlights of Google's demonstration of Stadia was the platform working on Google-enabled TVs.

The platform allows users to use any USB-connected controller, or a mouse and keyboard. Google will also be releasing its own Stadia Controller, available in three colors – white, black, and light blue. The controller connects via Wi-Fi straight into the cloud, and also detects which device is being used (it's unclear how this works).

The controller has two new buttons.
The first allows saving and sharing the experience out to YouTube. The second invokes Google Assistant via the integrated microphone in the controller. This allows game developers to integrate Google Assistant into their games; it also allows users to ask Google for help in a game, and the assistant will look for a guide.

Stadia uses the same datacenter infrastructure already in place at Google. There are 7,500+ edge nodes, allowing compute resources to sit closer to players for lower latency. Custom-designed, purpose-built hardware powers the experience, and interconnected racks have sufficient compute and memory for the most demanding games. The technology has been in development inside Google for years.

At launch, resolutions will be supported up to 4K at 60 fps with HDR and surround sound, and the platform has been built to scale up to 8K streaming at 120 fps in the future. While playing, the stream is duplicated in 4K for direct upload – you get rendering-quality video rather than what you capture locally.

The platform is instance-based, so Google can scale when needed. Game developers no longer have to worry about building to a specific hardware performance target – the datacenter can scale as required.

Each instance pairs a custom AMD GPU offering 10 TFLOPS of compute with a custom CPU supporting AVX2; combined, they create a single instance per person. The platform uses Linux and Vulkan, with full Unreal and Unity support, as well as Havok engine support. Tool companies are onboard.

(When Google says custom CPU and custom GPU, this could be early hardware of AMD's upcoming generations of technology, put into a custom core configuration / TDP. We're likely looking at a Zen 2-based CPU, based on the AVX2 support listed, and a Radeon Instinct-based GPU with tweaked settings specifically for Google.)

One of the first games supported will be Doom Eternal from id Software, which will support 4K with HDR at 60 fps.
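To put those edge nodes and that emphasis on compression infrastructure in perspective, a back-of-the-envelope calculation shows how far a 4K60 stream has to be squeezed to fit a home connection. The figures below are illustrative assumptions for the sketch, not numbers published by Google:

```python
# Back-of-the-envelope: why game streaming leans on heavy video
# compression and nearby edge nodes. All bitrates here are
# illustrative assumptions, not figures published by Google.

def raw_bitrate_gbps(width, height, fps, bits_per_pixel=24):
    """Uncompressed video bitrate in Gbit/s (24-bit RGB assumed)."""
    return width * height * fps * bits_per_pixel / 1e9

raw_4k60 = raw_bitrate_gbps(3840, 2160, 60)
print(f"Raw 4K60: {raw_4k60:.1f} Gbit/s")  # roughly 12 Gbit/s uncompressed

# A typical home connection carries a compressed stream of tens of
# Mbit/s, so the codec must remove nearly three orders of magnitude:
assumed_stream_mbps = 35  # hypothetical codec output bitrate
ratio = raw_4k60 * 1000 / assumed_stream_mbps
print(f"Required compression: ~{ratio:.0f}x")
```

Latency, not just bandwidth, is the other half of the problem, which is why compute sitting physically close to the player matters so much for an input-to-photon loop.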
Each user will get a dedicated GPU, not shared with other users. UL Benchmarks (3DMark) has been working with Google to help benchmark the systems and measure the power of the infrastructure. It appears developers can use multiple GPUs per instance if required.

Multiplayer is also supported, at least between different Stadia players. Distributed physics becomes possible, which means up to 1,000 players in Battle Royale titles. There's also the advantage, according to Google, of getting around hackers and cheaters. Developers can support multi-platform multiplayer and transfer save files between platforms. Game developers have already been working on multiplayer demos with destructible environments using real-time rigid body physics, allowing for perfect synchronization.

Google also points out that split-screen gaming has not been a priority recently because of the cost of rendering two scenes at once. With Stadia, that problem disappears, as each player is powered by a separate instance, reviving the idea of local co-op and squad-based gaming. This also allows a single player to use multiple cameras to navigate a single map, for better tactics in certain types of games. Google says that this ability allows developers to create new types of games.

Built on Google's platform, Stadia will also support machine learning. Developers that want to take advantage can incorporate Google and third-party libraries to help improve games over time and enhance the experience, both on a per-user level and on a local/global scale.

The other focus of Stadia is the interaction with YouTube. Google points out that gaming has been a fundamental part of YouTube since its inception, and it is Google's goal to help creators interact with (and monetize) their audience. The idea is that creators can directly livestream from Stadia, as well as let viewers play alongside creators through Stadia.
‘Crowd Play’ will allow users to jump directly into the server instance with a creator – it acts like a lobby, so players wait in line to play with their favorite creator. For example, the NBA 2K demo shown displayed 'join this game (3rd in line)'.

Google states that any link from any location can act as a launch point for a title. This means that developers are not limited to a single game store – games can be launched from almost anywhere, as long as the user is in an up-to-date Chrome browser. Google is also set to put extensive parental controls into the mix.

Google will be creating an entity called ‘Stadia Games and Entertainment’, headed up by Jade Raymond, enabling first-party studios to build for Stadia. Other partner studios will also work through the new division, as outreach to enable game development on Stadia.

Developers who want to create for Stadia should go to stadia.dev to sign up for tools and resources; Stadia Partners is aimed at distributors, and stadia.com will be the hub for gamers.

Stadia will launch in 2019 in the US, Canada, UK, and most of Europe. There is no word on pricing yet, but Google will be announcing more in the summer.
HP Reveals Envy x360 15 Laptops with AMD's Latest Ryzen APUs
HP on Tuesday introduced its new 15.6-inch convertible notebooks based on AMD’s Ryzen Mobile 3000-series APUs. The new HP Envy x360 15 machines are positioned as inexpensive 15.6-inch-class laptops for productivity applications. In addition, the company announced new Intel-based HP Envy x360 15 PCs.

HP’s AMD Ryzen 3000 and Intel Core i5/i7-based Envy x360 15 convertibles use exactly the same sand-blasted anodized aluminum chassis and thus have the same dimensions (17 mm z-height) and weight (~2 kilograms). The only visual difference between the AMD and Intel-powered Envy x360 15 PCs is the color: the former features HP’s Nightfall Black finish, whereas the latter features HP’s Natural Silver finish. Overall, the new 15.6-inch Envy x360 convertible laptops feature a 28% smaller bezel than the previous generation, according to the manufacturer. Meanwhile, all the HP Envy x360 15 machines introduced today use the same 15.6-inch Full-HD IPS touch-enabled display panel featuring WLED backlighting.

Inside the new AMD-based HP Envy x360 15 convertible laptops are AMD’s quad-core Ryzen 5 3500U or Ryzen 7 3700U processors with integrated Radeon RX Vega 8/10 graphics. The APUs are accompanied by 8 GB of single-channel DDR4-2400 memory as well as a 256 GB NVMe/PCIe M.2 SSD. As for the Intel-powered Envy x360 15, it uses Core i5-8265U or Core i7-8565U CPUs.

As far as connectivity is concerned, everything looks rather standard: the systems feature an 802.11ac + Bluetooth 5.0/4.2 controller from Intel or Realtek, one USB 3.1 Gen 1 Type-C connector (with DP 1.4), two USB 3.1 Gen 1 Type-A ports, an HDMI output, a 3.5-mm audio connector for headsets, an SD card reader, and so on.
The new Envy x360 15 also has an HD webcam with a dual-array microphone and a kill switch, a fingerprint reader, Bang & Olufsen-badged stereo speakers, and a full-sized keyboard.

When it comes to battery life, HP claims that its AMD Ryzen Mobile-powered Envy x360 15 convertibles offer exactly the same battery life as the Intel-based machines: up to 13 hours of mixed usage when equipped with a 55.67 Wh battery.

HP will start sales of its Envy x360 15 convertible notebooks with AMD Ryzen Mobile inside this April. Pricing will start at $799.99. By contrast, a system featuring Intel’s Core i5-8265U with a generally similar configuration will cost $869.99.
Western Digital: Over Half of Data Center HDDs Will Use SMR by 2023
Western Digital said at the OCP Global Summit last week that over half of hard drives for data centers will use shingled magnetic recording (SMR) technology by 2023. At present Western Digital is the only supplier of host-managed SMR HDDs, but the technology is gaining support from hardware, software, and application vendors.

SMR technology boosts the capacity of hard drives fairly easily, but at the cost of some performance trade-offs due to the read-modify-write cycle introduced by shingled tracks. Since operators of datacenters are interested in maximizing their storage capacities, they are inclined to invest in software that can mitigate the peculiarities of SMR. As a result, several years after Western Digital introduced its first host-managed SMR HDDs, more and more companies are adopting them. Right now, the vast majority of datacenter hard drives are based on perpendicular magnetic recording (PMR) technology, but WD states that in four years SMR HDDs will leave PMR drives behind.

Obviously, SMR will not be the only method used to increase the capacities of hard drives. Energy-assisted PMR technologies (e.g., MAMR, HAMR, etc.) will also be used by Western Digital. In the coming quarters the company intends to release MAMR-based HDDs featuring 16 TB (ePMR) and 18 TB (eSMR) capacities, and it also plans to introduce 20 TB HDDs in 2020.

High-capacity hard drives are not going to be replaced by high-capacity SSDs any time soon, according to Western Digital. HDDs will continue to cost significantly less than SSDs on a per-TB basis; therefore, they will be used to store 6.5 times more data than datacenter SSDs in 2023.
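The read-modify-write penalty mentioned above follows directly from how shingled tracks overlap: overwriting a sector in the middle of a shingled zone damages the tracks layered on top of it, so the rest of the zone must be read back and rewritten. A toy model (purely illustrative, not WD's actual firmware or the ZBC command set) makes the resulting write amplification concrete:

```python
# Toy model of why SMR introduces a read-modify-write penalty.
# A shingled zone only supports sequential appends; modifying a block
# in the middle forces the drive (or host software) to rewrite the
# rest of the zone. Illustrative sketch only.

class ShingledZone:
    def __init__(self, blocks):
        self.data = [None] * blocks
        self.write_pointer = 0   # next sequentially writable block
        self.blocks_written = 0  # counts physical writes (amplification)

    def append(self, value):
        self.data[self.write_pointer] = value
        self.write_pointer += 1
        self.blocks_written += 1

    def modify(self, index, value):
        # Overwriting block `index` damages the overlapping tracks
        # after it, so everything from `index` onward is rewritten.
        tail = self.data[index + 1:self.write_pointer]
        self.write_pointer = index
        self.append(value)
        for v in tail:
            self.append(v)

zone = ShingledZone(256)
for i in range(100):
    zone.append(i)
zone.modify(10, -1)         # one logical write near the zone's start...
print(zone.blocks_written)  # prints 190: 100 initial + 90 for the modify
```

This is the behavior that host-side software mitigates, typically by batching writes sequentially so that in-place modifications rarely happen.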
Quick Note: NVIDIA’s “Einstein” Architecture Was A Real Project
While it was never an official NVIDIA codename as far as roadmaps go, the name “Einstein” came up in rumors a few times earlier this decade. At the time, Einstein was rumored to be the architecture that would follow Maxwell in the NVIDIA lineup. And while we sadly didn’t find out anything new about NVIDIA’s future roadmap at this year’s show – or any sign of Ampere or other 7nm chips – I did inadvertently find out that the rumors about Einstein were true. At least, from a certain point of view.

While talking with NVIDIA’s research group this morning about some of their latest projects (more on this a bit later this week when I have the time), the group mentioned past research projects. And, as it turns out, one of those former research projects was Einstein.

Rather than just being a baseless rumor, Einstein was in fact a real project at NVIDIA. However, rather than being an architecture per se, it was a research GPU that the NVIDIA research group was working on. And although this research project didn’t bear fruit under the Einstein name, it did under another name that is far better known: Volta.

So while this means we can scratch Einstein off the list of names for potential future NVIDIA architectures, the project itself was real, and it was actually a big success for NVIDIA. As Einstein morphed into what became the Volta architecture, it has become the cornerstone of all of NVIDIA’s current-generation GPUs for servers and clients. This includes both regular Volta and its graphics-enhanced derivative, Turing.
Nvidia Announces Jetson Nano Dev Kit & Board: X1 for $99
Today at GTC 2019 Nvidia launched a new member of the Jetson family: the Jetson Nano. The Jetson family of products represents Nvidia's focus on robotics, AI, and autonomous machine applications. A few months back we had the pleasure of publishing a high-level review of the Jetson AGX as well as the Xavier chip that powers it. The biggest concern with the AGX dev kit was its pricing – with a retail price of $1299, it's massively out of range of most hobbyist users such as our readers.

The new Jetson Nano addresses the cost issue in quite a dramatic way. Here Nvidia promises to deliver a similar level of functionality to its more expensive Jetson products at a much lower price point – and, of course, at a lower performance point.

The Jetson Nano is a full-blown single-board computer in the form of a module. The module uses a SO-DIMM form-factor and connector, similar to past Nvidia modules. The goal is to be as compact as possible, as the module is envisioned to be used in a wide variety of applications where customers will design their own carrier boards to best fit their design needs.

At the heart of the Nano module we find Nvidia's "Erista" chip, the same Tegra X1 silicon that powered the Nvidia Shield as well as the Nintendo Switch. The variant used in the Nano is a cut-down version, though: the four Cortex-A57 cores only clock up to 1.43 GHz, and the GPU only has half its cores active (128 versus 256 in the full X1). The module comes with 4 GB of LPDDR4 and a 16 GB eMMC module. The standalone Jetson Nano module for use in COTS production will be available to interested parties for $129/unit in quantities of 1000.

Naturally, because you can't do much with the module itself, Nvidia also offers the Jetson Nano in the form of a complete computer: the Jetson Nano Developer Kit.
Among the advantages of the Kit are vastly better hardware capabilities compared to competing solutions, such as the performance of the SoC, and simply better connectivity: four full-sized USB ports (3x 2.0 + 1x 3.0), HDMI, DisplayPort, and a Gigabit Ethernet port, along with the usual SDIO, I2C, SPI, GPIO, and UART connectors you're used to on such boards. One even finds an M.2 connector for additional Wi-Fi as well as a MIPI-CSI interface for cameras.
NVIDIA To Bring DXR Ray Tracing Support to GeForce 10 & 16 Series In April
This week both GDC (the Game Developers Conference) and GTC (the GPU Technology Conference) are happening in California, and NVIDIA is out in force. The company's marquee gaming-related announcement today is that, as many have been expecting, NVIDIA is bringing DirectX 12 DXR raytracing support to the company's GeForce 10 series and GeForce 16 series cards.
The NVIDIA GPU Tech Conference 2019 Keynote Live Blog (Starts at 2pm PT/21:00 UTC)
Kicking off a very busy week for tech events in California, my first stop is NVIDIA's annual GPU Technology Conference in San Jose. As always, CEO Jensen Huang will be kicking off the show proper with a 2-hour keynote, no doubt making some new product announcements and setting the pace for the company for the next year. The biggest question that's no doubt on everyone's mind is what NVIDIA plans to do for 7nm, as that process node is quickly maturing. Hopefully we'll find out the answer to that and more, so be sure to check in at 2pm Pacific to see what's next for NVIDIA.
Micron Introduces 2200 Client NVMe SSD With New In-House Controller
Micron has announced the first product based on their new in-house client NVMe SSD controller. The Micron 2200 doesn't boast performance sufficient to compete with the top enthusiast-class NVMe drives on the retail market, but it should be plenty fast enough for OEMs and system integrators to use as a performance option in the business PCs it is intended for.

Micron has been notably slow about bringing NVMe to their client and consumer product lines. They initially planned to launch both client OEM and consumer retail drives built around the combination of their first-generation 32-layer 3D NAND and the Silicon Motion SM2260 controller, but those plans were shelved as it became clear that combination could not deliver high-end performance. Last fall Micron finally launched the Crucial P1 entry-level NVMe SSD with QLC NAND and the SM2263 controller, but no high-end product had been announced until now.

It's been no secret that Micron has been working on their own NVMe SSD controllers. Every other NAND manufacturer has either developed in-house controllers or acquired a controller vendor, and complete vertical integration has worked out extremely well for companies like Samsung. Micron has been the odd man out, sourcing all their controllers from third parties like Silicon Motion, Marvell, and Microsemi, but their 2015 acquisition of startup controller design firm Tidal Systems made their intentions clear.
That acquisition and other in-house controller design efforts bore no visible fruit until Flash Memory Summit last year, when a prototype M.2 client NVMe SSD was quietly included in their exhibits.

Micron 2200 Specifications
Capacity: 256 GB / 512 GB / 1 TB
Form Factor: M.2 2280, single-sided
Interface: NVMe, PCIe 3 x4
Controller: Micron in-house
NAND: Micron 64-layer 3D TLC
Sequential Read: 3000 MB/s
Sequential Write: 1600 MB/s
4KB Random Read: 240k IOPS
4KB Random Write: 210k IOPS
Power (Active): 6 W
Power (Idle): 300 mW
Power (Sleep): 5 mW
Warranty Endurance: 75 TB / 150 TB / 300 TB

Micron has not yet shared details about their new NVMe controller, but the basic specs for the 2200 SSD are available. The 2200 uses Micron's 64-layer 3D TLC NAND flash memory and offers drive capacities from 256 GB to 1 TB as single-sided M.2 modules. The drive uses a PCIe gen 3 x4 interface and has the expected features for a Micron client drive, including power loss protection for data at rest and SKUs with or without TCG Opal self-encrypting drive (SED) capabilities.

The performance and write endurance ratings for the Micron 2200 don't match up well against top consumer drives, but compare favorably against entry-level NVMe SSDs. Endurance is actually lower than Micron's Crucial MX500 mainstream consumer SATA drive, so any retail derivative of the 2200 will need to improve on that metric. No such retail version has been announced, but with the 2200 available now it is likely we'll be hearing from Crucial within a few months, though they may wait until later in the year to launch with 96-layer NAND instead of 64-layer.
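For readers comparing these endurance figures against enterprise drives quoted in drive-writes-per-day (DWPD), the conversion from a total-TB-written rating is simple arithmetic. The warranty length below is an assumption for the sketch (five years is common for drives in this class; Micron has not stated it here):

```python
# Converting a warranty endurance rating (total TB written) into the
# drive-writes-per-day (DWPD) figure usually quoted for enterprise
# drives. The 5-year warranty period is an assumption, not a figure
# from Micron's announcement.

def dwpd(endurance_tb, capacity_gb, warranty_years):
    """Drive writes per day sustained over the warranty period."""
    capacity_tb = capacity_gb / 1000
    return endurance_tb / (capacity_tb * warranty_years * 365)

# Micron 2200 ratings: 75 TB (256 GB), 150 TB (512 GB), 300 TB (1 TB)
for cap_gb, tbw in [(256, 75), (512, 150), (1000, 300)]:
    print(f"{cap_gb} GB: {dwpd(tbw, cap_gb, 5):.2f} DWPD over 5 years")
```

Under that assumed warranty period, all three capacities work out to roughly 0.16 DWPD, which is client-drive territory, well below the 0.5+ DWPD typical of read-intensive enterprise SSDs.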
Apple Announces New 10.5" iPad Air, 7.9" iPad mini
Today in a surprise announcement, Apple has unveiled refreshes to both the iPad Air and iPad mini lineups. The last releases in these lineups were the iPad Air 2 and the iPad mini 4, back in 2015. We had thought Apple had abandoned the models, yet today's release breathes fresh air into the devices with much-needed internal hardware upgrades as well as new functionality.
Kingston Launches New Enterprise SATA SSDs
Kingston is making a renewed effort in the enterprise storage market this year, starting with the launch of their DC500 family of enterprise SATA SSDs. The new DC500R and DC500M product lines are designed for read-intensive and mixed workloads, with endurance ratings of 0.5 and 1.3 drive writes per day, respectively.

The target market for the DC500 family is second-tier cloud service providers and system integrators. The biggest cloud companies (Google, Microsoft, Amazon, etc.) have largely moved over to NVMe SSDs, but among the smaller datacenter players there is still a large market for SATA drives. These companies are already Kingston's biggest customers for DRAM, so Kingston already has a foot in the door.

The DC500 family continues Kingston's close relationship with Phison, incorporating the new Phison S12 SATA SSD controller. This provides all the usual features expected from an enterprise drive, including end-to-end data path protection, Phison's third-generation LDPC error correction, and power loss protection. The NAND flash Kingston is using this time is Intel's 64-layer 3D TLC, rated for 5000 program/erase cycles. Kingston most often uses Toshiba flash, especially given their investment in Toshiba Memory Corporation, but ultimately Kingston is still an independent buyer of memory, and at the moment they consider Intel to be the better option for their enterprise SSDs.

Performance ratings are typical for SATA drives with TLC NAND. Both the DC500R and DC500M will saturate the SATA link for sequential transfers or random reads. The DC500R's steady-state random write performance is rated at 12k-28k IOPS depending on capacity, while the DC500M, with substantially more overprovisioning, can sustain 58k-75k random write IOPS. Capacities for both tiers of the DC500 will run from 480 GB up to 3.84 TB. The DC500R is shipping starting today, while the DC500M will start shipping next week, except for the largest 3.84 TB capacity, which will arrive later in Q2.
JapanNext 75 and 86-Inch 4K IPS HDR Monitors: What Separates TVs from Monitors, Anyhow?
Just when you thought that NVIDIA-inspired 65-inch Big Format Gaming Displays (BFGDs) were huge, JapanNext has rolled out its new 75- and 86-inch monitors. The JN-IPS7500UHDR-KG and JN-IPS8600UHDR monitors are aimed mostly at multimedia enthusiasts who also need to get some work done, but both LCDs feature profiles for gaming too.
Turtle Beach Acquires ROCCAT: a New Gaming Peripherals Giant Is Born
Turtle Beach, a leading supplier of headsets and a developer of various audio technologies, this week signed an agreement to acquire ROCCAT, a maker of gaming peripherals. The move creates a combined supplier of gaming peripherals with a presence all around the world.

At present Turtle Beach is primarily known in the US and some European countries for its gaming headsets for consoles and PCs. By taking over ROCCAT, the company gains keyboards, mice, and a variety of accessories for gamers. Turtle Beach estimates that the merged company will have a total of 48 core product models for various markets. Furthermore, Turtle Beach gains a presence in Asia and additional European countries where ROCCAT is known. To a large degree, Turtle Beach and ROCCAT have no obvious overlap in terms of product portfolio or distribution channels, allowing them to integrate better. It's not clear whether ROCCAT hardware will be rebranded as Turtle Beach, or whether the ROCCAT brand will remain.

René Korte, the head of ROCCAT, along with other employees of the company, will join Turtle Beach and continue to design peripherals.

Under the terms of the agreement, Turtle Beach will acquire ROCCAT for $14.8 million in cash (net of a working capital adjustment), $1 million in cash or stock (at the company's option), and up to approximately $3.4 million in earnout payments. Turtle Beach expects ROCCAT to contribute about $20 - $24 million to its 2019 revenue as well as over $30 million to its 2020 revenue.

Sales of Turtle Beach totaled $287.4 million in 2018, whereas its net income was $39.2 million. The lion's share of the company's revenue was contributed by headsets for game consoles sold in North America, a market where Turtle Beach has commanded a ~40% share for the past nine years. Meanwhile, Turtle Beach plans to grow sales of its PC gaming accessories to $100 million in the coming years, so the acquisition of ROCCAT is strategically important for the company.
Acer EI491CR: A Curved 49-Inch Monitor with FreeSync 2
Despite being one of the leading suppliers of gaming monitors, Acer has not had a single ultra-large, ultra-wide display for gamers – until now. This week the company finally started to sell its EI491CR, a 49-inch curved monitor that supports AMD's FreeSync 2 technology.
Sony Xperia 1, the Long 21:9 Smartphone, Available for Pre-Order
When Sony introduced its Xperia 1 flagship smartphone at MWC 2019, the company disclosed all technical specifications, but omitted two important details: pricing and launch date. This month some of the company’s partners began to take pre-orders on the product and had to disclose its estimated price. While retailers are taking pre-orders, we still do not know when Sony intends to start shipments.
The ASRock DeskMini 310 Mini-PC Review: A Cost-Effective Mini-STX Platform
Small form-factor PCs and gaming systems have emerged as bright spots in the mature PC market over the last decade or so. Intel's NUC form-factor introduction was the turning point in the SFF segment, though it came with a few limitations for DIY enthusiasts. While mini-ITX systems are quite flexible and compact, Intel realized that the market could do with an option between the NUC and the mini-ITX systems. The mini-STX (5-inch by 5-inch, known as '5x5') form-factor was launched in 2015. ASRock and ECS have introduced a number of mini-STX form-factor boards and systems. Today, we are taking a look at a low-cost H310 chipset-based mini-STX system from ASRock: the DeskMini 310.
Acer’s TravelMate X514-51: A 14-Inch Commercial Laptop under 1 kg (2.2 lbs)
Acer has introduced its new thin-and-light commercial notebook aimed at small and medium businesses. Outfitted with a 14-inch display and based on Intel’s Core i5/i7 processors, the TravelMate X514-51 weighs only 2.16 pounds (980 grams). The laptop also supports a host of security features required by businesses.
Samsung Begins Mass Production of 12 GB LPDDR4X for Smartphones
Samsung said late on Wednesday that it had started volume production of 12 GB LPDDR4X-4266 memory for high-end smartphones. The chip is the highest-density DRAM for mobile applications. The first smartphone to use Samsung’s 12 GB LPDDR4X DRAM package will be the company’s own Galaxy S10+ handset formally announced last month.
NVIDIA’s 65-inch Big Format Gaming Display Is Here: HP OMEN X Emperium
Huge displays for entertainment and productivity are getting increasingly popular these days as prices are falling. Last year NVIDIA proposed a reference design for Big Format Gaming Displays: 65-inch monsters featuring a 120/144 Hz refresh rate along with the company’s G-Sync HDR technology. The initiative was supported by three companies: Acer, ASUS, and HP. But while all of them formally announced their BFGD products at CES 2018, only HP has started to sell one - the HP OMEN X Emperium.
HP Expands 2018 Battery Recall Program, Delayed Announcement Due To Govt Shutdown
HP started a voluntary recall program covering around 50,000 batteries back in early 2018. This year the company expanded the program by another 78,500 battery packs after receiving eight more complaints from its customers. HP initiated this expanded recall in January; however, due to the US Government shutdown earlier this year, the recall has only now been publicly announced by the Consumer Product Safety Commission.
Best Motherboards: Q1 2019
We're once again back with our quarterly look at the PC motherboard market. For the first quarter of this year there has been a lot of buzz around impending AMD releases such as 7nm Zen 2, but for the motherboard market, the big shift has come at the higher end. At CES a few higher-end models dominated proceedings (literally), with the announcements of the ASUS ROG Dominus Extreme and the ROG X399 Alpha and X299 Omega models. But, by no means ignored, we're also blessed with no shortage of good motherboard options in all other product segments, ranging from the premium to the budget to the SFF markets.
The NVIDIA GeForce GTX 1660 Review, Feat. EVGA XC GAMING: Turing Stakes Its Claim at $219
Launching today is NVIDIA's next mainstream video card, the GeForce GTX 1660. The card is based on a cut-down version of the TU116 Turing GPU used in the GeForce GTX 1660 Ti, and comes paired with GDDR5 memory rather than cutting-edge GDDR6. Overall, the new GTX 1660 is meant to be a cheaper option for the mainstream market, not delivering quite as much performance, but coming in at an even more wallet-friendly $219.
New WD Blue SSD Switches To NVMe
In the process of assimilating SanDisk, Western Digital has been re-using their hard drive branding on consumer SSDs: WD Green, Blue, and Black can refer to either mechanical hard drives or SSDs. The WD Blue brand is used for the most mainstream products, which for SSDs has meant SATA drives. The first WD Blue SSD, introduced in 2016, used planar TLC NAND and a Marvell controller with the usual amount of DRAM for a mainstream SSD. The next year, the WD Blue was updated with 3D TLC NAND that kept it competitive with the Crucial MX series and Samsung 850 EVO. 2018 passed with no changes to the WD Blue hardware, but prices were slashed to keep up with the rest of the industry: the 1TB drive that debuted with an MSRP of $310 is now selling for $120.

SanDisk's 64-layer 3D TLC NAND is nearing the end of its product cycle, but they and other NAND flash manufacturers aren't in a hurry to switch over to 96-layer NAND, so it's not quite time for another straightforward refresh of the WD Blue. Instead, Western Digital has chosen to migrate the WD Blue brand over to a different market segment. Now that the WD Black is well-established as a high-end NVMe product, there's room for an entry-level NVMe SSD, and it will be the new WD Blue SN500. This is little more than a re-branding of an existing OEM product (the WD SN520), in the same way that the current WD Black SN750 SSD is based on the WD SN720. The SN520 was announced more than a year ago, but as an OEM product we were unable to obtain a review sample. Like the high-end SN720 and SN750, the SN520 and WD Blue SN500 use Western Digital's in-house NVMe SSD controller architecture, albeit in a cut-down implementation with just two PCIe lanes and no DRAM interface.
The high-end version of this controller architecture has proven to be very competitive (especially for a first-generation product), but so far we have only the SN500's spec sheet by which to judge the low-end controller.

WD Blue SN500 Specifications
Capacity           250 GB                    500 GB
Form Factor        M.2 2280 Single-Sided
Interface          NVMe PCIe 3 x2
Controller         Western Digital in-house
NAND               SanDisk 64-layer 3D TLC
DRAM               None (Host Memory Buffer not supported)
Sequential Read    1700 MB/s                 1700 MB/s
Sequential Write   1300 MB/s                 1450 MB/s
4KB Random Read    210k IOPS                 275k IOPS
4KB Random Write   170k IOPS                 300k IOPS
Power (Peak)       5.94 W                    5.94 W
Power (PS3 Idle)   25 mW                     25 mW
Power (PS4 Idle)   2.5 mW                    2.5 mW
Endurance          150 TB                    300 TB
Warranty           5 years
MSRP               $54.99
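For readers who think in drive writes per day (DWPD) rather than total bytes written, the rated endurance above is easy to translate. A quick back-of-the-envelope sketch in Python, using the TBW, capacity, and warranty figures from the spec table (the conversion itself is our arithmetic, not a vendor rating):

```python
# Convert the SN500's rated endurance (TBW) into drive writes per day (DWPD)
# over its 5-year warranty, using decimal units as SSD vendors do.

def dwpd(tbw_tb: float, capacity_gb: float, warranty_years: int = 5) -> float:
    """Drive writes per day = total terabytes written / (capacity * days)."""
    total_writes_gb = tbw_tb * 1000          # TB -> GB
    days = warranty_years * 365
    return total_writes_gb / (capacity_gb * days)

for cap_gb, tbw in [(250, 150), (500, 300)]:
    print(f"{cap_gb} GB model: {dwpd(tbw, cap_gb):.2f} DWPD")  # ~0.33 DWPD each
```

Both capacities work out to roughly a third of a drive write per day, which is typical for an entry-level consumer NVMe drive.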
Motorola’s 5G Moto Mod for Moto z3 Now Available on Verizon
Motorola and Verizon have begun taking pre-orders for the 5G Moto Mod, the add-on accessory for the Moto z3 smartphone introduced last year. The device supports 5G mmWave radio as well as sub-6 GHz connectivity. The 5G Moto Mod only works with the Moto z3, and will be available exclusively to Verizon customers in the U.S.
Toshiba's HDD Tech Roadmap: A Mix of SMR, MAMR, TDMR, and HAMR
In an interview published this week with Blocks & Files, Toshiba outlined how the company will rely on a mix of hard drive technologies in order to keep increasing hard drive capacities. Along with current-generation two-dimensional magnetic recording (TDMR) and shingled magnetic recording (SMR) technologies, the company will also be tapping both microwave assisted magnetic recording (MAMR) as well as heat-assisted magnetic recording (HAMR) for future drives. Already gearing up to ship its first 16 TB TDMR drives, Toshiba's short-term development plans call for it to adopt SMR as well as MAMR. Meanwhile in the longer term, HAMR will be introduced for further capacity increases.

Earlier this year Showa Denko, Toshiba’s supplier of HDD media, revealed that the company would be supplying platters for hard drives based on MAMR technology. Toshiba has since confirmed their plans to use MAMR in this week's interview, but in an added twist, the company also noted that some of its high-capacity MAMR hard drives will use shingling as well.

“MAMR will be used to advance the capacity of both CMR (discrete track) recording and to SMR (shingled track) recording,” said Scott Wright, director of HDD marketing at Toshiba America Electronic Components.

Overall, it is no secret that for years now Toshiba has been working on hard drives featuring SMR technology. However, unlike its competitors, the company has yet to introduce any commercial SMR hard drives, so these new MAMR + SMR drives would be the first commercial SMR deployment for the company. SMR of course brings some new performance trade-offs due to the read-modify-write cycle introduced by shingled tracks, but it still makes a great deal of sense for high-capacity HDDs since it allows drive vendors to increase their capacities without switching to a new type of media.

Toshiba's MAMR-based HDDs will begin sampling later this year.
And, accounting for a few quarters for datacenter operators to validate the new drives, we should see their MAMR hard drives enter volume production in 2020.

Looking further out, Toshiba has also said that sooner or later it will have to use HAMR, due in large part to the higher scalability that the technology offers.

“In theory, MAMR does not advance long-term areal density gain as far as what may be achievable with HAMR. MAMR is certainly the next step; HAMR is very likely an eventual future step up the areal density ladder.”

By adopting MAMR for their 2019 – 2020 nearline HDDs, Toshiba and Western Digital can continue using HDD media that is similar to the platters used today. By contrast, Seagate is set to skip MAMR and use HAMR along with brand new disks instead.

Related Reading:
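To see why the read-modify-write cycle mentioned above hurts SMR performance, consider a toy model of a shingled band: because each track partially overlaps the next, updating a track in place means re-reading and re-writing everything shingled on top of it. The band size below is purely hypothetical, chosen for illustration:

```python
# Toy model of SMR's read-modify-write penalty (band size is hypothetical).
# Shingled tracks overlap like roof shingles, so rewriting one track forces
# a rewrite of every later track in the same band.

BAND_TRACKS = 10  # hypothetical number of tracks per shingled band

def smr_rewrite_cost(track_in_band: int, band_tracks: int = BAND_TRACKS) -> int:
    """Tracks that must be read and rewritten to update one track in a band.

    On a conventional (CMR) drive the answer is always 1; on SMR it grows
    toward the full band size the earlier in the band the update lands."""
    return band_tracks - track_in_band  # this track plus all tracks shingled over it

# Updating the first track of a band forces a rewrite of the whole band:
print(smr_rewrite_cost(0))   # 10 tracks touched
# Updating the last track behaves like a conventional drive:
print(smr_rewrite_cost(9))   # 1 track touched
```

This is why SMR drives favor sequential, append-heavy workloads such as archival and nearline storage, where whole bands are written in order and random rewrites are rare.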
The Reality of SSD Capacity: No-One Wants Over 16TB Per Drive
One constant in the storage business is that capacity per drive keeps increasing. Spinning hard-disk drives are approaching 20 TB, while solid state drives can vary from 4TB to 16TB or even more, if you’re willing to entertain an exotic implementation. Today at the Data Centre World conference in London, I was quite surprised to hear that due to risk management, we’re unlikely to see much demand for drives over 16TB.
The MasterAir MA621P TR4: CM's Triple Fan Tower Cooler for Threadripper
Cooler Master has introduced a new air cooler specifically designed for AMD’s Ryzen Threadripper processors. Dubbed the MasterAir MA621P TR4 Edition, the cooler features a special base for use with AMD’s TR4 CPUs, as well as multiple heat pipes and up to three fans to cool AMD's mighty processors. And, for the aesthetically-minded out there, the new cooler also includes addressable RGB lighting.
The Memblaze PBlaze5 C916 Enterprise SSD Review: High Performance and High Capacities
The Memblaze PBlaze5 high-end enterprise SSD product line has been refreshed with 64-layer 3D NAND, allowing for better power efficiency and more affordable top-tier performance. The PBlaze5 C916 offers some of the highest performance obtainable from a single SSD, large capacities enabled by modern 3D TLC NAND, and plenty of write endurance to survive demanding workloads.
AMD Launches China-only Radeon RX 560 XT
This evening – or rather this morning in China – AMD is rolling out a new mid-to-entry level Radeon RX video card. Dubbed the Radeon RX 560 XT, the new part is a lower-tier Polaris 10-based card that’s designed to fill the gap between the RX 560 and the RX 570 in the Chinese market. This new SKU will only be sold in China – don’t expect to see it come to North America – with AMD’s close partner Sapphire being the sole vendor of the card.

By the numbers, the Radeon RX 560 XT is a relatively straightforward cutting down of the Radeon RX 570, itself a cut-down Polaris 10 SKU. Relative to the 570, the 560 XT drops another 4 CUs, bringing it down to 28 CUs. Past that, the clockspeeds have also taken a hit; the 560 XT will top out at just 1073MHz for the boost clock, instead of 1244MHz like its fuller-fledged 570 sibling. So for shader and texture performance, it will deliver around 75% of the throughput of the RX 570.

AMD Radeon RX 500 Series Specification Comparison (China)
                RX 580 2048SP   RX 570   RX 560 XT   RX 560
Compute Units   32 CUs
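The "around 75%" figure above falls out directly of the CU counts and boost clocks, since shader and texture throughput scale with both. A quick sanity check in Python (our arithmetic, using the figures quoted above):

```python
# Back-of-the-envelope check of the "~75% of RX 570 throughput" claim:
# peak shader/texture throughput scales with CU count x boost clock.

def relative_throughput(cus_a: int, clock_a_mhz: int,
                        cus_b: int, clock_b_mhz: int) -> float:
    """Ratio of peak throughput of card A to card B."""
    return (cus_a * clock_a_mhz) / (cus_b * clock_b_mhz)

ratio = relative_throughput(28, 1073, 32, 1244)  # RX 560 XT vs. RX 570
print(f"RX 560 XT / RX 570 = {ratio:.0%}")       # ~75%
```

Real-world gaming performance will also depend on memory bandwidth and ROP throughput, so treat this as an upper-bound estimate rather than a benchmark prediction.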
Microsoft Brings DirectX 12 To Windows 7
Sometimes things happen that are unexpected – just ask Ned Stark. In a far less fictional event, Microsoft has posted an update on their DirectX Blog announcing that they’ve brought a form of DirectX 12 to Windows 7, via official support for the latest DX12 version of World of Warcraft on Windows 7. Where do we even begin?

For some background, Microsoft’s latest DirectX API was created to remove some of the CPU bottlenecks for gaming by allowing developers to use low-level programming conventions to shift some of the pressure points away from the CPU. This was a response to single-threaded CPU performance plateauing, making complex graphical workloads increasingly CPU-bound. There are many advantages to using this API over traditional DX11, especially for threading and draw calls. But Microsoft made the decision long ago to only support DirectX 12 on Windows 10, with its WDDM 2.0 driver stack.

Today’s announcement is a pretty big surprise on a number of levels. If Microsoft had wanted to back-port DX12 to Windows 7, you would have thought they’d have done it before Windows 7 entered its long-term servicing state. As it is, even free security patches for Windows 7 are set to end on January 14, 2020, which is well under a year away, and the company is actively trying to migrate users to Windows 10 to avoid having a huge swath of machines sitting in an unpatched state. In fact, they are about to add a pop-up notification to Windows 7 to let users know that they are running out of support very soon. So adding a big feature like DX12 now not only risks undermining their own efforts to migrate people away from Windows 7, but it also means adding a new feature well after Windows 7 entered long-term support. It’s just bizarre.

Now before you get too excited, this is currently only enabled for World of Warcraft; and indeed it's not slated to be a general-purpose solution like DX12 on Win10.
Instead, Microsoft has stated that they are working with a few other developers to bring their DX12 games/backends to Windows 7 as well. As a consumer it’s great to see them supporting their product ten years after it launched, but with the entire OS being put out to pasture in nine months, it seems like an odd time to be dedicating resources to bringing it new features.

Microsoft does say that DX12 will offer more features on Windows 10, which makes sense since the graphics stack was designed for it right from the start, but if you do play World of Warcraft on Windows 7, you’re going to get a free performance boost. You may still want to look into getting off of Windows 7 soon though, since this isn’t going to move the January 2020 end-of-support date back for gamers.

For Blizzard, the publisher of World of Warcraft, this is a huge win for their developers, since they’ll no longer need to maintain two versions of the game.

Overall, this is an unanticipated and rather exceptional event for the state of Windows graphics APIs. And having reached out to one expert for commentary on Microsoft's announcement, they seem to agree:
Corsair Announces K83 Wireless Entertainment Keyboard for HTPCs: Keyboard Meets Joystick
Corsair this month has rolled out its first keyboard designed primarily for HTPCs. The K83 Wireless Entertainment Keyboard incorporates a tenkeyless keyboard using Corsair's low-profile switches, along with a touchpad, a joystick, and a dedicated volume scroller to flesh out the functionality of Corsair's lap-friendly keyboard.

Corsair is of course no stranger to lapboards. The company has been offering a lapboard version of the gaming-focused K63 keyboard since last year. However, whereas the K63 lapboard setup ultimately used a standard wireless keyboard with an extra lapboard accessory – one whose lap-accommodations were focused on providing space for a wireless mouse – the K83 is an outright built-from-scratch lapboard, and one designed for broader uses as a general-purpose HTPC keyboard.

At its core, the Corsair K83 Wireless Entertainment is a 78-key compact keyboard that uses Corsair’s ultra-low-profile scissor key switches, and a controller that supports 20-key rollover as well as a 1000 Hz polling rate. So although it's not solely a gaming-focused keyboard, the K83 comes with gaming credentials, which is further underscored by its specially-painted WASD keys. The board can connect to host PCs using a 2.4 GHz dongle, Bluetooth 4.2, or a USB cable, and when working wirelessly, Corsair says that the battery should last for up to 40 hours.

Meanwhile, to build up its HTPC feature set, the Corsair K83 Wireless Entertainment is outfitted with a joystick to navigate through menus and play games, a multi-touch touchpad with discrete left and right click buttons, media playback buttons (that work when the Fn button is pressed), and even a dedicated volume control "scroller". This essentially makes the K83 a media keyboard on steroids, with further couch-friendly control options integrated directly into the board so that it can be used without requiring a discrete mouse.
The joystick is an especially interesting design choice – Corsair is basically looking to emulate the right half of a gamepad – and I'm not sure there's any other keyboard quite like this on the market.

Of course, as this is a Corsair keyboard, the company's software stack is also a big part of the feature set. The K83 keyboard is compatible with the company's iCUE software, which can be used to recalibrate the joystick, configure Windows touchpad gestures, create macros, remap keys, adjust backlighting, and so on.

Finally, one of the particularly interesting HTPC-focused features of the Corsair K83 Wireless Entertainment is that Corsair is officially supporting the keyboard on more than just macOS and Windows, allowing it to be used with set-top boxes and TVs. Other supported devices include NVIDIA's Shield TV, the Apple TV, Amazon's Fire TV, and Samsung's Tizen-powered TVs. Of course, not every feature works with every device – with Corsair taking care to document what works where – and generally speaking, the more restricted an ecosystem, the fewer extra features like the touchpad work. But it's still an interesting take on compatibility, one that makes the keyboard useful in more living rooms.

The Corsair K83 Wireless Entertainment Keyboard is already available directly from the company as well as from its resellers. The board runs for $99.99 in the US.

Related Reading:
Western Digital Develops Low-Latency Flash to Compete with Intel Optane
Western Digital is working on its own low-latency flash memory that will offer higher performance and endurance when compared to conventional 3D NAND, ultimately designed to compete against Optane storage.
Ulefone Shows off the T3: A Helio P90-Powered Phone with a Large Punch-Hole Display & 48 MP Camera
One of the things that caught our eye at Mobile World Congress in 2018 and 2019 were new, high-end smartphones coming from China-based manufacturers that are better known for their value handsets. Ulefone is one such company that is trying to break into the market for advanced smartphones by relying on MediaTek's premium, sub-flagship SoC offerings. Last year Ulefone demonstrated its T2 Pro 6.7-inch handset designed for demanding customers. This year the company showcased its T3 smartphone, which promises to be even more impressive.
The ASRock X399 Phantom Gaming 6 Motherboard Review: $250 Sixteen Core Stunner
The ASRock X399 Phantom Gaming 6 is one of the cheapest X399 motherboards currently on the market and brings the Phantom Gaming name to the high-end desktop market. This entry-level option for Threadripper uses a 2.5 gigabit Ethernet controller, and is one of only a few boards to do so. It also offers a trio of M.2 slots, but due to its design it only supports Threadripper processors with up to 16 cores.
The Xeon Entry Quad-Core CPU Review: Xeon E-2174G, E-2134, and E-2104G Tested
A couple of months ago we reviewed a few of the newest six-core Intel commercial CPUs that are also used in low-end servers. Intel has also launched some quad-core models, which we are focusing on today. These Xeon E quad-core processors compete directly against AMD's Ryzen Pro product line, focusing on manageability, ECC memory support, and guaranteed product longevity.
NVIDIA To Acquire Datacenter Networking Firm Mellanox for $6.9 Billion
Starting off the week bright and early, NVIDIA this morning announced that they’re acquiring datacenter networking and interconnect firm Mellanox. With a price tag of $6.9 billion, NVIDIA’s acquisition will vault the company deep into the datacenter networking market, making them one of the leading vendors virtually overnight.

Mellanox is not a name we normally see much here at AnandTech, as it’s often a company in the background of bigger projects. Mellanox specializes in datacenter connectivity, particularly high-bandwidth Ethernet and InfiniBand products, for use in high-performance systems. Overall their technology is used in over half of the TOP500-listed supercomputers in the world, as well as countless datacenters. So depending on which metrics you use and how widely you define the market, they’re generally a top-tier competitor or a market leader in the datacenter networking space.

Meanwhile, with NVIDIA’s own datacenter and HPC revenues growing by leaps and bounds over the last few years – thanks in big part to the machine learning boom – NVIDIA has decided to expand their datacenter product portfolio by picking up Mellanox. According to NVIDIA, acquiring the company will not only give NVIDIA leading-edge networking products and IP, but it will also allow them to exploit the advantages of developing in-house the high-performance interconnects needed to allow their own high-performance compute products to better scale.

Like many other companies in the datacenter space, NVIDIA already has significant dealings with Mellanox. The company’s DGX-2 systems incorporate Mellanox’s controllers for multi-node scaling, and on an even bigger scale, Mellanox’s hardware is used in both the Summit and Sierra supercomputers, both of which are also powered by NVIDIA GPUs.
So acquiring the company gives NVIDIA some verticality to leverage for future system sales, as well as to further broaden their overall product offerings beyond GPUs. In fact, this will be about the least-GPU-like product in NVIDIA’s portfolio once the deal closes, as all other active NVIDIA product lines are ultimately compute products of some sort. Though to put the size of these businesses in perspective, Mellanox is a fraction of NVIDIA's size, and so too is their business. Similarly, by 2023 NVIDIA is expecting a $61B total addressable market for compute + high-speed networking – but only $11B of that is networking. So Mellanox’s networking hardware is still one small piece of a much bigger NVIDIA.

As for the deal itself, NVIDIA will be paying $125/share for Mellanox, which is 14% over Mellanox’s previous closing price. Notably, this is going to be an all-cash transaction for NVIDIA; rather than buying out Mellanox’s shareholders with equity in NVIDIA, the company will instead just pay for the company outright via their ample (and growing) cash reserves. Though if reports are to be believed, the timing of this deal was spurred by Mellanox more than NVIDIA – Mellanox had put itself on the market and was supposedly looking at several bidders, so NVIDIA would have needed to spend the cash now if they didn’t want to miss the chance to buy a high-end networking company.

Finally, along with their own vertical integration plans, it sounds like NVIDIA intends to keep the rest of the Mellanox networking business largely status quo, including keeping the company’s offices in Israel as well as its existing sales & support infrastructure. Mellanox was already a profitable company – which helps NVIDIA’s own bottom line – so NVIDIA doesn’t necessarily need to change the company’s direction to profit from their new acquisition.
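For the curious, the deal terms above pin down Mellanox's prior closing price: a $125/share offer at a 14% premium implies where the stock was trading before the announcement. The check below is our arithmetic from the figures quoted in the article:

```python
# Derive Mellanox's implied prior closing price from the deal terms:
# $125/share offer, stated as a 14% premium over the previous close.

offer_per_share = 125.00
premium = 0.14

# offer = prior_close * (1 + premium), so invert:
prior_close = offer_per_share / (1 + premium)
print(f"Implied prior close: ${prior_close:.2f}")  # ~ $109.65
```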
CXL Specification 1.0 Released: New Industry High-Speed Interconnect From Intel
With the battleground moving from single core performance to multi-core acceleration, a new war is being fought with how data is moved around between different compute resources. The Interconnect Wars are truly here, and the battleground just got a lot more complicated. We’ve seen NVLink, CCIX, and GenZ come out in recent years as offering the next generation of host-to-device and device-to-device high-speed interconnect, with a variety of different features. Now CXL, or Compute Express Link, is taking to the field.
Best Video Cards for Gaming: Q1 2019
For gaming PCs that push the pretty pixels on the screens, the video card is the most important component. And given the sheer amount of custom options, choosing the right graphics card for your budget can be very difficult. In our Video Cards for Gaming guides, we give you our recommendations in terms of GPU models and current prices representative of an affordable non-blower custom card. Our guide targets common gaming resolutions at system-build price points similar to our CPU guides.
ChipRebel Releases Exynos 9820 Die Shot: M4 CPUs in New Cluster
Every time a new SoC comes out, the one thing we eagerly await is for someone to release a die shot of the new chip. This process is most interesting when the new chip comes with either a new microarchitecture or a new process node. Last November, we covered the release of ChipRebel’s Kirin 980 die shot, which gave us the first ever look at Arm’s new Cortex A76 CPU as well as the Mali G76 GPU on a new 7nm process node.

The folks over there are at it again and have seemingly been very busy overnight, this time around tearing down the Galaxy S10 with the new Exynos 9820. I’ve been going back and forth with the ChipRebel team on these last two projects and they’re great guys; definitely visit them if you are in need of die shots.
NVIDIA To Move Mobile Kepler GPUs to Legacy Status in April 2019 (& 3D Vision Too)
In a note on their support website published earlier this afternoon, NVIDIA has announced that they are preparing to move their mobile (laptop) Kepler GPUs to legacy status, ending mainstream graphics driver support for these products. Starting in April 2019, mobile Kepler products will become legacy products, meaning they will no longer receive Game Ready driver enhancements, performance optimizations, and bugfixes. However, they will continue to receive critical security updates through the end of the legacy support phase, which is set to run through April of 2020.

As NVIDIA continues to produce new GPU architectures on a roughly two-year cadence, the shuffle off to legacy status has become a relatively regular event for the company. However, this latest retirement is quite a bit different in that NVIDIA is only retiring their mobile Kepler parts, and not their desktop parts. Previously, legacy retirements involved the whole architecture at once, as it would allow NVIDIA to neatly wrap up all driver development in a single go. Instead, as NVIDIA’s document even takes specific care to note, desktop Kepler parts are not part of this retirement and will continue to receive full support for the time being.

Past that, as NVIDIA tends to be a heavily data-driven company, I can only speculate that they believe Kepler laptop ownership/usage is low enough at this point that retiring just Kepler laptop support would be beneficial for the company. By dropping ongoing “game ready” support for their mobile products, it means that NVIDIA no longer needs to regression test new drivers against these parts, even if they continue to develop optimizations and bug fixes for the Kepler architecture itself.

That said, I am a bit surprised by how quickly this has come.
Though introduced before Kepler, it was really only with Kepler that NVIDIA’s Optimus switchable graphics technology took off, and as a result seeing an additional NVIDIA GPU in a higher-end thin & light notebook became a more common occurrence that still continues to this day. For reference, NVIDIA only moved its previous Fermi-generation products to legacy status last April, so this marks a shorter gap for the much more popular mobile Kepler.

At any rate, NVIDIA’s current release doesn’t state what the final driver branch will be. So it’s not clear if the current R418 branch is it – and the branch after it will drop mobile Kepler – or if it’s going to be the next branch that’s the last. It is a small but notable distinction, since NVIDIA will need to provide further security updates for that branch for another year.

In the meantime, you can find a complete list of mobile Kepler products over at NVIDIA’s site. The list is rather extensive – along with the 600M series, Kepler parts were also used in the 700M, 800M, and even some 900M parts. So some of the products that are set to be retired are relatively recent, numerically speaking.

Update: Another NVIDIA support article about legacy products has surfaced, this time regarding NVIDIA’s 3D Vision products. Alongside mobile Kepler support, NVIDIA will also be sunsetting 3D Vision starting next month. Like mobile Kepler, these products will be moving to legacy status, and will receive one year of critical driver support through April 2020.

Unlike NVIDIA’s mobile Kepler note, their 3D Vision note does specify the final driver branch. It looks like the current R418 driver branch is it for 3D Vision, mobile Kepler, and whatever else NVIDIA decides to retire next month.
All of which is a potential sign that the branch following R418 will incorporate significant driver and feature updates, since NVIDIA will get a clean(ish) break and won’t need to roll them out to their oldest products.

As for NVIDIA’s 3D Vision products, the company hasn't launched a new version of the technology since 3D Vision 2 in 2011. And while NVIDIA has continued to support 3D Vision for over a decade now, the writing was clearly on the wall for these products after the first generation of VR headsets launched. Even ignoring their VR-specific aspects for a moment, the VR headsets also offer a superior 3D stereo experience, as each eye gets a completely isolated view, eliminating ghost images meant for the other eye. Contrast is also better, since there’s no need to block out an eye.

Still, it’s the end of an era for sure, as shutter glasses-based 3D stereo products are (once again) on their way out.
AOC Introduces Its G2868PQU Monitor: An Inexpensive 4K Gaming Display with FreeSync
AOC this week introduced its first entry-level 4K gaming display. The G2868PQU monitor boasts numerous firmware-based features designed for gaming, as well as a 1 ms response time. The monitor also supports AMD’s FreeSync technology, though this isn't being paired with any kind of high refresh rate.

The AOC G2868PQU is a 28-inch LCD that uses a ‘next-generation HDR-Ready' TN panel. The monitor features a 3840×2160 resolution, 300 nits maximum brightness, a 1000:1 contrast ratio, a 1 ms response time, and a 60 Hz maximum refresh rate. The display has a scaler that supports AMD’s FreeSync dynamic refresh rate technology, but the manufacturer does not disclose its range or whether it's wide enough to support Low Framerate Compensation.

Although AOC claims that the monitor is ‘HDR-ready’, it never specifies whether the monitor supports HDR10 or other HDR transport formats. In any case, an LCD panel featuring a maximum brightness of 300 nits can hardly offer a good HDR experience. Meanwhile, the display can cover 102% of the sRGB and 82% of the AdobeRGB color spaces. And since the G2868PQU is aimed at gamers, it supports AOC’s Game Color (user-adjustable saturation), Shadow Control (user-adjustable brightness for dark areas in games), Game Modes (specific presets for FPS, RTS, and Racing genres, plus three user-defined presets), and Dial Point (crosshair) features.

AOC's monitor comes in a rather angular chassis with red inlays to emphasize its gaming nature, and the stand is height-adjustable. As for connectivity, the G2868PQU offers four inputs (D-Sub, DVI-DL, DisplayPort, HDMI-MHL) to ensure compatibility with both modern and legacy systems, a quad-port USB 3.0 hub, an audio input, and a headphone output.
As an added bonus, the LCD has stereo speakers.

Specifications of AOC's Entry-Level 28" Gaming Display (G2868PQU)
Panel                  28" TN
Native Resolution      3840 × 2160
Maximum Refresh Rate   60 Hz
Dynamic Refresh Tech   FreeSync
Response Time          1 ms (gray-to-gray)
Brightness             300 cd/m²
Contrast               1000:1
Viewing Angles         170°/160° horizontal/vertical (not confirmed)
Pixel Pitch            0.1614 × 0.1614 mm
PPI                    157
Color Gamut            102% sRGB
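The pixel density and pixel pitch in the table follow directly from the panel size and resolution, so they are easy to verify. A quick check in Python (our arithmetic, from the spec figures):

```python
# Verify the G2868PQU's listed pixel density and pixel pitch from its
# 28-inch diagonal and 3840 x 2160 resolution.
import math

def ppi(h_px: int, v_px: int, diagonal_in: float) -> float:
    """Pixels per inch = diagonal pixel count / diagonal length in inches."""
    return math.hypot(h_px, v_px) / diagonal_in

density = ppi(3840, 2160, 28)
pitch_mm = 25.4 / density  # pixel pitch in millimetres (25.4 mm per inch)
print(f"{density:.0f} PPI, {pitch_mm:.4f} mm pitch")  # ~157 PPI, ~0.1614 mm
```

Both values match AOC's published figures, confirming the table is internally consistent.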
Cincoze DX-1100 Ultra Compact Rugged System: Xeon-E with Custom I/O Expansion
Cincoze has introduced its new ultra-compact rugged embedded workstation for use in space-constrained environments. The Diamond Extreme Series DX-1100 system supports Intel’s 8th Gen Core as well as Xeon E-2100-series CPUs, and features extremely robust connectivity options for both office and industrial environments. Of particular note is the machine's custom I/O.

The Cincoze DX-1100 comes in a 3.2-liter metallic unibody chassis, which serves to both contain and cool the internal components. The system can accommodate Intel’s 8th Gen Core or Xeon E-2100-series processors, though as it's passively cooled it stops just short of supporting Intel's complete lineup of chips, with Cincoze only qualifying chips up to an 80 W TDP. Meanwhile the miniature machine also sports two DDR4 SO-DIMM slots, two hot-swap front-accessible 2.5-inch SATA bays, three mSATA connectors, an optional M.2-2230 CNVi Wi-Fi adapter, and a SIM card bay. As a rugged PC, the DX-1100 is rated for a wide range of operating conditions, including extreme temperatures (from -40°C to 70°C) and various DC power input voltages (9~48 VDC), and it can tolerate the high vibrations and shocks (5/50 Grms) that are common in industrial, in-vehicle, mobile, and similar environments.

Aside from the rugged chassis, the other key feature of the Diamond Extreme Series DX-1100 is support for Cincoze’s proprietary CMI (Combined Multiple I/O) and CFM (Control Function Module) modules, which allow the manufacturer to tailor the external I/O capabilities of the system to a customer's needs.

By default, the Diamond Extreme Series DX-1100 is configured with eight USB 3.0/3.1 ports (6+2), a DVI-I output (which supports D-Sub with an appropriate adapter), a DisplayPort, an HDMI port, two GbE connectors (powered by Intel’s I219-LM and I210-IT controllers), four COM ports, audio output/input connectors, an external fan connector, and a proprietary power input.
But with various optional CMI or CFM modules installed, the Cincoze DX-1100 can gain four additional RJ-45/M12 GbE ports, two more COM ports, a 16DIO (8-pin in, 8-pin out) connector, a DIN-RAIL mount kit, and so on.

Cincoze already lists its Diamond Extreme Series DX-1100 on its website, and while the company is taking inquiries, it's not clear whether the system is actually available at this time. Meanwhile, given the wide array of configuration options, the pricing of the system will depend on what CPU, memory, storage, and I/O features are selected.

Related Reading:
Alcatel Readies a 5G Mi-Fi Hotspot with USB-C (and no Wi-Fi?)
One of the major use cases driving 5G development and deployment is the need for high-speed Internet connectivity in locations where it is impossible (or expensive) to cover the last mile via cables. Both carriers and hardware makers, in turn, are looking to capitalize on this by offering service contracts and high-end 5G mobile hotspots. TCL, the company behind Alcatel and BlackBerry smartphones, is prepping a rather interesting device: a 5G Mi-Fi hotspot with a USB Type-C interface.

The Alcatel "5G USB Device" (which will likely go by other names when distributed by the carriers) supports download speeds of up to 2 Gbps (using 4x4 MIMO) and upload speeds of up to 1 Gbps (using 2x2 MIMO). The hotspot is based on MediaTek’s Helio M70 modem, which supports 5G NR/LTE and 5G NSA & SA on the sub-6 GHz band. So the hotspot won't have access to mmWave and the even higher bandwidths that provides, but sub-6 is better suited for the kind of last-mile deployments that rural 5G will be aimed at anyhow. The unit has 1 GB of LPDDR4 memory, 1 GB of NAND flash storage, and a 4000-mAh battery that can last for one day, according to the manufacturer.

One of the odd things about Alcatel’s 5G Mi-Fi is the fact that TCL does not explicitly list it as supporting Wi-Fi (at least not in its present form). Instead, its only client connectivity is a USB Type-C port, which seems to be used for power as well. There are of course other wired Mi-Fi devices on the market, but it's unusual for these devices to ship with large internal batteries, as is the case for Alcatel's device. So I'm going to be surprised if we actually see the device ship without Wi-Fi support. Otherwise, expect to see the device used with individual PCs and with routers that support USB cellular adapters, the latter of which is actually a fairly common feature despite its limited use.

At the moment TCL is not saying when it plans to release its device commercially.
Like so many other 5G client devices, we expect to see it launch once the 5G networks are deployed.Related Reading:
Western Digital to Demo Dual-Actuator HDDs Next Week: Double the Actuators for Double the Perf
Western Digital has revealed this week that it will demonstrate its first dual-actuator hard drives at next week's OCP summit. Marking the company's first foray into multi-actuator drives, WD expects their dual-actuator HDDs to offer roughly twice the performance of conventional, single-actuator drives, although they'll be trading off some power efficiency in the process.

While the capacities of enterprise and nearline 7200-RPM hard drives have been increasing consistently, due to the laws of physics the I/O performance of these drives has remained at around 80 IOPS per drive. This means that as capacity goes up, the drives' IOPS-per-TB performance ratio is decreasing, something that is especially problematic when it comes to read IOPS. As a result, it is getting harder for datacenter operators to meet their service level agreements and quality-of-service requirements.

To combat this, one of the most straightforward ways to increase drive performance and throughput is to increase the number of individual actuators, allowing drives to essentially service twice as many I/O operations at once. Both Seagate and Western Digital have been developing their multi-actuator HDDs for quite a while. The former demonstrated a working dual-actuator drive last March, whereas the latter will show off its dual-actuator prototype next week.

Western Digital has rather high expectations for its dual-actuator HDDs. The company expects the new drives to offer double the sustained transfer rates as well as double the IOPS when compared to existing HDDs. Which, if we use existing drives as a baseline, would mean that we're talking about data rates on the order of 500 MB/s as well as 160 ~ 200 IOPS.
Meanwhile, although no official numbers were provided ahead of next week's formal reveal, the company did publish a photo of its dual-actuator prototype.

The trade-off for dual-actuator technology is that since these hard drives are essentially two HDDs in a single chassis, they will consume more power than traditional drives. But that's still 26% less than two independent HDDs, owing to the fact that there is still only a single set of spinning platters. For example, Western Digital’s Ultrastar 14 TB SATA hard drive consumes 7.6 W in operating mode, so a pair of such HDDs would draw 15.2 W. Meanwhile, a hypothetical dual-actuator hard drive that consumes 26% less than that pair would end up at around 11.25 W, which, importantly, is within the power limits of a typical 3.5-inch SATA bay (typically up to 12 W).

Right now, Western Digital is not disclosing when it intends to commercially release its dual-actuator HDDs. Instead, the company is stressing that for now this is only a technology demo. Nonetheless, we hope the company will make its plans a bit clearer next week.

Related Reading:
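The power arithmetic can be double-checked in a few lines. The 7.6 W operating figure, the 26% saving, and the ~12 W bay budget all come from the article:

```python
# Verify the dual-actuator power arithmetic from the article.
SINGLE_DRIVE_W = 7.6                 # Ultrastar 14 TB SATA, operating mode
TWO_DRIVES_W = 2 * SINGLE_DRIVE_W    # two independent HDDs: 15.2 W
SAVINGS = 0.26                       # WD's claimed saving vs. two drives

dual_actuator_w = TWO_DRIVES_W * (1 - SAVINGS)
print(f"{dual_actuator_w:.2f} W")    # ~11.25 W, under the ~12 W SATA bay budget
```

The result, about 11.25 W, confirms why the 26% figure matters: it is what keeps a dual-actuator drive inside a standard 3.5-inch bay's power envelope.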
Ulefone Demonstrates the Power 6: A Mid-Range Smartphone with a 6,350 mAh Battery
While the power efficiency of phones has significantly improved over the years, overall phone battery life has nonetheless been squeezed by design choices such as large screens and the desire for thin phone bodies. This has resulted in manufacturers continuing to experiment with phones with larger batteries; and while some vendors take this to the extreme with 18,000 mAh monsters, others have been working on more modestly sized phones. Ulefone, for its part, was one of the companies that started using high-capacity batteries in its handsets several years ago, so it was not particularly surprising to see its new mid-range smartphone, the 6,350 mAh Power 6, at this year's Mobile World Congress.
TCL Shows Off the Alcatel 7: Their First High-End 5G Smartphone
Right now, TCL produces smartphones under two major brands. Under the Alcatel brand the company produces entry-level and mainstream handsets, whereas under the BlackBerry brand it makes specialized smartphones for those who need advanced security and/or a QWERTY keyboard. This product segmentation means that at present, TCL is not addressing the market for ‘classic’ higher-end smartphones. However, this is going to change later this year, when TCL launches the Alcatel 7 series and its first 5G handset.

TCL demonstrated a mock-up of the Alcatel 7 5G smartphone at Mobile World Congress last week. The handset will use TCL’s 6.39-inch (or 6.5-inch) LCD display with a 2340x1080 resolution, 600 nits brightness, a 1500:1 contrast ratio, and a hole-punch selfie camera, which the company showcased earlier at the trade show. Unfortunately, TCL is not disclosing the SoC + modem platform it intends to use for the smartphone (though the list of 5G modems available on the open market is currently limited), but says it will be paired with 6 GB of RAM and 64 GB of NAND storage.

The imaging capabilities of the Alcatel 7 5G handset will include a triple-module (48 MP + 16 MP + 5 MP) rear camera array with a dual-tone LED flash, and a 24 MP front camera. Both the front and rear cameras are expected to support TCL's 4-in-1 big pixel technology, and of course, some AI-based features.

When it comes to the overall design of the Alcatel 7, it will feature a metal frame, 2.5D glass on the front, and a 3D gradient glass cover on the back. Since the phone relies on an LCD display, it's not possible to use an in-screen fingerprint scanner, which is why the scanner is more traditionally located on the back of the handset.

Since it's equipped with a sizable 6.39-inch (or even 6.5-inch) screen, the Alcatel 7 is rather big overall.
But if nothing else, TCL is putting that space to good use: the phone will be equipped with an above-average-capacity 4000-mAh battery, though it will be interesting to see what the 5G modem does to overall battery life. Meanwhile, as far as the feel of the phablet-sized phone goes, since the backside is rounded, it does not feel thick despite its size, and it is rather comfortable to hold.

TCL intends to start shipments of its Alcatel 7 5G handset sometime later this year. Since TCL's biggest market for Alcatel phones is Europe, it makes a lot of sense for the company to wait until 5G networks are deployed in the region and then launch the product commercially. Meanwhile, the company is tight-lipped about pricing of the Alcatel 7, but since the phone is aimed at a segment currently untapped by the brand, it will naturally cost more than existing Alcatel phones.

Related Reading:
Lenovo Unveils ThinkStation P520 & P920 ‘AI Workstations’: Xeon Plus Quadro RTX 6000
Lenovo this week has introduced its new single- and dual-processor “AI Workstations”, which the company is pitching at the artificial intelligence and deep learning markets. The new machines are based on the existing ThinkStation P520 and ThinkStation P920 workstations, and are powered by Intel’s Xeon processors as well as one or two of NVIDIA’s latest Quadro RTX 6000 professional graphics cards.

The Lenovo AI Workstation lineup consists of two systems whose specs are tailored for AI, DL, and similar workloads, according to the manufacturer. The systems can additionally be clustered together using Lenovo's Intelligent Computing Orchestration (LiCO) software to further improve their aggregate performance at both the desktop and datacenter levels. Both new ThinkStation machines run Ubuntu Linux, which, in a departure from the usual PC desktop situation, is actually far more common than Windows when it comes to deep learning work. Besides the usual *nix synergies, a big reason for this is that Ubuntu Linux is the only OS currently recommended by NVIDIA for its popular RAPIDS open-source software libraries, which are widely used in analytics and data science tasks.

Under the hood, the new ThinkStation P520 AI Workstation is powered by Intel's Xeon W-2155 processor (10/20 cores/threads, 3.3 – 4.5 GHz, 13.75 MB L3, 140 W) paired with NVIDIA’s Quadro RTX 6000 graphics card (4,608 cores, 576 tensor cores, 72 RT cores, 16.3 FP32 TFLOPS, 24 GB GDDR6). The system is outfitted with 128 GB of DDR4-2666 ECC memory, a 512 GB NVMe SSD, as well as two 1 TB 2.5-inch SATA SSDs.

Meanwhile, the ThinkStation P920 AI Workstation is an inherently more powerful machine. It runs two Intel Xeon Gold 6136 chips (12/24 cores/threads, 3 – 3.7 GHz, 24.75 MB L3, 150 W) accompanied by a pair of NVIDIA’s Quadro RTX 6000 accelerators.
The workstation comes equipped with 384 GB of DDR4-2666 ECC memory, a 1 TB NVMe SSD, and two 2 TB 2.5-inch SATA SSDs.
Best Android Phones: Q1 2019
Today we're sizing up the state of the smartphone market for the first quarter of the year. Q1 guides are hard to write because we're on the verge of a new generation of devices. If the holiday period was a bad time to buy a new smartphone, then Q1 might be even worse. Nevertheless, while people aiming to get the most longevity out of their smartphone purchase might be better off waiting a few more months, Q1 is also the period in which we see some great deals, as vendors try to clear inventory of their last-generation devices.
The SeaSonic Focus Gold SGX-650 SFX Power Supply Review: Seasonic Starts off SFX With a Stunner
Today we are taking a look at SeaSonic’s new SFX PSU series, the Focus Gold SGX. The new PSUs boast some impressive electrical specifications and come with a 10-year warranty, and can deliver the performance to match. For our look at the Focus Gold SGX series SeaSonic is putting their best foot forward, sending us their 650W model.