The latest and possibly final round of bidding for Toshiba's memory business concluded today with Toshiba's board selecting a consortium led by private equity firm Bain Capital as the preferred bidder, over competing bids from Western Digital and Foxconn.

Toshiba has spent most of 2017 trying to sell off their thriving flash memory business in order to offset devastating losses suffered by their Westinghouse nuclear power division. Those financial problems came to a head at the end of 2016 with Toshiba admitting that the losses could amount to several billion dollars, resulting in their stock price falling by more than 40% over a four-day span and their bond ratings being downgraded by Moody's and S&P. As the magnitude of their problems became clearer, Toshiba first planned to spin off their memory business and sell a 20% stake, then revised that to selling a majority stake, and eventually settled on selling it outright. As the second largest manufacturer of NAND flash memory, Toshiba's memory business is valued at around $18 billion.

In June, Toshiba selected as its preferred buyer a consortium including Bain Capital, Innovation Network Corporation of Japan, Development Bank of Japan and competing memory manufacturer SK Hynix. Toshiba's attempts to sell off the memory business have been complicated by their long-running joint ventures with SanDisk, owned by Western Digital since May 2016. Western Digital and SanDisk have objected to Toshiba's attempts to unilaterally spin off and sell Toshiba's share of the partnerships, with the conflict escalating to arbitration proceedings and lawsuits in both Japan and California. Western Digital has sought all along to acquire the assets Toshiba is putting up for sale, but as the bidding process grew to encompass the entirety of Toshiba Memory and the price climbed, Western Digital had to partner with other investors to continue offering a competitive bid.
The uncertainty caused by Western Digital's legal action has been a concern for other bidders, delaying Toshiba's efforts to finalize a deal.

In August, Toshiba re-opened negotiations with Western Digital and their partners, as well as with Foxconn. In the meantime, the Bain-led consortium's offer grew to include investment from Apple, Dell, Kingston and Seagate. Last week, Toshiba announced they had signed a memorandum of understanding with Bain Capital, intending to finalize a deal by the end of September, but not ending negotiations with competing bidders. As recently as September 19, leaks suggested that Toshiba was leaning back toward selling to Western Digital. At a Toshiba board meeting earlier today, Toshiba reaffirmed their commitment to complete a deal with the Bain Capital consortium. The exact sale price has not been made public, nor has the breakdown of the contributions of the individual consortium member companies.

Bloomberg is reporting that Toshiba is prepared to complete this deal even before resolving the legal disputes with Western Digital, by excluding the joint venture assets at issue from the sale and amending the purchase price to compensate. The joint ventures account for less than 5% of the Toshiba memory business, according to Bloomberg's anonymous source.

Toshiba has not yet issued an official statement on today's decision. Western Digital issued a statement expressing disappointment with Toshiba's decision and declaring their intention to continue their legal efforts.
Mechanical keyboards have been a part of PC computing since before many of us were born, and they still have a place in the market today. With laptops becoming hugely popular in that time, many of the technologies found in full-sized PCs are making their way to the laptop space, including the mechanical keys many prefer for their tactile and aural feedback. As time has gone on, technologies have improved, and so has the number of choices consumers have in the mechanical keyboard space.

In the past, the choice was either Cherry MX or membrane-based keyboards. Today, however, the market has many switches offering a wide range of characteristics from multiple sources. One of the biggest challenges is trying to fit mechanical keys into smaller devices. Enter Kailh with its PG1232 Mini Chocolate mechanical keyswitch.

The PG1232 has a lower height than Kailh's regular mechanical switches and offers a pretravel of 1.2mm (±0.5mm) with a total travel of 2.4mm (±0.5mm). Compared to the original Chocolate key switches, which have a total travel of 3mm and an actuation point at 1.5mm using 50g actuation force, this reduces the total travel by 0.6mm (~20%). Meanwhile, the PG1232 is also a bit smaller overall, measuring 14.5mm(W) x 13.5mm(D) x 8.2mm(H) versus 15mm x 15mm x 11mm for the original Chocolate key switches. This downsizing aims to make the switches more practical for use in laptop keyboards while still maintaining the tactile feel and aural feedback users want from a mechanical switch.

The White PG1232 switches are the ‘clicky’ type. Kailh has not mentioned other types for the Mini; however, the company already has a presence in this space with its low-profile PG1350 Chocolate keyswitch range, which comes in three flavors: Red (linear), Brown (tactile) and White (clicky), all with 50g operating force.
We expect to see the same options in this lineup.

Mechanical keys are currently a mixed blessing in the laptop space, as the few laptops that have them only barely qualify as portable. For example, MSI’s GT80 2QE Titan SLI and Acer’s Predator 21 X feature mechanical keys, and both are ultra-high-end gaming machines with double-digit weights to match. This makes it easy to see how lower-profile mechanical keys can help: while they don't eliminate the space needs of a mechanical keyboard, they can bring the size of such a keyboard down to something more suitable for a truly portable laptop.

No release date has been announced, but we expect to see these switches show up in keyboards in the coming months.
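As a quick check on the travel reduction quoted above, the numbers work out as follows (a trivial sketch using Kailh's published travel specs):

```python
original_travel_mm = 3.0  # Kailh PG1350 Chocolate, total travel
mini_travel_mm = 2.4      # Kailh PG1232 Mini Chocolate, total travel

reduction_mm = original_travel_mm - mini_travel_mm
reduction_pct = reduction_mm / original_travel_mm * 100
print(f"{reduction_mm:.1f} mm shorter ({reduction_pct:.0f}%)")  # → 0.6 mm shorter (20%)
```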
EK has announced a new monoblock made to fit two of ASUS's X299-based motherboards, the ROG Rampage VI Extreme and Apex. The new EK-FB ASUS ROG R6E RGB monoblock, as its name implies, is intended to cool both the CPU and VRM sections with one block. The cooler has an integrated 4-pin RGB LED strip built in and is compatible with ASUS’s Aura Sync lighting software, allowing full customization of the LEDs. With questions about the efficacy of the thermal paste and VRM cooling solutions on some boards, water cooling them both can be a good idea, particularly when overclocking.

Based on the EK-Supremacy cooling engine, the block routes liquid directly over the CPU and VRMs, helping keep temperatures lower in stock and overclocked configurations on the high-TDP Intel Core X-series CPUs. EK says the high-flow design can easily be used with weaker pumps or silent water pump settings. They go on to mention that this kind of cooling (CPU and VRM both water cooled) brings down CPU temperatures compared to a traditional CPU water block and stock VRM heatsink (less heat soak through the motherboard). The X299 monoblocks also have a newly redesigned cold plate that better mates the block to the integrated heat spreader (IHS) of the processors, enabling better thermal transfer. The base of the monoblocks is made of nickel-plated electrolytic copper, while the top comes in two options: one made from POM Acetal with an aesthetic aluminum cover, the other made of acrylic glass. Nickel-plated brass screw-in standoffs are pre-installed for easier mounting.

The monoblock comes with a 4-pin RGB LED strip which connects to the motherboard’s 4-pin LED header for control using ASUS’ Aura Sync lighting software. It can also be connected to any other 4-pin LED controller for additional flexibility when not using ASUS software.
The Nickel block's LEDs are located towards the bottom, while on the Acetal version they sit above the ports.

As for pricing, both the Nickel and Acetal + Nickel models are €119.95 ($136.99 USD). These are currently available for pre-order through the EK-Webshop and their Partner Reseller Network. Pre-orders begin shipping Monday, September 25th.

EKWB EK-FB ASUS ROG R6E Monoblock: MSRP (incl. VAT)
EK-FB ASUS ROG R6E Monoblock - Nickel: 119.95€
EK-FB ASUS ROG R6E Monoblock - Acetal+Nickel: 119.95€
Today we are taking a look at the Cherry G80-3494 MX Board Silent, a mechanical keyboard designed with professionals in mind. Based on the original Cherry G80-3000, the G80-3494 brings Cherry’s new MX Silent switches to the office desktop.
GIGABYTE has another motherboard coming out supporting AMD’s Threadripper CPUs, the X399 Designare EX. The Designare EX is slated to be their flagship motherboard for the X399 chipset and includes additional features over the AORUS Gaming 7, bringing dual Intel NICs (versus a Killer NIC), Intel WiFi, Thunderbolt 3 add-in card support, and an integrated backplate for increased structural rigidity and improved aesthetics. This board fills out GIGABYTE's short X399 lineup, with the AORUS Gaming 7 a bit further down the line. In the past we have seen Gaming 3 and Gaming 5 versions, so there is still room for something a bit less expensive in the GIGABYTE lineup in the future.
Netgear is a popular vendor in the SMB and SME market segments for switches and access points. While they do have full-blown managed switches, their smart offerings (which rely on web management, rather than a CLI) are very popular amongst IT administrators. Today, Netgear is announcing a host of new products (switches and access points) that build upon the smart offerings under the 'Insight-Managed Smart Cloud' category. The main difference compared to the existing offerings is the ability to set up, monitor, and maintain the equipment via Netgear's cloud-based Insight mobile app (available for iOS and Android). This is in addition to the traditional web-based management UI. The attractive aspect here is that the equipment can be installed and set up without having to configure everything through a PC / web browser. The downside is that the equipment must be able to connect to the Internet for the Insight management app to be able to configure and deploy the hardware.

The cloud-based management approach ensures that businesses without full-time IT staff can install and manage their network infrastructure. Multiple sites can also be monitored and managed from a single device without resorting to VPNs and other features that require skilled IT administrators. The Insight management app also supports select ReadyNAS storage devices.

Coming to the hardware side of things, we have four new switches. All of them are of the 10-port variety, with 8 copper and 2 SFP gigabit ports. The GC110 is a fanless switch with an external PSU. The GC110P is the PoE version, with 62W available across the PoE ports. The GC510P is also fanless and supports PoE+ (max. of 25W per port), with a total of 134W available; it is rack mountable. The GC510PP is the only actively cooled version of the lot, and it is also a PoE+ switch, with a 195W budget.

On the wireless side, the WAC510 wireless access point with router capabilities (which has been on the market for some time now) is Insight-capable.
In addition, we also have the new WAC505 AC1200 access point, which is very similar to the WAC510 but without the router capabilities.

It is very clear that Netgear is attempting to go after the Ubiquiti Networks UniFi lineup with the Insight-managed hardware. Netgear is playing to its strengths by including NAS management (a product category that Ubiquiti Networks does not address). The current Insight feature set seems to be a good initial version (I haven't had hands-on time with it yet). However, given the way that UniFi has evolved and added features over the last several years, Insight may have a lot of catching up to do. For example, Insight is currently a cloud-only offering. On the other hand, UniFi offers multiple usage models: a local-only configuration with the UniFi Controller running on a UniFi Cloud Key or a PC / virtual machine (which can then be associated with a cloud account, if necessary), or a premium subscription-based cloud service managed by Ubiquiti Networks themselves (UniFi Elite). Enterprising IT users can also run the UniFi Controller themselves on an AWS instance in order to manage multiple sites. That said, Netgear's Insight management is currently free, and appears to require a lot less technical know-how compared to the UniFi offerings. Netgear seems to have realized that it is best to avoid tackling UniFi directly, which probably explains why the current Insight hardware offerings aim to fill holes in the UniFi lineup.

Ubiquiti offers only one low port-count switch with SFP ports and PoE, the UniFi Switch 8-150W at $186. Netgear's GC510P targets the same feature set at $254. None of the other Insight offerings have an exact UniFi equivalent. The Netgear WAC505 at $90 will be up against Ubiquiti's UAP AC LITE ($79) in the AC1200 category. That said, the WAC505 is a Wave 2 AP with MU-MIMO capabilities (the UAP AC LITE has been on the market for a long time now and uses an older chipset).
It also features compatibility with standard 802.3af/at PoE switches, while the AC LITE uses passive PoE (Update: The AC LITE and AC LR that were introduced with passive PoE have been updated to support 802.3af PoE since September 2016). The premium associated with the WAC505 seems justified, though we can't say the same about the GC510P switch.Netgear's Insight strategy is a welcome approach to small business networks, and the integration of NAS management is unique. We are looking forward to the feature being made available in various other SMB-targeted products from Netgear.
An online retailer in the UK has started to take pre-orders on Intel’s upcoming Coffee Lake CPUs, specifically the socketed 'S' parts for desktop computers. As reported previously, the new processors will have more cores than their direct predecessors, but if the published pre-order prices are correct (and are not inflated because of their pre-order nature), then Intel’s new chips will also have higher MSRPs than the company’s existing products.

Lambda-Tek, the UK retailer, is currently taking pre-orders on six Coffee Lake CPUs, which are expected to hit the market in the coming weeks. The CPUs in question are the Core i7-8700K, the Core i7-8700, the Core i5-8600K, the Core i5-8400, the Core i3-8350K, and the Core i3-8100. Each segment will get an upgrade over the previous generation in core counts: the Core i7 parts will run in a 6C/12T configuration, the Core i5 parts will be 6C/6T, and the Core i3 parts will be 4C/4T (similar to the old Core i5). The flip side is that, if the data from the retailer is correct, each element of the stack will cost quite a bit more than its direct predecessor.

For example, the store charges nearly £354 for the Core i7-8700K, which converted to USD (and without tax) works out to around $400. This would be a substantial uptick in cost over the $340 that the Core i7-7700K retails for today. $400 may be too high for Intel's top mainstream CPU, given that Intel sells its six-core Core i7-7800X for $375. The HEDT chip requires a more expensive X299 motherboard and an appropriate DRAM kit, but might have an overall build cost similar to the $400 part.

The new quad-core Core i3 products will also get more expensive than their predecessors, with the calculated US price taken from the UK retailer coming to nearly $200 for the Core i3-8350K, up from $180.
The per-core price will drop, which is perhaps not surprising, but the alleged price hike would put the Core i3 SKUs deeper into Core i5 territory (the Core i3-7350K is already in the $190 ballpark), which will make it harder for many people to choose between the new i3 and older i5 models.

Prices of Contemporary Mainstream CPUs from Intel
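For reference on how the UK pre-order prices translate to approximate US prices: the conversion strips the 20% UK VAT (US prices are quoted pre-tax) and then applies the exchange rate. A minimal sketch; the exchange rate below is an assumed approximation of the GBP/USD rate at the time:

```python
VAT_RATE = 0.20       # UK VAT, included in UK retail prices
GBP_TO_USD = 1.35     # assumed approximate exchange rate at the time

def uk_price_to_usd(gbp_inc_vat: float) -> float:
    """Convert a VAT-inclusive UK price to a pre-tax US-dollar price."""
    return gbp_inc_vat / (1 + VAT_RATE) * GBP_TO_USD

print(round(uk_price_to_usd(354), 2))  # → 398.25, i.e. roughly $400 for the Core i7-8700K
```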
Today we are having a look at a feature-packed Mini ITX motherboard from ASRock, the Fatal1ty Z270 Gaming-ITX/ac. With specifications that suggest it could be the holy grail for home entertainment systems and a reasonable price tag, the Z270 Gaming-ITX/ac appears to be one of the most interesting Z270-based motherboards currently available. We closely examine its features and capabilities in this review.
On the back of Intel’s Technology and Manufacturing Day in March, the company presented another iteration of the information at an equivalent event in Beijing this week. Most of the content was fairly similar to the previous TMD, with a few further insights into how some of the technology is progressing. High on that list is how Intel is coming along with its own 10nm process, as well as several plans regarding the 10nm product portfolio.

The headline here was ‘we have a wafer’, as shown in the image above. Intel disclosed that this wafer was from a production run of a 10nm test chip containing ARM Cortex-A75 cores, implemented with ‘industry standard design flows’, and built to target a performance level in excess of 3 GHz. Both TSMC and Samsung are shipping their versions of ‘10nm’ processes; however, Intel reiterated the claim that its technology uses tighter transistor and metal pitches for almost double the density of other competing 10nm technologies. While chips such as the Huawei Kirin 970 on TSMC’s 10nm are in the region of 55 million transistors per mm², Intel is quoting over 100 million per mm² for its 10nm (and using a new transistor counting methodology).

Intel quoted 25% better performance and 45% lower power than 14nm, though failed to declare whether that baseline was 14nm, 14+, or 14++. Intel also stated that the optimized version of 10nm, 10++, will boost performance by 15% or reduce power by 30% relative to 10nm. Intel’s Custom Foundry business, which will start on 10nm, is offering customers two design platforms on the new technology: 10GP (general purpose) and 10HPM (high performance mobile), with validated IP portfolios to include ARM libraries and POP kits as well as turnkey services.
Intel has yet to announce a major partner for its custom foundry business, and other media outlets are reporting that some major partners that had signed up are now looking elsewhere.

Earlier this year Intel stated that its own first 10nm products would be aimed at the data center first (it has since been clarified that Intel was discussing 10nm++). At the time this was a little confusing, given Intel’s delayed cadence with typical data center products. However, since Intel acquired Altera, it seems appropriate that FPGAs would be the perfect fit here. Large-scale FPGAs, due to their regular repeating units, can take advantage of the smaller manufacturing process and still return reasonable yields by disabling individual gate arrays with defects and binning appropriately. Intel’s next generation of FPGAs will use 10nm, and they will go by the codename “Falcon Mesa”.

Falcon Mesa will encompass multiple technologies, most notably the second generation of Intel’s Embedded Multi-Die Interconnect Bridge (EMIB) packaging. This technology embeds small silicon bridges into the package substrate, providing a connection between separate pieces of active silicon that is much faster than standard packaging methods and much cheaper than using full-blown interposers. The result is a monolithic FPGA die in the middle of the package, surrounded by memory or IP blocks, perhaps built on a different process node, but all using high-bandwidth EMIB for communication. On a similar theme, Falcon Mesa will also include support for next-generation HBM.
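As a sanity check on the "almost double the density" claim quoted earlier, the ratio of the two cited figures can be computed directly (a trivial sketch using the per-mm² numbers from the article):

```python
intel_10nm_density = 100  # millions of transistors per mm², Intel's quoted figure
tsmc_10nm_density = 55    # millions of transistors per mm², Kirin 970-class on TSMC 10nm

ratio = intel_10nm_density / tsmc_10nm_density
print(f"{ratio:.2f}x")  # → 1.82x, i.e. "almost double"
```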
Philips announced two professional displays with HDR support at IFA earlier this month. The new 328P6AU and 328P6VU monitors offer QHD and 4K UHD resolutions respectively, while both feature wide color gamuts (i.e., wider than sRGB) as well as USB Type-C inputs, in accordance with recent market trends. The monitors will be aimed at the high end of the market, but are not going to be too expensive.

The first of the new P-series displays to arrive is the Philips 328P6AU (pictured). The monitor is based on an IPS-ADS panel with a 2560×1440 resolution and can hit 400 nits of brightness. Philips says that the 328P6AU can reproduce 98% of the AdobeRGB color gamut (and therefore it is safe to say that it covers 100% of sRGB), but it does not reveal anything beyond that. The firm also is not disclosing the refresh rate of the panel, but given how the monitor is being positioned, it is likely set at 60 Hz. Since the 328P6AU is a professional display, its stand can set the monitor in portrait mode and allows all the usual adjustments (height, rotation, tilt).

The second P-series LCD that Philips is working on is called the 328P6VU, and this one is substantially more advanced. The monitor is based on an IPS-AAS panel featuring a 3840×2160 resolution and can hit 600 nits of brightness, which is considerably higher than numerous contemporary displays for professionals. Furthermore, the 328P6VU is equipped with a backlight supporting 16-zone local dimming, so expect a fairly high contrast ratio. As for color gamut, Philips only mentions 95% of NTSC, but nothing else. Again, the company isn't publishing anything about the refresh rate of the LCD, but 60 Hz is a reasonable guess.

When it comes to connectivity, the 328P6AU and 328P6VU have the same functionality. Both monitors include DisplayPort, HDMI, and D-Sub (VGA) inputs (the latter seems a bit odd on a 4K model, but could be useful for PiP) as well as a USB-C port.
The latter can be used as a display input as well as an upstream port for a hub featuring USB 3.0 ports and a GbE connector. Such a hub will be very useful for those with notebooks that feature USB-C and lack other connectors (e.g. MacBooks). Finally, the monitors have two 3 W stereo speakers.

Specifications of Philips P6-Series Displays

                       328P6AU            328P6VU
Panel                  31.5" IPS-ADS      31.5" IPS-AAS
Native Resolution      2560 × 1440        3840 × 2160
Maximum Refresh Rate   60 Hz (?)          60 Hz (?)
Brightness             400 cd/m²          600 cd/m²
Local Dimming          None               16 zones
Contrast               unknown            high
Viewing Angles         178°/178° horizontal/vertical
HDR                    "Supported"
Pixel Pitch            0.2724 mm          0.1816 mm
Pixel Density          93 PPI             140 PPI
Color Gamut Support    AdobeRGB: 98%      NTSC: 95%
Stand                  Tilt, pivot (90°) and height adjustable (328P6VU unknown, but likely the same as on the 328P6AU)
Inputs                 1 × DisplayPort 1.2
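The pixel density figures quoted above follow directly from resolution and diagonal size; a minimal sketch (panel numbers as quoted by Philips):

```python
import math

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixel density: diagonal resolution in pixels divided by diagonal size in inches."""
    return math.hypot(width_px, height_px) / diagonal_in

print(round(ppi(2560, 1440, 31.5)))  # → 93  (328P6AU)
print(round(ppi(3840, 2160, 31.5)))  # → 140 (328P6VU)
```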
If you’re reading this, then congratulations! You have successfully accessed AnandTech over HTTPS.

I’m pleased to announce that as of this afternoon, all AnandTech pages and websites are now being served over HTTPS, allowing us to offer end-to-end transport encryption throughout the site. This is part of a larger project for us, which started with moving the AnandTech Forums over to the XenForo software package and HTTPS last year; now it’s the AnandTech main site's turn to receive a security treatment of its own.

This update is being rolled out both to improve the security of the site and as part of a broader trend in site hosting & delivery. From a site operations point of view, we’ve needed to improve the security of the user login system for some time so that usernames and passwords are better protected, as both of those items are obviously important. Meanwhile, although AnandTech itself is not sensitive content, the broader trend in website hosting is for all sites, regardless of content, to move to HTTPS, as end-to-end encryption still enhances user privacy, and that’s always a good thing.

With today’s update, we’re now serving all pages, images, and other local content exclusively over HTTPS. This also includes redirecting any HTTP requests to HTTPS to ensure a secure connection. Overall, the hosting change should be transparent to everyone – depending on your browser, this even eliminates any security warnings – and site performance is virtually identical to before, both on the server side for us and on the client side for you. In other words, a true upgrade in every sense of the word.

However, in the unlikely event that you do encounter any issues, please let me know. Leave a note here in the comments, email me, send a tweet, etc.
If something is amiss, we want to fix it as quickly as possible.Finally, I want to quickly thank our long-time developer John Campion, DB guru Ross Whitehead, hosting master Alec Ginsberg, and the rest of the AnandTech/Purch development team for working on this project. While today’s update is transparent at the user level, a lot of work was necessary on the backend to make this as seamless as possible and to make it work with third-party content (ads, JS libraries, etc). So none of this would be possible without their outstanding efforts.
Western Digital has begun to ship its 12 TB WD Gold HDD to partners and large retailers. The 3.5” drive relies on the same platform as the HGST Ultrastar He12 launched earlier this year, and will initially be available to select customers of the company. The WD Gold 12 TB is designed for enterprise workloads and has all the performance and reliability enhancements that we have come to expect, but its availability at retail should make it accessible to wider audiences.

From a hardware point of view, the WD Gold 12 TB is similar to the HGST Ultrastar He12 12 TB hard drive: both are based on the fourth-generation HelioSeal technology that uses eight perpendicular magnetic recording platters of 1.5 TB each. The internal architecture of both HDDs was redesigned compared to their predecessors to accommodate the eighth platter. Since the WD Gold and the Ultrastar He12 are aimed at nearline enterprise environments, they are equipped with various sensors and technologies to protect themselves against vibration and, as a result, guarantee sustained performance. For example, the WD Gold and the Ultrastar He12 attach their spindles to both the top and the bottom of the drives. In addition, the HDDs feature a special technology that increases the accuracy of head positioning in high-vibration environments to improve performance, integrity, and reliability. Finally, both product families support a TLER (time-limited error recovery) rebuild assist mode to speed up RAID recovery time.

Since the WD Gold 12 TB and the HGST Ultrastar He12 are similar internally and feature the same 7200 RPM spindle speed, they also have similar performance — the manufacturer puts them both at a 255 MB/s sustained transfer rate and 4.16 ms average latency.
The main difference between the WD Gold and the HGST Ultrastar He12 is the set of enterprise options for the latter: there are models with a SAS 12 Gb/s interface, and there are models with SED support and the Instant Secure Erase feature.

Comparison of Western Digital's WD Gold HDDs

Model                  WD121KRYZ   WD101KRYZ   WD8002FRYZ   WD6002FRYZ   WD4002FRYZ
Capacity               12 TB       10 TB       8 TB         6 TB         4 TB
RPM                    7200 RPM (all models)
Interface              SATA 6 Gbps (all models)
DRAM Cache             256 MB / 128 MB
NAND Cache             Unknown / No / Yes / Unknown
Helium-Filling         Yes / No
Transfer Rate          255 MB/s    249 MB/s    205 MB/s     226 MB/s     201 MB/s
(host to/from drive)
MTBF                   2.5 million hours
Rated Annual Workload  550 TB
Acoustics (Seek)       36 dBA
Power: Seq. read       7 W         7.1 W       7.2 W        9.3 W        9 W
Power: Seq. write      6.8 W       6.7 W       7 W          8.9 W        8.7 W
Power: Random R/W      6.9 W       6.8 W       7.4 W        9.1 W        8.8 W
Power: Idle            5 W / 5.1 W / 7.1 W / 7 W
Warranty               5 Years
MSRP (Sep 9, 2017)     $521.99     $410.99     $327.99      $244.99      $183.99
Per GB                 $0.0435     $0.0411     $0.041       $0.0408      $0.046
GB per $               22.98 GB    24.33 GB    24.39 GB     24.48 GB     21.73 GB

Western Digital aims its WD Gold and HGST Ultrastar He-series drives at operators of cloud and exascale data centers that demand maximum capacity. Replacing 10 TB drives with 12 TB drives can increase the total storage capacity of a single rack from 2400 TB to 2880 TB, which can be a major benefit for companies that need to maximize their storage capacity per watt and per square meter. Whereas the HGST-branded drives are made available primarily through B2B channels, the WD Gold is sold through both B2B and B2C channels and thus can be purchased by wider audiences. For example, boutique PC makers, as well as DIY enthusiasts, may start using the WD Gold 12 TB for their high-end builds, something they could not do with the HGST drives.
These HDDs may be considered overkill for desktops, but since WD’s desktop offerings top out at 6 TB, the WD Gold (and the perhaps inevitable future WD Red Pro 12 TB) is WD’s closest rival to Seagate’s BarraCuda Pro drives.

The WD Gold 12 TB is currently available directly from Western Digital for $521.99 as well as from multiple retailers, including Newegg for $539.99. While over $500 for a hard drive is expensive, it is actually less than Western Digital charged for its WD Gold 8 TB about 1.5 years ago ($595) and considerably less than the initial price of the WD Gold 10 TB drive last April.
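The per-gigabyte figures in the comparison table above follow directly from MSRP and decimal capacity; a minimal sketch:

```python
def price_per_gb(msrp_usd: float, capacity_tb: int) -> float:
    """Dollars per gigabyte, using decimal terabytes (1 TB = 1000 GB)."""
    return msrp_usd / (capacity_tb * 1000)

print(round(price_per_gb(521.99, 12), 4))  # → 0.0435 (WD Gold 12 TB)
print(round(price_per_gb(410.99, 10), 4))  # → 0.0411 (WD Gold 10 TB)
```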
With the release of AMD’s Threadripper CPUs into the HEDT market, board partners have released new motherboards based on the X399 chipset. Consumers are going to see quad channel memory, native 4-Way SLI and Crossfire capabilities, more full-speed M.2 slots, added 10G network ports, and more on the new platform. We're taking a quick look at each of the motherboards that the vendors are promoting in the market, as well as a few upcoming teasers.
Corsair on Thursday announced two new Vengeance LPX memory kits that set performance records for the product family. The new dual-channel memory kits are intended for Intel’s Kaby Lake-X CPUs and Intel’s X299 platform; they operate at DDR4-4500 and DDR4-4600 data rates and require over 1.4 V.

Corsair’s new fastest-ever DDR4 memory kits have a combined capacity of 16 GB and are rated for DDR4-4500 with CL19-19-19-39 timings at 1.45 V and for DDR4-4600 at CL19 26-26-46 at 1.5 V. Corsair verified stable performance of its DIMMs at transfer rates well beyond those recommended by JEDEC using an Intel Kaby Lake-X CPU and ASRock’s X299 OC Formula motherboard. The OC Formula runs only one DIMM per channel (vs. 2 DPC on most X299 motherboards) in a bid to guarantee a “cleaner” data path and stable power supply, maximizing overclocking potential for DRAM. Given the increased speeds and the required overvoltage relative to the standard, the quality of the motherboard’s DRAM VRM becomes crucial for stability with DDR4-4500 and DDR4-4600 modules. For the same reason, Corsair does not equip its ultra-fast Vengeance LPX DIMMs with RGB LEDs, because they may affect power delivery and stability.

The new Corsair Vengeance LPX DDR4-4500 and DDR4-4600 memory kits are based on Samsung’s B-die, produced using 20 nm process technology. These memory ICs have been used by makers of leading-edge DDR4 memory modules (Corsair, G.Skill, GeIL, etc.) for a couple of years, and by now they all know what to expect from these devices even in extreme conditions, such as operation with a 20% or 25% overvoltage.

The new Vengeance LPX memory modules from Corsair come with regular black aluminum heat spreaders that work well with all types of CPU coolers.
Embedded XMP 2.0 SPD profiles make it easy for end users to set up the correct timings and sub-timings.

Corsair's 'Extreme' Vengeance LPX Memory for Intel's X299 Platform

Speed       CL Timing       Voltage   Kit      Capacity   P/N
DDR4-4500   CL19 19-19-39   1.45 V    2×8 GB   16 GB      CMK16GX4M2F4500C19
DDR4-4600   CL19 23-23-43   1.5 V     2×8 GB   16 GB      CMK16GX4M2F4600C19

Corsair’s new Vengeance LPX 16 GB (2×8 GB) DDR4-4500 and DDR4-4600 kits are going to hit the market in the coming days, and they are going to be expensive. The DDR4-4500 kit will retail at $479.99, whereas the DDR4-4600 kit will retail for $549.99.
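For context on the "over 1.4 V" requirement, the overvoltage relative to the JEDEC-standard 1.2 V DDR4 supply works out as follows (a trivial sketch):

```python
JEDEC_DDR4_VOLTAGE = 1.2  # standard DDR4 supply voltage

for dimm_voltage in (1.45, 1.5):
    overvoltage_pct = (dimm_voltage / JEDEC_DDR4_VOLTAGE - 1) * 100
    print(f"{dimm_voltage} V is +{overvoltage_pct:.1f}% over JEDEC")  # roughly 20.8% and 25.0%
```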
Dell has begun to take pre-orders on its Visor headset for Windows Mixed Reality applications. The company will start shipments of the device in mid-October, just in time for Microsoft’s Windows 10 Fall Creators Update that arrives on October 17, and ahead of the holiday season.

Starting September 14, Dell’s Visor WMR headset is available for pre-order from Dell.com/Visor in the U.S. and from PCWorld in the U.K. The headset itself is priced at $349.99, the controller kit costs $99.99, and a Visor bundle with controllers is priced at $449.99. In the U.K., the whole kit is available for pre-order at £429.99. In order to play AR/VR games that are not based on the motion controllers, users will also need an Xbox One controller. Dell will start to ship the Visor on October 17, 2017. In addition, the company plans to make the device available in Best Buy stores and directly from Microsoft (online and offline).

Dell’s Visor AR/VR headset complies with Microsoft’s requirements for headsets compatible with the Windows Mixed Reality platform: it connects to Windows 10-based PCs using HDMI and USB cables, features two 1440×1440@90 Hz LCD panels (for a total resolution of 2880×1440), and has two cameras to capture the outside world. While the ergonomics and industrial designs of WMR-compliant headsets from Dell, Acer, ASUS and Lenovo differ, internally they end up being very similar.

The shipment date of the Dell Visor coincides with the launch date of Microsoft’s Windows 10 Fall Creators Update, which will bring support for Windows Mixed Reality headsets to end users. That said, it is highly likely that other makers of WMR gear will try to ship their products around the official launch of the platform. In the meantime, Dell seems to be the first with pre-orders.
HP has updated its most powerful dual-processor Z8 workstation line with the latest components. The new systems contain up to two Intel Skylake-SP Xeon CPUs with up to 56 cores in total, up to 3 TB of DDR4 RAM, terabytes of storage, as well as up to 9 PCIe slots along with optional TB3 and 10 GbE support via add-in cards. The HP Z8 workstation will be the pinnacle of HP's computers for personal and professional use, and its price in high-end configurations will surpass even the top-of-the-range gaming PCs.

Historically, most high-end workstations relied on server platforms to support more than one CPU and thus offer higher performance than any consumer desktop. The emergence of dual-core and then multi-core CPUs a little more than a decade ago changed the workstation market quickly and significantly. In a world with quad-core CPUs, 4-way workstations did not make a lot of sense for 99% of users, and therefore they quickly became extinct. Moreover, by now even 2-way workstations have become rare. Today, the vast majority of workstations use one multi-core CPU that provides enough compute horsepower for professional workloads, whereas GPU-based accelerators are used for tasks like simulations. Nonetheless, there are still users who need maximum x86 performance and therefore require 2-way workstations, and the HP Z8 is aimed precisely at such users. While the Intel Xeon Scalable processors with extreme core counts were developed primarily with servers in mind, the Z8 is a system that people put on their desks, and therefore it has a number of specific requirements regarding noise levels, features, security, compatibility with components and so on.

One of the key components of any PC is its microprocessor. 
When it comes to the HP Z8, it is based on up to two Intel Xeon Platinum 8180 processors with 28 cores and a 205 W TDP each, which means that the system has to remove 410 W of thermal energy from the CPUs alone, and this requirement had a significant impact on the design of the whole system. The company did not want to use a liquid cooling system, so it had to design an air cooling solution capable of cooling two extremely hot CPUs as well as up to 24 DDR4-2666 memory modules. Each processor has its own radiator equipped with a high-pressure air fan (whose speed is regulated by the BIOS in accordance with system temperatures monitored by numerous sensors). In addition, the system has multiple airflow vents on the front and on the top, as well as one fan that exhausts hot air at the back. According to HP, such a chassis architecture ensures that the second CPU does not re-use warm air from the first one, but since they are located in close proximity, one will always affect the other with its heat. Finally, the system has additional fans that cool other components and produce more airflow within the chassis.

Speaking of other components, the HP Z8 supports plenty of them: whatever one might want. First off, the system has four PCIe 3.0 x16 slots for graphics cards or SSDs (up to AMD Radeon Pro, NVIDIA Quadro P100 or GP100, up to a 4 TB HP Z Turbo Drive Quad Pro, etc.), three PCIe 3.0 x8 slots (two are non-hot-swap) for SSDs, and two PCIe 3.0 x4 slots. In addition to PCIe-based storage, the Z8 also features four 2.5”/3.5” bays for SATA/SAS SSDs or HDDs, as well as two external 5.25” bays that can also accommodate drive form-factor storage devices using appropriate adapters. For those who need it, HP can also install an SD card reader as well as a slim DVD or Blu-ray ODD.

When it comes to connectivity, the HP Z8 has all the bases covered. 
By default, the system supports two GbE connectors (powered by Intel controllers), an 802.11ac Wi-Fi + Bluetooth module (Intel Wireless-AC 8265 controller), two USB 3.1 Type-C ports and two USB 3.1 Type-A ports on the front, four USB 3.1 Type-A ports on the back, multi-channel audio connectors (a Realtek ALC221 HD audio codec) on the back, a TRRS audio connector on the front and so on. Meanwhile, owners can optionally order two 10 GbE controllers, a Thunderbolt 3 add-in card, and a variety of custom components for various industries and workloads (an external audio solution for a 5.25” bay, for example).

Since many businesses and enterprises require robust security for all of their machines, HP takes security seriously and ships the Z8 with a whole set of security features that it calls HP SureStart. The system features secure authentication, full volume encryption, TPM 2.0, a Kensington lock and so on.

All the CPUs, GPUs, SSDs and other components require a lot of power, and the HP Z8 has plenty of it. The manufacturer offers 1125 W, 1450 W or 1700 W internal PSUs with up to 90% efficiency. The PSU is located in a compartment behind the motherboard, so chances are that HP uses proprietary units.

General Specifications of the HP Z8 G4 (2017)
CPU Family: Intel Xeon Scalable processors
CPU Models: Xeon Platinum 8180 (2.5 GHz/3.8 GHz, 38.5 MB cache, 28 cores)
TSMC has announced plans to build its first test chips for data center applications using its 7 nm fabrication technology. The chip will use compute cores from ARM, a Cache Coherent Interconnect for Accelerators (CCIX), and IP from Cadence (a DDR4 memory controller and PCIe 3.0/4.0 links). Given the presence of the CCIX bus and PCIe 4.0 interconnects, the chip will be used to show the benefits of TSMC's 7 nm process primarily for high-performance compute (HPC) applications. The IC will be taped out in early Q1 2018.

The 7 nm test chips from TSMC will be built mainly to demonstrate the capabilities of the semiconductor manufacturing technology for performance-demanding applications and to find out more about the peculiarities of the process in general. The chip will be based on ARMv8.2 compute cores featuring DynamIQ, as well as a CMN-600 interconnect bus for heterogeneous multi-core CPUs. ARM and TSMC do not disclose which cores they are going to use for the device; the Cortex-A55 and A75 are natural suspects, but that is speculation at this point. The new chip will also have a DDR4 memory controller as well as PCI Express 3.0/4.0 links, a CCIX bus, and peripheral IP buses developed by Cadence. The CCIX bus will be used to connect the chip to Xilinx's Virtex UltraScale+ FPGAs (made using a 16 nm manufacturing technology), so in addition to implementing its cores using TSMC's 7 nm fabrication process, ARM will also be able to test Cadence's physical implementation of the CCIX bus for accelerators, which is important for future data center products.

TSMC's 7 nm Test Chip at a Glance
Compute Cores: ARMv8.2 with DynamIQ
Internal Interconnect Bus: ARM CMN-600
CCIX: Cadence
DDR4 DRAM Controller: ? (logic), Cadence (PHY)
PCI Express 3.0/4.0: Cadence
Peripheral Buses: I2C, SPI and QSPI by Cadence
Verification and Implementation Tools: Cadence

As reported multiple times, TSMC's 7 nm manufacturing process will be a "long" node and the foundry expects the majority of its large customers to use it. 
By contrast, the current 10 nm technology is aimed primarily at developers of smartphone SoCs. TSMC projects that its first-generation CLN7FF fabrication technology, compared to CLN16FF+, will enable its customers to reduce the power consumption of their chips by 60% (at the same frequency and complexity), increase their clock rates by 30% (at the same power and transistor count), or shrink their die sizes by 70% (at the same complexity). Sometime in 2019, TSMC plans to start making chips using its CLN7FF+ process technology with EUV for critical layers. TSMC claims that CLN7FF+ will enable the company's customers to further increase transistor density while improving other areas, such as yields and power consumption.

TSMC does not disclose which of its 7 nm process technologies announced so far it is going to use for the test chip, but the use of EUV for test chips is something that cannot be excluded. For example, GlobalFoundries claims that it uses EUV to accelerate production of test chips. On the other hand, since the design rules for CLN7FF and CLN7FF+ are different, it is highly likely that TSMC conservatively uses the former for the test chip.

TSMC's CLN7FF process tech passed qualification in April and was expected to enter risk production in Q2 2017, according to TSMC's management. The foundry expected 13 CLN7FF tape-outs this year, and the fabrication technology is projected to be used commercially starting from Q2 2018. Therefore, taping out the test vehicle using the first-gen DUV-only 7 nm process in Q1 2018 seems a bit late for early adopters who intend to ship their 7 nm SoCs in the second half of next year. Meanwhile, early adopters (read: Apple, Qualcomm, and some others) get access to new process technologies long before their development is completed and final PDKs (process design kits) are ready. 
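TSMC's three headline percentages can be turned into plain scaling multipliers with a bit of arithmetic. The sketch below is illustrative only: the 60/30/70 figures come from TSMC's claims above, while the derived multipliers (0.4× power, 1.3× clock, roughly 3.3× transistor density) are simple conversions, not numbers TSMC has published.

```python
# Convert TSMC's quoted CLN7FF-vs-CLN16FF+ claims into scaling multipliers.
# The percentages are from the article; the derived factors are arithmetic.
power_reduction = 0.60  # -60% power at the same frequency and complexity
clock_gain = 0.30       # +30% clock at the same power and transistor count
area_shrink = 0.70      # -70% die size at the same complexity

power_scale = 1 - power_reduction  # 0.40x power draw
freq_scale = 1 + clock_gain        # 1.30x clock rate
area_scale = 1 - area_shrink       # 0.30x die area
density_scale = 1 / area_scale     # ~3.33x transistors per unit area

print(f"power {power_scale:.2f}x, clock {freq_scale:.2f}x, "
      f"area {area_scale:.2f}x, density {density_scale:.2f}x")
```

Note these are alternatives, not a package deal: a design gets the power savings *or* the clock gain, depending on where the designer spends the node's headroom.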
Keeping in mind that the test chip features CCIX and PCIe 4.0 buses, it is clearly designed to show the advantages of TSMC's 7 nm process technologies for HPC applications. In fact, this is what TSMC itself says:
The first 3D NAND SSDs from Western Digital and its SanDisk subsidiary have arrived. The same mainstream SATA SSD with 3D TLC is being sold under two names, but either way it is a big step forward: SanDisk's 64-layer BiCS3 3D NAND enables faster performance and lower power consumption.
Late last night, PC Perspective confirmed rumors that Raja Koduri, AMD's Radeon Technologies Group (RTG) Senior Vice President and Chief Architect, is to go on sabbatical. According to Raja's internal letter to the RTG team, he will be taking leave from September 25 until an unspecified date in December, to spend time with his family. Dr. Lisa Su, AMD's CEO, will lead RTG in the interim.

As reproduced by Ryan Shrout, Raja's letter is as follows:
The hot-button item expected to come from Apple's announcement today was the set of iPhones being announced. The iPhone 8 and iPhone 8 Plus were the expected models to come to market, but Apple felt that for the 10-year anniversary of the launch of the original iPhone, it should release a new model which 'breaks the standard for another 10 years'. This new iPhone X goes all in on some significant features that are novel to the Apple smartphone ecosystem: an edge-to-edge OLED display, a TrueDepth front-facing camera system, removal of TouchID in favor of a new facial recognition system called FaceID, and a few new features surrounding the integrated neural engine inside the A11 SoC.

The iPhone X (pronounced iPhone Ten) is a visually significant departure from previous Apple smartphones. The 5.8-inch display is called an 'edge-to-edge' display in the marketing material, citing minimal bezels and taking up pretty much the full real estate of the phone. Apple also dubs this a new retina display, specifically a 'Super Retina' display, with a 2436×1125 resolution and a pixel density of 458 PPI. The display is Apple's first foray into OLED technology on a smartphone, as 'previous versions of OLED were not sufficient'. This means that Apple is promoting features such as HDR10 for high dynamic range, a 1,000,000:1 contrast ratio, and high color accuracy. That contrast ratio is due to the deep blacks provided by the OLED display, although it will be interesting to see what the practical limits are. Apple has always been consistent in having superb color accuracy on its smartphones, so we will have to see in our testing if OLED changes things in Apple's qualification process. Apple's TrueTone technology also makes its way from the iPad to the iPhone. 
This display technology uses data from the ambient light sensor to detect the ambiance of the surroundings and adjusts colors (particularly when reading black on white) to make the display easier to read. The display will also support 3D Touch.

With Apple moving to a full-screen technology like this, there is no room for the standard Home button, and with it, TouchID. As a replacement/upgrade, Apple is implementing FaceID: a set of front-facing technologies that will develop a face map of a user and embed that as the passcode. This functionality is likely derived from Apple's acquisitions of PrimeSense in 2013 (the IP behind Microsoft Kinect) and FaceShift in 2015. Apple states that the technology uses its embedded neural network engine to speed up facial recognition, and also that algorithms are in place such that the system will work if a user puts on glasses, wears a hat, has different hair, and even in low light. The algorithms will also auto-update as a user grows a beard. A lot of security researchers have questioned this move, but while Apple quotes the possibility of a false positive on TouchID at around 50,000-to-1, FaceID should be closer to one in a million. With FaceID, users will be able to unlock the device, as well as use their face to preapprove ApplePay purchases before touching a pay pad.

In order to enable FaceID, Apple implemented a small top area for the main hardware. This includes an infrared camera, a flood illuminator, the front camera, and a dot projector. The hardware will map the face in three dimensions with a 5-second startup (when in sufficient light) to produce a face mesh. One version of the mesh, with the textures as part of the algorithm, will be held in a secure enclave for identification and approval. At this point in time, only one face per device can be registered, marking an initial limitation in the hardware. 
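The gap between those two quoted rates compounds if an attacker gets multiple tries. A minimal sketch, assuming each attempt by a random person is an independent event (an illustrative model of ours, not Apple's published threat model):

```python
# Probability of at least one false accept over k independent attempts,
# using the rates quoted in the presentation: ~1/50,000 for TouchID and
# ~1/1,000,000 for FaceID. The independence assumption is ours.
def false_accept(rate: float, attempts: int) -> float:
    """Chance that at least one of `attempts` random tries is wrongly accepted."""
    return 1 - (1 - rate) ** attempts

touch_id = 1 / 50_000
face_id = 1 / 1_000_000

for k in (1, 10, 100):
    print(f"{k:>3} attempts: TouchID {false_accept(touch_id, k):.6f}, "
          f"FaceID {false_accept(face_id, k):.8f}")
```

Under this model, even at 100 attempts FaceID's false-accept chance stays roughly 20 times lower than TouchID's, mirroring the ratio of the two quoted figures.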
One of the other features of the technology shown by Apple was the ability to generate a face mesh and map new textures to it, such as new Snapchat 'masks', or animated emoji in Messages. The hardware will map 50 muscle tracking points, and a user can choose one of twelve animal emoji (fox, cat, dog, pig, unicorn, poop emoji) and record a ten-second message where the 'Animoji' will mimic in real time how the user is moving and speaking in order to send to the other person. Apple's plan here is to open the resources up to developers to use in their own applications.

Because the FaceID hardware is essentially an indent into the display, there will be some issues with content that will have to be addressed. On the home screen, Apple has designed the top icons to sit inside the two nooks on either side of the FaceID hardware, and adjust as needed. As shown by several journalists on the show floor at the launch event, video will naturally default to fit without reaching the little nooks, but if a user selects full screen, it will wrap around the FaceID hardware and intrude into the video being watched. Apple usually prides itself on the simplicity of its display support, and this might be a little scratch in that armor.

With no home button, Apple is having to implement new interactions to deal with regular home button actions. To wake the phone from a screen-off state, a user can tap on the display (or use FaceID if set up). To get to the home screen, the user can swipe up in any application, although this seems a bit fraught with issues, especially with games where swiping up is a key mechanic of the application. To get the list of applications in memory, swipe up and hold the finger down on the screen. Apple neglected to mention how to put the phone to sleep / screen-off mode; there is a button on the side, but that is specifically for Siri. 
To get the notifications menu, swipe down from the top.

Under the hood, Apple is using its new A11 Bionic processor, with significant upgrades over the A10 and A10X. Details were scarce, but this is a TSMC 10 nm design featuring six cores: two high-performance cores and four power-efficient cores, with all six cores available for use at the same time. Apple is quoting that the high-performance cores are 25% faster than the high-performance cores in the A10, while the high-efficiency cores are 70% faster than their counterparts in the A10. No speeds or details about the cores were provided, though some initial analysis online from the code base suggests that the larger cores have two levels of private cache, while the smaller cores only have one level of private cache, with a shared cache between both sets before hitting the DRAM. The A11 SoC comes in at 4.3 billion transistors, and features Apple's second-generation performance controller to assist with the 2+4 configuration. Also involved is a new GPU, which Apple states is its own custom design, coming in at 'three cores' (whatever that means in this context) and offering 30% higher performance than the graphics in the A10. Apple also stated that it can offer A10-level graphics performance at half the A10's power, and that the GPU can assist in machine learning. We've seen discussions of Apple's Metal 2 compute already appear at WWDC, so this is likely what Apple is talking about. The SoC also features a new 'Neural Engine' inside, offering two cores and 600 giga-ops per second, although there is no information as to how this inference hardware operates or at what precision (for example, Huawei's NPU gives 1.92 TFLOPS of FP16). Apple was very light on A11 details, so we'll likely revisit this topic later with more details.

For the camera system, Apple is using a vertical dual camera on the rear of the iPhone X, rather than the horizontal cameras on the iPhone 7 Plus and iPhone 8 Plus. 
Both of these cameras are new models, both are 12 megapixels, and both come with optical image stabilization. One camera is f/1.8, while the other is f/2.4, with both having larger and faster sensors with deeper pixels than previous iPhones to aid in image focus. Like the iPhone 8 and iPhone 8 Plus, the iPhone X will use the embedded Neural Engine to assist with photo taking, such as adjusting skin-tone mapping in real time depending on the environment. The camera also supports a quad-LED flash.

The full design is glass on the back and front, using a new technology that Apple is quoting as the most shatter-resistant glass on an iPhone, and the band around the device will be 'surgical-grade stainless steel' rather than aluminum. The iPhone X will be dust and water resistant, although Apple stopped short of giving it a full IPXX rating. Due to the glass, Apple is equipping the iPhone X with wireless charging capabilities using the Qi standard, and will offer a large 'AirPower' pad in 2018 that will allow users to wirelessly charge the iPhone X, the new Apple Watch Series 3, and the AirPods all at the same time. Apple did not go into the size of the battery, although it does quote it as having two hours more battery life than the iPhone 7, despite the large OLED display.

Lots of features that we've seen discussed in previous Apple launches were glossed over here: changes in the haptic feedback, anything about audio (there's no 3.5mm jack, if you were wondering), any hard performance metrics, SoC details about the cores and how/if they are different, or frequencies, or how the Neural Engine is laid out, or even how much DRAM is in the device. This is likely due to the fact that even in a two-hour presentation, time was spent detailing the new features more than the underlying hardware. 
Unlike other smartphone vendors or chip designers, Apple doesn't do a deeper 'Tech Day' on its hardware, which is a shame.

What we do know is that Apple will be offering two storage options, 64 GB and 256 GB, and two colors, Space Grey and Silver (both of which have a slight pearlescence, according to Apple). The 64 GB model will start at $999 and include EarPods in the box. The 256 GB model will have some markup, although Apple did not disclose how much at the presentation. The iPhone X will go up for pre-order on October 27 in around 30 countries, and ship on November 3.

Gallery: Apple 2017: The iPhone X (Ten) Announced

Additional: it turns out there are a lot more specifications on Apple's product page, which just went live. Key figures are screen brightness (625 nits), dimensions (143.6 × 70.9 × 7.7 mm, 174 grams), native FLAC support and HDR video playback support. The 256 GB model will start at $1149, putting a $150 mark-up on the higher capacity, and the Lightning-to-3.5mm adapters are still included in the box.

Apple iPhone: iPhone 7 | iPhone 7 Plus | iPhone 8 | iPhone 8 Plus | iPhone X
SoC: Apple A10 Fusion
Today at Apple's new Steve Jobs Theater, Apple announced its new Apple Watch, the Series 3. This is a new model above the Series 2 announced last year, with the new headline feature being LTE support through an integrated modem, which, according to trusted analysts, we believe to be an Intel modem.

With other watch makers having had LTE models, cellular connectivity had been one of the missing features of the Watch Series 2. Now Apple is making that leap, supporting both LTE and UMTS by using the display as the antenna, rather than internal antennas that might take up extra space. Rather than use a regular SIM, Apple is implementing an eSIM to save on size, which was demonstrated on AT&T during the presentation. To that end, Apple stated that the Watch Series 3 is only 0.25 mm thicker than the Watch Series 2 (at the rear crystal), with all other dimensions the same. With LTE, Apple states that users can use features such as Maps, take calls, and stream Apple Music.

At the heart of the Watch Series 3 is a new processor, moving up to a dual-core design over the Series 2. Apple gave very little information on the processor, except that it offers 70% more performance than the Series 2 but stays at the same size. There are no details on the cores inside, or the node, but even with the new LTE radio, Apple is quoting the same 18 hours of battery life with a mix of LTE, WiFi and screen-off use during that time.

Also in the hardware is a new wireless chip, called the W2. Again, Apple was light on details, except to say that it offers 85% faster WiFi combined with 50% higher efficiency. On the health side, there is a new barometric altimeter for calculating air pressure and detecting activities such as climbing stairs.

For software, Apple is going to launch watchOS 4 on September 19, which will ship on the new Watch Series 3. 
This update will bring heart rate readings directly onto the watch face, with an enhanced heart-rate detection mechanism that will provide resting heart rate data, calculated from continuous data over several days. Apple will also add notifications for users that might be experiencing abnormal heart rates when exercise is not detected. This will be in conjunction with Apple's new Heart Study, which will use Watch data to analyze arrhythmia in collaboration with Stanford Medicine and the FDA. The first phase of this Heart Study will be available to download in the US early next year.

For prices, Apple set the base Watch Series 3 at $329, but for the LTE version the price increases to $399. It looks like Apple will be discontinuing the Series 2, as it was not mentioned, but the Series 1 model will still be available at $249. Orders will begin on September 15, with availability on the 22nd.

The Apple Watch Numbers

During the presentation, Apple stated that the Apple Watch is now the #1 watch brand worldwide, up from #2 in 2016, supplanting Rolex. This is on the back of 50% year-on-year growth in Apple Watch sales, with Apple citing a 97% customer satisfaction rate. Apple did not disclose the exact number of unit sales, due to bundling the numbers in with other products, and so did not disclose whether the 50% YoY figure was on unit sales or overall revenue from accessory or app sales.

Gallery: Apple 2017: Announcing a new Apple Watch Series 3, with Intel LTE/Cellular
Logitech is introducing its first new trackball in years. The MX Ergo trackball claims improved precision compared to its predecessors, as well as eight buttons combining modern features with an older use model. The device is also one of the first products from Logitech to support the company's Flow technology, which enables seamless switching and file sharing between different systems.

Trackball History 101

The trackball was invented in 1947, decades before mice and personal computers, for the British Royal Navy's command, control, and coordination system known as the Comprehensive Display System (CDS). In fact, a rolling ball along with four disks to pick up motion was used both in early trackballs and in early mice. However, mice were chosen by Apple, Microsoft, Xerox and others for their programs and computers featuring GUIs in the late 1970s and early 1980s, possibly because of their more intuitive design. Meanwhile, rolling balls inside mice were not always optimal for precision, among other drawbacks, which is why trackballs became relatively popular in the eighties and the nineties, primarily among graphics designers. After both mice and trackballs switched to optical tracking technology in the late 1990s to early 2000s, the advantages of trackballs somewhat eroded and their adoption diminished. Nonetheless, there are loyal trackball users who continue to use them instead of other tracking devices, whether for personal efficiency, comfort, or nostalgia. Only two major companies produce trackballs nowadays: Logitech and Kensington.

The Logitech MX Ergo (For Right-Handers)

The Logitech MX Ergo looks like a huge mouse, except it has a ball that is rotated with the thumb. As the 'Ergo' name implies, the ergonomics of the trackball can be adjusted: the angle of the device can be increased from 0 to 20 degrees, and the precision of the optical tracking can likewise be adjusted, varying from 320 dpi to 440 dpi. 
The device has eight buttons, some of which can be reprogrammed. The trackball also comes with an integrated 500 mAh Li-Po battery that can last for 'days or months depending on usage model'.

The new MX Ergo trackball can use the company's Unifying wireless receiver (as well as Bluetooth) to connect to PCs. Moreover, just like Logitech's latest mice, the MX Ergo supports the company's Flow technology, which allows users to simultaneously control two computers (Macs and/or Windows PCs) and automatically switch between them by moving the cursor to the edge of the screen. In addition, Flow allows transferring files between the two systems wirelessly over Wi-Fi or Ethernet networks.
Back in May of this year, we saw our first glimpse of the X299 OC Formula, ASRock's upcoming high-end overclocking-focused motherboard. In the past couple of generations, the OC Formula line was black and yellow; for X299 it has changed to black with some gray. Though it looks like most other X299 boards on the shelves, ASRock positions it as "…ideally focused on overclocking exclusively, without any other useless features, designs or gimmicks."

One major change from previous OC Formula (OCF) boards is the reduction to one memory slot per channel. Most X299-based boards have eight slots, while the X299 OCF has four. Since the board is aimed at overclocking, and past boards have shown that one memory slot per channel has overclocking benefits, the transition was made on the X299 version. Most users aren't looking for the full 128 GB capacity anyway, as that can put a ceiling on how fast your RAM runs. ASRock states the board design aids stability and reaching higher memory speeds, with up to DDR4-4600 (OC) officially supported out of the box, which is among the highest we have seen so far for X299. ASRock's own in-house overclocker Nick Shih, who has helped design the OCF line for many years, has achieved DDR4-4800 with only air cooling on some high-end, binned memory. The X299 OC Formula will also support the soon-to-be-announced DDR4-4500 XMP profile modules.

As with past OC Formula boards, the X299 version carries over the ASRock Formula Kits (Power Kit, Connector Kit, and Cooling Kit). The 'kits' are marketing speak for features found on the board. Items such as the all-DigiPower VRM design with Dr. MOS MOSFETs, a Hi-Density power 8-pin EPS connector, 15µ gold contacts, an 8-layer 2oz copper PCB, and the heat pipe configuration for the large VRM heatsinks are all part of that feature package. ASRock says this heatsink configuration can support up to 450 W from the VRMs with proper airflow. 
The same 13-phase VRM found on the X299 Taichi and the Gaming i9 makes its way to the OC Formula as well.

Other overclocking features include an LN2 mode switch (disables CPU thermal protection, enables additional OC features) and Rapid OC buttons to manually adjust the CPU ratio, the BCLK frequency, or the CPU voltages directly on the board. There are also PCIe on/off switches to disable PCIe slots, and the retry/reset/BFG button set for quick access to different types of restarts and boots. These features are found in the upper right-hand corner of the board, above the ATX power connector. For those not wanting to manually overclock, the UEFI also includes several of Nick's preset overclocking profiles, ranging from 4 GHz all the way to extreme overclocking (liquid nitrogen, LN2) profiles. If BCLK adjustment is needed, the X299 OCF also comes with an additional external base clock generator, the Hyper BCLK Engine III. It is designed to support PCIe frequency overclocking and a wider range of BCLK frequency adjustments.

Outside of overclocking-centered features, the board supports 4-Way SLI and CrossFire in its five full-length PCIe slots, has dual PCIe 3.0 x4 M.2 slots supporting both PCIe and SATA M.2 modules, eight SATA ports, dual Intel Gigabit LAN ports, and uses the latest Realtek ALC1220 audio codec with ASRock's Purity Sound 4 software. For those looking for RGB LEDs, the X299 OCF has them under the PCH heatsink only.

ASRock X299 OC Formula
Warranty Period: 3 Years
Product Page: Link
Price: N/A
Size: ATX
CPU Interface: LGA2066
Chipset: Intel X299
Memory Slots (DDR4): Four DDR4
Toshiba has started to sell a new, 8 TB version of its X300 3.5” desktop hard drive. The new X300 8 TB hard drive relies on a specially developed platform with enterprise features that promise extended reliability, and it has two performance-optimizing technologies. What is especially noteworthy, however, is that the price of this 7200 rpm-class HDD is considerably lower than the price of competing 7200 rpm-class 8 TB PMR internal hard drives.

The Toshiba X300 family of hard drives now consists of 4 TB, 5 TB, 6 TB and 8 TB models with a 7200 RPM spindle speed and 128 MB of cache. Toshiba is not disclosing the capacity of the platters that it uses for the 8 TB HDD, saying only that they feature perpendicular magnetic recording and thus the drive has predictable performance and behavior. Apart from increased capacities compared to Toshiba's previous-gen P300-series (aka DT01ACA***) desktop hard drives, the X300 lineup boasts higher performance and new features designed to improve the reliability of the HDDs.

When it comes to the performance of the 8 TB model in particular, the drive uses platters with a higher areal density than its predecessors, as well as a 128 MB cache (up from 64 MB on P300-series drives). While Toshiba is not confirming this, based on what we know about the X300 series, the 8 TB model most likely uses six 1.33 TB PMR platters, as opposed to the 1 TB PMR platters in the other models. Consequently, the 8 TB model has a higher areal density than the other X300 drives, which means that its sequential read/write performance should also be higher. 
Furthermore, in a new feature that appears to be unique to the 8 TB model, the cache of the drive features a self-contained cache algorithm with on-board buffer management, which is said to improve the cache allocation of read and write operations to increase performance.

Toshiba is not disclosing exact performance figures for the 8 TB X300, but the company's N300 8 TB HDD launched earlier this year and is believed to be based on the same platters. Taking a look at that drive, we find a maximum sustained transfer rate of around 240 MB/s, and we expect the 8 TB X300 to be in the same ballpark.

Toshiba X300-Series HDDs
             HDWF180XZSTA   HDWE160XZSTA   HDWE150XZSTA   HDWE140XZSTA
Capacity     8 TB           6 TB           5 TB           4 TB
RPM          7200 RPM
Interface    SATA 6 Gbps
DRAM Cache   128 MB
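The six-platter estimate and its performance implication can be sanity-checked with quick arithmetic. This is a back-of-the-envelope sketch: the square-root relationship assumes linear bit density and track pitch scale by the same factor, and the 1 TB-platter baseline transfer rate is a hypothetical figure of ours, not a Toshiba spec.

```python
# Back-of-the-envelope check of the six-platter claim and what ~33% higher
# areal density implies for sequential throughput. Assumes bits-per-inch and
# tracks-per-inch improve equally (i.e. sqrt of the areal-density gain).
import math

capacity_tb = 8
platters = 6
per_platter = capacity_tb / platters   # ~1.33 TB, matching the article's guess

areal_gain = per_platter / 1.0         # vs. the 1 TB platters in other X300 models
linear_gain = math.sqrt(areal_gain)    # ~1.15x bits passing under the head per second

baseline_mbps = 208                    # hypothetical figure for a 1 TB-platter drive
estimate = baseline_mbps * linear_gain
print(f"{per_platter:.2f} TB/platter, est. {estimate:.0f} MB/s sequential")
```

With that assumed baseline, the estimate lands near the ~240 MB/s quoted for the N300, which is part of what makes the 1.33 TB-platter guess plausible.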
G.Skill on Friday introduced its fastest dual-channel memory kit designed specifically for Intel’s Kaby Lake-X processors and Intel’s X299 HEDT platform. The new Trident Z DDR4-4600 DIMMs not only boast the highest officially supported DDR4 transfer rate in the industry to date, but are also among the first to use 1.5 Volts to hit that milestone.

The new extreme Trident Z DDR4 memory modules, as G.Skill calls them, are based on Samsung’s famous 8 Gb B-die memory ICs, produced on the company's 20 nm fabrication process. G.Skill says that to build its DDR4-4600 CL19 DIMMs, it had to cherry-pick DRAM chips with the highest frequency potential and increase the voltage of the memory modules all the way to 1.5 Volts. This is a whopping 25% increase over the DDR4 standard's default voltage of 1.2 V, and in AT's collective memory we can't recall the last time we saw a memory kit ship with a voltage so far over the standard. To G.Skill's credit, they are now pushing DDR4 well above the specification's original maximum 3200 MT/s transfer rate, so the payoff is clearly there, but it's also clear that the company is pushing current DDR4 technology and Samsung's B-dies to their limits.

Such a high voltage is (obviously) not impossible to work with, but it does come with some challenges for both users and the manufacturer. The biggest is sheer power consumption – remember that power consumption increases with the square of the voltage – so a 25% voltage increase will increase the power consumption of these DIMMs by even more than that. G.Skill claims that the DIMMs do not have any overheating issues and that it has run extensive burn-in tests to ensure stability and reliability, but these are top-of-the-range enthusiast-class products that will need sufficient cooling. Meanwhile, on the manufacturing side, G.Skill not only needs to heavily bin chips to find those that can operate at these speeds, but then build a complete DIMM that can handle the frequency and the power delivery needs.
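For a rough sense of scale, the voltage-squared relationship works out as follows. This is a back-of-the-envelope sketch of the dynamic power scaling only; it deliberately ignores the additional power cost of the higher transfer rate itself:

```python
# Back-of-the-envelope estimate of the power increase from the voltage
# bump alone: dynamic power scales roughly with the square of the voltage.
V_STD, V_KIT = 1.2, 1.5   # DDR4 default vs. this DDR4-4600 kit

voltage_increase = V_KIT / V_STD - 1          # 0.25 -> the quoted 25%
power_increase = (V_KIT / V_STD) ** 2 - 1     # 0.5625 -> roughly 56%

print(f"+{voltage_increase:.0%} voltage -> about +{power_increase:.0%} power")
# ...and that is before the higher transfer rate adds its own power cost.
```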
Similarly, a solid motherboard is necessary to handle these speeds on its end, as well as the higher power delivery. G.Skill has validated stable operation of its “extreme” Trident Z dual-channel kit at 4600 MT/s on Intel’s Core i7-7740X (Kaby Lake-X) CPU and ASRock’s X299 OC Formula motherboard. The latter was designed by ASRock in cooperation with Nick Shih, a well-known overclocker, and it has only four memory slots in order to minimize interference and ensure a “clean” power supply. The modules come with XMP 2.0 SPD profiles that will simplify their setup on all Intel X299 platforms, but keep in mind that the kit is intended only for dual-channel operation and G.Skill tested it using a particular hardware config.

G.Skill's Trident Z Memory for Intel's X299 Platform
| Speed | CL Timing | Voltage | Kit Configuration | Kit Capacity |
|---|---|---|---|---|
| DDR4-3600 | CL16 16-16-36 | 1.35 V | 4×8 GB | 32 GB |
Wrapping up our IFA coverage, at last week's trade show TPV demonstrated a preproduction version of its upcoming ultra-wide (32:9 aspect ratio) 49” Philips display. The 492P8 monitor will have something in common with Samsung’s C49HG90 introduced earlier this year, but it will lack quantum dots and a number of other features. The good news is that it will cost less, at a little over $1000.

Over the past few quarters, companies like Philips, LG, Samsung, JapanNext and some others have introduced computer displays with diagonals significantly exceeding 30” – 34”, setting a new trend for ultra-large monitors. Separately, ASUS, Dell, Samsung, LG and others have launched LCDs with a 21:9 aspect ratio, setting another trend, this time for ultra-wide monitors. Different suppliers of monitors target their ultra-large LCDs at different audiences, but it is clear that these wide and/or huge displays are not niche products, but represent new market trends. Being one of the largest makers of LCD panels in the world, Samsung recognized both trends early enough and this year introduced the world’s first mass-market monitor with a 49” diagonal and an ultra-wide 32:9 aspect ratio.

Samsung gave the backlighting on its C49HG90 a quantum dot treatment to expand its color space to 95% of DCI-P3, while also equipping it with AMD’s FreeSync 2 technology and increasing its maximum refresh rate to 144 Hz in order to address the high end of the gaming market. At present, the display is indeed one of the most advanced and expensive ($1499) gaming monitors in the industry. Meanwhile, gamers are not the only category of users who can benefit from a massive ultra-wide screen. There are users of multi-monitor configurations in finance, engineering, design, audio/video production and other industries who would gladly swap two displays for one ultra-wide one, or four LCDs for two. Apparently, Philips plans to address these industries with its upcoming 492P8 monitor.
The company confirms that Samsung is the supplier of its 49" 32:9 panel, and given that this is a rather niche product (there are not a lot of people who have enough space for a 49" monitor on their desks at home or in the office), it is highly likely that Samsung will remain the only producer of such panels for a while.

As the name implies, the Philips 492P8 belongs to the brand’s P-line offerings aimed at professionals. Although Philips has demonstrated the 492P8 in action at IFA, the company is not releasing the monitor's complete specifications just yet, as some things may change between the current prototype and the final product. Nonetheless, the basic details about the display panel itself are already known: a 3840×1080 resolution, up to 600 nits brightness, up to a 5000:1 contrast ratio, 178°/178° vertical/horizontal viewing angles, 1800R curvature and so on. Unlike Samsung, Philips will not be using QLED backlighting to improve the color gamut, citing the different target audiences. For the same reason, peak brightness could be limited, and since we do not have the final specs of the 492P8 at hand, we'd rather not speculate about the specifications of the monitor itself.

Connectivity capabilities of the Philips 492P8 look rather good: the monitor has a DisplayPort, an HDMI port, a USB Type-C input and a D-Sub connector, as well as a built-in dual-port USB 3.0 hub and an Ethernet hub (the USB-C acts as an upstream port for both). The presence of the D-Sub looks a bit odd, but it could be used to connect an additional computer and display its output in picture-by-picture (PBP) or picture-in-picture (PiP) mode.
In addition, there are two 3.5-mm audio connectors for headphones and a microphone.

Philips Ultra-Wide 49" Display
| | 492P8 |
|---|---|
| Panel | 49" VA |
| Native Resolution | 3840 × 1080 |
| Maximum Refresh Rate | unknown |
| Response Time | unknown |
| Brightness | up to 600 cd/m² (?) |
| Contrast | up to 5000:1 (?) |
| Backlighting | LED |
| Viewing Angles | 178°/178° horizontal/vertical |
| Curvature | 1800R |
| Aspect Ratio | 32:9 (3.56:1) |
| Color Gamut | sRGB |
| Dynamic Refresh Rate Tech | unknown |
| Pixel Pitch | 0.312 mm |
| Pixel Density | 81.41 PPI |
| Inputs | 1 × DP |
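The pixel density and pitch figures Philips quotes follow directly from the published resolution and diagonal; a quick derivation:

```python
import math

# Derive the 492P8's pixel density and pitch from its published resolution
# (3840 x 1080) and 49-inch diagonal; the results line up with the quoted
# 81.41 PPI and 0.312 mm figures.
h_px, v_px, diagonal_in = 3840, 1080, 49.0

diagonal_px = math.hypot(h_px, v_px)     # diagonal length in pixels
ppi = diagonal_px / diagonal_in          # ~81.4 pixels per inch
pitch_mm = 25.4 / ppi                    # ~0.312 mm pixel pitch

print(f"{ppi:.2f} PPI, {pitch_mm:.3f} mm pitch")
```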
Dell has updated its rugged Latitude 12 tablet designed to operate in extreme conditions. The new Latitude 12 model 7212 is getting faster CPUs featuring the Skylake and Kaby Lake microarchitectures, a new 11.6” FHD display with an improved cover glass, a USB-C connector, a higher-capacity SSD option, and other improvements.

Dell launched its original Latitude 12 model 7202 rugged extreme tablet back in 2015. The unit was based on Intel’s Core M (Broadwell-Y) SoC and a set of mobile PC components capable of working in extreme conditions, but its main features were its reinforced chassis, security technologies and vast communication capabilities, as well as compatibility with various strengthened peripherals and special-purpose equipment. The new Latitude 12 model 7212 inherits virtually everything from its predecessor, but swaps the internal hardware, changes the display and adds a couple of other things.

The Dell Latitude 12 tablet comes in a MIL-STD-810G-certified 24-mm thick enclosure made to withstand operating drops, thermal extremes, dust, sand, humidity, blowing rain, vibration, functional shock and all other kinds of physical impact. The slate has an operating thermal range from -29°C to 63°C (-20°F to 145°F), it can work in hazardous locations, and it can withstand electromagnetic interference (MIL-STD-461F certified). In short, the Latitude 12 can work safely almost everywhere and in almost any circumstances — from a construction site, to a drilling site in the desert, to a battlefield.

Obviously, the rugged tablet is rather heavy (but not that heavy): the new Latitude 12 model 7212 weighs 1.27 kilograms with a 2-cell battery, like a full-fledged laptop. Dell says that the weight of the model 7212 is 27% lower than that of the original model 7202, but does not say how it managed to reduce the weight. Visually, the systems are similar, and the new model is compatible with all of its predecessor's accessories.
Yet, the Latitude 12 7202 and the Latitude 12 7212 are not completely identical: the new model has a new rigid handle option, it comes with new handles and straps that are easier to install, and it has a number of other advantages over the previous-gen model. Meanwhile, the optional RGB-backlit keyboard cover with kickstand for the Latitude 12 (also rugged, sealed and made for extremes) will further add weight and cost, if used.

As mentioned above, the Dell Latitude 12 model 7212 is based on Intel’s latest CPUs featuring their Skylake and Kaby Lake microarchitectures. In fact, Dell decided to use dual-core Core i-series Skylake-U and Kaby Lake-U SoCs instead of low-power Broadwell-Y to offer higher performance. Depending on the exact SKU, the Latitude 12 7212 will come with 8 GB or 16 GB of LPDDR3 memory, Class 20 or Class 40 SSDs with 128 GB, 256 GB, 512 GB or 1 TB capacity, optional encryption capabilities, as well as a 26 Wh or 34 Wh internal battery. All new systems are equipped with 11.6” FHD displays featuring gloved multi-touch, an AG/AR/AS/polarizer coating and etched Gorilla Glass.

Meanwhile, the communication capabilities of the Latitude 12 model 7212 are vast. By default, the rugged tablet has an Intel 8265 802.11ac Wi-Fi controller with Bluetooth 4.2, a Qualcomm Snapdragon X7 LTE modem, as well as NFC capability. Optionally, the slates can be equipped with a GPS card, Bluetooth 4.2 can be removed, and a different LTE modem can be installed.

Wired I/O features of the Latitude 7212 rugged extreme tablet include a USB 3.1 Type-C connector that can be used for charging and external display connectivity, a USB 3.0 Type-A connector, an optional micro RS-232 port, a universal audio jack and so on. The system is also equipped with optional rear and front cameras, a contactless smart card reader, as well as a touch fingerprint sensor. For backwards compatibility, the model 7212 also has a regular 4.5-mm power connector.
Finally, an optional docking station adds batteries, GbE, two USB 3.0 Type-A ports, an HDMI connector and a D-Sub output, as well as two more RS-232 ports.

When it comes to security, Dell seems to have everything covered too. The system features a fingerprint reader, Dell’s ControlVault advanced authentication, Intel vPro remote management, a TPM 2.0 module, optional self-encrypting drives (SEDs), a NIST SP800-147 secure platform and so on.

Specifications of the Dell Latitude 12 Rugged Extreme Tablet
| | Latitude 12 7212 |
|---|---|
| LCD Diagonal | 11.6" |
| LCD Resolution | 1920×1080 |
| LCD Features | Outdoor-readable display with gloved multi-touch, AG/AR/AS/polarizer and Gorilla Glass |
| CPU | Dual-core Intel Core i5 CPUs (Skylake-U / Kaby Lake-U) |
In a surprising move, Intel has announced plans to discontinue their 802.11ad products altogether. The company intends to cease shipments of all of its current-generation WiGig devices by late 2017. Intel has not announced any replacements for the 802.11ad parts, and says that it will focus on WiGig solutions designed for VR applications.

Intel is formally initiating the EOL program for the Wireless Gigabit 11000 and Tri Band Wireless-AC 18260 controllers, the Wireless Gigabit Antenna-M M100041 antenna and the Wireless Gigabit Sink W13100 sink today (September 8). Intel is asking its partners to place their final orders for its WiGig-supporting network cards, antenna and sink by September 29, 2017. The final shipments will be made by December 29, 2017.

Typically, Intel continues to sell its products for at least several quarters after it initiates a product discontinuance plan. Four months is a relatively short period between the start of an EOL program and its finish, which may indicate that the company has a relatively limited number of customers using the WiGig products and does not expect them to be interested in the devices in 2018 and onwards. Last year Intel already announced an EOL plan for its Tri Band Wireless-AC 17265 controllers and select W13100 dock SKUs. Coincidentally, the company will stop their shipments on December 29, 2017, as well.

The WiGig short-range communication standard enables compatible devices to communicate at up to 7–8 Gb/s data rates and with minimal latencies, using the 60 GHz spectrum at distances of up to ten meters. WiGig cannot replace Wi-Fi or Bluetooth because 60 GHz signals cannot penetrate walls, but it can enable devices like wireless docking stations, wireless AR/VR head-mounted displays, wireless storage devices, wireless displays, and others that are in direct line of sight. Intel’s current-generation WiGig products were designed primarily for notebook docking.
A number of PC makers released laptops featuring Intel’s Tri Band Wireless-AC 18260/17265 controllers and supporting docks featuring Intel’s Wireless Gigabit Sink W13100. These WiGig-enabled solutions were primarily targeted at business and enterprise customers.

However, WiGig has never seen any adoption in mass-market laptops, displays and other devices. The vast majority of advanced notebooks these days come with either USB 3.1 Gen 2 Type-C or Thunderbolt 3 ports supporting up to 10 or 40 Gb/s data transfer rates (respectively), DisplayPort 1.2 and other protocols, thus providing far better performance and functionality than WiGig, albeit at the cost of a tethered connection. Meanwhile, the WiGig ecosystem has so far failed to become truly comprehensive, which is why the technology in general has never actually competed against Thunderbolt 3 or even USB 3.1 Gen 2. Therefore, Intel is pulling the plug on the current-gen WiGig products. They have never become popular and they are not going to, which is why Intel does not see any reason to continue selling them. Meanwhile, this does not mean that the company intends to stop supporting them: the chip giant will continue offering drivers and support for its WiGig products in accordance with requirements.

What is interesting is that Intel is not disclosing whether they have plans to introduce any new WiGig products for laptops or tablets, but they say they will be continuing their 802.11ad work with a focus on VR headsets. Earlier this year HTC and Intel demonstrated a wireless HTC Vive operating using WiGig technology, but didn't reveal whether the project used off-the-shelf WiGig silicon or custom, yet-unannounced solutions.

Intel and HTC are not the only firms trying to use WiGig for VR gear. A number of companies (DisplayLink, TPCast, etc.)
have been trying to use the millimeter wave radio technology to build wireless VR headsets, and some of them even demonstrated their devices at MWC 2017 earlier this year. AMD acquired Nitero for its millimeter wave radio tech, and Facebook’s Oculus VR is working on its wireless Project Santa Cruz HMD. All in all, while WiGig has not become popular in laptops, it may well power the next generation of AR/VR headsets.

Related Reading:
This afternoon AMD has released their latest Radeon driver update, Radeon Software Crimson ReLive Edition 17.9.1, which is largely focused on bug fixes. This update continues RTG’s rapid cadence of RX Vega post-launch support, marking the third driver release since the launch of the Radeon RX Vega less than a month ago. This is also the first driver to be released since last Monday’s launch of the RX Vega 56.

Featuring Driver Version 17.30.1081 (Windows Driver Store Version 22.19.676.0), Radeon Software 17.9.1 addresses two bugs first noted in 17.8.1: RX Vega system hangs when resuming from sleep and attempting to play back video content, and mouse stuttering on certain Radeon RX series products when WattMan or third-party GPU information polling programs are running in the background. 17.9.1 also brings further fixes for random corruption in Microsoft productivity applications, which was first addressed for RX Vega cards in 17.8.2.

In addition, AMD has corrected issues where the Radeon Software installer would shrink when installing on certain 4K HDTVs, as well as Radeon Settings hanging or crashing when viewing the Display tab.

In terms of games, AMD has resolved bugs with Moonlight Blade failing to launch on some Radeon GCN series products, as well as Titanfall 2 crashing or hanging on some Radeon GCN1 series products. Lastly, issues with the ReLive Toolbar and Instant Replay in Guild Wars 2 were also fixed.

The updated drivers for AMD’s desktop, mobile, and integrated GPUs are available through the Radeon Settings tab or online at the AMD driver download page. More information on this update and further issues can be found in the Radeon Software Crimson ReLive Edition 17.9.1 release notes.
ASRock has started sales of their new smart connected home router, the X10. The new device supports not only 802.11ac Wi-Fi and Gigabit Ethernet like any modern networking router, but also ZigBee and IR to control various smart home and consumer electronics devices as a connected home central hub.

The X10 and devices like it come at an interesting inflection point for the consumer networking gear industry; nowadays, Wi-Fi is ubiquitous and basic routers are cheap, if not outright free from an ISP, pushing the overall market towards being highly commoditized. However, further out at the edge of the market and consumer adoption, there are new technologies knocking on the door, such as ZigBee and Z-Wave for smart home appliances, as well as 802.11ad for wireless docking of laptops. While hubs for these devices can already be purchased separately, standalone ZigBee and Z-Wave hubs/dongles cost about $100, which reduces the attractiveness of home automation in general. As a result, demand for routers with ZigBee and Z-Wave is growing as a means of centralizing all of these network-related functions, and ASRock wants to capitalize on this with its new X10 product, which supports ZigBee in addition to 802.11ac MU-MIMO.

The ASRock X10 AC1300 IoT router is based on an unnamed SoC from Qualcomm featuring four ARM Cortex-A7 general-purpose cores. When it comes to the wireless capabilities of the device, it works over the 2.4 GHz (400 Mbps) and 5 GHz (867 Mbps) bands, using two 5 dBi high-gain antennas to connect different devices simultaneously. As for hardware connectors, the X10 router has one GbE WAN port, four GbE LAN ports, one USB 2.0 port for storage devices and one USB 3.0 port (for add-ons).

As mentioned above, the ASRock X10 has an integrated ZigBee radio that can connect to compatible smart home appliances (sensors, lighting, heaters, security systems, etc.) using a 250 Kbit/s channel, enabling users to monitor and control them using special apps for Apple iOS and Google Android.
Since there is a fleet of consumer electronics devices that use IR for control, the developers of the X10 equipped it with IR blasters as well. Obviously, in order to control things like TVs or air conditioners using IR, the router has to be placed in direct line of sight with them, which may not always be optimal for various reasons, but that’s the price of added convenience.

Hardware capabilities are not the ASRock X10's only advantages, since the company invested a lot in its software in a bid to make it a hub for a smart home. For example, X10-compatible apps controlling the alarms of certain devices can send commands to the router based on the geolocation of the owner. In addition, the owner can control the X10 and all of their smart and CE devices remotely.

ASRock’s X10 IoT router is available for pre-order from Amazon in the U.S. at $149.99, or can be bought from Newegg for $139.99. The ASRock X10 is not the only ZigBee-, IR- and remote management-supporting router on the market, but an important thing about the X10 is that it comes from a mass-market manufacturer, which suggests that demand for routers with IoT features is growing rapidly.
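As an aside, the "AC1300" class rating in the X10's name follows the usual Wi-Fi marketing convention: it is simply the rounded sum of the maximum PHY rates on the two bands, not a real-world throughput figure. A quick illustration:

```python
# Where the "AC1300" class rating comes from: by Wi-Fi marketing convention
# it is the rounded sum of the maximum PHY rates on each band, not a
# real-world throughput figure.
RATE_2G4_MBPS = 400   # 2.4 GHz band maximum PHY rate
RATE_5G_MBPS = 867    # 5 GHz band maximum PHY rate

combined = RATE_2G4_MBPS + RATE_5G_MBPS   # 1267 Mbps
print(f"AC{round(combined, -2)}")         # AC1300
```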
This Wednesday, NVIDIA announced that they have shipped their first commercial Volta-based DGX-1 system to the MGH & BWH Center for Clinical Data Science (CCDS), a Massachusetts-based research group focusing on AI and machine learning applications in healthcare. In a sense, this serves as a generational upgrade, as CCDS was one of the first research institutions to receive a Pascal-based first-generation DGX-1 last December. In addition, NVIDIA is shipping a DGX Station to CCDS later this month.

At CCDS, these AI supercomputers will continue to be used in training deep neural networks for the purpose of evaluating medical images and scans, using Massachusetts General Hospital’s collection of phenotypic, genetic, and imaging data. In turn, this can assist doctors and medical practitioners in making faster and more accurate diagnoses and treatment plans.

First announced at GTC 2017, the DGX-1V server is powered by 8 Tesla V100s and priced at $149,000. The original iteration of the DGX-1 was priced at $129,000 with a 2P 16-core Haswell-EP configuration, but has since been updated to the same 20-core Broadwell-EP CPUs found in the DGX-1V, allowing for easy P100 to V100 drop-in upgrades. As for the DGX Station, this was also unveiled at GTC 2017, and is essentially a full-tower workstation 1P version of the DGX-1 with 4 Tesla V100s. This water-cooled DGX Station is priced at $69,000.

Selected NVIDIA DGX Systems Specifications
DGX-1
HP is not a well-known name in the retail SSD market, but as a major PC OEM it's not too surprising to see them producing their own SSD models based on third-party controller solutions. The HP S700 and S700 Pro SSDs use Micron 3D TLC NAND and Silicon Motion controllers, but have undergone tuning and significant QA from HP in an effort to give them an edge over earlier drives from other vendors that are using the same basic formula.
Western Digital has introduced its new SanDisk iXpand Base storage solution for Apple iOS-based devices. Just like the SanDisk iXpand flash drive launched several years ago, the new device can back up photos, videos and contacts from iPhones, iPads and other devices to free up some space and/or make a redundant copy. Internally, the iXpand Base uses SD cards, essentially making it a card reader for Apple’s devices.

As the name implies, the SanDisk iXpand Base is a base for an iPhone, iPad or iPod Touch that holds an SD card and has a power adapter to charge iOS devices. To back up photos, videos and contacts, users have to connect the product to their mobile device using a Lightning cable (not bundled), and a special application will activate automatically. The software transfers content (including content from apps and located in iCloud) to the card, which may then be removed and read on other devices. Moreover, the SanDisk iXpand Base itself can be connected to a computer using a Micro-USB to USB Type-A cable and used like an SD card reader.

SanDisk will offer multiple versions of the iXpand Base with pre-installed SD cards ranging from 32 GB to 256 GB. The company does not disclose which SD cards it uses or whether the iXpand Base supports aftermarket memory cards. If it does (most likely), then the device is upgradeable too, and once an owner runs out of space, they can simply swap the card for a new one.

Apple’s iPhones are often criticized for not having a memory card slot, which requires owners to clean up their photos from time to time and/or delete rarely used apps to free up some space. To a large degree, the SanDisk iXpand Base solves this problem, as it acts like an external card reader for Apple’s smartphones that automatically backs up their photos and videos when used for charging (as opposed to the iXpand drive, which has to be connected separately).
Afterwards, the content may be deleted from the phone to free up some space.

The SanDisk iXpand Base will be available shortly in the U.S. from such stores as Amazon, BestBuy.com, B&H Photo Video and other major retailers. The most affordable model with a 32 GB SD card will cost $49.99, whereas the one with a 256 GB card will carry a $199.99 price tag.

SanDisk iXpand Base at a Glance
| | 32 GB | 64 GB | 128 GB | 256 GB |
|---|---|---|---|---|
| P/N | SDIB20N-032G-AN9AN | SDIB20N-064G-AN9AN | SDIB20N-128G-AN9AE | SDIB20N-256G-AN9AE |
| Fast Charge | Yes, 5 V, 3 A (15 W) | | | |
| Materials | Rubber and plastic | | | |
| Dimensions (H×W×L) | 25.36 × 101.00 × 107.00 mm (0.99 × 3.98 × 4.21 in) | | | |
| Warranty | 2 years | | | |
| Price | $49.99 | $99.99 | $129.99 | $199.99 |

Gallery: SanDisk iXpand Base

Related Reading:
AOC has introduced a new curved display specifically aimed at the entry-level market. The new C2789FH8 monitor is one of the industry’s first curved LCDs to come in a yellow gold chassis with a gold-and-white mosaic on the back. From a performance point of view, the display seems rather basic for 2017, but among its advantages are an extended color gamut, AMD’s FreeSync adaptive refresh rate technology, and a very affordable price.

The AOC C2789FH8 is based on a VA panel with an FHD (1920×1080) resolution and 1800R curvature. The panel has a 250-nit brightness, a 3000:1 contrast ratio, a 4 ms GtG response time, as well as a 60 Hz maximum refresh rate. When it comes to color gamut, AOC says that the display covers 90% of the NTSC color space, which means that its capabilities exceed those required to display 100% of the sRGB color space. For some reason, AOC does not disclose anything about sRGB support, but it is logical to expect a mass-market monitor to support the primary color space used by Microsoft’s Windows.

Two other important features of the display are support for AMD’s FreeSync technology (the range is unknown, but it is typically between 30 and 60 Hz on basic models) and very thin bezels. AOC suggests that its thin bezels will enable owners to build dual- or triple-display configurations for gaming or productivity. In fact, AOC considers gaming one of the important selling points of the C2789FH8. Apart from FreeSync, the manufacturer equipped the monitor with two proprietary features: Game Modes, which optimize brightness, contrast and other settings for different game genres (FPS, RTS, racing), and Shadow Control, which adjusts brightness in dark scenes. Besides, the manufacturer mentioned that the monitor also features AOC’s Clear Vision video engine to upscale SD content to HD quality.

As for connectivity, the C2789FH8 has an HDMI and a VGA D-Sub input, as well as a 3.5-mm audio jack for headphones.
The D-Sub input does not support HDCP or FreeSync, so it is not going to be used by gamers. A key reason why AOC decided to include a D-Sub port on a new monitor is probably that it wanted to address the market of older PCs that are still in use, as well as very cheap new PCs, with its inexpensive curved LCD.

Over the past nine months AOC has released several displays aimed at people who value style above everything else. First, the company launched its Q2781PS monitor with a rose gold stand and Swarovski crystals in February. Then, it released the PDS-series LCDs co-developed with Porsche Design. The C2789FH8 is yet another stylish display that comes in a yellow gold chassis and inherits some pros and cons from the aforementioned models, but also introduces some new features. On the bright side, the screen itself is rather thin (7 mm) and it has very narrow bezels. To make it even more attractive to a potential buyer, it is curved and supports various features aimed at gamers (FreeSync, game modes, etc.). On the other hand, the resolution and brightness of the new SKU are lower than those of typical 27” monitors, something we have already seen with the AOC PDS-series. Moreover, the unorthodox yellow-gold die-cast metal stand of the C2789FH8 does not allow any kind of adjustment (e.g., height, tilt, etc.), a tradeoff between style and price.

Specifications of AOC's Golden Curved Display
| | C2789FH8 |
|---|---|
| Panel | 27" VA |
| Native Resolution | 1920 × 1080 |
| Maximum Refresh Rate | 60 Hz |
| Dynamic Refresh Tech | AMD FreeSync |
| Response Time | 4 ms (gray-to-gray) |
| Brightness | 250 cd/m² |
| Contrast | 3000:1 |
| Viewing Angles | 178°/178° horizontal/vertical |
| Curvature | 1800R |
| Pixel Pitch | 0.3113 × 0.3113 mm |
| PPI | 81 |
| Color Gamut | 90% NTSC |
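The claim that a 90% NTSC gamut exceeds what sRGB requires rests on a common rule of thumb: the sRGB gamut covers only about 72% of the NTSC (1953) color space by area. A rough illustration (the 0.72 figure is an approximation, not an exact colorimetric value):

```python
# Rule-of-thumb check on AOC's gamut claim: the sRGB gamut covers roughly
# 72% of the NTSC (1953) color space by area. The 0.72 figure is a common
# approximation, not an exact colorimetric value.
SRGB_AREA_VS_NTSC = 0.72
panel_gamut_ntsc = 0.90   # AOC's claimed coverage

ratio = panel_gamut_ntsc / SRGB_AREA_VS_NTSC
print(f"panel gamut area is ~{ratio:.2f}x the sRGB area")
# Note: total area alone does not guarantee 100% sRGB *coverage* --
# the gamuts must actually overlap -- but it is consistent with the claim.
```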
TCL's BlackBerry Mobile imprint introduced a revamped BlackBerry KEYone smartphone at the IFA trade show last week. The new KEYone Black Edition comes in an all-black chassis and has more DRAM and storage space than the original KEYone model introduced at MWC earlier this year. The product will be available in multiple countries, but the U.S. is currently not listed among them.

Traditionally, Research in Motion and then BlackBerry Limited developed most of their smartphones with business customers in mind, and this prompted them to use strict designs and colors. Since black fits business environments well and looks good with almost any other color, most BlackBerry handsets were black, sometimes with grey metallic inlays. Such a methodology is fully understandable, yet when Nokia released its E-series smartphones in the mid-2000s, it took a bold approach and started to offer them in multiple colors. Eventually, BlackBerry took a page from Nokia’s book and introduced its Passport silver edition for those who prefer metallic, but only after it released an all-black Passport. With the KEYone, BlackBerry Mobile took a different tack and launched the phone in a metallic-with-black finish first, which looks very high-tech but may not appeal to everyone from BlackBerry’s traditional customer base. The KEYone Black Edition makes the new BlackBerry completely black.

Gallery: BlackBerry KEYone Black Edition: Hands On at IFA

The KEYone Black Edition continues to use a frame made of anodized aluminum, but the color of the frame is now black, not metallic. BlackBerry Mobile does not disclose details about its anodization process or how durable the frame is. It is possible that BlackBerry has been experimenting with black anodized aluminum for a while, and did not release an all-black version earlier this year because it wanted to ensure the quality and robustness of its materials.
Logitech has introduced a new gaming mouse that weds high mousing precision, a long battery life, and low input lag with a relatively affordable price. The new G603 Lightspeed wireless mouse uses the company’s latest proprietary sensor with enhanced power efficiency, as well as its new interconnection technology.

The market for gaming peripherals is expanding. New suppliers enter the scene every year with rather promising products, and established players use more and more sophisticated technologies for their halo products to differentiate themselves from the others. Because of this, the complexity of gaming mice has increased rather substantially over the last 10 years, and this has evidently affected their costs and prices. Flagship gaming products from established players like Logitech and Razer have long crossed the psychological $100 barrier, and halo products now retail for $150. Meanwhile, the vast majority of gamers hardly need, and can barely afford, gear from the high end of the product stack. As a result, numerous companies focus on mainstream price points, trying to make mice that cost $60 to $80 more attractive to buyers and grab sales away from the market leaders. A good example of such an approach is Corsair’s Glaive RGB, which features a 16,000 DPI sensor, interchangeable grips, programmability and RGB lighting at a price of $70. Obviously, Logitech has to respond to products like this one, and the G603 Lightspeed seems to be a very strong contender for the sweet spot of the gaming mouse market.

The Logitech G603 Lightspeed is based on the company’s new HERO (high efficiency rated optical) sensor with 12,000 DPI sensitivity, up to 400 inches per second of tracking speed and up to 40 G of acceleration.
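To put those sensor numbers in perspective, here is a quick back-of-the-envelope calculation (our own arithmetic, not Logitech figures):

```python
# A quick feel for the HERO sensor's headline numbers: the count rate the
# sensor must sustain at its maximum tracked speed, and the physical
# distance represented by one count.
DPI = 12_000          # counts per inch
MAX_SPEED_IPS = 400   # maximum tracked speed, inches per second

counts_per_second = DPI * MAX_SPEED_IPS   # 4,800,000 counts/s at full tilt
mm_per_count = 25.4 / DPI                 # ~0.0021 mm of travel per count

print(f"{counts_per_second:,} counts/s, {mm_per_count:.4f} mm per count")
```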
According to Logitech, the HERO sensor consumes less energy than other high-end optical sensors, which is why the mouse can last for 500 hours of non-stop gaming at maximum performance on two AA batteries.

One of the key selling features of the Logitech G603 is its Lightspeed wireless interconnection technology, which promises to cut input lag by optimizing the internal architecture of keyboards/mice, reducing the polling interval of wireless receivers to 1 ms, increasing signal strength, applying a proprietary frequency-hopping mechanism that uses the strongest interference-free channel, and optimizing software. In a bid to preserve energy, Logitech’s software allows the polling interval of the wireless transmitter and receiver to be relaxed to 8 ms when working with non-gaming applications.

Since the Logitech G603 Lightspeed is a gaming mouse, all of its six buttons are programmable using the company’s LGS software. Just like some other contemporary mice and keyboards from Logitech, the G603 Lightspeed can work with two host systems, connecting to them using Bluetooth or Lightspeed.

The Logitech G603 Lightspeed mouse will be available this month directly from the company and from its partners. As noted, since the G603 Lightspeed is designed to compete for mainstream gamers, its price is not going to be too high: the MSRP is $69.99 in the U.S., though it will differ in other countries.
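As a rough sanity check on those claims, the 500-hour endurance figure implies a tiny average current draw, and the two polling intervals translate into very different report rates. A back-of-the-envelope sketch, assuming roughly 2000 mAh of usable capacity per alkaline AA (a typical figure; Logitech does not quote one):

```python
# Back-of-the-envelope check on the G603's claimed figures.
# Assumption: ~2000 mAh usable per alkaline AA; Logitech does not publish this.
aa_capacity_mah = 2000
claimed_hours = 500

# Two AAs in series supply a higher voltage but the same charge capacity,
# so the implied average current is simply capacity / runtime.
avg_current_ma = aa_capacity_mah / claimed_hours
print(f"Implied average draw: {avg_current_ma:.1f} mA")

# The polling intervals translate directly into report rates.
gaming_hz = 1000 / 1      # 1 ms interval -> 1000 reports per second
endurance_hz = 1000 / 8   # 8 ms interval -> 125 reports per second
print(f"Report rate: {gaming_hz:.0f} Hz (gaming) vs {endurance_hz:.0f} Hz (endurance)")
```

A few milliamps on average is plausible for a sensor marketed on power efficiency, which is what makes the two-AA, 500-hour combination believable.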
We have some good news for low-power AMD builders this morning: AMD has (finally) begun to sell the 35W versions of their "Bristol Ridge" desktop APUs. Overall the company has released three 35W retail Bristol Ridge SKUs, the A12-9800E, A10-9700E, and A6-9500E, with these chips fleshing out the low-power segment of AMD's AM4 platform through the end of the year.

AMD originally released its Bristol Ridge A9000-series APUs to OEMs in mid-2016, targeting desktops and laptops. The SoCs integrate one or two Excavator v2 modules (two or four x86 cores in AMD’s nomenclature), a Radeon R5/R7 iGPU featuring AMD’s GCN 1.2 (3rd generation) architecture and up to 512 stream processors, a dual-channel DDR4 memory controller, and so on. Earlier this year AMD finally decided to release a rather broad lineup of its 7th-generation A9000-series APUs on the retail market, enabling end-users to build their own inexpensive AM4 systems and essentially popularizing the AM4 ecosystem that is compatible with the company’s latest Ryzen processors.

AMD Bristol Ridge APUs and CPUs
Every once in a while, we get surprised. It seems to be a rare thing in this industry these days, but it does still happen from time to time. The Chuwi LapBook 14.1 was one such surprise when we reviewed it earlier this year. Chuwi hasn’t been around for long, but in one fell swoop they forever changed the expectations for a budget laptop. Reasonable components, coupled with a good IPS display, instantly changed what could be expected of any budget offering from the big PC makers. So far, they’ve not really responded, and the LapBook 14.1 is easily the top pick for anyone wanting a 14-inch laptop for not a lot of money.

So, imagine the shock when this still relatively unknown PC maker surprised us again. Earlier this year, they announced the LapBook 12.3, which is now available. It takes the same basic internals as the LapBook 14.1, couples them to the same display found in the Surface Pro, and packs it all into an all-aluminum chassis. The budget bar has been raised again.
In this review, we are taking a look at the ECS Z270H4-I Durathon 2, a Mini-ITX motherboard based on the Intel Z270 chipset and marketed towards gamers and overclockers. A quick look at its specifications reveals very interesting features for a motherboard that retails for $109, and we will examine them closely in the following pages.
Logitech has added another keyboard to its arsenal, and this time they’ve integrated an input dial into it as well. The CRAFT Advanced Keyboard is designed for “creators” in the same vein as the Surface Dial, and it provides similar functionality, albeit without the on-display capabilities.

Logitech is calling their dial the Crown, and it sits in the top left corner of the keyboard. The idea is much like the Surface Dial’s, in that you use your left hand to operate the Crown while your right hand is on the mouse. There’s no reason you couldn’t swap those hands around if you prefer mouse duties with your left hand, but the placement of the Crown isn’t as well suited to that without moving the keyboard.

Logitech is touting the same functionality as the Surface Dial as well, in particular in creative apps like Adobe Photoshop, where you can control context-specific functions. Ian got a chance to check out the CRAFT keyboard at IFA doing just that.

Beyond the Crown, the keyboard itself is a typical membrane keyboard, but it does offer “Smart Illumination”, which automatically lights up the keys when your hands approach the keyboard and adjusts to the ambient lighting. The keyboard can be connected to up to three devices, and has a switch to change which device has focus. It can be connected either over the Logitech Unifying receiver or with Bluetooth LE.

With macOS and Windows support, the keyboard would be a way to bring Surface Dial-like functionality to a Mac. On either OS, the capabilities of the Crown are exposed through Logitech’s software suite. On a Mac, that makes sense, but on Windows it would have been nice to see integration with the Windows Dial APIs so that the Crown could be used with any app that already supports them. That’s not the case, and it’s a miss by Logitech.
Even if the Dial doesn’t have widespread support, there are already plenty of apps that do support it, and none of those can be controlled via the Crown.

Logitech’s CRAFT keyboard is at the premium end of their lineup, and will be available in October for $199.99.

Source: Logitech
Logitech has been in the PC speaker game for some time, and they’ve just announced a new addition to their portfolio. The MX Sound speaker system is a two-channel PC speaker system that integrates multiple inputs, as well as Bluetooth 4.1, allowing the owner to bring the improved audio capabilities of external speakers to their PC, phone, and more.

There’s no dedicated subwoofer, which shrinks the footprint of this setup, but the two speakers should offer decent punch, with rear-facing port tubes to improve bass response, and 12 Watts of RMS power (24 W peak) should provide plenty of authority for the two drivers. The speaker housings are 160 mm in diameter, or just over six inches, so these are reasonably sized speakers for a desktop set. The pair weighs in at 1.72 kg / 3.8 lbs.

Logitech doesn’t provide a frequency response chart for these, but compared to any laptop there will be a big step up in audio quality thanks to the larger drivers and more powerful amplifier. That’s not all these are made to connect to, though: Logitech allows pairing with up to two Bluetooth devices, and there are two 3.5 mm input jacks as well. This versatility should be welcomed by many who use multiple devices. There’s also a headphone jack, to easily move from speakers to headphones without having to change any settings on the PC or phone.

The MX branding is due to these speakers matching well with the other MX devices Logitech sells, with similar styling cues and coloring to their mice and keyboards. The speakers have fabric covers and motion-activated backlit controls.

These new speakers will be available starting in October for $99.99.

Source: Logitech
Last year, at IFA 2016, I stumbled across the Ockel Sirius project. In its infancy, the device was seemingly straightforward: put a full PC into a smartphone-sized chassis. At the time the project was in its early stages, and in my hands was a non-functioning mockup before the idea went to crowdfunding. Normally we do not cover crowdfunding projects as a general rule, so I did not write it up at the time. But I did meet the CEO and the Product Manager, and gave a lot of feedback. I somehow bumped into them again this year while randomly walking through the halls, and they showed a working version two months from a full launch. Some of those ideas were implemented, and it looks like an interesting mash of smartphone and PC.

The Sirius A is easily as tall as, if not slightly taller than, my 6-inch smartphones, the Mate 9 and LG V30, and the requirement for PC ports means that it is also wider, particularly on one side, which has two USB 3.0 ports, an HDMI 1.4 port, a DisplayPort, Gigabit Ethernet (alongside internal WiFi) and two different ways to charge, via USB Type-C or with the bundled wall adaptor. The new model was a bit heavier than the prototype from last year, largely because this one had a battery inside: an 11 Wh / 3500 mAh battery, good for 3-4 hours of video consumption, I was told. The weight of the prototype was around 0.73 lbs, or just over 330 grams. This is 2-2.5x a smartphone, but given that I carry two smartphones anyway, it wasn’t so much of a big jump (from my perspective).

Perhaps the reason for such a battery life number comes from the chipset: Ockel is using Intel’s Cherry Trail Atom platform here, in the Atom x7-Z8750. This is a quad-core 1.60-2.60 GHz processor, with a rated TDP of 2W. It uses Intel’s Gen8 graphics, which has native H.264 decode but only hybrid HEVC and VP9 decode, which is likely to draw extra power.
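Those battery figures imply a fairly modest average system power draw. A quick sketch using the quoted 11 Wh capacity and the claimed 3-4 hour video runtime (ignoring regulator losses and battery aging, which will push the real draw a little lower):

```python
# Implied average system power from the Sirius A's quoted battery and runtime.
# Simplification: ignores voltage-regulator losses and battery aging.
battery_wh = 11.0
runtime_hours = (3.0, 4.0)   # claimed video-playback range

for hours in runtime_hours:
    avg_watts = battery_wh / hours
    print(f"{hours:.0f} h runtime -> {avg_watts:.2f} W average draw")
```

Roughly 2.75 to 3.7 W of average system power lines up with a low-power Atom SoC driving a display plus hybrid video decode, which is the power penalty the paragraph above alludes to.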
The choice of Cherry Trail comes down to time and available parts – Intel has not launched a 2W-class processor with its newer Atom cores, and Ockel has been designing the system for over a year, meaning that parts would have had to be locked down. That aside, they see the device more as a tool for professionals who need a full Windows device but do not want to carry a laptop. With Windows 10 in play, Ockel says, the separate PC and tablet modes take care of a number of pain points with Windows touchscreen interactions.

Implemented since our last discussion with them is a fingerprint sensor, for an easy unlock. Ockel is using a Goodix sensor, similar to the Huawei MateBook X and Huawei smartphones. This was a feature I had requested, for easy access to the OS after picking the device up, rather than continually entering a password. The power button in this case merely turns off the display, rather than putting the device into a sleep/hibernate state.

The hardware also supports dual display output, from both the HDMI and DisplayPort simultaneously, with the idea that a user can plug the device into desktop hardware when at a desk.

Ockel is set to offer two versions of the Sirius: the Sirius A and the Sirius A Pro. Both systems will have the same SoC, the same 1920x1080 IPS panel, and the same ports, differing in OS version (Windows 10 Home vs Windows 10 Pro), memory (4GB vs 8GB LPDDR3-1600) and storage (64GB vs 128GB eMMC). There is an additional micro-SD slot, and Ockel will be offering both versions of the device with optional 128GB micro-SD cards.

Ockel Sirius Specifications: Sirius A and Sirius A Pro
Riding on the back of the ‘not-announced, then announced’ initial set of Kirin 970 details, Huawei had one of the major keynote presentations at the IFA trade show this year, detailing more of the new SoC, diving deeper into the AI story, and also providing some salient information about the next flagship phone. Richard Yu, CEO of Huawei’s Consumer Business Group (CBG), announced that the Huawei Mate 10 and Mate 10 Pro will be launched on October 16 at an event in Munich, and will feature both the Kirin 970 SoC and a new minimal-bezel display.
“The chosen one you are, with great promise I see.” Now that Disney owns the Star Wars franchise, the expansion of the universe is seemingly never-ending. More films, more toys, and now more technology. We’re still a few years away from getting our own lightsabers, but until then Disney has partnered with Lenovo to design a Star Wars experience using smartphones and augmented reality.

Lenovo is creating the hardware: a light beacon, a tracking sensor, a lightsaber controller, and an augmented reality headset designed for smartphones. Lenovo’s approach to AR is different from how Samsung and others are approaching smartphone VR, or how Microsoft is implementing HoloLens: by inserting a pre-approved smartphone into the headset, the hardware uses a four-inch diagonal portion of the screen to project an image that rebounds off prisms and into the user’s eyes. The effect is that the user can still see ahead of them, but also the images and details on the screen – limited mostly by the pixel density of the smartphone display.

Lenovo already has the hardware up for pre-order in the US ($199) and the EU (249-299 EUR), and is running a curated list of Android and iOS smartphones. This means that the smartphone has to be on Lenovo’s pre-approved list, which I suspect means the limitation will be enforced at the Play Store level (I didn’t ask about side loading). The headset itself is designed to accept variably sized devices.

In the two-minute demo I participated in, I put on the headset, was handed a lightsaber inside a 10 ft diameter circle, and fought Kylo Ren with my blue beam of painful light. Despite attempting harakiri in the first five seconds (to no effect), it was surprising how clear the image was without any IPD adjustment. The field of view of the headset is only 60 degrees horizontal and 30 degrees vertical, which is bigger than the HoloLens and other AR headsets I have tried, but field of view still remains one of the biggest downsides to AR.
In the demo, I had to move around and wait to counter-attack: after deflecting a blow or six from Kylo, I was given a time-slow opportunity to strike back. When waiting for him to attack, if I rushed in to attack, nothing seemed to happen. In typical boss-fight fashion, three successful block/hit combinations rendered me the victor. I didn’t see a health bar, but this was a demo designed to give the user a positive experience.

One thing I did notice is that most of what I saw was not particularly elaborate graphically: 2D menus and a reasonable polygon model. Without the need to render the background, relying instead on whatever the user is in front of to do that job (Lenovo had the demo in a specific dark corner for ease of use), this is probably a walk in the park for the hardware in the headset. The lightsaber connects directly to the phone via Bluetooth, which I thought might be a little slow, but I didn’t feel any lag. The lightsaber was calibrated a bit incorrectly, but only by a few degrees. I asked about different lightsabers, such as Darth Maul’s variant, and was told that there are possibilities in the future for different hardware, although based on what I saw it was unclear if they would implement a Wii-mote type of system, with a single controller and different skins attached. The limitation at the time was that the physical lightsaber only emits a blue light for the tracking sensor; it does go red, but only when the battery is low. Think about that next time you watch Star Wars: a red saber means low batteries.

The possibilities for the AR headset could feasibly be endless. The agreement at this time is between Lenovo and Disney Interactive, so there is plenty of Disney IP that could feature in the future. Disney also likes to keep experiences on its platforms locked down, so I wonder what the possibilities are for Lenovo to work with other developers and IP down the road.
I was told by my Lenovo guide that it is all still very much a development in progress, with the hardware basically done and the software side now ramping up. The current headset goes by the name ‘Mirage’, and most smartphones should offer 3-4 hours of gameplay per charge.

Lenovo Mirage
Headset Mass: 470 g
Headset Dimensions: 209 x 83 x 155 mm
Headset Cameras: Dual motion tracking cameras
Headset Buttons: Select, Cancel, Menu
Supported Smartphones:
Lenovo this week announced its new Yoga 920 convertible laptop, which becomes more powerful thanks to Intel’s upcoming 8th generation Core i-series CPUs with up to four cores, better connected thanks to two Thunderbolt 3 ports, and yet slimmer than its predecessor. The new model inherits most of the traits of the previous-generation Lenovo Yoga 900-series notebooks and improves on them in various ways.

The new Lenovo Yoga 920 is the direct successor of the Yoga 2/3 Pro, Yoga 900 and Yoga 910 convertible laptops that Lenovo launched from 2013 to 2016. These machines are aimed at creative professionals who need high performance, a 360° watchband hinge, a touchscreen, reduced weight and long battery life. Over the years, Lenovo has changed the specs and design of its hybrid Yoga-series laptops quite significantly from generation to generation in a bid to improve the machines. This time the changes are not drastic, but they are still rather significant both inside and out.

The new Lenovo Yoga 920 will come with a 13.9” IPS display panel featuring very thin bezels and either 4K (3840×2160) or FHD (1920×1080) resolution, exactly the same panel options that are available for the Yoga 910. Meanwhile, Lenovo moved the webcam from the bottom of the display bezel to its top. Lenovo also reshaped the chassis slightly and sharpened its edges, making the Yoga 920 resemble Microsoft’s Surface Book. The changes in internal and external design versus the predecessor enabled Lenovo to slightly reduce the thickness of the PC from 14.3 mm to 13.95 mm (0.55”) and cut its weight from 1.38 kilograms to 1.37 kilograms (3.02 lbs).

Internal differences between the Yoga 920 and the Yoga 910 are no less significant than the external ones. In addition to the new Core i 8000-series CPU (presumably a U-series SoC with up to four cores and the HD Graphics 620 iGPU), the Yoga 920 also gets a new motherboard with a different layout and feature set.
The new mainboard has two Thunderbolt 3 ports (instead of the two USB 2.0/3.0 Type-C headers on the model 910) for charging, connecting displays/peripherals and other things. In addition, the new board moves the 3.5-mm TRRS Dolby Atmos-enabled audio connector to the left side of the laptop. Speaking of audio capabilities, the Yoga 920 is equipped with two speakers co-designed with JBL, as well as far-field microphones that can activate Microsoft’s Cortana from four meters (13 feet) away. As for other specifications, expect the Yoga 920 to be similar to its predecessor: up to 16 GB of RAM (expect a speed bump), a PCIe SSD (with up to 1 TB capacity), an 802.11ac Wi-Fi + Bluetooth 4.1 module, a webcam, as well as an end-to-end encrypted Synaptics fingerprint reader with Quantum Matcher, compatible with Windows Hello.

The slightly thinner and lighter chassis as well as different internal components led Lenovo to reduce the capacity of the Yoga 920’s battery from 79 Wh to 66 Wh, according to TechRadar. When it comes to battery life, LaptopMag reports that it will remain on the same level as the previous model: 10.8 hours on one charge for the UHD model and up to 15.5 hours for the FHD SKU.

Lenovo Yoga Specifications: Yoga 900, Yoga 910
GIGABYTE has outed their GeForce GTX 1080 Mini ITX 8G, the newest entrant in the high-performing small form factor graphics space. At only 169mm (6.7in) long, the company’s diminutive offering is now the second mini-ITX NVIDIA GeForce GTX 1080 card, the first being the ZOTAC GTX 1080 Mini, announced last December. While the ZOTAC card was described as “the world’s smallest GeForce GTX 1080,” the GIGABYTE GTX 1080 Mini ITX comes in ~40mm shorter, courtesy of its single-fan configuration.

Just fitting within the 17 x 17cm mini-ITX footprint, the GIGABYTE GTX 1080 Mini ITX features a semi-passive 90mm fan (turning off under certain loads/temperatures), a triple heat pipe cooling solution, and 5+2 power phases. Despite the size, the card maintains reference clocks in Gaming Mode, with OC Mode pushing the core clocks up by a modest ~2%. Powering it all is an 8-pin power connector on the top of the card.

Specifications of Selected Graphics Cards for mITX PCs
StarTech's new family of Thunderbolt 3 adapters that let one TB3 port drive two 4K 60Hz displays is now available for sale. One of the adapters supports two DisplayPort 1.2 outputs, whereas the other features two HDMI 2.0 headers. The devices are bus-powered and do not use any kind of image compression technology.

When Intel introduced its Thunderbolt 3 interface two years ago, the company noted that a single TB3 cable can drive two daisy-chained 4Kp60 displays - as TB3 can carry two complete DisplayPort 1.2 streams - greatly simplifying dual-monitor setups. The reality turned out to be more complicated. At present, there are not a lot of displays supporting Thunderbolt 3 USB Type-C input along with an appropriate output to allow daisy-chaining another monitor. Makers of monitors are reluctant to install additional chips into their products, preferring to save on BOM costs and keep designs simple, essentially concealing one of the features of the TB3 interface. Meanwhile, each TB3 controller supports two DisplayPort 1.2 streams, so to drive two 4Kp60 displays some PC makers integrate two TB3 ports into their ultra-thin laptops, whereas others go with four. The new adapters from StarTech solve the problem by getting two DisplayPort 1.2 or HDMI 2.0 headers from a single TB3 connector.

Gallery: StarTech Thunderbolt 3 to Dual DisplayPort Adapter (TB32DP2T)

Earlier this year StarTech introduced two devices: the Thunderbolt 3 to Dual DisplayPort Adapter (TB32DP2T) and the Thunderbolt 3 to Dual HDMI 2.0 Adapter (TB32HD4K60), for customers with monitors featuring DP or HDMI inputs. StarTech does not disclose much about the internal architecture of the devices, but I understand that they feature a Thunderbolt 3 controller that “receives” two DisplayPort signals from the host via TB3 and then re-routes them to either two DP outputs or two HDMI 2.0 outputs using appropriate LSPCons.
Moreover, the TB3 to Dual DisplayPort adapter can even handle a single 5K monitor by using both outputs.

The new adapters are compatible with Apple macOS and Microsoft Windows-based PCs. One thing to keep in mind, however, is that the adapters do not support DP or HDMI alt modes over USB-C; they only work with true TB3 ports.
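The bandwidth arithmetic behind the dual-display claim is straightforward. A rough sketch, assuming uncompressed 8-bit-per-channel RGB and ignoring blanking intervals and protocol overhead (real DisplayPort link rates are somewhat higher than this raw pixel figure):

```python
# Raw pixel bandwidth of a 4Kp60 stream vs Thunderbolt 3's 40 Gbps link.
# Simplification: 24 bits per pixel, no blanking or protocol overhead counted.
width, height, fps, bits_per_pixel = 3840, 2160, 60, 24

stream_gbps = width * height * fps * bits_per_pixel / 1e9
two_streams_gbps = 2 * stream_gbps
print(f"One 4Kp60 stream: {stream_gbps:.1f} Gbps")
print(f"Two streams: {two_streams_gbps:.1f} Gbps (vs a 40 Gbps TB3 link)")
```

At roughly 12 Gbps per stream, two uncompressed 4Kp60 streams fit within a 40 Gbps Thunderbolt 3 link with room to spare, which is why these adapters need no compression.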
In a show like IFA, it’s easy to get wide-eyed about a flashy new feature that is being heavily promoted but might have limited use. Normally, something like Sony’s 3D Creator app would fall under this umbrella: a tool that can create a 3D wireframe model of someone’s head and shoulders and then apply a 4K texture over the top. What is making me write about it is some of the implementation.

Normally, from a single photo without accompanying depth map data, creating a 3D model is difficult. Depth data would also only cover points directly in front of the camera; it says nothing about what is around the corner, which matters when generating a texture from the image data to fit the model. With multiple photos, by correlating identical points between images (and perhaps using internal x/y/z sensor data), distances to those points can be measured and a full depth map can be built. Taking the color data from the pixels and working out where each pixel sits in that depth map then allows the wireframe model to be textured.

For anyone who follows our desktop CPU coverage, we’ve actually been running a benchmark that does this for the last few years. Our test suite runs Agisoft PhotoScan, which takes a set of high-quality images (usually 50+ images) of people, items, buildings and landscapes, and builds a 3D textured model to be used in displays, games, and anything that wants a 3D model. Normally this benchmark is computationally expensive: Agisoft splits the work into four segments:
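The point-correlation step described above boils down to triangulation: once the same physical point has been identified in two views with known camera positions, its 3D location is where the two viewing rays intersect. Below is a minimal sketch of the classic linear (DLT) triangulation method, with toy camera matrices invented for illustration; this is the textbook technique, not Sony’s or Agisoft’s actual pipeline:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover a 3D point from two views.

    P1, P2: 3x4 camera projection matrices.
    x1, x2: matching 2D image points (pixel coordinates) in each view.
    """
    # Each observation contributes two linear constraints on the
    # homogeneous 3D point X: x * (P[2] @ X) = P[0] @ X, and similarly for y.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # X is the null vector of A: the right singular vector with the
    # smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]   # dehomogenize

# Two toy cameras: identical intrinsics, second camera shifted along x.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.2, -0.1, 4.0])          # a point in front of both cameras
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]

X_est = triangulate(P1, P2, x1, x2)
print(X_est)   # should closely match X_true
```

Photogrammetry packages repeat this for tens of thousands of matched feature points across 50+ images, with noisy matches and unknown camera poses to solve for as well, which is why the benchmark is so computationally expensive.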
Huawei has a keynote at IFA this year. With the company having quietly announced the Kirin 970 and its new Neural Processing Unit yesterday without a word through the regular press channels, we're expecting to hear about Huawei's future march into AI from Richard Yu, CEO of Huawei's Consumer Business Group (CBG).