It is with great sadness that we announce the death of Rich Brueckner. His passing is an unexpected and enormous blow to both his family and the HPC community. In his coverage of the HPC market, he was tireless and thorough. What Rich brought to the table was a deep curiosity about computing and science, and the people who made the two happen. The post Hats Over Hearts appeared first on insideHPC.
Spectra Logic launched a new Remote Installation Program to support customers during the global coronavirus outbreak. Through this program, customers will realize the benefits of their purchased Spectra solutions much sooner than would otherwise be possible when onsite assistance is not permitted. “Our new Remote Installation Program is really an extension of the outstanding onsite service and support we already provide to our customers, except that we will walk our customers through the entire installation, configuration and setup processes using remote activation, monitoring, testing and videoconferencing.” The post Maintaining Business Continuity with the Spectra Logic Remote Installation Program appeared first on insideHPC.
Inspur released five new AI servers that fully support the new NVIDIA Ampere architecture. The new servers support up to 8 or 16 NVIDIA A100 Tensor Core GPUs, delivering remarkable AI computing performance of up to 40 PetaOPS and tremendous non-blocking GPU-to-GPU peer-to-peer (P2P) bandwidth of up to 600 GB/s. “With this upgrade, Inspur offers the most comprehensive AI server portfolio in the industry, better tackling the computing challenges created by data surges and complex modeling. We expect that the upgrade will significantly boost AI technology innovation and applications." The post Inspur Launches 5 New AI Servers with NVIDIA A100 Tensor Core GPUs appeared first on insideHPC.
NVIDIA announced two powerful products for its EGX Edge AI platform — the EGX A100 for larger commercial off-the-shelf servers and the tiny EGX Jetson Xavier NX for micro-edge servers — delivering high-performance, secure AI processing at the edge. “Large industries can now offer intelligent connected products and services like the phone industry has with the smartphone. NVIDIA’s EGX Edge AI platform transforms a standard server into a mini, cloud-native, secure, AI data center. With our AI application frameworks, companies can build AI services ranging from smart retail to robotic factories to automated call centers.” The post NVIDIA EGX Platform Brings Real-Time AI to the Edge appeared first on insideHPC.
Today NVIDIA launched the NVIDIA Mellanox ConnectX-6 Lx SmartNIC — a highly secure and efficient 25/50 gigabit per second (Gb/s) Ethernet smart network interface controller (SmartNIC) — to meet surging growth in enterprise and cloud scale-out workloads. "ConnectX-6 Lx, the 11th generation product in the ConnectX family, is designed to meet the needs of modern data centers, where 25Gb/s connections are becoming standard for handling demanding workflows, such as enterprise applications, AI and real-time analytics." The post NVIDIA Mellanox ConnectX-6 Lx SmartNIC Accelerates Cloud and Enterprise Workloads appeared first on insideHPC.
Using XSEDE supercomputers, scientists have developed for the first time a way to screen drugs, based on their chemical structures, for the risk of induced arrhythmias. Sudden cardiac arrest is the leading natural cause of death in the U.S., with an estimated 325,000 deaths per year. "Stampede 2 offered a large array of powerful multi-core CPU nodes, which we were able to efficiently use for dozens of molecular dynamics runs we had to do in parallel. Such efficiency and scalability rivaled and even exceeded other resources we used for those simulations, including even GPU-equipped nodes," Vorobyov added. The post Supercomputing Drug Screening for Deadly Heart Arrhythmias appeared first on insideHPC.
AI cloud computing company Paperspace announced that Paperspace Gradient is certified under the new NVIDIA DGX-Ready Software program. The program offers proven solutions that complement NVIDIA DGX systems, including the new NVIDIA DGX A100, with certified software that supports the full lifecycle of AI model development. "We developed our NVIDIA DGX-Ready Software program to accelerate AI development in the enterprise," said John Barco, senior director of DGX software product management at NVIDIA. "Paperspace has developed a unique CI/CD approach to building machine learning models that simplifies the process and takes advantage of the power of NVIDIA DGX systems." The post Paperspace Joins NVIDIA DGX-Ready Software Program appeared first on insideHPC.
This white paper from Bright Computing, "Increasing HPC Cluster Productivity Through System Resource Tracking," addresses the necessary steps to give administrators, managers, and users the information they need to use HPC system resources effectively, to maximize system productivity, to enable effective resource sharing, to identify waste, and to provide charge-back capability. The post Increasing HPC Cluster Productivity Through System Resource Tracking appeared first on insideHPC.
Today Liqid announced that it has worked with industry leaders AMD and Dell Technologies to deliver one of the fastest one-socket storage rack servers on the market. "Liqid’s composable Gen-4 PCI-Express (PCIe) fabric technology, the LQD4500, is coupled with the AMD EPYC 7002 Series Processors, and enclosed in Dell Technologies’ industry-leading Dell EMC PowerEdge R7515 Rack Server to deliver an architecture designed for the most demanding next-generation, AI-driven HPC application environments." The post Liqid, Dell, and AMD power Industry’s Fastest Single-socket Storage Server appeared first on insideHPC.
Epidemiologists have turned to the power of supercomputers to model and predict how COVID-19 spreads at local and regional levels, in hopes of forecasting potential new hot spots and guiding policy makers' decisions in containing the disease's spread. GCS is supporting several projects focused on these goals. "Our workflows are perfectly scalable in the sense that the number of calculations we can perform is directly proportional to the number of cores available." The post GCS Centres in Germany support COVID-19 research with HPC appeared first on insideHPC.
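The regional spread forecasting described here typically builds on compartmental epidemic models. As a minimal sketch of the idea (with hypothetical parameters, not the GCS projects' actual workflows), here is a classic SIR model integrated with forward Euler:

```python
# Minimal SIR compartmental epidemic model (illustrative only).
# S: susceptible, I: infected, R: recovered; beta: transmission
# rate, gamma: recovery rate. Population size N stays constant.

def simulate_sir(s0, i0, r0, beta, gamma, days, dt=0.1):
    """Integrate dS/dt=-beta*S*I/N, dI/dt=beta*S*I/N-gamma*I, dR/dt=gamma*I."""
    n = s0 + i0 + r0
    s, i, r = float(s0), float(i0), float(r0)
    for _ in range(int(days / dt)):
        new_infections = beta * s * i / n * dt
        new_recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
    return s, i, r

# Hypothetical scenario: 100,000 people, 1,000 initially infected.
s, i, r = simulate_sir(99_000, 1_000, 0, beta=0.3, gamma=0.1, days=160)
```

Because each parameter combination is an independent run, sweeps over many scenarios scale linearly with available cores, which matches the "perfectly scalable" workflow the quote describes.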
Nick Nystrom from the Pittsburgh Supercomputing Center gave this talk at the Stanford HPC Conference. "The Artificial Intelligence and Big Data group at Pittsburgh Supercomputing Center converges Artificial Intelligence and high performance computing capabilities, empowering research to grow beyond prevailing constraints. The Bridges supercomputer is a uniquely capable resource for empowering research by bringing together HPC, AI and Big Data." The post Video: Evolving Cyberinfrastructure, Democratizing Data, and Scaling AI to Catalyze Research Breakthroughs appeared first on insideHPC.
NERSC is seeking an HPC Storage Infrastructure Engineer for its Storage Systems Group. This group is responsible for architecting, deploying, and supporting the high-performance parallel storage systems relied upon by NERSC's 7,000 scientific users to conduct basic scientific research across a wide range of disciplines. "The HPC Storage Infrastructure Engineer will work closely with approximately eight other storage systems and software engineers in this group to support and optimize hundreds of petabytes of parallel storage that is served to thousands of clients at terabytes per second." The post Job of the Week: HPC Storage Infrastructure Engineer at NERSC appeared first on insideHPC.
Today Nimbus Data announced an all-new solid state storage operating system (Nimbus Data AFX), an all-new enterprise support program (Tectonic), and a new all-flash array (ExaFlash One). "With its versatility, federated architecture, and multi-tenant management capabilities, Nimbus Data AFX is a well-conceived solid state storage platform for dense enterprise workload consolidation,” said Eric Burgener, research vice president in the Infrastructure Systems, Platforms and Technologies Group at IDC. The post Video: Nimbus Data Unveils Next-Generation Storage OS and All-Flash Array appeared first on insideHPC.
NERSC is among the early adopters of the new NVIDIA A100 Tensor Core GPU processor announced by NVIDIA this week. More than 6,000 of the A100 chips will be included in NERSC’s next-generation Perlmutter system, which is based on an HPE Cray Shasta supercomputer that will be deployed at Lawrence Berkeley National Laboratory later this year. "Nearly half of the workload running at NERSC is poised to take advantage of GPU acceleration, and NERSC, HPE, and NVIDIA have been working together over the last two years to help the scientific community prepare to leverage GPUs for a broad range of research workloads." The post Perlmutter supercomputer to include more than 6000 NVIDIA A100 processors appeared first on insideHPC.
Oracle is bringing the newly announced NVIDIA A100 Tensor Core GPU to its Oracle Gen 2 Cloud regions. "Oracle is enhancing what NVIDIA GPUs can do in the cloud,” said Vinay Kumar, vice president, product management, Oracle Cloud Infrastructure. “The combination of NVIDIA’s powerful GPU computing platform with Oracle’s bare metal compute infrastructure and low latency RDMA clustered network is extremely compelling for enterprises. Oracle Cloud Infrastructure’s high-performance file server solutions supply data to the A100 Tensor Core GPUs at unprecedented rates, enabling researchers to find cures for diseases faster and engineers to build safer cars.” The post NVIDIA A100 Tensor Core GPUs come to Oracle Cloud appeared first on insideHPC.
Today AMD demonstrated continued momentum in HPC with NVIDIA’s announcement that 2nd Generation AMD EPYC 7742 processors will power their new DGX A100 dedicated AI and Machine Learning system. AMD has had an impressive set of HPC wins in the past year, and has been chosen by the DOE to power two pending exascale-class supercomputers, Frontier and El Capitan. "2nd Gen AMD EPYC processors are the first and only current x86-architecture server processors supporting PCIe 4.0, providing up to 128 lanes of I/O per processor for high performance computing and connections to other devices like GPUs." The post AMD Wins Slot in Latest NVIDIA A100 Machine Learning System appeared first on insideHPC.
Today Atos announced its new BullSequana X2415, the first supercomputer in Europe to integrate NVIDIA’s next-generation Ampere GPU architecture via the NVIDIA A100 Tensor Core GPU. This new supercomputer blade will deliver unprecedented computing power to boost application performance for HPC and AI workloads, tackling the challenges of the exascale era. The BullSequana X2415 blade will increase computing power by more than 2X and optimize energy consumption thanks to Atos’ patented, highly efficient Direct Liquid Cooling (DLC) solution, which uses warm water to cool the machine. The post Atos Launches First Supercomputer Equipped with NVIDIA A100 GPU appeared first on insideHPC.
Today Lenovo announced a contract for a 17 petaflop supercomputer at Karlsruhe Institute of Technology (KIT) in Germany. Called HoreKa, the system will come online this fall and will be handed over to the scientific communities by summer 2021. The procurement contract is reportedly on the order of EUR 15 million. "The result is an innovative hybrid system with almost 60,000 next-generation Intel Xeon Scalable Processor cores and 220 terabytes of main memory as well as 740 NVIDIA A100 Tensor Core GPUs. A non-blocking NVIDIA Mellanox InfiniBand HDR network with 200 Gbit/s per port is used for communication between the nodes. Two Spectrum Scale parallel file systems offer a total storage capacity of more than 15 petabytes." The post Lenovo to deploy 17 Petaflop supercomputer at KIT in Germany appeared first on insideHPC.
Today Supermicro announced two new AI systems based on NVIDIA A100 GPUs. NVIDIA A100 is the first elastic, multi-instance GPU that unifies training, inference, HPC, and analytics. "Optimized for AI and machine learning, Supermicro’s new 4U system supports eight A100 Tensor Core GPUs. The 4U form factor with eight GPUs is ideal for customers that want to scale their deployment as their processing requirements expand. The new 4U system will have one NVIDIA HGX A100 8 GPU board with eight A100 GPUs all-to-all connected with NVIDIA NVSwitch for up to 600GB per second GPU-to-GPU bandwidth and eight expansion slots for GPUDirect RDMA high-speed network cards." The post Supermicro steps up with NVIDIA A100 GPU-Powered Systems appeared first on insideHPC.
In this video, NVIDIA CEO Jensen Huang announces the first GPU based on the NVIDIA Ampere architecture, the NVIDIA A100. Their fastest GPU ever is now in full production and shipping to customers worldwide. “NVIDIA A100 GPU is a 20X AI performance leap and an end-to-end machine learning accelerator – from data analytics to training to inference. For the first time, scale-up and scale-out workloads can be accelerated on one platform. NVIDIA A100 will simultaneously boost throughput and drive down the cost of data centers.” The post Video: NVIDIA Launches Ampere Data Center GPU appeared first on insideHPC.
In this special guest feature, Dr. Rosemary Francis writes that HPC is playing a massive part in the fight against COVID-19 through modeling, genomics, and drug discovery. "Thanks to the work in labs and HPC centres around the world, we now know that the molecular mechanism of the SARS-CoV-2 entry is via a lock and key effect; a spike on the outside of the virus acts as a key to unlock an ACE2 receptor protein on the human cell." The post How HPC is aiding the fight against COVID-19 appeared first on insideHPC.
Today NVIDIA unveiled the NVIDIA DGX A100 AI system, delivering 5 petaflops of AI performance and consolidating the power and capabilities of an entire data center into a single flexible platform. "DGX A100 systems integrate eight of the new NVIDIA A100 Tensor Core GPUs, providing 320GB of memory for training the largest AI datasets, and the latest high-speed NVIDIA Mellanox HDR 200Gbps interconnects." The post New NVIDIA DGX A100 Packs Record 5 Petaflops of AI Performance for Training, Inference, and Data Analytics appeared first on insideHPC.
Today Atos announced that it has sold its Atos Quantum Learning Machine (QLM), the world’s highest-performing commercially available quantum simulator, through its APAC distributor Intelligent Wave Inc. (IWI), in Japan. This is the first QLM that Atos has sold in Japan. “The Atos Quantum Learning Machine enables businesses to develop and experiment with quantum processes and delivers superior simulation capabilities to speed innovation.” The post Atos delivers Quantum Learning Machine to Japan appeared first on insideHPC.
Today MemVerge introduced Big Memory Computing. This new category is sparking a revolution in data center architecture where all applications will run in memory. Big Memory Computing is the combination of DRAM, persistent memory and Memory Machine software technologies, where the memory is abundant, persistent and highly available. "With MemVerge's Memory Machine technology and Intel's Optane DC persistent memory, enterprises will be able to more efficiently and quickly gain insights from enormous amounts of data in near-real time." The post MemVerge Introduces Big Memory Computing appeared first on insideHPC.
Today TMGcore announced it has partnered with OnPoint Warranty Solutions, a warranty administrator, to construct and administer its OTTO data center platform warranty program. "TMGcore’s key focus is on building our technology. Our buyers, leaders in high capacity data centers, expect industry leading warranty support from a cutting-edge product such as OTTO. Therefore, we’ve partnered with OnPoint to help us structure our warranty program to ensure our products are installed, maintained and serviced exceptionally,” said John-David Enright, CEO of TMGcore. The post TMGcore teams with OnPoint Warranty for OTTO Immersive Cooling Platform appeared first on insideHPC.
Today AMD announced the AMD Radeon Pro VII workstation graphics card for broadcast and engineering professionals, delivering exceptional graphics and computational performance, as well as innovative features. The new graphics card is designed to power today’s most demanding broadcast and media projects, complex computer aided engineering (CAE) simulations and the development of HPC applications that enable scientific discovery on AMD-powered supercomputers. The post AMD Rolls out Radeon Pro VII Workstation Graphics Card appeared first on insideHPC.
Today Quantum Corp. announced new advancements for its StorNext file system and data management software designed to make cloud content more accessible, with significantly improved read and write speeds for any cloud and object store based storage solution. "We are working closely with our customers to innovate and enhance the capabilities of our StorNext file system," said Ed Fiore, Vice President and General Manager, Primary Storage, Quantum. "At this time when customers are forced to work remotely, the flexibility to move content between locations, both on-premise and cloud datacenters, is critical. This latest version of StorNext software adds new ways to archive content and access it in the cloud and is another step toward providing a seamless bridge between on-premise and the cloud." The post Quantum StorNext Makes Cloud Content More Accessible, Speeds Data Retrieval appeared first on insideHPC.
In this video, Dario Gil from IBM shares results from the IBM Quantum Challenge and describes how you can access and program quantum computers on the IBM Cloud today. "Those working in the Challenge joined all those who regularly make use of the 18 quantum computing systems that IBM has on the cloud, including the 10 open systems and the advanced machines available within the IBM Q Network. During the 96 hours of the Challenge, the total use of the 18 IBM Quantum systems on the IBM Cloud exceeded 1 billion circuits a day." The post Video: The Future of Quantum Computing with IBM appeared first on insideHPC.
The OFA and Gen-Z Consortium recently entered a Memorandum of Understanding (MoU) agreement to advance the industry standardization of open-source fabric management. "Potential activities outlined in the agreement include joint development of a roadmap guiding future enhancements and development of the libfabric API as well as an abstract fabric manager built on the concepts of Distributed Management Task Force’s (DMTF) Redfish standard." The post OFA and Gen-Z Consortium to advance industry standardization of open-source fabric management appeared first on insideHPC.
"Jupyter is a free, open-source, interactive web tool known as a computational notebook, which researchers can use to combine software code, computational output, explanatory text and multimedia resources in a single document. This podcast looks at how the Bright Jupyter integration makes it easy for customers to use Bright for Data Science through JupyterLab notebooks, and allows users to run their notebooks through a supported HPC scheduler, Kubernetes, or on the server running JupyterHub." The post Podcast: Streamlined Data Science through Jupyter Lab and Jupyter Enterprise Gateway appeared first on insideHPC.
In this special guest feature, Robert Roe from Scientific Computing World writes that increasingly power-hungry and high-density processors are driving the growth of liquid and immersion cooling technology. "We know that CPUs and GPUs are going to get denser and we have developed technologies that are available today which support a 500-watt chip the size of a V100 and we are working on the development of boiling enhancements that would allow us to go beyond that." The post Novel Liquid Cooling Technologies for HPC appeared first on insideHPC.
Today ACM announced the inception of the ACM Gordon Bell Special Prize for HPC-Based COVID-19 Research. The new award will be presented in 2020 and 2021 and will recognize outstanding research achievements that use high performance computing applications to understand the COVID-19 pandemic, including the understanding of its spread. Nominations will be selected based on performance and innovation in their computational methods, in addition to their contributions toward understanding the nature, spread and/or treatment of the disease. The post New Gordon Bell Special Prize announced for HPC-Based COVID-19 Research appeared first on insideHPC.
In this time-lapse video, technicians install a 336U high-density ICEraQ immersion cooling system from GRC at Zelendata Centar in Serbia. "Only three days were required to completely prep, install, and test two ICEraQ Quads — turning four walls and a bare concrete floor into an up and ready-to-run 200 kW capable data center. Conversely, a typical comparable air-cooled data center would take closer to 12 months." The post Time-lapse Video: Installing an Immersion Cooling System at Zelendata Centar in Serbia appeared first on insideHPC.
Adaptive Computing is now making HPC access available for anyone working on COVID-19 related projects. Adaptive’s NODUS Cloud OS solution makes it simple to scale computing resources to your workloads and run compute-intensive models and simulations from remote locations. Adaptive Computing is providing temporary software licenses to researchers and scientists who are currently working […] The post Adaptive Computing frees up Cloud HPC for researchers fighting COVID-19 appeared first on insideHPC.
The Inspur InCloud OpenStack has set new records for four key indicators in the latest SPEC Cloud IaaS test to lead the world in technology performance, scalability, application instances, and provisioning time. "The test results show that InCloud OpenStack can efficiently complete the scheduling of various loads such as I/O and computing, and its performance growth shows leading linear scalability. Therefore, it is fully capable of meeting the cloud requirements of users, whether they are traditional business requirements or cloud requirements for innovative applications such as big data and artificial intelligence." The post Inspur InCloud OpenStack Sets Records on New SPEC Cloud Tests appeared first on insideHPC.
In this video, Scott Jeschonek from Microsoft describes the performance advantages of Azure HPC Cache. "Whether you are rendering a movie scene, searching for variants in a genome, or running machine learning against a data set, HPC Cache can provide very low latency high throughput access to the required file data. Even more, your data can remain on its Network Attached Storage (NAS) environment in your data center while you drive your jobs into Azure Compute." The post Azure HPC Cache: File caching for high performance computing appeared first on insideHPC.
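The core idea behind a file cache like this is simple: keep recently used data close to the compute jobs and fall back to the slower backing NAS only on a miss, evicting the least-recently-used entries when capacity runs out. A conceptual sketch (illustrative only, not Azure HPC Cache's actual implementation; the `fetch` callback stands in for the slow NAS read):

```python
from collections import OrderedDict

class LRUFileCache:
    """Toy least-recently-used cache for file blocks."""

    def __init__(self, capacity, fetch):
        self.capacity = capacity
        self.fetch = fetch            # slow path: read from backing storage
        self.blocks = OrderedDict()   # key -> data, oldest first
        self.hits = self.misses = 0

    def read(self, key):
        if key in self.blocks:
            self.blocks.move_to_end(key)  # mark as most recently used
            self.hits += 1
            return self.blocks[key]
        self.misses += 1
        data = self.fetch(key)            # cache miss: go to the NAS
        self.blocks[key] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)  # evict least recently used
        return data

cache = LRUFileCache(2, fetch=lambda k: f"data:{k}")
cache.read("a"); cache.read("b"); cache.read("a"); cache.read("c")
```

After this access pattern, "b" has been evicted (it was least recently used when "c" arrived), while "a" and "c" remain hot.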
Today the Irish Centre for High-End Computing (ICHEC) announced that it is leading a novel quantum simulation project in collaboration with partners at the Leibniz Supercomputing Centre (LRZ) to develop quantum simulation tools for Europe’s largest supercomputers. "While actual quantum computing is still some way off, the simulation tools we are creating will advance the necessary concepts and skill-sets for quantum programming," said Dr Niall Moran, Principal Investigator and project leader of the PRACE WP8 QuantEx project at ICHEC. "This work is being conducted with world-class research teams across a number of Irish third-level institutions and will contribute to preparing Ireland for Quantum programming.” The post ICHEC to develop quantum circuit simulation tools for Europe’s largest supercomputers appeared first on insideHPC.
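At its smallest scale, quantum circuit simulation means tracking a statevector of 2^n complex amplitudes and applying gates to it, which is why memory and compute demands explode with qubit count and why supercomputers (and smarter methods such as tensor-network contraction) are needed. A toy two-qubit sketch (illustrative only, not the QuantEx project's actual approach) that prepares a Bell state:

```python
import math

def apply_1q(state, gate, target):
    """Apply a 2x2 gate to qubit `target` of an n-qubit statevector."""
    out = [0j] * len(state)
    for idx, amp in enumerate(state):
        bit = (idx >> target) & 1
        base = idx & ~(1 << target)
        out[base] += gate[0][bit] * amp                   # amplitude into target=0
        out[base | (1 << target)] += gate[1][bit] * amp   # amplitude into target=1
    return out

def apply_cnot(state, control, target):
    """Flip `target` wherever `control` is 1 (a basis-state permutation)."""
    return [state[i ^ (1 << target)] if (i >> control) & 1 else state[i]
            for i in range(len(state))]

H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

state = [1 + 0j, 0j, 0j, 0j]      # two qubits in |00>
state = apply_1q(state, H, 0)     # Hadamard on qubit 0
state = apply_cnot(state, 0, 1)   # entangle: (|00> + |11>) / sqrt(2)
probs = [abs(a) ** 2 for a in state]
```

Each added qubit doubles the statevector, so a 45-qubit circuit already needs hundreds of terabytes of amplitudes, which is the scaling wall these HPC simulation tools attack.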
Researchers at Intel Labs and the Perelman School of Medicine are using a privacy-preserving technique called federated learning to train AI models that identify brain tumors. With federated learning, research institutions can collaborate on deep learning projects without sharing patient data. "AI shows great promise for the early detection of brain tumors, but it will require more data than any single medical center holds to reach its full potential," said Jason Martin, principal engineer at Intel Labs. The post Using AI to Identify Brain Tumors with Federated Learning appeared first on insideHPC.
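The essence of federated learning is that each institution trains on its own private data and only model weights travel to a central server for averaging. A minimal federated-averaging sketch with a toy one-parameter linear model (hypothetical data; not Intel's actual medical-imaging implementation):

```python
# FedAvg sketch: two "hospitals" each hold private (x, y) data for a
# model y = w * x. Raw data never leaves a client; only weights do.

def local_step(w, data, lr=0.01):
    """One epoch of gradient descent on one client's private data."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_round(w, clients, lr=0.01):
    # Each client refines a copy of the global model locally...
    local_weights = [local_step(w, data, lr) for data in clients]
    # ...and the server averages the weights (the only thing shared).
    return sum(local_weights) / len(local_weights)

clients = [
    [(1.0, 3.0), (2.0, 6.0)],     # site A: data consistent with w = 3.0
    [(3.0, 9.3), (4.0, 12.4)],    # site B: data consistent with w = 3.1
]
w = 0.0
for _ in range(200):
    w = federated_round(w, clients)
```

The global model converges to a compromise between the two sites' local optima even though neither ever saw the other's data, which is the property that makes the approach attractive for multi-hospital collaborations.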
NCI Australia and the Pawsey Supercomputing Centre are supporting the Australian and international research community undertaking COVID-19 research through provision of streamlined, prioritized and expedited access to computation and data resources. "Having access to advanced HPC resources and data expertise at Pawsey and NCI allows Australian researchers to accelerate their science to combat the pandemic and we are proud to contribute our national infrastructure and expertise in this collaborative effort.” The post Australian Supercomputers to help fight COVID-19 appeared first on insideHPC.
Peter Dueben from ECMWF gave this talk at the Stanford HPC Conference. "I will present recent studies that use deep learning to learn the equations of motion of the atmosphere, to emulate model components of weather forecast models and to enhance usability of weather forecasts. I will then talk about the main challenges for the application of deep learning in cutting-edge weather forecasts and suggest approaches to improve usability in the future." The post Video: Machine Learning for Weather Forecasts appeared first on insideHPC.
Researchers at the University of Rhode Island are using XSEDE supercomputers to show that high-performance computer modeling can accurately simulate tsunamis from volcanic events. Such models could lead to early-warning systems that could save lives and help minimize catastrophic property damage. "As our understanding of the complex physics related to tsunamis grows, access to XSEDE supercomputers such as Comet allows us to improve our models to reflect that, whereas if we did not have access, the amount of time it would take to run such simulations would be prohibitive." The post XSEDE Supercomputers Simulate Tsunamis from Volcanic Events appeared first on insideHPC.
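Tsunami models are, at heart, numerical solvers for shallow-water wave propagation; the production codes are 3-D and far more sophisticated, but the core idea can be sketched in one dimension. Below is a toy linearized shallow-water scheme (hypothetical grid and parameters, not the researchers' actual code) propagating an initial surface hump like one raised by a volcanic flank collapse:

```python
import math

def step(eta, u, depth=4000.0, g=9.81, dx=1000.0, dt=1.0):
    """One forward-Euler step of the linearized shallow-water equations
    d(eta)/dt = -H du/dx, du/dt = -g d(eta)/dx, with centered
    differences on a periodic domain."""
    n = len(eta)
    new_eta = [eta[i] - depth * dt * (u[(i + 1) % n] - u[i - 1]) / (2 * dx)
               for i in range(n)]
    new_u = [u[i] - g * dt * (eta[(i + 1) % n] - eta[i - 1]) / (2 * dx)
             for i in range(n)]
    return new_eta, new_u

# Gaussian surface hump over a 4 km deep ocean, fluid initially at rest.
n = 200
eta = [math.exp(-((i - n // 2) * 1000.0 / 20000.0) ** 2) for i in range(n)]
u = [0.0] * n
total0 = sum(eta)         # total displaced volume (should be conserved)
for _ in range(50):
    eta, u = step(eta, u)
```

The hump splits into two waves traveling at roughly sqrt(g*H) ≈ 198 m/s, the long-wave tsunami speed over 4 km of water; real models refine this with bathymetry, nonlinearity, and the volcanic source itself, which is where the supercomputing cost comes from.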
Lockheed Martin is seeking an R&D Operations and Maintenance Lead in our Job of the Week. "This position is the CSCF Program’s Operations and Maintenance Lead. This position is responsible for managing a small team of geographically diverse System Administrators in a Research and Development (R&D), Multi User High Performance Computer (HPC), Multi Level Secure (MLS) Data Center on a 5x12 schedule." The post Job of the Week: R&D Operations and Maintenance Lead at Lockheed Martin appeared first on insideHPC.
The OpenFabrics Alliance (OFA) has opened registration for its OFA Virtual Workshop, taking place June 8-12, 2020. This virtual event will provide fabric developers and users an opportunity to discuss emerging fabric technologies, collaborate on future industry requirements, and address today’s challenges. "The OpenFabrics Alliance is committed to accelerating the development of high performance fabrics." The post Agenda Posted for OpenFabrics Virtual Workshop appeared first on insideHPC.
A team from Inspur ranked in the Top 3 at the recent Auto Deep Learning Finals. "Inspur’s leading core technology used in this competition has been applied to Inspur AutoML Suite, an automatic machine learning AI algorithm platform product. AutoML Suite realizes a one-stop automatic generation model based on GPU cluster visualization operations. It has three major automation engines: modeling AutoNAS, hyper-parameter adjustment AutoTune, and model compression AutoPrune, to provide powerful support for computing power." The post Inspur Takes 3rd Place in Auto Deep Learning Finals appeared first on insideHPC.
The Exascale Computing Project has selected Berkeley Lab’s Katie Antypas as its new Director for the project’s Hardware & Integration Focus Area. “Katie has more than 14 years of experience at Berkeley Lab and is a widely recognized speaker and presenter throughout the HPC community. We are thrilled to have her take on such a critical function of leading this group and ensuring the project’s success in interfacing with the DOE HPC facilities.” The post Katie Antypas Named Director of Hardware & Integration at Exascale Computing Project appeared first on insideHPC.
"ColdQuanta is headed by an old pal of ours, Bo Ewald, and has just come out of stealth mode into the glaring spotlight of RadioFreeHPC. When you freeze a gas of bosons at low density to near absolute zero, you start to get macroscopic access to microscopic quantum mechanical effects, which is a pretty big deal. Once the quantum mechanics starts, you can control it, change it, and get computations out of it. The secret sauce for ColdQuanta is served cold, all the way down into the micro-kelvins and kept very locally, which makes it easier to get your condensate." The post Podcast: ColdQuanta Serves Up Some Bose-Einstein Condensate appeared first on insideHPC.
In this video from the Stanford HPC Conference, Dan Stanzione from the Texas Advanced Computing Center describes how their powerful supercomputers are helping to fight the coronavirus pandemic. "In times of global need like this, it's important not only that we bring all of our resources to bear, but that we do so in the most innovative ways possible," said TACC Executive Director Dan Stanzione. "We've pivoted many of our resources towards crucial research in the fight against COVID-19, but supporting the new AI methodologies in this project gives us the chance to use those resources even more effectively." The post Interview: Fighting the Coronavirus with TACC Supercomputers appeared first on insideHPC.
Researchers have developed hardware that can learn skills using a type of AI that currently runs on software platforms. Sharing intelligence features between hardware and software would offset the energy needed for using AI in more advanced applications such as self-driving cars or discovering drugs. "Through simulations of the properties discovered in this material, the team showed that the material is capable of learning the numbers 0 through 9. The ability to learn numbers is a baseline test of artificial intelligence." The post New material could make AI more energy efficient appeared first on insideHPC.
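As a sense of what "learning the numbers 0 through 9" means as a baseline AI task, here is the software equivalent at its simplest: a multiclass perceptron trained on tiny 3x5 digit bitmaps until it classifies all ten. The bitmaps and model are hypothetical illustrations, not the researchers' material or experiment:

```python
# Toy baseline: learn to recognize digits 0-9 from 3x5 bitmaps
# with a one-layer multiclass perceptron (one weight vector per digit).

DIGITS = {  # hypothetical 3x5 bitmaps, rows concatenated top to bottom
    0: "111101101101111", 1: "010110010010111", 2: "111001111100111",
    3: "111001111001111", 4: "101101111001001", 5: "111100111001111",
    6: "111100111101111", 7: "111001010010010", 8: "111101111101111",
    9: "111101111001111",
}
X = {d: [int(b) for b in bits] for d, bits in DIGITS.items()}
W = {d: [0.0] * 15 for d in DIGITS}

def predict(x):
    """Pick the class whose weight vector scores the input highest."""
    return max(W, key=lambda d: sum(w * xi for w, xi in zip(W[d], x)))

for _ in range(500):  # perceptron updates until error-free
    mistakes = 0
    for label, x in X.items():
        guess = predict(x)
        if guess != label:
            mistakes += 1
            W[label] = [w + xi for w, xi in zip(W[label], x)]   # pull toward true class
            W[guess] = [w - xi for w, xi in zip(W[guess], x)]   # push away from wrong one
    if mistakes == 0:
        break

accuracy = sum(predict(x) == d for d, x in X.items()) / 10
```

The point of the baseline is that the learner starts with no knowledge (all-zero weights) and acquires the digit classes purely from exposure to examples, which is what the material demonstrated in hardware simulations.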
Today Lifebit Biotech announced the general release of Lifebit CloudOS. The fully federated, cloud-native system is designed for companies in the pharmaceutical, drug discovery, direct-to-consumer genetics, healthcare, and population genomics industries. "Because it is completely agnostic to the customer’s HPC and cloud infrastructure, workflows and data, Lifebit CloudOS is unlike any other genomics platforms in that it sits natively on one’s cloud/HPC and brings computation to the data instead of the other way around. This is a game-changer for organizations in genomics fields where data is too big to move and security and compliance are absolutely paramount.” The post Lifebit Launches Federated Genomics Cloud Operating System appeared first on insideHPC.
Today TYAN launched their latest GPU server platforms that support the NVIDIA V100S Tensor Core and NVIDIA T4 GPUs for a wide variety of compute-intensive workloads including AI training, inference, and supercomputing applications. "An increase in the use of AI is infusing into data centers. More organizations plan to invest in AI infrastructure that supports the rapid business innovation,” said Danny Hsu, Vice President of MiTAC Computing Technology Corporation's TYAN Business Unit. “TYAN’s GPU server platforms with NVIDIA V100S GPUs as the compute building block enables enterprise to power their AI infrastructure deployment and helps to solve the most computationally-intensive problems.” The post TYAN Launches AI-Optimized Servers Powered by NVIDIA V100S GPUs appeared first on insideHPC.