Feed: High-Performance Computing News Analysis | insideHPC


Link https://insidehpc.com/
Feed http://insidehpc.com/feed/
Updated 2024-11-24 23:00
Altair Releases PBS Works 2018 for Cloud Friendly Secure Workload Management
Today Altair released PBS Works 2018. Built around the company's PBS Professional core HPC workload manager and scheduler, the new PBS Works user environment streamlines enterprise access to and management of on-premises and cloud HPC resources. “PBS Works has become a key technology to increase productivity and reduce expenses for organizations around the world and across verticals,” said Bill Nitzberg, Chief Technical Officer for PBS Works. “Altair has carefully designed the new PBS Works 2018 suite to invite HPC users to an environment that is both user-friendly and powerful.”
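For readers unfamiliar with the underlying scheduler, the sketch below shows what submitting work to PBS Professional typically looks like: a small job script with #PBS directives handed to qsub. The queue name, resource selection, and solver command are illustrative assumptions, not details from the announcement.

```python
# Minimal sketch of submitting a batch job to a PBS Professional scheduler
# from Python. The queue name, resource selection, and application command
# are illustrative assumptions.
import subprocess
import tempfile

job_script = """#!/bin/bash
#PBS -N demo_job
#PBS -l select=1:ncpus=4:mem=8gb
#PBS -l walltime=00:30:00
#PBS -q workq
cd $PBS_O_WORKDIR
./my_solver input.dat
"""

with tempfile.NamedTemporaryFile("w", suffix=".pbs", delete=False) as f:
    f.write(job_script)
    script_path = f.name

# qsub prints the new job ID on success.
result = subprocess.run(["qsub", script_path], capture_output=True, text=True, check=True)
print("Submitted job:", result.stdout.strip())
```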
Why Use Containers for HPC on the NVIDIA GPU Cloud?
"Containers just make life easy. So one of the things that people have issues with running bare metal is from time to time their libraries change, maybe they want to try the new version of CUDA, the new version of cuDNN, and they forget to simlink it back to the original path. If you have packaged your app into a container, it's the same every time."The post Why Use Containers for HPC on the NVIDIA GPU Cloud? appeared first on insideHPC.
Introducing the Cambridge Data Accelerator
In this video from the Dell EMC HPC Community meeting, Alisdair King from Cambridge University describes the new Cambridge Data Accelerator. "The Cambridge Data Accelerator is an open source software package for building Burst Buffers from high speed flash storage. With available support from Cambridge University, the Cambridge Data Accelerator offers flexibility and high performance for high performance workloads."
ScaleMP Powers Largest Shared-Memory Systems in Canada
ScaleMP announced that the government of Canada has extended the contract for its large shared memory systems acquired from Dell. These SMP systems use vSMP Foundation to aggregate more than 64 Intel Xeon processors each, totaling more than 1500 CPUs per system. The systems are used for a variety of HPC workloads, including computer-aided engineering (CAE) and computational fluid dynamics (CFD). "Together with our hardware partners, we have been providing technology to the government of Canada since 2012, and are proud of repeatedly earning their business,” said Shai Fultheim, founder and CEO of ScaleMP. “Repeat customers are a big part of the vSMP Foundation user community, and we continue to see expansion of our footprint with existing customers along with strong growth in deployments of vSMP Foundation with new ones.”
ISC 2018 is Now Open for Registration
Registration is now open for ISC 2018 at Early Bird discounted rates. The event takes place June 24-28 in Frankfurt. "Various topical and interest-specific Birds-of-a-Feather sessions, the fast-paced Vendor Showdown, and the Exhibitor Forums will again take place this year. Plus, the three-day ISC exhibition will feature about 150 exhibits from leading HPC companies and research organizations."
DDN Powers Data Solutions for AI at the GPU Technology Conference
In this video from the GPU Technology Conference, James Coomer from DDN describes how the company delivers high performance data solutions for machine learning and AI applications. "DDN customers are leveraging machine learning techniques to speed results and improve competitiveness, profitability, customer service, business intelligence, and research effectiveness. The performance and flexible sizing of DDN solutions are well-suited for large-scale machine learning programs. They have the power to feed massive training sets to high core count systems as well as the mixed I/O capabilities necessary to handle data efficiently for CPU, GPU, and mixed multi-algorithm environments from simple linear regressions to deep neural nets."
Fireside Chat: Jensen Huang from NVIDIA on how AI will revolutionize Medicine
In this video from the World Medical Innovation Forum, Jensen Huang from NVIDIA discusses how AI is revolutionizing medicine with Keith Dreyer, Chief Data Science Officer at PHS. "Medical imaging researchers have discovered the power of deep learning. Half of the papers presented at last year’s MICCAI, the leading medical imaging conference, applied deep learning. We’re working with over 300 healthcare startups tackling challenges now possible with deep learning."
ANSYS Software Powers Additive Manufacturing of Metal Components
ANSYS aims to transform how industries such as aerospace and defense, biotech, and automotive manufacture metal parts with its new solutions for metal additive manufacturing. "Our technology spurs the efficient creation of parts for some of the world's most demanding applications, including military machines on foreign soil, spacecraft on other planets and even custom-printed human body parts at hospitals."
HPE Deploys “Genius” Supercomputer at KU Leuven
Today HPE announced a new supercomputer installation at KU Leuven, a Flemish research university consistently ranked as one of the five most innovative universities in the world. HPE collaborated with KU Leuven to develop and deploy Genius, a new supercomputer built to run artificial intelligence (AI) workloads. The system will be available to both academia and industry to build applications that drive scientific breakthroughs, economic growth and innovation in Flanders, the northern region of Belgium.
Researchers using HPC to help fight Bioterrorism
Researchers are using computational models powered by HPC to develop better strategies for protecting us from bioterrorism. "Recent advances in data analytics and artificial intelligence systems are fundamentally transforming our ability to personalize treatments to the specific needs of a patient under treat-to-target paradigms,” said Josep Bassaganya-Riera, co-director of the Biocomplexity Institute’s Nutritional Immunology and Molecular Medicine Laboratory. “Our goal in this project will be to leverage the power of modeling and advanced machine learning methods, so a group of people exposed to a harmful pathogen or its toxins can receive faster, safer, more effective and personalized treatments.”
RCE Podcast Looks at MEEP Software for Simulating Electromagnetic Systems
In this RCE podcast, Brock Palen and Jeff Squyres discuss MEEP, a free finite-difference time-domain (FDTD) software package for electromagnetic simulations. Their guests are Dr. Steven G. Johnson and Dr. Ardavan Oskooi. "Meep is a free and open-source software package for simulating electromagnetic systems via the finite-difference time-domain (FDTD) method. Meep is an acronym for MIT Electromagnetic Equation Propagation."
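For readers new to Meep, the sketch below closely follows the straight-waveguide example from Meep's own Python tutorial: a continuous source radiating into a dielectric waveguide inside a PML-bounded cell. It assumes the meep Python package is installed; the geometry and frequency values are illustrative only.

```python
# A minimal FDTD run with Meep's Python interface, modeled on the project's
# introductory waveguide tutorial. Values below are illustrative assumptions.
import meep as mp

cell = mp.Vector3(16, 8, 0)  # 2D computational cell (x, y)

# A dielectric (epsilon=12) waveguide running along x through the cell center.
geometry = [mp.Block(mp.Vector3(mp.inf, 1, mp.inf),
                     center=mp.Vector3(),
                     material=mp.Medium(epsilon=12))]

# Continuous-wave point source exciting the Ez field component.
sources = [mp.Source(mp.ContinuousSource(frequency=0.15),
                     component=mp.Ez,
                     center=mp.Vector3(-7, 0))]

sim = mp.Simulation(cell_size=cell,
                    boundary_layers=[mp.PML(1.0)],  # absorbing boundaries
                    geometry=geometry,
                    sources=sources,
                    resolution=10)

sim.run(until=200)  # time-step the fields for 200 Meep time units
```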
The Use of HPC to Model the California Wildfires
Ilkay Altintas from the San Diego Supercomputer Center gave this talk at the HPC User Forum. "WIFIRE is an integrated system for wildfire analysis, with specific regard to changing urban dynamics and climate. The system integrates networked observations such as heterogeneous satellite data and real-time remote sensor data, with computational techniques in signal processing, visualization, modeling, and data assimilation to provide a scalable method to monitor such phenomena as weather patterns that can help predict a wildfire's rate of spread."
The Machine Learning Potential of a Combined Tech Approach
This is the first in a five-part series from a report exploring the potential of unified deep learning with CPU, GPU and FPGA technologies. This post explores the machine learning potential of taking a combined approach to these technologies.
Earth-modeling System steps up to Exascale
"Unveiled today by the DOE, E3SM is a state-of-the-science modeling project that uses the world's fastest computers to more accurately understand how Earth's climate work and can evolve into the future. The goal: to support DOE's mission to plan for robust, efficient, and cost-effective energy infrastructures now, and into the distant future."The post Earth-modeling System steps up to Exascale appeared first on insideHPC.
Quantum Computing at NIST
Carl Williams from NIST gave this talk at the HPC User Forum in Tucson. "Quantum information science research at NIST explores ways to employ phenomena exclusive to the quantum world to measure, encode and process information for useful purposes, from powerful data encryption to computers that could solve problems intractable with classical computers."
Radio Free HPC Looks at the New Coral-2 RFP for Exascale Computers
In this podcast, the Radio Free HPC team looks at the Department of Energy's new CORAL-2 RFP for exascale computers. "As far as predictions go, Dan thinks one machine will go to IBM and the other will go to Intel. Rich thinks HPE will win one of the bids with an ARM-based system designed around The Machine memory-centric architecture. They have a wager, so listen in to find out where the smart money is."
Intel HPC Technology: Fueling Discovery and Insight with a Common Foundation
To remain competitive, companies, academic institutions, and government agencies must tap the data available to them to empower scientific breakthroughs and drive greater business agility. This guest post explores how Intel’s scalable and efficient HPC technology portfolio accelerates today’s diverse workloads.
Containers Using Singularity on HPC
Abhinav Thota from Indiana University gave this talk at the 2018 Swiss HPC Conference. "Container use is becoming more widespread in the HPC field. There are various reasons for this, including the broadening of the user base and applications of HPC. One of the popular container tools on HPC is Singularity, an open source project coming out of the Berkeley Lab. In this talk, we will introduce Singularity, discuss how users at Indiana University are using it and share our experience supporting it. This talk will include a brief demonstration as well."
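As a point of reference, a minimal Singularity session might look like the sketch below, which runs a command inside a public Docker Hub image without root privileges. The image name is an illustrative assumption taken from common tutorials, not something from the talk.

```python
# Minimal sketch of running a Docker Hub image through Singularity from a
# Python wrapper. The image (godlovedc/lolcow) is a common tutorial image and
# an assumption here; any docker:// URI works the same way.
import subprocess

# Execute a command inside the container; Singularity runs it under the
# calling user's UID, with no root daemon on the node.
subprocess.run(
    ["singularity", "exec", "docker://godlovedc/lolcow", "cowsay", "hello from HPC"],
    check=True,
)

# Interactive use is similar: `singularity shell docker://...` drops the user
# into a shell inside the same image.
```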
Call for Submissions: SC18 Workshop on Reproducibility
Over at the SC18 Blog, Stephen Lien Harrell from Purdue writes that the conference will host a workshop on the hot topic of reproducibility. The Call for Submissions is out, with a deadline of August 19, 2018.
Job of the Week: HPC System Administrator at D.E. Shaw Research
D.E. Shaw Research is seeking an HPC System Administrator in our Job of the Week. "Our research effort is aimed at achieving major scientific advances in the field of biochemistry and fundamentally transforming the process of drug discovery."
Using the Titan Supercomputer to Develop 50,000 Years of Flood Risk Scenarios
Dag Lohmann from KatRisk gave this talk at the HPC User Forum in Tucson. "In 2012, a small Berkeley, California, startup called KatRisk set out to improve the quality of worldwide flood risk maps. The team wanted to create large-scale, high-resolution maps to help insurance companies evaluate flood risk on the scale of city blocks and buildings, something that had never been done. Through the OLCF’s industrial partnership program, KatRisk received 5 million processor hours on Titan."
Fujitsu Upgrades RAIDEN at RIKEN Center for Advanced Intelligence Project
Fujitsu reports that the company has significantly boosted the performance of the RAIDEN supercomputer. RAIDEN is a computer system for artificial intelligence research originally deployed in 2017 at the RIKEN Center for Advanced Intelligence Project (AIP Center). "The upgraded RAIDEN has increased its performance by a considerable margin, moving from an initial total theoretical computational performance of 4 AI Petaflops to 54 AI Petaflops, placing it in the top tier of Japan's systems. In building this system, Fujitsu demonstrates its commitment to support cutting-edge AI research in Japan."
Why UIUC Built HPC Application Containers for NVIDIA GPU Cloud
In this video from the GPU Technology Conference, John Stone from the University of Illinois describes how container technology in the NVIDIA GPU Cloud helps the University distribute accelerated applications for science and engineering. "Containers are a way of packaging up an application and all of its dependencies in such a way that you can install them collectively on a cloud instance or a workstation or a compute node. And it doesn't require the typical amount of system administration skills and involvement to put one of these containers on a machine."
Video: HPC Use for Earthquake Research
Christine Goulet from the Southern California Earthquake Center gave this talk at the HPC User Forum in Tucson. "SCEC coordinates fundamental research on earthquake processes using Southern California as its principal natural laboratory. The SCEC community advances earthquake system science by synthesizing knowledge of earthquake phenomena through physics-based modeling, including system-level hazard modeling, and by communicating our understanding of seismic hazards to reduce earthquake risk and promote community resilience."
Students: Sign up now for ISC STEM Student Day in Frankfurt
Registration for the ISC STEM Student Day program is now open. As part of the ISC 2018 conference, the full-day program is free of charge and takes place June 27 in Frankfurt, Germany. "We have created a program to welcome science, technology, engineering, and mathematics (STEM) students into the world of HPC, demonstrate how technical skills in this area can propel your future career, introduce you to the current job landscape, and show you what the HPC workforce will look like in 2020 and beyond."
Intel Open Sources nGraph Deep Neural Network model for Multiple Devices
Over at Intel, Scott Cyphers writes that the company has open-sourced nGraph, a framework-neutral Deep Neural Network (DNN) model compiler that can target a variety of devices. With nGraph, data scientists can focus on data science rather than worrying about how to adapt their DNN models to train and run efficiently on different devices. Cyphers highlights the team's engineering challenges and design decisions, with additional details available on GitHub, in the project documentation, and in the SysML paper.
JUWELS Supercomputer in Germany to be based on Modular Supercomputing
"The supercomputer JUQUEEN, the one-time reigning power in Europe’s high-performance computing industry, is ceding its place to its successor, the Jülich Wizard for European Leadership Science. Called JUWELS for short, the supercomputer is the culmination of the joint efforts of more than 16 European partners in the EU-funded DEEP projects since 2011. Once completed, JUWELS will consist of three fully integrated modules able to carry out demanding simulations and scientific tasks."The post JUWELS Supercomputer in Germany to be based on Modular Supercomputing appeared first on insideHPC.
Universities step up to Cloud Bursting
In this special guest feature, Mahesh Pancholi from OCF writes that many universities are now engaging in cloud bursting and regularly taking advantage of public cloud infrastructures that are widely available from large companies like Amazon, Google and Microsoft. "By bursting into the public cloud, the university can offer the latest and greatest technologies as part of its Research Computing Service for all its researchers."
Charliecloud: Unprivileged Containers for User-Defined Software Stacks
"What if I told you there was a way to allow your customers and colleagues to run their HPC jobs inside the Docker containers they're already creating? or an easily learned, easily employed method for consistently reproducing a particular application environment across numerous Linux distributions and platforms? There is. In this talk/tutorial session, we'll explore the problem domain and all the previous solutions, and then we'll discuss and demo Charliecloud, a simple, streamlined container runtime that fills the gap between Docker and HPC -- without requiring HPC Admins to lift a finger!"The post Charliecloud: Unprivileged Containers for User-Defined Software Stacks appeared first on insideHPC.
Exascale Computing for Long Term Design of Urban Systems
In this episode of Let's Talk Exascale, Charlie Catlett from Argonne National Laboratory and the University of Chicago describes why extreme-scale HPC will be needed to build better Smart Cities. "Urbanization is a bigger set of challenges in the developing world than in the developed world, but it’s still a challenge for us in US and European cities and Japan."
Russian RSC Group deploys ‘hot water’ cooled supercomputer at JINR
Today the RSC Group from Russia announced the deployment of the world's first 100% ‘hot water’ liquid-cooled supercomputer at the Joint Institute for Nuclear Research (JINR) in Dubna. "It's great to note that we launch the new heterogeneous supercomputer named after professor Govorun at JINR’s Information Technology Laboratory in the year of the 60th anniversary of the commissioning of the first Ural-1 supercomputer at our institute in 1958. Our scientists and research groups now have a powerful and modern tool that will greatly accelerate theoretical and experimental research of nuclear physics and condensed matter physics,” said Vladimir Vasilyevich Korenkov, Director of the Information Technology Laboratory of the Joint Institute for Nuclear Research.
Video: Addressing Key Science Challenges with Adversarial Neural Networks
Wahid Bhimji from NERSC gave this talk at the 2018 HPC User Forum in Tucson. "Machine Learning and Deep Learning are increasingly used to analyze scientific data, in fields as diverse as neuroscience, climate science and particle physics. On this page you will find links to examples of scientific use cases using deep learning at NERSC, information about what deep learning packages are available at NERSC, and details of how to scale up your deep learning code on Cori to take advantage of the compute power available from Cori's KNL nodes."
David Kepczynski from GE to Chair ECP Industry Council
Today the Exascale Computing Project appointed David Kepczynski from GE Global Research as the new chair of the ECP Industry Council. “We are thrilled that Dave Kepczynski has agreed to take the leadership reins for the ECP’s Industry Council,” ECP Director Doug Kothe said. “He has been an active member of the Industry Council since day one, and his experience and vision pertaining to the potential impact of exascale on U.S. industries is invaluable to our mission.” Kothe added, “We wish to thank Michael McQuade for his pioneering leadership role with this external advisory group, and we wish him well with his future plans.”
Introducing the SPEC High Performance Group and HPC Benchmark Suites
Robert Henschel from Indiana University gave this talk at the Swiss HPC Conference. "In this talk, I will present an overview of the High Performance Group as well as SPEC’s benchmarking philosophy in general. Most everyone knows SPEC for the SPEC CPU benchmarks that are heavily used when comparing processor performance, but the High Performance Group specifically focuses on whole-system benchmarking utilizing the parallelization paradigms common in HPC, like MPI, OpenMP and OpenACC."
Cray Adopts AMD EPYC Processors for Supercomputing
Cray is the first system vendor to offer an optimized programming environment for AMD EPYC processors, which is a distinct advantage. "Cray's decision to offer the AMD EPYC processors in the Cray CS500 product line expands its market opportunities by offering buyers an important new choice," said Steve Conway, senior vice president of research at Hyperion Research.
HPE Teams with University of Bristol for ARM-based HPC
Today the University of Bristol announced an initiative with HPE to accelerate the adoption of ARM-based high performance computing.
StorONE and Mellanox Build Wire-Speed TRU Storage Solutions
Today StorONE announced that it has partnered with Mellanox Technologies to leverage each other’s technological approaches in order to create powerful, scalable and flexible storage solutions. These solutions achieve enterprise-class functionality, high performance and high capacity at the industry’s lowest total cost of ownership. "Modern software-defined storage solutions require high-performance, programmable and intelligent networks,” said Motti Beck, Senior Director Enterprise Market Development, at Mellanox. “Combining StorONE’s TRU STORAGE software with Mellanox Ethernet fabric storage solutions improves the simplicity, cost and efficiency of enterprise storage systems, supports enterprises’ mission critical storage features at wire speed and ensures the best end-user experience available.”
E8 Storage steps up to HPC with InfiniBand Support
Today E8 Storage announced the availability of InfiniBand support for its high performance NVMe storage solutions. The move comes as a direct response to HPC customers that wish to take advantage of the high speed, low latency throughput of InfiniBand for their data hungry applications. E8 Storage support for InfiniBand will be seamless for customers, who now have the flexibility to connect via Ethernet or InfiniBand when paired with Mellanox ConnectX InfiniBand/VPI adapters. "Today we demonstrate once again that E8 Storage’s architecture can expand, evolve and always extract the full potential of flash performance,” comments Zivan Ori, co-founder and CEO of E8 Storage. “Partnering with market leaders like Mellanox that deliver the very best network connectivity technology ensures we continue to meet and, frequently, exceed the needs of our HPC customers even in their most demanding environments.”
Ceph on the Brain: Storage and Data-Movement Supporting the Human Brain Project
Adrian Tate from Cray and Stig Telfer from StackHPC gave this talk at the 2018 Swiss HPC Conference. "This talk will describe how Cray, StackHPC and the HBP co-designed a next-generation storage system based on Ceph, exploiting complex memory hierarchies and enabling next-generation mixed workload execution. We will describe the challenges, show performance data and detail the ways that a similar storage setup may be used in HPC systems of the future."
DOE INCITE Program Seeks Advanced Computational Research Proposals for 2019
The DOE Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program is now seeking proposals for high-impact, computationally intensive research campaigns in a broad array of science, engineering and computer science domains. "From April 16 to June 22, 2018, INCITE’s open call provides an opportunity for researchers to pursue transformational advances in science and technology through large allocations of computer time and supporting resources at the Argonne Leadership Computing Facility (ALCF) and the Oak Ridge Leadership Computing Facility (OLCF). The ALCF and OLCF are DOE Office of Science User Facilities. Open to researchers from academia, industry and government agencies, the INCITE program will award 50 percent of the allocable time on DOE’s leadership-class supercomputers: the ALCF’s Mira and Theta systems and the OLCF’s Summit and Titan systems."
Pawsey Supercomputing Centre Hosts GPU Hackathon this week
Australia’s Pawsey Supercomputing Centre is hosting a GPU Hackathon this week in Perth, Australia. "The GPU Hackathon is a free event taking place at the Esplanade Hotel in Fremantle, from Monday 16 April to Friday 20 April. Six teams from Australia, the United States, and Europe are gathering in Perth for this 5-day event to adapt their applications for GPU architectures."
Industry Insights: Download the Results of our AI & HPC Perceptions Survey
The results from our HPC & AI peception survey are here. "90 percent of all respondents felt that their business will ultimately be impacted by AI. Although almost all respondents see AI as playing a role in the future of the business, the survey also revealed the top three industries that will see the most impact. Healthcare came in first, followed by life sciences, and finance/transportation tied in third place. The possibilities of AI are seemingly endless. And the shift has already begun."The post Industry Insights: Download the Results of our AI & HPC Perceptions Survey appeared first on insideHPC.
Atos Quantum Learning Machine can now simulate real Qubits
Researchers at the Atos Quantum Laboratory have successfully modeled ‘quantum noise’; as a result, simulation is more realistic than ever before and closer to fulfilling researchers’ requirements. "We are thrilled by the remarkable progress that the Atos Quantum program has delivered as of today," said Thierry Breton, Chairman and CEO of Atos.
Scratch to Supercomputers: Bottoms-up Build of Large-scale Computational Lensing Software
Gilles Fourestey from EPFL gave this talk at the Swiss HPC Conference. "LENSTOOL is gravitational lensing software that models the mass distribution of galaxies and clusters. It is used to obtain sub-percent precision measurements of the total mass in galaxy clusters and to constrain the dark matter self-interaction cross-section, a crucial ingredient to understanding its nature."
DDN Builds New Engineering Facility in Colorado focused on AI, Cloud, and Enterprise Data Challenges
Today DDN announced the opening of a new facility in Colorado Springs, Colorado, including a significant expansion of lab, testing and benchmarking facilities. The enhanced capabilities will enable DDN to accelerate development efforts and increase in-house capabilities to mimic customer applications and workflows. "Our Enterprise, AI, HPC and Cloud customers have always relied upon us to develop the world’s leading data storage solutions at-scale, and for our long-term focus and sustained investments in research, technology and innovation,” said Alex Bouzari, chief executive officer, chairman and co-founder of DDN. “We are excited to add our new Colorado Springs facility to the DDN R&D centers worldwide and to expand our team of very talented engineers and technologists who will continue to drive innovation for our customers in the years to come.”
Shifter – Docker Containers for HPC
Alberto Madonna gave this talk at the Swiss HPC Conference. "In this work we present an extension to the container runtime of Shifter that provides containerized applications with a mechanism to access GPU accelerators and specialized networking from the host system, effectively enabling performance portability of containers across HPC resources. The presented extension makes it possible to rapidly deploy high-performance software on supercomputers from containerized applications that have been developed, built, and tested on non-HPC commodity hardware, e.g. the laptop or workstation of a researcher."
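For context, a typical Shifter session at a Slurm-based site follows the pattern sketched below: pull a Docker image through the site's image gateway, then launch it under the workload manager. The image name, node count, and the --image flag provided by Shifter's Slurm integration are assumptions drawn from public NERSC documentation, not from the talk.

```python
# Hedged sketch of a Shifter workflow on a Slurm system. shifterimg pulls the
# Docker image into the site's image gateway; srun then starts the job inside
# it via the shifter runtime. Image name and srun flags are assumptions.
import subprocess

image = "docker:ubuntu:16.04"

subprocess.run(["shifterimg", "pull", image], check=True)
subprocess.run(
    ["srun", "-N", "1", "-n", "1", f"--image={image}",
     "shifter", "cat", "/etc/os-release"],
    check=True,
)
```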
Altair Steps up to Azure Cloud with Inspire Unlimited
Altair software is now part of the Inspire Unlimited software-as-a-service offering available on the Azure cloud. "Unlike the HyperWorks Unlimited Appliance, where performance is based on the number of nodes, Inspire’s scale requirements are based on the number of simultaneous users; there could be 1,000 engineers working together at a time,” says Sam Mahalingam from Altair. “We felt that the HPC environment in Azure was architected to meet the type of back-end requirements we needed for Inspire.” Altair uses Microsoft Azure Virtual Machines, with NV instances powered by NVIDIA Tesla M60 GPUs.
Accelerating Ceph with RDMA and NVMe-oF
Haodong Tang from Intel gave this talk at the 2018 Open Fabrics Workshop. "An efficient network messenger is critical for today’s scale-out storage systems. Ceph is one of the most popular distributed storage systems, providing scalable and reliable object, block and file storage services. As the explosive growth of Big Data continues, there is strong demand to leverage Ceph to build high-performance, ultra-low-latency storage solutions in cloud and big data environments. Traditional TCP/IP cannot satisfy this requirement, but Remote Direct Memory Access (RDMA) can."
Using AI to detect Gravitational Waves with the Blue Waters Supercomputer
NCSA researchers are using AI technologies to detect gravitational waves. The work is described in a new article in Physical Review D this month. "This article shows that we can automatically detect and group together noise anomalies in data from the LIGO detectors by using artificial intelligence algorithms based on neural networks that were already pre-trained to classify images of real-world objects," said research scientist Eliu Huerta.
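The approach described, reusing networks pre-trained on real-world images to classify detector noise, is a form of transfer learning. The hedged PyTorch sketch below (not the authors' code) shows the general pattern: take an ImageNet-pretrained model, swap in a new output layer sized to a hypothetical number of glitch classes, and fine-tune it on spectrogram images.

```python
# Hedged transfer-learning sketch: an ImageNet-pretrained ResNet gets a new
# output layer for a hypothetical number of LIGO glitch classes and is
# fine-tuned on spectrogram images. Class count and batch are assumptions.
import torch
import torch.nn as nn
from torchvision import models

NUM_GLITCH_CLASSES = 22  # assumption; depends on the glitch catalog used

model = models.resnet18(pretrained=True)
for param in model.parameters():   # freeze the pretrained feature extractor
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_GLITCH_CLASSES)  # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on a fake batch of spectrogram "images".
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_GLITCH_CLASSES, (8,))
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.4f}")
```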
Amazon and Libfabric: A case study in flexible HPC Infrastructure
Brian Barrett from Amazon gave this talk at the 2018 OpenFabrics Workshop. "As network performance becomes a larger bottleneck in application performance, AWS is investing in improving HPC network performance. Our initial investment focused on improving performance in open source MPI implementations, with positive results. Recently, however, we have pivoted to focusing on using libfabric to improve point-to-point performance."
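Point-to-point latency of the kind the talk targets is usually measured with a simple ping-pong between two ranks; the hedged mpi4py sketch below illustrates the pattern (it is not Amazon's benchmark). Run it with, for example, mpiexec -n 2 python pingpong.py.

```python
# Minimal point-to-point ping-pong between two MPI ranks, the kind of
# microbenchmark used to observe network latency. Message size and iteration
# count are arbitrary assumptions.
from mpi4py import MPI
import numpy as np
import time

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
buf = np.zeros(1024, dtype=np.uint8)   # 1 KiB message
iters = 1000

comm.Barrier()
start = time.perf_counter()
for _ in range(iters):
    if rank == 0:
        comm.Send(buf, dest=1, tag=0)
        comm.Recv(buf, source=1, tag=0)
    elif rank == 1:
        comm.Recv(buf, source=0, tag=0)
        comm.Send(buf, dest=0, tag=0)
elapsed = time.perf_counter() - start

if rank == 0:
    print(f"round-trip latency: {elapsed / iters * 1e6:.1f} us")
```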