by Rich Brueckner on (#4CD7Z)
In this video, researchers from the Texas Advanced Computing Center describe how Intel data-centric technologies power the Frontera supercomputer, which is currently being installed. "This system will provide researchers the groundbreaking computing capabilities needed to grapple with some of science’s largest challenges. Frontera will provide greater processing and memory capacity than TACC has ever had, accelerating existing research and enabling new projects that would not have been possible with previous systems." The post Video: How Intel Data-Centric Technologies will power the Frontera Supercomputer at TACC appeared first on insideHPC.
|
High-Performance Computing News Analysis | insideHPC
Link | https://insidehpc.com/ |
Feed | http://insidehpc.com/feed/ |
Updated | 2024-11-24 12:30 |
by staff on (#4CD2C)
Early registration is now open for ISC High Performance, the largest high performance computing forum in Europe. The event takes place June 16-20 in Frankfurt, Germany. "ISC High Performance is recognized internationally for its strength in bringing together different academic and commercial disciplines to share knowledge in the field of high performance computing. With our first event dating back over 30 years, we’ve created a community spanning the globe. Over that period we have welcomed attendees from over 80 countries, which has made ISC a highly diverse event." The post Registration Opens for ISC 2019 in Frankfurt appeared first on insideHPC.
|
by Rich Brueckner on (#4CD2D)
In this video from the HPC User Forum in Santa Fe, Earl Joseph from Hyperion Research presents an HPC Market Update. The company helps IT professionals, business executives, and the investment community make fact-based decisions on technology purchases and business strategy. The post Video: HPC Market Update from Hyperion Research appeared first on insideHPC.
|
by staff on (#4CCWZ)
Today Cloud HPC company XTREME-D announced the official release of XTREME-Stargate G2, what it calls "next-gen cloud supercomputing for AI" with Intel’s latest CPU. "Announced in October, XTREME-Stargate has an increasing number of early adopters among commercial companies, national laboratories, and academia. The device provides High Performance Computing and graphics processing and is cost effective for both simulation and data analysis." The post XTREME-Stargate G2 Service to offer Next-gen cloud Supercomputing for AI appeared first on insideHPC.
|
by staff on (#4CB3Y)
Today Intel unveiled a new portfolio of data-centric solutions consisting of "Cascade Lake-SP" Intel Xeon Scalable processors, Intel Optane DC memory and storage solutions, and software and platform technologies optimized to help its customers extract more value from their data. "The portfolio of products announced today underscores our unmatched ability to move, store and process data across the most demanding workloads from the data center to the edge." The post Intel Rolls Out 48-Core Cascade Lake-SP Xeon Processors for HPC, AI, & Data-centric Workloads appeared first on insideHPC.
|
by Rich Brueckner on (#4CAYX)
For the first time, a majority of companies are putting mission critical apps in the cloud, according to the latest report released today by Cloud Foundry Foundation. The study revealed that companies treat digital transformation as a constant cycle of adaptation rather than a one-time fix. As part of that process, cloud technologies such as Platform-as-a-Service (PaaS), containers and serverless continue to grow at scale, while microservices and AI/ML are next to be integrated into their workflows. The post Survey: Companies are moving Mission Critical Apps to the Cloud appeared first on insideHPC.
|
by staff on (#4CASN)
Today Fujitsu Laboratories announced that it has developed technology to improve the speed of deep learning software, achieving the world's fastest training time as measured on the ABCI system at AIST. "With the spread of deep learning in recent years, there has been a demand for algorithms that can execute machine learning processing at high speeds, and the speed of deep learning has accelerated by 30 times in the past two years. ResNet-50, a deep neural network for image recognition, is generally used as a benchmark to measure deep learning processing speed." The post New Fujitsu Technology Accelerates Deep Learning on ResNet-50 appeared first on insideHPC.
|
by staff on (#4CASQ)
Scientists are using the Comet supercomputer at SDSC to better understand the complexities of brain waves. With a goal of better understanding human brain development, the Healthy Brain Network (HBN) project is currently collecting brain scans and EEG recordings, as well as other behavioral data from 10,000 New York City children and young adults – the largest such sample ever collected. "We hope to use portals such as the EEGLAB to process this data so that we can learn more about biological markers of mental health and learning disorders in our youngest patients," said HBN Director Michael Milham. The post Supercomputing the Complexities of Brain Waves appeared first on insideHPC.
|
by staff on (#4CAKT)
The Leibniz Supercomputing Centre (LRZ) in Germany has joined the OpenMP Architecture Review Board (ARB), a group of leading hardware and software vendors and research organizations creating the standard for the most popular shared-memory parallel programming model in use today. "With the rise of core counts and the expected future deployment of accelerated systems, optimizing node-level performance is getting more and more important. As a member of the OpenMP ARB, we want to contribute to the future of OpenMP to meet the challenges of new architectures," says Prof. Dieter Kranzlmüller, Chairman of the Board of Directors of LRZ. The post LRZ in Germany joins the OpenMP effort appeared first on insideHPC.
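For readers new to the standard that LRZ will now help steer: OpenMP expresses node-level, shared-memory parallelism through compiler directives. A minimal sketch of a parallel loop with a reduction (generic OpenMP, not LRZ code) looks like this:

```c
/* Minimal OpenMP sketch: iterations are split across threads and the
 * partial sums combined by the reduction clause. Build with an OpenMP
 * flag, e.g. gcc -fopenmp omp_sum.c */
#include <omp.h>
#include <stdio.h>

int main(void) {
    const int n = 1000000;
    double sum = 0.0;

    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < n; i++)
        sum += (double)i;

    printf("max threads: %d, sum = %.0f\n", omp_get_max_threads(), sum);
    return 0;
}
```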
|
by staff on (#4CAKV)
The fastest supercomputer in Europe will soon join the Worldwide LHC Computing Grid (WLCG). Housed at CSCS in Switzerland, the Piz Daint supercomputer will be used for data analysis from Large Hadron Collider (LHC) experiments. Until now, the ATLAS, CMS and LHCb particle detectors delivered their data to the "Phoenix" system for analysis and comparison with the results of previous simulations. The post Piz Daint Supercomputer to Power LHC Computing Grid appeared first on insideHPC.
|
by staff on (#4BSAD)
Whether your code will run on industry-standard PCs or is embedded in devices for specific uses, chances are there’s more than one processor that you can utilize. Graphics processors, DSPs and other hardware accelerators often sit idle while CPUs crank away at code better served elsewhere. This sponsored post from Intel highlights the potential of Intel SDK for OpenCL Applications, which can ramp up processing power. The post CPU, GPU, FPGA, or DSP: Heterogeneous Computing Multiplies the Processing Power appeared first on insideHPC.
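The first step in the heterogeneous model this post describes is discovering which devices a machine actually offers. Here is a minimal, vendor-neutral OpenCL host-side sketch that simply enumerates platforms and devices; it is illustrative only and not taken from the Intel SDK:

```c
/* Enumerate OpenCL platforms and their devices (CPUs, GPUs, accelerators).
 * A discovery-only sketch; link with -lOpenCL. */
#include <stdio.h>
#include <CL/cl.h>

int main(void) {
    cl_platform_id platforms[8];
    cl_uint nplat = 0;
    clGetPlatformIDs(8, platforms, &nplat);

    for (cl_uint p = 0; p < nplat; p++) {
        cl_device_id devs[16];
        cl_uint ndev = 0;
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 16, devs, &ndev);
        for (cl_uint d = 0; d < ndev; d++) {
            char name[256] = {0};
            clGetDeviceInfo(devs[d], CL_DEVICE_NAME, sizeof(name), name, NULL);
            printf("platform %u, device %u: %s\n", p, d, name);
        }
    }
    return 0;
}
```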
|
by staff on (#4C89G)
In this podcast, Franck Cappello from Argonne describes EZ, an effort to compress and reduce the enormous scientific data sets that some of the ECP applications are producing. "There are different approaches to solving the problem. One is called lossless compression, a data-reduction technique that doesn’t lose any information or introduce any noise. The drawback with lossless compression, however, is that user-entry floating-point values are very difficult to compress: the best effort reduces data by a factor of two. In contrast, ECP applications seek a data reduction factor of 10, 30, or even more." The post Podcast: How the EZ Project is Providing Exascale with Lossy Compression for Scientific Data appeared first on insideHPC.
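To make the lossless-versus-lossy distinction concrete, here is a toy error-bounded quantizer in C. It is a sketch of the general idea behind error-bounded lossy compressors, not the EZ/SZ algorithm itself: each value becomes a small integer bin index whose reconstruction differs from the input by no more than a user-chosen bound, and those integers compress far better than raw floating-point bits.

```c
/* Toy error-bounded quantizer (NOT the SZ algorithm, just the core idea).
 * Reconstruction error never exceeds err_bound. */
#include <stdio.h>

void quantize(const double *in, long *bins, int n, double err_bound) {
    for (int i = 0; i < n; i++) {
        double q = in[i] / (2.0 * err_bound);          /* bin width = 2*bound */
        bins[i] = (long)(q + (q >= 0 ? 0.5 : -0.5));   /* round to nearest   */
    }
}

void reconstruct(const long *bins, double *out, int n, double err_bound) {
    for (int i = 0; i < n; i++)
        out[i] = bins[i] * 2.0 * err_bound;  /* within +/- err_bound of input */
}

int main(void) {
    double in[4] = {1.23456, 1.23499, -0.00042, 3.14159}, out[4];
    long bins[4];
    quantize(in, bins, 4, 1e-3);
    reconstruct(bins, out, 4, 1e-3);
    for (int i = 0; i < 4; i++)
        printf("%.5f -> bin %ld -> %.5f\n", in[i], bins[i], out[i]);
    return 0;
}
```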
|
by Rich Brueckner on (#4C84X)
Björn Brynjulfsson from Etix Everywhere gave this talk at CloudFest 2019. "Benefiting from very favorable ambient conditions and drawing on a background of designing, building and operating both high-end facilities and super-economical blockchain facilities, we will show how the Etix team met the challenge in Blönduós, Iceland. The presentation describes how Etix delivered a 45 MW, ultra-green HPC facility with the lowest TCO in its class, in under 9 months from start to finish." The post Video: Defining A New Efficiency Standard for HPC Data Centers appeared first on insideHPC.
|
by Rich Brueckner on (#4C84Z)
In this video from GTC 2019 in San Jose, Harvey Skinner, Distinguished Technologist, discusses the advent of production AI and how the HPE AI Data Node offers a building block for AI storage. "The HPE AI Data Node is an HPE reference configuration which offers a storage solution that provides both the capacity for data and a performance tier that meets the throughput requirements of GPU servers. The HPE Apollo 4200 Gen10 density optimized data server provides the hardware platform for the WekaIO Matrix flash-optimized parallel file system, as well as the Scality RING object store." The post Video: Prepare for Production AI with the HPE AI Data Node appeared first on insideHPC.
|
by Rich Brueckner on (#4C6MD)
Seongchan Kim from KISTI gave this talk at GTC 2019. "How do meteorologists predict weather or weather events such as hurricanes, typhoons, and heavy rain? Predicting weather events has traditionally been done with supercomputer (HPC) simulations using numerical models such as WRF, UM, and MPAS. But recently, much deep learning-based research has been showing outstanding results. We'll introduce several case studies related to meteorological research." The post How Deep Learning Could Predict Weather Events appeared first on insideHPC.
|
by staff on (#4C6HJ)
The Air Force Research Laboratory has unveiled the first-ever shared classified Department of Defense high performance computing capability at the AFRL DOD Supercomputing Resource Center. "The ability to share supercomputers at higher classification levels will allow programs to get their supercomputing work done quickly while maintaining necessary security. Programs will not need to spend their budget and waste time constructing their own secure computer facilities, and buying and accrediting smaller computers for short-term work. This new capability will save billions for the DOD while providing additional access to state-of-the-art computing." The post AFRL Unveils Sharable Classified Supercomputing Capability appeared first on insideHPC.
|
by Rich Brueckner on (#4C53R)
Raghu Raja from Amazon gave this talk at the OpenFabrics Workshop in Austin. "Elastic Fabric Adapter (EFA) is the recently announced HPC networking offering from Amazon for EC2 instances. It allows applications such as MPI to communicate using the Scalable Reliable Datagram (SRD) protocol that provides connectionless and unordered messaging services directly in userspace, bypassing both the operating system kernel and the Virtual Machine hypervisor. This talk presents the designs, capabilities, and an early performance characterization of the userspace and kernel components of the EFA software stack." The post Amazon Elastic Fabric Adapter: Anatomy, Capabilities, and the Road Ahead appeared first on insideHPC.
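EFA is exposed to applications through the libfabric API. As a rough sketch of that programming model, the code below asks libfabric for reliable-datagram (RDM) endpoints, the endpoint type SRD-style messaging is surfaced through, and prints the providers it finds; whether an "efa" provider appears depends on the instance, so treat that as an assumption of the example:

```c
/* List libfabric providers that offer reliable-datagram (RDM) endpoints.
 * On an EFA-enabled EC2 instance an "efa" provider should appear.
 * Sketch only; link with -lfabric. */
#include <stdio.h>
#include <rdma/fabric.h>

int main(void) {
    struct fi_info *hints = fi_allocinfo(), *info = NULL;

    hints->ep_attr->type = FI_EP_RDM;  /* connectionless, reliable, unordered */
    hints->caps = FI_MSG;

    if (fi_getinfo(FI_VERSION(1, 6), NULL, NULL, 0, hints, &info) == 0) {
        for (struct fi_info *cur = info; cur; cur = cur->next)
            printf("provider: %s, fabric: %s\n",
                   cur->fabric_attr->prov_name, cur->fabric_attr->name);
        fi_freeinfo(info);
    }
    fi_freeinfo(hints);
    return 0;
}
```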
|
by Rich Brueckner on (#4C518)
Children's Mercy Hospital in Kansas City is seeking a Senior HPC Systems Engineer in our Job of the Week. "A Senior HPC Systems Engineer is responsible for the daily operation and monitoring of a high performance computing infrastructure, which includes the high performance compute cluster, storage system, and backup and disaster recovery systems. The engineer plans, designs, and implements the integration of HPC systems, and provides consultation and support for other HPC projects. They are expected to be a top-level contributor/specialist for high performance computing and Linux/UNIX environments." The post Job of the Week: Senior HPC Systems Engineer at Children’s Mercy Hospital appeared first on insideHPC.
|
by staff on (#4C385)
Today Inspur announced that it will offer integrated storage solutions with the BeeGFS filesystem. BeeGFS, a leading parallel cluster file system with a distributed metadata architecture, has gained global acclaim for its usability, scalability and powerful metadata processing functions. "BeeGFS has unique advantages in terms of usability, flexibility and performance," said Liu Jun, General Manager of AI&HPC, Inspur. "It can easily adapt to the different business needs of HPC and AI users. The cooperation between Inspur and ThinkParQ will provide our HPC and AI cluster solutions users with an integrated BeeGFS system and a range of high-quality services, helping them to improve efficiency with BeeGFS." The post Inspur to Offer BeeGFS Storage Systems for HPC and AI Clusters appeared first on insideHPC.
|
by staff on (#4C33Z)
Last week at GTC, Altair announced that it has achieved up to 10x speedups with the Altair OptiStruct structural analysis solver on NVIDIA GPU-accelerated system architecture, with no compromise in accuracy. This speed boost has the potential to significantly impact industries including automotive, aerospace, industrial equipment, and electronics that frequently need to run large, high-fidelity simulations. "This breakthrough represents a significant opportunity for our customers to increase productivity and improve ROI with a high level of accuracy, much faster than was previously possible," said Uwe Schramm, Altair’s chief technology officer for solvers and optimization. "By running our solvers on NVIDIA GPUs, we achieved formidable results that will give users a big advantage." The post NVIDIA GPUs Speed Altair OptiStruct structural analysis up to 10x appeared first on insideHPC.
|
by staff on (#4C341)
The Massive Storage Systems and Technology Conference (MSST) has posted its preliminary speaker agenda. Keynote speakers include Margo Seltzer and Mark Kryder, along with a five-day agenda of invited and peer research talks and tutorials, May 20-24 in Santa Clara, California. "MSST 2019 will focus on current challenges and future trends in distributed storage system technologies," said Meghan Wingate McClelland, Communications Chair of MSST. The post Agenda Posted for MSST Mass Storage Conference in May appeared first on insideHPC.
|
by staff on (#4C2YC)
In this Big Compute Podcast, Gabriel Broner from Rescale and Dave Turek from IBM discuss how AI enables the acceleration of HPC workflows. "HPC can benefit from AI techniques. One area of opportunity is to augment what people do in preparing simulations, analyzing results and deciding what simulation to run next. Another opportunity exists when we take a step back and analyze whether we can use AI techniques instead of simulations to solve the problem. We should think about AI as increasing the toolbox HPC users have." The post Big Compute Podcast: Accelerating HPC Workflows with AI appeared first on insideHPC.
|
by Rich Brueckner on (#4C2YE)
In this video from the 2019 GPU Technology Conference, James Coomer from DDN describes the company's high-speed storage solutions for AI, machine learning, and HPC. "This week at GTC, DDN is showcasing its high speed storage solutions, including its A³I architecture and new customer use cases in autonomous driving, life sciences, healthcare, retail, and financial services. DDN's next generation of A³I reference architectures includes NVIDIA’s DGX POD, DGX-2, and DDN’s AI400 parallel storage appliance." The post Video: DDN Accelerates AI, Analytics, and Deep Learning at GTC appeared first on insideHPC.
|
by staff on (#4C0RE)
We've been hearing more and more about immersion cooling for HPC, but what happens to system warranties in these kinds of installations? Today GRC took a step to ease these concerns through a collaboration with TechData to provide support and warranties for worldwide customers submerging servers and other equipment in GRC’s immersion cooling systems. "As we grow our global footprint, it is wonderful to add TechData to our network of partners, while providing customers with added peace of mind," said Peter Poulin, CEO of GRC. The post GRC Partners to Provide Worldwide Support and Server Warranties for Immersion Cooling appeared first on insideHPC.
|
by staff on (#4C0RF)
Today HPC integrator Nor-Tech announced participation in two recent Nobel Physics Prize-Winning projects. The company's HPC gear will help power the Laser Interferometer Gravitational-Wave Observatory (LIGO) project as well as the IceCube neutrino detection experiment. "We are excited about the amazing discoveries these enhanced detectors will reveal," said Nor-Tech Executive Vice President Jeff Olson. "This is an energizing time for all of us at Nor-Tech, knowing that the HPC solutions we are developing for two Nobel projects truly are changing our view of the world." The post Nor-Tech Powers LIGO and IceCube Nobel-Physics Prize-Winning Projects appeared first on insideHPC.
|
by Rich Brueckner on (#4C0KK)
In this video from the GPU Technology Conference, Sumit Gupta from IBM describes how IBM is powering production-level AI and machine learning. "IBM PowerAI provides the easiest on-ramp for enterprise deep learning. PowerAI helped users break deep learning training benchmarks AlexNet and VGGNet thanks to the world's only CPU-to-GPU NVIDIA NVLink interface. See how new feature development and performance optimizations will advance the future of deep learning in the next twelve months, including NVIDIA NVLink 2.0, leaps in distributed training, and tools that make it easier to create the next deep learning breakthrough." The post Video: IBM Powers AI at the GPU Technology Conference appeared first on insideHPC.
|
by staff on (#4C0FD)
While there are many benefits to leveraging the cloud for HPC, there are challenges as well. Along with security and cost, data handling is consistently identified as a top barrier. "In this short article, we discuss the challenge of managing data in hybrid clouds, offer some practical tips to make things easier, and explain how automation can play a key role in improving efficiency." The post Data Management: The Elephant in the Room for HPC Hybrid Cloud appeared first on insideHPC.
|
by Rich Brueckner on (#4BYTB)
The San Diego Supercomputer Center (SDSC) at UC San Diego and Sylabs.io recently hosted the first-ever Singularity User Group meeting, attracting users and developers from around the nation and beyond who wanted to learn more about the latest developments in the open source Singularity project. Now in use on SDSC’s Comet supercomputer, Singularity has quickly become an essential tool in improving the productivity of researchers by simplifying the development and portability challenges of working with complex scientific software. The post SDSC and Sylabs Gather for Singularity User Group appeared first on insideHPC.
|
by staff on (#4BYFZ)
Today quantum computing startup ColdQuanta announced the appointment of Robert "Bo" Ewald as president and chief executive officer. Ewald is well known in high technology, having previously been president of supercomputing leader Cray Research, CEO of Silicon Graphics, and for the past six years, president of quantum computing company D-Wave International. "With his experience at Cray, SGI and D-Wave, Bo has successfully navigated companies through the bleeding edge of technology several times before. I am thrilled to have Bo take ColdQuanta’s helm." The post Bo Ewald joins quantum computing firm ColdQuanta as CEO appeared first on insideHPC.
|
by Rich Brueckner on (#4BYB4)
Sean Hefty and Venkata Krishnan from Intel gave this talk at the OpenFabrics Workshop in Austin. "Advances in Smart NIC/FPGA with integrated network interface allow acceleration of application-specific computation to be performed alongside communication. Participants will learn about the potential for Smart NIC/FPGA application acceleration and will have the opportunity to contribute application expertise and domain knowledge to a discussion of how Smart NIC/FPGA acceleration technology can bring individual applications into the Exascale era." The post Video: Enabling Applications to Exploit SmartNICs and FPGAs appeared first on insideHPC.
|
by staff on (#4BYB6)
Today ACM named Yoshua Bengio, Geoffrey Hinton, and Yann LeCun recipients of the 2018 ACM Turing Award for conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing. "The ACM A.M. Turing Award, often referred to as the 'Nobel Prize of Computing,' carries a $1 million prize, with financial support provided by Google, Inc. It is named for Alan M. Turing, the British mathematician who articulated the mathematical foundation and limits of computing." The post Pioneers in Deep Learning to Receive ACM Turing Award appeared first on insideHPC.
|
by Rich Brueckner on (#4BY6N)
In this video from GTC 2019 in Silicon Valley, Marc Hamilton from NVIDIA describes how accelerated computing is powering AI, computer graphics, data science, robotics, automotive, and more. "Well, we always make so many great announcements at GTC. But one of the traditions Jensen started a few years ago is coming up with a new acronym to really make our messaging for the show very, very simple to remember. So PRADA stands for Programmable Acceleration Multiple Domains One Architecture. And that's really what the GPU has become." The post Video: NVIDIA Showcases Programmable Acceleration of multiple Domains with one Architecture appeared first on insideHPC.
|
by staff on (#4BVZT)
Today MathWorks introduced Release 2019a of MATLAB and Simulink. The release contains new products and important enhancements for artificial intelligence (AI), signal processing, and static analysis, along with new capabilities and bug fixes across all product families. "One of the key challenges in moving AI from hype to production is that organizations are hiring AI ‘experts’ and trying to teach them engineering domain expertise. With R2019a, MathWorks enables engineers to quickly and effectively extend their AI skills, whether it’s to develop controllers and decision-making systems using reinforcement learning, train deep learning models on NVIDIA DGX and cloud platforms, or apply deep learning to 3-D data," said David Rich, MATLAB marketing director. The post AI comes to MATLAB and Simulink with new 2019a release appeared first on insideHPC.
|
by staff on (#4BVZV)
Can the Cloud power ground-breaking research? A new NSF-funded research project aims to provide a deeper understanding of the use of cloud computing in accelerating scientific discoveries. "First announced in 2018, the Exploring Clouds for Acceleration of Science (E-CAS) project has now selected the six research proposals to explore how scientific workflows can leverage advancements in real-time analytics, artificial intelligence, machine learning, accelerated processing hardware, automation in deployment and scaling, and management of serverless applications for a wider range of science." The post E-CAS Project to Explore Clouds for Acceleration of Science appeared first on insideHPC.
|
by Rich Brueckner on (#4BVVY)
Christopher Lameter from Jump Trading gave this talk at the OpenFabrics Workshop in Austin. "In 2017 we got 100G fabrics, in 2018 200G fabrics, and in 2019 it looks like 400G technology may see a considerable amount of adoption. These bandwidths compete with, and are sometimes higher than, the internal bus speeds of the servers connected by these fabrics. I think we need to consider these developments and work on improving fabrics and the associated APIs so that these features become accessible through vendor-neutral APIs. It needs to be possible to code in a portable way and not to a vendor-specific one." The post Faster Fabrics Running Against Limits of the Operating System, the Processor, and the I/O Bus appeared first on insideHPC.
|
by staff on (#4BVW0)
Today D-Wave Systems announced the geographic expansion of its Leap quantum cloud service to 33 countries in Europe and Asia. According to the company, Leap is the only cloud-based service to provide real-time access to a live quantum computer, as well as open-source development tools, interactive demos, educational resources, and knowledge base articles. "The range and robustness of early applications from our customers continues to grow, and customers are starting to see early value in using quantum computing to address real-world business problems." The post D-Wave Leap Quantum Cloud Service comes to Europe and Japan appeared first on insideHPC.
|
by Sarah Rubenoff on (#4BVQ5)
The growing prevalence of artificial intelligence and machine learning is putting heightened focus on the quantities of data that organizations have recently accumulated, as well as the value potential in this data. Companies looking to gain a competitive edge in their market are turning to tools like graphics processing units (GPUs) to ramp up computing power. That's according to a new white paper from Penguin Computing. The post Exploring the ROI Potential of GPU Supercomputing appeared first on insideHPC.
|
by staff on (#4BSJX)
The second annual GPU Hackathon kicked off in Perth this week, a collaboration between Oak Ridge National Laboratory, NVIDIA and the Pawsey Supercomputing Centre. Attended by five teams from across Australia, the five-day hackathon centers on adapting scientific code for accelerated computing. "We’re excited to continue our collaboration with international research facilities," said Mark Stickells, Executive Director at Pawsey. "Pawsey accelerates scientific discovery through technology, expertise and collaboration; and this is demonstrated by the GPU Hackathon. The Hackathon is a great example of collaboration and engagement with industry, academia and government." The post Perth’s GPU Hackathon looks to speed research computing appeared first on insideHPC.
|
by staff on (#4BSE9)
In this podcast, the Radio Free HPC team has an animated discussion about multicore scaling, how easy it seems to be to mislead AI systems, and some good-sized catches of the week. "As CPU performance improvements have slowed down, we’ve seen the semiconductor industry move towards accelerator cards to provide dramatically better results. Nvidia has been a major beneficiary of this shift, but it’s part of the same trend driving research into neural network accelerators, FPGAs, and products like Google’s TPU." The post Podcast: Multicore Scaling Slow Down, and Fooling AI appeared first on insideHPC.
|
by Rich Brueckner on (#4BSEA)
In this video from the GPU Technology Conference, Karan Batta from Oracle describes how the company provides HPC and Machine Learning in the Cloud with Bare Metal speed. "Oracle Cloud Infrastructure offers wide-ranging support for NVIDIA GPUs, including the high-performance NVIDIA Tesla P100 and V100 GPU instances that provide the highest ratio of CPU cores and RAM per GPU available. With a maximum of 52 physical CPU cores, 8 NVIDIA Volta V100 units per bare metal server, 768 GB of memory, and two 25 Gbps interfaces, these are the most powerful GPU instances on the market." The post Oracle Cloud Speeds HPC & AI Workloads at GTC 2019 appeared first on insideHPC.
|
by staff on (#4BQQP)
As supercomputers become ever more capable in their march toward exascale levels of performance, scientists can run increasingly detailed and accurate simulations to study problems ranging from cleaner combustion to the nature of the universe. Making sense of the resulting flood of data, however, is a challenge of its own. Enter ExaLearn, a new machine learning project supported by DOE’s Exascale Computing Project (ECP) that aims to develop new tools to help scientists overcome this challenge by applying machine learning to very large experimental datasets and simulations. The post ExaLearn Project to bring Machine Learning to Exascale appeared first on insideHPC.
|
by staff on (#4BQQR)
Today quantum computing startup IonQ released the results of two rigorous real-world tests that show that its quantum computer can solve significantly more complex problems with greater accuracy than results published for any other quantum computer. "The real test of any computer is what can it do in a real-world setting. We challenged our machine with tough versions of two well-known algorithms that demonstrate the advantages of quantum computing over conventional devices. The IonQ quantum computer proved it could handle them. Practical benchmarks like these are what we need to see throughout the industry." The post IonQ posts benchmarks for quantum computer appeared first on insideHPC.
|
by Rich Brueckner on (#4BP4P)
Ariel Almog from Mellanox gave this talk at the OpenFabrics Workshop in Austin. "Recently, deployment of 50 Gbps per lane (HDR) speeds has begun, and 100 Gbps per lane (NDR), a future technology, is around the corner. The high bandwidth might cause the NIC PCIe interface to become a bottleneck, as PCIe gen3 can handle up to a single 100 Gbps interface over 16 lanes and PCIe gen4 can handle up to a single 200 Gbps interface over 16 lanes. In addition, since the host might have dual CPU sockets, Socket Direct technology provides direct PCIe access to both sockets, eliminating the need for network traffic to cross the inter-processor bus and allowing better utilization of PCIe, thus optimizing overall system performance." The post InfiniBand: To HDR and Beyond appeared first on insideHPC.
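The PCIe arithmetic behind that bottleneck claim is easy to verify. Gen3 signals at 8 GT/s per lane and gen4 at 16 GT/s, both with 128b/130b encoding; the quick calculation below ignores packet-level protocol overhead, which shaves off several percent more:

```c
/* Back-of-envelope PCIe bandwidth vs. NIC port speed. */
#include <stdio.h>

int main(void) {
    double enc  = 128.0 / 130.0;      /* 128b/130b line-code efficiency */
    double gen3 = 8.0 * enc * 16.0;   /* GT/s per lane * 16 lanes */
    double gen4 = 16.0 * enc * 16.0;

    printf("PCIe gen3 x16: ~%.0f Gb/s -> fits one 100G port, not 200G\n", gen3);
    printf("PCIe gen4 x16: ~%.0f Gb/s -> fits one 200G port, not 400G\n", gen4);
    return 0;
}
```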
|
by Rich Brueckner on (#4BP2E)
The German Climate Computing Centre (DKRZ) is seeking an HPC Specialist in Software Development in our Job of the Week. "DKRZ is involved in numerous national and international projects in the field of high-performance computing for climate and weather research. In addition to the direct user support, depending on your interest and ability, you will also be able to participate in this development work." The post Job of the Week: HPC Specialist in Software Development at DKRZ appeared first on insideHPC.
|
by staff on (#4BM5E)
In this video, Cray CEO Pete Ungaro announces Aurora – Argonne National Laboratory’s forthcoming supercomputer and the United States’ first exascale system. Ungaro offers some insight on the technology, what makes exascale performance possible, and why we’re going to need it. "It is an exciting testament to Shasta’s flexible design and unique system and software capabilities, along with our Slingshot interconnect, which will be the foundation for Argonne’s extreme-scale science endeavors and data-centric workloads. Shasta is designed for this transformative exascale era and the convergence of artificial intelligence, analytics and modeling and simulation, all at the same time on the same system, at incredible scale." The post Video: Cray Announces First Exascale System appeared first on insideHPC.
|
by staff on (#4BM5G)
NERSC has signed a contract with NVIDIA to enhance GPU compiler capabilities for Berkeley Lab’s next-generation Perlmutter supercomputer. "We are excited to work with NVIDIA to enable OpenMP GPU computing using their PGI compilers," said Nick Wright, the Perlmutter chief architect. "Many NERSC users are already successfully using the OpenMP API to target the manycore architecture of the NERSC Cori supercomputer. This project provides a continuation of our support of OpenMP and offers an attractive method to use the GPUs in the Perlmutter supercomputer. We are confident that our investment in OpenMP will help NERSC users meet their application performance portability goals." The post NERSC taps NVIDIA compiler team for Perlmutter Supercomputer appeared first on insideHPC.
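The OpenMP GPU computing the contract targets centers on the target offload directives introduced in OpenMP 4.x. A generic sketch of offloading a DAXPY loop (illustrative only, not code from the Perlmutter project):

```c
/* OpenMP target offload sketch: map arrays to the device and run the
 * loop there. Compile with an offloading-capable compiler (e.g. a
 * PGI/NVIDIA or clang toolchain with GPU support); without offload
 * support it simply runs on the host. */
#include <stdio.h>

#define N (1 << 20)
static double x[N], y[N];

int main(void) {
    const double a = 2.0;
    for (int i = 0; i < N; i++) { x[i] = i; y[i] = 1.0; }

    #pragma omp target teams distribute parallel for map(to: x) map(tofrom: y)
    for (int i = 0; i < N; i++)
        y[i] = a * x[i] + y[i];   /* DAXPY on the device */

    printf("y[42] = %f\n", y[42]);
    return 0;
}
```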
|
by Rich Brueckner on (#4BM0K)
Today the PASC19 Conference announced that Dr. Lois Curfman McInnes from Argonne and Rich Brueckner from insideHPC will moderate a panel discussion with thought leaders focused on software challenges for Exascale and beyond, mixing "big picture" and technical discussions. "McInnes will bring her unique perspective on emerging Exascale software ecosystems to the table, while Brueckner will illustrate the benefits of Exascale to world-wide audiences." The post PASC19 Preview: Brueckner and Dr. Curfman-McInnes to Moderate Exascale Panel Discussion appeared first on insideHPC.
|
by Rich Brueckner on (#4BM0N)
Mellanox Scalable Hierarchical Aggregation and Reduction Protocol (SHARP) technology improves upon the performance of MPI operations by offloading collective operations from the CPU to the switch network, and by eliminating the need to send data multiple times between endpoints. This innovative approach decreases the amount of data traversing the network as aggregation nodes are reached, and dramatically reduces the MPI operations time. Implementing collective communication algorithms in the network also has additional benefits, such as freeing up valuable CPU resources for computation rather than using them to process communication. The post How Mellanox SHARP technology speeds AI workloads appeared first on insideHPC.
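The operations SHARP accelerates are ordinary MPI collectives, so no application changes are needed: an allreduce like the sketch below would, on a SHARP-enabled fabric, be aggregated in the switches rather than by the hosts. This is generic MPI, not Mellanox-specific code:

```c
/* A plain MPI_Allreduce: sums one value per rank and returns the total
 * to every rank. This is the collective pattern SHARP offloads to the
 * switch network. Build with mpicc, run with mpirun. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double local = (double)rank, global = 0.0;
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum of ranks 0..%d = %.0f\n", size - 1, global);

    MPI_Finalize();
    return 0;
}
```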
|
by Rich Brueckner on (#4BHWT)
If you are considering moving some of your HPC workload to the Cloud, nothing leads the way like a good set of case studies in your scientific domain. To this end, our good friends at the UberCloud have published their Compendium Of Case Studies In Computational Fluid Dynamics. The document includes 36 CFD case studies summarizing HPC Cloud projects that the UberCloud has performed together with the engineering community over the last six years. The post UberCloud Publishes Compendium Of CFD Case Studies appeared first on insideHPC.
|
by staff on (#4BHWW)
Today the ISC 2019 conference announced that its keynote will be delivered by Professor Ivo Sbalzarini, who will speak to an audience of 3,500 attendees about the pivotal role high performance computing plays in the field of systems biology. Under the title "The Algorithms of Life - Scientific Computing for Systems Biology," Sbalzarini will discuss how HPC is being used as a tool for scientific investigation and for hypothesis testing, as well as a more fundamental way to think about problems in systems biology. The post ISC 2019 Keynote to focus on Algorithms of Life appeared first on insideHPC.
|