insideHPC: Inside HPC & AI News | High-Performance Computing & Artificial Intelligence


Link https://insidehpc.com/
Feed http://insidehpc.com/feed/
Updated 2025-08-19 11:15
LLNL & Rensselaer Polytechnic to Promote Industrial HPC
Lawrence Livermore National Laboratory (LLNL) and the Rensselaer Polytechnic Institute will combine decades of expertise to help American industry and businesses expand use of high performance computing under a recently signed memorandum of understanding.
TSUBAME2: How to Manage a Large GPU-Based Heterogeneous Supercomputer
Satoshi Matsuoka gave this talk at the PBS Works User Group this week. "The Tokyo Tech TSUBAME2 supercomputer is one of the world's leading supercomputers, ranked as high as #4 in the world on the Top500 and recognized as the "greenest supercomputer in the world" on the Green 500. With the GPU upgrade in 2013, it still sustains high performance (5.7 petaflops peak) and high usage (nearly 2,000 registered users). However, such performance levels have been achieved through pioneering adoption of the latest technologies, such as GPUs and SSDs, which necessitated non-traditional strategies in resource scheduling."
Heterogeneous MPI Application Optimization
"Two components of ITAC, the Intel Trace Collector and the Intel Trace Analyzer can be used to understand the performance and bottlenecks of a Monte Carlo simulation. When each of the strike prices are distributed to both the Intel Xeon cores the Intel Xeon Phi coprocessor, the efficiency was about 79%, as the coprocessors can calculate the results much faster than the main CPU cores."The post Heterogeneous MPI Application Optimization appeared first on insideHPC.
US Air Force Awards Pointwise $1.2 Million CFD Contract
Pointwise, a software company specializing in grid generation and pre-processing software for computational fluid dynamics (CFD), has been awarded a two-year, $1.2 million contract from the Arnold Engineering Development Complex (AEDC), part of the US Air Force Materiel Command, located at Arnold Air Force Base, Tennessee.
IBM Platform Computing Solutions for Life Sciences
Many areas within the life sciences domain need massive amounts of computing power and data for effective and efficient processing. From drug design to genomic sequencing and risk analysis, many workflows require that the right tools and processes be in place so that entire organizations are more effective.
Bill Dally from Nvidia Receives Funai Achievement Award
Today IPSJ, Japan's largest IT society, honored Bill Dally from Nvidia with the Funai Achievement Award for his extraordinary achievements in the field of computer science and education. "Dally is the first non-Japanese scientist to receive the award since the first two awards were given out in 2002 to Alan Kay (a pioneer in personal computing) and in 2003 to Marvin Minsky (a pioneer in artificial intelligence)."
Univa Grid Engine Adds Docker Support
Today Univa announced the Univa Grid Engine Container Edition, which fully incorporates Docker containers into the Univa Grid Engine resource manager. The Container Edition features the unique ability to run containers at scale and blend containers with other workloads, and it supports heterogeneous applications and technology environments.
Optalysys: Disruptive Optical Processing Technology for HPC
In this video from the Disruptive Technologies Session at the 2015 HPC User Forum, Nick New from Optalysys describes the company's optical processing technology. "Optalysys technology uses light, rather than electricity, to perform processor-intensive mathematical functions (such as Fourier Transforms) in parallel at incredibly high speeds and resolutions. It has the potential to provide multi-exascale levels of processing, powered from a standard mains supply. The mission is to deliver a solution that requires several orders of magnitude less power than traditional High Performance Computing architectures."
Spectra Logic to Offer LTO-7 Technology
Today Spectra Logic announced it is now taking orders for Spectra tape libraries configured with Linear Tape-Open Generation 7 (LTO-7) technology. The combination of LTO-7, the industry standard for tape technology, and Spectra tape libraries will result in the best storage density available per dollar.
Six Strategies for Maximizing GPU Clusters
In a perfect world, there would be one version of all compilers, libraries, and profilers. To make things even easier, hardware would never change. However, technology marches forward, and such a world does not exist. Software tool features are updated, bugs are fixed, and performance is increased. Developers need these improvements but at the same time must manage these differences.
Bright Computing Collaborates on OpenHPEC Accelerator Suite
Today Curtiss-Wright Corporation announced that its Defense Solutions division is collaborating with leading High Performance Computing software vendor Bright Computing to bring its supercomputing software tools to the embedded Aerospace & Defense market as part of Curtiss-Wright's recently announced OpenHPEC Accelerator Suite of best-in-class software development tools.
Bob Sorensen from IDC Presents: Best Practices in Private Sector Cyber Security
IDC developed a set of cybersecurity case studies of US commercial organizations in order to learn what security problems they have experienced, what changes they have made to address them, and what new underlying security procedures they are exploring.
Australian Bureau of Meteorology to Manage Cray Workloads with Altair PBS Pro
Today Altair announced that its PBS Professional has been chosen to manage workloads for the new Cray supercomputer to be installed at the Bureau of Meteorology (BoM), Australia's national weather, climate and water agency.
Czech Republic Steps Up with 2 Petaflop SGI ICE X Supercomputer
Today SGI and the IT4Innovations national supercomputing center in the Czech Republic announced the deployment of the Salomon supercomputer. With a peak performance of 2 petaflops, the Salomon supercomputer is twenty times more powerful than its predecessor and is the most powerful supercomputer in Europe based on Intel Xeon Phi coprocessors.
Altair, Intel and Amazon Offer HPC Challenge
For companies looking to test the viability of engineering in the cloud, Altair has teamed with Intel and Amazon Web Services (AWS) to offer an "HPC Challenge" for product design. In a nutshell, the program provides free cycles on AWS for up to 60 days, where users can run compute-intensive jobs for computer-aided engineering (CAE).
Swiss CSCS to Power Weather Forecasts with GPUs on Cray CS-Storm
Today Cray announced that the Swiss National Supercomputing Centre (CSCS) has installed a Cray CS-Storm cluster supercomputer to power the operational numerical weather forecasts run by the Swiss Federal Office of Meteorology and Climatology (MeteoSwiss). This is the first time a GPU-accelerated supercomputer has been used to run production numerical weather models for a major national weather service.
Next-generation Subsurface Flow Simulations on Titan
Researchers are using the Titan supercomputer to power next-generation subsurface flow simulations. Improved models could benefit carbon sequestration, contaminant transport, and oil recovery research.
E4 Benchmarks EnginSoft CFD on ARM64
Today E4 Computer Engineering announced the results of tests carried out independently on a GPU cluster provided to EnginSoft Italy, a premier global consulting firm in the field of Simulation Based Engineering Science (SBES).
Video: Panel on US Plans for Advancing HPC with NSCI
In this video plus transcripts from the 2015 HPC User Forum in Broomfield, Bob Sorensen from IDC moderates a panel discussion on the National Strategic Computing Initiative (NSCI). "Established by an Executive Order by President Obama, the National Strategic Computing Initiative has a mission to ensure the United States continues leading high performance computing over the coming decades. As part of the effort, NSCI will foster the deployment of exascale supercomputers to take on the nation's Grand Challenges."
Transcript: Will Koella from DoD Discusses the NSCI Initiative
In this video from the 2015 HPC User Forum, Will Koella from the Department of Defense discusses the National Strategic Computing Initiative (NSCI). Established by an Executive Order by President Obama, NSCI has a mission to ensure the United States continues leading high performance computing over the coming decades. As part of the effort, NSCI will foster the deployment of exascale supercomputers to take on the nation's Grand Challenges.
Transcript: Doug Kothe from Oak Ridge Discusses the NSCI Initiative
In this video from the 2015 HPC User Forum, Doug Kothe from Oak Ridge discusses the National Strategic Computing Initiative (NSCI). Established by an Executive Order by President Obama, NSCI has a mission to ensure the United States continues leading high performance computing over the coming decades. As part of the effort, NSCI will foster the deployment of exascale supercomputers to take on the nation's Grand Challenges.
Transcript: Randy Bryant from the White House OSTP Discusses the NSCI Initiative
In this video from the 2015 HPC User Forum, Randy Bryant from the White House's Office of Science and Technology Policy (OSTP) discusses the National Strategic Computing Initiative (NSCI). Established by an Executive Order by President Obama, NSCI has a mission to ensure the United States continues leading high performance computing over the coming decades. As part of the effort, NSCI will foster the deployment of exascale supercomputers to take on the nation's Grand Challenges.
Transcript: Irene Qualters from the NSF Discusses the NSCI Initiative
In this video from the 2015 HPC User Forum, Irene Qualters from the National Science Foundation discusses the National Strategic Computing Initiative (NSCI). Established by an Executive Order by President Obama, NSCI has a mission to ensure the United States continues leading high performance computing over the coming decades. As part of the effort, NSCI will foster the deployment of exascale supercomputers to take on the nation's Grand Challenges.
ISC Cloud & Big Data: From Banking to Personalized Medicine
In this special guest feature, Tom Wilkie from Scientific Computing World looks at some issues of life and death that will be discussed at the upcoming ISC Cloud and Big Data conference in Frankfurt.
DDNtool Streamlines File System Monitoring at Oak Ridge
Over at Oak Ridge, Eric Gedenk writes that monitoring the status of complex supercomputer systems is an ongoing challenge. Now, Ross Miller from OLCF has developed DDNtool, which provides a single interface to 72 controllers in near real time.
Job of the Week: HPC Systems Specialist at Virginia Tech
Virginia Tech is seeking an HPC Systems Specialist in our Job of the Week.
Thomas Lippert on Why the Human Brain Project Needs HPC and Data Analytics Infrastructures
In this video from the Neuroinformatics 2015 Conference, Thomas Lippert from Jülich presents: Why Does the Human Brain Project Need HPC and Data Analytics Infrastructures? The HBP, the Human Brain Project, is one of two European flagship projects expected to run for 10 years. The HBP aims to create an open, neuroscience-driven infrastructure for simulation and big-data-aided modeling and research with a credible user program.
BAW in Germany to Install Additional Bull Supercomputer
Bull Atos will provide a second High Performance Computing cluster to the German Waterways Engineering and Research Institute (BAW). Following the installation of the first system in 2012, the federal agency again selected a Bull supercomputer for one of its HPC clusters dedicated to complex simulations. The new supercomputer will be operational in October, and under the new contract Bull will also provide maintenance services for five years.
Video: HPC for the Louisiana Coastal Master Plan
Zachary Cobell from ARCADIS-US presented this talk at the HPC User Forum. "As a global leader for designing sustainable coastlines and waterways, ARCADIS believes in developing multi-faceted, integrated solutions to restore, protect, and enhance sensitive coastal areas. We are working with the Army Corps and the state of Louisiana to design these projects with and from nature, in effect using nature as a dynamic engine."
Satoshi Matsuoka to Keynote Workshop on Directives and Tools for Accelerators
The Center for Advanced Computing Systems has announced its agenda for the Directives and Tools for Accelerators Workshop. Also known as the Seismic Programming Shift Workshop, the event takes place Oct. 11-13 at the University of Houston.
Ace Computers Steps Up HPC for Oil & Gas
Ace Computers in Illinois reports that the company is expanding its high performance computing footprint in the oil and gas industry. No stranger to this space, the company has more than 30 years of experience designing powerful, scalable computers for leading energy suppliers.
Allinea Forge Sparks Convergent Science Combustion Simulation
Convergent Science reports that the company has adopted the Allinea Forge development tool suite. As the leader in internal combustion engine (ICE) simulation, Convergent Science is using Allinea to increase the capability and performance of the company's CONVERGE software.
Bo Ewald Presents: D-Wave Quantum Computing
Bo Ewald from D-Wave Systems presented this Disruptive Technologies talk at the HPC User Forum. "While we are only at the beginning of this journey, quantum computing has the potential to help solve some of the most complex technical, commercial, scientific, and national defense problems that organizations face. We expect that quantum computing will lead to breakthroughs in science, engineering, modeling and simulation, financial analysis, optimization, logistics, and national defense applications."
Intel’s Diane Bryant to Keynote HPC Matters Plenary at SC15
"Starting in 2013, the SC conference organizers launched “HPC Matters” to encourage members of the computational sciences community to share their thoughts, vision, and experiences with how high performance computers are used to improve the lives of people all over the world in more simple terms. Four pillars provide structure to the program: Influencing Daily Lives; Science and Engineering; Economic Impact; and Education."The post Intel’s Diane Bryant to Keynote HPC Matters Plenary at SC15 appeared first on insideHPC.
Podcast: Dell Workshop on Large Genomic Data Sets Coming to La Jolla Sept. 15
In this podcast, David Bump from Dell describes a workshop on large genomic data sets coming to La Jolla, California on Sept. 15, 2015. "Please join us for this one-day workshop featuring presentations from Dell, Appistry, UNC Chapel Hill, Arizona State University, and TGen, who will all share their cutting-edge results and best practices for helping labs process, manage, and analyze large genomic data sets. You will also hear from Intel and Nvidia on their latest HPC/Big Data technology innovations."
Optimization Through Profiling
Through profiling, developers and users can see where an application's hotspots are in order to optimize those sections of the code. In addition to locating where time is spent within an application, profiling tools can identify regions with little or no parallelism, along with a number of other factors that may affect performance. Performance tuning can help tremendously in many cases.
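As a concrete illustration (not taken from the article), the short C program below has one deliberately expensive function; building it with profiling support, for example gcc -pg followed by gprof, or running it under a sampling profiler, attributes nearly all of the runtime to that function. That is exactly the kind of hotspot report that tells a developer where optimization or added parallelism will pay off. The workload size is an arbitrary choice.

/* Hypothetical hotspot example: almost all time is spent in
 * naive_sum_of_divisors(), so a profiler's report points there first.
 * One common workflow: gcc -pg -O2 hotspot.c -o hotspot && ./hotspot
 * && gprof ./hotspot gmon.out */
#include <stdio.h>

/* Deliberately naive O(n) divisor sum: the hotspot. */
static long naive_sum_of_divisors(long n)
{
    long sum = 0;
    for (long d = 1; d <= n; d++)
        if (n % d == 0)
            sum += d;
    return sum;
}

int main(void)
{
    long total = 0;
    /* The driver loop is cheap; the callee dominates the runtime. */
    for (long n = 1; n <= 20000; n++)
        total += naive_sum_of_divisors(n);
    printf("total = %ld\n", total);
    return 0;
}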
Video: HPC in Earth & Planetary Science using MITgcm
Christopher Hill from MIT presented this talk at the HPC User Forum. "The MITgcm (MIT General Circulation Model) is a numerical model designed for study of the atmosphere, ocean, and climate. Its non-hydrostatic formulation enables it to simulate fluid phenomena over a wide range of scales; its adjoint capability enables it to be applied to parameter and state estimation problems. By employing fluid isomorphisms, one hydrodynamical kernel can be used to simulate flow in both the atmosphere and ocean."
Presentations Posted from MUG’15 – MVAPICH User Group
Ohio State University has posted the presentations from MUG'15, the MVAPICH User Group meeting. The event took place Aug. 19-21 in Columbus, Ohio.
AMD Forms Radeon Technologies Group for Graphics & Immersive Computing
Today AMD announced the promotion of Raja Koduri to senior vice president and chief architect, Radeon Technologies Group, reporting to president and CEO Dr. Lisa Su. In his expanded role, Koduri is responsible for overseeing all aspects of graphics technologies used in AMD's APU, discrete GPU, semi-custom, and GPU compute products.
X-ISS Launches CloudHPC Service
Today X-ISS rolled out CloudHPC, a consulting service created to guide organizations through the complex process of moving their HPC systems into the cloud environment.
Cray Scales Fluent to 129,000 Compute Cores
Today Cray announced a world record by scaling ANSYS Fluent to 129,000 compute cores. "Less than a year ago, ANSYS announced Fluent had scaled to 36,000 cores with the help of NCSA. While the nearly 4x increase over the previous record is significant, it tells only part of the story. ANSYS has broadened the scope of simulations allowing for applicability to a much broader set of real-world problems and products than any other company offers."
Intel HPC Developer Conference Coming to SC15
The first annual Intel HPC Developer Conference is coming to Austin Nov. 14-15 in conjunction with SC15. "The Intel® HPC Developer Conference will bring together developers from around the world to discuss code modernization in high performance computing. Learn what's next in HPC, its technologies, and its impact on tomorrow's innovations. Find the solutions to your biggest challenges at the Intel® HPC Developer Conference."
Best Practices for Maximizing GPU Resources in HPC Clusters
HPC developers want to write code and create new applications. The advanced nature of HPC often requires that this process be tied to the specific hardware and software environment present on a given HPC resource. Developers want to extract the maximum performance from HPC hardware while not getting mired in the complexities of software tool chains and dependencies.
Reducing Your Data Center “Water Guilt”
Concerns over data center water usage have become topical of late, both in the industry and even in the general press. This is not a bad thing, as data center water usage is a legitimate concern. The reality is that the problem is rooted in today's established approaches to data center cooling.
How the QPACE 2 Supercomputer is Solving Quantum Physics with Intel Xeon Phi
In this special guest feature from Scientific Computing World, Tilo Wettig from the University of Regensburg in Germany describes the unusual design of a supercomputer dedicated to solving some of the most arcane issues in quantum physics.
Video: Scalable High Performance Systems
In this video, Alexandru Iosup from TU Delft presents: Scalable High Performance Systems. "During this masterclass, Alexandru discussed several steps towards addressing interesting new challenges which emerge in the operation of the datacenters that form the infrastructure of cloud services, and in supporting the dynamic workloads of demanding users. If we succeed, we may not only enable the advent of big science and engineering, and the almost complete automation of many large-scale processes, but also reduce the ecological footprint of datacenters and the entire ICT industry."
Podcast: Preemptible VMs Lower Cost of Cancer Research at Broad Institute
In this podcast, Jason Stowe from Cycle Computing describes how the Broad Institute is mapping cancer genes with CycleCloud. According to Stowe, Cycle Computing recently ran a 50,000+ core workload for the Broad Institute with low-cost Preemptible VMs on the Google Compute Engine, performing three decades of cancer research computations in a single afternoon.
How the XSEDE Scholars Program Fosters Career Opportunities
Over at XSEDE, Scott Gibson writes that computational scientist Paul Delgado says the XSEDE Scholars Program helped him realize his dream of solving real-life problems.
Preview of Fall 2015 HPC Events
After spending a lovely six straight weeks at home, I find myself marveling at how many conferences are in the queue this Fall leading up to SC15 in Austin. Starting this week at the HPC User Forum, insideHPC will be on the road, bringing you the very latest in high performance computing.
Video: Looking to the Future of NNSA Supercomputing
In this video, Douglas P. Wade from NNSA describes the computational challenges the agency faces in the stewardship of the nation's nuclear stockpile. As the Acting Director of the NNSA Office of Advanced Simulation and Computing, Wade looks ahead to future systems on the road to exascale computing.