by staff on (#N3TV)
Today Mellanox announced that the HPC4Health Consortium, led by The Hospital for Sick Children (SickKids) and the University Health Network's Princess Margaret Cancer Centre, has selected its InfiniBand networking solutions to improve patient care and help researchers optimize treatment, with the ultimate goal of finding a cure for cancer. The end-to-end FDR 56Gb/s InfiniBand networking solution was adopted as the foundation of the center’s cancer and genomics program to accelerate the sharing, processing, and analysis of data generated from radiology imaging, medical imaging analysis, protein folding, and x-ray diffraction, in order to improve patient care and expedite cancer research. The post HPC4Health Selects Mellanox InfiniBand for Cancer and Genomics Research appeared first on insideHPC.
|
High-Performance Computing News Analysis | insideHPC
Link | https://insidehpc.com/ |
Feed | http://insidehpc.com/feed/ |
Updated | 2024-11-06 16:00 |
by Rich Brueckner on (#N385)
In this video from the Disruptive Technologies Panel at the HPC User Forum, Peter Braam from Cambridge University presents: Processing 1 EB per Day for the SKA Radio Telescope. "The Square Kilometre Array is an international effort to investigate and develop technologies which will enable us to build an enormous radio astronomy telescope with a million square meters of collecting area." The post Video: Processing 1 Exabyte per Day for the SKA Radio Telescope appeared first on insideHPC.
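To put that daily volume in perspective (straightforward arithmetic, not a figure from the talk): 1 EB/day = 10^18 bytes / 86,400 s ≈ 1.16 × 10^13 bytes/s, or roughly 11.6 TB/s of sustained throughput around the clock.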
|
by Rich Brueckner on (#N36Y)
In this video (with transcript) from the 2015 HPC User Forum in Broomfield, Bob Sorensen from IDC moderates a User Agency panel discussion on the NSCI initiative. "You all have seen that usable statement inside the NSCI, and we are all about trying to figure out how to make usable machines. That is a key critical component as far as we're concerned. But the thing that I think we're really seeing, we talked about the fact that single-thread performance is not increasing, and so what we're doing is we're simply increasing the parallelism and then the physics limitations, if you will, of how you cool and distribute power among the parts that are there. That really is leading to a paradigm shift from something that's based on how fast you can crunch the numbers to how fast you can feed the chips with data. It's really that paradigm shift, I think, more than anything else that's really going to change the way that we have to do our computing." The post User Agency Panel Discussion on the NSCI Initiative appeared first on insideHPC.
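A back-of-the-envelope illustration of that shift (our arithmetic, not the panel's): a processor sustaining 1 TFLOP/s on a streaming kernel that moves even one byte of memory traffic per flop would need 1 TB/s of memory bandwidth, several times what a typical DRAM subsystem of the era could deliver, so keeping the chips fed with data, not raw arithmetic, becomes the limiting factor.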
|
by Rich Brueckner on (#N0FX)
Altair has announced that Brüel & Kjær Sound & Vibration Measurement has joined the Altair Partner Alliance (APA), bringing its noise, vibration and harshness (NVH) software, Insight+, to HyperWorks customers. Insight+ creates the ability to efficiently consider test and computer-aided engineering (CAE) data together to assist engineers in better understanding NVH contributions early in the design process. The post Brüel & Kjær joins the Altair Partner Alliance appeared first on insideHPC.
|
by Rich Brueckner on (#N0BN)
"A university environment can be a challenge in many ways, with a wide variety of differing demands from more than a hundred different research groups, so how can a High Performance Computing group hope to meet the requirements of everyone? In this presentation I’ll explore some of the drivers for the HPC services we run at Imperial College London and how this maps onto our PBS Professional configuration. My talk will also cover how we use different features of PBS Pro and what advantages and benefits they give to us."The post Video: PBS and Scheduling at NCI – The past, present and future appeared first on insideHPC.
|
by staff on (#MYA7)
Linding Lab at the University of Copenhagen used an SGI UV system to discover how genetic diseases such as cancer systematically attack the networks controlling human cells. By developing advanced algorithms to integrate data from quantitative mass-spectrometry and next-generation sequencing of tumor samples, the UCPH researchers have been able to uncover cancer-related changes to phospho-signaling networks at a global scale. The studies are some of the early results of the strategic collaboration between SGI and the Linding Lab at UCPH. The landmark findings have been published in two back-to-back papers in today's Cell journal. The post SGI UV Helps Decode How Mutations Rewire Cancer Cells appeared first on insideHPC.
|
by Rich Brueckner on (#MY9B)
BAE Systems in Salt Lake City is seeking an HPC Systems Administrator in our Job of the Week. The post Job of the Week: HPC Systems Administrator at BAE Systems appeared first on insideHPC.
|
by Rich Brueckner on (#MW3X)
Today Oak Ridge announced approval of a project to run Parallware from Appentra on the Titan supercomputer. The project includes an allocation of 50,000 core hours on the supercomputer. "Parallware is a source-to-source parallelizing compiler for sequential scientific programs. Parallware automatically discovers the parallelism available in the input sequential C code and automatically generates parallel-equivalent C code annotated with OpenMP compiler directives." The post Oak Ridge to Run Parallware on Titan appeared first on insideHPC.
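To make that transformation concrete, here is a hand-written sketch of the kind of rewrite a source-to-source parallelizing tool performs: a sequential C reduction loop annotated with an OpenMP directive (illustrative only, not Parallware's actual output):

```c
#include <stdio.h>

#define N 1000000

int main(void) {
    static double a[N], b[N];
    double sum = 0.0;

    for (int i = 0; i < N; i++)   /* prepare input data */
        b[i] = (double)i;

    /* The loop below carries a reduction on 'sum'; a parallelizing
     * compiler can detect this and emit the 'parallel for' directive
     * with a reduction clause, leaving the code valid sequential C
     * when OpenMP is disabled. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++) {
        a[i] = 2.0 * b[i];
        sum += a[i];
    }

    printf("sum = %f\n", sum);
    return 0;
}
```

Built with `gcc -fopenmp`, the annotated loop runs across all available cores; without the flag, the pragma is ignored and the program behaves exactly like the original sequential version.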
|
by staff on (#MW29)
Today IBM announced that Online, a leading managed service provider in France, has selected IBM Power Systems to extend its service capabilities for customers who want to increase the performance of their bare metal servers in the cloud. The post IBM Power Systems Speed Online’s Bare Metal Cloud Environment appeared first on insideHPC.
|
by MichaelS on (#MVPE)
Moving from a desktop-oriented computing environment to a cluster-based environment has its challenges for some organizations. However, a number of companies are aware of the benefits and are progressing in the move to a cluster-based technical computing system. The post High Performance Cluster Computing Survey appeared first on insideHPC.
|
by Rich Brueckner on (#MVJ8)
The SC15 conference has asked us to reach out to our readers and encourage you to sign up for this year's Mentor-Protégé Program. It's as easy as checking a box when you register for the conference. The post Sign up to Mentor a Student at SC15 appeared first on insideHPC.
|
by staff on (#MVGR)
CoolIT Systems made the 2015 PROFIT 500 list with five-year revenue growth of 578% and ranked 20th overall in the Information Technology sector. The post CoolIT Systems Named as one of Canada’s Fastest-Growing Companies on the 2015 PROFIT 500 appeared first on insideHPC.
|
by staff on (#MR9V)
Last week, SC15 announced that Diane Bryant, senior vice president and general manager of Intel’s Data Center Group, has been selected as the HPC Matters plenary speaker. Recently named as one of Fortune’s Most Powerful Women, Bryant offers her perspectives on high performance computing, U.S. competitiveness, and the goal of reaching Exascale. The post Intel’s Diane Bryant Describes Pathways to Exascale appeared first on insideHPC.
|
by staff on (#MR6C)
Lawrence Livermore National Laboratory (LLNL) and the Rensselaer Polytechnic Institute will combine decades of expertise to help American industry and businesses expand use of high performance computing under a recently signed memorandum of understanding. The post LLNL & Rensselaer Polytechnic to Promote Industrial HPC appeared first on insideHPC.
|
by Rich Brueckner on (#MR3C)
Satoshi Matsuoka gave this talk at the PBS Works User Group this week. "The Tokyo Tech. TSUBAME2 supercomputer is one of the world’s leading supercomputers, ranked as high as #4 in the world on the Top500 and recognized as the “greenest supercomputer in the world” on the Green 500. With the GPU upgrade in 2013, it still sustains high performance (5.7 Petaflops Peak) and high usage (nearly 2000 registered users). However, such performance levels have been achieved with pioneering adoption of the latest technologies such as GPUs and SSDs that necessitated non-traditional strategies in resource scheduling." The post TSUBAME2: How to Manage a Large GPU-Based Heterogeneous Supercomputer appeared first on insideHPC.
|
by MichaelS on (#MR1P)
"Two components of ITAC, the Intel Trace Collector and the Intel Trace Analyzer can be used to understand the performance and bottlenecks of a Monte Carlo simulation. When each of the strike prices are distributed to both the Intel Xeon cores the Intel Xeon Phi coprocessor, the efficiency was about 79%, as the coprocessors can calculate the results much faster than the main CPU cores."The post Heterogeneous MPI Application Optimization appeared first on insideHPC.
|
by staff on (#MQFG)
Pointwise, a software company specializing in grid generation and pre-processing software for computational fluid dynamics (CFD), has been awarded a two-year, $1.2 million contract from the Arnold Engineering Development Complex (AEDC), part of the US Air Force Materiel Command, located at Arnold Air Force Base, Tennessee. The post US Air Force Awards Pointwise $1.2 Million CFD Contract appeared first on insideHPC.
|
by MichaelS on (#MNJS)
Massive amounts of computing power and data are needed for effective and efficient processing in many areas of the life sciences domain. From drug design to genomic sequencing and risk analysis, many workflows require that the tools and processes be in place so that entire organizations are more effective. The post IBM Platform Computing Solutions for Life Sciences appeared first on insideHPC.
|
by Rich Brueckner on (#MNCG)
Today IPSJ, Japan's largest IT society, honored Bill Dally from Nvidia with the Funai Achievement Award for his extraordinary achievements in the field of computer science and education. "Dally is the first non-Japanese scientist to receive the award since the first two awards were given out in 2002 to Alan Kay (a pioneer in personal computing) and in 2003 to Marvin Minsky (a pioneer in artificial intelligence)." The post Bill Dally from Nvidia Receives Funai Achievement Award appeared first on insideHPC.
|
by Rich Brueckner on (#MMV2)
Today Univa announced the Univa Grid Engine Container Edition, which fully incorporates Docker containers into the Univa Grid Engine resource manager. The Container Edition features the unique ability to run containers at scale, blend containers with other workloads, and support heterogeneous applications and technology environments. The post Univa Grid Engine Adds Docker Support appeared first on insideHPC.
|
by Rich Brueckner on (#MMSY)
In this video from the Disruptive Technologies Session at the 2015 HPC User Forum, Nick New from Optalysys describes the company's optical processing technology. "Optalysys technology uses light, rather than electricity, to perform processor intensive mathematical functions (such as Fourier Transforms) in parallel at incredibly high-speeds and resolutions. It has the potential to provide multi-exascale levels of processing, powered from a standard mains supply. The mission is to deliver a solution that requires several orders of magnitude less power than traditional High Performance Computing architectures." The post Optalysys: Disruptive Optical Processing Technology for HPC appeared first on insideHPC.
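For context on why Fourier transforms invite massive parallelism: every output bin of a discrete Fourier transform is an independent sum over the input, as this deliberately naive O(N²) C sketch shows (illustrative only; it says nothing about how Optalysys realizes the transform optically):

```c
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define N 8

int main(void) {
    double x[N] = {1, 0, 0, 0, 1, 0, 0, 0};  /* sample input signal */

    /* Each bin X[k] depends only on the input x[], never on another
     * bin, so all N sums can in principle be evaluated simultaneously. */
    for (int k = 0; k < N; k++) {
        double re = 0.0, im = 0.0;
        for (int n = 0; n < N; n++) {
            double angle = -2.0 * M_PI * k * n / N;
            re += x[n] * cos(angle);
            im += x[n] * sin(angle);
        }
        printf("X[%d] = %6.3f %+6.3fi\n", k, re, im);
    }
    return 0;
}
```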
|
by staff on (#MMQ6)
Today Spectra Logic announced it is now taking orders for Spectra tape libraries configured with Linear Tape-Open Generation 7 (LTO-7) technology. The combination of LTO-7, the industry standard for tape technology, and Spectra tape libraries will deliver the best storage density per dollar available. The post Spectra Logic to Offer LTO-7 Technology appeared first on insideHPC.
|
by Douglas Eadline on (#MM8T)
In a perfect world, there would be one version of all compilers, libraries, and profilers. To make things even easier, hardware would never change. However, technology marches forward, and such a world does not exist. Software tool features are updated, bugs are fixed, and performance is increased. Developers need these improvements but at the same time must manage these differences. The post Six Strategies for Maximizing GPU Clusters appeared first on insideHPC.
|
by staff on (#MKV0)
Today Curtiss-Wright Corporation announced that its Defense Solutions division is collaborating with leading High Performance Computing software vendor Bright Computing to bring its supercomputing software tools to the embedded Aerospace & Defense market as part of Curtiss-Wright’s recently announced OpenHPEC Accelerator Suite of best-in-class software development tools. The post Bright Computing Collaborates on OpenHPEC Accelerator Suite appeared first on insideHPC.
|
by Rich Brueckner on (#MH95)
IDC developed a set of cybersecurity case studies of US commercial organizations in order to learn what security problems they have experienced, what changes they have made to address them, and what new underlying security procedures they are exploring. The post Bob Sorensen from IDC Presents: Best Practices in Private Sector Cyber Security appeared first on insideHPC.
|
by staff on (#MH7B)
Today Altair announced that its PBS Professional has been chosen to manage workloads for the new Cray supercomputer to be installed at the Bureau of Meteorology (BoM), Australia's national weather, climate and water agency. The post Australian Bureau of Meteorology to Manage Cray Workloads with Altair PBS Pro appeared first on insideHPC.
|
by Rich Brueckner on (#MGST)
Today SGI and the IT4Innovations national supercomputing center in the Czech Republic announced the deployment of the Salomon supercomputer. With a peak performance of 2 Petaflops, the Salomon supercomputer is twenty times more powerful than its predecessor and is the most powerful supercomputer in Europe based on Intel Xeon Phi coprocessors. The post Czech Republic Steps Up with 2 Petaflop SGI ICE X Supercomputer appeared first on insideHPC.
|
by staff on (#MGJX)
For companies looking to test the viability of engineering in the cloud, Altair has teamed with Intel and Amazon Web Services (AWS) to offer an “HPC Challenge” for product design. In a nutshell, the program provides free cycles on AWS for up to 60 days, where users can run compute-intensive jobs for computer-aided engineering (CAE). The post Altair, Intel and Amazon Offer HPC Challenge appeared first on insideHPC.
|
by Rich Brueckner on (#MG2Q)
Today Cray announced that the Swiss National Supercomputing Centre (CSCS) has installed a Cray CS-Storm cluster supercomputer to power the operational numerical weather forecasts run by the Swiss Federal Office of Meteorology and Climatology (MeteoSwiss). This is the first time a GPU-accelerated supercomputer has been used to run production numerical weather models for a major national weather service. The post Swiss CSCS to Power Weather Forecasts with GPUs on Cray CS-Storm appeared first on insideHPC.
|
by Rich Brueckner on (#MDWX)
Researchers are using the Titan supercomputer to power next-generation subsurface flow simulations. Improved models could benefit carbon sequestration, contaminant transport, and oil recovery research. The post Next-generation Subsurface Flow Simulations on Titan appeared first on insideHPC.
|
by Rich Brueckner on (#MDEF)
Today E4 Computer Engineering announced the results of tests carried out independently on a GPU cluster provided to EnginSoft Italy, a premier global consulting firm in the field of Simulation Based Engineering Science (SBES). The post E4 Benchmarks EnginSoft CFD on ARM64 appeared first on insideHPC.
|
by Rich Brueckner on (#MDB5)
In this video (with transcript) from the 2015 HPC User Forum in Broomfield, Bob Sorensen from IDC moderates a panel discussion on the National Strategic Computing Initiative (NSCI). "Established by an Executive Order by President Obama, the National Strategic Computing Initiative has a mission to ensure the United States continues leading high performance computing over the coming decades. As part of the effort, NSCI will foster the deployment of exascale supercomputers to take on the nation's Grand Challenges." The post Video: Panel on US Plans for Advancing HPC with NSCI appeared first on insideHPC.
|
by Rich Brueckner on (#MDB7)
In this video from the 2015 HPC User Forum, Will Koella from the Department of Defense discusses the National Strategic Computing Initiative (NSCI). Established by an Executive Order by President Obama, NSCI has a mission to ensure the United States continues leading high performance computing over the coming decades. As part of the effort, NSCI will foster the deployment of exascale supercomputers to take on the nation’s Grand Challenges. The post Transcript: Will Koella from DoD Discusses the NSCI Initiative appeared first on insideHPC.
|
by Rich Brueckner on (#MDB9)
In this video from the 2015 HPC User Forum, Doug Kothe from Oak Ridge discusses the National Strategic Computing Initiative (NSCI). Established by an Executive Order by President Obama, NSCI has a mission to ensure the United States continues leading high performance computing over the coming decades. As part of the effort, NSCI will foster the deployment of exascale supercomputers to take on the nation’s Grand Challenges. The post Transcript: Doug Kothe from Oak Ridge Discusses the NSCI Initiative appeared first on insideHPC.
|
by Rich Brueckner on (#MD9K)
In this video from the 2015 HPC User Forum, Randy Bryant from the White House’s Office of Science and Technology Policy (OSTP) discusses the National Strategic Computing Initiative (NSCI). Established by an Executive Order by President Obama, NSCI has a mission to ensure the United States continues leading high performance computing over the coming decades. As part of the effort, NSCI will foster the deployment of exascale supercomputers to take on the nation’s Grand Challenges. The post Transcript: Randy Bryant from the White House OSTP Discusses the NSCI Initiative appeared first on insideHPC.
|
by Rich Brueckner on (#MD9N)
In this video from the 2015 HPC User Forum, Irene Qualters from the National Science Foundation discusses the National Strategic Computing Initiative (NSCI). Established by an Executive Order by President Obama, NSCI has a mission to ensure the United States continues leading high performance computing over the coming decades. As part of the effort, NSCI will foster the deployment of exascale supercomputers to take on the nation's Grand Challenges. The post Transcript: Irene Qualters from the NSF Discusses the NSCI Initiative appeared first on insideHPC.
|
by staff on (#MCPP)
In this special guest feature, Tom Wilkie from Scientific Computing World looks at some issues of life and death that will be discussed at the upcoming ISC Cloud and Big Data conference in Frankfurt. The post ISC Cloud & Big Data: From Banking to Personalized Medicine appeared first on insideHPC.
|
by staff on (#MANP)
Over at Oak Ridge, Eric Gedenk writes that monitoring the status of complex supercomputer systems is an ongoing challenge. Now, Ross Miller from OLCF has developed DDNtool, which provides a single interface to 72 controllers in near real time. The post DDNtool Streamlines File System Monitoring at Oak Ridge appeared first on insideHPC.
|
by Rich Brueckner on (#MAKT)
Virginia Tech is seeking an HPC Systems Specialist in our Job of the Week. The post Job of the Week: HPC Systems Specialist at Virginia Tech appeared first on insideHPC.
|
by Rich Brueckner on (#M86K)
In this video from the Neuroinformatics 2015 Conference, Thomas Lippert from Jülich presents: Why Does the Human Brain Project Need HPC and Data Analytics Infrastructures? The Human Brain Project (HBP) is one of two European flagship projects foreseen to run for 10 years. The HBP aims at creating an open, neuroscience-driven infrastructure for simulation and big-data-aided modeling and research, with a credible user program. The post Thomas Lippert on Why the Human Brain Project Needs HPC and Data Analytics Infrastructures appeared first on insideHPC.
|
by staff on (#M84X)
Bull Atos will provide a second High Performance Computing cluster to the German Waterways Engineering and Research Institute (BAW). Following the installation of the first computer in 2012, the Federal Agency selected a Bull supercomputer for one of their HPC clusters dedicated to complex simulations. The new supercomputer will be operational in October and under the new contract Bull will also provide maintenance services for five years. The post BAW in Germany to Install Additional Bull Supercomputer appeared first on insideHPC.
|
by Rich Brueckner on (#M5KA)
Zachary Cobell from ARCADIS-US presented this talk at the HPC User Forum. "As a global leader for designing sustainable coastlines and waterways, ARCADIS believes in developing multi-faceted, integrated solutions to restore, protect, and enhance sensitive coastal areas. We are working with the Army Corps and the state of Louisiana to design these projects with and from nature, in effect using nature as a dynamic engine." The post Video: HPC for the Louisiana Coastal Master Plan appeared first on insideHPC.
|
by Rich Brueckner on (#M56B)
The Center for Advanced Computing Systems has announced the agenda for its Directives and Tools for Accelerators Workshop. Also known as the Seismic Programming Shift Workshop, the event takes place Oct. 11-13 at the University of Houston. The post Satoshi Matsuoka to Keynote Workshop on Directives and Tools for Accelerators appeared first on insideHPC.
|
by staff on (#M54N)
Ace Computers in Illinois reports that the company is expanding its high performance computing footprint in the Oil & Gas Industry. Not a stranger to this space, the company has more than 30 years of experience designing powerful, scalable computers for leading energy suppliers. The post Ace Computers Steps Up HPC for Oil & Gas appeared first on insideHPC.
|
by staff on (#M534)
Convergent Science reports that the company has adopted the Allinea Forge development tool suite. As the leader in internal combustion engine (ICE) simulation, Convergent Science is using Allinea to increase the capability and performance of the company's CONVERGE software. The post Allinea Forge Sparks Convergent Science Combustion Simulation appeared first on insideHPC.
|
by Rich Brueckner on (#M4D4)
Bo Ewald from D-Wave Systems presented this Disruptive Technologies talk at the HPC User Forum. "While we are only at the beginning of this journey, quantum computing has the potential to help solve some of the most complex technical, commercial, scientific, and national defense problems that organizations face. We expect that quantum computing will lead to breakthroughs in science, engineering, modeling and simulation, financial analysis, optimization, logistics, and national defense applications." The post Bo Ewald Presents: D-Wave Quantum Computing appeared first on insideHPC.
|
by Rich Brueckner on (#M1JJ)
"Starting in 2013, the SC conference organizers launched “HPC Matters†to encourage members of the computational sciences community to share their thoughts, vision, and experiences with how high performance computers are used to improve the lives of people all over the world in more simple terms. Four pillars provide structure to the program: Influencing Daily Lives; Science and Engineering; Economic Impact; and Education."The post Intel’s Diane Bryant to Keynote HPC Matters Plenary at SC15 appeared first on insideHPC.
|
by Rich Brueckner on (#M16P)
In this podcast, David Bump from Dell describes an upcoming workshop on large genomic data sets coming to La Jolla, California on Sept. 15, 2015. "Please join us for this one day workshop featuring presentations from Dell, Appistry, UNC Chapel Hill, Arizona State University, and TGen who all will share their cutting-edge results and best practices for helping labs process, manage, and analyze large genomic data sets. You will also hear from Intel and Nvidia on their latest HPC/Big Data technology innovations." The post Podcast: Dell Workshop on Large Genomic Data sets Coming to La Jolla Sept. 15 appeared first on insideHPC.
|
by MichaelS on (#M15D)
Through profiling, developers and users can get ideas on where an application’s hotspots are, in order to optimize certain sections of the code. In addition to locating where time is spent within an application, profiling tools can locate where there is little or no parallelism and a number of other factors that may affect performance. Performance tuning can help tremendously in many cases. The post Optimization Through Profiling appeared first on insideHPC.
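Before reaching for a full profiler such as gprof or Intel VTune, a quick way to confirm a suspected hotspot is to bracket it with a monotonic timer, as in this minimal C sketch (our illustration; `heavy_work` is a placeholder for real application code):

```c
#include <stdio.h>
#include <time.h>

/* Candidate hotspot: a deliberately expensive loop. */
static double heavy_work(long n) {
    double acc = 0.0;
    for (long i = 1; i <= n; i++)
        acc += 1.0 / (double)i;
    return acc;
}

int main(void) {
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    double r = heavy_work(100000000L);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("heavy_work = %f (%.3f s)\n", r, secs);
    return 0;
}
```

Once the timing confirms where the time goes, a sampling profiler can attribute it line by line and show whether the hot region contains parallelism worth exploiting.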
|
by Rich Brueckner on (#M0P7)
Christopher Hill from MIT presented this talk at the HPC User Forum. "The MITgcm (MIT General Circulation Model) is a numerical model designed for study of the atmosphere, ocean, and climate. Its non-hydrostatic formulation enables it to simulate fluid phenomena over a wide range of scales; its adjoint capability enables it to be applied to parameter and state estimation problems. By employing fluid isomorphisms, one hydrodynamical kernel can be used to simulate flow in both the atmosphere and ocean." The post Video: HPC in Earth & Planetary Science using MITgcm appeared first on insideHPC.
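As a flavor of the kind of kernel such a model applies over its grid, here is a toy one-dimensional upwind advection step in C (purely illustrative and heavily simplified; real circulation models like MITgcm use far more sophisticated discretizations on 3D grids):

```c
#include <stdio.h>

#define NX 64

/* One forward-Euler upwind step of the 1D advection equation
 *   du/dt + c * du/dx = 0
 * with periodic boundaries and a CFL-safe time step. */
int main(void) {
    double u[NX], unew[NX];
    const double c = 1.0, dx = 1.0 / NX, dt = 0.5 * dx / c;

    for (int i = 0; i < NX; i++)                  /* initial square bump */
        u[i] = (i > NX / 4 && i < NX / 2) ? 1.0 : 0.0;

    for (int step = 0; step < 32; step++) {
        for (int i = 0; i < NX; i++) {
            int im1 = (i + NX - 1) % NX;          /* periodic neighbor */
            unew[i] = u[i] - c * dt / dx * (u[i] - u[im1]);
        }
        for (int i = 0; i < NX; i++)
            u[i] = unew[i];
    }

    for (int i = 0; i < NX; i++)                  /* advected profile */
        printf("%g\n", u[i]);
    return 0;
}
```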
|