by Rich Brueckner on (#18XB0)
Today the Open Scalable File Systems (OpenSFS) community announced the release of Lustre 2.8.0, the fastest and most scalable parallel file system. OpenSFS, founded in 2010 to advance Lustre development, is the premier non-profit organization promoting the use of Lustre and advancing its capabilities through coordinated releases of the Lustre file system. The post OpenSFS Releases Lustre 2.8.0 for LUG 2016 Conference appeared first on insideHPC.
|
High-Performance Computing News Analysis | insideHPC
Link | https://insidehpc.com/ |
Feed | http://insidehpc.com/feed/ |
Updated | 2024-11-06 09:00 |
by Rich Brueckner on (#18X3K)
In this podcast, Rich Brueckner interviews Hugo Saleh, Director of Marketing for the Intel High Performance Computing Platform Group. They discuss the new Intel® Xeon® processor E5-2600 v4 product family, based upon the Broadwell microarchitecture, and the first processor within Intel® Scalable System Framework (Intel® SSF). Hugo describes how the new processors improve HPC performance and examines the impact of Intel® SSF on vastly different procurements, ranging from the massive 200 Petaflops Aurora system to small and large enterprise clusters plus scientific systems. The post Podcast: Intel Moves HPC Forward with Broadwell Family of Xeon Processors appeared first on insideHPC.
|
by Rich Brueckner on (#18WN8)
"Huawei focuses on R&D of IT infrastructure, cooling solutions, software integration, and provides end-to-end HPC solution by building ecosystems with partners. Huawei help customers from different sectors and fields, solving challenges and problems with computing resources, energy expenditure and business needs. This presentation will introduce how Huawei brings fresh technologies to next-generation HPC solutions for more innovation, higher efficiency and scale, as well as presenting our best practices for HPC."The post Video: Huawei Powers Efficient and Scalable HPC appeared first on insideHPC.
|
by staff on (#18WK7)
Today Allinea announced plans to showcase its software tools for developing and optimizing high performance code at the GPU Technology Conference April 4-7 in San Jose. The company will highlight the best practices required to unleash the potential performance within the latest generation of NVIDIA GPUs for a wide range of software applications. The post Allinea Taps the Power of GPUs for High Performance Code appeared first on insideHPC.
|
by staff on (#18VTT)
In this special guest feature from Scientific Computing World, Dr Bruno Silva from The Francis Crick Institute in London writes that new cloud technologies will make the cloud even more important to scientific computing. "The emergence of public cloud and the ability to cloud-burst is actually the real game-changer. Because of its ‘infinite’ amount of resources (effectively always under-utilized), it allows for a clear decoupling of time-to-science from efficiency. One can be somewhat less efficient in a controlled fashion (higher cost, slightly more waste) to minimize time-to-science when required (in burst, so to speak) by effectively growing the computing estate available beyond the fixed footprint of local infrastructure – this is often referred to as the hybrid cloud model. You get both the benefit of efficient infrastructure use, and the ability to go beyond that when strictly required." The post Reducing the Time to Science with Efficient Clouds appeared first on insideHPC.
|
by staff on (#18SGS)
"As a community, we are excited about enabling HPC for everyone. If OpenHPC can really make it so easy to install HPC systems that more people join the ecosystem – as users, system administrators, resource managers, or developers – we all win."The post Jeff Squyres on Building Community at OpenHPC appeared first on insideHPC.
|
by staff on (#18SDV)
The Human Brain Project (HBP) is developing a shared European research infrastructure with the aim of examining the organization of the brain using detailed analyses and simulations and thus combating neurological and psychiatric disorders. For this purpose, the HBP is creating new information technologies like neurosynaptic processors which are based on the principles governing how the human brain works. The post European Research Infrastructure Launched for Human Brain Project appeared first on insideHPC.
|
by Rich Brueckner on (#18RR6)
Kenneth Hoste from Ghent University presented this tutorial at the Switzerland HPC Conference. "One unnecessarily time-consuming task for HPC user support teams is installing software for users. Due to the advanced nature of a supercomputing system (think: multiple multi-core modern microprocessors (possibly next to co-processors like GPUs), the availability of a high performance network interconnect, bleeding edge compilers & libraries, etc.), compiling the software from source on the actual operating system and system architecture that it is going to be used on is typically highly preferred over using readily available binary packages that were built in a generic way." The post Tutorial on the EasyBuild Framework appeared first on insideHPC.
|
by staff on (#18RPB)
With help from the Pittsburgh Supercomputing Center, an international team of researchers has published the largest network to date of connections between neurons in the cortex, where high-level processing occurs, and has revealed several crucial elements of how networks in the brain are organized. The results are published this week in the journal Nature. The post PSC Powers 3D-Reconstruction of Excitatory Visual Neuron Wiring appeared first on insideHPC.
|
by Rich Brueckner on (#18RJ6)
In this video from the 2016 HPC Advisory Council Switzerland Conference, Addison Snell from Intersect360 Research moderates a panel discussion on Exascale computing. “Exascale computing will uniquely provide knowledge leading to transformative advances for our economy, security and society in general. A failure to proceed with appropriate speed risks losing competitiveness in information technology, in our industrial base writ large, and in leading-edge science.” The post Panel Discussion on Exascale Computing appeared first on insideHPC.
|
by staff on (#18N53)
Total, one of the largest integrated oil and gas companies in the world, announced it is boosting the compute power of its SGI Pangea supercomputer with an additional 4.4 petaflops provided by a new SGI ICE X system based on the Intel Xeon processor. Purchased last year, the new SGI system is now in production and will allow Total to determine optimal extraction methods more quickly. The SGI supercomputer allows Total to improve complex modeling of the subsurface and to simulate the behavior of reservoirs, reducing the time and costs associated with discovering and extracting energy reserves. The post SGI Provides Total with Improved Modeling to Support Decision Making appeared first on insideHPC.
|
by staff on (#18MWW)
Today Lawrence Livermore National Laboratory (LLNL) announced it has purchased a first-of-a-kind brain-inspired supercomputing platform for deep learning inference developed by IBM Research. Based on a breakthrough neurosynaptic computer chip called IBM TrueNorth, the scalable platform will process the equivalent of 16 million neurons and 4 billion synapses and consume the energy equivalent of a tablet computer – a mere 2.5 watts of power for the 16 TrueNorth chips. The brain-like, neural network design of the IBM Neuromorphic System is able to infer complex cognitive tasks such as pattern recognition and integrated sensory processing far more efficiently than conventional chips. The post IBM & LLNL Collaborate on TrueNorth Neuromorphic Computing appeared first on insideHPC.
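For scale, the platform figures above work out per chip as follows (simple arithmetic on the numbers quoted, not additional IBM specifications):

```latex
\frac{16\times10^{6}\ \text{neurons}}{16\ \text{chips}} = 10^{6}\ \text{neurons per chip},\qquad
\frac{4\times10^{9}\ \text{synapses}}{16\ \text{chips}} = 2.5\times10^{8}\ \text{synapses per chip},\qquad
\frac{2.5\ \text{W}}{16\ \text{chips}} \approx 0.16\ \text{W per chip}
```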
|
by staff on (#18MTP)
The ability to study complex interactions inside a diesel engine gives Army scientists a new perspective that may help design more fuel-efficient and effective engines. Using state-of-the-art supercomputers, Army scientist Dr. Luis Bravo led efforts to create simulations highlighting the first year of his project, allowing his team to investigate the jet fuel spray breakup process with microsecond time fidelity and sub-millimeter resolution, while generating petabytes of data. The post Supercomputing Complex Diesel Injection Mist at Army Research Lab appeared first on insideHPC.
|
by Rich Brueckner on (#18MQ5)
"As the most powerful public research supercomputer in the southern hemisphere, Magnus supports several hundred research projects led by researchers from Australian academic and research institutions. As a Cray XC40 system, Magnus provides a petascale supercomputing environment for a diverse range of application areas, such as energy and resources, food security, ground water modeling, climate modeling, astronomy and astrophysics, genomics, and advanced manufacturing."The post Interview: Pawsey Supercomputer Centre Propels Science Down Under appeared first on insideHPC.
|
by Rich Brueckner on (#18HM1)
Pak Lui from the HPC Advisory Council presented this talk at the Switzerland HPC Conference. "Achieving good scalability performance on HPC scientific applications typically involves a good understanding of the workload through performing profile analysis, and comparing the behavior of different hardware to pinpoint bottlenecks in different areas of the HPC cluster." The post Video: Best Practices – Applications Performance Optimizations appeared first on insideHPC.
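As an illustrative first step toward that kind of workload understanding (a sketch of ours, not code from the talk), bracketing a suspect region with MPI_Wtime and reducing the timings across ranks exposes load imbalance before reaching for a full profiling tool:

```c
#include <mpi.h>
#include <stdio.h>

/* Coarse first-pass profiling: time a region on every rank and
 * report the slowest one -- a large gap between the slowest rank
 * and the average is the usual cue to drill down with a profiler. */
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double t0 = MPI_Wtime();
    /* ... workload region under study ... */
    double local = MPI_Wtime() - t0;

    double worst;
    MPI_Reduce(&local, &worst, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("slowest rank took %.6f s\n", worst);

    MPI_Finalize();
    return 0;
}
```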
|
by Rich Brueckner on (#18HJT)
In this a16z Podcast, Vijay Pande and Alex Rampell share their observations and advice on all things data network effects. "If network effects are one of the most important concepts for software-based businesses, then that may be especially true of data network effects – a network effect that results from data." The post Podcast: For Data Network Effects, the Cool Stuff Only Happens at Scale appeared first on insideHPC.
|
by Rich Brueckner on (#18HHK)
Gary Grider from Los Alamos National Laboratory will keynote the 2016 OpenFabrics Workshop with a talk on HPC Storage and IO Trends and Workflows. The event takes place April 4-8, 2016 in Monterey, California. The post Gary Grider from LANL to Keynote OpenFabrics Workshop appeared first on insideHPC.
|
by Rich Brueckner on (#18H5C)
Bo Ewald from D-Wave Systems presented this talk at the HPC Advisory Council Switzerland Conference. "This talk will provide an introduction to quantum computing and briefly review different approaches to implementing a quantum computer. D-Wave’s approach to implementing a quantum annealing architecture and the software and programming environment will be discussed. Finally, some potential applications of quantum computing will also be addressed." The post Bo Ewald Presents: The Quantum Effect – HPC Without FLOPS appeared first on insideHPC.
|
by Rich Brueckner on (#18H47)
The University of Aberdeen has become the first Scottish university to partner with IBM to offer students and staff access to its cognitive computing technology. "Cognitive represents an entirely new model of computing that includes a range of technology innovations in analytics, natural language processing and machine learning. The collaboration between IBM and the University of Aberdeen, which builds on a long-standing relationship, aims to help nurture the next generation of innovators and is the first initiative of this type in Scotland." The post IBM Partners with University of Aberdeen to Drive Cognitive Computing appeared first on insideHPC.
|
by Rich Brueckner on (#18ENB)
"Containers wrap up software with all its dependencies in packages that can be executed anywhere. This can be specially useful in HPC environments where, often, getting the right combination of software tools to build applications is a daunting task. However, typical container solutions such as Docker are not a perfect fit for HPC environments. Instead, Shifter is a better fit as it has been built from the ground up with HPC in mind. In this talk, we show you what Shifter is and how to leverage from the current Docker environment to run your ap- plications with Shifter."The post Video: Shifter – Containers in HPC environments appeared first on insideHPC.
|
by Rich Brueckner on (#18EJ7)
This week IBM announced the opening of a Bluemix Garage in Nice, France to help European organizations of all sizes and industries accelerate the development and design of next-generation apps on IBM Cloud. “Our latest Bluemix Garage in Nice is a critical addition to our network of Garages across Europe and globally, and will help our clients to more quickly build with IBM Cloud,” said Steve Robinson, General Manager, Client Engagement of IBM Cloud. “IBM Cloud offers companies the most rapid on-ramp, and most robust toolset, for their developers to create the apps they need to succeed and compete.” The post IBM Bluemix Garage in France Fuels Cloud Development appeared first on insideHPC.
|
by Rich Brueckner on (#18C7N)
"The Exascale computing challenge is the current Holy Grail for high performance computing. It envisages building HPC systems capable of 10^18 floating point operations under a power input in the range of 20-40 MW. To achieve this feat, several barriers need to be overcome. These barriers or “walls†are not completely independent of each other, but present a lens through which HPC system design can be viewed as a whole, and its composing sub-systems optimized to overcome the persistent bottlenecks."The post Using Xeon + FPGA for Accelerating HPC Workloads appeared first on insideHPC.
|
by Rich Brueckner on (#18C4B)
Mount Sinai Health Systems in New York is seeking an HPC Systems Administrator in our Job of the Week. The post Job of the Week: HPC System Administrator at Mount Sinai Health appeared first on insideHPC.
|
by Rich Brueckner on (#189H1)
Performing experimental weather forecasts using the Stampede supercomputer at the Texas Advanced Computing Center, researchers have gained a better understanding of what conditions cause severe hail to form, and are producing predictions with far greater accuracy than those currently used operationally. The post Improving Storm Forecasts with the Stampede Supercomputer appeared first on insideHPC.
|
by Rich Brueckner on (#189DN)
In this podcast, the Radio Free HPC team previews the GPU Technology Conference coming up April 4-7 in Silicon Valley. "GTC is the largest and most important event of the year for GPU developers. Join us this year as we showcase the most vital work in the computing industry today – Artificial Intelligence and Deep Learning, Virtual Reality and Self Driving Cars. GTC attracts developers, researchers, and technologists from some of the top companies, universities, research firms and government agencies from around the world." The post Radio Free HPC Previews the GPU Technology Conference appeared first on insideHPC.
|
by Rich Brueckner on (#1899S)
"Thanks to the arrival of SSDs, the performance of storage systems can be boosted by orders of magnitude. While a considerable amount of software engineering has been invested in the past to circumvent the limitations of rotating media, there is a misbelief than a lightweight software approach may be sufficient for taking advantage of solid state media. Taking the data protection as an example, this talk will present some of the limitations of current storage software stacks. We will then discuss how this unfold to a more radical re-design of the software architecture and ultimately is making a case for an I/O interception layer."The post Video: Protecting Your Data, Protecting Your Hardware appeared first on insideHPC.
|
by Rich Brueckner on (#1895Q)
Today the OpenACC standards group announced a set of additional hackathons and a broad range of learning opportunities taking place during the upcoming GPU Technology Conference being held in San Jose, CA April 4-7, 2016. OpenACC is a mature and performance-portable path for developing scalable parallel programs across multi-core CPUs, GPU accelerators or many-core processors. The post OpenACC Building Momentum going into GTC appeared first on insideHPC.
|
by Rich Brueckner on (#185EJ)
"With Docker v1.9 a new networking system was introduced, which allows multi-host network- ing to work out-of-the-box in any Docker environment. This talk provides an introduction on what Docker networking provides, followed by a demo that spins up a full SLURM cluster across multiple machines. The demo is based on QNIBTerminal, a Consul backed set of Docker Images to spin up a broad set of software stacks."The post Video: The State of Linux Containers appeared first on insideHPC.
|
by Rich Brueckner on (#1856D)
Researchers at the University of Adelaide will soon have access to a new Lenovo supercomputer named "Phoenix" with as much as 30 times more computing power than before. The post Lenovo Powers Phoenix Supercomputer at University of Adelaide appeared first on insideHPC.
|
by Rich Brueckner on (#1853F)
Funded by the European Commission in 2011, the DEEP project was the brainchild of scientists and researchers at the Jülich Supercomputing Centre (JSC) in Germany. The basic idea is to overcome the limitations of standard HPC systems by building a new type of heterogeneous architecture: one that could dynamically divide less parallel and highly parallel parts of a workload between a general-purpose Cluster and a Booster—an autonomous cluster with Intel® Xeon Phi™ processors designed to dramatically improve performance of highly parallel code. The post How Intel Worked with the DEEP Consortium to Challenge Amdahl’s Law appeared first on insideHPC.
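The law being challenged caps speedup by the serial fraction of a code: with parallel fraction p on N processors,

```latex
S(N) = \frac{1}{(1-p) + p/N}, \qquad \lim_{N\to\infty} S(N) = \frac{1}{1-p}.
```

Routing only the highly parallel portion to the Booster, while the Cluster handles the less scalable portion, is the DEEP answer to that bound.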
|
by MichaelS on (#184YT)
"It is important to be able to express algorithms and then the coding in an architecture independent manner to gain maximum portability. Vectorization, using the available CPUs and coprocessors such as the Intel Xeon Phi coprocessor, are critical for HPC applications where performance is of the highest importance. However, since architectures change over time and become more powerful, using libraries that can adjust to the new architectures is quite important."The post Intrinsic Vectorization for Intel Xeon Phi appeared first on insideHPC.
|
by Rich Brueckner on (#181BN)
"High performance computing has begun scaling beyond Petaflop performance towards the Exaflop mark. One of the major concerns throughout the development toward such performance capability is scalability – at the component level, system level, middleware and the application level. A Co-Design approach between the development of the software libraries and the underlying hardware can help to overcome those scalability issues and to enable a more efficient design approach towards the Exascale goal."The post Co-Design Architecture: Emergence of New Co-Processors appeared first on insideHPC.
|
by staff on (#18140)
Today the Ethernet Alliance unveiled its 2016 Ethernet Roadmap at OFC 2016. The roadmap highlights Ethernet’s breadth of speeds, current and next-generation modules and interfaces, PoE, and innovations like the OIF's FlexEthernet, and offers an overview of existing and future modules including QSFP-DD, microQSFP, and OBO; interfaces; and nomenclature at speeds from 10 Mb/s to 400GbE. The post A Quick Look at the 2016 Ethernet Roadmap appeared first on insideHPC.
|
by Rich Brueckner on (#180WN)
A recent study conducted by the Barcelona Supercomputing Center suggests that calibrated model ensembles improve the trustworthiness of attributing extreme weather events to climate change. The study also found that the limitations of current climate models tend to lead to overestimates in climate change attribution. The post Supercomputing Extreme Weather Events and Climate Change appeared first on insideHPC.
|
by Rich Brueckner on (#180V7)
Axel Koehler from Nvidia presented this talk at the HPC Advisory Council Switzerland Conference. "Accelerated computing is transforming the data center, delivering unprecedented throughput and enabling new discoveries and services for end users. This talk will give an overview of the NVIDIA Tesla accelerated computing platform, including the latest developments in hardware and software. In addition, it will be shown how deep learning on GPUs is changing how we use computers to understand data." The post Video: The Nvidia Tesla Accelerated Computing Platform appeared first on insideHPC.
|
by staff on (#180RD)
The Blue Waters project at the University of Illinois is offering a new graduate course entitled Introduction to High Performance Computing. The course will be offered as a collaborative online course for multiple participating institutions in the fall 2016 semester. "The project is seeking university partners that are interested in offering the course for credit to their students. The course includes online video lectures, quizzes, and homework assignments with access to free accounts on the Blue Waters system." The post Seeking Students and Instructors for the Blue Waters Intro to HPC Virtual Course appeared first on insideHPC.
|
by staff on (#17WSF)
Intel is expected to release production versions of its Knights Landing (KNL) 72-core coprocessor later in 2016. These next-generation coprocessors are impacting the physical design of the supercomputers now coming down the pike in a number of ways. One of the most dramatic changes is the significant increase in cooling requirements – these are high-wattage chips that run very hot and present some interesting engineering challenges for systems designers. The post Cooling Today’s Hot New Processors appeared first on insideHPC.
|
by Rich Brueckner on (#17XM6)
Today ISC 2016 announced that a research paper in the area of Message Passing Interface (MPI) performance has been selected to receive the 2016 Hans Meuer Award. The award will be presented at the ISC High Performance conference on Monday, June 20. The post Intel MPI Messaging Paper Wins ISC 2016 Hans Meuer Award appeared first on insideHPC.
|
by Rich Brueckner on (#17XDK)
Zaikun Xu from the Università della Svizzera Italiana presented this talk at the Switzerland HPC Conference. "In the past decade, deep learning, as a life-changing technology, has gained huge success on various tasks, including image recognition, speech recognition, machine translation, etc. Pioneered by several research groups, deep learning is a renaissance of neural networks in the big data era." The post Tutorial on Deep Learning appeared first on insideHPC.
|
by staff on (#17X15)
Seagate Technology and Los Alamos National Laboratory are researching a new storage tier to enable massive data archiving for supercomputing. The joint effort is aimed at determining innovative new ways to keep massive amounts of stored data available for rapid access, while also minimizing power consumption and improving the quality of data-driven research. Under a Cooperative Research and Development Agreement, Seagate and Los Alamos are working together on power-managed disk and software solutions for deep data archiving, which represents one of the biggest challenges faced by organizations that must juggle increasingly massive amounts of data using very little additional energy. The post Seagate and LANL to Heat Up Data Archiving For Supercomputers appeared first on insideHPC.
|
by staff on (#17WXQ)
Today CoolIT Systems announced it has enabled Cascade Technologies to increase their compute density by 2.5 times within their existing floor space, rack space, and air conditioning capacity by deploying liquid cooling. “Partnering with CoolIT Systems solved our key requirements of more compute density without having to expand our floor space or AC capacity,” said Frank Ham, CEO at Cascade Technologies. “The liquid cooled solution surpasses our efficiency goals, allows us to pack a lot of compute into a small environment, and is impressively quiet.” The post Liquid Cooling Doubles Compute Capacity at Cascade Technologies appeared first on insideHPC.
|
by Rich Brueckner on (#17WTS)
DK Panda from Ohio State University presented this talk at the Switzerland HPC Conference. "This talk will focus on challenges in designing runtime environments for Exascale systems with millions of processors and accelerators to support various programming models. We will focus on MPI, PGAS (OpenSHMEM, CAF, UPC and UPC++) and Hybrid MPI+PGAS programming models by taking into account support for multi-core, high-performance networks, accelerators (GPUs and Intel MIC) and energy-awareness. Features and sample performance numbers from the MVAPICH2 libraries will be presented." The post High-Performance and Scalable Designs of Programming Models for Exascale Systems appeared first on insideHPC.
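For readers new to the PGAS models named above, a minimal OpenSHMEM sketch (our illustration, not material from the talk): each PE writes directly into a symmetric variable on a neighbor with a one-sided put, with no matching receive required.

```c
#include <stdio.h>
#include <shmem.h>

int dest;  /* global variables are symmetric: same address on every PE */

int main(void) {
    shmem_init();
    int me   = shmem_my_pe();
    int npes = shmem_n_pes();

    /* One-sided put: deposit my PE number into 'dest' on the next PE. */
    shmem_int_p(&dest, me, (me + 1) % npes);
    shmem_barrier_all();

    printf("PE %d received %d\n", me, dest);
    shmem_finalize();
    return 0;
}
```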
|
by staff on (#17SRG)
Nominations are now open for the PRACE Ada Lovelace HPC Award. The new award recognizes women who are making outstanding contributions to HPC in Europe. The post Nominations Open for PRACE Ada Lovelace HPC Award appeared first on insideHPC.
|
by Rich Brueckner on (#17SM5)
Calista Redmond from IBM presented this talk at the Switzerland HPC Conference. "The OpenPOWER Foundation was founded in 2013 as an open technical membership organization that will enable data centers to rethink their approach to technology. Today, nearly 200 member companies are enabled to customize POWER CPU processors and system platforms for optimization and innovation for their business needs. These innovations include custom systems for large or warehouse scale data centers, workload acceleration through GPU, FPGA or advanced I/O, platform optimization for SW appliances, or advanced hardware technology exploitation. OpenPOWER members are actively pursuing all of these innovations and more, and welcome all parties to join in moving the state of the art of OpenPOWER systems design forward." The post Industry Shifts to Open Infrastructure as OpenPOWER Foundation Gains Momentum appeared first on insideHPC.
|
by staff on (#17SJ5)
"The Techila user experience available in Google Cloud Launcher revolutionizes simulation and analysis. Techila’s patented end-to-end solution integrates the scalable power of Google Cloud Platform seamlessly into popular tools and environments on the user's own PC: MATLAB, Python, R programming language, and more. And what’s more, you don't need to buy MATLAB licenses for the computing environment,†says Rainer Wehkamp, CEO, Techila Technologies Ltd.The post Techila & Google Bringing on-demand HPC to Every Desk appeared first on insideHPC.
|
by staff on (#17SCY)
The STFC Hartree Centre in the UK will host a Hackathon for coders, developers, designers, entrepreneurs and start-ups in May. The event will take place May 18-20 at the Hartree Centre in Cheshire. In partnership with IBM Watson, the Hartree Hack will put the latest cognitive technologies directly into the hands of attendees. Over just three days, participants will learn from the experts what IBM Watson APIs (application programming interfaces) can offer them and how to use them, create their first cognitive app, and compete to win £25k of support from STFC to propel their idea forward to market reality. The post Learn the Latest Cognitive and Big Data Tools at the Hartree Hack appeared first on insideHPC.
|
by Rich Brueckner on (#17S97)
Michele de Lorenzi from the Swiss National Supercomputing Centre presented this talk at the 2016 HPC Advisory Council Switzerland Conference. "Founded in 1991, CSCS, the Swiss National Supercomputing Centre, develops and provides the key supercomputing capabilities required to solve important problems for science and society. The centre enables world-class research with a scientific user lab that is available to domestic and international researchers through a transparent, peer-reviewed allocation process. CSCS's resources are open to academia, and are also available to users from industry and the business sector. The centre is operated by ETH Zurich and is located in Lugano." The post Video: Welcome to HPC in Switzerland appeared first on insideHPC.
|
by Rich Brueckner on (#17P6H)
Over at Enterprise Storage Forum, Henry Newman looks at why we should focus on how much work gets done rather than on specifications as disk drives and SSDs get faster and faster. This is not a new rant for Henry; in fact, the importance of workflow over bandwidth or IOPS is the main theme at this year’s Mass Storage Systems and Technology Conference (MSST) coming up in May. The post Henry Newman on Why Workloads Matter More Than IOPS appeared first on insideHPC.
|
by Rich Brueckner on (#17P5R)
In this video, Al Roker from the Today Show looks at how Cray XC30 supercomputers give ECMWF more accurate forecasts than we get here in America. ECMWF uses advanced computer modeling techniques to analyze observations and predict future weather. Their assimilation system uses 40 million observations a day from more than 50 different instruments on satellites, and from many ground-based and airborne measurement systems. The post Video: Cray Powers More Accurate Forecasts at ECMWF appeared first on insideHPC.
|
by Rich Brueckner on (#17MFS)
A new DOE program designed to spur the use of high-performance supercomputers to advance U.S. manufacturing is now seeking a second round of proposals from industry to compete for approximately $3 million in new funding. “We are thrilled with the response from the U.S. manufacturing industry,” said LLNL mathematician Peg Folta, the director of the HPC4Mfg program. “This program lowers the barrier of entry for U.S. manufacturers to adopt HPC. It makes it easier for a company to use supercomputers by not only funding access to the HPC systems, but also to experts in the use of these systems to solve complex problems.” The post HPC4Mfg Seeks New Proposals from Industry appeared first on insideHPC.
|