Feed insidehpc High-Performance Computing News Analysis | insideHPC


Link https://insidehpc.com/
Feed http://insidehpc.com/feed/
Updated 2024-11-02 13:15
Podcast: Enterprises go HPC at GPU Technology Conference
In this podcast, the Radio Free HPC team looks at news from the GPU Technology Conference. "Dan has been attending GTC since well before it became the big and important conference that it is today. We get a quick update on what was covered: the long keynote, automotive and robotics, the Mellanox acquisition, how a growing fraction of enterprise applications will be AI."
AMD Powers Corona Cluster for HPC Analytics at Livermore
Lawrence Livermore National Lab has deployed a 170-node HPC cluster from Penguin Computing. Based on AMD EPYC processors and Radeon Instinct GPUs, the new Corona cluster will be used to support the NNSA Advanced Simulation and Computing (ASC) program in an unclassified site dedicated to partnerships with American industry. "Even as we do more of our computing on GPUs, many of our codes have serial aspects that need really good single core performance. That lines up well with AMD EPYC."
Berkeley Engineers build World’s Fastest Optical Switch Arrays
Engineers at the University of California, Berkeley have built a new photonic switch that can control the direction of light passing through optical fibers faster and more efficiently than ever. This optical "traffic cop" could one day revolutionize how information travels through data centers and high-performance supercomputers that are used for artificial intelligence and other data-intensive applications.
Arm A64fx and Post-K: A Game-Changing CPU & Supercomputer
Satoshi Matsuoka from RIKEN gave this talk at the HPC User Forum in Santa Fe. "Post-K is the flagship next-generation national supercomputer being developed by RIKEN and Fujitsu in collaboration. Post-K will have hyperscale-class resources in one exascale machine, with well more than 100,000 nodes of server-class A64fx many-core Arm CPUs, realized through an extensive co-design process involving the entire Japanese HPC community."
GPUs Address Growing Data Needs for Finance & Insurance Sectors
A new whitepaper from Penguin Computing contends "a new era of supercomputing" has arrived — driven primarily by the emergence of graphics processing units, or GPUs. The tools once specific to gaming are now being used by investment and financial services firms to gain greater insights and generate actionable data. Learn how GPUs are spurring innovation and changing how today's finance companies address their data processing needs.
Video: HPC Networking in the Real World
Jesse Martinez from Los Alamos National Laboratory gave this talk at the OpenFabrics Workshop in Austin. "High-speed networking has become extremely important in the world of HPC. As parallel processing capabilities increase and storage solutions grow in capacity, the network must be designed and implemented in a way that keeps up with these trends. LANL makes very diverse use of high-speed fabrics within its environment, from the compute clusters to the storage solutions. This keynote/introduction session for the Sys Admin theme at the workshop will focus on how LANL has made use of these diverse fabrics to optimize and simplify data movement and communication for scientists solving real-world problems."
Scaling Deep Learning for Scientific Workloads on the #1 Summit Supercomputer
Jack Wells from ORNL gave this talk at the GPU Technology Conference. "HPC centers have been traditionally configured for simulation workloads, but deep learning is increasingly being applied alongside simulation on scientific datasets. These frameworks do not always fit well with job schedulers, large parallel file systems, and MPI backends. We'll share benchmarks comparing natively compiled code versus containers on Power systems like Summit, as well as best practices for deploying deep learning frameworks and models on HPC resources for scientific workflows."
DOE Extending Quantum Networks for Long Distance Entanglement
Scientists from Brookhaven National Laboratory, Stony Brook University, and DOE's Energy Sciences Network (ESnet) are collaborating on an experiment that puts U.S. quantum networking research on the international map. Researchers have built a quantum network testbed that connects several buildings on the Brookhaven Lab campus using unique portable quantum entanglement sources and an existing DOE ESnet communications fiber network — a significant step in building a large-scale quantum network that can transmit information over long distances.
Turbocharge your HPC Hybrid Cloud with Policy-based Automation
While there are many advantages to running in the cloud, the issues can be complex. Users need to figure out how to securely extend on-premise clusters, devise solutions for data handling and constantly keep an eye on costs. Univa's Robert Lalonde, Vice President and General Manager, Cloud, explores how to turbocharge your HPC hybrid cloud with tools like policy-based automation, and how closing the loop between workload scheduling and cloud automation can drive higher performance and dramatic cost efficiencies.
Podcast: Intel to power Anthos Google Cloud Platform
In this Chip Chat podcast, Paul Nash from the Google Cloud Platform discusses the industry trends impacting IaaS and how Google Cloud Platform together with Intel are driving innovation in the cloud. "The two companies will collaborate on Anthos, a new reference design based on the 2nd-Generation Intel Xeon Scalable processor and an optimized Kubernetes software stack that will deliver increased workload portability to customers who want to take advantage of hybrid cloud environments. Intel will publish the production design as an Intel Select Solution, as well as a developer platform."
Agenda Posted for LUG 2019 in Houston
The Lustre User Group has posted its speaker agenda for LUG 2019. The event takes place May 14-17 in Houston. "LUG 2019 is the industry's primary venue for discussion and seminars on the Lustre parallel file system and other open source file system technologies. Don't miss your chance to actively participate in industry dialogue on best practices and emerging technologies, explore upcoming developments of the Lustre file system, and immerse in the strong Lustre community."
Qualcomm to bring power-efficient AI Inference to the Cloud
Today Qualcomm announced that it is bringing the company's artificial intelligence expertise to the cloud with the Qualcomm Cloud AI 100. "Our all-new Qualcomm Cloud AI 100 accelerator will significantly raise the bar for AI inference processing relative to any combination of CPUs, GPUs, and/or FPGAs used in today's data centers," said Keith Kressin, senior vice president, product management, Qualcomm Technologies, Inc.
SingularityPRO comes to Google Cloud
Today Sylabs announced a multi-phase collaboration with Google Cloud as a technology partner. Aimed at systematically addressing enterprise requirements in a cloud-native fashion, the first phase of the collaboration will be based upon availability of Sylabs' SingularityPRO via the Google Cloud Platform Marketplace. "Singularity is a widely adopted container runtime that implements a unique security model to mitigate privilege escalation risks, and provides a platform to capture a complete application environment into a single file."
Video: A History of Los Alamos National Lab
Terry Wallace from Los Alamos National Lab gave this talk at the HPC User Forum. "The Laboratory was established in 1943 as site Y of the Manhattan Project for a single purpose: to design and build an atomic bomb. It took just 27 months. The Los Alamos of today has a heightened focus on worker safety and security awareness, with the ever-present core values of intellectual freedom, scientific excellence, and national service. Outstanding science underpins the Laboratory's past and its future."
Spectra Swarm Brings Ethernet Connectivity to LTO Tape Libraries
Today Spectra Logic announced enhancements to its family of tape libraries that help end users simplify and enhance their workflows. With the introduction of Spectra Swarm Ethernet connectivity, the company makes tape connectivity easier and adds a modern interface to LTO tape libraries by leveraging the same infrastructure and networking capabilities as the rest of the equipment in a modern data center. Spectra Swarm is tested and qualified to work with Spectra LTO tape libraries, from Spectra Stack through Spectra T950.
WekaIO HPC Storage Solutions come to Central Europe
Today WekaIO announced the opening of an office in Germany, along with the appointment of Kim Gardner as Regional Sales Manager. Gardner, a seasoned storage veteran, will continue the rapid market expansion of Matrix in Germany, Austria, and Switzerland (DACH) that has been seen in the US and other markets. "We have many customer deployments in artificial intelligence (AI), high-performance computing (HPC), finance, media & entertainment, and genomics in the US and UK," said Richard Dyke, VP of Sales at WekaIO. "The opportunities inherent in Germany's fully-fledged AI and HPC sector made this the next logical target for us and prompted our expansion into this fertile market."
Big Compute Podcast: Boom Supersonic looks to HPC Cloud
In this Big Compute Podcast, host Gabriel Broner interviews Josh Krall, co-founder and VP of Technology at Boom Supersonic. Boom is using HPC in the cloud to design a passenger supersonic plane and address the technical and business challenges it poses. "We witnessed technical success with supersonic flight with Concorde, but the economics did not work out. More than forty years later, Boom is embarking on building Overture, a supersonic plane where passengers will pay the price of today's business class seats."
ThinkParQ Brings BeeGFS to E8 Storage
Today E8 Storage announced a technology partnership with ThinkParQ to enable the integration of BeeGFS with E8 Storage's NVMe-oF solution. The combined solution will offer customers new levels of performance and scalability for data-intensive workloads. "With the combination of E8 Storage's RAID functionality and ThinkParQ's BeeGFS file system, customers can scale out by adding as many nodes as needed to meet or exceed capacity requirements."
Making HPC Cloud a Reality in the Federal Space
Martin Reiger from Penguin Computing gave this talk at the HPC User Forum. "Built on a secure, high-performance bare-metal server platform with supercomputing-grade, non-blocking InfiniBand interconnect infrastructure, Penguin on Demand can handle the most challenging simulation and analytics. And because access is via the cloud (from either a traditional Linux command line interface (CLI) or a secure web portal), you get both instant access and extreme scalability — without having to invest in on-premise infrastructure or the associated operational costs."
NVIDIA Powers New Lab for AI Radiology
Today NVIDIA and the American College of Radiology announced a collaboration to enable thousands of radiologists nationwide to create and use AI for diagnostic radiology in their own facilities, using their own data, to meet their own clinical needs. "NVIDIA builds platforms that democratize the use of AI and we purpose-built the Clara AI toolkit to give every radiologist the opportunity to develop AI tools that are customized to their patients and their clinical practice," said Kimberly Powell, vice president of Healthcare at NVIDIA. "Our successful pilot with the ACR is the first of many that will make AI more accessible to the entire field of radiology."
Time-Lapse Video: Installation of Spectra Logic Tape Robot at Rutherford Appleton Laboratory
In this video, engineers install a Spectra Logic tape library at STFC's Scientific Data Centre at the Rutherford Appleton Laboratory in the UK. The new Spectra TFinity Tape Library has an initial capacity of 65PB. "This system will provide for the predicted data-growth from existing groups over the next decade, and an active archive for JASMIN users and the IRIS science communities. It brings SCD's total tape storage capacity within the RAL Scientific Data Centre to 240PB."
Excelero NVMesh comes to Lenovo ThinkSystems
Excelero is bringing its NVMesh software-defined block storage solutions to Lenovo customers and channel partners worldwide. "Already proven in Lenovo deployments at SciNet, Canada's largest supercomputing facility, and at a London-based machine learning firm, Excelero's NVMesh provides an optimal choice for web-scale deployments and in Big Data uses in concert with Lenovo's ThinkSystem portfolio."
Video: ATOM Consortium to Accelerate AI in Drug Discovery with NVIDIA
The public-private consortium ATOM announced today that it is collaborating with NVIDIA to scale ATOM's AI-driven drug discovery platform. "Scientists at ATOM have created a predictive model development pipeline that calls upon large datasets to build and test predictive machine learning models which consider pharmacokinetics, safety, developability, and efficacy. NVIDIA will provide additional resources that will enable this pipeline to be run at increased scale and speed."
Video: New AI Hardware and Trends
In this video from the HPC User Forum, Alex Norton from Hyperion Research presents: New AI Hardware and Trends. The presentation highlights some of the trends in emerging technologies associated with the AI ecosystem, covering broad trends of the overall market in terms of the emerging technology […]
Hyperion Research: HPC Server Market Beat Forecast in 2018
Hyperion Research has released their latest High-Performance Technical Server QView, a comprehensive report on the state of the HPC market. The QView presents the HPC market from various perspectives, including competitive segment, vendor, cluster versus non-cluster, geography, and operating system. It also contains detailed revenue and shipment information by HPC models.
Exascale Computing Project Software Activities
Mike Heroux from Sandia National Labs gave this talk at the HPC User Forum. "The Exascale Computing Project is accelerating delivery of a capable exascale computing ecosystem for breakthroughs in scientific discovery, energy assurance, economic competitiveness, and national security. The goal of the ECP Software Technology focus area is to develop a comprehensive and coherent software stack that will enable application developers to productively write highly parallel applications that can portably target diverse exascale architectures."
Video: Making Innovation Real – Dell EMC Update
Ed Turkel from Dell EMC gave this talk at the HPC User Forum in Santa Fe. "As data analytics, HPC and AI converge and the technology evolves, Dell EMC's worldwide HPC and AI innovation centers provide expert leadership, test new machine learning technologies, and share best practices. In working with the HPC community, Dell EMC HPC and AI Centers of Excellence provide a network of resources based on the wide-ranging know-how and experience of technology developers, service providers, and end-users."
Job of the Week: HPC Technology Researcher at Chevron
Chevron is seeking an HPC Technology Researcher in our Job of the Week. This position will be accountable for strategic research, technology development and business engagement to deliver High Performance Computing solutions that differentiate Chevron's performance. The successful candidate is expected to manage projects and small programs and personally apply and grow technical skills in the Advanced Computing space.
Supercomputing Aerodynamics in Paralympic Cycling
A project carried out at the National University of Ireland Galway, Eindhoven University of Technology (TU/e), and KU Leuven has been exploring the role of aerodynamic science in Paralympic cycling. "This work also opens the door for world-class Paralympic athletes to have the same expertise and equipment available to them as other professional athletes. At the world championships and Paralympics, where tenths of seconds can decide medals, this work can unlock that vital time!"
Podcast: Supercomputing Synthetic Biomolecules
Researchers are using HPC to design potentially life-saving proteins. In this TACC podcast, host Jorge Salazar discusses this groundbreaking work with the science team. "The scientists say their methods could be applied to useful technologies such as pharmaceutical targeting, artificial energy harvesting, 'smart' sensing and building materials, and more."
A look inside the White House AI Initiative
In this special guest feature, SC19 General Chair Michela Taufer speaks with Lynne Parker, Assistant Director for Artificial Intelligence at The White House Office of Science and Technology Policy. Parker describes her new role, shares her insights on the state of AI in the US (and beyond), and opines on the future impact of HPC on the evolution of AI.
Thermodynamics of Computation: Far More Than Counting Bit Erasure
David Wolpert from the Santa Fe Institute gave this talk at the HPC User Forum. "The thermodynamic restrictions on all systems that perform computation provide major challenges to modern design of computers. As a result, the time is ripe to pursue a new field of science and engineering: a modern thermodynamics of computation. This would combine the resource/time tradeoffs of concern in conventional CS with the thermodynamic tradeoffs in computation that are now being revealed. In this way we should be able to develop the tools necessary both for analyzing thermodynamic costs in biological systems and for engineering next-generation computers."
Atos Opens AI Laboratory in France with Google Cloud
Today Atos inaugurated a new AI laboratory in France. Set up as part of the global partnership between Atos and Google Cloud, the laboratory will enable clients, businesses and public organizations to identify practical cases, for which AI could provide innovative and effective solutions. "In order for France to continue to play a key role in the information space, it has to invest heavily in artificial intelligence and new technologies," said Thierry Breton, Chairman and CEO of Atos. "Beyond economic development, being able to offer technological excellence while protecting European data is a matter of sovereignty. With this joint laboratory between Atos and Google Cloud, we are enabling the adoption of artificial intelligence by our clients by offering them the best technologies and the highest level of security for their data processing, all within a clearly defined European regulatory framework. As such, Atos combines economic and technological development with sovereignty, compliance and security and helps to design a secure and valued European information space."
Penguin Computing steps up with 2nd Generation Intel Xeon Scalable Processors
Today Penguin Computing announced that the company's Relion family of Linux-based servers is now available with the latest generation of Intel Xeon Scalable processors, including both the processor formerly codenamed Cascade Lake-SP as well as the Walker Pass-based Cascade Lake-AP technology. This enhancement will enable Penguin Computing to improve performance for data center, HPC, and AI customers while also delivering the flexibility, density, and scalability of the Relion server design.
Thomas Schulthess from CSCS Awarded Doron Prize
"With his invaluable scientific and technical contributions, Prof. Dr. Thomas Schulthess has laid important foundations for the success of research groups that use the CSCS infrastructure to carry out computational research. For about five years, CSCS has had the best computing performance in Europe and is currently one of the world's leading centers in this sector: with these arguments, the Foundation Board justified its choice of the award winner."
Video: EuroHPC – The EU Strategy in HPC
In this video from the HPC User Forum in Santa Fe, Leonardo Flores from the European Commission presents: EuroHPC - The EU Strategy in HPC. "EuroHPC is a joint collaboration between European countries and the European Union about developing and supporting exascale supercomputing by 2022/2023. EuroHPC will permit the EU and participating countries to coordinate their efforts and share resources with the objective of deploying in Europe a world-class supercomputing infrastructure and a competitive innovation ecosystem in supercomputing technologies, applications and skills."
Thorny Flat Supercomputer comes to West Virginia University
Today West Virginia University announced one of the state's most powerful computer clusters, deployed to help power research and innovation statewide. "The Thorny Flat High Performance Computer Cluster, named after the state's second highest peak, joins the Spruce Knob cluster as a resource. With 1,000 times more computing power than a desktop computer, the Thorny Flat cluster could benefit a variety of research: forest hydrology; genetic studies; forensic chemistry of firearms; modeling of solar-to-chemical energy harvesting; and design and discovery of new materials."
Video: How Intel Data-Centric Technologies will power the Frontera Supercomputer at TACC
In this video, researchers from the Texas Advanced Computing Center describe how Intel data-centric technologies power the Frontera supercomputer, which is currently under installation. "This system will provide researchers the groundbreaking computing capabilities needed to grapple with some of science's largest challenges. Frontera will provide greater processing and memory capacity than TACC has ever had, accelerating existing research and enabling new projects that would not have been possible with previous systems."
Registration Opens for ISC 2019 in Frankfurt
Early registration is now open for ISC High Performance, the largest high performance computing forum in Europe. The event takes place June 16 - 20 in Frankfurt, Germany. "ISC High Performance is recognized internationally for its strength in bringing together different academic and commercial disciplines to share knowledge in the field of high performance computing. With our first event dating back over 30 years, we've created a community spanning the globe. Over that period we have welcomed attendees from over 80 countries, which has made ISC a highly diverse event."
Video: HPC Market Update from Hyperion Research
In this video from the HPC User Forum in Santa Fe, Earl Joseph from Hyperion Research presents an HPC Market Update. The company helps IT professionals, business executives, and the investment community make fact-based decisions on technology purchases and business strategy.
XTREME-Stargate G2 Service to offer Next-gen cloud Supercomputing for AI
Today cloud HPC company XTREME-D announced the official release of XTREME-Stargate G2, what it calls "next-gen cloud supercomputing for AI," with Intel's latest CPU. "Announced in October, XTREME-Stargate has an increasing number of early adopters among commercial companies, national laboratories, and academia. The device provides High Performance Computing and graphics processing and is cost effective for both simulation and data analysis."
Intel Rolls Out 48-Core Cascade Lake-SP Xeon Processors for HPC, AI, & Data-centric Workloads
Today Intel unveiled a new portfolio of data-centric solutions consisting of "Cascade Lake-SP" Intel Xeon Scalable processors, Intel Optane DC memory and storage solutions, and software and platform technologies optimized to help its customers extract more value from their data. "The portfolio of products announced today underscores our unmatched ability to move, store and process data across the most demanding workloads from the data center to the edge."
Survey: Companies are moving Mission Critical Apps to the Cloud
For the first time, a majority of companies are putting mission critical apps in the cloud, according to the latest report released today by Cloud Foundry Foundation. The study revealed that companies treat digital transformation as a constant cycle of adaptation rather than a one-time fix. As part of that process, cloud technologies such as Platform-as-a-Service (PaaS), containers and serverless continue to grow at scale, while microservices and AI/ML are next to be integrated into their workflows.
New Fujitsu Technology Accelerates Deep Learning on ResNet-50
Today Fujitsu Laboratories announced that it has developed technology to improve the speed of deep learning software, which has now achieved the world's highest speed when the time required for machine learning was measured using the ABCI system at AIST. "With the spread of deep learning in recent years, there has been a demand for algorithms that can execute machine learning processing at high speeds, and the speed of deep learning has accelerated by 30 times in the past two years. ResNet-50(1), a deep neural network for image recognition, is generally used as a benchmark to measure deep learning processing speed."
Supercomputing the Complexities of Brain Waves
Scientists are using the Comet supercomputer at SDSC to better understand the complexities of brain waves. With a goal of better understanding human brain development, the Healthy Brain Network (HBN) project is currently collecting brain scans and EEG recordings, as well as other behavioral data from 10,000 New York City children and young adults – the largest such sample ever collected. "We hope to use portals such as the EEGLAB to process this data so that we can learn more about biological markers of mental health and learning disorders in our youngest patients," said HBN Director Michael Milham.
LRZ in Germany joins the OpenMP effort
The Leibniz Supercomputing Centre (LRZ) in Germany has joined the OpenMP Architecture Review Board (ARB), a group of leading hardware and software vendors and research organizations creating the standard for the most popular shared-memory parallel programming model in use today. "With the rise of core counts and the expected future deployment of accelerated systems, optimizing node-level performance is getting more and more important. As a member of the OpenMP ARB, we want to contribute to the future of OpenMP to meet the challenges of new architectures," says Prof. Dieter Kranzlmüller, Chairman of the Board of Directors of LRZ.
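The shared-memory model the ARB standardizes can be seen in a minimal sketch (not from the article): a compiler built with OpenMP support splits the annotated loop across threads and combines the per-thread partial sums, while a compiler without OpenMP simply ignores the pragma and runs the loop serially, producing the same result.

```c
#include <stddef.h>

/* Minimal OpenMP sketch: sum an array with a parallel reduction.
 * Built with OpenMP (e.g. gcc -fopenmp), iterations are divided
 * among threads; without it, the pragma is ignored and the loop
 * runs serially. The answer is identical either way. */
double parallel_sum(const double *a, size_t n) {
    double sum = 0.0;
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < (long)n; i++)
        sum += a[i];
    return sum;
}
```

This graceful degradation is one reason the directive-based model has remained popular: the same source serves both serial and parallel builds.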
Piz Daint Supercomputer to Power LHC Computing Grid
The fastest supercomputer in Europe will soon join the Worldwide LHC Computing Grid (WLCG). Housed at CSCS in Switzerland, the Piz Daint supercomputer will be used for data analysis from Large Hadron Collider (LHC) experiments. Until now, the ATLAS, CMS and LHCb particle detectors delivered their data to the "Phoenix" system for analysis and comparison with the results of previous simulations.
CPU, GPU, FPGA, or DSP: Heterogeneous Computing Multiplies the Processing Power
Whether your code will run on industry-standard PCs or is embedded in devices for specific uses, chances are there's more than one processor that you can utilize. Graphics processors, DSPs and other hardware accelerators often sit idle while CPUs crank away at code better served elsewhere. This sponsored post from Intel highlights the potential of Intel SDK for OpenCL Applications, which can ramp up processing power.
Podcast: How the EZ Project is Providing Exascale with Lossy Compression for Scientific Data
In this podcast, Franck Cappello from Argonne describes EZ, an effort to compress and reduce the enormous scientific data sets that some of the ECP applications are producing. "There are different approaches to solving the problem. One is called lossless compression, a data-reduction technique that doesn't lose any information or introduce any noise. The drawback with lossless compression, however, is that floating-point values are very difficult to compress: the best effort reduces data by a factor of two. In contrast, ECP applications seek a data reduction factor of 10, 30, or even more."
Video: Defining A New Efficiency Standard for HPC Data Centers
Björn Brynjulfsson from Etix Everywhere gave this talk at CloudFest 2019. "Benefiting from very favorable ambient conditions and drawing on a background of designing, building and operating both high-end facilities and super-economical blockchain facilities, we will show how the Etix team met the challenge in Blönduós, Iceland. The presentation describes how Etix delivered a 45 MW, ultra-green HPC facility with the lowest TCO in class, in under 9 months from start to finish."