by staff on (#3B5KD)
Today Hyperion Research announced that it has completed an employee buyout and is now an independent company. "The team will continue all the worldwide activities that have made it the world’s most respected HPC industry analyst group for more than 25 years," said Steve Conway, senior vice president of research. "That includes sizing and tracking the global markets for HPC and high-performance data analysis (HPDA). We will also continue offering our subscription services, customer studies and papers, and operating the HPC User Forum." The post Hyperion Research Becomes Independent Company appeared first on insideHPC.
Inside HPC & AI News | High-Performance Computing & Artificial Intelligence
Link | https://insidehpc.com/ |
Feed | http://insidehpc.com/feed/ |
Updated | 2025-08-12 21:00 |
by Rich Brueckner on (#3B57T)
Over at Rigetti Computing, Will Zeng writes that the company has published a new white paper on unsupervised machine learning using 19Q, their new 19-qubit general purpose superconducting quantum processor. To accomplish this, they used a quantum/classical hybrid algorithm for clustering developed at Rigetti. The post New Paper looks at Unsupervised Machine Learning on Rigetti 19Q Quantum Computer appeared first on insideHPC.
by Rich Brueckner on (#3B4XB)
In this video, Happy Sithole from CHPC introduces the team from South Africa that will travel to the Student Cluster Competition at ISC 2018 in Frankfurt. "Last week the local students came first in a national competition at the annual Centre for High Performance Computing conference in Pretoria. They will now compete next year with 11 other teams from abroad including China, Singapore, Thailand, Poland, and Germany." The post Video: South Africa to Compete in ISC 2018 Student Cluster Competition appeared first on insideHPC.
by staff on (#3B4SR)
In this special guest feature from Scientific Computing World, Robert Roe writes that the EuroExa project has Europe on the road to Exascale computing. "Ultimately, the goals for exascale computing projects are focused on delivering and supporting an exascale-class supercomputer, but the benefits have the potential to drive future developments far beyond the small number of potential exascale systems. Projects such as EuroExa and the Exascale Computing Project in the US could have far-reaching benefits for smaller-scale HPC systems." The post EuroExa Project puts Europe on the Road to Exascale appeared first on insideHPC.
by Rich Brueckner on (#3B4GW)
Haohuan Fu gave this Invited Talk at SC17 in Denver. "The Sunway TaihuLight supercomputer is the world's first system with a peak performance greater than 100 PFlops and a parallel scale of over 10 million cores. Unlike other heterogeneous supercomputers, the system adopts unique design strategies in both the architecture of its 260-core Shenwei CPU and the way it integrates 40,960 such CPUs into 40 cabinets. This talk first introduces and discusses the design philosophy behind integrating these 10 million cores, at both the processor and the system level." The post Lessons on Integrating and Utilizing 10 Million Cores: Experience of Sunway TaihuLight appeared first on insideHPC.
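The "over 10 million cores" figure follows directly from the numbers in the abstract; a quick sanity check (the 125 PFlops peak used below is the commonly cited figure for TaihuLight, not stated in this talk summary):

```python
# Sunway TaihuLight core count, from the figures in the talk abstract:
# 40,960 Shenwei CPUs, each with 260 cores.
cpus = 40_960
cores_per_cpu = 260

total_cores = cpus * cores_per_cpu
print(f"Total cores: {total_cores:,}")  # 10,649,600 -- "over 10 million cores"

# Rough per-core throughput, assuming the commonly cited ~125 PFlops peak.
peak_flops = 125e15
print(f"~{peak_flops / total_cores / 1e9:.1f} GFlops per core")  # ~11.7
```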
by staff on (#3B22M)
Today Intel announced the availability of the Intel Stratix 10 MX FPGA, the industry’s first field programmable gate array (FPGA) with integrated High Bandwidth Memory DRAM (HBM2). "To efficiently accelerate these workloads, memory bandwidth needs to keep pace with the explosion in data," said Reynette Au, vice president of marketing, Intel Programmable Solutions Group. "We designed the Intel Stratix 10 MX family to provide a new class of FPGA-based multi-function data accelerators for HPC and HPDA markets." The post Intel Unveils Industry’s First FPGA Integrated with HBM Memory appeared first on insideHPC.
by Rich Brueckner on (#3B1N9)
In this video, Nages Sieslack from ISC 2018 and Michele De Lorenzi from PASC18 invite you to join both conferences next summer. "ISC High Performance and the PASC Conference are pleased to announce that ISC 2018 and PASC18, two major events in the European HPC ecosystem, are scheduled back-to-back at the end of June and beginning of July 2018." The post Video: Meet in Europe for ISC 2018 and PASC18 Conferences appeared first on insideHPC.
by staff on (#3B1JD)
Scientists from the Jülich Supercomputing Centre in Germany have set a new world record, simulating a quantum computer with 46 quantum bits, or qubits, for the first time. For their calculations, the scientists used the Jülich supercomputer JUQUEEN as well as the world’s fastest supercomputer, Sunway TaihuLight, at China’s National Supercomputing Center in Wuxi. The post Jülich Simulates World Record 46 Qubit Quantum Computer appeared first on insideHPC.
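For context on why 46 qubits is a record, a dense state-vector simulation stores 2^n complex amplitudes, so memory doubles with every added qubit. A minimal sketch, assuming the conventional 16 bytes per double-precision complex amplitude (record-setting runs typically use reduced precision or compression to fit, which this ignores):

```python
# Memory needed to hold a full quantum state vector of n qubits.
def state_vector_bytes(n_qubits: int, bytes_per_amplitude: int = 16) -> int:
    """2**n amplitudes; 16 bytes = one double-precision complex number."""
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (40, 43, 46):
    print(f"{n} qubits: {state_vector_bytes(n) / 2**40:,.0f} TiB")
# 40 qubits: 16 TiB, 43 qubits: 128 TiB, 46 qubits: 1,024 TiB
```

At this 16-byte assumption, each extra qubit doubles the footprint, so 46 qubits needs a full pebibyte, which is why the run required machines of the JUQUEEN and TaihuLight class.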
by staff on (#3B1JF)
Today the Australian Government announced plans to invest $70 million in a new supercomputer at Australia’s National Computational Infrastructure (NCI). The funding will be used to replace NCI’s aging Raijin supercomputer. "The NCI supercomputer is one of the most important pieces of research infrastructure in Australia." The post NCI in Australia to Deploy $70 Million Supercomputer appeared first on insideHPC.
by Rich Brueckner on (#3B1F6)
In this video from SC17, Peter Lyu from Rescale demonstrates how the company brings HPC workloads to the cloud. "Rescale helps customers shift from complex on-premise workflows to an easy-to-use web-based engineering SaaS workflow. Rescale supports all types of workflows, including multidisciplinary exploration, optimizations, design of experiments, and more." The post Rescale Demonstrates Easy HPC Cloud Management at SC17 appeared first on insideHPC.
by Rich Brueckner on (#3AZ36)
In this podcast, the Radio Free HPC team looks at the FCC's move to abolish Net Neutrality regulations put in place during the Obama administration. Dan thinks this is a good move to remove unnecessary regulations, but the rest of the crew is worried about where this will lead the future of the Internet. The post Radio Free HPC Says Goodbye to Net Neutrality appeared first on insideHPC.
by Rich Brueckner on (#3AZ1C)
In this video from SC17, Mike Vildibill from Hewlett Packard Enterprise describes the company's experiences engaging large HPC customers with prototype systems and highlights the value and benefit of running SUSE on the Apollo 70 platform. "Coming soon, the HPE Apollo 70 is a powerful HPC platform. It offers a disruptive ARM HPC processor technology with maximum memory bandwidth, familiar management and performance tools, and the density and scalability required for large HPC cluster deployments." The post Video: Arm on the Path to Exascale appeared first on insideHPC.
by Rich Brueckner on (#3AWTQ)
Paul Messina from Argonne gave this Invited Talk at SC17. "Balancing evolution with innovation is challenging, especially since the ecosystem must be ready to support critical mission needs of DOE, other Federal agencies, and industry, when the first DOE exascale systems are delivered in 2021. The software ecosystem needs to evolve both to support new functionality demanded by applications and to use new hardware features efficiently. We are utilizing a co-design approach that uses over two dozen applications to guide the development of supporting software and R&D on hardware technologies, as well as feedback from the latter to influence application development." The post The U.S. D.O.E. Exascale Computing Project – Goals and Challenges appeared first on insideHPC.
by Rich Brueckner on (#3AWTS)
Taos is immediately hiring an HPC Systems Engineer for a cutting-edge tech company in Sunnyvale, CA! We're changing the face of some of the most innovative companies with our diverse solution offerings, exceptional talent and thought leadership. Our clients look to us first for advice, insight, and support, driving us to relentlessly focus on customer success. The post Job of the Week: HPC Systems Engineer at Taos appeared first on insideHPC.
by staff on (#3AT9V)
PASC18 is soliciting high-quality original research papers related to scientific computing in any of the eight scientific domains represented at the conference (chemistry and materials, life sciences, physics, climate and weather, solid earth dynamics, engineering, computer science and applied mathematics, and emerging application domains). Papers that address aspects emphasizing the close coupling of data and computation in current and future high performance computing applications – as indicated by the theme of PASC18: Fast and Big Data, Fast and Big Computation – are particularly welcome. The post Video: Introduction to PASC18 Paper Track appeared first on insideHPC.
by Rich Brueckner on (#3AT9X)
Researchers are using the Blue Waters supercomputer at NCSA to process new data from NASA’s Terra Satellite. Approximately the size of a small school bus, the Terra satellite explores the connections between Earth’s atmosphere, land, snow and ice, ocean, and energy balance to understand Earth’s climate and climate change and to map the impact of human […] The post Blue Waters Supercomputer Crunches Data from NASA’s Terra Satellite appeared first on insideHPC.
by staff on (#3AT6W)
The Mont-Blanc 2020 project launched this week in Europe. As a sequel to a set of successful research projects into energy-efficient Arm-based computing, Mont-Blanc 2020 now begins an ambitious reach for Exascale. "The ambition of the consortium is to quickly industrialize our research. This is why we decided to rely on the Arm instruction set architecture (ISA), which is backed by a strong software ecosystem. By leveraging the current efforts, including the Mont-Blanc ecosystem and other international projects, we will benefit from the system software and applications required for successful usage," explained Said Derradji of Atos, coordinator of the Mont-Blanc 2020 project. The post Atos Kicks off MontBlanc 2020 Project for Exascale appeared first on insideHPC.
by staff on (#3AT3M)
Today BP announced that it has more than doubled the total computing power of its Center for High-Performance Computing (CHPC) in Houston, making it home to the world's most powerful supercomputer for commercial research. "Our investment in supercomputing is another example of BP leading the way in digital technologies that deliver improved safety, reliability and efficiency across our operations and give us a clear competitive advantage," said Ahmed Hashmi, BP’s head of upstream technology. The post HPE Powers World’s Most Powerful Commercial Supercomputer at BP appeared first on insideHPC.
by Rich Brueckner on (#3AT12)
Gordon Bell gave this Invited Talk at SC17. "A globally recognized pioneer in the supercomputing world, Bell will be sharing his latest reflections and insights with his fellow scientists, engineers and researchers at SC17 in Denver. Bell will highlight the work of the winners of the ACM Gordon Bell Prize from the past 30 years. Presented by the ACM, the recipients’ achievements have chronicled the important innovations and transitions of high performance computing, including the rise of parallel computing, a computing architecture that breaks down problems into smaller ones that may be solved simultaneously." The post SC17 Invited Talk: Gordon Bell on the Rise of Scalable Systems appeared first on insideHPC.
by staff on (#3AQNB)
Today IBM announced the first clients to tap into its IBM Q early-access commercial quantum computing systems to explore practical applications important to business and science. In all, twelve initial organizations joined to foster a growing quantum computing ecosystem based on IBM’s open source quantum software and developer tools. The post Fortune 500 Firms Join IBM Q Network for Quantum Computing Research appeared first on insideHPC.
by Rich Brueckner on (#3AQ8R)
In this video from SC17, Gabriel Broner from Rescale describes how the company brings HPC workloads to the cloud. "Rescale offers HPC in the cloud for engineers and scientists, delivering computational performance on-demand. Using the latest hardware architecture at cloud providers and supercomputing centers, Rescale enables users to extend their on-premise system with optimized HPC in the cloud." The post Rescale Brings HPC Workloads to the Cloud at SC17 appeared first on insideHPC.
by Rich Brueckner on (#3APZK)
In this slidecast, Dr. Rosemary Francis describes the new Ellexus Container Checker, a pioneering cloud-based tool that provides visibility into the inner workings of Docker containers. "Container Checker will help people using cloud platforms to quickly detect problems within their containers before they are let loose on the cloud to potentially waste time and compute spend. Estimates suggest that up to 45% of cloud spend is wasted due in part to unknown application activity and unsuitable storage decisions, which is what we want to help businesses tackle." The post Slidecast: HPC and the Cloud – Announcing the Ellexus Container Checker appeared first on insideHPC.
by staff on (#3APZM)
"Upon its completion in late 2018, the new Lenovo supercomputer (called SuperMUC-NG) will support LRZ in its groundbreaking research across a variety of complex scientific disciplines, such as astrophysics, fluid dynamics and life sciences, by offering highly available, secure and energy-efficient high-performance computing services that leverage industry-leading technology optimized to address a broad range of scientific computing applications. The LRZ installation will also feature the 20-millionth server shipped by Lenovo, a significant milestone in the company’s data center history." The post Lenovo to Build 26.7 Petaflop SuperMUC-NG Cluster for LRZ in Germany appeared first on insideHPC.
by Richard Friedman on (#3APW6)
Some deep learning applications tend to have very complex graphs with thousands of nodes and edges. To make it easier to visualize, analyze, design, and tune such complex parallel applications employing Intel TBB flow graphs, Intel provides Intel Advisor Flow Graph Analyzer (Intel FGA). It gives developers a comprehensive set of tools to examine, debug, and analyze Intel TBB flow graphs. The post Intel Advisor’s TBB Flow Graph Analyzer: Making Complex Layers of Parallelism More Manageable appeared first on insideHPC.
by Rich Brueckner on (#3APS7)
In this video from SC17 in Denver, Dr. Eng Lim Goh describes the spaceborne supercomputer that HPE built for NASA. "The research objectives of the Spaceborne Computer include a year-long experiment of operating high performance commercial off-the-shelf (COTS) computer systems on the ISS with its changing radiation climate. During high radiation events, the electrical power consumption and, therefore, the operating speeds of the computer systems are lowered in an attempt to determine if such systems can still operate correctly." The post Dr. Eng Lim Goh on HPE’s Spaceborne Supercomputer appeared first on insideHPC.
by Rich Brueckner on (#3AM53)
In this video, researchers from CINECA in Italy describe how their new D.A.V.I.D.E. supercomputer powers science and engineering. "Developed by E4 Engineering, D.A.V.I.D.E. (Development for an Added Value Infrastructure Designed in Europe) is an Energy Aware Petaflops Class High Performance Cluster based on Power Architecture and coupled with NVIDIA Tesla Pascal GPUs with NVLink." The post A Closer Look at the D.A.V.I.D.E. Supercomputer at CINECA in Italy appeared first on insideHPC.
by Rich Brueckner on (#3AKVT)
In this video from the DDN User Group at SC17, Dr. Peter Clapham from Wellcome Trust Sanger Institute presents: Experiences in providing secure multi-tenant Lustre access to OpenStack. "If you need 10,000 cores to perform an extra layer of analysis in an hour, you have to scale a significant cluster to get answers quickly. You need a real solution that can address everything from very small to extremely large data sets." The post Experiences in providing secure multi-tenant Lustre access to OpenStack appeared first on insideHPC.
by staff on (#3AKVW)
"There is a gap in the market between NAS systems designed for enterprise data management and HPC solutions designed for data-intensive workloads," said Molly Presley, vice president, Global Marketing, Quantum. "Xcellis Scale-out NAS fills this gap with the features needed by enterprises and the performance required by HPC in a single solution. Xcellis uniquely delivers capacity with the economics of tape and cloud and integrated AI for advanced data insights and can even support traditional block storage demands within the same platform." The post Quantum Launches Scale-out NAS for High-Value and Data-Intensive Workloads appeared first on insideHPC.
by staff on (#3AKNX)
"Baidu’s mission is to make a complex world simpler through technology, and we are constantly looking to discover and apply the latest cutting-edge technologies, innovations, and solutions to business. AMD EPYC processors provide Baidu with a new level of energy efficient and powerful computing capability." The post Baidu Deploys AMD EPYC Single Socket Platforms for ‘ABC’ Datacenters appeared first on insideHPC.
by Sarah Rubenoff on (#3AKKD)
In a special report, Intel offers details on use cases that explore AI systems and those designed to learn in a limited information environment. "AI systems that can compete against and beat humans in limited information games have great potential, because so many activities between humans happen in the context of limited information such as financial trading and negotiations and even the much simpler task of buying a home." The post AI Systems Designed to Learn in a Limited Information Environment appeared first on insideHPC.
by Rich Brueckner on (#3AH37)
Jake Carroll from The Queensland Brain Institute gave this talk at the DDN User Group. "The Metropolitan Data Caching Infrastructure (MeDiCI) project is a data storage fabric developed and trialled at UQ that delivers data to researchers where needed at any time. The "magic" of MeDiCI is it offers the illusion of a single virtual data centre next door even when it is actually distributed over potentially very wide areas with varying network connectivity." The post MeDiCI – How to Withstand a Research Data Tsunami appeared first on insideHPC.
by staff on (#3AGZ9)
Researchers at the DOE are looking to dramatically increase their data transfer capabilities with the Petascale DTN project. "The collaboration, named the Petascale DTN project, also includes the National Center for Supercomputing Applications (NCSA) at the University of Illinois in Urbana-Champaign, a leading center funded by the National Science Foundation (NSF). Together, the collaboration aims to achieve regular disk-to-disk, end-to-end transfer rates of one petabyte per week between major facilities, which translates to achievable throughput rates of about 15 Gbps on real world science data sets." The post Speeding Data Transfer with ESnet’s Petascale DTN Project appeared first on insideHPC.
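The quoted throughput figure can be checked with quick arithmetic; one petabyte per week corresponds to roughly 13 Gbps sustained:

```python
# Back-of-envelope check for the Petascale DTN target:
# what sustained rate moves one petabyte in one week?
PB = 1e15                 # bytes (decimal petabyte)
week_s = 7 * 24 * 3600    # seconds in a week

gbps = PB * 8 / week_s / 1e9
print(f"1 PB/week = {gbps:.1f} Gbps sustained")  # ~13.2 Gbps
```

The quoted "about 15 Gbps" working rate is consistent with this: it leaves headroom above the 13.2 Gbps average for retransmits, stalls, and filesystem overhead on real science data sets.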
by staff on (#3AGV9)
In this special guest feature, Siddhartha Jana provides an update on cross-community efforts to improve energy efficiency in the software stack. The article covers events at SC17 that focused on energy efficiency and highlights ongoing collaborations across the community to develop advanced software technologies for system energy and power management. The post From SC17: Energy Efficiency in the Software Stack — Cross Community Efforts appeared first on insideHPC.
by Rich Brueckner on (#3AGHD)
In this video from the Intel HPC Developer Conference, Simon Warfield from Boston Children's Hospital describes how radiology is being transformed with 3D and 4D MRI technology powered by AI and HPC. "A complete Diffusion Compartment Imaging study can now be completed in 16 minutes on a workstation, which means Diffusion Compartment Imaging can now be used in emergency situations, in a clinical setting, and to evaluate the efficacy of treatment. Even better, higher resolution images can be produced because the optimized code scales." The post Moving Radiology Forward with HPC at Boston Children’s Hospital appeared first on insideHPC.
by staff on (#3ADSF)
Today Avere Systems introduced the top-of-the-line FXT Edge filer, the FXT 5850. Designed for high data-growth industries, the new FXT enables customers to speed time to market, produce higher quality output and modernize the IT infrastructure with both cloud and advanced networking technologies. "Our customers in the fields of scientific research, financial services, media and entertainment and others are nearing the limits of the modern data center with ever-increasing workload demands," said Jeff Tabor, Senior Director of Product Management and Marketing at Avere Systems. "Built on Avere’s enterprise-proven file system technology, FXT 5850 delivers unparalleled performance and capacity to support the most compute-intensive environments and help our customers accelerate their businesses." The post New Avere FXT Edge Filer Doubles Performance, Capacity, and Bandwidth for Challenging Workloads appeared first on insideHPC.
by staff on (#3AE2E)
The MareNostrum 4 supercomputer at the Barcelona Supercomputing Centre has been named winner of the Most Beautiful Data Center in the World prize, hosted by Datacenter Dynamics. "Aside from being the most beautiful, MareNostrum has been dubbed the most interesting supercomputer in the world due to the heterogeneity of the architecture it will include once installation of the supercomputer is complete. Its total speed will be 13.7 Petaflops. Its main memory is 390 Terabytes and it has the capacity to store 14 Petabytes (14 million Gigabytes) of data. A high-speed network connects all the components in the supercomputer to one another." The post MareNostrum 4 Named Most Beautiful Datacenter in the World appeared first on insideHPC.
by staff on (#3ADJZ)
Today D-Wave Systems announced its involvement in a grant-funded UK project to improve logistics and planning operations using quantum computing algorithms. "Advancing AI planning techniques could significantly improve operational efficiency across major industries, from law enforcement to transportation and beyond," said Robert "Bo" Ewald, president of D-Wave International. "Advancing real-world applications for quantum computing takes dedicated collaboration from scientists and experts in a wide variety of fields. This project is an example of that work and will hopefully lead to faster, better solutions for critical problems." The post Innovate UK Award to Confirm Business Case for Quantum-enhanced Optimization Algorithms appeared first on insideHPC.
by Rich Brueckner on (#3ADCY)
In this video from SC17 in Denver, Rick Stevens from Argonne leads a discussion about the Comanche Advanced Technology Collaboration. "By initiating the Comanche collaboration, HPE brought together industry partners and leadership sites like Argonne National Laboratory to work in a joint development effort," said HPE’s Chief Strategist for HPC and Technical Lead for the Advanced Development Team Nic Dubé. "This program represents one of the largest customer-driven prototyping efforts focused on the enablement of the HPC software stack for ARM. We look forward to further collaboration on the path to an open hardware and software ecosystem." The post Video: Comanche Collaboration Moves ARM HPC forward at National Labs appeared first on insideHPC.
by Rich Brueckner on (#3AD6V)
In this podcast, the Radio Free HPC team looks at the new POWER9, Titan V, and Snapdragon 845 devices for AI and HPC. "Built specifically for compute-intensive AI workloads, the new POWER9 systems are capable of improving the training times of deep learning frameworks by nearly 4x allowing enterprises to build more accurate AI applications, faster." The post Radio Free HPC Looks at the New Power9, Titan V, and Snapdragon 845 appeared first on insideHPC.
by staff on (#3AD3R)
FPGAs can improve performance per watt, bandwidth and latency. In this guest post, Intel explores how Field Programmable Gate Arrays (FPGAs) can be used to accelerate high performance computing. "Tightly coupled programmable multi-function accelerator platforms, such as FPGAs from Intel, offer a single hardware platform that enables servers to address many different workloads needs—from HPC needs for the highest capacity and performance through data center requirements for load balancing capabilities to address different workload profiles." The post Accelerating HPC with Intel FPGAs appeared first on insideHPC.
by staff on (#3AAVG)
Over at LBNL, Kathy Kincade writes that cosmologists are using supercomputers to study how heavy metals expelled from exploding supernovae helped the first stars in the universe regulate subsequent star formation. "In the early universe, the stars were massive and the radiation they emitted was very strong," Chen explained. "So if you have this radiation before that star explodes and becomes a supernova, the radiation has already caused significant damage to the gas surrounding the star’s halo." The post Supercomputing How First Supernovae Altered Early Star Formation appeared first on insideHPC.
by Rich Brueckner on (#3AASG)
In this video from SC17, Gabor Samu describes how IBM Spectrum LSF helps users orchestrate HPC workloads. "This week we celebrate the release of our second agile update to IBM Spectrum LSF 10. And it’s our silver anniversary… 25 years of IBM Spectrum LSF! The IBM Spectrum LSF Suites portfolio redefines cluster virtualization and workload management by providing a tightly integrated solution for demanding, mission-critical HPC environments that can increase both user productivity and hardware utilization while decreasing system management costs." The post IBM Spectrum LSF Powers HPC at SC17 appeared first on insideHPC.
by staff on (#3A8JY)
Today Dell EMC announced a joint solution with Alces Flight and AWS to provide HPC for the University of Liverpool. Dell EMC will provide a fully managed on-premises HPC cluster, while a cloud-based HPC account for students and researchers will enable cloud bursting computational capacity. "We are pleased to be working with Dell EMC and Alces Flight on this new venture," said Cliff Addison, Head of Advanced Research Computing at the University of Liverpool. "The University of Liverpool has always maintained cutting-edge technology and by architecting flexible access to computational resources on AWS we're setting the bar even higher for what can be achieved in HPC." The post Dell EMC Powers HPC at University of Liverpool with Alces Flight appeared first on insideHPC.
by Rich Brueckner on (#3A8K0)
In this video from the Intel HPC Developer Conference, Andres Rodriguez describes his presentation on Enabling the Future of Artificial Intelligence. "Intel has the industry’s most comprehensive suite of hardware and software technologies that deliver broad capabilities and support diverse approaches for AI—including today’s AI applications and more complex AI tasks in the future." The post Video: Enabling the Future of Artificial Intelligence appeared first on insideHPC.
by Rich Brueckner on (#3A60C)
Today NVIDIA introduced their new high end TITAN V GPU for desktop PCs. Powered by the Volta architecture, TITAN V excels at computational processing for scientific simulation. Its 21.1 billion transistors deliver 110 teraflops of raw horsepower, 9x that of its predecessor, and extreme energy efficiency. "With TITAN V, we are putting Volta into the hands of researchers and scientists all over the world. I can’t wait to see their breakthrough discoveries." The post NVIDIA TITAN V GPU Brings Volta to the Desktop for AI Development appeared first on insideHPC.
by Rich Brueckner on (#3A5SN)
James Coomer gave this talk at the DDN User Group at SC17. "Our technological and market leadership comes from our long-term investments in leading-edge research and development, our relentless focus on solving our customers’ end-to-end data and information management challenges, and the excellence of our employees around the globe, all relentlessly focused on delivering the highest levels of satisfaction to our customers. To meet these ever-increasing requirements, users are rapidly adopting DDN’s best-of-breed high-performance storage solutions for end-to-end data management from data creation and persistent storage to active archives and the Cloud." The post Video: DDN Applied Technologies, Performance and Use Cases appeared first on insideHPC.
by staff on (#3A5SQ)
Data Vortex Technologies has formalized a partnership with Providentia Worldwide, LLC. Providentia is a technologies and solutions consulting venture which bridges the gap between traditional HPC and enterprise computing. The company works with Data Vortex and potential partners to develop novel solutions for Data Vortex technologies and to assist with systems integration into new markets. This partnership will leverage the deep experience in enterprise and hyperscale environments of Providentia Worldwide founders, Ryan Quick and Arno Kolster, and merge the unique performance characteristics of the Data Vortex with traditional systems. The post Data Vortex Technologies Teams with Providentia Worldwide for HPC appeared first on insideHPC.
by staff on (#3A5PM)
In this video from SC17, Thomas Krueger describes how Intel supports Open Source High Performance Computing software like OpenHPC and Lustre. "As the Linux initiative demonstrates, a community-based, vendor-catalyzed model like this has major advantages for enabling software to keep pace with requirements for HPC computing and storage hardware systems. In this model, stack development is driven primarily by the open source community and vendors offer supported distributions with additional capabilities for customers that require and are willing to pay for them." The post Intel Supports open source software for HPC appeared first on insideHPC.
by staff on (#3A31V)
Today System Fabric Works announced its support and integration of the BeeGFS file system with the latest NetApp E-Series all-flash and HDD storage systems. This makes BeeGFS available on the family of NetApp E-Series Hyperscale Storage products as part of System Fabric Works’ (SFW) Converged Infrastructure solutions for high-performance enterprise computing, data analytics and machine learning. "We are pleased to announce our Gold Partner relationship with ThinkParQ," said Kevin Moran, President and CEO, System Fabric Works. "Together, SFW and ThinkParQ can deliver, worldwide, a highly converged, scalable computing solution based on BeeGFS, engineered with NetApp E-Series, a choice of InfiniBand, Omni-Path, RDMA over Ethernet and NVMe over Fabrics for targeted performance and 99.9999% reliability, utilizing customer-chosen clustered servers and clients and SFW’s services for architecture, integration, acceptance and ongoing support." The post System Fabric Works adds support for BeeGFS Parallel File System appeared first on insideHPC.
by staff on (#3A2Q5)
Today DDN announced the results of its annual HPC Trends survey, which reflects the continued adoption of flash-based storage as essential to respondents’ overall data center strategy. While flash is deemed essential, respondents anticipate needing additional technology innovations to unlock the full performance of their HPC applications. Managing complex I/O workload performance remains far and away the largest challenge to survey respondents, with 60 percent of end users citing it as their number one challenge. The post DDN’s HPC Trends Survey: Complex I/O Workloads are the #1 Challenge appeared first on insideHPC.