by Rich Brueckner on (#3APS7)
In this video from SC17 in Denver, Dr. Eng Lim Goh describes the spaceborne supercomputer that HPE built for NASA. "The research objectives of the Spaceborne Computer include a year-long experiment of operating high performance commercial off-the-shelf (COTS) computer systems on the ISS with its changing radiation climate. During high radiation events, the electrical power consumption and, therefore, the operating speeds of the computer systems are lowered in an attempt to determine if such systems can still operate correctly." The post Dr. Eng Lim Goh on HPE’s Spaceborne Supercomputer appeared first on insideHPC.
|
High-Performance Computing News Analysis | insideHPC
Link | https://insidehpc.com/ |
Feed | http://insidehpc.com/feed/ |
Updated | 2024-11-05 07:15 |
by Rich Brueckner on (#3AM53)
In this video, researchers from CINECA in Italy describe how their new D.A.V.I.D.E. supercomputer powers science and engineering. "Developed by E4 Engineering, D.A.V.I.D.E. (Development for an Added Value Infrastructure Designed in Europe) is an Energy Aware Petaflops Class High Performance Cluster based on Power Architecture and coupled with NVIDIA Tesla Pascal GPUs with NVLink." The post A Closer Look at the D.A.V.I.D.E. Supercomputer at CINECA in Italy appeared first on insideHPC.
|
by Rich Brueckner on (#3AKVT)
In this video from the DDN User Group at SC17, Dr. Peter Clapham from Wellcome Trust Sanger Institute presents: Experiences in providing secure multi-tenant Lustre access to OpenStack. "If you need 10,000 cores to perform an extra layer of analysis in an hour, you have to scale a significant cluster to get answers quickly. You need a real solution that can address everything from very small to extremely large data sets." The post Experiences in providing secure multi-tenant Lustre access to OpenStack appeared first on insideHPC.
|
by staff on (#3AKVW)
"There is a gap in the market between NAS systems designed for enterprise data management and HPC solutions designed for data-intensive workloads," said Molly Presley, vice president, Global Marketing, Quantum. "Xcellis Scale-out NAS fills this gap with the features needed by enterprises and the performance required by HPC in a single solution. Xcellis uniquely delivers capacity with the economics of tape and cloud and integrated AI for advanced data insights and can even support traditional block storage demands within the same platform." The post Quantum Launches Scale-out NAS for High-Value and Data-Intensive Workloads appeared first on insideHPC.
|
by staff on (#3AKNX)
"Baidu’s mission is to make a complex world simpler through technology, and we are constantly looking to discover and apply the latest cutting-edge technologies, innovations, and solutions to business. AMD EPYC processors provide Baidu with a new level of energy-efficient and powerful computing capability." The post Baidu Deploys AMD EPYC Single Socket Platforms for ‘ABC’ Datacenters appeared first on insideHPC.
|
by Sarah Rubenoff on (#3AKKD)
In a special report, Intel offers details on use cases that explore AI systems and those designed to learn in a limited information environment. "AI systems that can compete against and beat humans in limited information games have great potential, because so many activities between humans happen in the context of limited information such as financial trading and negotiations and even the much simpler task of buying a home." The post AI Systems Designed to Learn in a Limited Information Environment appeared first on insideHPC.
|
by Rich Brueckner on (#3AH37)
Jake Carroll from The Queensland Brain Institute gave this talk at the DDN User Group. "The Metropolitan Data Caching Infrastructure (MeDiCI) project is a data storage fabric developed and trialled at UQ that delivers data to researchers where needed at any time. The 'magic' of MeDiCI is that it offers the illusion of a single virtual data centre next door even when it is actually distributed over potentially very wide areas with varying network connectivity." The post MeDiCI – How to Withstand a Research Data Tsunami appeared first on insideHPC.
|
by staff on (#3AGZ9)
Researchers at the DOE are looking to dramatically increase their data transfer capabilities with the Petascale DTN project. "The collaboration, named the Petascale DTN project, also includes the National Center for Supercomputing Applications (NCSA) at the University of Illinois in Urbana-Champaign, a leading center funded by the National Science Foundation (NSF). Together, the collaboration aims to achieve regular disk-to-disk, end-to-end transfer rates of one petabyte per week between major facilities, which translates to achievable throughput rates of about 15 Gbps on real world science data sets." The post Speeding Data Transfer with ESnet’s Petascale DTN Project appeared first on insideHPC.
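The volume-to-rate conversion above is easy to sanity-check: the quoted "about 15 Gbps" works out if a petabyte is read as a binary pebibyte (2^50 bytes), while the decimal definition (10^15 bytes) gives roughly 13 Gbps. A minimal check (the function name is illustrative, not from the project):

```python
# Sanity-check "one petabyte per week translates to about 15 Gbps".
SECONDS_PER_WEEK = 7 * 24 * 3600  # 604,800 seconds

def weekly_bytes_to_gbps(nbytes: float) -> float:
    """Average sustained rate, in gigabits per second, for a weekly transfer volume."""
    return nbytes * 8 / SECONDS_PER_WEEK / 1e9

print(f"1 PB/week  (10^15 bytes) = {weekly_bytes_to_gbps(1e15):.1f} Gbps")   # 13.2
print(f"1 PiB/week (2^50 bytes)  = {weekly_bytes_to_gbps(2**50):.1f} Gbps")  # 14.9
```

Either way, the figure is an average sustained rate; the instantaneous link speed between the facilities (typically 100 Gbps on ESnet) is much higher.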
|
by staff on (#3AGV9)
In this special guest feature, Siddhartha Jana provides an update on cross-community efforts to improve energy efficiency in the software stack. The article covers events at SC17 that focused on energy efficiency and highlights ongoing collaborations across the community to develop advanced software technologies for system energy and power management. The post From SC17: Energy Efficiency in the Software Stack — Cross Community Efforts appeared first on insideHPC.
|
by Rich Brueckner on (#3AGHD)
In this video from the Intel HPC Developer Conference, Simon Warfield from Boston Children's Hospital describes how radiology is being transformed with 3D and 4D MRI technology powered by AI and HPC. "A complete Diffusion Compartment Imaging study can now be completed in 16 minutes on a workstation, which means Diffusion Compartment Imaging can now be used in emergency situations, in a clinical setting, and to evaluate the efficacy of treatment. Even better, higher resolution images can be produced because the optimized code scales." The post Moving Radiology Forward with HPC at Boston Children’s Hospital appeared first on insideHPC.
|
by staff on (#3ADSF)
Today Avere Systems introduced the top-of-the-line FXT Edge filer, the FXT 5850. Designed for high data-growth industries, the new FXT enables customers to speed time to market, produce higher quality output, and modernize IT infrastructure with both cloud and advanced networking technologies. "Our customers in the fields of scientific research, financial services, media and entertainment, and others are nearing the limits of the modern data center with ever-increasing workload demands," said Jeff Tabor, Senior Director of Product Management and Marketing at Avere Systems. "Built on Avere’s enterprise-proven file system technology, FXT 5850 delivers unparalleled performance and capacity to support the most compute-intensive environments and help our customers accelerate their businesses." The post New Avere FXT Edge Filer Doubles Performance, Capacity, and Bandwidth for Challenging Workloads appeared first on insideHPC.
|
by staff on (#3AE2E)
The MareNostrum 4 supercomputer at the Barcelona Supercomputing Centre has been named the winner of the "Most Beautiful Data Center in the World" prize, hosted by Datacenter Dynamics. "Aside from being the most beautiful, MareNostrum has been dubbed the most interesting supercomputer in the world due to the heterogeneity of the architecture it will include once installation of the supercomputer is complete. Its total speed will be 13.7 Petaflops. Its main memory is 390 Terabytes and it has the capacity to store 14 Petabytes (14 million Gigabytes) of data. A high-speed network connects all the components in the supercomputer to one another." The post MareNostrum 4 Named Most Beautiful Datacenter in the World appeared first on insideHPC.
|
by staff on (#3ADJZ)
Today D-Wave Systems announced its involvement in a grant-funded UK project to improve logistics and planning operations using quantum computing algorithms. "Advancing AI planning techniques could significantly improve operational efficiency across major industries, from law enforcement to transportation and beyond," said Robert "Bo" Ewald, president of D-Wave International. "Advancing real-world applications for quantum computing takes dedicated collaboration from scientists and experts in a wide variety of fields. This project is an example of that work and will hopefully lead to faster, better solutions for critical problems." The post Innovate UK Award to Confirm Business Case for Quantum-enhanced Optimization Algorithms appeared first on insideHPC.
|
by Rich Brueckner on (#3ADCY)
In this video from SC17 in Denver, Rick Stevens from Argonne leads a discussion about the Comanche Advanced Technology Collaboration. "By initiating the Comanche collaboration, HPE brought together industry partners and leadership sites like Argonne National Laboratory to work in a joint development effort," said HPE’s Chief Strategist for HPC and Technical Lead for the Advanced Development Team Nic Dubé. "This program represents one of the largest customer-driven prototyping efforts focused on the enablement of the HPC software stack for ARM. We look forward to further collaboration on the path to an open hardware and software ecosystem." The post Video: Comanche Collaboration Moves ARM HPC forward at National Labs appeared first on insideHPC.
|
by Rich Brueckner on (#3AD6V)
In this podcast, Radio Free HPC looks at the new Power9, Titan V, and Snapdragon 845 devices for AI and HPC. "Built specifically for compute-intensive AI workloads, the new POWER9 systems are capable of improving the training times of deep learning frameworks by nearly 4x, allowing enterprises to build more accurate AI applications, faster." The post Radio Free HPC Looks at the New Power9, Titan V, and Snapdragon 845 appeared first on insideHPC.
|
by staff on (#3AD3R)
FPGAs can improve performance per watt, bandwidth and latency. In this guest post, Intel explores how Field Programmable Gate Arrays (FPGAs) can be used to accelerate high performance computing. "Tightly coupled programmable multi-function accelerator platforms, such as FPGAs from Intel, offer a single hardware platform that enables servers to address many different workload needs—from HPC needs for the highest capacity and performance through data center requirements for load balancing capabilities to address different workload profiles." The post Accelerating HPC with Intel FPGAs appeared first on insideHPC.
|
by staff on (#3AAVG)
Over at LBNL, Kathy Kincade writes that cosmologists are using supercomputers to study how heavy metals expelled from exploding supernovae helped the first stars in the universe regulate subsequent star formation. "In the early universe, the stars were massive and the radiation they emitted was very strong," Chen explained. "So if you have this radiation before that star explodes and becomes a supernova, the radiation has already caused significant damage to the gas surrounding the star’s halo." The post Supercomputing How First Supernovae Altered Early Star Formation appeared first on insideHPC.
|
by Rich Brueckner on (#3AASG)
In this video from SC17, Gabor Samu describes how IBM Spectrum LSF helps users orchestrate HPC workloads. "This week we celebrate the release of our second agile update to IBM Spectrum LSF 10. And it’s our silver anniversary… 25 years of IBM Spectrum LSF! The IBM Spectrum LSF Suites portfolio redefines cluster virtualization and workload management by providing a tightly integrated solution for demanding, mission-critical HPC environments that can increase both user productivity and hardware utilization while decreasing system management costs." The post IBM Spectrum LSF Powers HPC at SC17 appeared first on insideHPC.
|
by staff on (#3A8JY)
Today Dell EMC announced a joint solution with Alces Flight and AWS to provide HPC for the University of Liverpool. Dell EMC will provide a fully managed on-premises HPC cluster while a cloud-based HPC account for students and researchers will enable cloud bursting computational capacity. "We are pleased to be working with Dell EMC and Alces Flight on this new venture," said Cliff Addison, Head of Advanced Research Computing at the University of Liverpool. "The University of Liverpool has always maintained cutting-edge technology and by architecting flexible access to computational resources on AWS we're setting the bar even higher for what can be achieved in HPC." The post Dell EMC Powers HPC at University of Liverpool with Alces Flight appeared first on insideHPC.
|
by Rich Brueckner on (#3A8K0)
In this video from the Intel HPC Developer Conference, Andres Rodriguez describes his presentation on Enabling the Future of Artificial Intelligence. "Intel has the industry’s most comprehensive suite of hardware and software technologies that deliver broad capabilities and support diverse approaches for AI—including today’s AI applications and more complex AI tasks in the future." The post Video: Enabling the Future of Artificial Intelligence appeared first on insideHPC.
|
by Rich Brueckner on (#3A60C)
Today NVIDIA introduced their new high-end TITAN V GPU for desktop PCs. Powered by the Volta architecture, TITAN V excels at computational processing for scientific simulation. Its 21.1 billion transistors deliver 110 teraflops of raw horsepower, 9x that of its predecessor, and extreme energy efficiency. "With TITAN V, we are putting Volta into the hands of researchers and scientists all over the world. I can’t wait to see their breakthrough discoveries." The post NVIDIA TITAN V GPU Brings Volta to the Desktop for AI Development appeared first on insideHPC.
|
by Rich Brueckner on (#3A5SN)
James Coomer gave this talk at the DDN User Group at SC17. "Our technological and market leadership comes from our long-term investments in leading-edge research and development, our relentless focus on solving our customers’ end-to-end data and information management challenges, and the excellence of our employees around the globe, all relentlessly focused on delivering the highest levels of satisfaction to our customers. To meet these ever-increasing requirements, users are rapidly adopting DDN’s best-of-breed high-performance storage solutions for end-to-end data management from data creation and persistent storage to active archives and the Cloud." The post Video: DDN Applied Technologies, Performance and Use Cases appeared first on insideHPC.
|
by staff on (#3A5SQ)
Data Vortex Technologies has formalized a partnership with Providentia Worldwide, LLC. Providentia is a technologies and solutions consulting venture which bridges the gap between traditional HPC and enterprise computing. The company works with Data Vortex and potential partners to develop novel solutions for Data Vortex technologies and to assist with systems integration into new markets. This partnership will leverage the deep experience in enterprise and hyperscale environments of Providentia Worldwide founders, Ryan Quick and Arno Kolster, and merge the unique performance characteristics of the Data Vortex with traditional systems. The post Data Vortex Technologies Teams with Providentia Worldwide for HPC appeared first on insideHPC.
|
by staff on (#3A5PM)
In this video from SC17, Thomas Krueger describes how Intel supports Open Source High Performance Computing software like OpenHPC and Lustre. "As the Linux initiative demonstrates, a community-based, vendor-catalyzed model like this has major advantages for enabling software to keep pace with requirements for HPC computing and storage hardware systems. In this model, stack development is driven primarily by the open source community and vendors offer supported distributions with additional capabilities for customers that require and are willing to pay for them." The post Intel Supports open source software for HPC appeared first on insideHPC.
|
by staff on (#3A31V)
Today System Fabric Works announced its support and integration of the BeeGFS file system with the latest NetApp E-Series All Flash and HDD storage systems, making BeeGFS available on the family of NetApp E-Series Hyperscale Storage products as part of System Fabric Works’ (SFW) Converged Infrastructure solutions for high-performance Enterprise Computing, Data Analytics and Machine Learning. "We are pleased to announce our Gold Partner relationship with ThinkParQ," said Kevin Moran, President and CEO, System Fabric Works. "Together, SFW and ThinkParQ can deliver, worldwide, a highly converged, scalable computing solution based on BeeGFS, engineered with NetApp E-Series, a choice of InfiniBand, Omni-Path, RDMA over Ethernet and NVMe over Fabrics for targeted performance and 99.9999% reliability, utilizing customer-chosen clustered servers and clients and SFW’s services for architecture, integration, acceptance and on-going support services." The post System Fabric Works adds support for BeeGFS Parallel File System appeared first on insideHPC.
|
by staff on (#3A2Q5)
Today DDN announced the results of its annual HPC Trends survey, which reflects the continued adoption of flash-based storage as essential to respondents’ overall data center strategy. While flash is deemed essential, respondents anticipate needing additional technology innovations to unlock the full performance of their HPC applications. Managing complex I/O workload performance remains far and away the largest challenge to survey respondents, with 60 percent of end-users citing this as their number one challenge. The post DDN’s HPC Trends Survey: Complex I/O Workloads are the #1 Challenge appeared first on insideHPC.
|
by Rich Brueckner on (#3A2KQ)
Xtreme Design is seeking a Senior HPC Cloud Architect in our Job of the Week. "XTREME Design is seeking an HPC-focused Cloud Architect to join our team and continue development of the XTREME family of products and solutions, and specifically the virtual supercomputing-on-demand service – XTREME DNA. You will have the opportunity to help shape and execute product designs that will build mindshare and promote broad use of XTREME products and solutions." The post Job of the Week: Senior HPC Architect at Xtreme Design appeared first on insideHPC.
|
by Rich Brueckner on (#3A2H9)
BIGstack 2.0 incorporates our latest Intel Xeon Scalable processors, Intel 3D NAND SSD, and Intel FPGAs while also leveraging the latest genomic tools from the Broad Institute in GATK 3.8 and GATK 4.0. This new stack provides a 3.34x speed up in whole genome analysis and a 2.2x daily throughput increase. It is able to deliver these performance improvements with a cost of just $5.68 per whole genome analyzed. The result: researchers will be able to analyze more genomes, more quickly and at lower cost, enabling new discoveries, new treatment options, and faster diagnosis of disease. The post Intel Select Solutions: BigStack 2.0 for Genomics appeared first on insideHPC.
|
by MichaelS on (#3A2EM)
With the introduction of Intel Parallel Studio XE, instructions for utilizing the vector extensions have been enhanced and new instructions have been added. Applications in diverse domains such as data compression and decompression, scientific simulations and cryptography can take advantage of these new and enhanced instructions. "Although microkernels can demonstrate the effectiveness of the new SIMD instructions, understanding why the new instructions benefit the code can then lead to even greater performance." The post Intel Parallel Studio XE AVX-512: Tuning for Success with the Latest SIMD Extensions and Intel® Advanced Vector Extensions 512 appeared first on insideHPC.
|
by staff on (#3A0HP)
Today Cray announced that the company has joined the Big Data Center at NERSC. The collaboration between the two organizations is representative of Cray’s commitment to leverage its supercomputing expertise, technologies, and best practices to advance the adoption of Artificial Intelligence, deep learning, and data-intensive computing. "We are really excited to have Cray join the Big Data Center," said Prabhat, Director of the Big Data Center, and Group Lead for Data and Analytics Services at NERSC. "Cray’s deep expertise in systems, software, and scaling is critical in working towards the BDC mission of enabling capability applications for data-intensive science on Cori. Cray and NERSC, working together with Intel and our IPCC academic partners, are well positioned to tackle performance and scaling challenges of Deep Learning." The post Cray Joins Big Data Center at NERSC for AI Development appeared first on insideHPC.
|
by Rich Brueckner on (#3A09G)
In this video from the DDN User Group at SC17, Ron Hawkins from the San Diego Supercomputer Center presents: BioBurst — Leveraging Burst Buffer Technology for Campus Research Computing. Under an NSF award, SDSC will implement a separately scheduled partition of TSCC with technology designed to address key areas of bioinformatics computing including genomics, transcriptomics, […] The post BioBurst: Leveraging Burst Buffer Technology for Campus Research Computing appeared first on insideHPC.
|
by staff on (#3A09J)
A Chinese team of researchers awarded this year’s prestigious Gordon Bell prize for simulating the devastating 1976 earthquake in Tangshan, China, used an open-source code developed by researchers at the San Diego Supercomputer Center (SDSC) at UC San Diego and San Diego State University (SDSU) with support from the Southern California Earthquake Center (SCEC). "We congratulate the researchers for their impressive innovations porting our earthquake software code, and in turn for advancing the overall state of seismic research that will have far-reaching benefits around the world," said Yifeng Cui, director of SDSC’s High Performance Geocomputing Laboratory, who along with SDSU Geological Sciences Professor Kim Olsen, Professor Emeritus Steven Day and researcher Daniel Roten developed the AWP-ODC code. The post SDSC Earthquake Codes Used in 2017 Gordon Bell Prize Research appeared first on insideHPC.
|
by Rich Brueckner on (#39ZFN)
Today AMD announced the first public cloud instances powered by the AMD EPYC processor. Microsoft Azure has deployed AMD EPYC processors in its datacenters in advance of preview for its latest L-Series of Virtual Machines (VM) for storage optimized workloads. The Lv2 VM family will take advantage of the high-core count and connectivity support of […] The post Microsoft Azure Becomes First Global Cloud Provider to Deploy AMD EPYC appeared first on insideHPC.
|
by Rich Brueckner on (#39ZFP)
In this video from SC17, Martin Yip and Josh Simons from VMware describe how the company is moving Virtualized HPC forward. "In recent years, virtualization has started making major inroads into the realm of High Performance Computing, an area that was previously considered off-limits. In application areas such as life sciences, electronic design automation, financial services, Big Data, and digital media, people are discovering that there are benefits to running a virtualized infrastructure that are similar to those experienced by enterprise applications, but also unique to HPC." The post VMware moves Virtualized HPC Forward at SC17 appeared first on insideHPC.
|
by staff on (#39ZCA)
Today Cavium announced it is collaborating with IBM on next generation platforms by joining OpenCAPI, an initiative founded by IBM, Google, AMD and others. OpenCAPI provides a high-bandwidth, low-latency interface optimized to connect accelerators, IO devices and memory to CPUs. With this announcement, Cavium plans to bring its leadership in server IO and security offloads to next generation platforms that support the OpenCAPI interface. "We are excited to be a part of the OpenCAPI consortium. As our partnership with IBM continues to grow, we see more synergies in high speed communication and Artificial Intelligence applications," said Syed Ali, founder and CEO of Cavium. "We look forward to working with IBM to enable exponential performance gains for these applications." The post Cavium Joins OpenCAPI for Next-gen Platforms appeared first on insideHPC.
|
by Rich Brueckner on (#39Z9C)
In this video from SC17, Dr. Kwang Jin Oh, Director of the Supercomputing Service Center at KISTI, describes the new Intel-powered Cray supercomputer coming to South Korea. "Our cluster supercomputers are specifically designed to give customers like KISTI the computing resources they need for achieving scientific breakthroughs throughout a wide array of increasingly complex, data-intensive challenges across modeling, simulation, analytics, and artificial intelligence. We look forward to working closely with KISTI now and into the future." The post Interview: Cray to Deploy Largest Supercomputer in South Korea at KISTI appeared first on insideHPC.
|
by staff on (#39Z6T)
Use cases show AI technology, like that used in an optimized Diffusion Compartment Imaging (DCI) technique, is expanding the potential of HPC. AI is thought to be a solution to many of DCI's challenges. "A complete Diffusion Compartment Imaging study can now be completed in 16 minutes on a workstation, which means Diffusion Compartment Imaging can now be used in emergency situations, in a clinical setting, and to evaluate the efficacy of treatment. Even better, higher resolution images can be produced because the optimized code scales." The post AI Technology: The Answer to Diffusion Compartment Imaging Challenges appeared first on insideHPC.
|
by staff on (#39WRR)
The OFA Workshop 2018 Call for Sessions encourages industry experts and thought leaders to help shape this year’s discussions by presenting or leading discussions on critical high performance networking issues. Sessions are designed to educate attendees on current development opportunities, troubleshooting techniques, and disruptive technologies affecting the deployment of high performance computing environments. The OFA Workshop places a high value on collaboration and exchanges among participants. In keeping with the theme of collaboration, proposals for Birds of a Feather sessions and panels are particularly encouraged. The post Call for Sessions: OpenFabrics Alliance Workshop in Boulder appeared first on insideHPC.
|
by Rich Brueckner on (#39WJJ)
In this video from SC17, Adel El Hallak from IBM unveils the POWER9 servers that will form the basis of the world's fastest "Coral" supercomputers coming to ORNL and LLNL. "In addition to arming the world's most powerful supercomputers, IBM POWER9 Systems is designed to enable enterprises around the world to scale unprecedented insights, driving scientific discovery enabling transformational business outcomes across every industry." The post Video: IBM Launches POWER9 Nodes for the World’s Fastest Supercomputers appeared first on insideHPC.
|
by staff on (#39W7Z)
Today Verne Global in Iceland announced hpcDIRECT, a powerful, agile and efficient HPC-as-a-service (HPCaaS) platform. hpcDIRECT provides a fully scalable, bare metal service with the ability to rapidly provision the full performance of HPC servers uncontended and in a secure manner. "With hpcDIRECT, we take the complexity and capital costs out of scaling HPC and bring greater accessibility and more agility in terms of how IT architects plan and schedule their workloads." The post Verne Global Launches hpcDIRECT, an HPC as a Service Platform appeared first on insideHPC.
|
by Rich Brueckner on (#39W81)
In this video, David Warberg gives us a quick tour of the Intel booth at SC17. "Intel unveiled new HPC advancements for optimized workloads, including the addition of a new family of HPC solutions to the Intel Select Solutions program. Built on the latest Intel Xeon Scalable platforms, Intel Select Solutions are verified configurations designed to speed and simplify the evaluation and deployment of data center infrastructure while meeting a high performance threshold." The post Intel SC17 Booth Tour: Driving Innovation in HPC appeared first on insideHPC.
|
by staff on (#39W4X)
Today Mellanox announced, in collaboration with NEC Corporation, support for the newly announced SX-Aurora TSUBASA systems with Mellanox ConnectX InfiniBand adapters. "We appreciate the performance, efficiency and scalability advantages that Mellanox interconnect solutions bring to our platform," said Shigeyuki Aino, assistant general manager, system platform business unit, IT platform division, NEC Corporation. "The in-network computing and PeerDirect capabilities of InfiniBand are the perfect complement to the unique vector processing engine architecture we have designed for our SX-Aurora TSUBASA platform." The post Mellanox and NEC Partner to Deliver High-Performance Artificial Intelligence Platforms appeared first on insideHPC.
|
by Rich Brueckner on (#39W1P)
In this video from SC17 in Denver, James Coomer from DDN describes how the company is driving high performance storage for HPC. For more than 15 years, DDN has designed, developed, deployed, and optimized systems, software, and solutions that enable enterprises, service providers, universities, and government agencies to generate more value and accelerate time to insight from their data and information, on premises and in the cloud. The post DDN Simplifies High Performance Storage at SC17 appeared first on insideHPC.
|
by staff on (#39W1R)
New benchmarks from Computer Simulation Technology on their recently optimized 3D electromagnetic field simulation tools compare the performance of the new Intel Xeon Scalable processors with previous generation Intel Xeon processors. "Our team works with the customers in terms of testing of models and configuration settings to make good recommendations for customers so they get a well-performing system and the best performance when running the models." The post Benchmarking Optimized 3D Electromagnetic Simulation Tools appeared first on insideHPC.
|
by Rich Brueckner on (#39S4V)
In this video from SC17 in Denver, Rich Kanadjian from Kingston describes the company's wide array of server memory and NVMe PCIe Flash solutions for HPC. "Today’s supercomputing installations are capable of doing billions of calculations per second and managing data in enormous volume and velocity. Kingston continues to provide top data solutions with reliability and predictable performance for the world’s most powerful HPC and enterprise big data applications, while also laying the groundwork for future innovation in data center efficiency." The post Kingston NVMe Technologies Speed Up HPC at SC17 appeared first on insideHPC.
|
by staff on (#39S1K)
Today NVIDIA announced that hundreds of thousands of AI researchers using desktop GPUs can now tap into the power of NVIDIA GPU Cloud. "With GPU-optimized software now available to hundreds of thousands of researchers using NVIDIA desktop GPUs, NGC will be a catalyst for AI breakthroughs and a go-to resource for developers worldwide." The post Consumer GPUs come to NVIDIA GPU Cloud for AI Research appeared first on insideHPC.
|
by staff on (#39RW9)
In this video, Florina Ciorba from University of Basel describes the theme of the upcoming PASC18 conference. With a focus on the convergence of Big Data and Computation, the conference takes place from July 2-4, 2018 in Basel, Switzerland. "PASC18 is the fifth edition of the PASC Conference series, an international platform for the exchange of competences in scientific computing and computational science, with a strong focus on methods, tools, algorithms, application challenges, and novel techniques and usage of high performance computing." The post Video: PASC18 to Focus on Big Data & Computation appeared first on insideHPC.
|
by Rich Brueckner on (#39RS3)
"On the November 2017 TOP500 list, Intel-powered supercomputers accounted for six of the top 10 systems and a record high of 471 out of 500 systems. Intel Omni-Path Architecture (Intel OPA) gained momentum, delivering a majority of the petaFLOPS of systems using 100Gb fabric delivering over 80 petaFLOPS, an almost 20 percent increase compared with the June 2017 Top500 list. In addition, Intel OPA now connects almost 60 percent of nodes using 100Gb fabrics on the Top500 list. Also, Intel powered all 137 new systems added to the November list."The post Intel Omni Path Gains Momentum at SC17 appeared first on insideHPC.
|
by Rich Brueckner on (#39RS5)
In this podcast, the Radio Free HPC team looks at a controversy stirred up by the recent Irish Supercomputing List. The 9th Irish Supercomputer List was released this week. For the first time, Ireland has four computers ranked on the Top500. "Since the publication of the List, a third party called the Irish Centre for High-End Computing (ICHEC) has expressed concerns that the press release issued by the Irish Supercomputing List is misleading. You can read their opinion here." The post Radio Free HPC Looks at a Controversy in the Irish Supercomputing List appeared first on insideHPC.
|
by Rich Brueckner on (#39PAG)
In this RCE Podcast, Brock Palen and Jeff Squyres discuss PMIx with Ralph Castain from Intel. "The Process Management Interface (PMI) has been used for quite some time as a means of exchanging wireup information needed for interprocess communication. While PMI-2 demonstrates better scaling properties than its PMI-1 predecessor, attaining rapid launch and wireup of the roughly 1M processes executing across 100k nodes expected for exascale operations remains challenging." The post RCE Podcast Looks at PMIx Process Management Interface for Exascale appeared first on insideHPC.
|