by Rich Brueckner on (#4C84Z)
In this video from GTC 2019 in San Jose, Harvey Skinner, Distinguished Technologist, discusses the advent of production AI and how the HPE AI Data Node offers a building block for AI storage. "The HPE AI Data Node is an HPE reference configuration which offers a storage solution that provides both the capacity for data and a performance tier that meets the throughput requirements of GPU servers. The HPE Apollo 4200 Gen10 density-optimized data server provides the hardware platform for the WekaIO Matrix flash-optimized parallel file system, as well as the Scality RING object store." The post Video: Prepare for Production AI with the HPE AI Data Node appeared first on insideHPC.
|
High-Performance Computing News Analysis | insideHPC
Link | https://insidehpc.com/ |
Feed | http://insidehpc.com/feed/ |
Updated | 2024-11-02 13:15 |
by Rich Brueckner on (#4C6MD)
Seongchan Kim from KISTI gave this talk at GTC 2019. "How do meteorologists predict weather or weather events such as hurricanes, typhoons, and heavy rain? Predicting weather events was traditionally done with supercomputer (HPC) simulations using numerical models such as WRF, UM, and MPAS. But recently, many deep learning-based research efforts have been showing outstanding results of various kinds. We'll introduce several case studies related to meteorological research." The post How Deep Learning Could Predict Weather Events appeared first on insideHPC.
|
by staff on (#4C6HJ)
The Air Force Research Laboratory has unveiled the first-ever shared classified Department of Defense high performance computing capability at the AFRL DOD Supercomputing Resource Center. "The ability to share supercomputers at higher classification levels will allow programs to get their supercomputing work done quickly while maintaining necessary security. Programs will not need to spend their budget and waste time constructing their own secure computer facilities, and buying and accrediting smaller computers for short-term work. This new capability will save billions for the DOD while providing additional access to state-of-the-art computing." The post AFRL Unveils Sharable Classified Supercomputing Capability appeared first on insideHPC.
|
by Rich Brueckner on (#4C53R)
Raghu Raja from Amazon gave this talk at the OpenFabrics Workshop in Austin. "Elastic Fabric Adapter (EFA) is the recently announced HPC networking offering from Amazon for EC2 instances. It allows applications such as MPI to communicate using the Scalable Reliable Datagram (SRD) protocol that provides connectionless and unordered messaging services directly in userspace, bypassing both the operating system kernel and the Virtual Machine hypervisor. This talk presents the designs, capabilities, and an early performance characterization of the userspace and kernel components of the EFA software stack." The post Amazon Elastic Fabric Adapter: Anatomy, Capabilities, and the Road Ahead appeared first on insideHPC.
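The unordered delivery model SRD exposes is worth a quick illustration. The sketch below is a toy of my own, not the actual EFA/SRD wire protocol or libfabric API: a reliable-but-unordered transport hands packets up in arbitrary order, and the receiver (in practice, the MPI library) reassembles them by sequence number.

```python
import random

# Toy model of SRD-style delivery (my own sketch, not the real protocol):
# the transport is reliable but makes no ordering guarantee, so the
# receiver reassembles messages by sequence number before the
# application sees them.

def unordered_transport(packets):
    """Deliver every packet exactly once, but in arbitrary order."""
    delivered = list(packets)
    random.shuffle(delivered)
    return delivered

def reassemble(packets):
    """Restore application order by sorting on the sequence number."""
    return [payload for _seq, payload in sorted(packets)]

sent = [(seq, f"chunk-{seq}") for seq in range(5)]
received = unordered_transport(sent)
print(reassemble(received))  # ['chunk-0', 'chunk-1', 'chunk-2', 'chunk-3', 'chunk-4']
```

Pushing the reordering out of the kernel and into the library is what lets the datapath stay in userspace.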
|
by Rich Brueckner on (#4C518)
Children's Mercy Hospital in Kansas City is seeking a Senior HPC Systems Engineer in our Job of the Week. "A Senior HPC Systems Engineer is responsible for the daily operation and monitoring of a high performance computing infrastructure, which includes the high performance compute cluster, storage system, and backup and disaster recovery systems. The engineer plans, designs, and implements the integration of HPC systems, and provides consultation and support for other HPC projects. They are expected to be a top-level contributor/specialist for high performance computing and Linux/UNIX environments." The post Job of the Week: Senior HPC Systems Engineer at Children's Mercy Hospital appeared first on insideHPC.
|
by staff on (#4C385)
Today Inspur announced that it will offer integrated storage solutions with the BeeGFS filesystem. BeeGFS, a leading parallel cluster file system with a distributed metadata architecture, has gained global acclaim for its usability, scalability and powerful metadata processing functions. "BeeGFS has unique advantages in terms of usability, flexibility and performance," said Liu Jun, General Manager of AI&HPC, Inspur. "It can easily adapt to the different business needs of HPC and AI users. The cooperation between Inspur and ThinkParQ will provide our HPC and AI cluster solutions users with an integrated BeeGFS system and a range of high-quality services, helping them to improve efficiency with BeeGFS." The post Inspur to Offer BeeGFS Storage Systems for HPC and AI Clusters appeared first on insideHPC.
|
by staff on (#4C33Z)
Last week at GTC, Altair announced that it has achieved up to 10x speedups with the Altair OptiStruct structural analysis solver on NVIDIA GPU-accelerated system architecture, with no compromise in accuracy. This speed boost has the potential to significantly impact industries including automotive, aerospace, industrial equipment, and electronics that frequently need to run large, high-fidelity simulations. "This breakthrough represents a significant opportunity for our customers to increase productivity and improve ROI with a high level of accuracy, much faster than was previously possible," said Uwe Schramm, Altair's chief technology officer for solvers and optimization. "By running our solvers on NVIDIA GPUs, we achieved formidable results that will give users a big advantage." The post NVIDIA GPUs Speed Altair OptiStruct structural analysis up to 10x appeared first on insideHPC.
|
by staff on (#4C341)
The Massive Storage Systems and Technology Conference (MSST) posted their preliminary speaker agenda. Keynote speakers include Margo Seltzer and Mark Kryder, along with a five-day agenda of invited and peer research talks and tutorials, May 20-24 in Santa Clara, California. "MSST 2019 will focus on current challenges and future trends in distributed storage system technologies," said Meghan Wingate McClelland, Communications Chair of MSST. The post Agenda Posted for MSST Mass Storage Conference in May appeared first on insideHPC.
|
by staff on (#4C2YC)
In this Big Compute Podcast, Gabriel Broner from Rescale and Dave Turek from IBM discuss how AI enables the acceleration of HPC workflows. "HPC can benefit from AI techniques. One area of opportunity is to augment what people do in preparing simulations, analyzing results and deciding what simulation to run next. Another opportunity exists when we take a step back and analyze whether we can use AI techniques instead of simulations to solve the problem. We should think about AI as increasing the toolbox HPC users have." The post Big Compute Podcast: Accelerating HPC Workflows with AI appeared first on insideHPC.
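The idea of using AI techniques instead of simulations can be made concrete with a minimal surrogate-model sketch. This is a hypothetical illustration of mine, not anything from the podcast: sample an expensive solver at a few design points, fit a cheap model, and answer further queries without re-running the solver.

```python
import numpy as np

def expensive_simulation(x):
    """Stand-in for a costly HPC run (here just a smooth response curve)."""
    return np.sin(x) + 0.1 * x**2

# Run the "simulator" at a handful of design points...
train_x = np.linspace(0.0, 3.0, 8)
train_y = expensive_simulation(train_x)

# ...then fit a cheap polynomial surrogate to those samples.
surrogate = np.polynomial.Polynomial.fit(train_x, train_y, deg=4)

# New design points are evaluated from the surrogate, not the solver.
query = 1.7
print(f"surrogate: {surrogate(query):.3f}  simulator: {expensive_simulation(query):.3f}")
```

In practice the surrogate would be a neural network trained on many simulation runs, but the workflow is the same: spend compute once on training data, then answer design questions cheaply.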
|
by Rich Brueckner on (#4C2YE)
In this video from the 2019 GPU Technology Conference, James Coomer from DDN describes the company's high-speed storage solutions for AI, machine learning, and HPC. "This week at GTC, DDN is showcasing its high speed storage solutions, including its A³I architecture and new customer use cases in autonomous driving, life sciences, healthcare, retail, and financial services. DDN's next generation of A³I reference architectures includes NVIDIA's DGX POD, DGX-2, and DDN's AI400 parallel storage appliance." The post Video: DDN Accelerates AI, Analytics, and Deep Learning at GTC appeared first on insideHPC.
|
by staff on (#4C0RE)
We've been hearing more and more about immersive cooling for HPC, but what happens to system warranties in these kinds of installations? Today GRC took a step to ease these concerns through collaboration with TechData to provide support and warranties for worldwide customers submerging servers and other equipment in GRC's immersion cooling systems. "As we grow our global footprint, it is wonderful to add TechData to our network of partners, while providing customers with added peace of mind," said Peter Poulin, CEO of GRC. The post GRC Partners to Provide Worldwide Support and Server Warranties for Immersion Cooling appeared first on insideHPC.
|
by staff on (#4C0RF)
Today HPC integrator Nor-Tech announced participation in two recent Nobel Physics Prize-Winning projects. The company's HPC gear will help power the Laser Interferometer Gravitational-Wave Observatory (LIGO) project as well as the IceCube neutrino detection experiment. "We are excited about the amazing discoveries these enhanced detectors will reveal," said Nor-Tech Executive Vice President Jeff Olson. "This is an energizing time for all of us at Nor-Tech, knowing that the HPC solutions we are developing for two Nobel projects truly are changing our view of the world." The post Nor-Tech Powers LIGO and IceCube Nobel-Physics Prize-Winning Projects appeared first on insideHPC.
|
by Rich Brueckner on (#4C0KK)
In this video from the GPU Technology Conference, Sumit Gupta from IBM describes how IBM is powering production-level AI and machine learning. "IBM PowerAI provides the easiest on-ramp for enterprise deep learning. PowerAI helped users break deep learning training benchmarks AlexNet and VGGNet thanks to the world's only CPU-to-GPU NVIDIA NVLink interface. See how new feature development and performance optimizations will advance the future of deep learning in the next twelve months, including NVIDIA NVLink 2.0, leaps in distributed training, and tools that make it easier to create the next deep learning breakthrough." The post Video: IBM Powers AI at the GPU Technology Conference appeared first on insideHPC.
|
by staff on (#4C0FD)
While there are many benefits to leveraging the cloud for HPC, there are challenges as well. Along with security and cost, data handling is consistently identified as a top barrier. "In this short article, we discuss the challenge of managing data in hybrid clouds, offer some practical tips to make things easier, and explain how automation can play a key role in improving efficiency." The post Data Management: The Elephant in the Room for HPC Hybrid Cloud appeared first on insideHPC.
|
by Rich Brueckner on (#4BYTB)
The San Diego Supercomputer Center (SDSC) at UC San Diego, and Sylabs.io recently hosted the first-ever Singularity User Group meeting, attracting users and developers from around the nation and beyond who wanted to learn more about the latest developments in an open source project known as Singularity. Now in use on SDSC's Comet supercomputer, Singularity has quickly become an essential tool in improving the productivity of researchers by simplifying the development and portability challenges of working with complex scientific software. The post SDSC and Sylabs Gather for Singularity User Group appeared first on insideHPC.
|
by staff on (#4BYFZ)
Today quantum computing startup ColdQuanta announced the appointment of Robert "Bo" Ewald as president and chief executive officer. Ewald is well-known in high technology, having previously been president of supercomputing leader Cray Research, CEO of Silicon Graphics, and for the past six years, president of quantum computing company D-Wave International. "With his experience at Cray, SGI and D-Wave, Bo has successfully navigated companies through the bleeding edge of technology several times before. I am thrilled to have Bo take ColdQuanta's helm." The post Bo Ewald joins quantum computing firm ColdQuanta as CEO appeared first on insideHPC.
|
by Rich Brueckner on (#4BYB4)
Sean Hefty and Venkata Krishnan from Intel gave this talk at the OpenFabrics Workshop in Austin. "Advances in Smart NIC/FPGA with integrated network interface allow acceleration of application-specific computation to be performed alongside communication. Participants will learn about the potential for Smart NIC/FPGA application acceleration and will have the opportunity to contribute application expertise and domain knowledge to a discussion of how Smart NIC/FPGA acceleration technology can bring individual applications into the Exascale era." The post Video: Enabling Applications to Exploit SmartNICs and FPGAs appeared first on insideHPC.
|
by staff on (#4BYB6)
Today ACM named Yoshua Bengio, Geoffrey Hinton, and Yann LeCun recipients of the 2018 ACM Turing Award for conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing. "The ACM A.M. Turing Award, often referred to as the 'Nobel Prize of Computing,' carries a $1 million prize, with financial support provided by Google, Inc. It is named for Alan M. Turing, the British mathematician who articulated the mathematical foundation and limits of computing." The post Pioneers in Deep Learning to Receive ACM Turing Award appeared first on insideHPC.
|
by Rich Brueckner on (#4BY6N)
In this video from GTC 2019 in Silicon Valley, Marc Hamilton from NVIDIA describes how accelerated computing is powering AI, computer graphics, data science, robotics, automotive, and more. "Well, we always make so many great announcements at GTC. But one of the traditions Jensen started a few years ago is coming up with a new acronym to really make our messaging for the show very, very simple to remember. So PRADA stands for Programmable Acceleration Multiple Domains One Architecture. And that's really what the GPU has become." The post Video: NVIDIA Showcases Programmable Acceleration of Multiple Domains with One Architecture appeared first on insideHPC.
|
by staff on (#4BVZT)
Today MathWorks introduced Release 2019a of MATLAB and Simulink. The release contains new products and important enhancements for artificial intelligence (AI), signal processing, and static analysis, along with new capabilities and bug fixes across all product families. "One of the key challenges in moving AI from hype to production is that organizations are hiring AI 'experts' and trying to teach them engineering domain expertise. With R2019a, MathWorks enables engineers to quickly and effectively extend their AI skills, whether it's to develop controllers and decision-making systems using reinforcement learning, training deep learning models on NVIDIA DGX and cloud platforms, or applying deep learning to 3-D data," said David Rich, MATLAB marketing director. The post AI comes to MATLAB and Simulink with new 2019a release appeared first on insideHPC.
|
by staff on (#4BVZV)
Can the Cloud power ground-breaking research? A new NSF-funded research project aims to provide a deeper understanding of the use of cloud computing in accelerating scientific discoveries. "First announced in 2018, the Exploring Clouds for Acceleration of Science (E-CAS) project has now selected the six research proposals to explore how scientific workflows can leverage advancements in real-time analytics, artificial intelligence, machine learning, accelerated processing hardware, automation in deployment and scaling, and management of serverless applications for a wider range of science." The post E-CAS Project to Explore Clouds for Acceleration of Science appeared first on insideHPC.
|
by Rich Brueckner on (#4BVVY)
Christopher Lameter from Jump Trading gave this talk at the OpenFabrics Workshop in Austin. "In 2017 we got 100G fabrics, in 2018 200G fabrics, and in 2019 it looks like 400G technology may see a considerable amount of adoption. These bandwidths compete with, and are sometimes higher than, the internal bus speeds of the servers that are connected using these fabrics. I think we need to consider these developments and work on improving fabrics and the associated APIs so that ways to access these features become possible using vendor-neutral APIs. It needs to be possible to code in a portable way and not to a vendor-specific one." The post Faster Fabrics Running Against Limits of the Operating System, the Processor, and the I/O Bus appeared first on insideHPC.
|
by staff on (#4BVW0)
Today D-Wave Systems announced the geographic expansion of its Leap quantum cloud service to 33 countries in Europe and Asia. According to the company, Leap is the only cloud-based service to provide real-time access to a live quantum computer, as well as open-source development tools, interactive demos, educational resources, and knowledge base articles. "The range and robustness of early applications from our customers continues to grow, and customers are starting to see early value in using quantum computing to address real-world business problems." The post D-Wave Leap Quantum Cloud Service comes to Europe and Japan appeared first on insideHPC.
|
by Sarah Rubenoff on (#4BVQ5)
The growing prevalence of artificial intelligence and machine learning is putting heightened focus on the quantities of data that organizations have recently accumulated, as well as the value potential in this data. Companies looking to gain a competitive edge in their market are turning to tools like graphics processing units, or GPUs, to ramp up computing power. That's according to a new white paper from Penguin Computing. The post Exploring the ROI Potential of GPU Supercomputing appeared first on insideHPC.
|
by staff on (#4BSJX)
The second annual GPU Hackathon kicked off in Perth this week, a collaboration between Oak Ridge National Laboratory, NVIDIA and the Pawsey Supercomputing Centre. Attended by five teams from across Australia, the five-day hack event centers around adapting scientific code for accelerated computing. "We're excited to continue our collaboration with international research facilities," said Mark Stickells, Executive Director at Pawsey. "Pawsey accelerates scientific discovery through technology, expertise and collaboration; and this is demonstrated by the GPU Hackathon. The Hackathon is a great example of collaboration and engagement with industry, academia and government." The post Perth's GPU Hackathon looks to speed research computing appeared first on insideHPC.
|
by staff on (#4BSE9)
In this podcast, the Radio Free HPC team has an animated discussion about multicore scaling, how easy it seems to be to mislead AI systems, and some good-sized catches of the week. "As CPU performance improvements have slowed down, we've seen the semiconductor industry move towards accelerator cards to provide dramatically better results. Nvidia has been a major beneficiary of this shift, but it's part of the same trend driving research into neural network accelerators, FPGAs, and products like Google's TPU." The post Podcast: Multicore Scaling Slow Down, and Fooling AI appeared first on insideHPC.
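The multicore-scaling slowdown is usually framed in terms of Amdahl's law, which caps overall speedup by the serial fraction of a workload. The numbers below are hypothetical, but the formula is the standard one:

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Amdahl's law: overall speedup from running the parallel share
    of the work on n_cores, with the rest staying serial."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)

# Even a 95%-parallel code tops out near 20x regardless of core count,
# one reason the industry is turning to accelerators instead:
for cores in (8, 64, 1024):
    print(cores, round(amdahl_speedup(0.95, cores), 1))  # 5.9, 15.4, 19.6
```

Accelerators attack the problem differently: rather than adding more of the same cores, they make the parallel portion itself dramatically faster.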
|
by Rich Brueckner on (#4BSEA)
In this video from the GPU Technology Conference, Karan Batta from Oracle describes how the company provides HPC and Machine Learning in the Cloud with Bare Metal speed. "Oracle Cloud Infrastructure offers wide-ranging support for NVIDIA GPUs, including the high-performance NVIDIA Tesla P100 and V100 GPU instances that provide the highest ratio of CPU cores and RAM per GPU available. With a maximum of 52 physical CPU cores, 8 NVIDIA Volta V100 units per bare metal server, 768 GB of memory, and two 25 Gbps interfaces, these are the most powerful GPU instances on the market." The post Oracle Cloud Speeds HPC & AI Workloads at GTC 2019 appeared first on insideHPC.
|
by staff on (#4BQQP)
As supercomputers become ever more capable in their march toward exascale levels of performance, scientists can run increasingly detailed and accurate simulations to study problems ranging from cleaner combustion to the nature of the universe, generating data at a scale that is itself hard to analyze. Enter ExaLearn, a new machine learning project supported by DOE's Exascale Computing Project (ECP), which aims to develop new tools to help scientists overcome this challenge by applying machine learning to very large experimental datasets and simulations. The post ExaLearn Project to bring Machine Learning to Exascale appeared first on insideHPC.
|
by staff on (#4BQQR)
Today quantum computing startup IonQ released the results of two rigorous real-world tests that show that its quantum computer can solve significantly more complex problems with greater accuracy than results published for any other quantum computer. "The real test of any computer is what it can do in a real-world setting. We challenged our machine with tough versions of two well-known algorithms that demonstrate the advantages of quantum computing over conventional devices. The IonQ quantum computer proved it could handle them. Practical benchmarks like these are what we need to see throughout the industry." The post IonQ posts benchmarks for quantum computer appeared first on insideHPC.
|
by Rich Brueckner on (#4BP4P)
Ariel Almog from Mellanox gave this talk at the OpenFabrics Workshop in Austin. "Recently, deployment of 50 Gbps per lane (HDR) speed started, and 100 Gbps per lane (NDR), a future technology, is around the corner. The high bandwidth might cause the NIC PCIe interface to become a bottleneck, as PCIe gen3 can handle up to a single 100 Gbps interface over 16 lanes and PCIe gen4 can handle up to a single 200 Gbps interface over 16 lanes. In addition, since the host might have dual CPU sockets, Socket Direct technology provides direct PCIe access to both CPU sockets, eliminating the need for network traffic to cross the inter-processor bus and allowing better utilization of PCIe, thus optimizing overall system performance." The post InfiniBand: To HDR and Beyond appeared first on insideHPC.
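The PCIe ceilings quoted in the abstract are easy to sanity-check from the published per-lane signaling rates and 128b/130b encoding. The arithmetic below is my own back-of-the-envelope sketch, not vendor data:

```python
def pcie_gbps(gt_per_s, lanes, encoding=128 / 130):
    """Usable one-direction bandwidth in Gbps for a PCIe link:
    transfer rate per lane x lane count x encoding efficiency."""
    return gt_per_s * lanes * encoding

gen3_x16 = pcie_gbps(8, 16)   # PCIe gen3: 8 GT/s per lane
gen4_x16 = pcie_gbps(16, 16)  # PCIe gen4: 16 GT/s per lane

print(f"gen3 x16: {gen3_x16:.0f} Gbps (fits one 100G port)")  # ~126 Gbps
print(f"gen4 x16: {gen4_x16:.0f} Gbps (fits one 200G port)")  # ~252 Gbps
# A 400G port overruns a single x16 slot of either generation, which is
# where multi-slot approaches such as socket direct enter the picture.
```

The same arithmetic explains the talk's framing: faster fabric generations keep landing before the host bus catches up.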
|
by Rich Brueckner on (#4BP2E)
The German Climate Computing Centre (DKRZ) is seeking an HPC Specialist in Software Development in our Job of the Week. "DKRZ is involved in numerous national and international projects in the field of high-performance computing for climate and weather research. In addition to the direct user support, depending on your interest and ability, you will also be able to participate in this development work." The post Job of the Week: HPC Specialist in Software Development at DKRZ appeared first on insideHPC.
|
by staff on (#4BM5E)
In this video, Cray CEO Pete Ungaro announces Aurora, Argonne National Laboratory's forthcoming supercomputer and the United States' first exascale system. Ungaro offers some insight on the technology, what makes exascale performance possible, and why we're going to need it. "It is an exciting testament to Shasta's flexible design and unique system and software capabilities, along with our Slingshot interconnect, which will be the foundation for Argonne's extreme-scale science endeavors and data-centric workloads. Shasta is designed for this transformative exascale era and the convergence of artificial intelligence, analytics and modeling and simulation, all at the same time on the same system, at incredible scale." The post Video: Cray Announces First Exascale System appeared first on insideHPC.
|
by staff on (#4BM5G)
NERSC has signed a contract with NVIDIA to enhance GPU compiler capabilities for Berkeley Lab's next-generation Perlmutter supercomputer. "We are excited to work with NVIDIA to enable OpenMP GPU computing using their PGI compilers," said Nick Wright, the Perlmutter chief architect. "Many NERSC users are already successfully using the OpenMP API to target the manycore architecture of the NERSC Cori supercomputer. This project provides a continuation of our support of OpenMP and offers an attractive method to use the GPUs in the Perlmutter supercomputer. We are confident that our investment in OpenMP will help NERSC users meet their application performance portability goals." The post NERSC taps NVIDIA compiler team for Perlmutter Supercomputer appeared first on insideHPC.
|
by Rich Brueckner on (#4BM0K)
Today the PASC19 Conference announced that Dr. Lois Curfman McInnes from Argonne and Rich Brueckner from insideHPC will moderate a panel discussion with thought leaders focused on software challenges for Exascale and beyond. "In this session, Lois Curfman McInnes from Argonne National Laboratory and Rich Brueckner from insideHPC will moderate a panel discussion with thought leaders focused on software challenges for Exascale and beyond, mixing 'big picture' and technical discussions. McInnes will bring her unique perspective on emerging Exascale software ecosystems to the table, while Brueckner will illustrate the benefits of Exascale to world-wide audiences." The post PASC19 Preview: Brueckner and Dr. Curfman-McInnes to Moderate Exascale Panel Discussion appeared first on insideHPC.
|
by Rich Brueckner on (#4BM0N)
Mellanox Scalable Hierarchical Aggregation and Reduction Protocol (SHARP) technology improves upon the performance of MPI operations by offloading collective operations from the CPU to the switch network, and by eliminating the need to send data multiple times between endpoints. This innovative approach decreases the amount of data traversing the network as aggregation nodes are reached, and dramatically reduces the MPI operations time. Implementing collective communication algorithms in the network also has additional benefits, such as freeing up valuable CPU resources for computation rather than using them to process communication. The post How Mellanox SHARP technology speeds AI workloads appeared first on insideHPC.
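A toy count of message traffic shows why moving the reduction into the switch helps. This is my own simplified model, not the SHARP algorithm itself: compare the load on the busiest host when one root node performs the reduction versus when aggregation nodes in the switch tree do it.

```python
def host_rooted_allreduce_msgs(n_nodes):
    """Messages handled by the busiest host when a single root gathers
    every contribution, reduces, and sends the result back out."""
    return (n_nodes - 1) + (n_nodes - 1)  # receives + sends, all at the root

def switch_aggregated_msgs(n_nodes):
    """With in-network aggregation, each endpoint sends its data once
    and receives the reduced result once; the switch tree does the rest."""
    return 2

n = 128
print("busiest host, root-based:", host_rooted_allreduce_msgs(n))  # 254
print("any host, switch-aggregated:", switch_aggregated_msgs(n))   # 2
```

Real MPI libraries use smarter tree and ring algorithms than a single root, but the endpoint load still grows with scale; in-network aggregation keeps it constant, which is the point of the offload.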
|
by Rich Brueckner on (#4BHWT)
If you are considering moving some of your HPC workload to the Cloud, nothing leads the way like a good set of case studies in your scientific domain. To this end, our good friends at the UberCloud have published their Compendium Of Case Studies In Computational Fluid Dynamics. The document includes 36 CFD case studies summarizing HPC Cloud projects that the UberCloud has performed together with the engineering community over the last six years. The post UberCloud Publishes Compendium Of CFD Case Studies appeared first on insideHPC.
|
by staff on (#4BHWW)
Today the ISC 2019 conference announced that their keynote will be delivered by Professor Ivo Sbalzarini, who will speak to an audience of 3500 attendees about the pivotal role high performance computing plays in the field of systems biology. Under the title, The Algorithms of Life - Scientific Computing for Systems Biology, Sbalzarini will discuss how HPC is being used as a tool for scientific investigation and for hypothesis testing, as well as a more fundamental way to think about problems in systems biology. The post ISC 2019 Keynote to focus on Algorithms of Life appeared first on insideHPC.
|
by Rich Brueckner on (#4BHK8)
Today One Stop Systems introduced the world's first PCIe Gen 4 backplane. "Delivering the high performance required by edge applications necessitates PCIe interconnectivity traveling on the fast data highway between high-speed processors, NVMe storage and compute accelerators using GPUs or application-specific FPGAs," continued Cooper. "'AI on the Fly' applications naturally demand this capability, like the government mobile shelter application we announced earlier this year." The post OSS Introduces World's First PCIe Gen 4 Backplane at GTC appeared first on insideHPC.
|
by Rich Brueckner on (#4BHK9)
The European Commission has approved a multi-million funding program for developing applications in High Performance Computing. The funds will be used to help build HPC Centers of Excellence in 10 member countries. From computing the reduction of noise and fuel for passenger airplanes to assessing the effects of climate change, applications in High Performance […] The post European Commission Funds 10 Centers of Excellence for HPC appeared first on insideHPC.
|
by Rich Brueckner on (#4BHKA)
Paul Grun from Cray gave this talk at the OpenFabrics Workshop in Austin. "Persistent Memory exhibits several interesting characteristics including persistence, capacity and others. These (sometimes) competing characteristics may require system and server architects to make tradeoffs in system architecture. In this session, we explore some of those tradeoffs and take an early look at the emerging use cases for Remote Persistent Memory and how those may impact network architecture and API design." The post Characteristics of Remote Persistent Memory – Performance, Capacity, or Locality? appeared first on insideHPC.
|
by staff on (#4BFFZ)
Today Quantum Corp. announced the Texas Advanced Computing Center has selected Quantum StorNext as their archive file system, with a Quantum Scalar i6000 tape library providing dedicated Hierarchical Storage Management. "Our ability to archive data is vital to TACC's success, and the combination of StorNext as our archive file system managing Quantum hybrid storage, Scalar tape and our DDN primary disk will enable us to meet our commitments to the talented researchers who depend on TACC now and in the future," said Tommy Minyard, Director of Advanced Computing at TACC. The post TACC to power HSM Archives with Quantum Corp Tape Libraries appeared first on insideHPC.
|
by staff on (#4BFG1)
Today Inspur announced that their new NF5488M5 high-density AI server supports eight NVIDIA V100 Tensor Core GPUs in a 4U form factor. "The rapid development of AI keeps increasing the requirements for computing performance and flexibility of AI infrastructure. The NF5488M5 helps users shorten AI model development cycles, and accelerate AI technology innovation and application development." The post New Inspur AI Server Supports Eight NVIDIA V100 Tensor Core GPUs appeared first on insideHPC.
|
by staff on (#4BFAX)
In this Intel on AI Podcast, Dr. David Ellison from Lenovo describes the Lenovo Intelligent Computing Orchestration (LiCO). "Dr. Ellison explains how LiCO accelerates artificial intelligence training and traditional high performance computing (HPC) deployment by providing a single software solution that simplifies resource management and the use of a cluster environment. He discusses an exciting real-world deployment where data scientists use LiCO to help analyze satellite images of crop fields to drive drought prevention and increase food security." The post Podcast: How Lenovo Intelligent Computing Orchestration is Simplifying HPC & AI appeared first on insideHPC.
|
by staff on (#4BFAZ)
"The all-new hyperscale configuration of AI-Ready Infrastructure (AIRI) from Pure Storage is designed to deliver supercomputing capabilities for enterprises that pioneer real-world AI initiatives and have grown beyond the capabilities of AI-ready solutions available in the market today. Built jointly with the leaders of AI supercomputing, NVIDIA and Mellanox, hyperscale AIRI delivers multiple racks of NVIDIA DGX-1 and DGX-2 systems with both InfiniBand and Ethernet fabrics as interconnect options. In addition, Pure Storage announced FlashStack™ for AI, a solution built jointly with Cisco and NVIDIA to bring AI within reach for every enterprise." The post Pure Storage Unveils NVIDIA-Powered AI Solutions appeared first on insideHPC.
|
by Rich Brueckner on (#4BF5Z)
Xiaoyi Lu from Ohio State University gave this talk at the 2019 OpenFabrics Workshop in Austin. "Google's TensorFlow is one of the most popular Deep Learning (DL) frameworks. We propose a unified way of achieving high performance through enhancing the gRPC runtime with Remote Direct Memory Access (RDMA) technology on InfiniBand and RoCE. Through our proposed RDMAgRPC design, TensorFlow only needs to run over the gRPC channel and gets the optimal performance." The post Accelerating TensorFlow with RDMA for High-Performance Deep Learning appeared first on insideHPC.
|
by Rich Brueckner on (#4BCZ7)
Today HPC cloud provider Nimbix announced a new strategic partnership with Lenovo Data Center Group (DCG). "Lenovo DCG and Nimbix have teamed up to deliver flexible, powerful solutions based on Lenovo HPC clusters and the Nimbix Cloud. By bringing together Lenovo's supercomputing expertise with JARVICE, the purpose-built, container-based, bare metal HPC Cloud platform from Nimbix, customers can tailor their hardware and software resources to meet their business requirements, no matter how demanding." The post Lenovo HPC Clusters come to the Nimbix Cloud appeared first on insideHPC.
|
by staff on (#4BCTQ)
Today Microway announced that it has been recognized with the Americas 2018 NVIDIA Partner Network (NPN) HPC Partner of the Year Award. Microway was presented with this award at the NPN Reception and Awards Ceremony held during the 2019 NVIDIA GPU Technology Conference (GTC). "Microway lives and breathes high performance computing," said Craig Weinstein, Vice President of the Americas Partner Organization at NVIDIA. "They've been a long-time partner of ours for many years and it has been a pleasure to see them flourish into one of the most well-respected, knowledgeable companies in the industry." The post Microway Receives NVIDIA HPC Partner of the Year Award appeared first on insideHPC.
|
by Rich Brueckner on (#4BCTS)
This week at GTC, DDN is showcasing its high speed storage solutions, including its A³I architecture and new customer use cases in autonomous driving, life sciences, healthcare, retail, and financial services. DDN's next generation of A³I reference architectures includes NVIDIA's DGX POD, DGX-2, and DDN's AI400 parallel storage appliance. "DDN's commitment to developing highly parallel and scalable architectures matches well with the high performance requirements of compute and AI applications." The post DDN Accelerates AI, Analytics, and Deep Learning at GTC appeared first on insideHPC.
|
by Rich Brueckner on (#4BCPG)
In this video, NVIDIA CEO Jensen Huang delivers a sweeping opening keynote at San Jose State University, describing the company's progress accelerating the sprawling datacenters that power the world's most dynamic industries. "As a highlight, Mellanox CEO Eyal Waldman joined Huang on stage to describe how the two companies' technologies power more than half the world's TOP500 fastest supercomputers." The post Video: Jensen Huang Keynote and News Recap from GPU Technology Conference appeared first on insideHPC.
|
by staff on (#4BCHW)
AI is a game changer for industries today, but achieving AI success requires attention to two critical factors: time to value and time to insight. Time to value is the metric that looks at the time it takes to realize the value of a product, solution or offering. Time to insight measures how long it takes to gain actionable insights from use of the product, solution or offering. The post AI Critical Measures: Time to Value and Insights appeared first on insideHPC.
|