by Rich Brueckner on (#4Y0D6)
James Moawad and Greg Nash from Intel gave this talk at ATPESC 2019. "FPGAs are a natural choice for implementing neural networks as they can handle different algorithms in computing, logic, and memory resources in the same device. They offer faster performance compared to competing implementations, as the user can hardcode operations into the hardware. Software developers can use the OpenCL device-level C programming standard to target FPGAs as accelerators to standard CPUs without having to deal with hardware-level design." The post Video: FPGAs and Machine Learning appeared first on insideHPC.
|
High-Performance Computing News Analysis | insideHPC
Link | https://insidehpc.com/ |
Feed | http://insidehpc.com/feed/ |
Updated | 2024-11-24 05:30 |
by Rich Brueckner on (#4XZT4)
Today WekaIO announced that Genomics England (GEL) has selected the Weka File System (WekaFS) to accelerate genomics research for the 5 Million Genomes Project. "We needed a modern storage solution that could scale to hundreds of petabytes while maintaining performance scaling, and it had to be simple to manage at that scale. With its clever combination of flash for performance and object store for scale, Weka has proven to be a great solution." The post WekaIO Accelerates 5 Million Genomes Project at Genomics England appeared first on insideHPC.
|
by Rich Brueckner on (#4XYR0)
Early career researchers and university students are encouraged to take advantage of the ISC Travel Grant and the ISC Student Volunteer programs. The programs are intended to enable these participants to attend the ISC 2020 conference in Frankfurt, Germany, from June 21 – 25. "The ISC High Performance Travel Grant Program is open to early career researchers and students who have never been to the conference and wish to be a part of it this year. For recipients traveling from Europe or North Africa, the maximum funding is 1500 euros per person; for the rest of the world, it is 2500 euros per person. ISC Group will also provide the grant recipients free registration for the entire conference, as well as mentorship during the event. The purpose of the grant is to enable university students and researchers who are highly interested in acquiring high performance computing knowledge and skills, but lack the necessary resources to obtain them, to participate in the ISC High Performance conference series." The post Apply Now for ISC Travel Grant and Student Volunteer Programs appeared first on insideHPC.
|
by Rich Brueckner on (#4XYR1)
People with mechanical heart valves need blood thinners on a daily basis, because they have a higher risk of blood clots and stroke. With the help of the Piz Daint supercomputer, researchers at the University of Bern have identified the root cause of blood turbulence leading to clotting. Design optimization could greatly reduce the risk of clotting and enable these patients to live without life-long medication. The post Reducing the risk of blood clots by supercomputing turbulent flow appeared first on insideHPC.
|
by staff on (#4XTT3)
Today Atos announced an €80 million contract with the European Centre for Medium-Range Weather Forecasts (ECMWF) to supply a BullSequana XH2000 supercomputer. Powered by AMD EPYC 7742 processors, the new system will be one of the most powerful meteorological supercomputers in the world. "The Atos supercomputer will allow ECMWF to run its world-wide 15-day ensemble prediction at a higher resolution of about 10km, reliably predicting the occurrence and intensity of extreme weather events ahead of time." The post AMD to Power Atos Supercomputer at ECMWF appeared first on insideHPC.
|
by Rich Brueckner on (#4XYR3)
Dr. Jack Dongarra from the University of Tennessee has been named to receive the IEEE Computer Society’s 2020 Computer Pioneer Award. "The award is given for significant contributions to early concepts and developments in the electronic computer field, which have clearly advanced the state-of-the-art in computing." Dongarra is being recognized “for leadership in the area of high-performance mathematical software.” The post Jack Dongarra to Receive 2020 IEEE Computer Pioneer Award appeared first on insideHPC.
|
by staff on (#4XYR4)
Lex Fridman gave this talk as part of the MIT Deep Learning series. "This lecture is on the most recent research and developments in deep learning, and hopes for 2020. This is not intended to be a list of SOTA benchmark results, but rather a set of highlights of machine learning and AI innovations and progress in academia, industry, and society in general." The post Deep Learning State of the Art in 2020 appeared first on insideHPC.
|
by staff on (#4XYR6)
Vortex Bladeless presented the company’s design for a new wind energy technology. One of the key characteristics of this system is the reduction of mechanical elements that can be worn by friction. The company developed the technology using CFD tools provided by Altair, which helped the company study both the fluid-structure interaction and the behaviour of the magnetic fields in the alternator. The post Advanced simulation tools for Vortex Bladeless wind power appeared first on insideHPC.
|
by staff on (#4XX0Z)
Incooling has adopted GIGABYTE’s overclockable R161 Series server platform as the test-bed and prototype model for a new class of two-phase liquid cooled overclockable servers designed for the high frequency trading market. "Incooling's technology is capable of pushing temperatures far below the traditional data center air temperatures, unlocking a new class of turbocharged servers. It does this by leveraging specialized refrigerants using phase-change cooling, inside a pressure-controlled loop. This is a highly efficient method that allows exchange of the heat from the chip with the datacenter air with far less thermal resistance. Coupled with the R161 overclockable server from GIGABYTE, Incooling’s solutions are able to push performance further than ever before. First system tests showed up to 20°C lower core temperatures, contributing up to a 10% increase in boost clock-speed whilst lowering total power draw by 200 watts." The post GIGABYTE and Incooling to Develop Two-Phase Liquid Cooled Servers for High Frequency Trading appeared first on insideHPC.
|
by staff on (#4XX10)
The 2020 HiPEAC conference will kick off next week in Bologna to showcase the innovative made-in-Europe technologies driving computing systems from the edge to the cloud. This year, the conference will dive into radical new developments in European processor technology, including open source hardware, while building on HiPEAC’s long-established reputation for cutting-edge research into heterogeneous architectures, cross-cutting artificial intelligence themes, security and more. The post 2020 HiPEAC Conference to Showcase European Computing Technologies appeared first on insideHPC.
|
by Rich Brueckner on (#4XWQC)
The 19th Workshop on High Performance Computing in Meteorology has issued its Call for Abstracts. With a theme entitled "Towards Exascale Computing in Numerical Weather Prediction," the event takes place September 14-18 in Bologna, Italy. "The workshop will consist of keynote talks from invited speakers, 20-30 minute presentations, a panel discussion and a visit to the new data centre. Our aim is to provide a forum where users from our Member States and around the world can report on recent experience and achievements in the field of high performance computing; plans for the future and requirements for computing power will also be presented." The post Call for Abstracts: 19th Workshop on High Performance Computing in Meteorology appeared first on insideHPC.
|
by Rich Brueckner on (#4XWBJ)
The Spanish Supercomputing Network (RES) closed 2019 with a record number of processor hours allocated to Spanish researchers across areas of knowledge such as Astronomy, Space and Earth Sciences, Biomedicine and Life Sciences, Engineering, Physics, Mathematics, Solid State Chemistry, and Biological Systems Chemistry. Specifically, in 2019, 583.21 million processor hours were made available to Spanish researchers on the 12 supercomputers that are part of this Singular Scientific-Technical Infrastructure. These data represent an increase of 80% over 2018, almost doubling the total number of hours. The post Spanish Supercomputing Network serves up record computing hours for Science appeared first on insideHPC.
|
by Rich Brueckner on (#4XWQD)
In this Let’s Talk Exascale podcast, Katrin Heitmann from Argonne describes how the ExaSky project may be one of the first applications to reach exascale levels of performance. "Our current challenge problem is designed to run across the full machine [on both Aurora and Frontier], and doing so on a new machine is always difficult," Heitmann said. “We know from experience, having been first users in the past on Roadrunner, Mira, Titan, and Summit; and each of them had unique hurdles when the machine hit the floor.” The post Podcast: Will the ExaSky Project be First to Reach Exascale? appeared first on insideHPC.
|
by Rich Brueckner on (#4XVAG)
Researchers are using a novel approach to solving the mysteries of planet formation with the help of the Comet supercomputer at the San Diego Supercomputer Center on the UC San Diego campus. The modeling enabled scientists at the Southwest Research Institute (SwRI) to implement a new software package, which in turn allowed them to create a simulation of planet formation that provides a new baseline for future studies of this mysterious field. “The problem of planet formation is to start with a huge amount of very small dust that interacts on super-short timescales (seconds or less), and the Comet-enabled simulations finish with the final big collisions between planets that continue for 100 million years or more.” The post Supercomputing Planet Formation at SDSC appeared first on insideHPC.
|
by Rich Brueckner on (#4XVAJ)
Today TMGcore announced that Facility Solutions Group (FSG) will be the first OTTO-Ready Electrical Contractor. OTTO is a line of highly efficient, high density, modular two-phase immersion cooled data center platforms with a fully integrated power, cooling, racking, networking and management experience, backed by partnerships with numerous industry leaders. "We first worked with FSG in 2018 when we were building our own headquarters and data center in Plano, Texas," said John-David Enright, CEO of TMGcore. “We trust them to handle our electrical needs daily and, therefore, we trust they can do the same for our customers now, helping them decide upon the best electrical solutions for the installation and creation of ideal environments for their OTTO platforms.” The post TMGcore teams with FSG as Certified OTTO Ready Electrical Contractor appeared first on insideHPC.
|
by Rich Brueckner on (#4XV0V)
The MSST 2020 Mass Storage Conference has issued its Call for Papers. The event will be held May 4-8, 2020 at Santa Clara University, with the Research Track taking place May 7 and 8. "The conference focuses on current challenges and future trends in storage technologies. MSST 2020 will include a day of tutorials, two days of invited papers, and two days of peer-reviewed research papers. The conference will be held, once again, on the beautiful campus of Santa Clara University, in the heart of Silicon Valley." The post Call for Papers: MSST 2020 Mass Storage Conference in Santa Clara appeared first on insideHPC.
|
by Rich Brueckner on (#4XV0X)
In this video from ATPESC 2019, Rob Schreiber from Cerebras Systems looks back at historical computing advancements, Moore's Law, and what happens next. "A recent report by OpenAI showed that, between 2012 and 2018, the compute used to train the largest models increased by 300,000X. In other words, AI computing is growing 25,000X faster than Moore’s law at its peak. To meet the growing computational requirements of AI, Cerebras has designed and manufactured the largest chip ever built." The post Video: The Parallel Computing Revolution Is Only Half Over appeared first on insideHPC.
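The 300,000X figure implies a compute doubling time far shorter than Moore's law. A back-of-the-envelope sketch of what that growth rate means, assuming the 2012-2018 window spans roughly six years and a 24-month Moore's-law doubling (both assumptions ours, not from the talk):

```python
import math

# Growth in training compute reported by OpenAI: 300,000x between 2012 and 2018.
growth = 300_000
years = 6.0

# Implied doubling time: growth = 2**(months / T)  =>  T = months / log2(growth)
doublings = math.log2(growth)                  # ~18.2 doublings over the window
doubling_time_months = years * 12 / doublings

# Moore's law at its peak: roughly a doubling every 24 months.
moore_doublings = years * 12 / 24              # 3 doublings over the same window
moore_growth = 2 ** moore_doublings            # ~8x

print(f"AI compute doubling time: {doubling_time_months:.1f} months")
print(f"Moore's-law growth over the same period: {moore_growth:.0f}x")
```

Under these assumptions the implied doubling time comes out to roughly four months, versus only an 8x gain for Moore's law over the same six years.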
|
by Rich Brueckner on (#4XSRQ)
Darrin Johnson from NVIDIA gave this talk at the DDN User Group. "The NVIDIA DGX SuperPOD is a first-of-its-kind artificial intelligence (AI) supercomputing infrastructure that delivers groundbreaking performance, deploys in weeks as a fully integrated system, and is designed to solve the world's most challenging AI problems. When combined with DDN’s A3I data management solutions, NVIDIA DGX SuperPOD creates a real competitive advantage for customers looking to deploy AI at scale." The post NVIDIA DGX SuperPOD: Instant Infrastructure for AI Leadership appeared first on insideHPC.
|
by Rich Brueckner on (#4XSXQ)
Today Los Alamos National Laboratory announced that it is joining the cloud-based IBM Q Network as part of the Laboratory’s research initiative into quantum computing, including developing quantum computing algorithms, conducting research in quantum simulations, and developing education tools. "Joining the IBM Q Network will greatly help our research efforts in several directions, including developing and testing near-term quantum algorithms and formulating strategies for mitigating errors on quantum computers," said Irene Qualters, associate laboratory director for Simulation and Computation at Los Alamos. The post Los Alamos National Laboratory joins IBM Q Network for quantum computing appeared first on insideHPC.
|
by Rich Brueckner on (#4XRYK)
In this Let’s Talk Exascale podcast, Elaine Raybourn from Sandia National Laboratories describes how Productivity Sustainability Improvement Planning (PSIP) is bringing software development teams together at the Exascale Computing Project. PSIP brings software development activities together, enabling partnerships and the adoption of best practices across aggregate teams. The post Podcast: PSIP Brings Software Development teams together at the Exascale Computing Project appeared first on insideHPC.
|
by Rich Brueckner on (#4XRYN)
UC San Diego is seeking a Research Systems Integration Engineer in our Job of the Week. "Information Technology Services uses world-class services and technologies to empower UC San Diego's mission to transform California and the world as a student-centered, research-focused, service-oriented public university. As a strategic member of the UC San Diego community, IT Services embraces innovation in their delivery of IT services, infrastructure, applications, and support. IT Services is customer-focused and committed to collaboration, continuous improvement, and accountability." The post Job of the Week: Research Systems Integration Engineer at UC San Diego appeared first on insideHPC.
|
by staff on (#4XQFR)
Today Sano Genetics announced the company is using Lifebit CloudOS to power their free DNA sequencing platform, achieving a 35% increase in the speed of imputation analyses. Imputation is a statistical technique that fills in the gaps between sites measured by genotyping arrays, and is very useful for genetic genealogy and other forms of ‘citizen science’. Any participant who uploads their DTC genetic data to the Sano platform can download their imputed data within about 15 minutes. The post Sano Genetics Deploys Lifebit CloudOS for Direct-to-Consumer Genetics appeared first on insideHPC.
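The idea behind imputation can be sketched in a few lines. This toy example (our illustration, not Sano's or Lifebit's pipeline; production tools use haplotype-based statistical models rather than a per-site majority vote) fills ungenotyped sites using the most common value observed in a reference panel:

```python
# Toy genotype imputation: fill missing sites (None) with the most common
# alt-allele count observed at that site in a small reference panel.
# Real imputation uses haplotype models (e.g. HMMs), not this majority vote.
from collections import Counter

reference_panel = [        # rows = individuals, columns = SNP sites (0/1/2 alt alleles)
    [0, 1, 2, 0],
    [0, 1, 2, 1],
    [1, 1, 2, 0],
]

def impute(sample, panel):
    """Replace None entries with the most frequent value at that site in the panel."""
    filled = []
    for site, value in enumerate(sample):
        if value is None:
            counts = Counter(row[site] for row in panel)
            value = counts.most_common(1)[0][0]
        filled.append(value)
    return filled

sample = [0, None, 2, None]          # the array measured two sites; two are gaps
print(impute(sample, reference_panel))
```

The panel's consensus fills both gaps; the quality of real imputation depends on how well the reference panel matches the participant's ancestry.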
|
by staff on (#4XQFS)
MIRIS has entered into an agreement with LiquidCool Solutions (LCS), a world leader in rack-based immersion cooling technology for datacenters. The agreement gives MIRIS the exclusive right to distribute LCS technology on heat recovery projects. "Together we will develop the next generation of datacenter racks, with the highest density and the most effective heat recovery that the industry has ever seen. LCS brings to the plan a technology that is uniquely able to recapture more than 90% of rack input energy in the form of a 60°C liquid, and an emphasis will be placed on reusing valuable energy that would otherwise be wasted." The post MIRIS Teams with LiquidCool Solutions for Datacenter Heat Recovery appeared first on insideHPC.
|
by staff on (#4XQFV)
Researchers at the Department of Energy’s Oak Ridge National Laboratory have developed a quantum chemistry simulation benchmark to evaluate the performance of quantum devices and guide the development of applications for future quantum computers. “This work is a critical step toward a universal benchmark to measure the performance of quantum computers, much like the LINPACK metric is used to judge the fastest classical computers in the world.” The post ORNL Researchers Develop Quantum Chemistry Simulation Benchmark appeared first on insideHPC.
|
by staff on (#4XQFX)
Thanks to clouds, the latest climate models predict more global warming than their predecessors. Researchers at LLNL in collaboration with colleagues from the University of Leeds and Imperial College London have found that the latest generation of global climate models predict more warming in response to increasing carbon dioxide. "If global warming leads to fewer or thinner clouds, it causes additional warming above and beyond that coming from carbon dioxide alone. In other words, an amplifying feedback to warming occurs." The post Latest Climate Models Predict Thinner Clouds and More Global Warming appeared first on insideHPC.
|
by Rich Brueckner on (#4XP58)
Microway has deployed six NVIDIA DGX-2 supercomputer systems at Oregon State University. As an NVIDIA Partner Network HPC Partner of the Year, Microway installed the DGX-2 systems, integrated software, and transferred their extensive AI operational knowledge to the University team. "The University selected the NVIDIA DGX-2 platform for its immense power, technical support services, and the Docker images with NVIDIA's NGC containerized software. Each DGX-2 system delivers an unparalleled 2 petaFLOPS of AI performance." The post Microway Deploys NVIDIA DGX-2 supercomputers at Oregon State University appeared first on insideHPC.
|
by Rich Brueckner on (#4XNV4)
All qubits are not created equal. Over at the IBM Blog, Jerry Chow and Jay Gambetta write that the company's new Raleigh 28-qubit quantum computer has achieved the company’s goal of doubling its Quantum Volume. The development marks a shift from experimentation towards building Quantum Computers with a systems approach. The post IBM Doubles Quantum Volume with 28 Qubit Raleigh System appeared first on insideHPC.
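For context on what "doubling" means here: IBM defines Quantum Volume as QV = 2^n, where n is the width (and equal depth) of the largest square random circuit the machine executes successfully, so each doubling corresponds to handling one more qubit at one more layer of depth. A minimal illustration (the n values below are ours, chosen for illustration):

```python
# Quantum Volume: QV = 2**n, where n is the largest width-n, depth-n random
# circuit the device runs successfully (passing IBM's heavy-output test).
def quantum_volume(n: int) -> int:
    return 2 ** n

# Doubling QV each generation corresponds to increasing n by exactly one.
for n in range(4, 7):
    print(f"n = {n}: QV = {quantum_volume(n)}")
```

The exponential definition is why QV growth is slow and hard-won: each doubling requires simultaneously adding a qubit and maintaining fidelity one layer deeper.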
|
by staff on (#4XNJ5)
In October Bittware announced a strategic collaboration with Achronix to introduce the S7t-VG6 PCIe accelerator product – a PCIe card sporting the new Achronix 7nm Speedster7t FPGA. This new generation of accelerator products offers a range of capabilities including low-cost and highly flexible GDDR6 memory that aims to offer HBM-class memory bandwidth, high-performance machine learning processors and a new 2D network-on-chip for high bandwidth and energy-efficient data movement. The post New Achronix Bittware FPGA Accelerator Speeds Cloud, AI, and Machine Learning appeared first on insideHPC.
|
by Rich Brueckner on (#4XNJ6)
In this Let’s Talk Exascale podcast, Ryan Adamson from Oak Ridge National Laboratory describes how his role at the Exascale Computing Project revolves around software deployment and continuous integration at DOE facilities. “Each of the scientific applications that we have depends on libraries and underlying vendor software,” Adamson said. “So managing dependencies and versions of all of these different components can be a nightmare.” The post Podcast: Software Deployment and Continuous Integration for Exascale appeared first on insideHPC.
|
by staff on (#4XNJ8)
In this special guest feature, Robert Roe from Scientific Computing World writes that it is not always clear which HPC technology provides the most energy-efficient solution for a given application. "You need to understand your application as somebody that is coming into this from a greenfield perspective. If your application doesn’t parallelize well, or if it needs higher frequency processors, then the best thing you can do is pick the right processor and the right number of them so you are not wasting power on CPU cycles that are not being used." The post Technologies for Energy Efficient Supercomputing appeared first on insideHPC.
|
by staff on (#4XM2G)
IDC MarketScape has recognized WekaIO as a Major Player in this sector. According to the IDC MarketScape 2019, IDC believes that file-based storage (FBS) will continue to evolve to address the needs of traditional and next-generation workloads. The IDC MarketScape noted, “WekaFS was developed from the ground up to utilize the performance of NVMe flash technology to deliver the optimum performance and minimum latency for demanding and unpredictable AI workloads. WekaIO's customers claim satisfaction and that the offering holds to performance promises made by the vendor.” The post WekaIO Named Major Player in File-Based Storage by IDC MarketScape appeared first on insideHPC.
|
by Rich Brueckner on (#4XM2H)
Today IBM and Delta Air Lines announced a multi-year collaborative effort to explore the potential capabilities of quantum computing to transform experiences for customers and employees. "Delta joins more than 100 clients already experimenting with commercial quantum computing solutions alongside classical computers from IBM to tackle problems like risk analytics and options pricing, advanced battery materials and structures, manufacturing optimization, chemical research, logistics and more." The post Delta Partners with IBM to Explore Quantum Computing appeared first on insideHPC.
|
by Rich Brueckner on (#4XM2K)
Today AI chip startup Groq announced that their new Tensor processor has achieved 21,700 inferences per second (IPS) for ResNet-50 v2 inference. Groq’s level of inference performance exceeds that of other commercially available neural network architectures, with throughput that more than doubles the ResNet-50 score of the incumbent GPU-based architecture. ResNet-50 is an inference benchmark for image classification and is often used as a standard for measuring performance of machine learning accelerators. The post Groq AI Chip Benchmarks Leading Performance on ResNet-50 Inference appeared first on insideHPC.
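A throughput figure like this translates directly into an average time budget per inference, which is often the more intuitive number for latency-sensitive deployments. A quick conversion of the reported rate:

```python
# Convert a reported inference throughput to average time per inference.
ips = 21_700                              # ResNet-50 v2 inferences/second, as reported
seconds_per_inference = 1 / ips
microseconds = seconds_per_inference * 1e6

print(f"{microseconds:.1f} microseconds per inference on average")
```

That works out to roughly 46 microseconds per image on average; note this is an aggregate-throughput figure, which is not the same as the latency of a single batch-1 request.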
|
by staff on (#4XKT2)
Scientists are taking advantage of a $2.5 million NSF grant to develop a new framework for integrated geodynamic models that simulate the Earth's molten core. "Most physical phenomena can be described by partial differential equations that explain energy balances or loss," said Heister, an associate professor of mathematical sciences who will receive $393,000 of the overall funding. “My geoscience colleagues will develop the equations to describe the phenomena and I’ll write the algorithms that solve their equations quickly and accurately.” The post Simulating the Earth’s mysterious mantle appeared first on insideHPC.
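The energy-balance PDEs Heister describes are typified by the heat equation, dT/dt = alpha * d²T/dx². A minimal explicit finite-difference solver in one dimension gives a flavor of the numerics involved (our illustration only; production geodynamics codes solve far richer coupled equations on adaptive 3D meshes):

```python
# Minimal explicit finite-difference solver for the 1D heat equation,
# dT/dt = alpha * d2T/dx2, with fixed (Dirichlet) boundary temperatures.
# Illustrative sketch -- real geodynamics solvers are vastly more sophisticated.

def heat_step(T, alpha, dx, dt):
    """Advance temperatures one time step with a centered second difference."""
    new = T[:]
    for i in range(1, len(T) - 1):
        new[i] = T[i] + alpha * dt / dx**2 * (T[i-1] - 2*T[i] + T[i+1])
    return new

# Rod of 11 points, hot in the middle, ends held at 0.
T = [0.0] * 11
T[5] = 100.0
alpha, dx, dt = 1.0, 1.0, 0.25     # dt chosen below the stability limit dx**2 / (2*alpha)
for _ in range(200):
    T = heat_step(T, alpha, dx, dt)

print([round(t, 2) for t in T])     # the initial spike has diffused toward the fixed ends
```

The explicit scheme is only stable when alpha * dt / dx² <= 1/2, one small example of why Heister's algorithmic work matters as much as the physics.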
|
by Rich Brueckner on (#4XM2N)
In this podcast, the Radio Free HPC team looks at how Quantum Computing is overhyped and underestimated at the same time. "The episode starts out with Henry being cranky. It also ends with Henry being cranky. But between those two events, we discuss quantum computing and Shahin’s trip to the Q2B quantum computing conference in San Jose." The post Podcast: The Overhype and Underestimation of Quantum Computing appeared first on insideHPC.
|
by staff on (#4XJBY)
Today Atos announced a 4-year contract to supply its BullSequana XH2000 supercomputer to the University of Luxembourg. "The BullSequana XH2000 supercomputer will give researchers 1.5 times more computing capacity with a theoretical peak performance of 1.7 petaflops which will complement the existing supercomputing cluster. It will be equipped with AMD EPYC processors and Mellanox InfiniBand HDR technology, connected to a DDN storage environment." The post Atos to deploy AMD-powered AION Supercomputer at the University of Luxembourg appeared first on insideHPC.
|
by Rich Brueckner on (#4XJBZ)
Today Hyperion Research announced that the company is staffing up with two new analysts for continuing growth and new business opportunities. The analyst firm provides thought leadership and practical guidance for users, vendors, and other members of the HPC community by focusing on key market and technology trends across government, industry, commerce, and academia. The post Hyperion Research Expands Analyst Team appeared first on insideHPC.
|
by Rich Brueckner on (#4XJC1)
Dr. Alice-Agnes Gabriel from LMU is the winner of the 2020 PRACE Ada Lovelace Award for HPC for her outstanding contributions to HPC in Europe. "Dr. Alice-Agnes Gabriel uses numerical simulations coupled to experimental observations to increase our understanding of the underlying physics of earthquakes. The work includes wide scales and can improve our knowledge and safety against these natural phenomena," says Núria López, Chair of the PRACE Scientific Steering Committee. The post Dr. Alice-Agnes Gabriel from LMU wins Ada Lovelace Award for HPC appeared first on insideHPC.
|
by Rich Brueckner on (#4XJ1B)
In this video from SC19, Sam Mahalingam from Altair describes how the company is enhancing PBS Works software to ease the migration of HPC workloads to the Cloud. "Argonne National Laboratory has teamed with Altair to implement a new scheduling system that will be employed on the Aurora supercomputer, slated for delivery in 2021. PBS Works runs big — 50,000 nodes in one cluster, 10,000,000 jobs in a queue, and 1,000 concurrent active users." The post Altair PBS Works Steps Up to Exascale and the Cloud appeared first on insideHPC.
|
by staff on (#4XGSY)
Today Micron Technology announced that it has begun sampling DDR5 registered DIMMs, based on its industry-leading 1znm process technology, with key industry partners. DDR5, the most technologically advanced DRAM to date, will enable the next generation of server workloads by delivering more than an 85% increase in memory performance. DDR5 doubles memory density while improving reliability at a time when data center system architects are seeking to supply rapidly growing processor core counts with increased memory bandwidth and capacity. The post Micron steps up Memory Performance and Density with DDR5 appeared first on insideHPC.
|
by staff on (#4XGFT)
GIGABYTE is showcasing AI, Cloud, and Smart Applications this week at CES 2020 in Las Vegas. "GIGABYTE is renowned for its craftsmanship and dedication to innovating new technologies that are current with the time and helping humanity leap forward for more than 30 years. GIGABYTE's accomplishments in motherboards and graphics cards have set the standard for the industry to follow, and the quality and performance of its products have been the excellence that competitors look up to. GIGABYTE has leveraged the experience and know-how to establish a trusted reputation in data center expertise, and is responsible in supplying the hardware and support to some of the biggest companies involved in HPC and cloud & web hosting services, enabling their successes in the respective fields." The post GIGABYTE Brings AI and Cloud Solutions to CES 2020 appeared first on insideHPC.
|
by staff on (#4XGFW)
In this special guest feature, Joe Landman from Scalability.org writes that the move to cloud-based HPC is having some unexpected effects on the industry. "When you purchase a cloud HPC product, you can achieve productivity in time scales measurable in hours to days, where previously weeks to months was common. It cannot be overstated how important this is." The post Joe Landman on How the Cloud is Changing HPC appeared first on insideHPC.
|
by Rich Brueckner on (#4XG55)
Today Altair announced the acquisition of newFASANT, offering leading technology in computational and high-frequency electromagnetics. “By combining its people and software into our advanced solutions offerings, we are clearly emerging as the dominant player in high-frequency electromagnetics – technology that is critical for solving some of the world’s toughest engineering problems.” The post Altair Acquires newFASANT for High-Frequency Electromagnetics appeared first on insideHPC.
|
by Rich Brueckner on (#4XG57)
Karen Willcox from the University of Texas gave this Invited Talk at SC19. "This talk highlights how physics-based models and data together unlock predictive modeling approaches through two examples: first, building a Digital Twin for structural health monitoring of an unmanned aerial vehicle, and second, learning low-dimensional models to speed up computational simulations for design of next-generation rocket engines." The post Predictive Data Science for Physical Systems: From Model Reduction to Scientific Machine Learning appeared first on insideHPC.
|
by Rich Brueckner on (#4XEYR)
NVIDIA will host a full day HPC Summit this year at the GPU Technology Conference 2020. With a full day of plenary sessions and a developer track on high performance computing, the HPC Summit takes place Thursday, March 26 in San Jose, California. "The HPC Summit at GTC brings together HPC leaders, IT professionals, researchers, and developers to advance the state of the art of HPC. Explore content from different HPC communities, engage with experts, and learn about new trends and innovations." The post NVIDIA to host Full-Day HPC Summit at GPU Technology Conference appeared first on insideHPC.
|
by Rich Brueckner on (#4XEYT)
James Coomer from DDN gave this talk at the DDN User Group at SC19: Analytics, Multicloud, and the Future of the Datasphere. "We are adding serious data management, collaboration and security capabilities to the most scalable file solution in the world. EXA5 gives you mission critical availability whilst consistently performing at scale," said James Coomer, senior vice president of product, DDN. “Our 20 years’ experience in delivering the most powerful at-scale data platforms is all baked into EXA5. We outperform everything on the market and now we do so with unmatched capability.” The post Analytics, Multicloud, and the Future of the Datasphere appeared first on insideHPC.
|
by Rich Brueckner on (#4XDSF)
In this episode of Let’s Talk Exascale, Ulrike Meier Yang of LLNL describes the xSDK4ECP and hypre projects within the Exascale Computing Project. The increased number of libraries that exascale will need presents challenges. “The libraries are harder to build in combination, involving many variations of compilers and architectures, and require a lot of testing for new xSDK releases.” The post Podcast: Optimizing Math Libraries to Prepare Applications for Exascale appeared first on insideHPC.
|
by Rich Brueckner on (#4XDSH)
Stanford University is seeking a Research Computing Specialist in our Job of the Week. "As a Research Computing Specialist, you will draw on both deep technical knowledge and interpersonal skills to facilitate and accelerate academic research at the Stanford University Graduate School of Business (GSB). You will join a team of research analytics scientists, data engineers, and project managers on the Data, Analytics and Research Computing (DARC) team to support research at the GSB. Your clients will include GSB faculty and collaborators who are drawn from a broad spectrum of academic backgrounds, research interests, methodological specialties, and technical backgrounds. As a Research Computing Specialist, you will bring the ability to understand the ecosystem of research computing resources and partner with researchers to use these resources effectively. You should enjoy working directly with researchers, and be equally comfortable introducing novices to research computing systems and helping advanced users optimize their workflow." The post Job of the Week: Research Computing Specialist at Stanford University appeared first on insideHPC.
|
by Rich Brueckner on (#4XCG8)
In this Chip Chat podcast, Carey Kloss from Intel outlines the architecture and potential of the Intel Nervana NNP-T. He gets into major issues like memory and how the architecture was designed to avoid problems like becoming memory-locked, how the accelerator supports existing software frameworks like PaddlePaddle and TensorFlow, and what the NNP-T means for customers who want to keep an eye on power usage and lower TCO. The post Podcast: Advancing Deep Learning with Custom-Built Accelerators appeared first on insideHPC.
|
by Rich Brueckner on (#4XCGA)
Researchers are using the Summit Supercomputer at ORNL to simulate the massive dataflow of the future SKA telescope. "The SKA simulation on Summit marks the first time radio astronomy data have been processed at such a large scale and proves that scientists have the expertise, software tools, and computing resources that will be necessary to process and understand real data from the SKA." The post Simulating SKA Telescope’s Massive Dataflow using the Summit Supercomputer appeared first on insideHPC.
|