Feed: High-Performance Computing News Analysis | insideHPC

Link: https://insidehpc.com/
Feed: http://insidehpc.com/feed/
Updated: 2024-11-25 06:00
Exascale: The Movie
In this video from HPE, researchers describe how exascale computing will advance science and improve the quality of life for all. "Why is the U.S. government throwing down this gauntlet? Many countries are engaged in what has been referred to as a race to exascale. But getting there isn’t just for national bragging rights. Getting to exascale means reaching a new frontier for humanity, and the opportunity to potentially solve humanity’s most pressing problems." The post Exascale: The Movie appeared first on insideHPC.
New Book: Using OpenMP – The Next Step
Ruud van der Pas from Oracle has co-authored a new book on OpenMP. It covers the OpenMP 4.5 specifications, with a focus on the practical usage of the language features and constructs. "We start where the specifications end and explain the rationale behind the features. In particular the functionality and how a feature may be used in an application."
Video: Why your school should enter the ISC Student Cluster Competition
In this video, future HPC professionals discuss their participation in the ISC Student Cluster Competition. "Now in its seventh year, the Student Cluster Competition enables international teams to take part in a real-time contest focused on advancing STEM disciplines and HPC skills development. To take home top honors, the teams will have to showcase systems of their own design, adhering to strict power constraints and achieve the highest performance across a series of standard HPC benchmarks and applications."
NVIDIA Tesla V100 GPUs Power New TYAN Server
Today TYAN showcased their latest GPU-optimized platforms that target the high performance computing and artificial intelligence sectors at the GPU Technology Conference in Munich. "TYAN’s new GPU computing platforms are designed to provide efficient parallel computing for the analytics of vast amounts of data. By incorporating NVIDIA's latest Tesla V100 GPU accelerators, TYAN provides our customers with the power to accelerate both high performance and cognitive computing workloads,” said Danny Hsu, Vice President of MiTAC Computing Technology Corporation's TYAN Business Unit.
Video: Revolution in Computer and Data-enabled Science and Engineering
Ed Seidel from the University of Illinois gave this talk at the 2017 Argonne Training Program on Extreme-Scale Computing. The theme of his talk centers around the need for interdisciplinary research. "Interdisciplinary research (IDR) is a mode of research by teams or individuals that integrates information, data, techniques, tools, perspectives, concepts, and/or theories from two or more disciplines or bodies of specialized knowledge to advance fundamental understanding or to solve problems whose solutions are beyond the scope of a single discipline or area of research practice."
OSS Showcases New HDCA Platforms with Volta GPUs at GTC Europe
At GTC Europe this week, One Stop Systems (OSS) will exhibit two of the most powerful GPU accelerators for data scientists and deep learning researchers, the CA16010 and SCA8000. “NVIDIA GPU computing is helping researchers and engineers take on some of the world's hardest challenges,” said Paresh Kharya, group product marketing manager of Accelerated Computing at NVIDIA. “One Stop Systems' customers can now tap into the power of our Volta architecture to accelerate their deep learning and high performance computing workloads.”
Google Compute Engine offers VMs with 96 Skylake CPUs and 624GB of Memory
Google Compute Engine now offers new VMs with the most Skylake vCPUs of any cloud provider. "Skylake in turn provides up to 20% faster compute performance, 82% faster HPC performance, and almost 2X the memory bandwidth compared with the previous generation Xeon. Need even more compute power or memory? We’re also working on a range of new, even larger VMs, with up to 4TB of memory."
Accelerating Quantum Chemistry for Drug Discovery
In the pharmaceutical industry, drug discovery is a long and expensive process. This sponsored post from NVIDIA explores how the University of Florida and University of North Carolina developed the ANAKIN-ME neural network engine to produce computationally fast quantum mechanical simulations with high accuracy at a very low cost to speed drug discovery and exploration.
Podcast: Intel Omni-Path adds Performance and Scalability
"Intel OPA, part of Intel Scalable System Framework, is a high-performance fabric enabling the responsiveness, throughput, and scalability required by today's and tomorrow's most-demanding high performance computing workloads. In this interview, Misage talks about market uptake in Intel OPA's first year of availability, reports on some of the first HPC deployments using the Intel Xeon Scalable platform and Intel OPA, and gives a sneak peek of what Intel OPA will be talking about at SC17."The post Podcast: Intel Omni-Path adds Performance and Scalalability appeared first on insideHPC.
Future HPC Leaders Gather at Argonne Training Program on Extreme-Scale Computing
Over at ALCF, Andrea Manning writes that the recent Argonne Training Program on Extreme-Scale Computing brought together HPC practitioners from around the world. "You can’t get this material out of a textbook,” said Eric Nielsen, a research scientist at NASA’s Langley Research Center. Added Johann Dahm of IBM Research, “I haven’t had this material presented to me in this sort of way ever.”
PRACE Awards 1.7 Billion Core Hours for Research Projects in Europe
Today the European PRACE initiative announced that 46 awards from their recent 15th Call for Proposals total nearly 1.7 billion core hours. The 46 awarded projects are led by principal investigators from 12 different European countries. "Of local interest this time around, the awarded projects involve co-investigators from the USA (7) and Russia (2). All information and the abstracts of the projects awarded under the 15th PRACE Call for Proposals are now available online."
Radio Free HPC Previews the SC17 Plenary on Smart Cities
In this podcast, the Radio Free HPC team looks at Smart Cities. As the featured topic this year at the SC17 Plenary, the Smart Cities initiative looks to improve the quality of life for residents using urban informatics and other technologies to improve the efficiency of services.
Fujitsu to Build 37 Petaflop AI Supercomputer for AIST in Japan
Nikkei in Japan reports that Fujitsu is building a 37 Petaflop supercomputer for the National Institute of Advanced Industrial Science and Technology (AIST). "Targeted at Deep Learning workloads, the machine will power the AI research center at the University of Tokyo's Chiba Prefecture campus. The new Fujitsu system will comprise 1,088 servers, 2,176 Intel Xeon processors, and 4,352 NVIDIA GPUs."
Video: Argonne’s Theta Supercomputer Architecture
Scott Parker gave this talk at the Argonne Training Program on Extreme-Scale Computing. "Designed in collaboration with Intel and Cray, Theta is a 9.65-petaflops system based on the second-generation Intel Xeon Phi processor and Cray’s high-performance computing software stack. Capable of nearly 10 quadrillion calculations per second, Theta will enable researchers to break new ground in scientific investigations that range from modeling the inner workings of the brain to developing new materials for renewable energy applications."
COMSOL Conference Showcases Next-Gen Multiphysics
Attendees of the COMSOL Conference in Boston this week were treated to a sneak preview of future developments in the popular multiphysics software from Svante Littmarck, President and CEO of COMSOL. The conference featured a robust technical program with approximately 300 attendees. "Our customers are at the forefront of innovation behind the products that will shape our future,” says Littmarck. “We work tirelessly to support their efforts by increasing the modeling power of the COMSOL software and by making collaboration among simulation experts and their colleagues the core of everything we do. This annual event is our opportunity to connect and exchange knowledge within the COMSOL community on multiphysics modeling.”
Job of the Week: HPC Systems Engineer at PDT Partners in NYC
The PDT team is seeking a highly talented Linux HPC Systems Administrator to enhance and support our research computing clusters. As part of the HPC/Grid team, you will be responsible for improving, extending, and maintaining the HPC/Grid infrastructure, and helping provide a world-class computing and big data environment for PDT’s Quantitative Researchers. You will interface closely with research teams using the Grid, the entire Linux engineering group, software engineers, and PDT’s in-house monitoring team. You will also have the opportunity to serve as PDT’s subject matter expert for various HPC technologies.
Video: Scientel Runs Record Breaking Calculation on Owens Cluster at OSC
In this video, Norman Kutemperor from Scientel describes how his company ran a record-setting big data problem on the Owens supercomputer at OSC. "The Ohio Supercomputer Center recently displayed the power of its new Owens Cluster by running the single-largest scale calculation in the Center’s history. Scientel IT Corp used 16,800 cores of the Owens Cluster on May 24 to test database software optimized to run on supercomputer systems. The seamless run created 1.25 Terabytes of synthetic data."
Mapping of the Opportunities for Government, Academia, and Industry Engagement in HPC
Mark Sims (DoD) and Bob Sorensen from Hyperion Research gave this talk at the HPC User Forum in Milwaukee. Here, they demonstrate an exciting new tool that aims to map HPC centers across the USA.
Advanced Clustering Technologies Deploys Lawrence Supercomputer at University of South Dakota
Today Advanced Clustering Technologies announced the deployment of a new supercomputer cluster at the University of South Dakota. The machine is named "Lawrence" after Nobel Laureate and University of South Dakota alumnus E. O. Lawrence. "Lawrence makes it possible for us to accelerate scientific progress while reducing the time to discovery,” said Doug Jennewein, the University’s Director of Research Computing. “University researchers will be able to achieve scientific results not previously possible, and our students and faculty will become more engaged in computationally assisted research.”
Video: MareNostrum Supercomputer Powers LIGO Project with 20 Million Processor Hours
Today the Barcelona Supercomputing Center announced it has allocated 20 million processor hours to the LIGO project, the most recent winner of the Nobel Prize for Physics. “The importance of MareNostrum for our work is very easy to explain: without it we could not do the kind of work we do; we would have to change our direction of research.”
Exploring Evolutionary Relationships through CIPRES
Researchers are exploring the Tree of Life with the help of the CIPRES portal at the San Diego Supercomputer Center. “As a community-built resource, CIPRES addresses what the scientists really want and need to do in the real world of research,” said Mishler. "Aside from increasing our understanding of the evolutionary relationships of this planet’s diverse range of species, the research also has yielded results of critical importance to the health and welfare of humans."
New Book: OpenACC for Programmers
Sunita Chandrasekaran and Guido Juckeland have published a new book on Programming with OpenACC. "Scientists and technical professionals can use OpenACC to leverage the immense power of modern GPUs without the complexity traditionally associated with programming them. OpenACC for Programmers integrates contributions from 19 leading parallel-programming experts from academia, public research organizations, and industry."
Dr. Marius Stan Presents: Uncertainty of Thermodynamic Data – Humans and Machines
Marius Stan from Argonne gave this talk at the 2017 Argonne Training Program on Extreme-Scale Computing. Famous for his part-time acting role on the Breaking Bad TV show, Marius Stan is a physicist and a chemist interested in non-equilibrium thermodynamics, heterogeneity, and multi-scale computational science for energy applications. The goal of his research is to discover or design materials, structures, and device architectures for nuclear energy and energy storage.
Internet2 Technology Exchange Meeting Comes to San Francisco Oct. 15-18
Internet2 will host its annual technical meeting, the Technology Exchange, for the research and education community from October 15-18 in San Francisco. The event will convene over 650 attendees from more than 250 institutions, 17 countries, and 46 states including network engineers, technologists, architects, scientists, operators, and administrators in the fields of advanced networking, trust and identity, information security, applications for research, and web-scale computing.
Call For Research Papers: ISC 2018
ISC 2018 has issued their Call for Research Papers. "Submissions are now open for the ISC 2018 conference research paper sessions, which aim to provide first-class opportunities for engineers and scientists in academia, industry, and government to present and discuss issues, trends, and results that will shape the future of high performance computing. Submissions will be accepted through Dec. 22, 2017. The research paper sessions will be held from Monday, June 25, through Wednesday, June 27, 2018."
Parallel Applications Speed Up Manufacturing Product Development
The product design process has undergone a significant transformation with the availability of supercomputing power at traditional workstation prices. With over 100 threads available to an application in compact two-socket servers, the scalability of applications used as part of the product design and development process is just a keyboard away for a wide range of engineers.
SC17 Highlights Nobel Prize Winning LIGO Collaboration
In this video from SC17, researchers discuss the role of HPC in the Nobel Prize-winning discovery of gravitational waves, originally theorized 100 years ago by Albert Einstein in his general theory of relativity. "We are only now beginning to hear the vibrations of space-time that are all around us—we just needed a better ear. And when we detect that, we’re detecting the vibrations of everything that has ever moved in the universe. This is real. This is really there, and we’ve never noticed it until now."
Clemson to complete $1 million upgrade of Palmetto HPC Cluster
A $1-million upgrade to Clemson University’s Palmetto Cluster is expected to help researchers quicken the pace of scientific discovery and technological innovation in a broad range of fields, from developing new medicines to creating advanced materials. "New hardware that could be in place as early as spring will add even more power to the Palmetto Cluster. Even before the upgrade, it rated eighth in the nation among academic supercomputers, according to the twice-annual TOP500 list of the world’s most powerful computers."
HPC Powers High Pressure Casting Simulation at Shiloh Industries
Hal Gerber from Shiloh Industries gave this talk at the HPC User Forum in Milwaukee. "Shiloh is the global leader in high-integrity, high-vacuum, high-pressure die castings, providing high ductility in aluminum and magnesium. Shiloh Industries is a global innovative solutions provider focusing on 'lightweighting' technologies that provide environmental and safety benefits to the mobility market."
Argonne’s Data Science Program Doubles Down with New Projects
Today Argonne announced that the ALCF Data Science Program (ADSP) has awarded computing time to four new projects, bringing the total number of ADSP projects for 2017-2018 to eight. All four of the program’s inaugural projects were also renewed. "The new project award recipients include an industry-based deep learning project; a national laboratory-based cosmology workflow project; and two university-based projects: one that uses machine-learning for materials discovery, and a deep-learning computer science project."
Fighting the West Nile Virus with HPC & Analytical Ultracentrifugation
Researchers are using new techniques with HPC to learn more about how the West Nile virus replicates inside the brain. "Over several years, Demeler has developed analysis software for experiments performed with analytical ultracentrifuges. The goal is to facilitate the extraction of all of the information possible from the available data. To do this, we developed very high-resolution analysis methods that require high performance computing to access this information," he said. "We rely on HPC. It's absolutely critical."
NVIDIA Tesla GPUs Come to Oracle Bare Metal Cloud
Over at the NVIDIA Blog, Kristin Bryson writes that the Oracle Bare Metal Cloud now offers Tesla P100 GPUs for technical computing. "The move underscores growing demand for public-cloud access to our GPU computing platform from an increasingly wide set of enterprise users. Oracle’s massive customer base means that a broad range of businesses across many industries will have access to accelerated computing to harness the power of AI, accelerated analytics and high performance computing."
Jonathan Poggie from Purdue Wins DoD Computing Award
Associate Professor Jonathan Poggie and his team from Purdue have received a large research grant from the U.S. Department of Defense for supercomputing resources. The award enables science and technology research that would not be possible without extraordinary computer resources. "Poggie is the principal investigator for a new U.S. Department of Defense high-performance computing modernization program beginning in October, entitled “Prediction of Hypersonic Laminar-Turbulent Transition through Direct Numerical Simulation.” The project is focused on making conventional hypersonic wind tunnels more useful for vehicle design by helping designers work through the noise and turbulence present in the tunnels and allowing them to more accurately interpret the results of the wind tunnel tests."
Engility To Provide NOAA With HPC Expertise
Today Engility announced $14 million in task order awards from NOAA’s Geophysical Fluid Dynamics Laboratory. Engility scientists will conduct HPC software development and optimization, help users gain scientific insights, and maintain cyber security controls on NOAA’s R&D High Performance Computing System. These services assist NOAA GFDL in enhancing and advancing their HPC capability to explore and understand climate and weather. "As we saw with Hurricanes Harvey and Irma, a deeper understanding of climate and weather are critical to America’s preparedness, infrastructure and security stance,” said Lynn Dugle, CEO of Engility. “Engility has been at the forefront of leveraging HPC to advance scientific discovery and solve the toughest engineering problems. HPC is, and will continue to be, an area of high interest and value among our customers as they seek to analyze huge and ever-expanding data sets.”
Take Our HPC & AI Survey and Win an Echo Show Device
The rise of AI could potentially spur huge growth for the High Performance Computing market, but what kinds of results are your peers already getting right now? There is one way to find out--by taking our HPC & AI Survey. In return, we'll send you a free report with the results and enter your name in a drawing to win one of two Echo Show devices with Amazon Alexa technology.
A Vision for Exascale: Simulation, Data and Learning
Rick Stevens gave this talk at the recent ATPESC training program. "The ATPESC program provides intensive, two weeks of training on the key skills, approaches, and tools to design, implement, and execute computational science and engineering applications on current high-end computing systems and the leadership-class computing systems of the future. As a bridge to that future, this two-week program fills the gap that exists in the training computational scientists typically receive through formal education or other shorter courses."
Supporting Diverse HPC Workloads on a Single Cluster
High Performance Computing is extending its reach into new areas. Not only are modeling and simulation being used more widely, but deep learning and other high performance data analytics (HPDA) applications are becoming essential tools across many disciplines. This sponsored post from Intel explores how Plymouth University's High Performance Computer Centre (HPCC) used Intel HPC Orchestrator to support diverse workloads as it recently deployed a new 1,500-core cluster.
FPGAs Power New Intel Programmable Acceleration Cards
Today, Intel announced a comprehensive hardware and software platform solution to enable faster deployment of customized field programmable gate array (FPGA)-based acceleration of networking, storage and computing workloads. "Intel is making it easier for server equipment makers such as Dell EMC to exploit FPGA technology for data acceleration as a ready-to-use platform,” said Dan McNamara, corporate vice president and general manager of Intel’s Programmable Solutions Group. “With our ecosystem partners, we are enabling the industry with point solutions with a substantial boost in performance while preserving power and cost budgets.”
Exascale Computing to Accelerate Clean Fusion Energy
In this special guest feature, Jon Bashor from LBNL writes that Exascale computing will accelerate the push toward clean fusion energy. "Turning this from a promising technology into a mainstream scientific tool depends critically on high-performance, high-fidelity modeling of complex processes that develop over a wide range of space and time scales."
IEEE Recognizes Three Early Career Researchers in HPC
Today the IEEE Computer Society announced the winners of the IEEE-CS Technical Consortium on HPC Award for Excellence for Early Career Researchers in High Performance Computing. The TCHPC Award recognizes up to three individuals who have made outstanding, influential, and potentially long-lasting contributions in the field of high performance computing within five years of receiving their PhD degree as of January 1 of the year of the award.
How Can We Bring Apps to Racks?
In this special guest feature, Dr Rosemary Francis from Ellexus describes why the customized nature of HPC is not a sustainable path forward for the next generation. "The downside is that many of our systems and tools are inaccessible to non-expert users. For example, deep learning is bringing more and more scientists closer towards HPC, but while they bring their knowledge, they also bring their high expectations for what they believe IT can do and not necessarily an understanding of how it works."
Multiscale Dataflow Computing: Competitive Advantage at the Exascale Frontier
"This talk will explain the motivation behind dataflow computing to escape the end of frequency scaling in the push to exascale machines, introduce the Maxeler dataflow ecosystem including MaxJ code and DFE hardware, and demonstrate the application of dataflow principles to a specific HPC software package (Quantum ESPRESSO)."The post Multiscale Dataflow Computing: Competitive Advantage at the Exascale Frontier appeared first on insideHPC.
Scaling Up and Out with ARM Architectures
Vijay Nagarajan from the University of Edinburgh gave this talk at the ARM Research Summit. "The second annual Arm Research Summit is an academic summit to discuss future trends and disruptive technologies across all sectors of computing. The Summit includes talks from the leaders in their research fields, demonstrations, networking opportunities and the chance to interact and discuss projects with members of Arm Research."
HPC4Mfg Program Seeks New Projects
The High Performance Computing for Manufacturing (HPC4Mfg) program in the Energy Department’s Advanced Manufacturing Office (AMO) announced today their intent to issue their fifth solicitation in January 2018 to fund projects that allow manufacturers to use high-performance computing resources at the Department of Energy’s national laboratories to tackle major manufacturing challenges.
Bringing Diversity to Computational Science
"Computing is one of the least diverse science, technology, engineering, and mathematics (STEM) fields, with an under-representation of women and minorities, including African Americans and Hispanics. Leveraging this largely untapped talent pool will help address our nation’s growing demand for data scientists. Computational approaches for extracting insights from big data require the creativity, innovation, and collaboration of a diverse workforce."The post Bringing Diversity to Computational Science appeared first on insideHPC.
OpenHPC: Project Overview and Updates
Karl Schulz from Intel gave this talk at the MVAPICH User Group. "There is a growing sense within the HPC community for the need to have an open community effort to more efficiently build, test, and deliver integrated HPC software components and tools. To address this need, OpenHPC launched as a Linux Foundation collaborative project in 2016 with combined participation from academia, national labs, and industry. The project's mission is to provide a reference collection of open-source HPC software components and best practices in order to lower barriers to deployment and advance the use of modern HPC methods and tools."
Bright Computing Powers SingleParticle.com for cryo-EM
Today Bright Computing announced a reseller agreement with San Diego-based SingleParticle.com. The company specializes in turn-key HPC infrastructure designed for high performance and low total cost of ownership (TCO), serving the global research community of cryo-electron microscopy (cryoEM). "With Bright, the management of an HPC cluster becomes very straightforward, empowering end users to administer their workloads, rather than relying on HPC experts," said Dr. Clara Cai, Manager at SingleParticle.com. "We are confident that with Bright’s technology, our customers can maintain our turn-key cryoEM cluster with little to no prior HPC experience.”
A Perspective on HPC-enabled AI
Tim Barr from Cray gave this talk at the HPC User Forum in Milwaukee. "Cray’s unique history in supercomputing and analytics has given us front-line experience in pushing the limits of CPU and GPU integration, network scale, tuning for analytics, and optimizing for both model and data parallelization. Particularly important to machine learning is our holistic approach to parallelism and performance, which includes extremely scalable compute, storage and analytics."
Supermicro steps up with Optimized Systems for NVIDIA Tesla V100 GPUs
Today Supermicro announced support for NVIDIA Tesla V100 PCI-E and V100 SXM2 GPUs on its industry leading portfolio of GPU server platforms. “With our latest innovations incorporating the new NVIDIA V100 PCI-E and V100 SXM2 GPUs in performance-optimized 1U and 4U systems with next-generation NVLink, our customers can accelerate their applications and innovations to help solve the world’s most complex and challenging problems.”
Sowing Seeds of Quantum Computation at Berkeley Lab
"Berkeley Lab’s tradition of team science, as well as its proximity to UC Berkeley and Silicon Valley, makes it an ideal place to work on quantum computing end-to-end,” says Jonathan Carter, Deputy Director of Berkeley Lab Computing Sciences. “We have physicists and chemists at the lab who are studying the fundamental science of quantum mechanics, engineers to design and fabricate quantum processors, as well as computer scientists and mathematicians to ensure that the hardware will be able to effectively compute DOE science.”The post Sowing Seeds of Quantum Computation at Berkeley Lab appeared first on insideHPC.