by Sarah Rubenoff on (#4D8VR)
In today's markets, a successful HPC cluster can be a formidable competitive advantage, and many organizations are turning to these systems to stay competitive. That said, these systems are inherently complex and have to be built, deployed, and managed properly to realize their full potential. A new report from Bright Computing explores best practices for HPC clusters. The post Best Practices for Building, Deploying & Managing HPC Clusters appeared first on insideHPC.
|
High-Performance Computing News Analysis | insideHPC
Link | https://insidehpc.com/ |
Feed | http://insidehpc.com/feed/ |
Updated | 2024-11-24 12:30 |
by Rich Brueckner on (#4D98K)
Kelly Gaither from TACC gave this talk at the HPC User Forum. "Computing4Change is a competition empowering people to create change through computing. You may have seen articles on the anticipated shortfall of engineers, computer scientists, and technology designers to fill open jobs. Numbers from the Report to the President in 2012 (President Obama’s Council of Advisors on Science and Technology) show a shortfall of one million available workers to fill STEM-related jobs by 2020." The post The Computing4Change Program takes on STEM and Workforce Issues appeared first on insideHPC.
|
by staff on (#4D95G)
Today Quobyte announced that the company's Data Center File System is the first distributed file system to offer a TensorFlow plug-in, providing increased throughput performance and linear scalability for ML-powered applications to enable faster training across larger data sets while achieving higher-accuracy results. "By providing the first distributed file system with a TensorFlow plug-in, we are ensuring as much as a 30 percent faster throughput performance improvement for ML training workflows, helping companies better meet their business objectives through improved operational efficiency," said Bjorn Kolbeck, Quobyte CEO. The post Quobyte Distributed File System adds TensorFlow Plug-In for Machine Learning appeared first on insideHPC.
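The announcement does not spell out the plug-in's interface, but the workload it targets, feeding TensorFlow training jobs from a shared parallel file system, commonly looks like the input pipeline sketched below. The mount path, file glob, and record schema are illustrative assumptions, not Quobyte specifics.

```python
# Illustrative only: a generic tf.data input pipeline reading training data
# from a POSIX mount point of a distributed file system. The mount path and
# record layout below are hypothetical, not Quobyte's documented interface.
import tensorflow as tf

DATA_GLOB = "/mnt/dfs/training/*.tfrecord"  # hypothetical mount path

def parse_example(serialized):
    # Assumed record schema: a raw JPEG byte string plus an integer label.
    features = {
        "image": tf.io.FixedLenFeature([], tf.string),
        "label": tf.io.FixedLenFeature([], tf.int64),
    }
    parsed = tf.io.parse_single_example(serialized, features)
    image = tf.io.decode_jpeg(parsed["image"], channels=3)
    return tf.image.resize(image, [224, 224]) / 255.0, parsed["label"]

dataset = (
    tf.data.Dataset.list_files(DATA_GLOB, shuffle=True)
    # Parallel file reads are what let a parallel file system deliver throughput.
    .interleave(tf.data.TFRecordDataset, num_parallel_calls=tf.data.AUTOTUNE)
    .map(parse_example, num_parallel_calls=tf.data.AUTOTUNE)
    .batch(256)
    .prefetch(tf.data.AUTOTUNE)   # overlap I/O with GPU compute
)
```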
|
by Rich Brueckner on (#4D90B)
Mark Govett from NOAA gave this talk at the GPU Technology Conference. "We'll discuss the revolution in computing, modeling, data handling and software development that's needed to advance U.S. weather-prediction capabilities in the exascale computing era. Creating prediction models to cloud-resolving 1 KM-resolution scales will require an estimated 1,000-10,000 times more computing power, but existing models can't exploit exascale systems with millions of processors. We'll examine how weather-prediction models must be rewritten to incorporate new scientific algorithms, improved software design, and use new technologies such as deep learning to speed model execution, data processing, and information processing." The post Video: Advancing U.S. Weather Prediction Capabilities with Exascale HPC appeared first on insideHPC.
|
by staff on (#4D90D)
Today Wolfram Research released Version 12 of Mathematica for advanced data science and computational discovery. "After three decades of continuous R&D and the introduction of Mathematica Version 1.0, Wolfram Research has released its most powerful software offering with Version 12 of Wolfram Language, the symbolic backbone of Mathematica. The latest version includes over a thousand new functions and features for multiparadigm data science, automated machine learning, and blockchain manipulation for modern software development and technical computing." The post Wolfram Research Releases Mathematica Version 12 for Advanced Data Science appeared first on insideHPC.
|
by Rich Brueckner on (#4D6Q9)
In this Big Compute podcast, Gabriel Broner hosts Mike Hollenbeck, founder and CTO at Optisys. Optisys is a startup that is changing the antenna industry. Using HPC in the cloud and 3D printing, they are able to design customized antennas that are much smaller, lighter, and higher performing than traditional antennas. The post Podcast: Rescale powers Innovation in Antenna Design appeared first on insideHPC.
|
by Rich Brueckner on (#4D6J9)
The good folks at Sylabs have added plugin support to Singularity, an open source container platform designed for scientific and HPC environments. As this post from February 2018 indicates, plugin support in Singularity has been on our minds for some time. After the successful reimplementation of the Singularity core in a combination of the Go […] The post Video: Singularity adds Plugin Support appeared first on insideHPC.
|
by Rich Brueckner on (#4D6JA)
The EuroMPI conference has issued its Call for Papers. The event takes place September 10-13 in Zurich, Switzerland. "Since 1994, the EuroMPI conference has been the preeminent meeting for users, developers and researchers to interact and discuss new developments and applications of message-passing parallel computing, in particular in and related to the Message Passing Interface (MPI). This includes parallel programming interfaces, libraries and languages, architectures, networks, algorithms, tools, applications, and High Performance Computing with particular focus on quality, portability, performance and scalability." The post Call for Papers: EuroMPI Conference in Zurich appeared first on insideHPC.
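For readers new to the programming model the conference centers on: MPI coordinates many ranks (processes) that exchange messages over a communicator. Below is a minimal sketch using the mpi4py Python bindings, one of many language interfaces to MPI; the script name in the comment is just an example.

```python
# Minimal MPI example with mpi4py; launch with something like:
#   mpirun -np 4 python hello_mpi.py
from mpi4py import MPI

comm = MPI.COMM_WORLD      # communicator spanning all launched ranks
rank = comm.Get_rank()     # this process's rank (0 .. size-1)
size = comm.Get_size()     # total number of ranks

# Collective operation: every rank contributes its rank number and
# every rank receives the global sum via MPI allreduce.
total = comm.allreduce(rank, op=MPI.SUM)
print(f"rank {rank} of {size}: sum of all ranks = {total}")
```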
|
by Rich Brueckner on (#4D6JC)
Today Univa launched HPC Cloud Migration, a new portal for news, events, and resources centered around High Performance Computing. "The high-performance computing industry is at an exciting growth stage, fueled by new application deployment models (such as AI and machine learning), new cloud-service offerings and advances in management software. HPC is all about scale and speed, with users pushing to accelerate their most complex HPC workloads’ time to completion. As a result, IT organizations are looking to maximize their HPC computing resources by harnessing the cloud." The post Univa Launches HPC Cloud Migration News Portal appeared first on insideHPC.
|
by staff on (#4D6DN)
Today Fujitsu announced that it has completed the design of the Post-K supercomputer for deployment at RIKEN in Japan. While full production of the machine is not scheduled until 2021-2022, Fujitsu disclosed plans to productize the Post-K technologies and begin global sales in the second half of fiscal 2019. "Reaching the production milestone marks a significant achievement for Post-K and we are excited to see the potential for broader deployment of Arm-based Fujitsu technologies in support of HPC and AI applications." The post Fujitsu to Productize Post-K Supercomputer Technologies appeared first on insideHPC.
|
by Rich Brueckner on (#4D4YJ)
In this video from the HPC User Forum, Henry Newman from Seagate Government Solutions leads a panel discussion on Metadata and Archiving at Scale. "Metadata is the key to keeping track of all this unstructured scientific data. It is “data about data.” It makes scientific data easy to find, track, share, move and manage – at low cost. Unfortunately, today’s high-capacity storage systems only provide bare-bones metadata consisting of as little as file name, owner and creation/access timestamps. Data intensive scientific workflows need supplemental enhanced metadata, along with access rights and security safeguards." The post Panel Discussion: Metadata and Archiving at Scale appeared first on insideHPC.
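As a purely hypothetical illustration of what such enhanced metadata might capture beyond bare file-system attributes, the sketch below defines a simple record type. The field names are invented for illustration and do not reflect any particular product or standard discussed in the panel.

```python
# Hypothetical "enhanced metadata" record for a scientific data object,
# going beyond the bare file-system attributes (name, owner, timestamps).
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class DatasetRecord:
    path: str                          # location in the archive
    owner: str                         # as in basic file-system metadata
    created: str                       # ISO-8601 timestamp
    instrument: str = ""               # e.g. telescope, sequencer, simulation code
    experiment_id: str = ""            # links the file to a campaign or run
    checksum: str = ""                 # integrity check for long-term archiving
    access_policy: str = "internal"    # access rights / security safeguard
    keywords: List[str] = field(default_factory=list)
    provenance: Dict[str, str] = field(default_factory=dict)  # software versions, inputs

record = DatasetRecord(
    path="/archive/climate/run042/output.nc",
    owner="jdoe",
    created="2019-04-01T12:00:00Z",
    instrument="climate simulation",
    experiment_id="run042",
    keywords=["climate", "ensemble"],
)
```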
|
by staff on (#4D4YM)
Today AI startup Wave Computing announced its new TritonAI 64 platform, which integrates a triad of powerful technologies into a single, future-proof intellectual property (IP) licensable solution. Wave’s TritonAI 64 platform delivers 8-to-32-bit integer-based support for high-performance AI inferencing at the edge now, with bfloat16 and 32-bit floating point-based support for edge training in the future. The post Wave Computing Launches TritonAI 64 Platform for High-Speed Inferencing appeared first on insideHPC.
|
by staff on (#4D37Q)
The UT Southwestern Medical Center in Dallas is seeking a Computational Scientist in our Job of the Week. "The Computational Scientist will support faculty and students in adapting computational strategies to the specific features of the HPC infrastructure. The successful candidate will work with a range of systems and technologies such as compute clusters, parallel file systems, high-speed interconnects, GPU-based computing and database servers." The post Job of the Week: Computational Scientist at UT Southwestern Medical Center appeared first on insideHPC.
|
by staff on (#4D32X)
In this podcast, the Radio Free HPC team looks at news from the GPU Technology Conference. "Dan has been attending GTC since well before it became the big and important conference that it is today. We get a quick update on what was covered: the long keynote, automotive and robotics, the Mellanox acquisition, how a growing fraction of enterprise applications will be AI." The post Podcast: Enterprises go HPC at GPU Technology Conference appeared first on insideHPC.
|
by Rich Brueckner on (#4D18C)
Lawrence Livermore National Lab has deployed a 170-node HPC cluster from Penguin Computing. Based on AMD EPYC processors and Radeon Instinct GPUs, the new Corona cluster will be used to support the NNSA Advanced Simulation and Computing (ASC) program in an unclassified site dedicated to partnerships with American industry. “Even as we do more of our computing on GPUs, many of our codes have serial aspects that need really good single core performance. That lines up well with AMD EPYC.” The post AMD Powers Corona Cluster for HPC Analytics at Livermore appeared first on insideHPC.
|
by staff on (#4D134)
Engineers at the University of California, Berkeley have built a new photonic switch that can control the direction of light passing through optical fibers faster and more efficiently than ever. This optical "traffic cop" could one day revolutionize how information travels through data centers and high-performance supercomputers that are used for artificial intelligence and other data-intensive applications. The post Berkeley Engineers build World’s Fastest Optical Switch Arrays appeared first on insideHPC.
|
by Rich Brueckner on (#4D136)
Satoshi Matsuoka from RIKEN gave this talk at the HPC User Forum in Santa Fe. "Post-K is the flagship next generation national supercomputer being developed by Riken and Fujitsu in collaboration. Post-K will have hyperscale-class resources in one exascale machine, with well more than 100,000 nodes of server-class A64fx many-core Arm CPUs, realized through an extensive co-design process involving the entire Japanese HPC community." The post Arm A64fx and Post-K: A Game-Changing CPU & Supercomputer appeared first on insideHPC.
|
by Sarah Rubenoff on (#4D0YP)
A new whitepaper from Penguin Computing contends “a new era of supercomputing” has arrived — driven primarily by the emergence of graphics processing units or GPUs. The tools once specific to gaming are now being used by investment and financial services to gain greater insights and generate actionable data. Learn how GPUs are spurring innovation and changing how today's finance companies address their data processing needs. The post GPUs Address Growing Data Needs for Finance & Insurance Sectors appeared first on insideHPC.
|
by Rich Brueckner on (#4CYRH)
Jesse Martinez from Los Alamos National Laboratory gave this talk at the OpenFabrics Workshop in Austin. "High speed networking has become extremely important in the world of HPC. As parallel processing capabilities increase and storage solutions increase in capacity, the network must be designed and implemented in a way to keep up with these trends. LANL has a very diverse use of high speed fabrics within its environment, from the compute clusters to the storage solutions. This keynote/introduction session to the Sys Admin theme at the workshop will focus on how LANL has made use of these diverse fabrics to optimize and simplify the notion of data movement and communication to obtain these results for scientists solving real world problems." The post Video: HPC Networking in the Real World appeared first on insideHPC.
|
by Rich Brueckner on (#4CYJS)
Jack Wells from ORNL gave this talk at the GPU Technology Conference. "HPC centers have been traditionally configured for simulation workloads, but deep learning has been increasingly applied alongside simulation on scientific datasets. These frameworks do not always fit well with job schedulers, large parallel file systems, and MPI backends. We'll share benchmarks between native compiled versus containers on Power systems, like Summit, as well as best practices for deploying learning and models on HPC resources on scientific workflows." The post Scaling Deep Learning for Scientific Workloads on the #1 Summit Supercomputer appeared first on insideHPC.
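The talk does not prescribe a particular framework, but one widely used way of fitting deep learning into an MPI-centric HPC environment is data-parallel training launched under the system's MPI runtime, for example with Horovod. The sketch below is a generic illustration under that assumption, not the benchmark setup used on Summit.

```python
# Assumed example: data-parallel Keras training with Horovod over MPI.
# Launch with something like:  mpirun -np 6 python train.py
import tensorflow as tf
import horovod.tensorflow.keras as hvd

hvd.init()

# Pin each MPI rank to one local GPU (one rank per GPU per node).
gpus = tf.config.list_physical_devices("GPU")
if gpus:
    tf.config.set_visible_devices(gpus[hvd.local_rank()], "GPU")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Scale the learning rate with the number of ranks and wrap the optimizer
# so gradients are averaged across ranks with allreduce.
opt = hvd.DistributedOptimizer(tf.keras.optimizers.Adam(1e-3 * hvd.size()))
model.compile(optimizer=opt, loss="sparse_categorical_crossentropy")

(x, y), _ = tf.keras.datasets.mnist.load_data()
x = x.reshape(-1, 784).astype("float32") / 255.0

model.fit(
    x, y,
    batch_size=128,
    epochs=1,
    verbose=1 if hvd.rank() == 0 else 0,   # only rank 0 prints progress
    # Broadcast rank 0's initial weights so all ranks start identically.
    callbacks=[hvd.callbacks.BroadcastGlobalVariablesCallback(0)],
)
```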
|
by staff on (#4CYJT)
Scientists from Brookhaven National Laboratory, Stony Brook University, and DOE’s Energy Sciences Network (ESnet) are collaborating on an experiment that puts U.S. quantum networking research on the international map. Researchers have built a quantum network testbed that connects several buildings on the Brookhaven Lab campus using unique portable quantum entanglement sources and an existing DOE ESnet communications fiber network—a significant step in building a large-scale quantum network that can transmit information over long distances. The post DOE Extending Quantum Networks for Long Distance Entanglement appeared first on insideHPC.
|
by staff on (#4CYEG)
While there are many advantages to running in the cloud, the issues can be complex. Users need to figure out how to securely extend on-premises clusters, devise solutions for data handling, and constantly keep an eye on costs. Univa's Robert Lalonde, Vice President and General Manager, Cloud, explores how to turbocharge your HPC hybrid cloud with tools like policy-based automation, and how closing the loop between workload scheduling and cloud automation can drive higher performance and dramatic cost efficiencies. The post Turbocharge your HPC Hybrid Cloud with Policy-based Automation appeared first on insideHPC.
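To make the "closed loop" idea concrete, here is a purely hypothetical sketch of a policy step that watches queue pressure and asks a cloud API to grow or shrink a cluster within cost guardrails. The callables (get_pending_jobs, idle_nodes, scale_to) stand in for whatever a real scheduler and cloud provider expose; none of them are Univa or cloud-vendor APIs.

```python
# Hypothetical policy-based autoscaling step for a hybrid HPC cluster.
# The names below are illustrative; they do not match any real product API.

MAX_NODES = 64             # hard cap on cloud nodes
MIN_NODES = 4              # always keep a small warm pool
JOBS_PER_NODE = 8          # assumed capacity heuristic
BUDGET_NODE_HOURS = 1000   # cost guardrail for the billing period

def autoscale_step(get_pending_jobs, idle_nodes, current_nodes,
                   node_hours_used, scale_to):
    """Run one iteration of the scheduling/cloud-automation loop."""
    pending = get_pending_jobs()

    # Scale out when the backlog exceeds what the current nodes can absorb.
    wanted = min(MAX_NODES, max(MIN_NODES, -(-pending // JOBS_PER_NODE)))

    # Never grow past the cost budget.
    if node_hours_used >= BUDGET_NODE_HOURS:
        wanted = min(wanted, current_nodes)

    # Scale in when the queue is empty and nodes sit idle.
    if pending == 0:
        wanted = max(MIN_NODES, current_nodes - idle_nodes())

    if wanted != current_nodes:
        scale_to(wanted)   # provision or release cloud nodes
    return wanted
```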
|
by staff on (#4CWAH)
In this Chip Chat podcast, Paul Nash from the Google Cloud Platform discusses the industry trends impacting IaaS and how Google Cloud Platform together with Intel are driving innovation in the cloud. "The two companies will collaborate on Anthos, a new reference design based on the 2nd-Generation Intel Xeon Scalable processor and an optimized Kubernetes software stack that will deliver increased workload portability to customers who want to take advantage of hybrid cloud environments. Intel will publish the production design as an Intel Select Solution, as well as a developer platform." The post Podcast: Intel to power Anthos Google Cloud Platform appeared first on insideHPC.
|
by staff on (#4CW5F)
The Lustre User Group has posted their speaker agenda for LUG 2019. The event takes place May 14-17 in Houston. "LUG 2019 is the industry’s primary venue for discussion and seminars on the Lustre parallel file system and other open source file system technologies. Don’t miss your chance to actively participate in industry dialogue on best practices and emerging technologies, explore upcoming developments of the Lustre file system, and immerse in the strong Lustre community." The post Agenda Posted for LUG 2019 in Houston appeared first on insideHPC.
|
by staff on (#4CW5H)
Today Qualcomm announced that it is bringing the company’s artificial intelligence expertise to the cloud with the Qualcomm Cloud AI 100. "Our all new Qualcomm Cloud AI 100 accelerator will significantly raise the bar for AI inference processing relative to any combination of CPUs, GPUs, and/or FPGAs used in today’s data centers," said Keith Kressin, senior vice president, product management, Qualcomm Technologies, Inc. The post Qualcomm to bring power-efficient AI Inference to the Cloud appeared first on insideHPC.
|
by staff on (#4CW0Q)
Today Sylabs announced a multi-phase collaboration with Google Cloud as a technology partner. Aimed at systematically addressing enterprise requirements in a cloud-native fashion, the first phase of the collaboration will be based upon availability of Sylabs' SingularityPRO via the Google Cloud Platform Marketplace. "Singularity is a widely adopted container runtime that implements a unique security model to mitigate privilege escalation risks, and provides a platform to capture a complete application environment into a single file." The post SingularityPRO comes to Google Cloud appeared first on insideHPC.
|
by Rich Brueckner on (#4CW0S)
Terry Wallace from Los Alamos National Lab gave this talk at the HPC User Forum. "The Laboratory was established in 1943 as site Y of the Manhattan Project for a single purpose: to design and build an atomic bomb. It took just 27 months. The Los Alamos of today has a heightened focus on worker safety and security awareness, with the ever-present core values of intellectual freedom, scientific excellence, and national service. Outstanding science underpins the Laboratory's past and its future." The post Video: A History of Los Alamos National Lab appeared first on insideHPC.
|
by staff on (#4CSVY)
Today Spectra Logic announced enhancements to its family of tape libraries that help end users simplify and enhance their workflows. With the introduction of Spectra Swarm Ethernet connectivity, the company makes tape connectivity easier and adds a modern interface to LTO tape libraries by leveraging the same infrastructure and networking capabilities as the rest of the equipment in a modern data center. Spectra Swarm is tested and qualified to work with Spectra LTO tape libraries — Spectra Stack through Spectra T950. The post Spectra Swarm Brings Ethernet Connectivity to LTO Tape Libraries appeared first on insideHPC.
|
by staff on (#4CSNR)
Today WekaIO announced the opening of an office in Germany, along with the appointment of Kim Gardner as Regional Sales Manager. Gardner, a seasoned storage veteran, will continue the rapid market expansion of Matrix in Germany, Austria, and Switzerland (DACH) that has been seen in the US and other markets. "We have many customer deployments in artificial intelligence (AI), high-performance computing (HPC), finance, media & entertainment, and genomics in the US and UK," said Richard Dyke, VP of Sales at WekaIO. “The opportunities inherent in Germany’s fully-fledged AI and HPC sector made this the next logical target for us and prompted our expansion into this fertile market.” The post WekaIO HPC Storage Solutions come to Central Europe appeared first on insideHPC.
|
by staff on (#4CSNS)
In this Big Compute Podcast, host Gabriel Broner interviews Josh Krall, co-founder and VP of Technology at Boom Supersonic. Boom is using HPC in the cloud to design a passenger supersonic plane and address the technical and business challenges it poses. "We witnessed technical success with supersonic flying with Concorde, but the economics did not work out. More than forty years later, Boom is embarking on building Overture, a supersonic plane where passengers will pay the price of today’s business class seats." The post Big Compute Podcast: Boom Supersonic looks to HPC Cloud appeared first on insideHPC.
|
by staff on (#4CSF7)
Today E8 Storage announced a technology partnership with ThinkParQ to enable the integration of BeeGFS with E8 Storage’s NVMe-oF solution. The combined solution will offer customers new levels of improved performance and scalability for data intensive workloads. "With the combination of E8 Storage’s RAID functionality and ThinkParQ’s BeeGFS file system, customers can scale out by adding as many nodes as needed to meet or exceed capacity requirements." The post ThinkParQ Brings BeeGFS to E8 Storage appeared first on insideHPC.
|
by Rich Brueckner on (#4CSA8)
Martin Reiger from Penguin Computing gave this talk at the HPC User Forum. "Built on a secure, high-performance bare-metal server platform with supercomputing-grade, non-blocking InfiniBand interconnect infrastructure, Penguin on Demand can handle the most challenging simulation and analytics. But because access is via the cloud (from either a traditional Linux command line interface (CLI) or a secure web portal), you get both instant access and extreme scalability — without having to invest in on-premise infrastructure or the associated operational costs." The post Making HPC Cloud a Reality in the Federal Space appeared first on insideHPC.
|
by staff on (#4CQSY)
Today NVIDIA and the American College of Radiology announced a collaboration to enable thousands of radiologists nationwide to create and use AI for diagnostic radiology in their own facilities, using their own data, to meet their own clinical needs. "NVIDIA builds platforms that democratize the use of AI and we purpose-built the Clara AI toolkit to give every radiologist the opportunity to develop AI tools that are customized to their patients and their clinical practice," said Kimberly Powell, vice president of Healthcare at NVIDIA. “Our successful pilot with the ACR is the first of many that will make AI more accessible to the entire field of radiology.” The post NVIDIA Powers New Lab for AI Radiology appeared first on insideHPC.
|
by staff on (#4CQCD)
In this video, engineers install a Spectra Logic tape library at STFC's Scientific Data Centre at the Rutherford Appleton Laboratory in the UK. The new Spectra TFinity Tape Library has an initial capacity of 65PB. "This system will provide for the predicted data-growth from existing groups over the next decade, and an active archive for JASMIN users and the IRIS science communities. It brings SCD's total tape storage capacity within the RAL Scientific Data Centre to 240PB." The post Time-Lapse Video: Installation of Spectra Logic Tape Robot at Rutherford Appleton Laboratory appeared first on insideHPC.
|
by staff on (#4CQ37)
Excelero is bringing its NVMesh software-defined block storage solutions to Lenovo customers and channel partners worldwide. "Already proven in Lenovo deployments at SciNet, Canada’s largest supercomputing facility, and at a London-based machine learning firm, Excelero’s NVMesh provides an optimal choice for web-scale deployments and in Big Data uses in concert with Lenovo’s ThinkSystem portfolio." The post Excelero NVMesh comes to Lenovo ThinkSystems appeared first on insideHPC.
|
by staff on (#4CQ39)
The public-private consortium ATOM announced today that it is collaborating with NVIDIA to scale ATOM’s AI-driven drug discovery platform. “Scientists at ATOM have created a predictive model development pipeline that calls upon large datasets to build and test predictive machine learning models which consider pharmacokinetics, safety, developability, and efficacy. NVIDIA will provide additional resources that will enable this pipeline to be run at increased scale and speed.” The post Video: ATOM Consortium to Accelerate AI in Drug Discovery with NVIDIA appeared first on insideHPC.
|
by Rich Brueckner on (#4CQ3B)
In this video from the HPC User Forum, Alex Norton from Hyperion Research presents: New AI Hardware and Trends. This presentation will highlight some of the trends in emerging technologies associated with the AI ecosystem. Much of the information in this presentation is broad trends of the overall market in terms of the emerging technology […] The post Video: New AI Hardware and Trends appeared first on insideHPC.
|
by staff on (#4CNM9)
Hyperion Research has released their latest High-Performance Technical Server QView, a comprehensive report on the state of the HPC market. The QView presents the HPC market from various perspectives, including competitive segment, vendor, cluster versus non-cluster, geography, and operating system. It also contains detailed revenue and shipment information by HPC models. The post Hyperion Research: HPC Server Market Beat Forecast in 2018 appeared first on insideHPC.
|
by Rich Brueckner on (#4CNJ3)
Mike Heroux from Sandia National Labs gave this talk at the HPC User Forum. "The Exascale Computing Project is accelerating delivery of a capable exascale computing ecosystem for breakthroughs in scientific discovery, energy assurance, economic competitiveness, and national security. The goal of the ECP Software Technology focus area is to develop a comprehensive and coherent software stack that will enable application developers to productively write highly parallel applications that can portably target diverse exascale architectures." The post Exascale Computing Project Software Activities appeared first on insideHPC.
|
by Rich Brueckner on (#4CKYP)
Ed Turkel from Dell EMC gave this talk at the HPC User Forum in Santa Fe. "As data analytics, HPC and AI converge and the technology evolves, Dell EMC’s worldwide HPC and AI innovation centers provide expert leadership, test new machine learning technologies, and share best practices. In working with the HPC Community, Dell EMC HPC and AI Centers of Excellence provide a network of resources based on the wide-ranging know-how and experience of technology developers, service providers, and end-users." The post Video: Making Innovation Real – Dell EMC Update appeared first on insideHPC.
|
by staff on (#4CKVZ)
Chevron is seeking an HPC Technology Researcher in our Job of the Week. "This position will be accountable for strategic research, technology development and business engagement to deliver High Performance Computing solutions that differentiate Chevron’s performance. The successful candidate is expected to manage projects and small programs and personally apply and grow technical skills in the Advanced Computing space." The post Job of the Week: HPC Technology Researcher at Chevron appeared first on insideHPC.
|
by staff on (#4CJ09)
A project carried out at the National University of Ireland Galway, Eindhoven University of Technology (TU/e), and KU Leuven has been exploring the role of aerodynamic science in Paralympic cycling. "This work also opens the door for world-class Paralympic athletes to have the same expertise and equipment available to them as other professional athletes. At the world championships and Paralympics, where tenths of seconds can decide medals, this work can unlock that vital time!" The post Supercomputing Aerodynamics in Paralympic Cycling appeared first on insideHPC.
|
by staff on (#4CJ0B)
Researchers are using HPC to design potentially life-saving proteins. In this TACC podcast, host Jorge Salazar discusses this groundbreaking work with the science team. "The scientists say their methods could be applied to useful technologies such as pharmaceutical targeting, artificial energy harvesting, 'smart' sensing and building materials, and more." The post Podcast: Supercomputing Synthetic Biomolecules appeared first on insideHPC.
|
by staff on (#4CHV9)
In this special guest feature, SC19 General Chair Michela Taufer talks with Lynne Parker, Assistant Director for Artificial Intelligence at the White House Office of Science and Technology Policy. Parker describes her new role, shares her insights on the state of AI in the US (and beyond), and opines on the future impact of HPC on the evolution of AI. The post A look inside the White House AI Initiative appeared first on insideHPC.
|
by Rich Brueckner on (#4CHVB)
David Wolpert from the Santa Fe Institute gave this talk at the HPC User Forum. "The thermodynamic restrictions on all systems that perform computation provide major challenges to modern design of computers. As a result, the time is ripe to pursue a new field of science and engineering: a modern thermodynamics of computation. This would combine the resource/time tradeoffs of concern in conventional CS with the thermodynamic tradeoffs in computation that are now being revealed. In this way we should be able to develop the tools necessary both for analyzing thermodynamic costs in biological systems and for engineering next-generation computers." The post Thermodynamics of Computation: Far More Than Counting Bit Erasure appeared first on insideHPC.
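For readers wondering what "counting bit erasure" refers to: the classical floor on the thermodynamic cost of computation is Landauer's bound, the minimum heat that must be dissipated when one bit of information is erased at temperature T. A short worked statement of the bound (standard physics, not taken from the talk) is:

```latex
E_{\min} = k_B T \ln 2
         \approx (1.38 \times 10^{-23}\,\mathrm{J/K}) \times (300\,\mathrm{K}) \times 0.693
         \approx 2.9 \times 10^{-21}\,\mathrm{J} \quad \text{per erased bit at room temperature.}
```

Wolpert's point, as the title suggests, is that real computations incur thermodynamic costs well beyond this erasure floor, and quantifying them is what a modern thermodynamics of computation would do.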
|
by Rich Brueckner on (#4CG5Z)
Today Atos inaugurated a new AI laboratory in France. Set up as part of the global partnership between Atos and Google Cloud, the laboratory will enable clients, businesses and public organizations to identify practical cases for which AI could provide innovative and effective solutions. "In order for France to continue to play a key role in the information space, it has to invest heavily in artificial intelligence and new technologies," said Thierry Breton, Chairman and CEO of Atos. "Beyond economic development, being able to offer technological excellence while protecting European data is a matter of sovereignty. With this joint laboratory between Atos and Google Cloud, we are enabling the adoption of artificial intelligence by our clients by offering them the best technologies and the highest level of security for their data processing, all within a clearly defined European regulatory framework. As such, Atos combines economic and technological development with sovereignty, compliance and security and helps to design a secure and valued European information space." The post Atos Opens AI Laboratory in France with the Google Cloud appeared first on insideHPC.
|
by staff on (#4CG61)
Today Penguin Computing announced that the company's Relion family of Linux-based servers is now available with the latest generation of Intel Xeon Scalable processors, including both the processor formerly codenamed Cascade Lake-SP and the Walker Pass-based Cascade Lake-AP technology. This enhancement will enable Penguin Computing to improve performance for data center, HPC, and AI customers while also delivering the flexibility, density, and scalability of the Relion server design. The post Penguin Computing steps up with 2nd Generation Intel Xeon Scalable Processors appeared first on insideHPC.
|
by Rich Brueckner on (#4CG62)
"With his precious scientific and technical contribution, Prof. Dr. Thomas Schulthess has laid important foundations for the success of research groups that use the CSCS infrastructure and carry out computational research. For about five years the CSCS has had the best computing performance in Europe and is currently one of the world's leading centers in this sector: it is with these arguments that the Foundation Board motivates the choice of the winner of the award."The post Thomas Schulthess from CSCS Awarded Doron Prize appeared first on insideHPC.
|
by Rich Brueckner on (#4CG1B)
In this video from the HPC User Forum in Santa Fe, Leonardo Flores from the European Commission presents: EuroHPC - The EU Strategy in HPC. "EuroHPC is a joint collaboration between European countries and the European Union to develop and support exascale supercomputing by 2022/2023. EuroHPC will permit the EU and participating countries to coordinate their efforts and share resources with the objective of deploying in Europe a world-class supercomputing infrastructure and a competitive innovation ecosystem in supercomputing technologies, applications and skills." The post Video: EuroHPC – The EU Strategy in HPC appeared first on insideHPC.
|
by staff on (#4CDCJ)
Today West Virginia University announced one of the state's most powerful computer clusters to help power research and innovation statewide. "The Thorny Flat High Performance Computer Cluster, named after the state's second highest peak, joins the Spruce Knob cluster as a resource. With 1,000 times more computing power than a desktop computer, the Thorny Flat cluster could benefit a variety of research: forest hydrology; genetic studies; forensic chemistry of firearms; modeling of solar-to-chemical energy harvesting; and design and discovery of new materials." The post Thorny Flat Supercomputer comes to West Virginia University appeared first on insideHPC.
|