by Rich Brueckner on (#49R1S)
The Milwaukee School of Engineering is seeking an HPC System Administrator in our Job of the Week. "The HPC Systems Administrator will lead efforts related to the daily operation of a small to medium-sized GPU-based high-performance computing (HPC) cluster. The individual in this position will provide engineering and administration support for HPC hardware and software."The post Job of the Week: HPC System Administrator at the Milwaukee School of Engineering appeared first on insideHPC.
|
High-Performance Computing News Analysis | insideHPC
Link | https://insidehpc.com/ |
Feed | http://insidehpc.com/feed/ |
Updated | 2024-11-24 14:15 |
by staff on (#49P8C)
Over at XSEDE, Kimberly Mann Bruch & Jan Zverina from the San Diego Supercomputer Center write that researchers are using supercomputers to create detailed simulations of neutron star structures and mergers to better understand gravitational waves, which were detected for the first time in 2015. "XSEDE resources significantly accelerated our scientific output," noted Paschalidis, whose group has been using XSEDE for well over a decade, dating back to when its members were students or post-doctoral researchers. "If I were to put a number on it, I would say that using XSEDE accelerated our research by a factor of three or more, compared to using local resources alone."The post Supercomputing Neutron Star Structures and Mergers appeared first on insideHPC.
|
by Rich Brueckner on (#49P8E)
Atos recently announced two new flash accelerators for HPC workloads as part of the Atos Smart Data Management Suite. These two new solutions – Smart Burst Buffer (SBB) and Smart Bunch of Flash (SBF) – increase the performance and productivity of I/O-intensive HPC applications. They are the most flexible and cost-effective flash accelerators on the […]The post Atos steps up with NVMeOF Flash Accelerator Solutions appeared first on insideHPC.
|
by Rich Brueckner on (#49P33)
Michael Aguilar from Sandia National Laboratories gave this talk at the Stanford HPC Conference. "This talk will discuss the Sandia National Laboratories Astra HPC system as a mechanism for developing and evaluating large-scale deployments of alternative and advanced computational architectures. As part of the Vanguard program, the new Arm-based system will be used by the National Nuclear Security Administration (NNSA) to run advanced modeling and simulation workloads for addressing areas such as national security, energy and science."The post ASTRA: A Large Scale ARM64 HPC Deployment appeared first on insideHPC.
|
by staff on (#49KR7)
Today the UberCloud announced that it has been recognized with three leading HPC industry awards for two of its innovative engineering projects in the cloud. "We are proud and humbled to receive the two prestigious Hyperion Innovation Excellence Awards and the HPCwire Editors' Choice Award," said Wolfgang Gentzsch, President of UberCloud. "Our innovative projects on UberCloud's Cloud Simulation Platform show engineers how they can benefit from the cloud to accelerate their simulations and innovate."The post UberCloud Recognized with three HPC Industry Awards appeared first on insideHPC.
|
by Rich Brueckner on (#49KR8)
"When we are optimizing our objective is to determine which hardware resource the code is exhausting (there must be one, otherwise it would run faster!), and then see how to modify the code to reduce its need for that resource. It is therefore essential to understand the maximum theoretical performance of that aspect of the machine, since if we are already achieving the peak performance we should give up, or choose a different algorithm."The post Improving HPC Performance with the Roofline Model appeared first on insideHPC.
|
by Rich Brueckner on (#49KJX)
Addison Snell gave this talk at the Stanford HPC Conference. "Intersect360 Research returns with an annual deep dive into the trends, technologies and usage models that will be propelling the HPC community through 2017 and beyond. Emerging areas of focus and opportunities to expand will be explored along with insightful observations needed to support measurably positive decision making within your operations."The post The New HPC appeared first on insideHPC.
|
by staff on (#49KJZ)
Today Atos announced the deployment of a BullSequana X1000 supercomputer at CALMIP, one of the biggest multi-scale inter-university supercomputing centers in France. Called Olympe, the supercomputer will be used for over 200 research projects in materials, fluid mechanics, universe sciences and chemistry. "The supercomputer has a peak performance of 1.3 petaflops, bringing five times more computing and processing power than the previous supercomputer for the same energy consumption."The post Video: Atos Olympe Supercomputer Powers Research at CALMIP appeared first on insideHPC.
|
by Sarah Rubenoff on (#49KE4)
High performance computing has gone through numerous shifts in the past few years. The new HPC, inclusive of analytics and AI, and with its wide range of technology components and choices, presents significant challenges to a commercial enterprise. A new report from Lenovo explores the intersection of HPC and AI, as well as tips on how to choose the right HPC provider for your enterprise.The post Advancement in AI & Machine Learning Calls For New HPC Solutions appeared first on insideHPC.
|
by staff on (#49H5J)
In this video, IBM researchers describe an all-optical approach to developing direct in-memory multiplication on an integrated photonic device based on non-volatile multilevel phase-change memories. Integrated photonic technology could potentially offer attractive solutions for using light to carry out computational tasks on a chip in the future.The post Video: In-Memory Computing Using Photonic Memory Devices appeared first on insideHPC.
|
by staff on (#49H5M)
Researchers at the Barcelona Supercomputing Centre have created a new artificial intelligence-based computational method that accelerates the identification of new genes related to cancer. Prof. Pržulj highlights that this new method to analyze cells "enables the identification of perturbed genes in cancer that do not appear as perturbed in any data type alone. This discovery emphasizes the importance of integrative approaches to analyze biological data and paves the way towards comparative integrative analyses of all cells."The post AI allows for identification of new cancer genes appeared first on insideHPC.
|
by staff on (#49H5P)
Today Excelero announced that it has appointed HPC pioneer Sven Breuner as Field Chief Technical Officer (CTO). With over a decade of leadership in the HPC industry, Sven will help expand the innovative capabilities in Excelero's award-winning NVMesh solution by anticipating customers' future requirements while bringing deep technical and product knowledge to the field teams.The post Sven Breuner Joins Excelero as Field CTO appeared first on insideHPC.
|
by Rich Brueckner on (#49H0D)
The PASC19 conference will feature a Public Lecture by Keren Bergman on Flexibly Scalable High Performance Architectures with Embedded Photonics. The event takes place June 12-14 in Zurich, Switzerland, the week before ISC 2019. "Integrated silicon photonics with deeply embedded optical connectivity is on the cusp of enabling revolutionary data movement and extreme performance capabilities."The post PASC19 to feature talk on Scalable High Performance Architectures with Embedded Photonics appeared first on insideHPC.
|
by Rich Brueckner on (#49H0F)
Greg Kurtzer from Sylabs gave this talk at the Stanford HPC Conference. "Singularity is a widely adopted container technology specifically designed for compute-based workflows, making application and environment reproducibility, portability and security a reality for HPC and AI researchers and resources. Here we will give a high-level overview of Singularity and demonstrate how to integrate Singularity containers into existing application and resource workflows, as well as describe some new trending models that we have been seeing."The post Singularity: Container Workflows for Compute appeared first on insideHPC.
|
by Rich Brueckner on (#49EMK)
The Arm Research Summit has issued its Call for Submissions. The event takes place September 15-18 in Austin, Texas. As a one-of-a-kind forum for topics that are shaping our world, the Summit focuses on presentations and discussions, and welcomes research at all stages of development and/or publication. The committee encourages submissions of early-stage, high-impact ideas seeking feedback, new […]The post Call for Submissions: Arm Research Summit in Austin appeared first on insideHPC.
|
by staff on (#49EMN)
Today the 25 Gigabit Ethernet Consortium announced the availability of a low-latency forward error correction (FEC) specification for 50 Gbps, 100 Gbps and 200 Gbps Ethernet networks. "Five years ago, only HPC developers cared about low latency, but today latency sensitivity has come to many more mainstream applications," said Rob Stone, technical working group chair of the 25G Ethernet Consortium. "With this new specification, the consortium is improving the single largest source of packet processing latency, which improves the performance that high-speed Ethernet brings to these applications."The post 25 Gigabit Ethernet Consortium Offers Low Latency Specification for 50GbE, 100GbE and 200GbE HPC Networks appeared first on insideHPC.
|
by staff on (#49EFC)
Today Hyperion Research launched a new Cloud Application Assessment Tool. Available free of charge to the public, this tool is an interactive grading system that assesses the characteristics of HPC applications to help users understand whether they fit better on an on-premise HPC system or are well suited to running in a public cloud. "By using this tool, end users will be able to see which characteristics of their applications drive or restrict the ability to run them in clouds, and what future improvements in public clouds may make their application fit better in the cloud. It also shows the characteristics that drive certain applications to stay on-premise."The post Hyperion Research Launches Cloud Application Assessment Tool appeared first on insideHPC.
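Hyperion has not published the internals of its grading system, but the general shape of such an assessment can be sketched as a weighted score over application characteristics. Everything below is a purely hypothetical illustration; the characteristic names, weights, and interpretation are invented for this sketch and are not Hyperion's methodology:

```python
# Hypothetical cloud-fit score. All names and weights here are invented
# for illustration; they do not reflect the actual assessment tool.
WEIGHTS = {
    "bursty_demand": 3,        # spiky workloads benefit from elastic capacity
    "data_gravity": -2,        # large on-premise datasets resist migration
    "tight_coupling": -3,      # latency-sensitive MPI favors on-premise fabrics
    "license_portability": 1,  # cloud-friendly ISV licensing helps
}

def cloud_fit_score(traits):
    """Sum weighted 0-5 ratings; positive totals lean toward public cloud."""
    return sum(WEIGHTS[name] * rating for name, rating in traits.items())

app = {"bursty_demand": 5, "data_gravity": 2,
       "tight_coupling": 1, "license_portability": 4}
print(cloud_fit_score(app))  # 3*5 - 2*2 - 3*1 + 1*4 = 12, leans cloud
```

The point of such a breakdown, as the announcement notes, is that the per-characteristic terms show *which* traits push an application toward or away from the cloud, not just the overall verdict.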
|
by Rich Brueckner on (#49EFE)
Steve Oberlin from NVIDIA gave this talk at the Stanford HPC Conference. "Clearly, AI has benefited greatly from HPC. Now, AI methods and tools are starting to be applied to HPC applications to great effect. This talk will describe an emerging workflow that uses traditional numeric simulation codes to generate synthetic data sets to train machine learning algorithms, then employs the resulting AI models to predict the computed results, often with dramatic gains in efficiency, performance, and even accuracy."The post HPC + AI: Machine Learning Models in Scientific Computing appeared first on insideHPC.
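The workflow Oberlin describes can be sketched in miniature: run the (expensive) simulator to generate training data, fit a surrogate model, then let the surrogate predict results without rerunning the solver. The one-dimensional "simulation" below is a toy stand-in function, not any real code:

```python
import random

def simulate(x):
    """Stand-in for an expensive numeric simulation (here, exactly 3x + 1)."""
    return 3.0 * x + 1.0

# Step 1: generate a synthetic training set from the simulator.
random.seed(0)
xs = [random.uniform(0.0, 10.0) for _ in range(100)]
ys = [simulate(x) for x in xs]

# Step 2: fit a linear surrogate y ~ a*x + b by closed-form least squares.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = my - a * mx

# Step 3: the surrogate now predicts results at the cost of one multiply-add.
print(round(a, 6), round(b, 6))  # recovers the simulator's 3.0 and 1.0
```

Real surrogates are of course nonlinear (deep networks over high-dimensional fields), but the division of labor is the same: the simulator pays the cost once, at training time, and the model amortizes it over every subsequent prediction.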
|
by staff on (#49EFF)
A new special report from insideHPC, courtesy of Dell EMC and NVIDIA, explores current machine learning applications in government. And this excerpt breaks down solutions for AI in government, including Dell EMC Ready Solutions. Dell EMC Ready Solutions for AI are validated hardware and software stacks optimized to accelerate AI initiatives, shortening the time to architect a new solution by six to 12 months.The post Finding a Solution for AI in Government appeared first on insideHPC.
|
by Rich Brueckner on (#49CFA)
Michael Jennings from LANL gave this talk at the Stanford HPC Conference. "As containers initially grew to prominence within the greater Linux community, particularly in the hyperscale/cloud and web application space, there was very little information out there about using Linux containers for HPC at all. In this session, we'll confront this problem head-on by clearing up some common misconceptions about containers, bust some myths born out of misunderstanding and marketing hype alike, and learn how to safely (and securely!) navigate the Linux container landscape with an eye toward what the future holds for containers in HPC and how we can all get there together!"The post Video: Container Mythbusters appeared first on insideHPC.
|
by staff on (#49C7J)
One goal – accelerating Python execution performance – led to the creation of the Intel Distribution for Python, a set of tools designed to provide Python application performance right out of the box, usually with no code changes required. This sponsored post from Intel highlights how the Intel SDK can enhance Python development and execution, as Python continues to grow in popularity.The post Python Power: Intel SDK Accelerates Python Development and Execution appeared first on insideHPC.
|
by Rich Brueckner on (#49APW)
The Creative Destruction Lab is now accepting applications for their 2019 Quantum Machine Learning and Blockchain-AI Incubator Streams. As a seed-stage program for massively scalable, science-based companies, the mission of the CDL is to enhance the prosperity of humankind. "The CDL Blockchain-AI Incubator Stream is a 10-month incubator program which gives blockchain founders personalized mentorship from blockchain thought leaders, successful tech entrepreneurs, scientists, economists and venture capitalists. Founders are also eligible for up to US$100K in investment, in exchange for equity."The post Seeking Seed Money? Creative Destruction Lab offers Incubator Streams for Quantum, Blockchain, and AI appeared first on insideHPC.
|
by Rich Brueckner on (#49APY)
In this video from the 2019 Stanford HPC Conference, Usha Upadhyayula & Tom Krueger from Intel present: Introduction to Intel Optane Data Center Persistent Memory. For decades, developers had to balance data in memory for performance with data in storage for persistence. The emergence of data-intensive applications in various market segments is stretching the existing […]The post Video: Introduction to Intel Optane Data Center Persistent Memory appeared first on insideHPC.
|
by Rich Brueckner on (#49942)
DK Panda from Ohio State University gave this talk at the 2019 Stanford HPC Conference. "This talk will provide an overview of challenges in designing convergent HPC and BigData software stacks on modern HPC clusters. An overview of RDMA-based designs for Hadoop (HDFS, MapReduce, RPC and HBase), Spark, Memcached, Swift, and Kafka using native RDMA support for InfiniBand and RoCE will be presented. Enhanced designs for these components to exploit HPC scheduler (SLURM), parallel file systems (Lustre) and NVM-based in-memory technology will also be presented. Benefits of these designs on various cluster configurations using the publicly available RDMA-enabled packages from the OSU HiBD project will be shown."The post Designing Convergent HPC and Big Data Software Stacks: An Overview of the HiBD Project appeared first on insideHPC.
|
by Rich Brueckner on (#49943)
The University of Chicago Center for Research Informatics is seeking an HPC Systems Administrator in our Job of the Week. "This position will work with the Lead HPC Systems Administrator to build and maintain the BSD High Performance Computing environment, assist life-sciences researchers to utilize the HPC resources, work with stakeholders and research partners to successfully troubleshoot computational applications, handle customer requests, and respond to suggestions for improvements and enhancements from end-users."The post Job of the Week: HPC Systems Administrator at the University of Chicago Center for Research Informatics appeared first on insideHPC.
|
by staff on (#497ER)
The European High Performance Computing Joint Undertaking (EuroHPC JU) has launched its first calls for expressions of interest to select the sites that will host the Joint Undertaking's first supercomputers (petascale and precursor to exascale machines) in 2020. "Deciding where Europe will host its most powerful petascale and precursor to exascale machines is only the first step in this great European initiative on high performance computing," said Mariya Gabriel, Commissioner for Digital Economy and Society. "Regardless of where users are located in Europe, these supercomputers will be used in more than 800 scientific and industrial application fields for the benefit of European citizens."The post EuroHPC Takes First Steps Towards Exascale appeared first on insideHPC.
|
by staff on (#494D6)
The San Diego Supercomputer Center (SDSC) and Sylabs have teamed up to bring the community its first-ever meeting of the Singularity User Group (SUG). Singularity has become what it is today through the engagement of users, developers, and providers that have collectively developed a sense of community around the software and its ecosystem. Read on as Ian Lumb, Technical Writer at Sylabs, shares the origins of the new annual Singularity User Group Meeting.The post The Inaugural Singularity User Group Meeting: Registration Open Now appeared first on insideHPC.
|
by staff on (#497AC)
In this podcast, the Radio Free HPC team looks at how Lawrence Livermore National Lab is working to simulate and help modernize the electric grid. They discuss how the 'new grid' will need to be two-way, both delivering and accepting electricity. The new grid will also have to communicate with smart homes and other buildings in order to predict demand and adjust real time pricing.The post Podcast: Modernizing the Electric Grid with HPC appeared first on insideHPC.
|
by staff on (#497AD)
Students at Virginia Tech are using HPC for their creative work to build robust project portfolios. "The School of Visual Arts in Virginia Tech's College of Architecture and Urban Studies offers an advanced rendering class for students in design-based programs, including architecture, industrial design, and interior design. Students in the school's graduate and undergraduate creative technologies programs also take the course. Rendering involves converting 3D wireframe models into still or animated 2D images that can be displayed on a screen. It is in this class where students hone their skills learning advanced techniques to create complex animations."The post HPC Boosts Art and Design at Virginia Tech appeared first on insideHPC.
|
by Rich Brueckner on (#4975X)
Exascale computing is only a few years away. Today the Exascale Computing Project (ECP) put out the second release of their Extreme-Scale Scientific Software Stack. The E4S Release 0.2 includes a subset of ECP ST software products, and demonstrates the target approach for future delivery of the full ECP ST software stack. Also available are […]The post Exascale Computing Project updates Extreme-Scale Scientific Software Stack appeared first on insideHPC.
|
by Rich Brueckner on (#4975Y)
In this video from the 2019 Stanford HPC Conference, Naoki Shibata from XTREME-D presents: XTREME-Stargate: The New Era of HPC Cloud Platforms. "XTREME-D is an award-winning, funded Japanese startup whose mission is to make HPC cloud computing access easy, fast, efficient, and economical for every customer. The company recently introduced XTREME-Stargate, which was developed as a cloud-based bare-metal appliance specifically for high-performance computations, optimized for AI data analysis and conventional supercomputer usage."The post XTREME-Stargate: The New Era of HPC Cloud Platforms appeared first on insideHPC.
|
by staff on (#494PW)
Over at the SC19 Blog, Charity Plata continues the HPC is Now series of interviews with Enrico Rinaldi, a physicist and special postdoctoral fellow with the Riken BNL Research Center. This month, Rinaldi discusses why HPC is the right tool for physics and shares the best formula for garnering a Gordon Bell Award nomination. "Sierra and Summit are incredible machines, and we were lucky to be among the first teams to use them to produce new scientific results. The impact on my lattice QCD research was tremendous, as demonstrated by the Gordon Bell paper submission."The post Interview: Why HPC is the Right Tool for Physics appeared first on insideHPC.
|
by staff on (#494D4)
In this special guest feature from Scientific Computing World, Robert Roe reports on the Gordon Bell Prize finalists for 2018. "The finalists' research ranges from AI to mixed precision workloads, with some taking advantage of the Tensor Cores available in the latest generation of Nvidia GPUs. This highlights the impact of AI and GPU technologies, which are opening up not only new applications to HPC users but also the opportunity to accelerate mixed precision workloads on large scale HPC systems."The post Gordon Bell Prize Highlights the Impact of AI appeared first on insideHPC.
|
by staff on (#494D8)
Atos will soon deploy a Bull supercomputer at the Centro Nacional de Análisis Genómico (CNAG-CRG) in Barcelona for large-scale DNA sequencing and analysis. To support the vast processing and calculation demands of this analysis, CNAG-CRG worked with Atos to build this custom-made analytics platform, which helps drive new insights ten times faster than its previous HPC system. "Atos helped us to set up a robust platform to conduct in-depth high-performance data analytics on genome sequences, which is the perfect complement to our outstanding sequencing platform," stated Ivo Gut, CNAG-CRG Director.The post Custom Atos Supercomputer to Speed Genome Analysis at CNAG-CRG in Barcelona appeared first on insideHPC.
|
by staff on (#492M3)
Today GigaIO announced that the company's new FabreX product was successfully used in the Student Cluster Competition at SC18. Students from the University of Warsaw incorporated GigaIO's alpha product into their self-designed supercomputing cluster, marking the team's fourth cluster competition event. "GigaIO offers cutting-edge high-performance computing technology, so it was a privilege to be among the first group to experiment with their newest product, FabreX."The post GigaIO Technology Boosts Performance at the SC18 Student Cluster Competition appeared first on insideHPC.
|
by staff on (#492M5)
Today WekaIO announced the results of the independent SPEC SFS 2014 benchmark testing of its flagship WekaIO Matrix product. "Having established itself in the number one position for the SPEC SFS 2014 software build in January 2019, WekaIO has now posted winning results for all remaining benchmarks in the SPEC test suite. In addition to unbeatable performance and scalability, WekaIO has demonstrated extraordinarily low latencies across the benchmark suite, reaffirming Matrix is the world's fastest parallel file system."The post WekaIO Matrix Cluster Excels on SPEC SFS 2014 Benchmark appeared first on insideHPC.
|
by Richard Friedman on (#491YX)
OpenVINO is a single toolkit, optimized for Intel hardware, that data scientists and AI software developers can use to quickly develop high-performance applications that employ neural network inference and deep learning to emulate human vision across various platforms. "This toolkit supports heterogeneous execution across CPUs and computer vision accelerators including GPUs, Intel® Movidius™ hardware, and FPGAs."The post Putting Computer Vision to Work with OpenVINO appeared first on insideHPC.
|
by Rich Brueckner on (#491YZ)
Koen Bertels from Delft University of Technology gave this talk at HiPEAC 2019. "In my talk, I will introduce what quantum computers are but also how they can be used as a quantum accelerator. I will discuss why a quantum computer can be more powerful than any classical computer and what the components are of its system architecture. In this context, I will talk about our current research topics on quantum computing, what the main challenges are and what is available to our community."The post Quantum Computing: From Qubits to Quantum Accelerators appeared first on insideHPC.
|
by staff on (#48ZQR)
Today Lenovo announced TruScale Infrastructure Services, a subscription-based offering that allows customers to use and pay for data center hardware and services – on-premise or at a customer-preferred location – without having to purchase the equipment. "Lenovo's TruScale as-a-Service offering is truly revolutionary, changing how IT departments procure and refresh their data center infrastructure. With our subscription-based model, customers pay for what they use, eliminating upfront capital purchase risk," said Laura Laltrello, Vice President and General Manager of Services at Lenovo Data Center Group. "Our offering can be applied to any configuration that meets the customer's needs – whether storage-rich, server-heavy, hyperconverged or high-performance compute – and can be scaled as business dictates."The post Lenovo Launches Cloud Hardware Subscriptions with TruScale Infrastructure Services appeared first on insideHPC.
|
by staff on (#48ZJE)
In this video, Computational biologist Laura Boykin describes the threat to lives and livelihoods the whitefly represents, the international effort to fight it, and how supercomputing flips the script on a once unwinnable war. "Cray supports visionaries like Laura and her scientific colleagues in East Africa in combining computation and creativity to change outcomes."The post Truly Inspiring: Fighting World Hunger with Cray Supercomputers appeared first on insideHPC.
|
by staff on (#48Z7T)
Across the globe, governments are acknowledging the potential of AI technologies to impact our daily lives, from how we make purchasing decisions to improving our healthcare. But there are several reasons why government agencies may be hesitant to adopt AI technologies. A new insideHPC Guide, courtesy of Dell EMC and NVIDIA, explores what's next for government AI, as well as already tangible results of AI and machine learning.The post Today's Application of AI Within Government appeared first on insideHPC.
|
by Rich Brueckner on (#48ZJG)
Today NVIDIA announced that NVIDIA Nsight Systems 2019.1 is now available for download. Nsight Systems is a system-wide performance analysis tool; with it, developers can visualize application algorithms, identify large optimization opportunities, and tune/scale efficiently across CPUs and GPUs. "In this release, we introduce a wide range of new features, refinements, and fixes. The enhancements aim to improve a user's ability to analyze neural network performance, locate graphical stutter, and increase pattern discoverability."The post NVIDIA steps up with Nsight Systems Performance Analysis Tool appeared first on insideHPC.
|
by Rich Brueckner on (#48ZDF)
In this podcast, the Radio Free HPC team asks whether a supercomputer can or cannot be an "AI supercomputer." The question came up after HPE announced a new AI system called Jean Zay that will double the capacity of French supercomputing. "So what are the differences between a traditional super and an AI super? According to Dan, it mostly comes down to how many GPUs the system is configured with, while Shahin and Henry think it has something to do with the datasets."The post Podcast: What is an AI Supercomputer? appeared first on insideHPC.
|
by staff on (#48X11)
In this special guest feature, Paul Grun and Doug Ledford from the OpenFabrics Alliance describe the industry trends in the fabrics space, its state of affairs and emerging applications. "Originally, 'high-performance fabrics' were associated with large, exotic HPC machines. But in the modern world, these fabrics, which are based on technologies designed to improve application efficiency, performance, and scalability, are becoming more and more common in the commercial sphere because of the increasing demands being placed on commercial systems."The post The State of High-Performance Fabrics: A Chat with the OpenFabrics Alliance appeared first on insideHPC.
|
by staff on (#48X12)
Researchers at NERSC face the daunting task of moving 43 years' worth of archival data across the network to new tape libraries, a whopping 120 petabytes! "Even with all of this in place, it will still take about two years to move 43 years' worth of NERSC data. Several factors contribute to this lengthy copy operation, including the extreme amount of data to be moved and the need to balance user access to the archive."The post Moving Mountains of Data at NERSC appeared first on insideHPC.
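A bit of back-of-the-envelope arithmetic puts that copy in perspective: 120 PB over roughly two years works out to a sustained rate near 2 GB/s around the clock (decimal petabytes assumed here; the two-year figure is from the article):

```python
petabytes = 120
seconds = 2 * 365 * 24 * 3600       # roughly two years of wall-clock time
bytes_total = petabytes * 10**15    # decimal petabytes

rate_gbs = bytes_total / seconds / 10**9
print(round(rate_gbs, 2))           # ~1.9 GB/s, sustained 24/7
```

And that figure assumes the transfer never pauses, which is exactly why balancing the copy against ongoing user access to the archive stretches the schedule.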
|
by staff on (#48WVV)
Dr. Omar Ghattas from the University of Texas at Austin has been selected as the recipient of the 2019 SIAM Geosciences Career Prize. He is being recognized for "groundbreaking contributions in analysis, methods, algorithms, and software for grand challenge computational problems in geosciences, and for exceptional influence as mentor, educator, and collaborator."The post Dr. Omar Ghattas Receives 2019 SIAM Geosciences Career Prize appeared first on insideHPC.
|
by staff on (#48WVX)
In this special guest feature, Wolfgang Gentzsch from The UberCloud writes that we've never been so close to ubiquitous computing for researchers and engineers. "High-performance computing continues to progress, but the next big step toward ubiquitous HPC is coming from software container technology based on Docker, facilitating software packaging and porting, ease of access and use, service stack automation and self-service, and simplifying software maintenance and support."The post HPC in the Hands of Every Engineer – With Software Containers appeared first on insideHPC.
|
by Rich Brueckner on (#48WQG)
Thomas Schwinge from Mentor gave this talk at FOSDEM'19. "Requiring only a few changes to your existing source code, OpenACC allows for easy parallelization and code offloading to accelerators such as GPUs. We will present a short introduction to GCC and OpenACC, implementation status, examples, and performance results."The post Video: Speeding up Programs with OpenACC in GCC appeared first on insideHPC.
|
by staff on (#48V5N)
Over at Argonne, Nils Heinonen writes that researchers are using the open source Singularity framework as a kind of Rosetta Stone for running supercomputing code almost anywhere. "Once a containerized workflow is defined, its image can be snapshotted, archived, and preserved for future use. The snapshot itself represents a boon for scientific provenance by detailing the exact conditions under which given data were generated: in theory, by providing the machine, the software stack, and the parameters, one's work can be completely reproduced."The post Argonne Looks to Singularity for HPC Code Portability appeared first on insideHPC.
|
by Rich Brueckner on (#48V5Q)
In this video from Arm HPC Asia 2019, Elsie Wahlig leads a panel discussion on Frontiers of AI deployments in HPC on Arm. "Topics at the workshop covered all aspects of the Arm server ecosystem, from chip design, hardware, software architecture and standardization to performance tuning, and applications in biology, medicine, meteorology, astronomy, geography etc. It is exciting to see that Arm servers are being used in so many areas, contributing significantly to the global economy."The post Video: Frontiers of AI Deployments in HPC on Arm appeared first on insideHPC.
|