by staff on (#3GAY9)
This is the final entry in an insideHPC series of features that explores new resource management solutions for workload convergence, such as Bright Cluster Manager by Bright Computing. This article highlights how resource management systems that can manage clusters on-premises or in the cloud greatly simplify cluster management. That way, different tools do not have to be learned for managing a cluster based on whether it is located in the company data center or in the cloud. The post Resource Management Across the Private/Public Cloud Divide appeared first on insideHPC.
|
High-Performance Computing News Analysis | insideHPC
Link | https://insidehpc.com/ |
Feed | http://insidehpc.com/feed/ |
Updated | 2024-11-05 05:30 |
by Rich Brueckner on (#3GAQY)
The European Horizon 2020 AllScale project has launched a survey on exascale resilience. "As we approach ExaScale, compute node failure will become commonplace. @AllScaleEurope wants to know how #HPC software developers view fault tolerance today, & how they plan to incorporate fault tolerance in their software in the ExaScale era."The post Take the Exascale Resilience Survey from AllScale Europe appeared first on insideHPC.
|
by staff on (#3G8S6)
Researchers at North Carolina State University are using the Blue Waters Supercomputer to explore graphene’s applications, including its use in nanoscale electronics and electrical DNA sequencing. "We’re looking at what’s beyond Moore’s law, whether one can devise very small transistors based on only one atomic layer, using new methods of making materials," said Professor Jerry Bernholc of North Carolina State University. "We are looking at potential transistor structures consisting of a single layer of graphene, etched into lines of nanoribbons, where the carbon atoms are arranged like a chicken wire pattern. We are looking at which structures will function well, at a few atoms of width." The post Supercomputing Graphene Applications in Nanoscale Electronics appeared first on insideHPC.
|
by Rich Brueckner on (#3G8S8)
The OpenPOWER Summit has posted its speaker agenda. Held in conjunction with IBM Think 2018, the event takes place March 19 in Las Vegas. "The OpenPOWER Foundation is an open technical community based on the POWER architecture, enabling collaborative development and opportunity for member differentiation and industry growth. The goal of the OpenPOWER Foundation is to create an open ecosystem, using the POWER Architecture to share expertise, investment, and server-class intellectual property to serve the evolving needs of customers and industry."The post Agenda Posted: OpenPOWER 2018 Summit in Las Vegas appeared first on insideHPC.
|
by Rich Brueckner on (#3G8MD)
CERN's Maria Girona gave this talk at the HiPEAC 2018 conference in Manchester. "The Large Hadron Collider (LHC) is one of the largest and most complicated scientific apparata ever constructed. In this keynote, I will discuss the challenges of capturing, storing and processing the large volumes of data generated at CERN. I will also discuss how these challenges will evolve towards the High-Luminosity Large Hadron Collider (HL-LHC), the upgrade programme scheduled to begin taking data in 2026 and to run into the 2030s, generating some 30 times more data than the LHC has currently produced." The post Video: Computing Challenges at the Large Hadron Collider appeared first on insideHPC.
|
by Rich Brueckner on (#3G6PG)
Nicholas Frontiere from the University of Chicago gave this talk at the DOE CSGF Program Review meeting. "In response to the plethora of data from current and future large-scale structure surveys of the universe, sophisticated simulations are required to obtain commensurate theoretical predictions. We have developed the Hardware/Hybrid Accelerated Cosmology Code (HACC), capable of sustained performance on powerful and architecturally diverse supercomputers to address this numerical challenge. We will investigate the numerical methods utilized to solve a problem that evolves trillions of particles, with a dynamic range of a million to one."The post HACC: Fitting the Universe inside a Supercomputer appeared first on insideHPC.
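To make the computational challenge concrete, here is a toy direct-summation gravity kernel in C++. This is not HACC's algorithm (HACC combines particle-mesh techniques with short-range force corrections); it is only a sketch of the O(N^2) pairwise interaction that becomes intractable at trillions of particles and motivates codes like HACC.

    // Toy direct-summation gravity kernel (illustrative only; HACC itself
    // uses particle-mesh plus short-range methods).  The pairwise loop
    // below scales as O(N^2), which is what breaks down at trillions of particles.
    #include <cmath>
    #include <cstddef>
    #include <iostream>
    #include <vector>

    struct Particle { double x, y, z, mass; };

    // Acceleration on particle i from every other particle (G = 1, softened).
    void accel(const std::vector<Particle>& p, std::size_t i,
               double& ax, double& ay, double& az, double eps = 1e-3) {
        ax = ay = az = 0.0;
        for (std::size_t j = 0; j < p.size(); ++j) {
            if (j == i) continue;
            double dx = p[j].x - p[i].x, dy = p[j].y - p[i].y, dz = p[j].z - p[i].z;
            double r2 = dx * dx + dy * dy + dz * dz + eps * eps;
            double inv_r3 = 1.0 / (r2 * std::sqrt(r2));
            ax += p[j].mass * dx * inv_r3;
            ay += p[j].mass * dy * inv_r3;
            az += p[j].mass * dz * inv_r3;
        }
    }

    int main() {
        std::vector<Particle> p = {{0, 0, 0, 1.0}, {1, 0, 0, 1.0}, {0, 1, 0, 2.0}};
        double ax, ay, az;
        accel(p, 0, ax, ay, az);
        std::cout << "a0 = (" << ax << ", " << ay << ", " << az << ")\n";
        return 0;
    }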
|
by Rich Brueckner on (#3G6K2)
The Center for Institutional Research Computing at Washington State University is seeking a High-Performance Computing Systems Engineer in our Job of the Week. "This position will play a vital role in the engineering and administration of HPC clusters used by the research community at Washington State University. This position is an exciting opportunity to participate in the frontiers of research computing through the selection, configuration, and management of HPC infrastructure including all computing systems, networking, and storage. This position is key to ensuring the high quality of service and performance of WSU's research computing resources."The post Job of the Week: HPC Systems Engineer at Washington State University appeared first on insideHPC.
|
by Rich Brueckner on (#3G4W8)
In this video from HiPEAC 2018 in Manchester, Dan Belov from DeepMind describes the company's machine learning technology and some of the challenges ahead. "DeepMind Inc. is well known for state of the art Deep Reinforcement Learning (DRL) algorithms such as DQN on Atari, A3C on DMLab and AlphaGo Zero. I would like to take you on a tour of challenges we encounter when training DRL agents on large workloads with hundreds of terabytes of data. I’ll talk about why DRL poses unique challenges when designing distributed systems and hardware as opposed to simple supervised learning. Finally I’d like to discuss opportunities for DRL to help systems design and operation." The post Video: Deep Reinforcement Learning and Systems Infrastructure at DeepMind appeared first on insideHPC.
|
by Rich Brueckner on (#3G4TV)
Today Adaptive Computing announced the release of Moab 9.1.2, an update which has undergone thousands of quality tests and includes scores of customer-requested enhancements. "Moab is a world leader in dynamically optimizing large-scale computing environments. It intelligently places and schedules workloads and adapts resources to optimize application performance, increase system utilization, and achieve organizational objectives. Moab's unique intelligent and predictive capabilities evaluate the impact of future orchestration decisions across diverse workload domains (HPC, HTC, Big Data, Grid Computing, SOA, Data Centers, Cloud Brokerage, Workload Management, Enterprise Automation, Workflow Management, Server Consolidation, and Cloud Bursting); thereby optimizing cost reduction and speeding product delivery."The post Adaptive Computing rolls out Moab HPC Suite 9.1.2 appeared first on insideHPC.
|
by staff on (#3G2JV)
Today, Intel announced the Intel SSD DC P4510 Series for data center applications. As a high performance storage device, the P4510 Series uses 64-layer TLC Intel 3D NAND to enable end users to do more per server, support broader workloads, and deliver space-efficient capacity. "The P4510 Series enables up to four times more terabytes per server and delivers up to 10 times better random read latency at 99.99 percent quality of service than previous generations. The drive can also deliver up to double the input-output operations per second (IOPS) per terabyte."The post Intel Rolls out new 3D NAND SSDs appeared first on insideHPC.
|
by Rich Brueckner on (#3G2ED)
The cHIPSet Annual Plenary Meeting takes place in France next month. To learn more, we caught up with the Vice-Chair for the project, Dr. Horacio González-Vélez, Associate Professor and Head of the Cloud Competency Centre at the National College of Ireland. "The plenary meeting will feature a workshop entitled "Accelerating Modeling and Simulation in the Data Deluge Era". We are expecting keynote presentations and panel discussions on how the forthcoming exascale systems will influence the analysis and interpretation of data, including the simulation of models, to match observation to theory."The post Interview: European cHiPSet Event focuses on High-Performance Modeling and Simulation for Big Data Applications appeared first on insideHPC.
|
by staff on (#3G25V)
In this TACC podcast, Suzanne Pierce from the Texas Advanced Computing Center describes her upcoming panel discussion on AI and water management and the work TACC is doing to support efforts to bridge advanced computing with Earth science. "It's about letting the AI help us be better decision makers. And it helps us move towards answering, discussing, and exploring the questions that are most important and most critical for our quality of life and our communities so that we can develop a future together that's brighter."The post TACC Podcast Looks at AI and Water Management appeared first on insideHPC.
|
by Rich Brueckner on (#3G1XB)
The 3rd International Workshop on In Situ Visualization has issued its Call for Papers. Held in conjunction with ISC 2018, WOIV 2018 takes place June 28 in Frankfurt, Germany. "Our goal is to appeal to a wide-ranging audience of visualization scientists, computational scientists, and simulation developers, who have to collaborate in order to develop, deploy, and maintain in situ visualization approaches on HPC infrastructures. We hope to provide practical take-away techniques and insights that serve as inspiration for attendees to implement or refine in their own HPC environments and to avoid pitfalls." The post Call for Papers: International Workshop on In Situ Visualization appeared first on insideHPC.
|
by Rich Brueckner on (#3FZPC)
Today the PASC18 conference announced that Alice-Agnes Gabriel from Ludwig-Maximilian-University of Munich will deliver a keynote address on earthquake simulation. "This talk will focus on using physics-based scenarios, modern numerical methods and hardware specific optimizations to shed light on the dynamics, and severity, of earthquake behavior. It will present the largest-scale dynamic earthquake rupture simulation to date, which models the 2004 Sumatra-Andaman event - an unexpected subduction zone earthquake which generated a rupture of over 1,500 km in length within the ocean floor followed by a series of devastating tsunamis." The post PASC18 Keynote to Focus on Extreme-Scale Multi-Physics Earthquake Simulations appeared first on insideHPC.
|
by Rich Brueckner on (#3FZKP)
In this video, Information Technology Subcommittee Chairman Will Hurd begins a three-part hearing on Artificial Intelligence. "Over the next three months, the IT Subcommittee will hear from industry professionals such as Intel and NVIDIA as well as government stakeholders with the goal of working together to keep the United States the world leader in artificial intelligence technology." The post Video: Intel and NVIDIA at Congressional Hearing on Artificial Intelligence appeared first on insideHPC.
|
by staff on (#3FZGW)
In this special guest feature, SC18 Technical Program Chair David Keyes from KAUST writes that important changes are coming to the world's biggest HPC conference this November in Dallas.The post Updating the SC18 Technical Program to Inspire the Future appeared first on insideHPC.
|
by Rich Brueckner on (#3FZGY)
Ingrid Barcena from KU Leuven gave this talk at the HPC Knowledge Portal meeting in San Sebastián, Spain. "One of the biggest challenges when procuring High Performance Computing systems is to ensure that not only a faster machine than the previous one is bought, but that the new system is well suited to the organization's needs, fits within a limited budget, and proves value for money. However, this is not a simple task, and failing to buy the right HPC system can have tremendous consequences for an organization." The post Buying for Tomorrow: HPC Systems Procurement Matters appeared first on insideHPC.
|
by MichaelS on (#3FZAZ)
Using the Intel® Advisor Flow Graph Analyzer (FGA), applications such as those needed for autonomous driving can be developed and implemented on very high-performing underlying software and hardware. Underlying the Intel FGA are the Intel Threading Building Blocks (TBB), which take advantage of the multiple cores available on all types of systems today. The post Flow Graph Analyzer – Speed Up Your Applications appeared first on insideHPC.
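For readers curious what a flow graph looks like in code, here is a minimal sketch of a two-stage Intel TBB flow graph of the kind the Flow Graph Analyzer visualizes. The node names and the trivial doubling/printing bodies are invented for illustration and are not taken from the autonomous-driving example above.

    #include <iostream>
    #include <tbb/flow_graph.h>

    int main() {
        tbb::flow::graph g;

        // Stage 1: transform each incoming value (placeholder for real work).
        tbb::flow::function_node<int, int> preprocess(
            g, tbb::flow::unlimited, [](int frame) { return frame * 2; });

        // Stage 2: consume the transformed value.
        tbb::flow::function_node<int, tbb::flow::continue_msg> report(
            g, tbb::flow::unlimited, [](int value) {
                std::cout << "processed " << value << "\n";
                return tbb::flow::continue_msg();
            });

        tbb::flow::make_edge(preprocess, report);   // connect the two stages

        for (int i = 0; i < 4; ++i)
            preprocess.try_put(i);                  // feed work into the graph
        g.wait_for_all();                           // wait for all in-flight work
        return 0;
    }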
|
by staff on (#3FWYM)
Researchers are using the Blue Waters supercomputer to create better tools for long-term crop prediction. "We built this new tool to bridge these two types of crop models, combining their strengths and eliminating the weaknesses. This work is an outstanding example of the convergence of simulation and data science that is a driving factor in the National Strategic Computing Initiative announced by the White House in 2015." The post Supercomputing Better Tools for Long-Term Crop Prediction appeared first on insideHPC.
|
by Rich Brueckner on (#3FWMR)
Todd Gamblin from LLNL gave this talk at FOSDEM'18. "This talk will introduce binary packaging in Spack and some of the open infrastructure we have planned for distributing packages. We'll talk about challenges to providing binaries for a combinatorially large package ecosystem, and what we're doing in Spack to address these problems. We'll also talk about challenges for implementing relocatable binaries with a multi-compiler system like Spack." The post Binary Packaging for HPC with Spack appeared first on insideHPC.
|
by staff on (#3FWHD)
MIT researchers have developed a special-purpose chip that increases the speed of neural-network computations by three to seven times over its predecessors, while reducing power consumption 94 to 95 percent. "The computation these algorithms do can be simplified to one specific operation, called the dot product. Our approach was, can we implement this dot-product functionality inside the memory so that you don’t need to transfer this data back and forth?" The post MIT helps move Neural Nets back to Analog appeared first on insideHPC.
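As a plain-C++ reference point, the dot product mentioned in the quote is nothing more than the multiply-accumulate loop below. On a conventional processor every weight and activation must be moved from memory to the arithmetic units; the MIT chip performs the operation inside the memory array itself, which is where the reported energy savings come from. The code is illustrative only and does not reflect the chip's programming model.

    #include <cstddef>
    #include <iostream>
    #include <vector>

    // output = sum_i(weight[i] * activation[i]): one multiply-accumulate per element.
    float dot(const std::vector<float>& weights,
              const std::vector<float>& activations) {
        float acc = 0.0f;
        for (std::size_t i = 0; i < weights.size(); ++i)
            acc += weights[i] * activations[i];
        return acc;
    }

    int main() {
        std::vector<float> w = {0.5f, -1.0f, 2.0f};   // toy weights
        std::vector<float> x = {1.0f,  2.0f, 0.5f};   // toy activations
        std::cout << "dot = " << dot(w, x) << "\n";   // 0.5 - 2.0 + 1.0 = -0.5
        return 0;
    }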
|
by staff on (#3FWF5)
In this special guest feature, Robert Roe from Scientific Computing World explores efforts to diversify the HPC processor market. "With the arrival of Arm and now the reintroduction of AMD to HPC, there are signs of new life in an HPC processor market that has been dominated by Intel Xeon processors for a number of years."The post HPC Processor Competition Heats Up appeared first on insideHPC.
|
by Rich Brueckner on (#3FSXJ)
AI and Machine Learning have been called the Next Big Thing in High Performance Computing, but what kinds of results are your peers already getting right now? There is one way to find out: by taking our HPC & AI Survey. "We invite you to take our insideHPC Survey on the intersection of HPC & AI. In return, we'll send you a free report with the results and enter your name in a drawing to win one of two Echo Show devices with Alexa technology. The Echo Show is a voice-activated smart screen device that Amazon unveiled back in 2017." The post Take our AI & HPC Survey to Win an Amazon Echo Show Device appeared first on insideHPC.
|
by Rich Brueckner on (#3FSXK)
Rob Farber from TechEnablement gave this talk at the HPC Knowledge Portal 2017 meeting. "This talk will merge two state-of-the-art briefings: massive-scale and state-of-the-art algorithm mappings for both machine learning and unstructured data analytics, including how they are affected by current and forthcoming hardware and the technology trends at Intel, NVIDIA, IBM, ARM, and OpenPower that will affect algorithm developments." The post State-Of-The-Art Machine Learning Algorithms and Near-Term Technology Trends appeared first on insideHPC.
|
by staff on (#3FSMT)
Today the Gen-Z Consortium released the Gen-Z Core Specification 1.0 on its website. As an open systems interconnect, Gen-Z is designed to provide memory semantic access to data and devices via direct-attached, switched or fabric topologies. "The release of core specification 1.0 today is a significant step towards realization of new architectures and evolution of existing technologies to expand into new roles. Samsung is excited to be a member of the Gen-Z Consortium and is committed towards industry open standards." The post Gen-Z Consortium Announces the Public Release of Its Core Specification 1.0 appeared first on insideHPC.
|
by Rich Brueckner on (#3FSJ3)
Kenneth Hoste from Ghent University gave this talk at the EasyBuild User Meeting in Amsterdam. "EasyBuild is a software build and installation framework that allows you to manage (scientific) software on HPC systems in an efficient way. The EasyBuild User Meeting is an open and highly interactive event that provides a great opportunity to meet fellow EasyBuild enthusiasts, discuss related topics and learn about new aspects of the tool." The post EasyBuild: Past, Present & Future appeared first on insideHPC.
|
by staff on (#3FSJ5)
Whether the application is floating-point intensive, integer based, uses a lot of memory, has significant I/O requirements, or its widespread use is limited by purchased licenses, a system that assigns the right job to the right server is key to maximizing the computing infrastructure. We continue our insideHPC series of features exploring new resource management solutions for workload convergence, such as Bright Cluster Manager by Bright Computing. This article discusses how scheduling can work to optimize infrastructure and improve HPC system management.The post HPC System Management: Scheduling to Optimize Infrastructure appeared first on insideHPC.
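As a deliberately simplified illustration of the matching step described above (not Bright Cluster Manager's or any real scheduler's logic), the sketch below places a job on the first node that satisfies its declared core and memory requirements; production schedulers layer priorities, licenses, I/O, and fairness policies on top of this basic fit test.

    #include <cstddef>
    #include <iostream>
    #include <string>
    #include <vector>

    struct Node { std::string name; int free_cores; int free_mem_gb; };
    struct Job  { std::string name; int cores; int mem_gb; };

    // Return the index of the first node that can satisfy the job, or -1.
    int place(const std::vector<Node>& nodes, const Job& job) {
        for (std::size_t i = 0; i < nodes.size(); ++i)
            if (nodes[i].free_cores >= job.cores && nodes[i].free_mem_gb >= job.mem_gb)
                return static_cast<int>(i);
        return -1;
    }

    int main() {
        std::vector<Node> nodes = {{"node01", 8, 64}, {"node02", 32, 256}};
        Job job = {"cfd-solver", 16, 128};        // needs 16 cores and 128 GB
        int idx = place(nodes, job);
        std::cout << job.name << " -> "
                  << (idx >= 0 ? nodes[idx].name : "queued") << "\n";  // prints node02
        return 0;
    }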
|
by staff on (#3FQBZ)
Today the Texas Advanced Computing Center announced a partnership with the U.S. Department of Defense to provide researchers with access to advanced computing resources as part of an effort to develop novel computational approaches for complex manufacturing and design problems. "TRADES, which stands for TRAnsformative DESign, is a program within the DOD Defense Advanced Research Projects Agency (DARPA). The essence of the program is to synthesize components of complex mechanical platforms (e.g., ground vehicles, ships, or air and space craft), which leverage advanced materials and manufacturing methods such as direct digital manufacturing." The post TACC and DOD to Co-Develop Novel Computational Approaches appeared first on insideHPC.
|
by staff on (#3FQ32)
John Barrus writes that Cloud TPUs are available in beta on Google Cloud Platform to help machine learning experts train and run their ML models more quickly. "Cloud TPUs are a family of Google-designed hardware accelerators that are optimized to speed up and scale up specific ML workloads programmed with TensorFlow. Built with four custom ASICs, each Cloud TPU packs up to 180 teraflops of floating-point performance and 64 GB of high-bandwidth memory onto a single board." The post Google Cloud TPU Machine Learning Accelerators now in Beta appeared first on insideHPC.
|
by staff on (#3FQ01)
The GENESYS project applied Optalysys’s unique optical processing technology to perform large-scale DNA sequence alignment. "The collaboration with EI has been a great success," said Dr. Nick New, founder and CEO of Optalysys. "We have demonstrated the technology at several international conferences including Advances in Genome Biology and Technology, Plant and Animal Genome Conference and Genome 10K/Genome Science, to an overwhelmingly enthusiastic response. We are looking forward to continuing our strong relationship with EI through the beta program and beyond." The post Optalysys Optical Processing Achieves 90 percent energy savings for DNA Sequence Alignment appeared first on insideHPC.
|
by staff on (#3FPX2)
HPC managed services provider CPU 24/7 has been acquired by IAV GmbH, a leading engineering services firm for the automotive industry. "We are very happy we found such a good home for our company," says Dr. Matthias Reyer, Co-Founder at CPU 24/7. "By combining CPU 24/7’s strength in CAE and HPC applications with IAV’s leading automotive engineering services, the company is perfectly positioned to continue providing their clients with the best services available on the market today." The post CPU 24/7 Acquired by IAV GmbH for Automotive HPC Managed Services appeared first on insideHPC.
|
by Rich Brueckner on (#3FPSZ)
In this video, Mark Handley will explain what modern CPUs actually do to go fast, discuss how this leads to the Meltdown and Spectre vulnerabilities, and summarize the mitigations that are being put in place. "Operating systems and hypervisors need significant changes to how memory management is performed, CPU firmware needs updating, compilers are being modified to avoid risky instruction sequences, and browsers are being patched to prevent scripts having access to accurate time."The post Alan Turing Institute Looks at Meltdown and Spectre appeared first on insideHPC.
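For context, the compiler changes mentioned in the talk summary target code patterns like the canonical Spectre variant-1 gadget sketched below (based on the public disclosures, not on code from the talk). If the processor speculates past the bounds check, the secret-dependent load leaves a measurable cache footprint.

    #include <cstddef>
    #include <cstdint>
    #include <iostream>

    constexpr std::size_t kArray1Size = 16;
    std::uint8_t array1[kArray1Size] = {1, 2, 3, 4};   // attacker-influenced index lives here
    std::uint8_t array2[256 * 512];                    // probe array; its cache state leaks the value

    // Bounds check that the CPU may speculate past (Spectre variant 1).
    std::uint8_t victim_function(std::size_t x) {
        if (x < kArray1Size) {
            // During misspeculation this secret-dependent load warms exactly one
            // cache line of array2, which an attacker can later detect by timing.
            return array2[array1[x] * 512];
        }
        return 0;
    }

    int main() {
        std::cout << static_cast<int>(victim_function(3)) << "\n";
        return 0;
    }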
|
by Rich Brueckner on (#3FMT2)
The 47th International Conference on Parallel Processing has issued its Call for Proposals. Sponsored by ACM SIGHPC, the event takes place August 13-16 in Eugene, Oregon. "Parallel and distributed computing is a central topic in science, engineering and society. ICPP, the International Conference on Parallel Processing, provides a forum for engineers and scientists in academia, industry and government to present their latest research findings in all aspects of parallel and distributed computing."The post Call for Proposals: International Conference on Parallel Processing in Oregon appeared first on insideHPC.
|
by Rich Brueckner on (#3FMPZ)
Zvonimir Bandic from Western Digital gave this talk at the SNIA Persistent Memory Summit. "Much has been debated about what it would take to scale a system to exabyte main memory with the right levels of latencies to address the world’s growing and diverse data needs. This presentation will explore legacy distributed system architectures based on traditional CPU and peripheral attachment of persistent memory, scaled out through the use of RDMA networking." The post Realizing Exabyte-scale PM Centric Architectures and Memory Fabrics appeared first on insideHPC.
|
by Rich Brueckner on (#3FJY0)
In this video, Professor Derek Leinweber from the University of Adelaide presents his research in Lattice Quantum Field Theory, revealing the origin of mass in the universe. "While the fundamental interactions are well understood, elucidating the complex phenomena emerging from this quantum field theory is fascinating and often surprising. My explorations of QCD-vacuum structure featured in Professor Wilczek's 2004 Physics Nobel Prize Lecture. Our approach to discovering the properties of this key component of the Standard Model of the Universe favors fundamental first-principles numerical simulations of QCD on supercomputers. This field of study is commonly referred to as Lattice QCD." The post Supercomputing the Origin of Mass appeared first on insideHPC.
|
by Rich Brueckner on (#3FJWE)
The Supercomputing Frontiers conference has announced its full agenda and keynote speakers. The event takes place March 12-15 in Warsaw, Poland. "Supercomputing Frontiers is an annual international conference that provides a platform for thought leaders from both academia and industry to interact and discuss visionary ideas, important visionary trends and substantial innovations in supercomputing." The post Supercomputing Frontiers Europe Announces Keynote Speakers appeared first on insideHPC.
|
by staff on (#3FGKX)
PRACE is once again seeking nominations for the Ada Lovelace Award for HPC 2018. The award recognizes women in Europe who are breaking new barriers in science and engineering.The post Nominations open for PRACE Ada Lovelace Award for HPC 2018 appeared first on insideHPC.
|
by staff on (#3FGHB)
For the first time, scientists have used HPC to reconstruct the data collected by a nuclear physics experiment—an advance that could dramatically reduce the time it takes to make detailed data available for scientific discoveries. "By running multiple computing jobs simultaneously on the allotted supercomputing cores, the team transformed 4.73 petabytes of raw data into 2.45 petabytes of "physics-ready" data in a fraction of the time it would have taken using in-house high-throughput computing resources, even with a two-way transcontinental data journey." The post Reconstructing Nuclear Physics Experiments with Supercomputers appeared first on insideHPC.
|
by Rich Brueckner on (#3FGE3)
The MEMSYS 2018 International Symposium on Memory Systems has issued its Call for Papers. The conference takes place Oct. 1-4 in Washington, D.C. "The memory system has become extremely important. Memory is slow, and this is the primary reason that computers don’t run significantly faster than they do. In large-scale computer installations such as the building-sized systems powering Google.com, Amazon.com, and the financial sector, memory is often the largest dollar cost as well as the largest consumer of energy. Consequently, improvements in the memory system can have significant impact on the real world, improving power and energy, performance, and/or dollar cost." The post Call for Papers: MEMSYS 2018 appeared first on insideHPC.
|
by Rich Brueckner on (#3FGBG)
Phil Rogers from NVIDIA gave this talk at SC17. "In this talk, we describe the architecture of SATURNV, and how we use it every day at NVIDIA to run our deep learning workloads for both production and research use cases. We explore how the NVIDIA GPU Cloud software is used to manage and schedule work on SATURNV, and how it gives us the agility to rapidly respond to business-critical projects. We also present some of the results of our research in operating this unique GPU-accelerated data center." The post Inside SATURNV – Insights from NVIDIA’s Deep Learning Supercomputer appeared first on insideHPC.
|
by Rich Brueckner on (#3FEEK)
Today an HPC Startup called Sylabs entered the market to provide solutions and services based on Singularity, an open source container technology designed for high performance computing. Founded by the inventor and project lead for Singularity, Sylabs will license and support Singularity Pro, an enterprise version of the software, and introduce it to businesses in the enterprise and HPC commercial markets.The post Sylabs Startup forms Commercial Entity behind Singularity for HPC appeared first on insideHPC.
|
by Rich Brueckner on (#3FDRV)
In this Let's Talk Exascale podcast, Franck Cappello from Argonne National Laboratory describes the VeloC project. "The VeloC project endeavors to provide ECP applications an optimal fault-tolerance environment, while the aim of the EZ project is to provide data reduction. Interviewee: Franck Cappello, Argonne National Laboratory."The post Addressing Fault Tolerance and Data Compression at Exascale appeared first on insideHPC.
|
by Rich Brueckner on (#3FDNG)
Researchers at LLNL are using supercomputers to simulate the onset of earthquakes in California. "This study shows that powerful supercomputing can be used to calculate earthquake shaking on a large, regional scale with more realism than we’ve ever been able to produce before," said Artie Rodgers, LLNL seismologist and lead author of the paper. The post Hayward Fault Earthquake Simulations Increase Fidelity of Ground Motions appeared first on insideHPC.
|
by staff on (#3FDJJ)
"NSF’s participation with major cloud providers is an innovative approach to combining resources to better support data science research," said Jim Kurose, assistant director of NSF for Computer and Information Science and Engineering (CISE). “This type of collaboration enables fundamental research and spurs technology development and economic growth in areas of mutual interest to the participants, driving innovation for the long-term benefit of our nation.â€The post Big 3 Cloud Providers join with NSF to Support Data Science appeared first on insideHPC.
|
by staff on (#3FDF6)
In this WSJ Podcast, scientists from IBM's T. J. Watson Research Center describe how quantum computing will enable a new age of scientific discovery. "Quantum computers are incredibly powerful machines that take a new approach to processing information. Built on the principles of quantum mechanics, they exploit complex and fascinating laws of nature that are always there, but usually remain hidden from view. By harnessing such natural behavior, quantum computing can run new types of algorithms to process information more holistically." The post Podcast: The Age of Quantum Computing is (Almost) Here appeared first on insideHPC.
|
by Richard Friedman on (#3FDCW)
Vectorization, the hardware optimization technique synonymous with early vector supercomputers like the Cray-1 (1975), has reappeared with even greater importance than before. Today, 40+ years later, the AVX-512 vector instructions in the most recent many-core Intel Xeon and Intel® Xeon Phi™ processors can increase application performance by up to 16x for single-precision codes. The post Vectorization Now More Important Than Ever appeared first on insideHPC.
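As a rough sketch of where that single-precision speedup comes from, the loop below has the simple, dependence-free shape that compilers auto-vectorize; built with AVX-512 enabled (for example, g++ -O3 -mavx512f, or icpc -O3 -xCORE-AVX512 on the Intel compiler of that era), each 512-bit vector register holds 16 floats, hence the up-to-16x figure. The flags and the saxpy example are generic illustrations, not taken from the article.

    #include <cstddef>
    #include <iostream>
    #include <vector>

    // Simple, dependence-free loop: the ideal shape for auto-vectorization.
    // With AVX-512 enabled, groups of iterations map onto 512-bit registers
    // holding 16 single-precision values each.
    void saxpy(float a, const std::vector<float>& x, std::vector<float>& y) {
        for (std::size_t i = 0; i < x.size(); ++i)
            y[i] = a * x[i] + y[i];
    }

    int main() {
        std::vector<float> x(1024, 1.0f), y(1024, 2.0f);
        saxpy(3.0f, x, y);
        std::cout << "y[0] = " << y[0] << "\n";   // 3*1 + 2 = 5
        return 0;
    }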
|
by Rich Brueckner on (#3FAW9)
Adam Roe from Intel gave this talk at LAD'17 in Paris. "Lustre has had a number of compelling new features added in recent releases; this talk will look at those features in detail and see how well they all work together from both a performance and functionality perspective. Comparing some of the numbers from last year, we will see how far the Lustre* filesystem has come in such a short period of time (LAD’16 to LAD’17), using the same use cases to observe the generational improvements in the technology." The post Video: Lustre Generational Performance Improvements & New Features appeared first on insideHPC.
|
by staff on (#3FASG)
Today the Numerical Algorithms Group (NAG) in the UK announced the appointment of John Elmer to its Board of Directors. "I am delighted that John Elmer is joining the NAG Board," said Rob Meyer, CEO, NAG. "His track record of helping technology-driven companies achieve their strategic growth objectives brings a valuable and complementary skill set for the Board. His 30+ years of experience in the upstream oil & gas industry will help us expand our offerings in this key market segment. The NAG Board of Directors welcome John and look forward to his contributions to our success." The post John Elmer joins NAG Board of Directors appeared first on insideHPC.
|
by Rich Brueckner on (#3FAPE)
The Center for Institutional Research Computing at Washington State University is seeking a Computational Scientist in our Job of the Week. "This position will play a key role in assisting the research community in the development and optimization of applications on high-performance computing clusters across a broad spectrum of application domains."The post Job of the Week: Computational Scientist at Washington State University appeared first on insideHPC.
|
by staff on (#3FAKP)
Today One Stop Systems introduced the new 2U Ion Accelerator Flash Storage Array. The new array offers flexible capacity while maintaining the high-bandwidth and low-latency pedigree of Ion Accelerator arrays deployed in hundreds of global installations. This OSS shared flash storage array boasts the latest Ion Accelerator 5.0 software, NVMe drives, networking options and dual Intel Xeon Scalable Processors to support the most demanding applications.The post One Stop Systems Rolls Out High Bandwidth NVMe Ion Flash Storage Array appeared first on insideHPC.
|