by Rich Brueckner on (#3GKK3)
Today D-Wave Systems announced that the company has completed fabrication and testing of a working prototype of its next-generation processor, and the installation of a D-Wave 2000Q system for a customer. The prototype processor uses an advanced new architecture that will be the basis for D-Wave’s next-generation quantum processor. The D-Wave 2000Q system, the fourth generation of commercial products delivered by D-Wave, was installed at the Quantum Artificial Intelligence Lab run by Google, NASA, and the Universities Space Research Association. The post D-Wave Completes Prototype of Next-Gen Quantum Processor appeared first on insideHPC.
|
High-Performance Computing News Analysis | insideHPC
Link | https://insidehpc.com/ |
Feed | http://insidehpc.com/feed/ |
Updated | 2024-11-25 02:30 |
by Rich Brueckner on (#3GKG0)
Scot Schultz from Mellanox gave this talk at the Stanford HPC Conference. "Today, many agree that the next wave of disruptive technology blurring the lines between the digital, physical and even the biological, will be the fourth industrial revolution of AI. The fusion of state-of-the-art computational capabilities, extensive automation and extreme connectivity is already affecting nearly every aspect of society, driving global economics and extending into every aspect of our daily life." The post Highest Performance and Scalability for HPC and AI appeared first on insideHPC.
|
by Rich Brueckner on (#3GGX3)
Today Lenovo unveiled the new ThinkSystem SD650 server with Direct Water Cooling for energy-efficient, high-density computing. Already deployed at the Leibniz Supercomputing Centre in Germany, the ThinkSystem SD650 will save customers up to 40% on energy costs while delivering a 10-15% performance improvement over air-cooled systems. "The system utilizes warm water instead of air to cool the components, including the CPUs and memory. Water conducts heat more efficiently, allowing customers to run their processors in "turbo" mode continuously, resulting in a performance improvement. The SD650 HPC servers have no system fans, operate at lower temperatures when compared to standard air-cooled systems and have negligible datacenter chilled water requirements. The result – 30-40% lower datacenter power consumption compared to traditional cooling methods." The post Warm Water-Cooling enables a fanless design for new Lenovo ThinkSystem SD650 appeared first on insideHPC.
|
by Rich Brueckner on (#3GGTR)
OCF in the UK has deployed a new supercomputer at the University of Southampton. Named Iridis 5, the 1.3 Petaflop system will support research demanding traditional HPC as well as projects requiring large scale deep storage, big data analytics, web platforms for bioinformatics, and AI services. "We’ve had early access to Iridis 5 and it’s substantially bigger and faster than its previous iteration – it’s well ahead of any other in use at any University across the UK for the types of calculations we’re doing." The post Lenovo ThinkSystem Servers Power 1.3 Petaflop Supercomputer at University of Southampton appeared first on insideHPC.
|
by Rich Brueckner on (#3GGN8)
DK Panda from Ohio State University gave this talk at the Stanford HPC Conference. "This talk will focus on challenges in designing HPC, Deep Learning, and HPC Cloud middleware for Exascale systems with millions of processors and accelerators. For the HPC domain, we will discuss the challenges in designing runtime environments for MPI+X (PGAS-OpenSHMEM/UPC/CAF/UPC++, OpenMP and CUDA) programming models by taking into account support for multi-core systems (KNL and OpenPower), high-performance networks, GPGPUs (including GPUDirect RDMA) and energy awareness." The post Designing HPC, Deep Learning, and Cloud Middleware for Exascale Systems appeared first on insideHPC.
|
by Richard Friedman on (#3GGJN)
The latest version of Intel® Math Kernel Library (MKL) offers vectorized compact functions for general and specialized computations on batches of small matrices. These functions rely on true SIMD (single instruction, multiple data) matrix computations, and provide significant performance benefits compared to traditional techniques that exploit multithreading but rely on standard data formats. The post Intel MKL Compact Matrix Functions Attain Significant Speedups appeared first on insideHPC.
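The speedup of such compact functions comes from data layout: instead of storing each small matrix contiguously, the same element of every matrix in a batch is stored contiguously, so one SIMD instruction can process that element across many matrices at once. A minimal sketch of that layout idea in plain Python follows; it illustrates the concept only and is not the Intel MKL API.

```python
# Illustration of a "compact" (interleaved) batch layout: element
# (i, j) of every matrix in the batch is stored contiguously, so the
# innermost loop walks contiguous data and can be vectorized.
# This sketches the layout idea, not the actual MKL routines.

def to_compact(batch):
    """Convert a list of n x n matrices so that compact[i][j] holds
    the (i, j) element of every matrix in the batch."""
    n = len(batch[0])
    return [[[m[i][j] for m in batch] for j in range(n)] for i in range(n)]

def compact_matmul(a, b, n, batch_size):
    """Batched n x n matrix multiply on compact-layout operands.
    The innermost loop over the batch index is the one a vectorizing
    compiler can turn into SIMD instructions."""
    c = [[[0.0] * batch_size for _ in range(n)] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                for v in range(batch_size):  # the vectorizable loop
                    c[i][j][v] += a[i][k][v] * b[k][j][v]
    return c

# Multiply a batch of two 2x2 matrices by identity matrices.
A = [[[1.0, 2.0], [3.0, 4.0]], [[5.0, 6.0], [7.0, 8.0]]]
I2 = [[[1.0, 0.0], [0.0, 1.0]]] * 2
C = compact_matmul(to_compact(A), to_compact(I2), n=2, batch_size=2)
print(C[0][0])  # (0, 0) element of each product: [1.0, 5.0]
```

Because multiplying by the identity returns each input matrix, the result holds each matrix's elements interleaved across the batch, which is exactly the "standard data formats vs. compact format" distinction the article draws.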
|
by Rich Brueckner on (#3GDRN)
Today Samsung announced that it has begun mass producing the industry’s largest capacity SAS solid state drive – the PM1643 – for use in next-generation enterprise storage systems. Leveraging Samsung’s latest V-NAND technology with 64-layer, 3-bit 512-gigabit chips, the 30.72 terabyte drive delivers twice the capacity and performance of the previous 15.36TB high-capacity lineup introduced in March 2016. "With our launch of the 30.72TB SSD, we are once again shattering the enterprise storage capacity barrier, and in the process, opening up new horizons for ultra-high capacity storage systems worldwide," said Jaesoo Han, executive vice president, Memory Sales & Marketing Team at Samsung Electronics. "Samsung will continue to move aggressively in meeting the shifting demand toward SSDs over 10TB and at the same time, accelerating adoption of our trail-blazing storage solutions in a new age of enterprise systems." The post Samsung Unveils 30.72TB Enterprise SSD appeared first on insideHPC.
|
by Rich Brueckner on (#3GDRQ)
"We are going to present WekaIO, the lowest latency, highest throughput file system solution that scales to 100s of PB in a single namespace, supporting the most challenging deep learning projects that run today. We will present real-life benchmarks comparing WekaIO performance to a local SSD file system, showing that we are the only coherent shared storage that is even faster than the current caching solutions, while allowing customers to linearly scale performance by adding more GPU servers." The post WekaIO: Making Machine Learning Compute-Bound Again appeared first on insideHPC.
|
by staff on (#3GC1D)
Today HPE announced it has been selected to provide new supercomputers for the DoD High Performance Computing Modernization Program (HPCMP) to accelerate the development and acquisition of advanced national security capabilities. "The DoD’s continuous investment in supercomputing innovation is a clear testament to this development and an important contribution to U.S. national security. HPE has been a strategic partner with the HPCMP for two decades, and we are proud that the DoD now significantly extends this partnership, acknowledging HPE’s sustained leadership in high performance computing." The post HPE wins $57 Million Supercomputing Contract for DoD Modernization Program appeared first on insideHPC.
|
by Rich Brueckner on (#3GBTJ)
Adam Bertsch from LLNL gave this talk at the Stanford HPC Conference. "Our next flagship HPC system at LLNL will be called Sierra. A collaboration between multiple government and industry partners, Sierra and its sister system Summit at ORNL will pave the way towards Exascale computing architectures and predictive capability." The post Video: The Sierra Supercomputer – Science and Technology on a Mission appeared first on insideHPC.
|
by Rich Brueckner on (#3GBE1)
The 34th International Conference on Massive Storage Systems and Technologies (MSST 2018) has issued its Call for Participation. The event takes place May 14-16 in Santa Clara, California. "The conference invites you to share your research, ideas and solutions, as we continue to face challenges in the rapidly expanding need for massive, distributed storage solutions. Join us and learn about disruptive storage technologies and the challenges facing data centers, as the demand for massive amounts of data continues to increase. Join the discussion on webscale IT, and the demand on storage systems from IoT, healthcare, scientific research, and the continuing stream of smart applications (apps) for mobile devices." The post Call for Participation: MSST Mass Storage Conference 2018 appeared first on insideHPC.
|
by staff on (#3GAY9)
This is the final entry in an insideHPC series of features that explores new resource management solutions for workload convergence, such as Bright Cluster Manager by Bright Computing. This article highlights how resource management systems that can manage clusters on-premises or in the cloud greatly simplify cluster management. That way, different tools do not have to be learned for managing a cluster based on whether it is located in the company data center or in the cloud. The post Resource Management Across the Private/Public Cloud Divide appeared first on insideHPC.
|
by Rich Brueckner on (#3GAQY)
The European Horizon 2020 AllScale project has launched a survey on exascale resilience. "As we approach ExaScale, compute node failure will become commonplace. @AllScaleEurope wants to know how #HPC software developers view fault tolerance today, & how they plan to incorporate fault tolerance in their software in the ExaScale era." The post Take the Exascale Resilience Survey from AllScale Europe appeared first on insideHPC.
|
by staff on (#3G8S6)
Researchers at North Carolina State University are using the Blue Waters Supercomputer to explore graphene’s applications, including its use in nanoscale electronics and electrical DNA sequencing. "We’re looking at what’s beyond Moore’s law, whether one can devise very small transistors based on only one atomic layer, using new methods of making materials," said Professor Jerry Bernholc from North Carolina State University. "We are looking at potential transistor structures consisting of a single layer of graphene, etched into lines of nanoribbons, where the carbon atoms are arranged like a chicken wire pattern. We are looking at which structures will function well, at a few atoms of width." The post Supercomputing Graphene Applications in Nanoscale Electronics appeared first on insideHPC.
|
by Rich Brueckner on (#3G8S8)
The OpenPOWER Summit has posted its speaker agenda. Held in conjunction with IBM Think 2018, the event takes place March 19 in Las Vegas. "The OpenPOWER Foundation is an open technical community based on the POWER architecture, enabling collaborative development and opportunity for member differentiation and industry growth. The goal of the OpenPOWER Foundation is to create an open ecosystem, using the POWER Architecture to share expertise, investment, and server-class intellectual property to serve the evolving needs of customers and industry." The post Agenda Posted: OpenPOWER 2018 Summit in Las Vegas appeared first on insideHPC.
|
by Rich Brueckner on (#3G8MD)
CERN's Maria Girona gave this talk at the HiPEAC 2018 conference in Manchester. "The Large Hadron Collider (LHC) is one of the largest and most complicated scientific apparata ever constructed. In this keynote, I will discuss the challenges of capturing, storing and processing the large volumes of data generated at CERN. I will also discuss how these challenges will evolve towards the High-Luminosity Large Hadron Collider (HL-LHC), the upgrade programme scheduled to begin taking data in 2026 and to run into the 2030s, generating some 30 times more data than the LHC has currently produced." The post Video: Computing Challenges at the Large Hadron Collider appeared first on insideHPC.
|
by Rich Brueckner on (#3G6PG)
Nicholas Frontiere from the University of Chicago gave this talk at the DOE CSGF Program Review meeting. "In response to the plethora of data from current and future large-scale structure surveys of the universe, sophisticated simulations are required to obtain commensurate theoretical predictions. We have developed the Hardware/Hybrid Accelerated Cosmology Code (HACC), capable of sustained performance on powerful and architecturally diverse supercomputers to address this numerical challenge. We will investigate the numerical methods utilized to solve a problem that evolves trillions of particles, with a dynamic range of a million to one." The post HACC: Fitting the Universe inside a Supercomputer appeared first on insideHPC.
|
by Rich Brueckner on (#3G6K2)
The Center for Institutional Research Computing at Washington State University is seeking a High-Performance Computing Systems Engineer in our Job of the Week. "This position will play a vital role in the engineering and administration of HPC clusters used by the research community at Washington State University. This position is an exciting opportunity to participate in the frontiers of research computing through the selection, configuration, and management of HPC infrastructure including all computing systems, networking, and storage. This position is key to ensuring the high quality of service and performance of WSU's research computing resources." The post Job of the Week: HPC Systems Engineer at Washington State University appeared first on insideHPC.
|
by Rich Brueckner on (#3G4W8)
In this video from HiPEAC 2018 in Manchester, Dan Belov from DeepMind describes the company's machine learning technology and some of the challenges ahead. "DeepMind Inc. is well known for state of the art Deep Reinforcement Learning (DRL) algorithms such as DQN on Atari, A3C on DMLab and AlphaGo Zero. I would like to take you on a tour of challenges we encounter when training DRL agents on large workloads with hundreds of terabytes of data. I’ll talk about why DRL poses unique challenges when designing distributed systems and hardware as opposed to simple supervised learning. Finally I’d like to discuss opportunities for DRL to help systems design and operation." The post Video: Deep Reinforcement Learning and Systems Infrastructure at DeepMind appeared first on insideHPC.
|
by Rich Brueckner on (#3G4TV)
Today Adaptive Computing announced the release of Moab 9.1.2, an update which has undergone thousands of quality tests and includes scores of customer-requested enhancements. "Moab is a world leader in dynamically optimizing large-scale computing environments. It intelligently places and schedules workloads and adapts resources to optimize application performance, increase system utilization, and achieve organizational objectives. Moab's unique intelligent and predictive capabilities evaluate the impact of future orchestration decisions across diverse workload domains (HPC, HTC, Big Data, Grid Computing, SOA, Data Centers, Cloud Brokerage, Workload Management, Enterprise Automation, Workflow Management, Server Consolidation, and Cloud Bursting); thereby optimizing cost reduction and speeding product delivery." The post Adaptive Computing rolls out Moab HPC Suite 9.1.2 appeared first on insideHPC.
|
by staff on (#3G2JV)
Today, Intel announced the Intel SSD DC P4510 Series for data center applications. As a high performance storage device, the P4510 Series uses 64-layer TLC Intel 3D NAND to enable end users to do more per server, support broader workloads, and deliver space-efficient capacity. "The P4510 Series enables up to four times more terabytes per server and delivers up to 10 times better random read latency at 99.99 percent quality of service than previous generations. The drive can also deliver up to double the input-output operations per second (IOPS) per terabyte." The post Intel Rolls out new 3D NAND SSDs appeared first on insideHPC.
|
by Rich Brueckner on (#3G2ED)
The cHiPSet Annual Plenary Meeting takes place in France next month. To learn more, we caught up with the Vice-Chair for the project, Dr. Horacio González-Vélez, Associate Professor and Head of the Cloud Competency Centre at the National College of Ireland. "The plenary meeting will feature a workshop entitled "Accelerating Modeling and Simulation in the Data Deluge Era". We are expecting keynote presentations and panel discussions on how the forthcoming exascale systems will influence the analysis and interpretation of data, including the simulation of models, to match observation to theory." The post Interview: European cHiPSet Event focuses on High-Performance Modeling and Simulation for Big Data Applications appeared first on insideHPC.
|
by staff on (#3G25V)
In this TACC podcast, Suzanne Pierce from the Texas Advanced Computing Center describes her upcoming panel discussion on AI and water management and the work TACC is doing to support efforts to bridge advanced computing with Earth science. "It's about letting the AI help us be better decision makers. And it helps us move towards answering, discussing, and exploring the questions that are most important and most critical for our quality of life and our communities so that we can develop a future together that's brighter." The post TACC Podcast Looks at AI and Water Management appeared first on insideHPC.
|
by Rich Brueckner on (#3G1XB)
The 3rd International Workshop on In Situ Visualization has issued its Call for Papers. Held in conjunction with ISC 2018, WOIV 2018 takes place June 28 in Frankfurt, Germany. "Our goal is to appeal to a wide-ranging audience of visualization scientists, computational scientists, and simulation developers, who have to collaborate in order to develop, deploy, and maintain in situ visualization approaches on HPC infrastructures. We hope to provide practical take-away techniques and insights that serve as inspiration for attendees to implement or refine in their own HPC environments and to avoid pitfalls." The post Call for Papers: International Workshop on In Situ Visualization appeared first on insideHPC.
|
by Rich Brueckner on (#3FZPC)
Today the PASC18 conference announced that Alice-Agnes Gabriel from Ludwig-Maximilian-University of Munich will deliver a keynote address on earthquake simulation. "This talk will focus on using physics-based scenarios, modern numerical methods and hardware-specific optimizations to shed light on the dynamics, and severity, of earthquake behavior. It will present the largest-scale dynamic earthquake rupture simulation to date, which models the 2004 Sumatra-Andaman event - an unexpected subduction zone earthquake which generated a rupture of over 1,500 km in length within the ocean floor followed by a series of devastating tsunamis." The post PASC18 Keynote to Focus on Extreme-Scale Multi-Physics Earthquake Simulations appeared first on insideHPC.
|
by Rich Brueckner on (#3FZKP)
In this video, Information Technology Subcommittee Chairman Will Hurd begins a three-part hearing on Artificial Intelligence. "Over the next three months, the IT Subcommittee will hear from industry professionals such as Intel and NVIDIA as well as government stakeholders with the goal of working together to keep the United States the world leader in artificial intelligence technology." The post Video: Intel and NVIDIA at Congressional Hearing on Artificial Intelligence appeared first on insideHPC.
|
by staff on (#3FZGW)
In this special guest feature, SC18 Technical Program Chair David Keyes from KAUST writes that important changes are coming to the world's biggest HPC conference this November in Dallas. The post Updating the SC18 Technical Program to Inspire the Future appeared first on insideHPC.
|
by Rich Brueckner on (#3FZGY)
Ingrid Barcena from KU Leuven gave this talk at the HPC Knowledge Portal meeting in San Sebastián, Spain. "One of the biggest challenges when procuring High Performance Computing systems is to ensure not only that a faster machine than the previous one is bought, but that the new system is well suited to the organization's needs, fits within a limited budget, and proves value for money. However, this is not a simple task, and failing to buy the right HPC system can have tremendous consequences for an organization." The post Buying for Tomorrow: HPC Systems Procurement Matters appeared first on insideHPC.
|
by MichaelS on (#3FZAZ)
Using the Intel® Advisor Flow Graph Analyzer (FGA), applications such as those needed for autonomous driving can be developed and implemented on very high-performing underlying software and hardware. Underlying the Intel FGA are the Intel Threading Building Blocks, which take advantage of the multiple cores that are available on all types of systems today. The post Flow Graph Analyzer – Speed Up Your Applications appeared first on insideHPC.
|
by staff on (#3FWYM)
Researchers are using the Blue Waters supercomputer to create better tools for long-term crop prediction. "We built this new tool to bridge these two types of crop models, combining their strengths and eliminating the weaknesses. This work is an outstanding example of the convergence of simulation and data science that is a driving factor in the National Strategic Computing Initiative announced by the White House in 2015." The post Supercomputing Better Tools for Long-Term Crop Prediction appeared first on insideHPC.
|
by Rich Brueckner on (#3FWMR)
Todd Gamblin from LLNL gave this talk at FOSDEM'18. "This talk will introduce binary packaging in Spack and some of the open infrastructure we have planned for distributing packages. We'll talk about challenges to providing binaries for a combinatorially large package ecosystem, and what we're doing in Spack to address these problems. We'll also talk about challenges for implementing relocatable binaries with a multi-compiler system like Spack." The post Binary Packaging for HPC with Spack appeared first on insideHPC.
|
by staff on (#3FWHD)
MIT researchers have developed a special-purpose chip that increases the speed of neural-network computations by three to seven times over its predecessors, while reducing power consumption 94 to 95 percent. "The computation these algorithms do can be simplified to one specific operation, called the dot product. Our approach was, can we implement this dot-product functionality inside the memory so that you don’t need to transfer this data back and forth?" The post MIT helps move Neural Nets back to Analog appeared first on insideHPC.
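The dot-product reduction described in the quote is easy to see in code: each output of a fully connected layer is a single dot product of a weight row with the input vector, and that is the operation the chip moves into memory. A small sketch in plain Python (the weights and inputs below are invented for illustration):

```python
def dot(w, x):
    """The one operation the quote singles out: a dot product."""
    return sum(wi * xi for wi, xi in zip(w, x))

def dense_layer(weights, x):
    """A fully connected layer is just one dot product per neuron."""
    return [dot(w, x) for w in weights]

x = [1.0, 2.0, 3.0]            # input activations
weights = [[0.5, 0.5, 0.5],    # neuron 0's weight row (made up)
           [1.0, 0.0, -1.0]]   # neuron 1's weight row (made up)
print(dense_layer(weights, x))  # [3.0, -2.0]
```

Every call to `dot` is data that would otherwise shuttle between memory and the processor, which is why computing it in memory saves so much energy.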
|
by staff on (#3FWF5)
In this special guest feature, Robert Roe from Scientific Computing World explores efforts to diversify the HPC processor market. "With the arrival of Arm and now the reintroduction of AMD to HPC, there are signs of new life in an HPC processor market that has been dominated by Intel Xeon processors for a number of years." The post HPC Processor Competition Heats Up appeared first on insideHPC.
|
by Rich Brueckner on (#3FSXJ)
AI and Machine Learning have been called the Next Big Thing in High Performance Computing, but what kinds of results are your peers already getting right now? There is one way to find out – by taking our HPC & AI Survey. "We invite you to take our insideHPC Survey on the intersection of HPC & AI. In return, we'll send you a free report with the results and enter your name in a drawing to win one of two Echo Show devices with Alexa technology. The Echo Show is a voice-activated smart screen device that Amazon unveiled back in 2017." The post Take our AI & HPC Survey to Win an Amazon Echo Show Device appeared first on insideHPC.
|
by Rich Brueckner on (#3FSXK)
Rob Farber from TechEnablement gave this talk at the HPC Knowledge Portal 2017 meeting. "This talk will merge two state-of-the-art briefings: massive-scale and state-of-the-art algorithm mappings for both machine learning and unstructured data analytics, including how they are affected by current and forthcoming hardware and the technology trends at Intel, NVIDIA, IBM, ARM, and OpenPower that will affect algorithm developments." The post State-Of-The-Art Machine Learning Algorithms and Near-Term Technology Trends appeared first on insideHPC.
|
by staff on (#3FSMT)
Today the Gen-Z Consortium released the Gen-Z Core Specification 1.0 on its website. As an open systems interconnect, Gen-Z is designed to provide memory semantic access to data and devices via direct-attached, switched or fabric topologies. "The release of core specification 1.0 today is a significant step towards realization of new architectures and evolution of existing technologies to expand into new roles. Samsung is excited to be a member of the Gen-Z Consortium and is committed towards industry open standards." The post Gen-Z Consortium Announces the Public Release of Its Core Specification 1.0 appeared first on insideHPC.
|
by Rich Brueckner on (#3FSJ3)
Kenneth Hoste from Ghent University gave this talk at the EasyBuild User Meeting in Amsterdam. "EasyBuild is a software build and installation framework that allows you to manage (scientific) software on HPC systems in an efficient way. The EasyBuild User Meeting is an open and highly interactive event that provides a great opportunity to meet fellow EasyBuild enthusiasts, discuss related topics and learn about new aspects of the tool." The post EasyBuild: Past, Present & Future appeared first on insideHPC.
|
by staff on (#3FSJ5)
Whether the application is floating-point intensive, integer based, uses a lot of memory, has significant I/O requirements, or its widespread use is limited by purchased licenses, a system that assigns the right job to the right server is key to maximizing the computing infrastructure. We continue our insideHPC series of features exploring new resource management solutions for workload convergence, such as Bright Cluster Manager by Bright Computing. This article discusses how scheduling can work to optimize infrastructure and improve HPC system management. The post HPC System Management: Scheduling to Optimize Infrastructure appeared first on insideHPC.
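The idea of "assigning the right job to the right server" can be sketched with a toy best-fit scheduler. Everything here (server names, resource figures, and the best-fit policy itself) is invented for illustration; a production workload manager such as Bright Cluster Manager also weighs licenses, I/O, topology and priorities.

```python
# Toy best-fit scheduler: place each job on the feasible server that
# leaves the least slack, so large nodes stay free for large jobs.
# All names and numbers are hypothetical, for illustration only.

def best_fit(job, servers):
    """Pick the feasible server that covers the job's needs most tightly."""
    feasible = [s for s in servers
                if s["free_cores"] >= job["cores"]
                and s["free_mem_gb"] >= job["mem_gb"]]
    if not feasible:
        return None  # job must wait for resources
    return min(feasible, key=lambda s: (s["free_cores"] - job["cores"],
                                        s["free_mem_gb"] - job["mem_gb"]))

def schedule(jobs, servers):
    """Place jobs in order, charging each placement against free resources."""
    placement = {}
    for job in jobs:
        server = best_fit(job, servers)
        if server is not None:
            server["free_cores"] -= job["cores"]
            server["free_mem_gb"] -= job["mem_gb"]
            placement[job["name"]] = server["name"]
    return placement

servers = [{"name": "fat-node", "free_cores": 32, "free_mem_gb": 256},
           {"name": "thin-node", "free_cores": 8, "free_mem_gb": 32}]
jobs = [{"name": "mem-heavy", "cores": 4, "mem_gb": 128},
        {"name": "small", "cores": 4, "mem_gb": 8}]
plan = schedule(jobs, servers)
print(plan)  # {'mem-heavy': 'fat-node', 'small': 'thin-node'}
```

The memory-hungry job lands on the large-memory node while the small job goes to the thin node, leaving the fat node's remaining capacity available for other demanding work: the matching the article describes.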
|
by staff on (#3FQBZ)
Today the Texas Advanced Computing Center announced a partnership with the U.S. Department of Defense to provide researchers with access to advanced computing resources as part of an effort to develop novel computational approaches for complex manufacturing and design problems. "TRADES, which stands for TRAnsformative DESign, is a program within the DOD Defense Advanced Research Projects Agency (DARPA). The essence of the program is to synthesize components of complex mechanical platforms (e.g., ground vehicles, ships, or air and space craft), which leverage advanced materials and manufacturing methods such as direct digital manufacturing." The post TACC and DOD to Co-Develop Novel Computational Approaches appeared first on insideHPC.
|
by staff on (#3FQ32)
John Barrus writes that Cloud TPUs are available in beta on Google Cloud Platform to help machine learning experts train and run their ML models more quickly. "Cloud TPUs are a family of Google-designed hardware accelerators that are optimized to speed up and scale up specific ML workloads programmed with TensorFlow. Built with four custom ASICs, each Cloud TPU packs up to 180 teraflops of floating-point performance and 64 GB of high-bandwidth memory onto a single board." The post Google Cloud TPU Machine Learning Accelerators now in Beta appeared first on insideHPC.
|
by staff on (#3FQ01)
The GENESYS project applied Optalysys’s unique optical processing technology to perform large-scale DNA sequence alignment. "The collaboration with EI has been a great success," said Dr. Nick New, founder and CEO of Optalysys. "We have demonstrated the technology at several international conferences including Advances in Genome Biology and Technology, Plant and Animal Genome Conference and Genome 10K/Genome Science, to an overwhelmingly enthusiastic response. We are looking forward to continuing our strong relationship with EI through the beta program and beyond." The post Optalysys Optical Processing Achieves 90 percent energy savings for DNA Sequence Alignment appeared first on insideHPC.
|
by staff on (#3FPX2)
HPC managed services provider CPU 24/7 has been acquired by IAV GmbH, a leading engineering services firm for the automotive industry. "We are very happy we found such a good home for our company," says Dr. Matthias Reyer, Co-Founder at CPU 24/7. "By combining CPU 24/7’s strength in CAE and HPC applications with IAV’s leading automotive engineering services, the company is perfectly positioned to continue providing their clients with the best services available on the market today." The post CPU 24/7 Acquired by IAV GmbH for Automotive HPC Managed Services appeared first on insideHPC.
|
by Rich Brueckner on (#3FPSZ)
In this video, Mark Handley will explain what modern CPUs actually do to go fast, discuss how this leads to the Meltdown and Spectre vulnerabilities, and summarize the mitigations that are being put in place. "Operating systems and hypervisors need significant changes to how memory management is performed, CPU firmware needs updating, compilers are being modified to avoid risky instruction sequences, and browsers are being patched to prevent scripts having access to accurate time." The post Alan Turing Institute Looks at Meltdown and Spectre appeared first on insideHPC.
|
by Rich Brueckner on (#3FMT2)
The 47th International Conference on Parallel Processing has issued its Call for Proposals. Sponsored by ACM SIGHPC, the event takes place August 13-16 in Eugene, Oregon. "Parallel and distributed computing is a central topic in science, engineering and society. ICPP, the International Conference on Parallel Processing, provides a forum for engineers and scientists in academia, industry and government to present their latest research findings in all aspects of parallel and distributed computing." The post Call for Proposals: International Conference on Parallel Processing in Oregon appeared first on insideHPC.
|
by Rich Brueckner on (#3FMPZ)
Zvonimir Bandic from Western Digital gave this talk at the SNIA Persistent Memory Summit. "Much has been debated about what it would take to scale a system to exabyte main memory with the right levels of latencies to address the world’s growing and diverse data needs. This presentation will explore legacy distributed system architectures based on traditional CPU and peripheral attachment of persistent memory, scaled out through the use of RDMA networking." The post Realizing Exabyte-scale PM Centric Architectures and Memory Fabrics appeared first on insideHPC.
|
by Rich Brueckner on (#3FJY0)
In this video, Professor Derek Leinweber from the University of Adelaide presents his research in Lattice Quantum Field Theory, revealing the origin of mass in the universe. "While the fundamental interactions are well understood, elucidating the complex phenomena emerging from this quantum field theory is fascinating and often surprising. My explorations of QCD-vacuum structure featured in Professor Wilczek's 2004 Physics Nobel Prize Lecture. Our approach to discovering the properties of this key component of the Standard Model of the Universe favors fundamental first-principles numerical simulations of QCD on supercomputers. This field of study is commonly referred to as Lattice QCD." The post Supercomputing the Origin of Mass appeared first on insideHPC.
|
by Rich Brueckner on (#3FJWE)
The Supercomputing Frontiers conference has announced its full agenda and keynote speakers. The event takes place March 12-15 in Warsaw, Poland. "Supercomputing Frontiers is an annual international conference that provides a platform for thought leaders from both academia and industry to interact and discuss visionary ideas, important visionary trends and substantial innovations in supercomputing." The post Supercomputing Frontiers Europe Announces Keynote Speakers appeared first on insideHPC.
|
by staff on (#3FGKX)
PRACE is once again seeking nominations for the Ada Lovelace Award for HPC 2018. The award recognizes women in Europe who are breaking new barriers in science and engineering. The post Nominations open for PRACE Ada Lovelace Award for HPC 2018 appeared first on insideHPC.
|
by staff on (#3FGHB)
For the first time, scientists have used HPC to reconstruct the data collected by a nuclear physics experiment – an advance that could dramatically reduce the time it takes to make detailed data available for scientific discoveries. "By running multiple computing jobs simultaneously on the allotted supercomputing cores, the team transformed 4.73 petabytes of raw data into 2.45 petabytes of "physics-ready" data in a fraction of the time it would have taken using in-house high-throughput computing resources, even with a two-way transcontinental data journey." The post Reconstructing Nuclear Physics Experiments with Supercomputers appeared first on insideHPC.
|
by Rich Brueckner on (#3FGE3)
The MEMSYS 2018 International Symposium on Memory Systems has issued its Call for Papers. The conference takes place Oct. 1-4 in Washington, D.C. "The memory system has become extremely important. Memory is slow, and this is the primary reason that computers don’t run significantly faster than they do. In large-scale computer installations such as the building-sized systems powering Google.com, Amazon.com, and the financial sector, memory is often the largest dollar cost as well as the largest consumer of energy. Consequently, improvements in the memory system can have significant impact on the real world, improving power and energy, performance, and/or dollar cost." The post Call for Papers: MEMSYS 2018 appeared first on insideHPC.
|