Inside HPC & AI News | High-Performance Computing & Artificial Intelligence


Link https://insidehpc.com/
Feed http://insidehpc.com/feed/
Updated 2025-12-14 20:30
Univa Grid Engine Comes to AWS Marketplace
Bill Bryce writes that Univa Grid Engine is now available on the Amazon AWS Marketplace. A leading distributed resource management system, Univa Grid Engine powers enterprises in life sciences, oil and gas, and other sectors, which use it to manage workloads automatically and optimize shared resources for some of the largest clusters in the world, both on-premises and in the cloud. "It’s now easier than ever to spin up fully functional Univa Grid Engine (UGE) clusters on AWS Marketplace using a 1-click installation process. Whether you’re deploying clusters just for testing or running large-scale simulations, the automated installation process makes installing a cluster easier than ever." The post Univa Grid Engine Comes to AWS Marketplace appeared first on insideHPC.
Living Heart Project: Using HPC in the Cloud to Save Lives
Burak Yenier and Francisco Sahli gave this talk at the Stanford HPC Conference. "Cardiac arrhythmia can be a potentially lethal side effect of medications. Before a new drug reaches the market, pharmaceutical companies need to check for the risk of inducing arrhythmias. Currently, this process takes years and involves costly animal and human studies. In this project, the Living Matter Laboratory of Stanford University developed a new software tool enabling drug developers to quickly assess the viability of a new compound. During this session we will look at how High Performance Computing in the Cloud is being used to prevent severe side effects and save lives."The post Living Heart Project: Using HPC in the Cloud to Save Lives appeared first on insideHPC.
Call for Proposals: International Conference on Distributed and Event-Based Systems in New Zealand
The 12th ACM International Conference on Distributed and Event-Based Systems (DEBS 2018) in New Zealand has issued its Call for Proposals. The event takes place June 25-29 in Hamilton, New Zealand. "Over the past decade, the ACM International Conference on Distributed and Event‐Based Systems (DEBS) has become one of the leading venues for contributions in the fields of distributed and event‐based systems. The ACM DEBS conference provides a forum dedicated to the dissemination of original research, the discussion of practical insights, and the reporting of experiences relevant to distributed systems and event‐based computing. It brings together academia and industry to discuss innovative technology and exchange ideas." The post Call for Proposals: International Conference on Distributed and Event-Based Systems in New Zealand appeared first on insideHPC.
SC18 Papers Submissions Open Today with New Review Process
In this special guest feature, SC18 Papers Chair Torsten Hoefler from ETH Zurich writes about big changes to the conference papers program. It's timely news as SC18 Paper Submissions open today. "In the light of this year’s “HPC Inspires” theme, we are looking forward to working with the technical papers team to make SC18 the best technical program ever and consolidate the leading position of the SC Conference Series in the field of HPC."The post SC18 Papers Submissions Open Today with New Review Process appeared first on insideHPC.
Spectra Logic rolls out New Tape Library Offerings
Today Spectra Logic announced a pair of new tape library products. These products include the all-new Spectra Stack, a highly scalable, modular and affordable tape library that allows users to start with a single tape drive and 10 tape slots, growing incrementally as their data needs increase, and the new Spectra T950v Tape Library, an affordable, entry-level model of the popular high-end Spectra T950 Tape Library family.The post Spectra Logic rolls out New Tape Library Offerings appeared first on insideHPC.
Sharing High-Performance Interconnects Across Multiple Virtual Machines
Mohan Potheri from VMware gave this talk at the Stanford HPC Conference. "Virtualized devices offer maximum flexibility. This session introduces SR-IOV, explains how it is enabled in VMware vSphere, and provides details of specific use cases that are important for machine learning and high-performance computing. It includes performance comparisons that demonstrate the benefits of SR-IOV and information on how to configure and tune these configurations." The post Sharing High-Performance Interconnects Across Multiple Virtual Machines appeared first on insideHPC.
Performance Insights Using the Intel Advisor Python API
Tuning a complex application for today’s heterogeneous platforms requires an understanding of the application itself, as well as familiarity with tools that help pinpoint where in the code to look for bottlenecks. In general, optimizing an application's performance involves the following steps, which apply to a wide range of applications. The post Performance Insights Using the Intel Advisor Python API appeared first on insideHPC.
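Intel Advisor exposes its collected results through a bundled Python module named advisor (typically run with the advixe-python launcher). The sketch below is illustrative only, not taken from the article: it assumes a survey analysis has already been collected into a hypothetical project directory, and the exact attributes and result columns available differ between Advisor versions.

import advisor  # Intel Advisor's bundled Python module (run via advixe-python)

# Hypothetical project directory created by a previous "advixe-cl -collect survey" run.
project = advisor.open_project("./advisor_project")
data = project.load(advisor.SURVEY)

# Walk the bottom-up view of loops and call sites to see where time is spent;
# each row carries the survey columns for one loop or function.
for row in data.bottomup:
    print(row)

Printing the rows (or inspecting their available columns) is usually the quickest way to find which loops dominate runtime before drilling into vectorization or memory-access analyses.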
Liquid Cooling is Hot – CoolIT Systems Reports 60% Revenue Growth in 2017
Today CoolIT Systems reported company revenue for the fiscal year ended December 31, 2017 (FY2017) increased by 60%. Among the market segments driving growth in FY2017, Data Center saw the most significant escalation with sales up by 75% from the previous year. "Our focus on product quality and reliability continues to be a source of competitive strength. Second, through establishing close engineering partnerships with today’s leading server manufacturers and data center operators we help our customers efficiently manage the increasing heat loads of modern data center environments.”The post Liquid Cooling is Hot – CoolIT Systems Reports 60% Revenue Growth in 2017 appeared first on insideHPC.
Dr. Alex Zhavoronkov Joins Buck Institute to fight Aging with AI
“The Buck has been at the forefront of asking the most important questions in the field. Now, with the latest in bioinformatics and artificial intelligence, and with the involvement of world-class experts like Dr. Zhavoronkov, we will finally have the tools to answer them. Fully utilizing these powerful technologies, we will dramatically increase our understanding of how aging works, and what we can do about it.”The post Dr. Alex Zhavoronkov Joins Buck Institute to fight Aging with AI appeared first on insideHPC.
Alibaba Cloud launches ECS Baremetal Instances in Europe
Infrastructure and security will continue to be crucial as the foundation and fundamental consideration for enterprises undertaking cloud migration. In view of this, Alibaba Cloud will launch ECS Baremetal Instance, a new high-performance computing offering within ECS that combines the strengths of virtualized systems and bare metal servers. When connected as a supercomputer, ECS Baremetal Instances become a Super Computing Cluster that reduces network latency to the level of microseconds while offering elasticity and supercomputing capabilities. The post Alibaba Cloud launches ECS Baremetal Instances in Europe appeared first on insideHPC.
HPCG Benchmark offers an alternative way to rank Top Computers
“The LINPACK program used to represent a broad spectrum of the core computations that needed to be performed, but things have changed,” said Sandia researcher Mike Heroux, who created and developed the HPCG program. “The LINPACK program performs compute-rich algorithms on dense data structures to identify the theoretical maximum speed of a supercomputer. Today’s applications often use sparse data structures, and computations are leaner.” The post HPCG Benchmark offers an alternative way to rank Top Computers appeared first on insideHPC.
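To make the dense-versus-sparse contrast concrete, here is a small Python illustration (not from the article): a dense matrix-vector product of the kind that dominates LINPACK-style kernels next to a sparse matrix-vector product closer to the memory-bound pattern HPCG measures.

import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)

# Dense kernel: every entry participates, so performance tracks peak floating-point rate.
A_dense = rng.random((1000, 1000))
x = rng.random(1000)
y_dense = A_dense @ x

# Sparse kernel: only a handful of nonzeros per row (about 5 here), so performance is
# dominated by irregular memory access rather than arithmetic throughput.
n = 100_000
A_sparse = sparse.random(n, n, density=5.0 / n, format="csr", random_state=0)
y_sparse = A_sparse @ np.ones(n)

print(y_dense.shape, y_sparse.shape, A_sparse.nnz)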
Announcing the 2018 HPC-AI Competition in APAC
Today the HPC AI Advisory Council announced the 2018 APAC HPC-AI Competition. Co-sponsored by the National Supercomputing Centre in Singapore, the 2018 APAC HPC-AI Competition will start on March 27, 2018 and continue until August 2018. "The APAC HPC-AI competition encourages international teams in the APAC region to showcase their HPC and AI expertise in a friendly yet spirited competition that builds critical skills, professional relationships, competitive spirits and lifelong camaraderie. The competition is open to university and technical institute teams from the entire APAC region, and includes both creating missions and addressing challenges around AI development and testing, and high-performance computing workloads." The post Announcing the 2018 HPC-AI Competition in APAC appeared first on insideHPC.
NSF Sponsors EPiQC ‘expedition’ for Practical Quantum Computing
University of Chicago computer scientists will lead a $10 million “expedition” into the burgeoning field of quantum computing, bringing applications of the nascent technology for computer science, physics, chemistry, and other fields at least a decade closer to practical use. Called EPiQC, the $10 million NSF 'expedition' for practical quantum computing is designed to help scientists realize the potential of quantum computing more rapidly.The post NSF Sponsors EPiQC ‘expedition’ for Practical Quantum Computing appeared first on insideHPC.
New Intel Movidius AI Program Enables Developers to Go To Market
Today Intel unveiled “AI: In Production,” a new program that makes it easier for developers to bring their artificial intelligence prototypes to market. Since its introduction last July, the Intel Movidius Neural Compute Stick has gained a developer base in the tens of thousands. "Intel AI: In Production means we can expect many more innovative AI-centric products coming to market from the diverse and growing segment of technologies utilizing Intel technology for low-power inference at the edge,” said Remi El-Ouazzane, Intel vice president and general manager of Intel Movidius.The post New Intel Movidius AI Program Enables Developers to Go To Market appeared first on insideHPC.
Mid-career Women help build SCinet with WINS Apprenticeship at SC17
In this special guest feature, Alisa Alering from ScienceNode writes that a team of women engineers helped build SCinet as part of the WINS program at SC17. “A lot of these women come from small schools, and they may not normally get exposed to the technology SCinet provides,” says Meehl. “It’s a great experience technically, but they also get a great professional experience meeting people—that really brings value to your ability to do your job.”The post Mid-career Women help build SCinet with WINS Apprenticeship at SC17 appeared first on insideHPC.
The Mont-Blanc project: Updates from the Barcelona Supercomputing Center
Filippo Mantovani from BSC gave this talk at the GoingARM workshop at SC17. "Since 2011, Mont-Blanc has pushed the adoption of Arm technology in High Performance Computing, deploying Arm-based prototypes, enhancing system software ecosystem and projecting performance of current systems for developing new, more powerful and less power hungry HPC computing platforms based on Arm SoC. In this talk, Filippo introduces the last Mont-Blanc system, called Dibona, designed and integrated by the coordinator and industrial partner of the project, Bull/ATOS."The post The Mont-Blanc project: Updates from the Barcelona Supercomputing Center appeared first on insideHPC.
How Spectre and Meltdown Could Affect Future Processors
In this special guest feature from Scientific Computing World, Adrian Giordani reports on recent vulnerabilities found in many modern CPUs. "These problems are here for the long term until the next generation of silicon processors hit the market. In the end, one of the original teams that found these security vulnerabilities says it best on their website: 'As it is not easy to fix, it will haunt us for quite some time.'" The post How Spectre and Meltdown Could Affect Future Processors appeared first on insideHPC.
High Availability HPC: Microservice Architectures for Supercomputing
Ryan Quick from Providentia Worldwide gave this talk at the Stanford HPC Conference. "Microservices power cloud-native applications to scale thousands of times larger than single deployments. We introduce the notion of microservices for traditional HPC workloads. We will describe microservices generally, highlighting some of the more popular and large-scale applications. Then we examine similarities between large-scale cloud configurations and HPC environments. Finally we propose a microservice application for solving a traditional HPC problem, illustrating improved time-to-market and workload resiliency."The post High Availability HPC: Microservice Architectures for Supercomputing appeared first on insideHPC.
HPC Carpentry Learning Portal Offers an Intro to HPC
The good folks at HPC Carpentry have posted a new set of teaching materials designed to help new users take advantage of high-performance computing systems. No prior computational experience is required - these lessons are ideal for either an in-person workshop or independent study. "HPC Carpentry is not an organization - it is merely a set of publicly available teaching materials designed to make the task of teaching HPC a little easier. We welcome all contributions, in particular adaptations of our Intro to HPC lesson for other schedulers besides SLURM."The post HPC Carpentry Learning Portal Offers an Intro to HPC appeared first on insideHPC.
Video: Intel Ships Stratix 10 TX FPGAs for Multi-Terabit Network Infrastructure
Today Intel announced it has begun shipping its Intel Stratix 10 TX FPGAs, the industry’s only field programmable gate array (FPGA) with 58G PAM4 transceiver technology. "In this smart and connected world, billions of devices are creating massive amounts of data that need faster, flexible, and scalable connectivity solutions,” said Reynette Au, vice president of marketing, Intel Programmable Solutions Group. “With Stratix 10 TX FPGAs, Intel continues to provide architects with higher transceiver bandwidth and hardened IP to address the insatiable demand for faster and higher-density connectivity."The post Video: Intel Ships Stratix 10 TX FPGAs for Multi-Terabit Network Infrastructure appeared first on insideHPC.
SpaRC: Scalable Sequence Clustering using Apache Spark
Zhong Wang from the Genome Institute at LBNL gave this talk at the Stanford HPC Conference. "Whole genome shotgun based next generation transcriptomics and metagenomics studies often generate 100 to 1000 gigabytes (GB) sequence data derived from tens of thousands of different genes or microbial species. Here we describe an Apache Spark-based scalable sequence clustering application, SparkReadClust (SpaRC) that partitions reads based on their molecule of origin to enable downstream assembly optimization."The post SpaRC: Scalable Sequence Clustering using Apache Spark appeared first on insideHPC.
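The excerpt does not include SpaRC's implementation, but the general idea of partitioning reads by shared content maps naturally onto Spark's key-based operations. The toy PySpark sketch below (illustrative only, with made-up reads and an arbitrary k-mer length) groups reads that share a k-mer, so related reads land in the same cluster for downstream assembly.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("read-clustering-sketch").getOrCreate()
sc = spark.sparkContext

# Tiny made-up read set; a real run would load FASTA/FASTQ records from distributed storage.
reads = sc.parallelize([
    ("read1", "ACGTACGTAA"),
    ("read2", "CGTACGTAAT"),
    ("read3", "TTTTGGGGCC"),
])

K = 5  # illustrative k-mer length

def kmers(record):
    read_id, seq = record
    return [(seq[i:i + K], read_id) for i in range(len(seq) - K + 1)]

# Emit (k-mer, read_id) pairs, then group reads that share a k-mer.
clusters = (reads.flatMap(kmers)
                 .groupByKey()
                 .mapValues(lambda ids: sorted(set(ids)))
                 .filter(lambda kv: len(kv[1]) > 1))

print(clusters.take(5))
spark.stop()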
NREL Report Evaluates LiquidCool Solutions for the Datacenter
NREL researchers are testing immersive liquid cooling technologies that could potentially bring huge energy savings to HPC datacenters. With worldwide datacenters consuming an estimated 70 billion kWh per year, a disruptive energy-saving solution is needed, and a liquid-submerged server (LSS) technology from LiquidCool Solutions might be the answer. "The testing confirmed that the LSS technology could not only maintain target temperatures under heavy computational load, but that the hot liquid could be used to heat buildings more efficiently than NREL's current solution."The post NREL Report Evaluates LiquidCool Solutions for the Datacenter appeared first on insideHPC.
Video: HPC Computing Trends
Chris Willard from Intersect360 Research gave this talk at the Stanford HPC Conference. "Intersect360 Research returns with an annual deep dive into the trends, technologies and usage models that will be propelling the HPC community through 2018 and beyond. Emerging areas of focus and opportunities to expand will be explored along with insightful observations needed to support measurably positive decision making within your operations."The post Video: HPC Computing Trends appeared first on insideHPC.
State of Linux Containers
Christian Kniep from Docker Inc. gave this talk at the Stanford HPC Conference. "This talk will recap the history of and what constitutes Linux Containers, before laying out how the technology is employed by various engines and what problems these engines have to solve. Afterward, Christian will elaborate on why the advent of standards for images and runtimes moved the discussion from building and distributing containers to orchestrating containerized applications at scale."The post State of Linux Containers appeared first on insideHPC.
HDR 200G InfiniBand: Empowering Next Generation Data Centers
The need for faster data movement has never been more critical to the worlds of HPC and machine learning. In light of this demand, companies like Mellanox Technologies are working to introduce solutions to address the need for HPC and deep learning platforms to move and analyze data both in real time and at faster speeds than ever. Download the new white paper from Mellanox that explores the company’s end-to-end HDR 200G InfiniBand product portfolio and the benefits of in-network computing. The post HDR 200G InfiniBand: Empowering Next Generation Data Centers appeared first on insideHPC.
NVIDIA Powers 22.4 Petaflop HPC4 Supercomputer at Eni
Oil & Gas giant Eni of Italy has expanded the computing capacity of their Green Data Center with a massive GPU-powered system called HPC4. Built by HPE, the 22.4 Petaflop supercomputer is powered by 3,200 Tesla GPU accelerators. "Based in Ferrera Erbognone near Milan, HPC4 quadruples the company’s computational power and makes its HPC infrastructure the world’s most powerful industrial computing system today."The post NVIDIA Powers 22.4 Petaflop HPC4 Supercomputer at Eni appeared first on insideHPC.
Podcast: Open MPI for Exascale
In this Let’s Talk Exascale podcast, David Bernholdt from ORNL discusses the Open MPI for Exascale project, which focuses on the communication infrastructure of MPI, the Message Passing Interface, a widely used standard for interprocessor communication in parallel computing. "It’s possible that even small per-call performance improvements can have a significant overall impact on the application runtime, given that applications may make millions or billions of short calls to the MPI library during the course of an execution." The post Podcast: Open MPI for Exascale appeared first on insideHPC.
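The point about short, frequent MPI calls is easy to see in a toy benchmark. The mpi4py sketch below (illustrative only, not from the project) times a long sequence of small round-trip messages between two ranks; at this message size the cost is almost entirely per-call overhead and network latency, which is exactly where per-call improvements pay off at scale.

# Run with, for example: mpirun -n 2 python pingpong.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

N = 100_000  # number of short round trips
t0 = MPI.Wtime()
for i in range(N):
    if rank == 0:
        comm.send(i, dest=1, tag=0)
        comm.recv(source=1, tag=1)
    elif rank == 1:
        comm.recv(source=0, tag=0)
        comm.send(i, dest=0, tag=1)
t1 = MPI.Wtime()

if rank == 0:
    print("average round trip: %.2f microseconds" % ((t1 - t0) / N * 1e6))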
Altair to boost UNLV Supercomputing
Today UNLV announced a new Memorandum of Understanding (MOU) between the university and Altair Engineering. "Per the agreement, Altair’s technology will enable Cherry Creek II users to simplify access and utilization of the supercomputer’s capabilities and capacity. When deployed, PBS Works, Altair’s high-performance computing management suite, will securely manage all Cherry Creek II compute workload." The post Altair to boost UNLV Supercomputing appeared first on insideHPC.
Rigetti Computing Releases Forest 1.3 Quantum Software Platform
Rigetti Computing has released a new version of Forest, their quantum software platform. Forest 1.3 offers upgraded developer tools, improved stability, and faster execution. "Starting today, researchers using Forest will be upgraded to version 1.3, which provides better tools for optimizing and debugging quantum programs. The upgrade also provides greater stability in our quantum processor (QPU), which will let researchers run more powerful quantum programs. Forest is the easiest and most powerful way to build quantum applications today. We believe the combination of one of the most powerful gate-model quantum computers, cutting-edge classical hardware, and our unique hybrid classical/quantum architecture creates the clearest and shortest path toward the demonstration of unequivocal quantum advantage."The post Rigetti Computing Releases Forest 1.3 Quantum Software Platform appeared first on insideHPC.
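For a sense of what programs targeting Forest look like, here is a minimal sketch using pyQuil, the Python library in Rigetti's Forest SDK. It builds a two-qubit Bell-state program and runs it on the QVM simulator; the connection class shown matches the Forest 1.x-era API, it assumes a reachable QVM endpoint, and details differ in later releases.

from pyquil.quil import Program
from pyquil.gates import H, CNOT
from pyquil.api import QVMConnection

qvm = QVMConnection()                # assumes a reachable QVM endpoint (local or Forest-hosted)
program = Program(H(0), CNOT(0, 1))  # entangle qubits 0 and 1
results = qvm.run_and_measure(program, [0, 1], trials=10)
print(results)  # ten shots of correlated 00 / 11 outcomes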
D-Wave Completes Prototype of Next-Gen Quantum Processor
Today D-Wave Systems announced that the company has completed fabrication and testing of a working prototype next-generation processor, and the installation of a D-Wave 2000Q system for a customer. The prototype processor uses an advanced new architecture that will be the basis for D-Wave’s next-generation quantum processor. The D-Wave 2000Q system, the fourth generation of commercial products delivered by D-Wave, was installed at the Quantum Artificial Intelligence Lab run by Google, NASA, and Universities Space Research Association.The post D-Wave Completes Prototype of Next-Gen Quantum Processor appeared first on insideHPC.
Highest Performance and Scalability for HPC and AI
Scot Schultz from Mellanox gave this talk at the Stanford HPC Conference. "Today, many agree that the next wave of disruptive technology blurring the lines between the digital, physical and even the biological, will be the fourth industrial revolution of AI. The fusion of state-of-the-art computational capabilities, extensive automation and extreme connectivity is already affecting nearly every aspect of society, driving global economics and extending into every aspect of our daily life." The post Highest Performance and Scalability for HPC and AI appeared first on insideHPC.
Warm Water-Cooling enables a fanless design for new Lenovo ThinkSystem SD650
Today Lenovo unveiled the new ThinkSystem SD650 server with Direct Water Cooling for energy-efficient, high-density computing. Already deployed at the Leibniz Supercomputing Centre in Germany, the ThinkSystem SD650 will save customers up to 40% on energy costs while delivering a 10-15% performance improvement over air-cooled systems. "The system utilizes warm water instead of air to cool the components, including the CPUs and memory. Water conducts heat more efficiently, allowing customers to run their processors in “turbo” mode continuously, resulting in a performance improvement. The SD650 HPC servers have no system fans, operate at lower temperatures when compared to standard air-cooled systems and have negligible datacenter chilled water requirements. The result – lower datacenter power consumption of 30-40% compared to traditional cooling methods." The post Warm Water-Cooling enables a fanless design for new Lenovo ThinkSystem SD650 appeared first on insideHPC.
Lenovo ThinkSystem Servers Power 1.3 Petaflop Supercomputer at University of Southampton
OCF in the UK has deployed a new supercomputer at the University of Southampton. Named Iridis 5, the 1.3 Petaflop system will support research demanding traditional HPC as well as projects requiring large scale deep storage, big data analytics, web platforms for bioinformatics, and AI services. "We’ve had early access to Iridis 5 and it’s substantially bigger and faster than its previous iteration – it’s well ahead of any other in use at any University across the UK for the types of calculations we’re doing."The post Lenovo ThinkSystem Servers Power 1.3 Petaflop Supercomputer at University of Southampton appeared first on insideHPC.
Designing HPC, Deep Learning, and Cloud Middleware for Exascale Systems
DK Panda from Ohio State University gave this talk at the Stanford HPC Conference. "This talk will focus on challenges in designing HPC, Deep Learning, and HPC Cloud middleware for Exascale systems with millions of processors and accelerators. For the HPC domain, we will discuss the challenges in designing runtime environments for MPI+X (PGAS-OpenSHMEM/UPC/CAF/UPC++, OpenMP and Cuda) programming models by taking into account support for multi-core systems (KNL and OpenPower), high-performance networks, GPGPUs (including GPUDirect RDMA) and energy awareness." The post Designing HPC, Deep Learning, and Cloud Middleware for Exascale Systems appeared first on insideHPC.
Intel MKL Compact Matrix Functions Attain Significant Speedups
The latest version of the Intel® Math Kernel Library (MKL) offers vectorized compact functions for general and specialized computations on groups of small matrices. These functions rely on true SIMD (single instruction, multiple data) matrix computations and provide significant performance benefits compared to traditional techniques that exploit multithreading but rely on standard data formats. The post Intel MKL Compact Matrix Functions Attain Significant Speedups appeared first on insideHPC.
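The "compact" idea is essentially batching: lay out many small matrices together and process the whole group with one vectorized operation instead of calling a routine per matrix. The NumPy sketch below is an analogy only, not the MKL compact API, but it shows the same batched pattern and why it leaves far less overhead per matrix.

import numpy as np

rng = np.random.default_rng(0)
batch = rng.random((10_000, 8, 8))  # 10,000 small 8x8 matrices
vecs = rng.random((10_000, 8))

# One at a time: a Python-level loop issuing a tiny matrix-vector product per matrix.
slow = np.stack([m @ v for m, v in zip(batch, vecs)])

# Batched: a single einsum over the packed batch, letting the library vectorize across matrices.
fast = np.einsum("bij,bj->bi", batch, vecs)

assert np.allclose(slow, fast)
print(fast.shape)  # (10000, 8)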
Samsung Unveils 30.72TB Enterprise SSD
Today Samsung announced that it has begun mass producing the industry’s largest capacity SAS solid state drive – the PM1643 – for use in next-generation enterprise storage systems. Leveraging Samsung’s latest V-NAND technology with 64-layer, 3-bit 512-gigabit chips, the 30.72 terabyte drive delivers twice the capacity and performance of the previous 15.36TB high-capacity lineup introduced in March 2016. "With our launch of the 30.72TB SSD, we are once again shattering the enterprise storage capacity barrier, and in the process, opening up new horizons for ultra-high capacity storage systems worldwide,” said Jaesoo Han, executive vice president, Memory Sales & Marketing Team at Samsung Electronics. “Samsung will continue to move aggressively in meeting the shifting demand toward SSDs over 10TB and at the same time, accelerating adoption of our trail-blazing storage solutions in a new age of enterprise systems.”The post Samsung Unveils 30.72TB Enterprise SSD appeared first on insideHPC.
WekaIO: Making Machine Learning Compute-Bound Again
"We are going to present WekaIO, the lowest-latency, highest-throughput file system solution that scales to 100s of PB in a single namespace, supporting the most challenging deep learning projects that run today. We will present real-life benchmarks comparing WekaIO performance to a local SSD file system, showing that we are the only coherent shared storage that is even faster than the current caching solutions, while allowing customers to linearly scale performance by adding more GPU servers." The post WekaIO: Making Machine Learning Compute-Bound Again appeared first on insideHPC.
HPE wins $57 Million Supercomputing Contract for DoD Modernization Program
Today HPE announced it has been selected to provide new supercomputers for the DoD High Performance Computing Modernization Program (HPCMP) to accelerate the development and acquisition of advanced national security capabilities. “The DoD’s continuous investment in supercomputing innovation is a clear testament to this development and an important contribution to U.S. national security. HPE has been a strategic partner with the HPCMP for two decades, and we are proud that the DoD now significantly extends this partnership, acknowledging HPE’s sustained leadership in high performance computing.”The post HPE wins $57 Million Supercomputing Contract for DoD Modernization Program appeared first on insideHPC.
Video: The Sierra Supercomputer – Science and Technology on a Mission
Adam Bertsch from LLNL gave this talk at the Stanford HPC Conference. "Our next flagship HPC system at LLNL will be called Sierra. A collaboration between multiple government and industry partners, Sierra and its sister system Summit at ORNL, will pave the way towards Exascale computing architectures and predictive capability."The post Video: The Sierra Supercomputer – Science and Technology on a Mission appeared first on insideHPC.
Call for Participation: MSST Mass Storage Conference 2018
The 34th International Conference on Massive Storage Systems and Technologies (MSST 2018) has issued its Call for Participation. The event takes place May 14-16 in Santa Clara, California. "The conference invites you to share your research, ideas and solutions, as we continue to face challenges in the rapidly expanding need for massive, distributed storage solutions. Join us and learn about disruptive storage technologies and the challenges facing data centers, as the demand for massive amounts of data continues to increase. Join the discussion on webscale IT, and the demand on storage systems from IoT, healthcare, scientific research, and the continuing stream of smart applications (apps) for mobile devices."The post Call for Participation: MSST Mass Storage Conference 2018 appeared first on insideHPC.
Resource Management Across the Private/Public Cloud Divide
This is the final entry in an insideHPC series of features that explores new resource management solutions for workload convergence, such as Bright Cluster Manager by Bright Computing. This article highlights how resource management systems that can manage clusters on-premises or in the cloud greatly simplify cluster management. That way, different tools do not have to be learned for managing a cluster based on whether it is located in the company data center or in the cloud. The post Resource Management Across the Private/Public Cloud Divide appeared first on insideHPC.
Take the Exascale Resilience Survey from AllScale Europe
The European Horizon 2020 AllScale project has launched a survey on exascale resilience. "As we approach ExaScale, compute node failure will become commonplace. @AllScaleEurope wants to know how #HPC software developers view fault tolerance today, & how they plan to incorporate fault tolerance in their software in the ExaScale era."The post Take the Exascale Resilience Survey from AllScale Europe appeared first on insideHPC.
Supercomputing Graphene Applications in Nanoscale Electronics
Researchers at North Carolina State University are using the Blue Waters Supercomputer to explore graphene’s applications, including its use in nanoscale electronics and electrical DNA sequencing. "We’re looking at what’s beyond Moore’s law, whether one can devise very small transistors based on only one atomic layer, using new methods of making materials,” said Professor Jerry Bernholc, from North Carolina State University. “We are looking at potential transistor structures consisting of a single layer of graphene, etched into lines of nanoribbons, where the carbon atoms are arranged like a chicken wire pattern. We are looking at which structures will function well, at a few atoms of width.” The post Supercomputing Graphene Applications in Nanoscale Electronics appeared first on insideHPC.
Agenda Posted: OpenPOWER 2018 Summit in Las Vegas
The OpenPOWER Summit has posted its speaker agenda. Held in conjunction with IBM Think 2018, the event takes place March 19 in Las Vegas. "The OpenPOWER Foundation is an open technical community based on the POWER architecture, enabling collaborative development and opportunity for member differentiation and industry growth. The goal of the OpenPOWER Foundation is to create an open ecosystem, using the POWER Architecture to share expertise, investment, and server-class intellectual property to serve the evolving needs of customers and industry."The post Agenda Posted: OpenPOWER 2018 Summit in Las Vegas appeared first on insideHPC.
Video: Computing Challenges at the Large Hadron Collider
CERN's Maria Girone gave this talk at the HiPEAC 2018 conference in Manchester. "The Large Hadron Collider (LHC) is one of the largest and most complicated scientific apparatuses ever constructed. In this keynote, I will discuss the challenges of capturing, storing and processing the large volumes of data generated at CERN. I will also discuss how these challenges will evolve towards the High-Luminosity Large Hadron Collider (HL-LHC), the upgrade programme scheduled to begin taking data in 2026 and to run into the 2030s, generating some 30 times more data than the LHC has currently produced." The post Video: Computing Challenges at the Large Hadron Collider appeared first on insideHPC.
HACC: Fitting the Universe inside a Supercomputer
Nicholas Frontiere from the University of Chicago gave this talk at the DOE CSGF Program Review meeting. "In response to the plethora of data from current and future large-scale structure surveys of the universe, sophisticated simulations are required to obtain commensurate theoretical predictions. We have developed the Hardware/Hybrid Accelerated Cosmology Code (HACC), capable of sustained performance on powerful and architecturally diverse supercomputers to address this numerical challenge. We will investigate the numerical methods utilized to solve a problem that evolves trillions of particles, with a dynamic range of a million to one."The post HACC: Fitting the Universe inside a Supercomputer appeared first on insideHPC.
Job of the Week: HPC Systems Engineer at Washington State University
The Center for Institutional Research Computing at Washington State University is seeking a High-Performance Computing Systems Engineer in our Job of the Week. "This position will play a vital role in the engineering and administration of HPC clusters used by the research community at Washington State University. This position is an exciting opportunity to participate in the frontiers of research computing through the selection, configuration, and management of HPC infrastructure including all computing systems, networking, and storage. This position is key to ensuring the high quality of service and performance of WSU's research computing resources."The post Job of the Week: HPC Systems Engineer at Washington State University appeared first on insideHPC.
Video: Deep Reinforcement Learning and Systems Infrastructure at DeepMind
In this video from HiPEAC 2018 in Manchester, Dan Belov from DeepMind describes the company's machine learning technology and some of the challenges ahead. "DeepMind Inc. is well known for state of the art Deep Reinforcement Learning (DRL) algorithms such as DQN on Atari, A3C on DMLab and AlphaGo Zero. I would like to take you on a tour of challenges we encounter when training DRL agents on large workloads with hundreds of terabytes of data. I’ll talk about why DRL poses unique challenges when designing distributed systems and hardware as opposed to simple supervised learning. Finally I’d like to discuss opportunities for DRL to help systems design and operation." The post Video: Deep Reinforcement Learning and Systems Infrastructure at DeepMind appeared first on insideHPC.
Adaptive Computing rolls out Moab HPC Suite 9.1.2
Today Adaptive Computing announced the release of Moab 9.1.2, an update which has undergone thousands of quality tests and includes scores of customer-requested enhancements. "Moab is a world leader in dynamically optimizing large-scale computing environments. It intelligently places and schedules workloads and adapts resources to optimize application performance, increase system utilization, and achieve organizational objectives. Moab's unique intelligent and predictive capabilities evaluate the impact of future orchestration decisions across diverse workload domains (HPC, HTC, Big Data, Grid Computing, SOA, Data Centers, Cloud Brokerage, Workload Management, Enterprise Automation, Workflow Management, Server Consolidation, and Cloud Bursting); thereby optimizing cost reduction and speeding product delivery."The post Adaptive Computing rolls out Moab HPC Suite 9.1.2 appeared first on insideHPC.
Intel Rolls out new 3D NAND SSDs
Today, Intel announced the Intel SSD DC P4510 Series for data center applications. As a high performance storage device, the P4510 Series uses 64-layer TLC Intel 3D NAND to enable end users to do more per server, support broader workloads, and deliver space-efficient capacity. "The P4510 Series enables up to four times more terabytes per server and delivers up to 10 times better random read latency at 99.99 percent quality of service than previous generations. The drive can also deliver up to double the input-output operations per second (IOPS) per terabyte."The post Intel Rolls out new 3D NAND SSDs appeared first on insideHPC.