by staff on (#2D1C1)
"The IO infrastructure of TSUBAME3.0 combines fast in-node NVMe SSDs and a large, fast, Lustre-based system from DDN. The 15.9PB Lustre* parallel file system, composed of three of DDN’s high-end ES14KX storage appliances, is rated at a peak performance of 150GB/s. The TSUBAME collaboration represents an evolutionary branch of HPC that could well develop into the dominant HPC paradigm at about the time the most advanced supercomputing nations and consortia achieve Exascale computing."The post DDN and Lustre to Power TSUBAME3.0 Supercomputer appeared first on insideHPC.
|
by Rich Brueckner on (#2D1M8)
In this video, Dr Tim Stitt from the Earlham Institute describes why moving their HPC workload to Iceland made economic sense. Through the Verne Global datacenter, the Earlham Institute will have access to one of the world’s most reliable power grids, producing 100% geothermal and hydro-electric renewable energy. As EI’s HPC analysis requirements continue to grow, Verne Global will enable the institute to save up to 70% in energy costs (based on a drop from a 14p to a 4p per-kWh rate, with no additional power needed for cooling), significantly benefiting the organization in its advanced genomics and bioinformatics research of living systems. The post Earlham Institute Moves HPC Workloads to Iceland appeared first on insideHPC.
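That headline figure follows directly from the tariff difference: moving from 14p to 4p per kWh cuts the unit energy price by (14 − 4)/14 ≈ 71%, before counting the power no longer spent on cooling.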
|
by staff on (#2D1DY)
Today Dutch startup Asperitas rolled out Immersed Computing cooling technology for datacenters. "The company's first market-ready solution, the AIC24, is 'the first water-cooled oil-immersion system which relies on natural convection for circulation of the dielectric liquid.' This results in a fully self-contained and plug-and-play modular system. The AIC24 needs far less infrastructure than any other liquid installation, saving energy and costs on all levels of datacentre operations. The AIC24 is the most sustainable solution available for IT environments today. Ensuring the highest possible efficiency in availability, energy reduction and reuse, while increasing capacity. Greatly improving density, while saving energy at the same time." The post Asperitas Startup Brings Immersive Cooling to Datacenters appeared first on insideHPC.
|
by staff on (#2D15Z)
"TSUBAME3.0 is expected to deliver more than two times the performance of its predecessor, TSUBAME2.5," writes Marc Hamilton from Nvidia. "It will use Pascal-based Tesla P100 GPUs, which are nearly three times as efficient as their predecessors, to reach an expected 12.2 petaflops of double precision performance. That would rank it among the world’s 10 fastest systems according to the latest TOP500 list, released in November. TSUBAME3.0 will excel in AI computation, expected to deliver more than 47 PFLOPS of AI horsepower. When operated concurrently with TSUBAME2.5, it is expected to deliver 64.3 PFLOPS, making it Japan’s highest performing AI supercomputer."The post Pascal GPUs to Accelerate TSUBAME 3.0 Supercomputer at Tokyo Tech appeared first on insideHPC.
|
by Rich Brueckner on (#2D14E)
"Servers today have hundreds of knobs that can be tuned for performance and energy efficiency. While some of these knobs can have a dramatic effect on these metrics, manually tuning them is a tedious task. It is very labor intensive, it requires a lot of expertise, and the tuned settings are only relevant for the hardware and software that were used in the tuning process. In addition to that, manual tuning can't take advantage of application phases that may each require different settings. In this presentation, we will talk about the concept of dynamic tuning and its advantages. We will also demo how to improve performance using manual tuning as well as dynamic tuning using DatArcs Optimizer."The post Video: The Era of Self-Tuning Servers appeared first on insideHPC.
|
by staff on (#2CXMS)
"This breakthrough has unlocked new potential for ExxonMobil's geoscientists and engineers to make more informed and timely decisions on the development and management of oil and gas reservoirs," said Tom Schuessler, president of ExxonMobil Upstream Research Company. "As our industry looks for cost-effective and environmentally responsible ways to find and develop oil and gas fields, we rely on this type of technology to model the complex processes that govern the flow of oil, water and gas in various reservoirs."The post Exxon Mobil and NCSA Achieve New Levels of Scalability on complex Oil & Gas Reservoir Simulation Models appeared first on insideHPC.
|
by Rich Brueckner on (#2CXK0)
SC17 has issued its Call for Panel Sessions. The conference takes place Nov. 12-17 in Denver. "As in past years, panels at SC17 will be some of the most heavily attended events of the Conference. Panels will bring together the key thinkers and producers in the field to consider in a lively and rapid-fire context some of the key questions challenging high performance computing, networking, storage and associated analysis technologies for the foreseeable future."The post Call for Panels: SC17 in Denver appeared first on insideHPC.
|
by Rich Brueckner on (#2CXEH)
"In recent years, major breakthroughs were achieved in different fields using deep learning. From image segmentation, speech recognition or self-driving cars, deep learning is everywhere. Performance of image classification, segmentation, localization have reached levels not seen before thanks to GPUs and large scale GPU-based deployments, leading deep learning to be a first class HPC workload."The post Deep Learning & HPC: New Challenges for Large Scale Computing appeared first on insideHPC.
|
by Rich Brueckner on (#2CXC7)
Today ISC 2017 announced that its Distinguished Talk series will focus on Data Analytics in manufacturing and scientific applications. One of the Distinguished Talks will be given by Dr. Sabine Jeschke from the Cybernetics Lab at RWTH Aachen University on the topic of "Robots in Crowds – Robots and Clouds." Jeschke’s presentation will be followed by one from physicist Kerstin Tackmann, from the German Electron Synchrotron (DESY) research center, who will discuss big data and machine learning techniques used for the ATLAS experiment at the Large Hadron Collider. The post ISC 2017 Distinguished Talks to Focus on Data Analytics in Manufacturing & Science appeared first on insideHPC.
|
by MichaelS on (#2CX6J)
"As with all new technology, developers will have to create processes in order to modernize applications to take advantage of any new feature. Rather than randomly trying to improve the performance of an application, it is wise to be very familiar with the application and use available tools to understand bottlenecks and look for areas of improvement."The post Six Steps Towards Better Performance on Intel Xeon Phi appeared first on insideHPC.
|
by staff on (#2CTS8)
"Machine Learning and deep learning represent new frontiers in analytics. These technologies will be foundational to automating insight at the scale of the world’s critical systems and cloud services,†said Rob Thomas, General Manager, IBM Analytics. “IBM Machine Learning was designed leveraging our core Watson technologies to accelerate the adoption of machine learning where the majority of corporate data resides. As clients see business returns on private cloud, they will expand for hybrid and public cloud implementations.â€The post IBM Machine Learning Platform Comes to the Private Cloud appeared first on insideHPC.
|
by Rich Brueckner on (#2CSMG)
"In this guide, we take a high-level view of AI and deep learning in terms of how it’s being used and what technological advances have made it possible. We also explain the difference between AI, machine learning and deep learning, and examine the intersection of AI and HPC. We also present the results of a recent insideBIGDATA survey to see how well these new technologies are being received. Finally, we take a look at a number of high-profile use case examples showing the effective use of AI in a variety of problem domains."The post Defining AI, Machine Learning, and Deep Learning appeared first on insideHPC.
|
by Rich Brueckner on (#2CSH9)
"Coursera has named Intel as one of its first corporate content partners. Together, Coursera and Intel will develop and distribute courses to democratize access to artificial intelligence and machine learning. In this interview, Ibrahim talks about her and Coursera's history, reports on Coursera's progress delivering education at massive scale, and discusses Coursera and Intel's unique partnership for AI."The post Podcast: Democratizing Education for the Next Wave of AI appeared first on insideHPC.
|
by staff on (#2CSCS)
The OpenFog Consortium was founded over one year ago to accelerate adoption of fog computing through an open, interoperable architecture. The newly published OpenFog Reference Architecture is a high-level framework that will lead to industry standards for fog computing. The OpenFog Consortium is collaborating with standards development organizations such as IEEE to generate rigorous user, functional and architectural requirements, plus detailed application program interfaces (APIs) and performance metrics to guide the implementation of interoperable designs.The post OpenFog Consortium Publishes Reference Architecture appeared first on insideHPC.
|
by Rich Brueckner on (#2CS65)
Jeffrey Welser from IBM Research Almaden presented this talk at the Stanford HPC Conference. "Whether exploring new technical capabilities, collaborating on ethical practices or applying Watson technology to cancer research, financial decision-making, oil exploration or educational toys, IBM Research is shaping the future of AI."The post Video: Computing of the Future appeared first on insideHPC.
|
by Rich Brueckner on (#2CNR6)
Purdue University is seeking a Senior HPC Systems Administrator in our Job of the Week. "In this role, you will assist world renowned researchers in advancing science. Additionally, as Senior HPC Systems Administrator, you will be responsible for large sections of Purdue's innovative computational research environment and help set direction of future research systems. This role requires an individual to work closely with researchers, systems administrators, and developers throughout the University and partner institutions to develop large-impact projects and computational systems."The post Job of the Week: Senior HPC Systems Administrator at Purdue appeared first on insideHPC.
|
by staff on (#2CNG4)
Today Cycle Computing announced that the HyperXite team is using CycleCloud software to manage Hyperloop simulations using ANSYS Fluent on the Azure Cloud. "Our mission is to optimize and economize the transportation of the future, and Cycle Computing has made that endeavor so much easier," said Nima Mohseni, Simulation Lead, HyperXite. "We absolutely require a solution that can compress and condense our timeline while providing the powerful computational results we require. Thank you to Cycle Computing for making a significant difference in our ability to complete our work." The post Supercomputing the Hyperloop on Azure appeared first on insideHPC.
|
by Rich Brueckner on (#2CNA3)
Francis Lam from Huawei presented this talk at the Stanford HPC Conference. "High performance computing is rapidly finding new uses in many applications and businesses, enabling the creation of disruptive products and services. Huawei, a global leader in information and communication technologies, brings a broad spectrum of innovative solutions to HPC. This talk examines Huawei's world class HPC solutions and explores creative new ways to solve HPC problems." The post Huawei: A Fresh Look at High Performance Computing appeared first on insideHPC.
|
by Rich Brueckner on (#2CN7Z)
Over at TACC, Faith Singer-Villalobos writes that researchers are using the Rustler supercomputer to tackle Big Data from self-driving connected vehicles (CVs). "The volume and complexity of CV data are tremendous and present a big data challenge for the transportation research community," said Natalia Ruiz-Juri, a research associate with The University of Texas at Austin's Center for Transportation Research. While there is uncertainty in the characteristics of the data that will eventually be available, the ability to efficiently explore existing datasets is paramount.The post Supercomputing Transportation System Data using TACC’s Rustler appeared first on insideHPC.
|
by staff on (#2CMYM)
High-performance computing (HPC) tools are helping financial firms survive and thrive in this highly demanding and data-intensive industry. As financial models grow in complexity and greater amounts of data must be processed and analyzed on a daily basis, firms are increasingly turning to HPC solutions to exploit the latest technology performance improvements. Suresh Aswani, Senior Manager, Solutions Marketing, at Hewlett Packard Enterprise, shares how to overcome the learning curve of new processor architectures.The post Overcoming the Learning Curve of New Processor Architectures appeared first on insideHPC.
|
by Rich Brueckner on (#2CHHG)
In his keynote, Mr. Geist will discuss the need for future Department of Energy supercomputers to solve emerging data science and machine learning problems in addition to running traditional modeling and simulation applications. In August 2016, the Exascale Computing Project (ECP) was approved to support a huge lift in the trajectory of U.S. High Performance Computing (HPC). The ECP goals are intended to enable the delivery of capable exascale computers in 2022 and one early exascale system in 2021, which will foster a rich exascale ecosystem and work toward ensuring continued U.S. leadership in HPC. He will also share how the ECP plans to achieve these goals and the potential positive impacts for OFA.The post ORNL’s Al Geist to Keynote OpenFabrics Workshop in Austin appeared first on insideHPC.
|
by staff on (#2CHBY)
Today Mellanox announced line-rate crypto throughput using the company’s Innova IPsec Network Adapter, demonstrating more than three times higher throughput and more than four times better CPU utilization when compared to x86 software-based server offerings. Mellanox’s Innova IPsec adapter provides seamless crypto capabilities and advanced network accelerations to modern data centers, thereby enabling the ubiquitous use of encryption across the network while sustaining unmatched performance, scalability and efficiency. By replacing software-based offerings, Innova can reduce data center expenses by 60 percent or more. The post Mellanox Demos 4X Improvement in Crypto Performance with 40G Ethernet Network Adapter appeared first on insideHPC.
|
by Rich Brueckner on (#2CH78)
DK Panda from Ohio State University presented this deck at the 2017 HPC Advisory Council Stanford Conference. "This talk will focus on challenges in designing runtime environments for exascale systems with millions of processors and accelerators to support various programming models. We will focus on MPI, PGAS (OpenSHMEM, CAF, UPC and UPC++) and Hybrid MPI+PGAS programming models by taking into account support for multi-core, high-performance networks, accelerators (GPGPUs and Intel MIC), virtualization technologies (KVM, Docker, and Singularity), and energy-awareness. Features and sample performance numbers from the MVAPICH2 libraries will be presented."The post Designing HPC & Deep Learning Middleware for Exascale Systems appeared first on insideHPC.
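For readers unfamiliar with the programming models named above, here is a minimal, hedged sketch of the MPI model using mpi4py, which runs on top of whichever MPI library is loaded (MVAPICH2 being one such implementation). It is illustrative only and not taken from the slides.

```python
# Minimal MPI example via mpi4py: every rank contributes a value and rank 0
# receives the sum. Launch with, e.g., "mpirun -np 4 python reduce_demo.py"
# (the file name is just a placeholder).
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

local_value = rank * rank                      # each rank's local contribution
total = comm.reduce(local_value, op=MPI.SUM, root=0)

if rank == 0:
    print(f"sum of rank^2 over {size} ranks = {total}")
```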
|
by Rich Brueckner on (#2CH43)
In this podcast, the Radio Free HPC team discusses a recent presentation by John Gustafson on Next Generation Computer Arithmetic. "A new data type called a 'posit' is designed for direct drop-in replacement for IEEE Standard 754 floats. Unlike unum arithmetic, posits do not require interval-type mathematics or variable size operands, and they round if an answer is inexact, much the way floats do. However, they provide compelling advantages over floats, including simpler hardware implementation that scales from as few as two-bit operands to thousands of bits." The post Radio Free HPC Looks at the Posit and Next Generation Computer Arithmetic appeared first on insideHPC.
|
by Rich Brueckner on (#2CE2M)
The PEARC17 Conference has issued its Call for Participation. Formerly known as the Extreme Science and Engineering Discovery Environment (XSEDE) annual conference, PEARC17 will take place July 9-13 in New Orleans. "The Technical Program for the PEARC17 includes four Paper tracks, Tutorials, Posters, a Visualization Showcase and Birds of a Feather (BoF) sessions. All submissions should emphasize experiences and lessons derived from operation and use of advanced research computing on campuses or provided for the academic and open science communities. Submissions aligned with the conference theme—Sustainability, Success, and Impact—are particularly encouraged."The post Call for Participation: PEARC17 in New Orleans appeared first on insideHPC.
|
by Rich Brueckner on (#2CDZQ)
"Linux Containers gain more and more momentum in all IT ecosystems. This talk provides an overview about what happened in the container landscape (in particular Docker) during the course of the last year and how it impacts datacenter operations, HPC and High-Performance Big Data. Furthermore Christian will give an update/extend on the ‘things to explore’ list he presented in the last Lugano workshop, applying what he learned and came across during the year 2016."The post Video: State of Linux Containers appeared first on insideHPC.
|
by Rich Brueckner on (#2CASB)
Today Intel announced the open-source BigDL, a Distributed Deep Learning Library for the Apache Spark* open-source cluster-computing framework. "BigDL is an open-source project, and we encourage all developers to connect with us on the BigDL Github, sample the code and contribute to the project," said Doug Fisher, senior vice president and general manager of the Software and Services Group at Intel. The post Intel Rolls Out BigDL Deep Learning Library for Apache Spark appeared first on insideHPC.
|
by Rich Brueckner on (#2CAQJ)
"The University of Oregon (UO) High Performance Computing Research Core Facility (HPCRCF) seeks experienced applicants for the position of Research Systems and Application Administrator. The HPCRCF is a new facility located on the campus of the UO in Eugene, Oregon. The mission of the HPCRCF is to support computational research at the UO and collaborating institutions, and is home to a new flagship research cluster."The post Job of the Week: Research Systems and Application Administrator at University of Oregon appeared first on insideHPC.
|
by staff on (#2C7X5)
Leaders in hybrid accelerated HPC in the United States, Japan, and Switzerland have signed a memorandum of understanding establishing an international institute dedicated to common goals, the sharing of HPC expertise, and forward-thinking evaluation of computing architecture. "Forecasting the future of leadership-class computing and managing the risk of architectural change is a shared interest among ORNL, Tokyo Tech, and ETH Zurich," said Jeff Nichols, associate laboratory director of computing and computational sciences at ORNL. "What unites our three organizations is a willingness to embrace change, actively partner with HPC vendors, and devise solutions that advance the work of our scientific users. ADAC provides a framework for member organizations to pursue mutual interests such as accelerated node architectures as computing moves toward the exascale era and beyond." The post Global HPC Centers Form Accelerated Computing Institute appeared first on insideHPC.
|
by staff on (#2C79X)
"We are very excited to be working closely with Bright Computing to bring its supercomputing software tools to the embedded Aerospace & Defense market as part of our OpenHPEC Accelerator Suite software development toolset,†said Lynn Bamford, Senior Vice President and General Manager, Defense Solutions division. “Together, we are providing HPEC system integrators with proven and robust development tools from the Commercial HPC market to speed and ease the design of COTS-based highly scalable supercomputer-class solutions.â€The post Curtiss-Wright Defense Solutions Group Teams with Bright Computing appeared first on insideHPC.
|
by Rich Brueckner on (#2C768)
Shahin Khan from OrionX presented this talk at the Stanford HPC Conference. "From BitCoins and AltCoins to Design Thinking, Autonomous tech and the changing nature of jobs, IoT and cyber risk, and the impact of application architecture on cloud computing, we’ll touch on some of the hottest technologies in 2017 that are changing the world and how HPC will be the engine that drives it."The post Shahin Khan Presents: Hot Technology Topics in 2017 appeared first on insideHPC.
|
by staff on (#2C729)
Industry veterans Jason Coposky and Terrell Russell have taken lead roles at the membership-based foundation that leads development and support of the integrated Rule-Oriented Data System (iRODS). "With data becoming the currency of the knowledge economy, now is an exciting time to be involved with developing and sustaining a world-class data management platform like iRODS," said Coposky. "Our consortium membership is growing, and our increasing ability to integrate with commonly used hardware and software is translating into new users and an even more robust product." The post Coposky and Russell Tapped to Lead iRODS Consortium appeared first on insideHPC.
|
by Rich Brueckner on (#2C6WK)
Frank Ham from Cascade Technologies presented this talk at the Stanford HPC Conference. "A spin-off of the Center for Turbulence Research at Stanford University, Cascade Technologies grew out of a need to bridge the gap between fundamental research at institutions like Stanford University and its application in industry. In a continual push to improve the operability and performance of combustion devices, high-fidelity simulation methods for turbulent combustion are emerging as critical elements in the design process. Multiphysics-based methodologies can accurately predict mixing, study flame structure and stability, and even predict product and pollutant concentrations at design and off-design conditions." The post Best Practices – Large Scale Multiphysics appeared first on insideHPC.
|
by Rich Brueckner on (#2C389)
"This tutorial will present several features that the draft Fortran 2015 standard introduces to meet challenges that are expected to dominate massively parallel programming in the coming exascale era. The expected exascale challenges include higher hardware- and software-failure rates, increasing hardware heterogeneity, a proliferation of execution units, and deeper memory hierarchies."The post Tutorial: Towards Exascale Computing with Fortran 2015 appeared first on insideHPC.
|
by staff on (#2C35Y)
Today Allinea Software launched the first update to its well-established toolset for debugging, profiling and optimizing high performance code since being acquired by ARM in December 2016. "The V7.0 release provides new integrations for the Allinea Forge debugger and profiler and Allinea Performance Reports and will mean more efficient code development and optimization for users, especially those wishing to take software performance to new levels across Xeon Phi, CUDA and IBM Power platforms," said Mark O’Connor, ARM Director, Product Management HPC tools. The post Allinea Updates Code Optimization Tools Across Platforms appeared first on insideHPC.
|
by Rich Brueckner on (#2C33D)
"China and the United States have been in the race to develop the most capable supercomputer. China has announced that its exascale computer could be released sooner than originally planned. Steve Conway, VP for high performance computing at IDC, joins Federal Drive with Tom Temin for analysis." The post Podcast: IDC’s Steve Conway on China’s New Plan for Exascale appeared first on insideHPC.
|
by Rich Brueckner on (#2C2ZC)
In this video from KAUST Live, Patricia Damkroger discusses her new role as Vice President, Data Center Group and General Manager, Technical Computing Initiative, Enterprise and Government at Intel. "As the former Associate Director for Computation at Lawrence Livermore National Laboratory (LLNL), Trish Damkroger led the 1,000-employee workforce behind the Laboratory’s high performance computing efforts. She is a longtime committee member and one-time general chair of the SC conference. Most recently, Damkroger was the SC16 Diverse HPC Workforce Chair." The post Video: Trish Damkroger on her New Mission at Intel appeared first on insideHPC.
|
by staff on (#2C2VQ)
"U.S. Patent 9,496,200 protects the invention of utilizing a modular, building block approach for datacenter cooling with direct contact liquid cooling," said Geoff Lyon, CEO of CoolIT Systems. "CoolIT’s commitment to developing and patenting unique solutions provides our customers with the assured competitive advantage they are looking for. The 60 patent milestone adds confirmation to CoolIT’s leadership in developing innovative liquid cooling solutions for modern data centers." The post CoolIT Systems Issued U.S. Patent for Modular Heat-Transfer Solutions appeared first on insideHPC.
|
by Richard Friedman on (#2C2SV)
"By implementing popular Python packages such as NumPy, SciPy, scikit-learn, to call the Intel Math Kernel Library (Intel MKL) and the Intel Data Analytics Acceleration Library (Intel DAAL), Python applications are automatically optimized to take advantage of the latest architectures. These libraries have also been optimized for multithreading through calls to the Intel Threading Building Blocks (Intel TBB) library. This means that existing Python applications will perform significantly better merely by switching to the Intel distribution."The post Intel Releases Optimized Python for HPC appeared first on insideHPC.
|
by staff on (#2BZ9C)
Today Silicon Valley startup Tachyum Inc. launched, announcing its mission to conquer the performance plateau in nanometer-class chips and the systems they power. "We have entered a post-Moore’s Law era where performance hit a plateau, cost reduction slowed dramatically, and process node shrinks and CPU release cycles are getting longer," said Danilak, Tachyum CEO. "An innovative new approach, from first principles, is the only realistic chance we have of achieving performance improvements to rival those that powered the tech industry of past decades, and the opportunity is a hundred times greater than any venture I’ve been involved in." The post Tachyum Startup Looks to Break Performance and Cost Barriers to Intelligent Information Processing appeared first on insideHPC.
|
by Rich Brueckner on (#2BZ6Z)
Gilad Shainer moderated this panel discussion on Exascale Computing at the Stanford HPC Conference. "The creation of a capable exascale ecosystem will have profound effects on the lives of Americans, improving our nation’s national security, economic competitiveness, and scientific capabilities. The exponential increase of computation power enabled with exascale will fuel a vast range of breakthroughs and accelerate discoveries in national security, medicine, earth sciences and many other fields."The post Panel Discussion: The Exascale Endeavor appeared first on insideHPC.
|
by staff on (#2BZ4M)
Today the Active Archive Alliance announced that Oak Ridge National Laboratory (ORNL) has upgraded its active archive solutions to enhance the integrity and accessibility of its vast amount of data. The new solutions allow ORNL to meet its increasing data demands and enable fast file recall for its users. "These active archive upgrades were crucial to ensuring our users’ data is both accessible and fault-tolerant so they can continue performing high-priority research at our facilities," said Jack Wells, director of science for the National Center for Computational Sciences at ORNL. "Our storage-intensive users have been very pleased with our new data storage capabilities." The post Oak Ridge steps up to Active Archive Solutions appeared first on insideHPC.
|
by Rich Brueckner on (#2BYRC)
"High Performance Computing (HPC) is considered the unlimited class of computing where performance is all that matters. Increasingly, business enterprises are looking to apply the technology and techniques from HPC to help them solve their complex business challenges. Weka.IO’s CTO, Liran Zvibel, will discuss how affordable HPC class storage performance and scale can be achieved using Flash technology and a hardware independent software architecture."The post Startup Video: Architecting Flash for Scale and Performance in HPC appeared first on insideHPC.
|
by Rich Brueckner on (#2BYHD)
"Deploying DDN’s end-to-end storage solution has allowed us to elevate the standard of protection, increase compliance and push the boundaries of science on a single, highly scalable storage platform," said Ramjan. "We’ve also saved hundreds of thousands of dollars by centralizing the storage of our data-intensive research and a dozen data-hungry scientific instruments on DDN. With all these advantages it is easy to see why DDN is core to our operation and a major asset to our scientists." The post DDN Drives Discoveries at Van Andel Research Institute appeared first on insideHPC.
|
by Rich Brueckner on (#2BW16)
"Explore how Singularity liberates non-privileged users and host resources (such as interconnects, resource managers, file systems, accelerators ...) allowing users to take full control to set-up and run in their native environments. This talk explores Singularity how it combines software packaging models with minimalistic containers to create very lightweight application bundles which can be simply executed and contained completely within their environment or be used to interact directly with the host file systems at native speeds. A Singularity application bundle can be as simple as containing a single binary application or as complicated as containing an entire workflow and is as flexible as you will need."The post Video: Singularity – Containers for Science, Reproducibility, and HPC appeared first on insideHPC.
|
by Rich Brueckner on (#2BV34)
Today ISC 2017 announced that its Tuesday keynote will be delivered by Dr. Peter Bauer from the European Centre for Medium-Range Weather Forecasts (ECMWF). As Deputy Director of the Research Department at ECMWF, Dr. Bauer will discuss the computing and data challenges, as well as the current avenues the weather and climate prediction community is taking in preparing for the new computing era. The post Dr. Peter Bauer from ECMWF to Keynote ISC 2017 appeared first on insideHPC.
|
by Rich Brueckner on (#2BTVC)
Team applications are now being accepted for the SC17 Student Cluster Competition. "SC17 is excited to hold another nail-biting Student Cluster Competition, or SCC, now in its eleventh year, as an opportunity to showcase student expertise in a friendly yet spirited competition. Held as part of SC17’s Students@SC, the Student Cluster Competition is designed to introduce the next generation of students to the high-performance computing community."The post Apply Now for the SC17 Student Cluster Competition appeared first on insideHPC.
|
by staff on (#2BT9A)
Kathy Yelick, the Associate Lab Director for Computing Sciences at LBNL, has been named to the Alameda County Women’s Hall of Fame for her leadership in science, technology and engineering. Twelve women, each representing a different field, were named as 2017 inductees. According to the organization's announcement, Yelick is being recognized as "an international leader in computational sciences and a leading force in applying high performance computing to efforts to develop alternative energy sources and combat climate change. She is an advocate for diversity in computer science education and the use of computing to solve societal challenges." The post Kathy Yelick Joins Alameda County Women’s Hall of Fame appeared first on insideHPC.
|
by Rich Brueckner on (#2BSNZ)
The WHOI Information Services Department is looking for a Senior Systems Administrator to join their team. This is a regular, full-time position, and is eligible for benefits. The Senior Systems Administrator works within the Information Services Department supporting WHOI’s Linux servers, data storage, and HPC clusters.The post Job of the Week: Senior System Administrator for HPC at WHOI Institute appeared first on insideHPC.
|
by Rich Brueckner on (#2BPM1)
The European PRACE initiative has published a new Best Practice Guide for Intel Xeon Phi, Knights Landing Edition. "This best practice guide provides information about Intel’s MIC architecture and programming models for the Intel Xeon Phi co-processor in order to enable programmers to achieve good performance of their applications. The guide covers a wide range of topics, from a description of the Intel Xeon Phi co-processor hardware and the basic programming models, through porting programs, up to tools and strategies for analyzing and improving the performance of applications." The post PRACE Posts Best Practice Guide for Intel Xeon Phi appeared first on insideHPC.
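One of the simplest strategies in the guide's topic list, checking how a kernel scales with thread count on a many-core processor, can be scripted in a few lines. The sketch below is a hedged illustration that re-runs a hypothetical timing script, "kernel.py", under several OMP_NUM_THREADS settings (an environment variable honoured by MKL and OpenBLAS) and prints the results.

```python
# Simple thread-scaling sweep: run the same workload under different thread
# counts and compare timings. "kernel.py" is a placeholder for any script that
# times one kernel and prints a result line.
import os
import subprocess

for threads in (1, 2, 4, 8, 16, 32, 64):
    env = dict(os.environ, OMP_NUM_THREADS=str(threads))
    out = subprocess.run(
        ["python", "kernel.py"], env=env, capture_output=True, text=True, check=True
    )
    print(f"{threads:>3} threads -> {out.stdout.strip()}")
```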
|