by staff on (#2DE6S)
In this week's Sponsored Post, Katie Garrison of One Stop Systems explains how GPUs and flash solutions are used in radar simulation and anti-submarine warfare applications. "High-performance compute and flash solutions are not just used in the lab anymore. Government agencies, particularly the military, are using GPUs and flash for complex applications such as radar simulation, anti-submarine warfare and other areas of defense that require intensive parallel processing and large amounts of data recording." The post GPUs and Flash in Radar Simulation and Anti-Submarine Warfare Applications appeared first on insideHPC.
|
High-Performance Computing News Analysis | insideHPC
Link | https://insidehpc.com/ |
Feed | http://insidehpc.com/feed/ |
Updated | 2024-11-25 14:45 |
by staff on (#2DB3B)
Missouri-based Advanced Clustering Technologies is helping customers solve challenges by integrating NVIDIA Tesla P100 accelerators into its line of high performance computing clusters. Advanced Clustering Technologies builds custom, turn-key HPC clusters that are used for a wide range of workloads including analytics, deep learning, life sciences, engineering simulation and modeling, climate and weather study, energy exploration, and improving manufacturing processes. "NVIDIA-enabled GPU clusters are proving very effective for our customers in academia, research and industry," said Jim Paugh, Director of Sales at Advanced Clustering. "The Tesla P100 is a giant step forward in accelerating scientific research, which leads to breakthroughs in a wide variety of disciplines." The post NVIDIA Pascal GPUs come to Advanced Clustering Technologies appeared first on insideHPC.
|
by staff on (#2DB0T)
Today UK-based Hammer PLC announced that it will be a distributor of Spectra Logic storage technology in Europe. "This is an excellent opportunity to increase our high-performance computing offering to our partners and customers," said Jason Beeson, Hammer's Commercial Director. "By adding Spectra Logic's bespoke data workflow storage solutions we can reach a whole new genre of highly data-dependent users who are seeking a complete data workflow, from input and day-to-day use right through to deep storage and archiving." The post Hammer PLC to Distribute Spectra Logic Storage in Europe appeared first on insideHPC.
|
by Rich Brueckner on (#2DAWK)
In this video from KAUST, Steve Scott of Cray explains where supercomputing is going and why there is a never-ending demand for faster and faster computers. Responsible for guiding Cray's long-term product roadmap in high-performance computing, storage and data analytics, Mr. Scott has been chief architect of several generations of systems and interconnects at Cray. The post Interview: Cray's Steve Scott on What's Next for Supercomputing appeared first on insideHPC.
|
by Rich Brueckner on (#2DAPB)
In this podcast, the Radio Free HPC team hosts Dan's daughter Elizabeth. How did Dan get this way? We're on a mission to find out even as Elizabeth complains of the early onset of Curmudgeon's Syndrome. After that, we take a look at the Tsubame3.0 supercomputer coming to Tokyo Tech. The post Radio Free HPC Gets the Scoop from Dan's Daughter in Washington, D.C. appeared first on insideHPC.
|
by Rich Brueckner on (#2D80P)
When the DOE's pre-exascale supercomputers come online soon, all three will be running an optimized version of the XGC dynamic fusion code. Developed by a team at the DOE's Princeton Plasma Physics Laboratory (PPPL), the XGC code was one of only three codes out of more than 30 science and engineering programs selected to participate in Early Science programs on all three new supercomputers, which will serve as forerunners for even more powerful exascale machines that are to begin operating in the United States in the early 2020s. The post XGC Fusion Code Selected for all 3 Pre-exascale Supercomputers appeared first on insideHPC.
|
by Rich Brueckner on (#2D7WV)
In this fascinating talk, Cockcroft describes how hardware networking has reshaped how services like Machine Learning are being developed rapidly in the cloud with AWS Lambda. "We've seen the same service oriented architecture principles track advancements in technology from the coarse grain services of SOA a decade ago, through microservices that are usually scoped to a more fine grain single area of responsibility, and now functions as a service, serverless architectures where each function is a separately deployed and invoked unit." The post Adrian Cockcroft Presents: Shrinking Microservices to Functions appeared first on insideHPC.
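The progression Cockcroft describes, from coarse-grained services to independently deployed functions, can be sketched with a Lambda-style handler. This is an illustrative sketch: the handler signature follows AWS Lambda's Python convention, but the event field (`name`) is hypothetical.

```python
import json

def lambda_handler(event, context):
    """A function-as-a-service unit: deployed and invoked on its own,
    rather than as one endpoint of a long-running service."""
    name = event.get("name", "world")  # 'name' is a hypothetical event field
    return {
        "statusCode": 200,
        "body": json.dumps({"greeting": f"Hello, {name}!"}),
    }

# Invoked locally for illustration; on AWS Lambda the platform supplies
# the event and context for each invocation.
print(lambda_handler({"name": "HPC"}, None)["body"])
```

Because each such function is deployed separately, scaling and billing happen per invocation rather than per service instance, which is the architectural shift the talk traces from SOA through microservices.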
|
by Rich Brueckner on (#2D4Y8)
Industry and academic institutions are invited to showcase their R&D at PASC17, an interdisciplinary event in high performance computing that brings together domain science, applied mathematics and computer science. The event takes place June 26-28 in Lugano, Switzerland. "The PASC17 Conference offers a unique opportunity for your organization to gain visibility at a national and international level, to showcase your R&D and to network with leaders in the fields of HPC simulation and data science. PASC17 builds on a successful history – with 350 attendees in 2016 – and continues to expand its program and international profile year on year." The post Call for Exhibitors: PASC17 in Lugano appeared first on insideHPC.
|
by Rich Brueckner on (#2D4VX)
Addison Snell presented this deck at the Stanford HPC Conference. "Intersect360 Research returns with an annual deep dive into the trends, technologies and usage models that will be propelling the HPC community through 2017 and beyond. Emerging areas of focus and opportunities to expand will be explored along with insightful observations needed to support measurably positive decision making within your operations." The post Addison Snell Presents: HPC Computing Trends appeared first on insideHPC.
|
by staff on (#2D1C1)
"The IO infrastructure of TSUBAME3.0 combines fast in-node NVMe SSDs and a large, fast, Lustre-based system from DDN. The 15.9PB Lustre* parallel file system, composed of three of DDN's high-end ES14KX storage appliances, is rated at a peak performance of 150GB/s. The TSUBAME collaboration represents an evolutionary branch of HPC that could well develop into the dominant HPC paradigm at about the time the most advanced supercomputing nations and consortia achieve Exascale computing." The post DDN and Lustre to Power TSUBAME3.0 Supercomputer appeared first on insideHPC.
|
by Rich Brueckner on (#2D1M8)
In this video, Dr Tim Stitt from the Earlham Institute describes why moving their HPC workload to Iceland made economic sense. Through the Verne Global datacenter, the Earlham Institute will have access to one of the world's most reliable power grids producing 100% geothermal and hydro-electric renewable energy. As EI's HPC analysis requirements continue to grow, Verne Global will enable the institute to save up to 70% in energy costs (based on a 14p versus 4p per kWh rate, with no additional power needed for cooling), significantly benefiting the organization in its advanced genomics and bioinformatics research of living systems. The post Earlham Institute Moves HPC Workloads to Iceland appeared first on insideHPC.
|
by staff on (#2D1DY)
Today Dutch startup Asperitas rolled out Immersed Computing cooling technology for datacenters. The company's first market-ready solution, the AIC24, is "the first water-cooled oil-immersion system which relies on natural convection for circulation of the dielectric liquid." This results in a fully self-contained and plug-and-play modular system. "The AIC24 needs far less infrastructure than any other liquid installation, saving energy and costs on all levels of datacentre operations. The AIC24 is the most sustainable solution available for IT environments today, ensuring the highest possible efficiency in availability, energy reduction and reuse, while increasing capacity and greatly improving density." The post Asperitas Startup Brings Immersive Cooling to Datacenters appeared first on insideHPC.
|
by staff on (#2D15Z)
"TSUBAME3.0 is expected to deliver more than two times the performance of its predecessor, TSUBAME2.5," writes Marc Hamilton from Nvidia. "It will use Pascal-based Tesla P100 GPUs, which are nearly three times as efficient as their predecessors, to reach an expected 12.2 petaflops of double precision performance. That would rank it among the world's 10 fastest systems according to the latest TOP500 list, released in November. TSUBAME3.0 will excel in AI computation, expected to deliver more than 47 PFLOPS of AI horsepower. When operated concurrently with TSUBAME2.5, it is expected to deliver 64.3 PFLOPS, making it Japan's highest performing AI supercomputer." The post Pascal GPUs to Accelerate TSUBAME 3.0 Supercomputer at Tokyo Tech appeared first on insideHPC.
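The combined figure quoted above can be sanity-checked with simple arithmetic. Note the 47.2 PFLOPS value below is an assumed round number consistent with "more than 47 PFLOPS", not a figure from the article:

```python
# Back-of-envelope check of the AI-performance figures quoted above.
tsubame3_ai = 47.2   # PFLOPS for TSUBAME3.0 alone (assumed; article says "more than 47")
combined_ai = 64.3   # PFLOPS for TSUBAME3.0 + TSUBAME2.5 (quoted)

# The difference is the AI throughput TSUBAME2.5 would contribute.
tsubame25_ai = combined_ai - tsubame3_ai
print(f"Implied TSUBAME2.5 contribution: {tsubame25_ai:.1f} PFLOPS")
```

Under that assumption, TSUBAME2.5 contributes roughly 17 PFLOPS of reduced-precision AI throughput to the combined figure.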
|
by Rich Brueckner on (#2D14E)
"Servers today have hundreds of knobs that can be tuned for performance and energy efficiency. While some of these knobs can have a dramatic effect on these metrics, manually tuning them is a tedious task. It is very labor intensive, it requires a lot of expertise, and the tuned settings are only relevant for the hardware and software that were used in the tuning process. In addition to that, manual tuning can't take advantage of application phases that may each require different settings. In this presentation, we will talk about the concept of dynamic tuning and its advantages. We will also demo how to improve performance using manual tuning as well as dynamic tuning using DatArcs Optimizer." The post Video: The Era of Self-Tuning Servers appeared first on insideHPC.
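The idea behind dynamic tuning is a feedback loop: measure, adjust a knob, re-measure. A minimal sketch with a hypothetical knob and a synthetic performance model (the actual DatArcs Optimizer tunes real hardware and OS settings against live measurements):

```python
def measured_throughput(knob):
    # Synthetic stand-in for a live performance measurement;
    # this toy model peaks at knob = 6.
    return 100 - (knob - 6) ** 2

def dynamic_tune(knob, steps=20):
    """Greedy hill climbing: repeatedly move the knob in whichever
    direction improves the measured metric, stopping at a local peak."""
    for _ in range(steps):
        current = measured_throughput(knob)
        best = max((knob - 1, knob, knob + 1), key=measured_throughput)
        if measured_throughput(best) <= current:
            break
        knob = best
    return knob

print(dynamic_tune(0))  # prints 6
```

A dynamic tuner can also re-run this loop as the application moves between phases, which is exactly what a one-time manual tuning cannot do.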
|
by staff on (#2CXMS)
"This breakthrough has unlocked new potential for ExxonMobil's geoscientists and engineers to make more informed and timely decisions on the development and management of oil and gas reservoirs," said Tom Schuessler, president of ExxonMobil Upstream Research Company. "As our industry looks for cost-effective and environmentally responsible ways to find and develop oil and gas fields, we rely on this type of technology to model the complex processes that govern the flow of oil, water and gas in various reservoirs." The post Exxon Mobil and NCSA Achieve New Levels of Scalability on complex Oil & Gas Reservoir Simulation Models appeared first on insideHPC.
|
by Rich Brueckner on (#2CXK0)
SC17 has issued its Call for Panel Sessions. The conference takes place Nov. 12-17 in Denver. "As in past years, panels at SC17 will be some of the most heavily attended events of the Conference. Panels will bring together the key thinkers and producers in the field to consider in a lively and rapid-fire context some of the key questions challenging high performance computing, networking, storage and associated analysis technologies for the foreseeable future." The post Call for Panels: SC17 in Denver appeared first on insideHPC.
|
by Rich Brueckner on (#2CXEH)
"In recent years, major breakthroughs were achieved in different fields using deep learning. From image segmentation and speech recognition to self-driving cars, deep learning is everywhere. Performance in image classification, segmentation and localization has reached levels not seen before thanks to GPUs and large-scale GPU-based deployments, leading deep learning to become a first-class HPC workload." The post Deep Learning & HPC: New Challenges for Large Scale Computing appeared first on insideHPC.
|
by Rich Brueckner on (#2CXC7)
Today ISC 2017 announced that its Distinguished Talk series will focus on data analytics in manufacturing and scientific applications. One of the Distinguished Talks will be given by Dr. Sabine Jeschke from the Cybernetics Lab at RWTH Aachen University on the topic of "Robots in Crowds – Robots and Clouds." Jeschke's presentation will be followed by one from physicist Kerstin Tackmann, from the German Electron Synchrotron (DESY) research center, who will discuss big data and machine learning techniques used for the ATLAS experiment at the Large Hadron Collider. The post ISC 2017 Distinguished Talks to Focus on Data Analytics in Manufacturing & Science appeared first on insideHPC.
|
by MichaelS on (#2CX6J)
"As with all new technology, developers will have to create processes in order to modernize applications to take advantage of any new feature. Rather than randomly trying to improve the performance of an application, it is wise to be very familiar with the application and use available tools to understand bottlenecks and look for areas of improvement." The post Six Steps Towards Better Performance on Intel Xeon Phi appeared first on insideHPC.
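The advice to profile before optimizing applies on any platform, not just Xeon Phi. As a generic illustration (not from the article), Python's built-in cProfile can locate a hot spot before any tuning is attempted, and the optimized replacement can be verified against the original:

```python
import cProfile
import io
import pstats

def slow_sum(n):
    # Deliberately naive hot spot for the profiler to find.
    total = 0
    for i in range(n):
        total += i * i
    return total

def fast_sum(n):
    # Closed form for 0^2 + 1^2 + ... + (n-1)^2.
    return (n - 1) * n * (2 * n - 1) // 6

# Step 1: measure, don't guess.
profiler = cProfile.Profile()
profiler.enable()
result = slow_sum(200_000)
profiler.disable()

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(3)
print(stream.getvalue().strip().splitlines()[0])  # e.g. "3 function calls in 0.02 seconds"

# Step 2: replace the bottleneck, and verify the result is unchanged.
assert result == fast_sum(200_000)
```

The same measure-then-modify discipline carries over to vectorization reports and hardware counters on Xeon Phi, where the tools differ but the workflow does not.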
|
by staff on (#2CTS8)
"Machine Learning and deep learning represent new frontiers in analytics. These technologies will be foundational to automating insight at the scale of the world's critical systems and cloud services," said Rob Thomas, General Manager, IBM Analytics. "IBM Machine Learning was designed leveraging our core Watson technologies to accelerate the adoption of machine learning where the majority of corporate data resides. As clients see business returns on private cloud, they will expand for hybrid and public cloud implementations." The post IBM Machine Learning Platform Comes to the Private Cloud appeared first on insideHPC.
|
by Rich Brueckner on (#2CSMG)
"In this guide, we take a high-level view of AI and deep learning in terms of how it's being used and what technological advances have made it possible. We also explain the difference between AI, machine learning and deep learning, and examine the intersection of AI and HPC. We also present the results of a recent insideBIGDATA survey to see how well these new technologies are being received. Finally, we take a look at a number of high-profile use case examples showing the effective use of AI in a variety of problem domains." The post Defining AI, Machine Learning, and Deep Learning appeared first on insideHPC.
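The distinction the guide draws can be made concrete with a toy example: a machine learning program infers its parameters from data rather than having them hand-coded. A minimal stdlib-only sketch (illustrative, not from the guide), fitting the hypothetical target y = 2x + 1 by gradient descent:

```python
# Training data generated from the (hypothetical) target y = 2x + 1.
data = [(x, 2 * x + 1) for x in range(10)]

w, b = 0.0, 0.0  # model parameters: learned from data, not hand-coded
lr = 0.01        # learning rate

for _ in range(2000):
    # Gradient of mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned w={w:.2f}, b={b:.2f}")  # approaches w=2, b=1
```

Deep learning is this same idea scaled up: many layers of such learned parameters, which is why GPU throughput matters so much for training.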
|
by Rich Brueckner on (#2CSH9)
"Coursera has named Intel as one of its first corporate content partners. Together, Coursera and Intel will develop and distribute courses to democratize access to artificial intelligence and machine learning. In this interview, Ibrahim talks about her and Coursera's history, reports on Coursera's progress delivering education at massive scale, and discusses Coursera and Intel's unique partnership for AI." The post Podcast: Democratizing Education for the Next Wave of AI appeared first on insideHPC.
|
by staff on (#2CSCS)
The OpenFog Consortium was founded over one year ago to accelerate adoption of fog computing through an open, interoperable architecture. The newly published OpenFog Reference Architecture is a high-level framework that will lead to industry standards for fog computing. The OpenFog Consortium is collaborating with standards development organizations such as IEEE to generate rigorous user, functional and architectural requirements, plus detailed application program interfaces (APIs) and performance metrics to guide the implementation of interoperable designs. The post OpenFog Consortium Publishes Reference Architecture appeared first on insideHPC.
|
by Rich Brueckner on (#2CS65)
Jeffrey Welser from IBM Research Almaden presented this talk at the Stanford HPC Conference. "Whether exploring new technical capabilities, collaborating on ethical practices or applying Watson technology to cancer research, financial decision-making, oil exploration or educational toys, IBM Research is shaping the future of AI." The post Video: Computing of the Future appeared first on insideHPC.
|
by Rich Brueckner on (#2CNR6)
Purdue University is seeking a Senior HPC Systems Administrator in our Job of the Week. "In this role, you will assist world-renowned researchers in advancing science. Additionally, as Senior HPC Systems Administrator, you will be responsible for large sections of Purdue's innovative computational research environment and help set direction of future research systems. This role requires an individual to work closely with researchers, systems administrators, and developers throughout the University and partner institutions to develop large-impact projects and computational systems." The post Job of the Week: Senior HPC Systems Administrator at Purdue appeared first on insideHPC.
|
by staff on (#2CNG4)
Today Cycle Computing announced that the HyperXite team is using CycleCloud software to manage Hyperloop simulations using ANSYS Fluent on the Azure Cloud. "Our mission is to optimize and economize the transportation of the future, and Cycle Computing has made that endeavor so much easier," said Nima Mohseni, Simulation Lead, HyperXite. "We absolutely require a solution that can compress and condense our timeline while providing the powerful computational results we require. Thank you to Cycle Computing for making a significant difference in our ability to complete our work." The post Supercomputing the Hyperloop on Azure appeared first on insideHPC.
|
by Rich Brueckner on (#2CNA3)
Francis Lam from Huawei presented this talk at the Stanford HPC Conference. "High performance computing is rapidly finding new uses in many applications and businesses, enabling the creation of disruptive products and services. Huawei, a global leader in information and communication technologies, brings a broad spectrum of innovative solutions to HPC. This talk examines Huawei's world class HPC solutions and explores creative new ways to solve HPC problems." The post Huawei: A Fresh Look at High Performance Computing appeared first on insideHPC.
|
by Rich Brueckner on (#2CN7Z)
Over at TACC, Faith Singer-Villalobos writes that researchers are using the Rustler supercomputer to tackle Big Data from self-driving connected vehicles (CVs). "The volume and complexity of CV data are tremendous and present a big data challenge for the transportation research community," said Natalia Ruiz-Juri, a research associate with The University of Texas at Austin's Center for Transportation Research. While there is uncertainty in the characteristics of the data that will eventually be available, the ability to efficiently explore existing datasets is paramount. The post Supercomputing Transportation System Data using TACC's Rustler appeared first on insideHPC.
|
by staff on (#2CMYM)
High-performance computing (HPC) tools are helping financial firms survive and thrive in this highly demanding and data-intensive industry. As financial models grow in complexity and greater amounts of data must be processed and analyzed on a daily basis, firms are increasingly turning to HPC solutions to exploit the latest technology performance improvements. Suresh Aswani, Senior Manager, Solutions Marketing, at Hewlett Packard Enterprise, shares how to overcome the learning curve of new processor architectures. The post Overcoming the Learning Curve of New Processor Architectures appeared first on insideHPC.
|
by Rich Brueckner on (#2CHHG)
In his keynote, Mr. Geist will discuss the need for future Department of Energy supercomputers to solve emerging data science and machine learning problems in addition to running traditional modeling and simulation applications. In August 2016, the Exascale Computing Project (ECP) was approved to support a huge lift in the trajectory of U.S. High Performance Computing (HPC). The ECP goals are intended to enable the delivery of capable exascale computers in 2022 and one early exascale system in 2021, which will foster a rich exascale ecosystem and work toward ensuring continued U.S. leadership in HPC. He will also share how the ECP plans to achieve these goals and the potential positive impacts for OFA. The post ORNL's Al Geist to Keynote OpenFabrics Workshop in Austin appeared first on insideHPC.
|
by staff on (#2CHBY)
Today Mellanox announced line-rate crypto throughput using the company's Innova IPsec Network Adapter, demonstrating more than three times higher throughput and more than four times better CPU utilization when compared to x86 software-based server offerings. Mellanox's Innova IPsec adapter provides seamless crypto capabilities and advanced network accelerations to modern data centers, thereby enabling the ubiquitous use of encryption across the network while sustaining unmatched performance, scalability and efficiency. By replacing software-based offerings, Innova can reduce data center expenses by 60 percent or more. The post Mellanox Demos 4X Improvement in Crypto Performance with 40G Ethernet Network Adapter appeared first on insideHPC.
|
by Rich Brueckner on (#2CH78)
DK Panda from Ohio State University presented this deck at the 2017 HPC Advisory Council Stanford Conference. "This talk will focus on challenges in designing runtime environments for exascale systems with millions of processors and accelerators to support various programming models. We will focus on MPI, PGAS (OpenSHMEM, CAF, UPC and UPC++) and Hybrid MPI+PGAS programming models by taking into account support for multi-core, high-performance networks, accelerators (GPGPUs and Intel MIC), virtualization technologies (KVM, Docker, and Singularity), and energy-awareness. Features and sample performance numbers from the MVAPICH2 libraries will be presented." The post Designing HPC & Deep Learning Middleware for Exascale Systems appeared first on insideHPC.
|
by Rich Brueckner on (#2CH43)
In this podcast, the Radio Free HPC team discusses a recent presentation by John Gustafson on Next Generation Computer Arithmetic. "A new data type called a 'posit' is designed for direct drop-in replacement for IEEE Standard 754 floats. Unlike unum arithmetic, posits do not require interval-type mathematics or variable size operands, and they round if an answer is inexact, much the way floats do. However, they provide compelling advantages over floats, including simpler hardware implementation that scales from as few as two-bit operands to thousands of bits." The post Radio Free HPC Looks at the Posit and Next Generation Computer Arithmetic appeared first on insideHPC.
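The posit layout Gustafson describes (sign bit, variable-length regime, optional exponent bits, fraction) can be illustrated with a small decoder. This is a hedged sketch of the 8-bit format as described in his paper, not a reference implementation; es=0 exponent bits are assumed by default.

```python
def decode_posit8(bits, es=0):
    """Decode an 8-bit posit with 'es' exponent bits.
    Illustrative sketch of the format (sign, regime, exponent, fraction)."""
    if bits == 0:
        return 0.0
    if bits == 0x80:
        return float("inf")  # the single unsigned infinity of early posit drafts
    sign = -1 if bits & 0x80 else 1
    if sign < 0:
        bits = (-bits) & 0xFF  # negative posits are stored as two's complements
    # Regime: a run of identical bits after the sign, then a terminating bit.
    rest = [(bits >> i) & 1 for i in range(6, -1, -1)]
    run = 1
    while run < len(rest) and rest[run] == rest[0]:
        run += 1
    k = run - 1 if rest[0] == 1 else -run
    rest = rest[run + 1:]  # drop regime bits and the terminating bit
    exp = 0
    for _ in range(es):    # es exponent bits, if any remain
        exp = (exp << 1) | (rest.pop(0) if rest else 0)
    frac, scale = 1.0, 0.5
    for b in rest:         # remaining bits form the fraction 1.fff...
        frac += b * scale
        scale /= 2
    return sign * 2.0 ** (k * 2 ** es + exp) * frac

print(decode_posit8(0b01000000))  # 1.0
print(decode_posit8(0b01010000))  # 1.5
```

The variable-length regime is what gives posits tapered precision: values near 1 get more fraction bits than very large or very small values, unlike the fixed field widths of IEEE 754.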
|
by Rich Brueckner on (#2CE2M)
The PEARC17 Conference has issued its Call for Participation. Formerly known as the Extreme Science and Engineering Discovery Environment (XSEDE) annual conference, PEARC17 will take place July 9-13 in New Orleans. "The Technical Program for PEARC17 includes four Paper tracks, Tutorials, Posters, a Visualization Showcase and Birds of a Feather (BoF) sessions. All submissions should emphasize experiences and lessons derived from operation and use of advanced research computing on campuses or provided for the academic and open science communities. Submissions aligned with the conference theme—Sustainability, Success, and Impact—are particularly encouraged." The post Call for Participation: PEARC17 in New Orleans appeared first on insideHPC.
|
by Rich Brueckner on (#2CDZQ)
"Linux Containers gain more and more momentum in all IT ecosystems. This talk provides an overview of what happened in the container landscape (in particular Docker) during the course of the last year and how it impacts datacenter operations, HPC and high-performance big data. Furthermore, Christian will give an update on, and extend, the 'things to explore' list he presented at the last Lugano workshop, applying what he learned and came across during 2016." The post Video: State of Linux Containers appeared first on insideHPC.
|
by Rich Brueckner on (#2CASB)
Today Intel announced the open-source BigDL, a Distributed Deep Learning Library for the Apache Spark* open-source cluster-computing framework. "BigDL is an open-source project, and we encourage all developers to connect with us on the BigDL Github, sample the code and contribute to the project," said Doug Fisher, senior vice president and general manager of the Software and Services Group at Intel. The post Intel Rolls Out BigDL Deep Learning Library for Apache Spark appeared first on insideHPC.
|
by Rich Brueckner on (#2CAQJ)
"The University of Oregon (UO) High Performance Computing Research Core Facility (HPCRCF) seeks experienced applicants for the position of Research Systems and Application Administrator. The HPCRCF is a new facility located on the campus of the UO in Eugene, Oregon. The mission of the HPCRCF is to support computational research at the UO and collaborating institutions, and is home to a new flagship research cluster." The post Job of the Week: Research Systems and Application Administrator at University of Oregon appeared first on insideHPC.
|
by staff on (#2C7X5)
Leaders in hybrid accelerated HPC in the United States, Japan, and Switzerland have signed a memorandum of understanding establishing an international institute dedicated to common goals, the sharing of HPC expertise, and forward-thinking evaluation of computing architecture. "Forecasting the future of leadership-class computing and managing the risk of architectural change is a shared interest among ORNL, Tokyo Tech, and ETH Zurich," said Jeff Nichols, associate laboratory director of computing and computational sciences at ORNL. "What unites our three organizations is a willingness to embrace change, actively partner with HPC vendors, and devise solutions that advance the work of our scientific users. ADAC provides a framework for member organizations to pursue mutual interests such as accelerated node architectures as computing moves toward the exascale era and beyond." The post Global HPC Centers Form Accelerated Computing Institute appeared first on insideHPC.
|
by staff on (#2C79X)
"We are very excited to be working closely with Bright Computing to bring its supercomputing software tools to the embedded Aerospace & Defense market as part of our OpenHPEC Accelerator Suite software development toolset," said Lynn Bamford, Senior Vice President and General Manager, Defense Solutions division. "Together, we are providing HPEC system integrators with proven and robust development tools from the Commercial HPC market to speed and ease the design of COTS-based highly scalable supercomputer-class solutions." The post Curtiss-Wright Defense Solutions Group Teams with Bright Computing appeared first on insideHPC.
|
by Rich Brueckner on (#2C768)
Shahin Khan from OrionX presented this talk at the Stanford HPC Conference. "From BitCoins and AltCoins to Design Thinking, Autonomous tech and the changing nature of jobs, IoT and cyber risk, and the impact of application architecture on cloud computing, we'll touch on some of the hottest technologies in 2017 that are changing the world and how HPC will be the engine that drives it." The post Shahin Khan Presents: Hot Technology Topics in 2017 appeared first on insideHPC.
|
by staff on (#2C729)
Industry veterans Jason Coposky and Terrell Russell have taken lead roles at the membership-based foundation that leads development and support of the integrated Rule-Oriented Data System (iRODS). "With data becoming the currency of the knowledge economy, now is an exciting time to be involved with developing and sustaining a world-class data management platform like iRODS," said Coposky. "Our consortium membership is growing, and our increasing ability to integrate with commonly used hardware and software is translating into new users and an even more robust product." The post Coposky and Russell Tapped to Lead iRODS Consortium appeared first on insideHPC.
|
by Rich Brueckner on (#2C6WK)
Frank Ham from Cascade Technologies presented this talk at the Stanford HPC Conference. "A spin-off of the Center for Turbulence Research at Stanford University, Cascade Technologies grew out of a need to bridge fundamental research from institutions like Stanford University and its application in industry. In a continual push to improve the operability and performance of combustion devices, high-fidelity simulation methods for turbulent combustion are emerging as critical elements in the design process. Multiphysics-based methodologies can accurately predict mixing, study flame structure and stability, and even predict product and pollutant concentrations at design and off-design conditions." The post Best Practices – Large Scale Multiphysics appeared first on insideHPC.
|
by Rich Brueckner on (#2C389)
"This tutorial will present several features that the draft Fortran 2015 standard introduces to meet challenges that are expected to dominate massively parallel programming in the coming exascale era. The expected exascale challenges include higher hardware- and software-failure rates, increasing hardware heterogeneity, a proliferation of execution units, and deeper memory hierarchies." The post Tutorial: Towards Exascale Computing with Fortran 2015 appeared first on insideHPC.
|
by staff on (#2C35Y)
Today Allinea Software launched the first update to its well-established toolset for debugging, profiling and optimizing high performance code since being acquired by ARM in December 2016. "The V7.0 release provides new integrations for the Allinea Forge debugger and profiler and Allinea Performance Reports and will mean more efficient code development and optimization for users, especially those wishing to take software performance to new levels across Xeon Phi, CUDA and IBM Power platforms," said Mark O'Connor, ARM Director, Product Management HPC tools. The post Allinea Updates Code Optimization Tools Across Platforms appeared first on insideHPC.
|
by Rich Brueckner on (#2C33D)
"China and the United States have been in the race to develop the most capable supercomputer. China has announced that its exascale computer could be released sooner than originally planned. Steve Conway, VP for high performance computing at IDC, joins Federal Drive with Tom Temin for analysis." The post Podcast: IDC's Steve Conway on China's New Plan for Exascale appeared first on insideHPC.
|
by Rich Brueckner on (#2C2ZC)
In this video from KAUST Live, Patricia Damkroger discusses her new role as Vice President, Data Center Group and General Manager, Technical Computing Initiative, Enterprise and Government at Intel. "As the former Associate Director for Computation at Lawrence Livermore National Laboratory (LLNL), Trish Damkroger led the 1,000-employee workforce behind the Laboratory's high performance computing efforts. She is a longtime committee member and one-time general chair of the SC conference. Most recently, Damkroger was the SC16 Diverse HPC Workforce Chair." The post Video: Trish Damkroger on her New Mission at Intel appeared first on insideHPC.
|
by staff on (#2C2VQ)
"U.S. Patent 9,496,200 protects the invention of utilizing a modular, building block approach for datacenter cooling with direct contact liquid cooling," said Geoff Lyon, CEO of CoolIT Systems. "CoolIT's commitment to developing and patenting unique solutions provides our customers with the assured competitive advantage they are looking for. The 60 patent milestone adds confirmation to CoolIT's leadership in developing innovative liquid cooling solutions for modern data centers." The post CoolIT Systems Issued U.S. Patent for Modular Heat-Transfer Solutions appeared first on insideHPC.
|
by Richard Friedman on (#2C2SV)
"By implementing popular Python packages such as NumPy, SciPy, and scikit-learn to call the Intel Math Kernel Library (Intel MKL) and the Intel Data Analytics Acceleration Library (Intel DAAL), Python applications are automatically optimized to take advantage of the latest architectures. These libraries have also been optimized for multithreading through calls to the Intel Threading Building Blocks (Intel TBB) library. This means that existing Python applications will perform significantly better merely by switching to the Intel distribution." The post Intel Releases Optimized Python for HPC appeared first on insideHPC.
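The point about library dispatch can be illustrated: the Python source stays the same, and the speedup comes from NumPy routing operations such as matrix multiply to an optimized BLAS backend (MKL in the Intel distribution). A sketch assuming stock NumPy is installed:

```python
import numpy as np

# The source below is identical under stock NumPy and the Intel
# distribution; the Intel build simply routes the multiply to MKL's
# multithreaded, vectorized dgemm instead of a generic BLAS.
n = 500
rng = np.random.default_rng(0)
a = rng.random((n, n))
b = rng.random((n, n))

c = a @ b  # dense matrix multiply, dispatched to the underlying BLAS
print(c.shape)
```

This is why "merely switching distributions" helps: code written against NumPy's array API inherits whatever backend the installed build links against, with no source changes.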
|
by staff on (#2BZ9C)
Today Silicon Valley startup Tachyum Inc. launched, announcing its mission to conquer the performance plateau in nanometer-class chips and the systems they power. "We have entered a post-Moore's Law era where performance hit a plateau, cost reduction slowed dramatically, and process node shrinks and CPU release cycles are getting longer," said Danilak, Tachyum CEO. "An innovative new approach, from first principles, is the only realistic chance we have of achieving performance improvements to rival those that powered the tech industry of past decades, and the opportunity is a hundred times greater than any venture I've been involved in." The post Tachyum Startup Looks to Break Performance and Cost Barriers to Intelligent Information Processing appeared first on insideHPC.
|
by Rich Brueckner on (#2BZ6Z)
Gilad Shainer moderated this panel discussion on Exascale Computing at the Stanford HPC Conference. "The creation of a capable exascale ecosystem will have profound effects on the lives of Americans, improving our nation's national security, economic competitiveness, and scientific capabilities. The exponential increase of computation power enabled with exascale will fuel a vast range of breakthroughs and accelerate discoveries in national security, medicine, earth sciences and many other fields." The post Panel Discussion: The Exascale Endeavor appeared first on insideHPC.
|