by staff on (#3VXNG)
The Department of Energy’s Exascale Computing Project (ECP) has named Lori Diachin as its new Deputy Director effective August 7, 2018. Lori replaces Stephen Lee, who has retired from Los Alamos National Laboratory. “Lori has deep technical expertise, years of experience, and a collegial leadership style that qualify her uniquely for the ECP Deputy Director role,” said Bill Goldstein, director of Lawrence Livermore National Laboratory and chairman of the ECP board of directors. The post Lori Diachin Named Deputy Director of Exascale Computing Project appeared first on insideHPC.
|
High-Performance Computing News Analysis | insideHPC
Link | https://insidehpc.com/ |
Feed | http://insidehpc.com/feed/ |
Updated | 2024-11-02 20:15 |
by Rich Brueckner on (#3VXNH)
In this video from ISC 2018, Sam Mahalingam from Altair provides an update on the company's latest software innovations for managing HPC Workloads. "This year, with the announcement of the new PBS Works 2018, Altair is once again furthering its commitment to support the needs of the entire HPC community. This new release includes significant PBS Professional updates, a brand new intuitive user interface, and several advanced admin features. It also introduces a new HPC cloud management platform that allows seamless cloud bursting and puts cloud infrastructures under your control."The post Altair Orchestrates HPC Workloads with PBS Pro at ISC 2018 appeared first on insideHPC.
|
by staff on (#3VVJ0)
The ASCR Leadership Computing Challenge has awarded 20 projects a total of 1.5 billion core-hours at Argonne to pursue challenging, high-risk, high-payoff simulations. "The Advanced Scientific Computing Research (ASCR) program, which manages some of the world’s most powerful supercomputing facilities, selects projects every year in areas directly related to the DOE mission, broadening the community of researchers capable of using leadership computing resources and serving national interests in the advancement of scientific discovery, technological innovation, and economic competitiveness."The post DOE Awards 1.5 billion Hours of Computing Time at Argonne appeared first on insideHPC.
|
by Rich Brueckner on (#3VVJ1)
In this video from ISC 2018 in Frankfurt, Naoki Shibata from Xtreme-D describes the company’s innovative solutions for deploying and managing HPC clouds. "Customers can use our easy-to-deploy turnkey HPC cluster system on public cloud, including setup of HPC middleware (OpenHPC-based packages), configuration of SLURM, OpenMPI, and OSS HPC applications such as OpenFOAM. The user can start the HPC cluster (submitting jobs) within 10 minutes on the public cloud."The post Xtreme-D Showcases Innovative Solutions for Managing HPC Clouds at ISC 2018 appeared first on insideHPC.
|
by Rich Brueckner on (#3VVD1)
In this video from the Dell EMC HPC Community Meeting, Brian Kucic from R Systems describes how Dell EMC powers the company's HPC Cloud solutions. "R Systems NA, Inc. provides High Performance Computer Cluster resources and technical expertise to commercial and institutional research clients through the R Systems brand and the Dell HPC Cloud Services Partnership. In addition to our industry standard solutions, R Systems Engineers assist clients in selecting the components of their optimal cluster configuration."The post How Dell EMC Powers HPC in the Cloud with R Systems appeared first on insideHPC.
|
by Rich Brueckner on (#3VV8R)
In this podcast, the Radio Free HPC team looks at a new whitepaper from Lincoln Labs focused on the performance hits from Spectre/Meltdown mitigations. The news is not good. After that, Shahin points us to the story about how DARPA just allocated $75 million in awards for thinking-outside-the-box computing innovation. They call it the Electronics Resurgence Initiative, and the list of projects funded includes something called Software Defined Hardware.The post Radio Free HPC Reviews Lincoln Labs Paper on Spectre/Meltdown Performance Hits appeared first on insideHPC.
|
by Rich Brueckner on (#3VSSE)
"NVIDIA researchers are gearing up to present 19 accepted papers and posters, seven of them during speaking sessions, at the annual Computer Vision and Pattern Recognition conference next week in Salt Lake City, Utah. Joining us to discuss some of what's being presented at CVPR, and to share his perspective on the world of deep learning and AI in general, is one of the pillars of the computer science world, Bill Dally, chief scientist at NVIDIA."The post Podcast: Bill Dally from NVIDIA on What’s Next for AI appeared first on insideHPC.
|
by Rich Brueckner on (#3VSSF)
In this video from PASC18, Saumil Patel from Argonne describes his poster on engine combustion simulation. "This work marks a milestone achievement in using Nek5000, a highly-scalable computational fluid dynamics (CFD) solver, to capture turbulent flow and thermal fields inside realistic engine geometries. In the context of an arbitrary Lagrangian-Eulerian (ALE) framework, several algorithms have been developed and integrated into Nek5000 in order to overcome the computational challenges associated with moving boundaries (i.e. valves and pistons)."The post Low-Mach Simulation of Flow and Heat Transfer in an Internal Combustion Engine appeared first on insideHPC.
|
by Rich Brueckner on (#3VRKZ)
Dr. Peter Clapham from the Sanger Institute gave this talk at the DDN User Group at ISC 2018. "The DDN User Group at ISC is an annual conference that brings together the best and brightest scientists, researchers and technologists to share and learn how leading global HPC organizations are executing cutting-edge initiatives that are transforming the world. The goal of the event is to gather the community during ISC to discover how HPC organizations are assessing and leveraging technology to raise the bar on HPC innovations and best practices."The post Enabling a Secure Multi-Tenant Environment for HPC at the Sanger Institute appeared first on insideHPC.
|
by Rich Brueckner on (#3VRHP)
ANSYS is seeking an HPC Software Developer in our Job of the Week. "The Software Developer II is responsible for assisting in the design, development and maintenance of the next generation, multi-physics coupling engine. This developer can expect to be engaged in all stages of code and capability planning, creation and evolution, and to work closely with other groups as they adopt the coupling infrastructure. Emphasis is on planning and executing work at the iteration (approximately monthly) time scale."The post Job of the Week: HPC Software Developer at ANSYS appeared first on insideHPC.
|
by staff on (#3VPMM)
Today Hewlett Packard Enterprise and PLDA announced a joint collaboration to meet the challenges of next-generation connectivity for advanced workloads. Gen-Z is a new open interconnect protocol and connector developed by the Gen-Z Consortium to solve the challenges associated with processing and analyzing huge amounts of data in real time. HPE and PLDA are working together to develop Gen-Z semiconductor IP designed to the Gen-Z Core Specification 1.0. “PLDA is proud to collaborate with HPE to provide comprehensive design IP to silicon providers to enable volume production of Gen-Z compatible components and to enable system vendors to utilize the Gen-Z silicon components to build network, storage and compute systems and solutions,” said Arnaud Schleich, CEO at PLDA. “This will enable an open ecosystem of Gen-Z building blocks for a variety of solutions from the intelligent edge to the cloud.” The post PLDA and HPE to Develop Gen-Z semiconductor IP appeared first on insideHPC.
|
by Rich Brueckner on (#3VPMP)
In this video from ISC 2018, Cray CTO Steve Scott describes how the company is moving towards providing Exascale computing capabilities for scientists and engineers worldwide. “It takes many years and many generations of technology developments to be a successful supercomputing provider. With every Cray system, you get the benefit of decades of supercomputing experience. We offer a comprehensive portfolio of supercomputer, storage, data analytics and AI solutions for a range of budgets. All hardware and software is integrated, and every solution comes with the assurance of Cray support.” The post Cray Sets the Stage for Exascale at ISC 2018 appeared first on insideHPC.
|
by staff on (#3VPMR)
Today SIGHPC announced the third annual recipients of the ACM SIGHPC/Intel Computational and Data Science Fellowship. Funded by Intel, the Fellowship was established to increase the diversity of students pursuing graduate degrees in data science and computational science.The post SIGHPC Announces Computational and Data Science Fellowships appeared first on insideHPC.
|
by Rich Brueckner on (#3VPMT)
In this video from ISC 2018, Marc Hamilton from NVIDIA describes how the company is working with Dell EMC to accelerate AI and HPC workloads. "Dell EMC Deep Learning Ready Bundle customers include the new Dell EMC PowerEdge C4140 server, supporting latest generation NVIDIA Tesla V100 GPU accelerators with PCIe and NVLink high-speed interconnect technology."The post How NVIDIA Powers Dell EMC Systems for HPC & AI appeared first on insideHPC.
|
by Sarah Rubenoff on (#3VPGQ)
According to a new Ovum white paper, sponsored by NVIDIA, there is a huge opportunity to help physicians with AI-based systems that can cut down on the workload across protocoling, imaging analysis, and automated reporting of the results.The post How AI & Machine Intelligence is Assisting Clinicians appeared first on insideHPC.
|
by Rich Brueckner on (#3VN39)
In this video from ISC 2018 in Frankfurt, Naoki Shibata from Xtreme-D demonstrates XTREME-Stargate, a solution that combines Infrastructure as a Service (IaaS) and an HPC gateway appliance to deliver a new way to access and manage high-performance cloud resources. The appliance will deliver an entirely new HPC cloud experience via a robust UI/UX.The post Video: Xtreme Stargate Launches at ISC 2018 appeared first on insideHPC.
|
by staff on (#3VME6)
Today Cray announced that it has delivered and installed a Cray XC50 supercomputer at the National Astronomical Observatory of Japan (NAOJ). The supercomputer, nicknamed NS-05 “ATERUI II,” provides more than 3 peak petaflops, making it the world’s most powerful supercomputer dedicated to astrophysical calculations. “NAOJ will use the system as a new ‘telescope’ for theoretical astronomy to perform full-scale, high-resolution simulations of the formation and evolution of the Milky Way galaxy as well as three-dimensional simulations of a supernova explosion with realistic microphysics, among other models.” The post Cray XC50 Supercomputer Powers National Astronomical Observatory of Japan appeared first on insideHPC.
|
by staff on (#3VM95)
Today Silicon Valley startup Tachyum Inc. announced that Professor Steve Furber, the highly regarded original designer of the world’s leading embedded processor, the ARM microprocessor, has joined its Board of Advisors. In this capacity, Prof. Furber will help position the company’s Prodigy processor to achieve disruptive performance in Spiking Model neural simulations, as well as standard data center workloads. “Steve Furber is a true giant in processor architecture development, as well as one of the world’s leading experts on human brain simulation research,” said Dr. Radoslav Danilak, Tachyum Co-founder and CEO. “We are extremely gratified to be able to collaborate with Professor Furber on our Prodigy processor.” The post Professor Steve Furber Joins Tachyum Board of Advisors appeared first on insideHPC.
|
by staff on (#3VM96)
Next generation workloads in High Performance Computing involve more unstructured data than ever before. In this edition of Industry Perspectives, HPE explores the next generation of data management requirements as the growing volume of data places even more demand on data management capabilities.The post Next Generation Data Management Requirements appeared first on insideHPC.
|
by Rich Brueckner on (#3VM98)
In this video from ISC 2018, Derek Bouius from AMD describes how HPC users can take advantage of new AMD EPYC processors and Radeon GPUs to accelerate their applications. "With the introduction of new EPYC processor based servers with Radeon Instinct GPU accelerators, combined with our ROCm open software platform, AMD is ushering in a new era of heterogeneous compute for HPC and Deep Learning."The post AMD steps up to HPC Workloads at ISC 2018 appeared first on insideHPC.
|
by Rich Brueckner on (#3VJJ7)
In this video, Markus Eisenbach and Dmitry Liakh from ORNL present: Intro to OpenMP, Part 1. "This video was recorded as part of the "Introduction to HPC" workshop that took place at ORNL from June 26-28. This is video 1 of 2, which gives a brief overview of parallel computing with OpenMP."The post Video: Intro to OpenMP appeared first on insideHPC.
|
by Rich Brueckner on (#3VJDG)
In this episode of Let's Talk Exascale, computational scientist Ben Bergen of Los Alamos National Laboratory describes the Advanced Technology Development and Mitigation (ATDM) subprogram's Flexible Computational Science Infrastructure (FleCSI) project.The post Podcast: Supporting Multiphysics Application Development appeared first on insideHPC.
|
by Rich Brueckner on (#3VJ2N)
In this video from the Dell EMC HPC Community Meeting, Kevin Shinpaugh from Virginia Tech describes how Dell EMC works with the Biocomplexity Institute. "The Biocomplexity Institute of Virginia Tech's IT group provides the infrastructure, applications, and services that power our research. With high-performance computing technologies, our researchers are fighting multi-drug resistant tuberculosis, modeling the spread of beliefs through social media, and simulating complex systems to inform policy-making."The post Biocomplexity Institute moves forward with HPC and Dell EMC appeared first on insideHPC.
|
by MichaelS on (#3VHXM)
Many modern applications are being developed with so-called run-time languages, which are compiled at execution time. The performance of these applications in cloud data centers is important for anyone considering moving their applications and workloads to the cloud. Download Intel Distribution for Python for free today to supercharge your applications.The post Performance in the Datacenter appeared first on insideHPC.
|
by staff on (#3VG17)
"XSEDE has awarded 145 deserving research teams at 109 universities and other institutions access to nearly two dozen NSF-funded computational and storage resources, as well as other services unique to XSEDE, such as the Extended Collaborative Support Services (ECSS). Total allocations this cycle, running July 1, 2018, through June 30, 2019, represent an estimated $7.3 million of time on multi-core, many-core, GPU-accelerated, and large-memory computing resources (which does not include additional consulting resources) – all at no cost to the researchers. Since its founding in 2011, XSEDE and XSEDE 2.0 have allocated an estimated $270M of computing time."The post XSEDE allocates $7.3M worth of computing time to U.S. Researchers appeared first on insideHPC.
|
by staff on (#3VFWF)
In this special guest feature, Gemma Church from Scientific Computing World discusses advances to FEA software as it is now used to simulate a wide range of physical phenomena. "With 40 years’ experience in developing FEA technologies, the most significant challenges are not in the internal technical work. In fact, the major challenge might be termed institutional inertia. Many companies are hardwired with the thought that product design requires extensive physical prototyping."The post Bringing Big Compute to FEA appeared first on insideHPC.
|
by Rich Brueckner on (#3VFQ7)
Leonardo Bautista from the Barcelona Supercomputing Center gave this talk at PASC18. "Extreme scale supercomputers offer thousands of computing nodes to their users to satisfy their computing needs. As the need for massively parallel computing increases in industry, computing centers are being forced to increase in size and to transition to new computing technologies. In this talk, we will discuss how to guarantee high reliability to high performance applications running in extreme scale supercomputers. In particular, we cover the tools necessary to implement scalable multilevel checkpointing for tightly coupled applications."The post Easy and Efficient Multilevel Checkpointing for Extreme Scale Systems appeared first on insideHPC.
|
by staff on (#3VFQ9)
Today Altair announced the finalists for the 2018 Altair Enlighten Award, which recognizes technology-focused organizations throughout the automotive industry driving innovation in vehicle lightweighting. “Every year brings more submissions and increasingly clever ways of removing weight from our vehicles. Observing the submissions grow in quantity and quality over the years is very encouraging because it demonstrates the commitment by the automotive companies and their suppliers to decrease the mass of our vehicles leading to a cleaner future for all of us.” The post Finalists Vie for Coveted 2018 Altair Enlighten Award appeared first on insideHPC.
|
by MichaelS on (#3VFQB)
"Understanding a cluster can be complex if tools such as Intel Cluster Checker are not available. Think of how many times users complain that their applications are not running with the expected performance, and how long it takes system administrators to diagnose the issue. With Intel Cluster Checker, diagnosing and debugging these issues is easier and less complex. By using this tool, customers will be more satisfied and a higher return on investment will be realized."The post Streamline Your HPC Setup with Intel Cluster Checker appeared first on insideHPC.
|
by staff on (#3VDNK)
Software vendor Atomicus is approaching the release of AtomicusChart, an advanced software component developed for scientists researching physical, chemical, and biological phenomena, and for those who need a convenient tool to present their results in scientific formats. "AtomicusChart is a product from Atomicus which can be re-used and integrated in any software requiring high-speed graphics for large volumes of data (including big data) and dedicated to the needs of analytical applications."The post Atomicus Chart Software Brings Easy Analytics to Scientists appeared first on insideHPC.
|
by Rich Brueckner on (#3VDNN)
"Dell EMC Ready Solutions for AI are validated, hardware and software stacks optimized to accelerate AI initiatives, shortening deployment time from weeks to days. They increase data scientist productivity by offering self-service workspaces, allowing each data scientist to configure their environment from a library of AI models and frameworks in just five clicks. Customers report that Dell EMC Hadoop solutions can help boost data scientist productivity by as much as 30 percent. IT operations are also simplified through a single console for monitoring the health and configuration of the cluster."The post How Dell EMC is Moving Forward as Thought Leader in AI appeared first on insideHPC.
|
by Rich Brueckner on (#3VDD0)
In this podcast, the Radio Free HPC team goes through a fascinating presentation that provides details on China’s Three-Pronged Plan for Exascale. "China may not be the first to Exascale, but they are building three divergent architectural prototypes that pave the way forward. We’ve got the details in this not-to-miss podcast."The post Radio Free HPC Looks at China’s Three-Pronged Plan for Exascale appeared first on insideHPC.
|
by staff on (#3VDD1)
The global manufacturing industry is moving down the path to a fourth industrial revolution — Industry 4.0 — empowered by the opportunity to collect and analyze massive amounts of data. This guest post from Intel explores how the global manufacturing industry is moving toward HPC adoption, and how it is approaching an inflection point which Intel refers to as “HPC for Everyone.” The post Growing HPC Adoption Among Manufacturers appeared first on insideHPC.
|
by staff on (#3VBWK)
In this special guest feature from Scientific Computing World, Robert Roe speaks with Dr Maria Girone, Chief Technology Officer at CERN openlab ahead of her keynote presentation at ISC High Performance. "The challenge of creating the largest particle accelerator is now complete but there is another challenge – harnessing all of the data produced through experimentation. This will become even greater when the ‘high-luminosity’ LHC experiments begin in 2026."The post Addressing Computing Challenges at CERN openlab appeared first on insideHPC.
|
by Rich Brueckner on (#3VBWM)
Robert Searles from the University of Delaware gave this talk at PASC18. "Architectures are rapidly evolving, and exascale machines are expected to offer billion-way concurrency. We need to rethink algorithms, languages and programming models among other components in order to migrate large scale applications and explore parallelism on these machines. Although directive-based programming models allow programmers to worry less about programming and more about science, expressing complex parallel patterns in these models can be a daunting task especially when the goal is to match the performance that the hardware platforms can offer."The post Abstractions and Directives for Adapting Wavefront Algorithms to Future Architectures appeared first on insideHPC.
|
by Rich Brueckner on (#3VAN9)
Intel in Silicon Valley is seeking a Research Scientist for HPC. "Intel Labs is seeking motivated researchers in the area of parallel and distributed computing research applied towards high performance computing and machine learning. This is a full-time position with the Parallel Computing Lab. The Parallel Computing Lab researches new algorithms, architectures, and approaches to address the most challenging compute- and data-intensive applications. We are focused on delivering new Intel software and hardware technologies that will transform the enterprise and technical computing experience. We work in close collaboration with leading academic and industry partners to accomplish our mission."The post Job of the Week: Research Scientist for HPC at Intel Labs appeared first on insideHPC.
|
by Rich Brueckner on (#3VAJH)
In this video from ISC 2018, John Bent and Jay Lofstead describe how the IO500 benchmark measures storage performance in HPC environments. "The IO500 benchmark suite is designed to be easy to run and the community has multiple active support channels to help with any questions. The list is about much more than just the raw rank; all submissions help the community by collecting and publishing a wider corpus of data."The post IO500 List Showcases World’s Fastest Storage Systems for HPC appeared first on insideHPC.
|
by Rich Brueckner on (#3V8RG)
Nils P. Wedi from ECMWF gave this talk at PASC18. "The increasingly large amounts of data being produced by weather and climate simulations and earth system observations are sometimes characterised as a deluge. This deluge of data is both a challenge and an opportunity. The main opportunities are to make use of this wealth of data to 1) improve knowledge by extracting additional knowledge from the data and 2) to improve the quality of the models themselves by analysing the accuracy, or lack thereof, of the resultant simulation data."The post From Weather Dwarfs to Kilometre-Scale Earth System Simulations appeared first on insideHPC.
|
by Rich Brueckner on (#3V8RJ)
In this video from ISC 2018, Takeo Hosomi from NEC describes how vector computing can accelerate Machine Learning workloads. "Machine learning is the key technology for data analytics and artificial intelligence. Recent progress in this field opens opportunities for a wide variety of new applications. Our department has been at the forefront of developments in such areas as deep learning, support vector machines and semantic analysis for over a decade. Many of our technologies have been integrated in innovative products and services of NEC."The post NEC Accelerates Machine Learning with Vector Computing appeared first on insideHPC.
|
by staff on (#3V8RM)
The DOE's Exascale Computing Project has initiated a new Co-Design Center called ExaLearn. Led by Principal Investigator Francis J. Alexander from Brookhaven National Laboratory, ExaLearn is a co-design center for Exascale Machine Learning (ML) Technologies. “Our multi-laboratory team is very excited to have the opportunity to tackle some of the most important challenges in machine learning at the exascale,” Alexander said. “There is, of course, already a considerable investment by the private sector in machine learning. However, there is still much more to be done in order to enable advances in the very important scientific and national security work we do at the Department of Energy. I am very happy to lead this effort on behalf of our collaborative team.” The post ECP Launches ExaLearn Co-Design Center appeared first on insideHPC.
|
by Rich Brueckner on (#3NNY6)
In this video from the Dell EMC HPC Community meeting, Alan Sill from Texas Tech University describes how DMTF and the Redfish project will ease system administration for HPC clusters. "DMTF’s Redfish is a standard API designed to deliver simple and secure management for converged, hybrid IT and the Software Defined Data Center (SDDC). An open industry standard specification and schema, Redfish specifies a RESTful interface and utilizes defined JSON payloads - usable by existing client applications and browser-based GUI."The post How DMTF and Redfish Ease System Administration appeared first on insideHPC.
|
by Rich Brueckner on (#3V68S)
David Bader from Georgia Tech gave this talk at PASC18. "Emerging real-world graph problems include: detecting and preventing disease in human populations; revealing community structure in large social networks; and improving the resilience of the electric power grid. Unlike traditional applications in computational science and engineering, solving these social problems at scale often raises new challenges because of the sparsity and lack of locality in the data, the need for research on scalable algorithms and development of frameworks for solving these real-world problems on high performance computers, and for improved models that capture the noise and bias inherent in the torrential data streams."The post Massive-Scale Analytics Applied to Real-World Problems appeared first on insideHPC.
|
by staff on (#3V68V)
Micron and Intel have announced that their partnership to develop 3D XPoint memory will be disbanded over the next 12 months. "The partnership will be disbanded once the second generation of the technology has been completed next year. Technology development beyond the second generation of 3D XPoint technology will be pursued independently by the two companies."The post Intel and Micron to Disband 3D XPoint Memory Partnership appeared first on insideHPC.
|
by Rich Brueckner on (#3V68X)
In this video from ISC 2018, Michael Wolfe from OpenACC.org describes how scientists can port their code to accelerated computing. "OpenACC is a user-driven directive-based performance-portable parallel programming model designed for scientists and engineers interested in porting their codes to a wide-variety of heterogeneous HPC hardware platforms and architectures with significantly less programming effort than required with a low-level model."The post Porting HPC Codes with Directives and OpenACC appeared first on insideHPC.
|
by Rich Brueckner on (#3V64C)
"AI, machine learning, is not a (traditional) HPC workload. However, it takes an HPC machine to do it. If you look at HPC, generally, you take a model or things like that, you turn it into an extraordinarily large amount of data, and then you go find some information for that data. Machine learning, on the other hand, takes an extraordinarily large amount of information and collapses it into an idea or a model."The post Why the World is Starting to look like a Giant HPC Cluster appeared first on insideHPC.
|
by Rich Brueckner on (#3V4FM)
Today the InfiniBand Trade Association (IBTA) announced that the latest TOP500 list reports that the world’s new fastest supercomputer, Oak Ridge National Laboratory’s Summit system, is accelerated by InfiniBand EDR. InfiniBand now powers the top three systems and four of the top five. The latest rankings underscore InfiniBand’s continued position as the interconnect of choice for the industry’s most demanding high performance computing (HPC) platforms. “As the makeup of the world’s fastest supercomputers evolves to include more non-HPC systems such as cloud and hyperscale, the IBTA remains confident in the InfiniBand Architecture’s flexibility to support the increasing variety of demanding deployments,” said Bill Lee, IBTA Marketing Working Group Co-Chair. “As evident in the latest TOP500 List, the reinforced position of InfiniBand among the most powerful HPC systems and the growing prominence of RoCE-capable non-HPC platforms demonstrate the technology’s unparalleled performance capabilities across a diverse set of applications.” The post InfiniBand Powers World’s Fastest Supercomputer appeared first on insideHPC.
|
by Rich Brueckner on (#3V46E)
Software development for future DOE Exascale machines is on track, according to a new report. While the first Exascale machine is not slated for delivery to Argonne until 2021, work continues on the monumental task of preparing applications to run 50x faster than is possible today. "Software Technology is a key focus area of the ECP and represents the key bridge between Exascale systems and the scientists developing applications that will run on those platforms."The post Report: Exascale Software Development Project on Track appeared first on insideHPC.
|
by staff on (#3V46G)
Today FPGA maker Xilinx announced that it has acquired DeePhi Technology, a Beijing-based privately held start-up with industry-leading capabilities in machine learning, specializing in deep compression, pruning, and system-level optimization for neural networks. "Xilinx will continue to invest in DeePhi Tech to advance our shared goal of deploying accelerated machine learning applications in the cloud as well as at the edge." The post Xilinx Acquires DeePhi Tech, a Machine Learning Startup based in China appeared first on insideHPC.
|
by staff on (#3V40X)
Today Penguin Computing (a subsidiary of SMART Global Holdings) announced that it will deliver the new national supercomputer to the Irish Centre for High-End Computing (ICHEC) at the National University of Ireland (NUI) Galway. “With 11 supercomputers in the Top500 list and a bare-metal HPC Cloud service since 2009, we knew we could rely on Penguin Computing’s HPC expertise to address our needs in an innovative way.” The post Penguin Computing to Deploy Supercomputer at ICHEC in Ireland appeared first on insideHPC.
|
by Rich Brueckner on (#3V40Z)
In this video from ISC 2018, Oliver Tennert from NEC Deutschland GmbH introduces the company's vector computing technologies for HPC and Machine Learning. "The NEC SX-Aurora TSUBASA is the newest in the line of NEC SX Vector Processors, with the world's highest memory bandwidth. The processor, implemented in a PCIe form factor, can be deployed in many flexible configurations together with a standard x86 cluster."The post NEC Accelerates HPC with Vector Computing at ISC 2018 appeared first on insideHPC.
|