by staff on (#4KBX0)
Over at the SC19 blog, Christine Baissac-Hayden writes that seven candidates have been chosen for the Women in IT Networking at SC program. "Since 2015, the Women in IT Networking at SC (WINS) program has supported talented early-to-mid career women who help build the ephemeral high-speed network that powers the annual SC Conference." The post Women in IT Networking team to help build SCinet at SC19 appeared first on insideHPC.
|
High-Performance Computing News Analysis | insideHPC
Link | https://insidehpc.com/ |
Feed | http://insidehpc.com/feed/ |
Updated | 2024-11-24 10:45 |
by staff on (#4KBQV)
"Micron's automata processor (AP) exploits massively parallel in-memory processing capability of DRAM for executing NFAs and hence, it can provide orders of magnitude performance improvement compared to traditional architectures. This paper presents a survey of techniques that propose architectural optimizations to AP and use it for accelerating problems from various application domains such as bioinformatics, data-mining, network security, natural language, high-energy physics, etc." The post New Paper Surveys Micron’s Automata Processor appeared first on insideHPC.
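The parallelism the AP exploits is easiest to see in software terms: executing an NFA means advancing an entire set of active states on every input symbol, and each state's transition is independent, which is exactly what the DRAM-based design performs in hardware. The following is an illustrative software sketch of that state-set update, not Micron's API; the example automaton (matching strings containing "ab") is hypothetical.

```python
# Software sketch of NFA execution: all currently active states advance
# together on each input symbol. The automata processor performs this
# state-set update in DRAM; here it is a plain Python loop.
# Toy NFA: accepts strings over {a, b} containing the substring "ab".

# transitions[state][symbol] -> set of possible next states
transitions = {
    0: {"a": {0, 1}, "b": {0}},   # state 0 loops; "a" also enters state 1
    1: {"b": {2}},                # "ab" completed -> accepting state 2
    2: {"a": {2}, "b": {2}},      # stay accepting
}
start, accepting = {0}, {2}

def nfa_accepts(s):
    active = set(start)
    for ch in s:
        # every active state steps independently -- the parallel part
        active = set().union(*(transitions[q].get(ch, set()) for q in active))
        if not active:
            return False
    return bool(active & accepting)
```

Because the whole state set advances in one step per symbol, the work per symbol is independent of how many states are active, which is the "orders of magnitude" advantage the survey refers to.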
|
by Rich Brueckner on (#4KBQX)
In this video from ISC 2019, Marc Lehrer from GigaIO describes the company's innovative HPC interconnect technology based on PCIe Gen 4. "For your most demanding workloads, you want time to solution. The GigaIO hyper-performance network breaks the constraints of old architectures, opening up new configuration possibilities that radically reduce system cost and protect your investment by enabling you to easily adopt new compute or business processes." The post GigaIO Steps Up with PCIe Gen 4 Interconnect for HPC appeared first on insideHPC.
|
by staff on (#4KBJJ)
SDSC has been awarded a five-year grant from the NSF valued at $10 million to deploy Expanse, a new supercomputer designed to advance research that is increasingly dependent upon heterogeneous and distributed resources. "As a standalone system, Expanse represents a substantial increase in the performance and throughput compared to our highly successful, NSF-funded Comet supercomputer. But with innovations in cloud integration and composable systems, as well as continued support for science gateways and distributed computing via the Open Science Grid, Expanse will allow researchers to push the boundaries of computing and answer questions previously not possible." The post NSF Funds $10 Million for ‘Expanse’ Supercomputer at SDSC appeared first on insideHPC.
|
by staff on (#4K95G)
Last week, Data Vortex Technologies hosted the Texas Women in HPC (TXWHPC) PechaKucha Summer Mixer at their Austin headquarters. A wide selection of topics was presented, ranging from quantum entanglement to the analysis of novels with software. "It’s very exciting to see a younger generation of women reaching out and engaging their peers in the field," says Data Vortex CEO and TXWHPC Co-Chair Carolyn Devany. "Our industry is one that drives forward so much of human development – it is pivotal that inclusive representation is made a priority." The post Data Vortex hosts Texas Women in HPC Summer Mixer appeared first on insideHPC.
|
by staff on (#4K8ZS)
At the DARPA ERI summit this week, Intel Labs director Rich Uhlig unveiled "Pohoiki Beach" – a 64-Loihi-chip neuromorphic system capable of simulating eight million neurons. Now available to the broader research community, Pohoiki Beach enables researchers to experiment with Intel’s brain-inspired research chip, Loihi, which applies principles found in biological brains to computer architectures. The post Intel Labs Unveils Pohoiki Beach 64-Chip Neuromorphic System appeared first on insideHPC.
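The announced figures imply a per-chip simulation capacity, which a quick bit of arithmetic makes explicit (both numbers are taken from the announcement; the per-chip result is simply derived from them):

```python
# Figures from the Pohoiki Beach announcement
chips = 64
total_neurons = 8_000_000

# Derived: neurons simulated per Loihi chip
neurons_per_chip = total_neurons // chips
assert neurons_per_chip == 125_000
```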
|
by Rich Brueckner on (#4K8ZV)
Alex Bouzari gave this talk at the DDN User Group meeting at ISC 2019. "In this rapidly changing landscape of HPC, DDN brings fresh innovation with the stability and support experience you need. Stay in front of your challenges with the most reliable long term partner in data at scale." The post DDN: Innovating to Create a Brighter Future for AI, HPC, and Big Data appeared first on insideHPC.
|
by staff on (#4K8TW)
The European Commission is planning to tackle the challenges of big data processing with the Evolve Project as part of the Horizon 2020 Research and Innovation program. Evolve aims to take concrete steps in bringing big data, high-performance computing, and cloud computing technology into a testbed that will increase researchers' ability to extract value from massive and demanding datasets. This could change the way big data applications are run by enabling researchers to process large amounts of data much faster. The post Evolve Project in the EU to Tackle Big Data Processing appeared first on insideHPC.
|
by Sarah Rubenoff on (#4K8TY)
Artificial Intelligence (AI) is rapidly becoming an essential business and research tool, providing valuable new insights into corporate data and delivering those insights with high velocity and accuracy. While these AI capabilities add significant value to our lives, they are the most demanding workloads in modern computing history. Download the new report from Bright Computing to explore how NVIDIA DGX and Bright Cluster Manager are designed to deliver more of what AI infrastructure users need. The post Exploring Bright Cluster Manager on NVIDIA DGX Systems appeared first on insideHPC.
|
by staff on (#4K6A5)
HPC clusters used for simulation and modeling today in government, academic, and corporate institutions drive scientific discovery and engineering innovation. This sponsored post from Bill Magro, chief technologist for HPC and HPC solutions lead at Intel Corp., explores how Intel, OEMs, systems integrators, and ISVs partner to reduce deployment barriers through pre-configured and validated HPC systems. The post Faster Path to HPC Enabled with Intel Select Solutions for Simulation & Modeling appeared first on insideHPC.
|
by staff on (#4K6M4)
In this special guest feature, Brent Gorda from Arm shares his impressions of ISC 2019 in Frankfurt. "From the perspective of Arm in HPC, it was an excellent event with several high-profile announcements that caught everyone’s attention. The Arm ecosystem was well represented with our partners visible on the show floor and around town." The post Brent Gorda from Arm looks back at ISC 2019 appeared first on insideHPC.
|
by Rich Brueckner on (#4K6M5)
In this video, Rich Brueckner from insideHPC tours the Tyan booth at ISC 2019. The company exhibited a full line of HPC, storage and cloud computing server platforms. "TYAN’s leading portfolio of HPC, storage and cloud server platforms is based on the 2nd gen Intel Xeon Scalable processors and is designed to help enterprises and data center service providers capture, process, and analyze big data faster and more powerfully than ever before." The post Video: Tyan Computing Steps Up with Intel Xeon Scalable Processors for HPC at ISC 2019 appeared first on insideHPC.
|
by staff on (#4K6M7)
In this special guest feature, Dan Olds from OrionX continues his first-hand coverage of the Student Cluster Competition at the recent ISC 2019 conference. "The ISC19 Student Cluster Competition in Frankfurt, Germany had one of the closest and most exciting finishes in cluster competition history. The overall winner was decided by just over two percentage points, and the margin between third and fourth place was less than a single percentage point." The post ISC 2019 Student Cluster Competition: Day-by-Day Drama, Winners Revealed! appeared first on insideHPC.
|
by staff on (#4K4VT)
The ISAV 2019 Workshop has issued its Call for Papers. Held in conjunction with SC19, the In Situ Infrastructures for Enabling Extreme-Scale Analysis and Visualization Workshop takes place Nov. 18, 2019 in Denver. "The workshop brings together researchers, developers and practitioners from industry, academia, and government laboratories developing, applying, and deploying in situ methods in extreme-scale, high performance computing. The goal is to present research findings, lessons learned, and insights related to developing and applying in situ methods and infrastructure across a range of science and engineering applications in HPC environments." The post Call for Papers: ISAV 2019 In Situ Infrastructures for Enabling Extreme-Scale Analysis and Visualization appeared first on insideHPC.
|
by Rich Brueckner on (#4K4VW)
Liu Yu from Inspur gave this talk at PASC19. "Ensuring performance of applications running on large-scale clusters is one of the primary focuses in HPC research. In this talk, we will show our strategies on performance analysis and optimization for applications in different fields of research using large-scale HPC clusters." The post Large-Scale Optimization Strategies for Typical HPC Workloads appeared first on insideHPC.
|
by staff on (#4K3HQ)
"As an innovator in hyper-efficient immersion cooling technology, Submer Technologies is partnering with Goonhilly and 2CRSI to provide HPC solutions with higher performance at less than half the energy consumption of traditional datacenters. A SmartPod immersion cooling system installed on-site will be running CPU- and GPU-intensive simulations using HPC servers provided by 2CRSI using the latest Nvidia and AMD chipsets to showcase the future of datacenters during a series of special events over the next several months." The post HPC Goes Green at Goonhilly Earth Station with Submer Immersive Cooling appeared first on insideHPC.
|
by staff on (#4K3FK)
Arm in Austin, Texas is seeking an Application Engineer in our Job of the Week. "The Applications Engineer is part of a focused professional services team within the Development Solutions Group that is responsible for supporting and enabling key HPC customers and partners in their development of HPC software, using the Arm HPC Tools across various Linux/UNIX HPC platforms (Arm and other architectures). In this position, you will sharpen your HPC application expertise working on a wide range of scientific fields and environments. You will gain an excellent knowledge of Arm’s HPC development tools, alongside a deep understanding of the Arm architecture and Arm IP roadmap. This position is located in the Austin Arm office. This role involves working with sensitive government customers and will involve up to 50% travel, primarily across the US." The post Job of the Week: Application Engineer for Arm in Austin appeared first on insideHPC.
|
by staff on (#4K1F6)
Deploying AI at scale can be an extreme challenge for data centers. To ease the process, NVIDIA has expanded its DGX-Ready Data Center Program to 19 validated partners around the world. "DGX-Ready Data Center partners help companies access modern data center facilities for their AI infrastructure. They offer world-class facilities to host DGX AI compute infrastructure, giving more organizations access to AI-ready data center facilities while saving on capital expenditures and keeping operational costs low." The post NVIDIA DGX-Ready Program Goes Global appeared first on insideHPC.
|
by staff on (#4K19H)
In this special guest feature from Scientific Computing World, Robert Roe interviews John Shalf from LBNL on the development of digital computing in the post-Moore’s law era. "In his keynote speech at the ISC conference in Frankfurt, Shalf described the lab-wide project at Berkeley and the DOE’s efforts to overcome these challenges through accelerating the design of new computing technologies." The post John Shalf from LBNL on Computing Challenges Beyond Moore’s Law appeared first on insideHPC.
|
by staff on (#4K19K)
In this Chip Chat podcast, Brandon Draeger from Cray describes the unique needs of HPC customers and how new Intel technologies in Cray systems are helping to deliver improved performance and scalability. "More and more, we are seeing the convergence of AI and HPC – users investigating how they can use AI to complement what they are already doing with their HPC workloads. This includes using machine and deep learning to analyze results from a simulation, or using AI techniques to steer where to take a simulation on the fly." The post Podcast: Tackling Massive Scientific Challenges with AI/HPC Convergence appeared first on insideHPC.
|
by Rich Brueckner on (#4K19N)
Sven Oehme gave this talk at the DDN User Group meeting at ISC 2019. "New AI and ML frameworks, advances in computational power (primarily driven by GPUs), and sophisticated, maturing use-cases are demanding more from the storage platform. Sven shares some of DDN’s recent innovations around performance and talks about how they translate into real-world customer value." The post Time to Value: Storage Performance in the Epoch of AI appeared first on insideHPC.
|
by staff on (#4JZAP)
Today the High Performance Computing for Energy Innovation program (HPC4EI) announced the nine public/private projects awarded more than $2 million from the DOE, with the aim of improving energy production, enhancing or developing new material properties, and reducing energy usage in manufacturing. "We see increasing interest by both industry and the DOE Applied Energy Offices to leverage the world-class computational capabilities of leading national laboratories to address the significant challenges in improving the efficiency of our national energy footprint," said HPC4EI Director Robin Miles. The post HPC4Energy Innovation Program Funds Manufacturing Research appeared first on insideHPC.
|
by staff on (#4JZ5A)
Today the Krell Institute announced that Rebecca Hartman-Baker, a computer scientist at the Department of Energy’s (DOE’s) National Energy Research Scientific Computing Center (NERSC), is the inaugural recipient of the James Corones Award in Leadership, Community Building and Communication. Hartman-Baker leads the User Engagement Group at NERSC, a DOE Office of Science user facility based at Lawrence Berkeley National Laboratory. A selection committee representing the DOE national laboratories, academia and Krell cited Hartman-Baker’s "broad impact on HPC training; her hands-on approach to building a diverse and inclusive HPC user community, particularly among students and early-career computational scientists; and her mastery in communicating the excitement and potential of computational science." The post NERSC Computer Scientist wins First Corones Award appeared first on insideHPC.
|
by Rich Brueckner on (#4JYSF)
In this podcast, the RadioFree team discusses the 4-way competition for exascale computing between the US, China, Japan, and Europe. "The European effort is targeting two pre-exascale installations in the coming months, and two actual exascale installations in the 2022-2023 timeframe, at least one of which will be based on European technology." The post Podcast: ExaScale is a 4-way Competition appeared first on insideHPC.
|
by Rich Brueckner on (#4JYSG)
In this video, Bob Fletcher from Verne Global describes advantages the HPC cloud provider offers through the NVIDIA DGX-Ready Data Center program. "Enterprises and research organizations seeking to leverage the NVIDIA DGX-2 System – the world’s most powerful AI system – now have the option to deploy their AI infrastructure using a cost-effective Op-Ex solution in Verne Global’s HPC-optimized campus in Iceland, which utilizes 100 percent renewable energy and relies on one of the world’s most reliable and affordable power grids." The post Video: Verne Global joins NVIDIA DGX-Ready Program as HPC & AI Colocation Partner appeared first on insideHPC.
|
by staff on (#4JYMK)
The rapid growth in popularity of Python as a programming language for mathematics, science, and engineering applications has been amazing. Not only is it easy to learn, but there is a vast treasure of packaged open source libraries out there targeted at just about every computational domain imaginable. This sponsored post from Intel highlights how today's enterprises can achieve high levels of parallelism in large scale Python applications using the Intel Distribution for Python with Numba. The post Achieving Parallelism in Intel Distribution for Python with Numba appeared first on insideHPC.
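The pattern the report centers on is Numba's parallel JIT compilation: decorating a numeric loop with `@njit(parallel=True)` so that `prange` iterations are compiled and spread across CPU cores. A minimal sketch of that pattern follows; the kernel is an illustrative example rather than code from the report, and the fallback shim exists only so the snippet still runs where Numba is not installed.

```python
import numpy as np

try:
    from numba import njit, prange  # bundled with the Intel Distribution for Python
except ImportError:
    # Fallback shim (assumption for portability): without Numba, prange
    # degrades to a serial range and njit to a no-op decorator.
    prange = range
    def njit(*args, **kwargs):
        def wrap(func):
            return func
        return wrap

@njit(parallel=True)
def sum_of_squares(x):
    # Numba compiles this loop and parallelizes prange iterations;
    # the scalar += is recognized as a reduction.
    total = 0.0
    for i in prange(x.shape[0]):
        total += x[i] * x[i]
    return total

x = np.arange(1_000_000, dtype=np.float64)
result = sum_of_squares(x)
```

The same source runs serially under plain CPython and in parallel once Numba compiles it, which is the low-effort path to parallelism the post describes.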
|
by staff on (#4JYZ0)
Today Rigetti Computing announced it has acquired QxBranch, a quantum computing and data analytics software startup. "Our mission is to deliver the power of quantum computing to our customers and help them solve difficult and valuable problems," said Chad Rigetti, founder and CEO of Rigetti Computing. "We believe we have the leading hardware platform, and QxBranch is the leader at the application layer. Together we can shorten the timeline to quantum advantage and open up new opportunities for our customers." The post Rigetti Computing acquires QxBranch for Quantum-powered Analytics appeared first on insideHPC.
|
by staff on (#4JWX8)
Today the MLPerf effort released results for MLPerf Training v0.6, the second round of results from their machine learning training performance benchmark suite. MLPerf is a consortium of over 40 companies and researchers from leading universities, and the MLPerf benchmark suites are rapidly becoming the industry standard for measuring machine learning performance. "We are creating a common yardstick for training and inference performance," said Peter Mattson, MLPerf General Chair. The post Google Cloud and NVIDIA Set New Training Records on MLPerf v0.6 Benchmark appeared first on insideHPC.
|
by staff on (#4JWX9)
Professor David Bader will lead the new Institute for Data Science at the New Jersey Institute of Technology. Focused on cutting-edge interdisciplinary research and development in all areas pertinent to digital data, the institute will bring existing research centers in big data, medical informatics and cybersecurity together to conduct both basic and applied research. "The institute will bring together scientists, engineers and users to develop data-driven technologies and apply them to solve fundamental and real-world problems. Beyond academic research, the institute will interact closely with the outside world to identify and solve important problems in the modern data-driven economy." The post David Bader to Lead New Institute for Data Science at NJIT appeared first on insideHPC.
|
by Rich Brueckner on (#4JWJW)
In this video from PASC19 in Zurich, Benedikt Riedel from the University of Wisconsin describes the challenges researchers face when it comes to updating their scientific codes for new HPC architectures. After that, he describes his work on the IceCube Neutrino Observatory. The post The Challenges of Updating Scientific Codes for New HPC Architectures appeared first on insideHPC.
|
by staff on (#4JWJY)
National lab researchers from Lawrence Livermore and Berkeley Lab are using supercomputers to quantify earthquake hazard and risk across the Bay Area. Their work is focused on the impact of high-frequency ground motion on thousands of representative different-sized buildings spread out across the California region. "While working closely with the NERSC operations team in a simulation last week, we used essentially the entire Cori machine – 8,192 nodes, and 524,288 cores – to execute an unprecedented 5-hertz run of the entire San Francisco Bay Area region for a magnitude 7 Hayward Fault earthquake." The post Supercomputing Potential Impacts of a Major Quake by Building Location and Size appeared first on insideHPC.
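The quoted machine size is internally consistent, as a quick check on the two figures from the quote shows (8,192 nodes and 524,288 cores imply 64 cores used on each node):

```python
# Figures quoted for the near-full-machine Cori run
nodes = 8_192
total_cores = 524_288

# Derived: cores used per node in the run
cores_per_node = total_cores // nodes
assert cores_per_node == 64
```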
|
by staff on (#4JWDF)
In this video from ISC 2019, Dr. Erich Focht from NEC Deutschland GmbH describes how the company is embracing open source frameworks for the SX-Aurora TSUBASA Vector Supercomputer. "Until now, with the existing server processing capabilities, developing complex models on graphical information for AI has consumed significant time and host processor cycles. NEC Laboratories has developed the open-source Frovedis framework over the last 10 years, initially for parallel processing in supercomputers. Now, its efficiencies have been brought to the scalable SX-Aurora vector processor." The post NEC Embraces Open Source Frameworks for SX-Aurora Vector Computing appeared first on insideHPC.
|
by staff on (#4JTEB)
The Deep Learning (DL) on Supercomputers workshop has issued its Call for Submissions. Now in its third year, the workshop will be held with the SC19 conference in Denver on Nov. 17. "This third workshop in the Deep Learning on Supercomputers series provides a forum for practitioners working on any and all aspects of DL for scientific research in the High Performance Computing (HPC) context to present their latest research results. The general theme of this workshop series is the intersection of DL and HPC. Its scope encompasses application development in scientific scenarios using HPC platforms; DL methods applied to numerical simulation; fundamental algorithms, enhanced procedures, and software development methods to enable scalable training and inference; hardware changes with impact on future supercomputer design; and machine deployment, performance evaluation, and reproducibility practices for DL applications, with an emphasis on scientific usage." The post Call for Submissions: Deep Learning on Supercomputers Workshop at SC19 appeared first on insideHPC.
|
by staff on (#4JT8B)
A new startup called IQM in Finland aims to drive disruptive advancements in quantum computing. A spinout from Aalto University and the VTT Technical Research Centre of Finland, IQM is developing high-speed quantum processors to reduce the error rates currently limiting quantum computers. "IQM's fault-tolerant quantum processor architecture will open the door for more powerful computationally intensive tasks. The talented team is set to enable a unique quantum cloud offering and ignite a stronger quantum software eco-system in Europe." The post IQM Startup in Finland Developing Fault-tolerant Quantum Processor appeared first on insideHPC.
|
by staff on (#4JT3J)
Today the Globus research data management service announced the largest single file transfer in its history: a team led by Argonne National Laboratory scientists moved 2.9 petabytes of data as part of a research project involving three of the largest cosmological simulations to date. "With exascale imminent, AI on the rise, HPC systems proliferating, and research teams more distributed than ever, fast, secure, reliable data movement and management are now more important than ever," said Ian Foster. The post Argonne Team Breaks Record with 2.9 Petabytes Globus Data Transfer appeared first on insideHPC.
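To put 2.9 petabytes in context, a back-of-the-envelope transfer-time estimate helps. The announcement gives no sustained rate, so the 100 Gb/s figure below is purely a hypothetical assumption for illustration:

```python
# Hypothetical transfer-time estimate; the 100 Gb/s rate is assumed,
# not taken from the announcement.
bytes_moved = 2.9e15            # 2.9 PB, using decimal petabytes
assumed_rate_bps = 100e9        # assumed sustained 100 gigabits per second

seconds = bytes_moved * 8 / assumed_rate_bps
days = seconds / 86_400         # roughly 2.7 days at that assumed rate
```

Even at a fully saturated 100 Gb/s link, moving the dataset would take days of continuous transfer, which is why automated, restartable transfer services matter at this scale.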
|
by Rich Brueckner on (#4JT3M)
In this video from ISC 2019, Thomas Lippert from the Jülich Supercomputing Centre describes how modular supercomputing is paving the way forward for HPC in Europe. "The Modular Supercomputer Architecture (MSA) is an innovative approach to build High-Performance Computing (HPC) and High-Performance Data Analytics (HPDA) systems by coupling various compute modules, following a building-block principle. Each module is tailored to the needs of a specific group of applications, and all modules together behave as a single machine." The post Modular Supercomputing Moves Forward in Europe appeared first on insideHPC.
|
by staff on (#4JSY2)
NSF is funding $10 million for a new supercomputer at the Pittsburgh Supercomputing Center (PSC), a joint research center of Carnegie Mellon University and the University of Pittsburgh. "We designed Bridges-2 to drive discoveries that will come from the rapid evolution of research, which increasingly needs new, scalable ways for combining large, complex data with high-performance simulation and modeling." The post HPE to Build Bridges-2 Supercomputer at PSC appeared first on insideHPC.
|
by staff on (#4JQWV)
Today the Jülich Supercomputing Centre announced it is partnering with Google in the field of quantum computing research. The partnership will include joint research and expert training in the fields of quantum technologies and quantum algorithms, and the mutual use of quantum hardware. "The German research center will operate and make publicly accessible a European quantum computer with 50 to 100 superconducting qubits, to be developed within the EU's Quantum Flagship Program, a large-scale initiative in the field of quantum technologies funded at the €1 billion level on a 10-year timescale." The post Jülich Supercomputing Centre Announces Quantum Computing Research Partnership with Google appeared first on insideHPC.
|
by Rich Brueckner on (#4JQQN)
John West from TACC is the newly elected Chair of SIGHPC. SIGHPC is the first international group within a major professional society that is devoted exclusively to the needs of students, faculty, researchers, and practitioners in high performance computing. SIGHPC’s mission is to help spread the use of HPC, help raise the standards of the profession, […] The post John West from TACC Elected Chair of SIGHPC appeared first on insideHPC.
|
by staff on (#4JQQQ)
The HPC-AI Advisory Council and Dan Olds have just posted the first-ever Student Cluster Competition Leadership List. "The list is a ranking of every institution that has ever competed in a cluster competition. The teams are ranked by the number of times they’ve participated and the awards they’ve earned throughout the years. It covers every cluster competition, including the ISC competition in Europe, the SC competition in the US, and the Asian competition in China." The post Announcing the Student Cluster Competition Leadership List appeared first on insideHPC.
|
by Rich Brueckner on (#4JQJB)
In this slidecast, Torsten Hoefler from ETH Zurich presents: Data-Centric Parallel Programming. "To maintain performance portability in the future, it is imperative to decouple architecture-specific programming paradigms from the underlying scientific computations. We present the Stateful DataFlow multiGraph (SDFG), a data-centric intermediate representation that enables separating code definition from its optimization." The post Video: Data-Centric Parallel Programming appeared first on insideHPC.
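The separation the abstract describes, keeping the code's definition apart from the optimizations applied to it, can be illustrated with a toy data-centric representation. This is an illustrative sketch only, not the actual SDFG or DaCe API: the program is a graph of operation nodes, and an optimization pass rewrites the graph while the user's definition stays untouched.

```python
# Toy data-centric program: a list of (output, op, inputs) nodes.
# Optimizations rewrite this graph; the original definition is never edited.

program = [
    ("t", "add", ("x", "one")),  # t = x + 1
    ("y", "mul", ("t", "two")),  # y = t * 2
]

OPS = {
    "add": lambda a, b: a + b,
    "mul": lambda a, b: a * b,
    "addmul": lambda a, b, c: (a + b) * c,  # fused node introduced by the optimizer
}

def evaluate(prog, env):
    """Interpret the dataflow graph with the given input values."""
    vals = dict(env)
    for out, op, ins in prog:
        vals[out] = OPS[op](*(vals[i] for i in ins))
    return vals

def fuse_add_mul(prog):
    """Graph transformation: an add node feeding a mul node becomes one addmul node."""
    fused, i = [], 0
    while i < len(prog):
        if (i + 1 < len(prog)
                and prog[i][1] == "add" and prog[i + 1][1] == "mul"
                and prog[i][0] == prog[i + 1][2][0]):
            (_, _, (a, b)), (y, _, (_, c)) = prog[i], prog[i + 1]
            fused.append((y, "addmul", (a, b, c)))
            i += 2
        else:
            fused.append(prog[i])
            i += 1
    return fused

env = {"x": 5, "one": 1, "two": 2}
# The optimized graph computes the same result as the original definition.
assert evaluate(program, env)["y"] == evaluate(fuse_add_mul(program), env)["y"] == 12
```

The point of the design is that `fuse_add_mul` (an architecture- or cost-driven transformation) can be swapped or composed freely without touching `program`, which is what makes the representation performance-portable.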
|
by staff on (#4JQDX)
A growing number of commercial businesses are implementing HPC solutions to derive actionable business insights, to run higher performance applications and to gain a competitive advantage. Complexities abound as HPC becomes more pervasive across industries and markets, especially as companies adopt, scale, and optimize both HPC and Artificial Intelligence (AI) workloads. Bill Mannel, VP & GM HPC & AI Solutions Segment at Hewlett Packard Enterprise, walks readers through three strategies to ensure HPC and AI success. The post 3 Ways to Unlock the Power of HPC and AI appeared first on insideHPC.
|
by staff on (#4JNZ7)
In this special guest feature, Dan Olds from OrionX shares first-hand coverage of the Student Cluster Competition at the recent ISC 2019 conference. "The benchmark results from the recently concluded ISC19 Student Cluster Competition have been compiled, sliced, diced, and analyzed senseless. As you cluster comp fanatics know, this year the student teams were required to run LINPACK, HPCG, and HPCC as part of the ISC19 competition." The post ISC19 Student Cluster Competition: LINs Packed & Conjugates Gradient-ed appeared first on insideHPC.
|
by Rich Brueckner on (#4JNW2)
Keren Bergman from Columbia University gave this talk at PASC19. "Data movement, dominated by energy costs and limited ‘chip-escape’ bandwidth densities, is a key physical layer roadblock to these systems’ scalability. Integrated silicon photonics with deeply embedded optical connectivity is on the cusp of enabling revolutionary data movement and extreme performance capabilities." The post Flexibly Scalable High Performance Architectures with Embedded Photonics appeared first on insideHPC.
|
by staff on (#4JMBR)
"The recently announced Ready Solution, Deep Learning with Intel, is powered by Dell EMC servers and networking, new 2nd Generation Intel Xeon Scalable processors, and Nauta, Intel's recently released open source platform for distributed deep learning training. Phil tells us how this new solution can be a game changer for many different use cases and help enterprises get a leg up when deploying AI." The post Podcast: Dell EMC Ready Solutions for AI with Intel Nauta appeared first on insideHPC.
|
by Rich Brueckner on (#4JM6S)
Cray in Minnesota is seeking a Software Engineer for Scientific and Math Libraries in our Job of the Week. "The Cray Scientific and Math Libraries group has an opening for a motivated and skilled low-level software engineer to design and develop state-of-the-art numerical libraries for Cray's current and future supercomputers. The group develops high-performance linear algebra and Fourier transform libraries for Cray systems." The post Job of the Week: Software Engineer for Scientific and Math Libraries at Cray appeared first on insideHPC.
|
by staff on (#4JJX7)
Today Atos announced the winners of the 2019 Joseph Fourier Award. The award aims to accelerate research and innovation by rewarding projects in the fields of numerical simulation and Artificial Intelligence (AI). "It’s really exciting to see such high-quality projects in the fields of HPC and AI and I’d like to congratulate all the scientists and researchers for their hard work and innovative ideas," said Sophie Proust, Group CTO at Atos. "At Atos we’re proud to be supporting innovations that will lead to tangible industrial applications." The post Atos announces winners of 2019 Joseph Fourier Award appeared first on insideHPC.
|
by Rich Brueckner on (#4JJN4)
In this podcast, the Radio Free HPC team tackles the HPE-Cray acquisition as it reviews the companies' recent moves, strengths, and market conditions. Hint: part of it has to do with the emergence of AI as a must-do enterprise app and the increasing commonality between supercomputers and enterprise servers. The post Podcast: Why is HPE buying Cray? appeared first on insideHPC.
|
by Rich Brueckner on (#4JJG4)
In this video, Dr. Michela Taufer from the University of Tennessee recaps PASC19 and looks ahead to SC19. "SC is still a place to share new and futuristic ideas, but our plenary sessions, panels, papers and conversations aren’t just about the future anymore. They’re about what’s happening in the world of HPC today. Because … HPC is Now. That’s the theme of SC19 in Denver next November, and I believe it perfectly captures the state of this exciting field." The post Video: Dr. Michela Taufer sets the stage for SC19 with ‘HPC is Now’ appeared first on insideHPC.
|
by Rich Brueckner on (#4JGKZ)
In this video from ISC 2019, Shintaro Momose, Ph.D. from NEC describes the company's SX-Aurora Vector Engine. "These days, vector functions are added to scalar supercomputers to improve performance in most cases. That means that improving vector performance is the most important way to achieve high effective performance, even with scalar processors. In that respect, it could be said that vector processors are ahead of the trend." The post NEC Steps up with SX-Aurora Vector Engine for HPC appeared first on insideHPC.
|