by staff on (#33GY3)
Today Penguin Computing announced strategic support for the field of artificial intelligence through availability of its servers based on the highly advanced NVIDIA Tesla V100 GPU accelerator, powered by the NVIDIA Volta GPU architecture. "Deep learning, machine learning and artificial intelligence are vital tools for addressing the world's most complex challenges and improving many aspects of our lives," said William Wu, Director of Product Management, Penguin Computing. "Our breadth of products covers configurations that accelerate various demanding workloads – maximizing performance, minimizing P2P latency of multiple GPUs and providing minimal power consumption through creative cooling solutions." The post Penguin Computing Launches NVIDIA Tesla V100-based Servers appeared first on insideHPC.
|
High-Performance Computing News Analysis | insideHPC
Link | https://insidehpc.com/
Feed | http://insidehpc.com/feed/
Updated | 2024-11-25 07:45
by staff on (#33GTC)
Today AMAX.AI launched the [SMART]Rack AI Machine Learning cluster, an all-inclusive rackscale platform maximized for performance, featuring up to 96x NVIDIA Tesla P40, P100 or V100 GPU cards and providing well over 1 PetaFLOP of compute power per rack. "The [SMART]Rack AI is revolutionary to Deep Learning data centers," said Dr. Rene Meyer, VP of Technology, AMAX, "because it not only provides the most powerful application-based computing power, but it expedites DL model training cycles by improving efficiency and manageability through integrated management, network, battery and cooling all in one enclosure." The post AMAX.AI Unveils [SMART]Rack Machine Learning Cluster appeared first on insideHPC.
|
by staff on (#33GQ8)
Today IBM announced the Integrated Analytics System, a new unified data system designed to give users fast, easy access to advanced data science capabilities and the ability to work with their data across private, public or hybrid cloud environments. "Today's announcement is a continuation of our aggressive strategy to make data science and machine learning more accessible than ever before, and to help organizations like AMC begin harvesting their massive data volumes – across infrastructures – for insight and intelligence," said Rob Thomas, General Manager, IBM Analytics. The post IBM Moves Data Science Forward with Integrated Analytics System appeared first on insideHPC.
|
by Rich Brueckner on (#33GKW)
Adam Moody from LLNL presented this talk at the MVAPICH User Group. "High-performance computing is being applied to solve the world's most daunting problems, including researching climate change, studying fusion physics, and curing cancer. MPI is a key component in this work, and as such, the MVAPICH team plays a critical role in these efforts. In this talk, I will discuss recent science that MVAPICH has enabled and describe future research that is planned. I will detail how the MVAPICH team has responded to address past problems and list the requirements that future work will demand." The post Video: How MVAPICH & MPI Power Scientific Research appeared first on insideHPC.
|
by Richard Friedman on (#33GFP)
This year, OpenMP*, the widely used API for shared memory parallelism supported in many C/C++ and Fortran compilers, turns 20. OpenMP is a great example of how hardware and software vendors, researchers, and academia, volunteering to work together, can successfully design a specification that benefits the entire developer community. The post OpenMP at 20 Moving Forward to 5.0 appeared first on insideHPC.
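OpenMP's longevity owes much to how little code it takes to parallelize an existing serial loop. As a minimal sketch (illustrative only, not from the article; the harmonic-sum example is our own), one pragma spreads the loop across the threads of a shared-memory machine:

```c
#include <omp.h>
#include <stdio.h>

int main(void) {
    const int n = 1000000;
    double sum = 0.0;

    /* One directive parallelizes the serial loop; the reduction
       clause gives each thread a private partial sum and combines
       them when the loop ends. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < n; i++) {
        sum += 1.0 / (i + 1.0);
    }

    printf("sum = %f (max threads = %d)\n", sum, omp_get_max_threads());
    return 0;
}
```

Built with a flag such as `gcc -fopenmp`, the same source still compiles and runs serially when the pragma is ignored – the incremental, compiler-directive design that helped the specification spread through the developer community.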
|
by Rich Brueckner on (#33DD1)
In this video from the 2017 CSGF Review Meeting, Barbara Helland from the Department of Energy presents: With Exascale Looming, this is an Exciting Time for Computational Science. "Helland was also a presenter this week at the ASCR Advisory Committee Meeting, where she disclosed that the Aurora 21 supercomputer coming to Argonne in 2021 will indeed be an exascale machine." The post With Exascale Looming, this is an Exciting Time for Computational Science appeared first on insideHPC.
|
by Rich Brueckner on (#33D6S)
Today NVIDIA and its systems partners Dell EMC, Hewlett Packard Enterprise, IBM and Supermicro unveiled more than 10 servers featuring NVIDIA Volta architecture-based Tesla V100 GPU accelerators -- the world's most advanced GPUs for AI and other compute-intensive workloads. "Volta systems built by our partners will ensure that enterprises around the world can access the technology they need to accelerate their AI research and deliver powerful new AI products and services," said Ian Buck, vice president and general manager of Accelerated Computing at NVIDIA. The post Server Vendors Announce NVIDIA Volta Systems for Accelerated AI appeared first on insideHPC.
|
by Rich Brueckner on (#33D6T)
Today EPSRC in the UK announced the 10 winners of the recent ARCHER Best-Use Travel Competition. The competition aimed to identify the best scientific use of ARCHER, the UK's national supercomputing facility, within the arena of the engineering and physical sciences. "As we see the increasing need for high performance computing to tackle today's complex scientific questions, we recognize the need to encourage today's young researchers to bring their skills to the world," said Dr. Eddie Clarke, EPSRC's Contract Manager for ARCHER. "The winners of these awards have shown ability, enthusiasm and real skill in their research and these prizes will help them work together with partners overseas to benefit science in the UK." The post EPSRC Recognizes Young Scientists using ARCHER Supercomputing Facility appeared first on insideHPC.
|
by Rich Brueckner on (#33D3E)
In this special guest feature, Brad McCredie from IBM writes that the launch of Volta GPUs from NVIDIA heralds a new era of AI. "We're excited about the launch of NVIDIA's Volta GPU accelerators. Together with the NVIDIA NVLink 'information superhighway' at the core of our IBM Power Systems, it provides what we believe to be the closest thing to an unbounded platform for those working in machine learning and deep learning and those dealing with very large data sets." The post No speed limit on NVIDIA Volta with rise of AI appeared first on insideHPC.
|
by Rich Brueckner on (#33D0F)
Vineeth Ram from HPE gave this talk at the HPC User Forum in Milwaukee. "Organizations across all sectors are putting Big Data to work. They are optimizing their IT operations and enhancing the way they communicate, learn, and grow their businesses in order to harness the full power of artificial intelligence (AI). Backed by high performance computing technologies, AI is revolutionizing the world as we know it—from web searches, digital assistants, and translations; to diagnosing and treating diseases; to powering breakthroughs in agriculture, manufacturing, and electronic design automation." The post Accelerate Innovation and Insights with HPC and AI appeared first on insideHPC.
|
by staff on (#33A2S)
Today Panasas announced that the Science and Technology Facilities Council's (STFC) Rutherford Appleton Laboratory (RAL) in the UK has expanded its JASMIN super-data-cluster with an additional 1.6 petabytes of Panasas ActiveStor storage, bringing total storage capacity to 20PB. The expansion resulted in the largest realm of Panasas storage worldwide, which is managed by a single systems administrator. Thousands of users worldwide find, manipulate and analyze data held on JASMIN, which processes an average of 1-3PB of data every day. The post Panasas Upgrades JASMIN Super-Data-Cluster Facility to 20PB appeared first on insideHPC.
|
by staff on (#339Z7)
Today Bright Computing announced that Bright Cluster Manager 8.0 now integrates with IBM Power Systems. "The integration of Bright Cluster Manager 8.0 with IBM Power Systems has created an important new option for users running complex workloads involving high-performance data analytics," said Sumit Gupta, VP, HPC, AI & Machine Learning, IBM Cognitive Systems. "Bright Computing's emphasis on ease-of-use for Linux-based clusters within public, private and hybrid cloud environments speaks to its understanding that while data is becoming more complicated, the management of its workloads must remain accessible to a changing workforce." The post Bright Computing Announces Integration with IBM Power Systems appeared first on insideHPC.
|
by Rich Brueckner on (#339SD)
In this RichReport slidecast, James Coomer from DDN presents an overview of the Infinite Memory Engine (IME). "IME is a scale-out, flash-native, software-defined storage cache that streamlines the data path for application IO. IME interfaces directly to applications and secures IO via a data path that eliminates file system bottlenecks. With IME, architects can realize true flash-cache economics with a storage architecture that separates capacity from performance." The post Infinite Memory Engine: HPC in the FLASH Era appeared first on insideHPC.
|
by Rich Brueckner on (#339NT)
Today at GTC China, NVIDIA made a series of announcements around deep learning and GPU-accelerated computing for hyperscale datacenters. "Demand is surging for technology that can accelerate the delivery of AI services of all kinds. And NVIDIA's deep learning platform — which the company updated Tuesday with new inferencing software — promises to be the fastest, most efficient way to deliver these services." The post NVIDIA Brings Deep Learning to Hyperscale at GTC China appeared first on insideHPC.
|
by staff on (#339B7)
Cloud adoption is accelerating in the blink of an eye, easing the burden of managing data-rich workloads for enterprises big and small. Yet common myths and misconceptions about the hybrid cloud are delaying enterprises from reaping the benefits. "In this article, we will debunk five of the most commonly believed myths that keep companies from strengthening their infrastructure with a hybrid approach." The post Common Myths Stalling Organizations From Cloud Adoption appeared first on insideHPC.
|
by staff on (#336CG)
Today Nimbix announced the immediate availability of a new high-performance storage platform in the Nimbix Cloud specifically designed for the demands of artificial intelligence and deep learning applications and workflows. "As enterprises, researchers and startups begin to invest in GPU-accelerated artificial intelligence technologies and workflows, they are realizing that data is a big part of this challenge," said Steve Hebert, CEO of Nimbix. "With the new storage platform, we are helping our customers achieve performance that breaks through the bottlenecks of commodity or traditional platforms and does so with a turnkey deep learning cloud offering." The post Nimbix Launches High Speed Cloud Storage for AI and Deep Learning appeared first on insideHPC.
|
by staff on (#3374W)
Today Cray announced it has completed the previously announced transaction and strategic partnership with Seagate centered around the addition of the ClusterStor high-performance storage business. "As a pioneer in providing large-scale storage systems for supercomputers, it's fitting that Cray will take over the ClusterStor line." The post Cray Assimilates ClusterStor from Seagate appeared first on insideHPC.
|
by staff on (#336Z8)
The CENATE Proving Ground for HPC Technologies at PNNL has named Kevin Barker as its new Director. "The goal of CENATE is to evaluate innovative and transformational technologies that will enable future DOE leadership class computing systems to accelerate scientific discovery," said PNNL's Laboratory Director Steven Ashby. "We will partner with major computing companies and leading researchers to co-design and test the leading-edge components and systems that will ultimately be used in future supercomputing platforms." The post Kevin Barker to Lead CENATE Proving Ground for HPC Technologies appeared first on insideHPC.
|
by staff on (#336CE)
Scalability of scientific applications is a major focus of the Department of Energy's Exascale Computing Project (ECP), and in that vein, a project known as IDEAS-ECP, or Interoperable Design of Extreme-scale Application Software, is also being scaled up to deliver insight on software development to the research community. The post IDEAS Program Fostering Better Software Development for Exascale appeared first on insideHPC.
|
by staff on (#3361M)
Computing Pioneer Gordon Bell will share insights and inspiration at SC17 in Denver. "We are honored to have the legendary Gordon Bell speak at SC17," said Conference Chair Bernd Mohr, from Germany's Jülich Supercomputing Centre. "The prize he established has helped foster the rapid adoption of new paradigms, given recognition for specialized hardware, as well as rewarded the winners' tremendous efforts and creativity - especially in maximizing the application of the ever-increasing capabilities of parallel computing systems. It has been a beacon for discovery and making the 'might be possible' an actual reality." The post Computing Pioneer Gordon Bell to Present at SC17 appeared first on insideHPC.
|
by Rich Brueckner on (#3361P)
UC Berkeley professor Kathy Yelick presented this talk at the 2017 ACM Europe Conference. "Yelick's keynote lecture focused on the exciting opportunities that High Performance Computing presents, the need for algorithms and mathematics to advance along with system performance, and how the variety of workloads will stress the different aspects of exascale hardware and software systems." The post Kathy Yelick Presents: Breakthrough Science at the Exascale appeared first on insideHPC.
|
by Rich Brueckner on (#335YE)
In this podcast, the Radio Free HPC team looks at China's massive upgrade of the Tianhe-2A supercomputer to 95 Petaflops peak performance. "As detailed in a new 21-page report by Jack Dongarra from the University of Tennessee, the upgrade should nearly double the performance of the system, which is currently ranked at #2 on TOP500." The post Radio Free HPC Looks at China's 95 Petaflop Tianhe-2A Supercomputer appeared first on insideHPC.
|
by staff on (#33368)
"Understanding and predicting material performance under extreme environments is a foundational capability at Los Alamos,†said David Teter, Materials Science and Technology division leader at Los Alamos. “We are well suited to apply our extensive materials capabilities and our high-performance computing resources to industrial challenges in extreme environment materials, as this program will better help U.S. industry compete in a global market.â€The post LANL Steps Up to HPC for Materials Program appeared first on insideHPC.
|
by Rich Brueckner on (#3336A)
Abhinav Vishnu from PNNL gave this talk at the MVAPICH User Group. "Deep Learning (DL) is ubiquitous. Yet leveraging distributed memory systems for DL algorithms is incredibly hard. In this talk, we will present approaches to bridge this critical gap. Our results will include validation on several US supercomputer sites such as Berkeley's NERSC, Oak Ridge Leadership Class Facility, and PNNL Institutional Computing." The post Scaling Deep Learning Algorithms on Extreme Scale Architectures appeared first on insideHPC.
|
by staff on (#330S2)
Deep Learning was recently scaled to obtain 15PF performance on the Cori supercomputer at NERSC. Cori Phase II features over 9600 KNL processors. It can significantly impact how we do computing and what computing can do for us. In this talk, I will discuss some of the application-level opportunities and system-level challenges that lie at the heart of this intersection of traditional high performance computing with emerging data-intensive computing. The post SC17 Session Preview: Dr. Pradeep Dubey on AI & The Virtuous Cycle of Compute appeared first on insideHPC.
|
by staff on (#330PT)
Researchers at the University of Minnesota are using Argonne supercomputers to look for new ways to reduce the noise produced by jet engines. Among the loudest sources of human-made noise that exist, jet engines can produce sound in excess of 130 decibels. "The University of Minnesota team developed a new method based on input-output analysis that can predict both the downstream noise and the sideline noise. While it was thought that the sideline noise was random, the input-output modes show coherent structure in the jet that is connected to the sideline noise, such that it can be predicted and controlled." The post Supercomputing Jet Noise for a Quieter World appeared first on insideHPC.
|
by staff on (#32XSV)
Gabriel Broner from Rescale gave this talk at the HPC User Forum. "HPC has transitioned from unique and proprietary designs, to clusters of many dual-CPU Intel nodes. Vendors' products are now differentiated more by packaging, density, and cooling than the uniqueness of the architecture. In parallel, cloud computing has gained momentum in the larger IT industry. Intel is now selling more processors to run in the cloud than in company-owned facilities, and cloud is starting to drive innovation and efficiencies at a rate faster than on premises." The post Video: Will HPC Move to the Cloud? appeared first on insideHPC.
|
by staff on (#32XP7)
Is your organization looking to hire HPC talent? Be sure to book a table at the SC17 Student/Post Doc Job Fair. "This face-to-face event will be held from 10 a.m. to 3 p.m. Wednesday, Nov. 15, in rooms 702-704-706 in the Colorado Convention Center. The Student/Postdoc Job Fair is open to all students and postdocs attending SC17, giving them an opportunity to meet with potential employers." The post Hiring? Sign up for the SC17 Student/Post Doc Job Fair appeared first on insideHPC.
|
by Rich Brueckner on (#32XKM)
PSSC Labs will work with BSI to create truly turn-key HPC clusters, servers and storage solutions. PSSC Labs has already delivered several hundred computing platforms for worldwide genomics and bioinformatics research. Utilizing the PowerWulf HPC Cluster as a base solution platform, PSSC Labs and BSI can customize individual components for a specific end user's research goals. The post PSSC Labs to Power Biosoft Devices for Genetics Research appeared first on insideHPC.
|
by staff on (#32XKN)
The Paderborn Center for Parallel Computing (PC²) has been selected by Intel to host a computer cluster that uses Intel's Xeon processor with its Arria 10 FPGA software development platform. "The availability of these systems allows us to further expand our leadership in this area and – as a next step – bring Intel FPGA accelerators from the lab to HPC production systems," says Prof. Dr. Christian Plessl, director of the Paderborn Center for Parallel Computing, who has been active in this research area for almost two decades. The post Intel awards Paderborn University a Hybrid Cluster with Arria 10 FPGAs appeared first on insideHPC.
|
by staff on (#32V85)
Today Cray announced the Korea Institute of Science and Technology Information (KISTI) has awarded the company a contract valued at more than $48 million for a Cray CS500 cluster supercomputer. The 128-rack system, which includes Intel Xeon Scalable processors and Intel Xeon Phi processors, will be the largest supercomputer in South Korea and will provide supercomputing services for universities, research institutes, and industries. "Our supercomputing division is focused on maximizing research performance while significantly reducing research duration and costs by building a top-notch supercomputing infrastructure," said Pillwoo Lee, General Director, KISTI. "Cray's proficiency in designing large and complex high-performance computing systems ensures our researchers can now apply highly advanced HPC cluster technologies towards resolving scientific problems using the power of Cray supercomputers." The post KISTI in South Korea orders up a Cray CS500 Supercomputer appeared first on insideHPC.
|
by staff on (#32TN9)
Today Google Cloud Platform announced the availability of NVIDIA GPUs in the cloud for multiple geographies. Cloud GPUs can accelerate workloads such as machine learning training and inference, geophysical data processing, simulation, seismic analysis, molecular modeling, genomics and many more high performance computing use cases. "Today, we're happy to make some massively parallel announcements for Cloud GPUs. First, Google Cloud Platform (GCP) gets another performance boost with the public launch of NVIDIA P100 GPUs in beta." The post NVIDIA P100 GPUs come to Google Cloud Platform appeared first on insideHPC.
|
by staff on (#32THF)
"BigDL’s efficient large-scale distributed deep learning framework, built on Apache Spark, expands the accessibility of deep learning to a broader range of big data users and data scientists,†said Michael Greene, Vice President, Software and Services Group, General Manager, System Technologies and Optimization, Intel Corporation. “The integration with GigaSpaces’ in-memory insight platform, InsightEdge, unifies fast-data analytics, artificial intelligence, and real-time applications in one simplified, affordable, and efficient analytics stack.â€The post GigaSpaces Simplifies Artificial Intelligence Development with Intel BigDL appeared first on insideHPC.
|
by Rich Brueckner on (#32T7F)
In this RichReport slidecast, Dr. Nick New from Optalysys describes how the company's optical processing technology delivers accelerated performance for FFTs and Bioinformatics. "Our prototype is on track to achieve game-changing improvements to process times over current methods whilst providing high levels of accuracy that are associated with the best software processes." The post Slidecast: How Optalysys Accelerates FFTs with Optical Processing appeared first on insideHPC.
|
by staff on (#32T3S)
Earlier this week, U.S. Secretary of Energy Rick Perry announced a new high-performance computing initiative that will help U.S. industry accelerate the development of new or improved materials for use in severe environments. "The High Performance Computing for Materials Program will provide opportunities for our industry partners to access the high-performance computing capabilities and expertise of DOE's national labs as they work to create and improve technologies that combat extreme conditions," said Secretary Perry. The post New HPC for Materials Program to Help American Industry appeared first on insideHPC.
|
by MichaelS on (#32T0P)
"For those that develop HPC applications, there are usually two main areas that must be considered. The first is the translation of the algorithm, whether simulation based, physics based or pure research into the code that a modern computer system can run. A second challenge is how to move from the implementation of an algorithm to the performance that takes advantage of modern CPUs and accelerators."The post Intel Parallel Studio XE 2018 For Demanding HPC Applications appeared first on insideHPC.
|
by staff on (#32QSY)
Today Cray announced that Yokohama City University in Japan has put a Cray XC50-AC supercomputer into production. Located in the University's Advanced Medical Research Center, the new Cray supercomputer will power computational drug-discovery research used in the design of new medicines. The University will also use its Cray system to conduct atomic-level molecular simulations of proteins, nucleic acids, and other complexes. The post Yokohama City University Installs Cray XC50-AC Supercomputer for Life Sciences appeared first on insideHPC.
|
by staff on (#32PVD)
Optalysys, a start-up pioneering the development of light-speed optical coprocessors, today announced it has raised $3.95 million from angel investors. Optalysys will use the funds to manufacture the first commercially available high-performance computing processor based on its patented optical processing technology. The post Freshly Funded Optalysys Optical Processing to Speed Genomics appeared first on insideHPC.
|
by Rich Brueckner on (#32PQT)
Today Altair announced a multi-year original equipment manufacturing (OEM) agreement with HPE. This agreement represents an expansion of the long-term partnership between HPE and SGI (which HPE recently acquired). HPE will now be able to include Altair's PBS Professional workload manager and job scheduler on all of HPE's high performance computing (HPC) systems, ensuring scalability of price and performance as system sizes and CPU-core counts continue to increase. The post HPE to Bundle Altair PBS Pro Workload Manager appeared first on insideHPC.
|
by staff on (#32PQW)
Today Preferred Networks announced the launch of a private supercomputer designed to facilitate research and development of deep learning, including autonomous driving and cancer diagnosis. The new 4.7 Petaflop machine is one of the most powerful to be developed by the private sector in Japan; it is equipped with NTT Com and NTTPC's GPU platform and contains 1,024 NVIDIA Tesla P100 GPUs. The post Preferred Networks in Japan Deploys 4.7 Petaflop Supercomputer for Deep Learning appeared first on insideHPC.
|
by Rich Brueckner on (#32PMN)
Today Verne Global announced that DeepL has deployed a 5.1 petaFLOPS supercomputer at Verne Global's campus in Iceland. The system is designed to support DeepL's artificial intelligence driven, neural network translation service, which is viewed by many as the world's most accurate and natural-sounding machine translation service. "We are seeing growing interest from companies using AI tools, such as deep neural network (DNN) applications, to revolutionize how they move their businesses forward, create change, and elevate how we work, live and communicate." The post DeepL Deploys 5 Petaflop Supercomputer at Verne Global in Iceland appeared first on insideHPC.
|
by staff on (#32PHM)
The Department of Energy's Exascale Computing Project (ECP) has named Doug Kothe as its new director, effective October 1. "Doug's credentials in this area and familiarity with every aspect of the ECP make him the ideal person to build on the project's strong momentum," said Bill Goldstein, director of Lawrence Livermore National Laboratory and chairman of the ECP Board of Directors, which hired Kothe. The post Exascale Computing Project Names Doug Kothe as Director appeared first on insideHPC.
|
by Rich Brueckner on (#32KVG)
Today at the O'Reilly Artificial Intelligence Conference in San Francisco, Intel's Lisa Spelman announced the Intel Nervana DevCloud, a cloud-hosted hardware and software platform for developers, data scientists, researchers, academics and startups to learn, sandbox and accelerate development of AI solutions with free compute cloud access powered by Intel Xeon Scalable processors. By providing compute resources for machine learning and deep learning training and inference compute needs, Intel is enabling users to start exploring AI innovation without making their own investments in compute resources up front. In addition to cloud compute resources, frameworks, tools and support are provided. The post Intel offers up AI Developer Resources in the Cloud appeared first on insideHPC.
|
by Rich Brueckner on (#32KHJ)
Researchers in China are busy upgrading the MilkyWay 2 (Tianhe-2) system to nearly 95 Petaflops (peak). This should nearly double the performance of the system, which is currently ranked at #2 on TOP500 with 33.86 Petaflops on the Linpack benchmark. The upgraded system, dubbed Tianhe-2A, should be completed in the coming months. The post China Upgrading Milky Way 2 Supercomputer to 95 Petaflops appeared first on insideHPC.
|
by Rich Brueckner on (#32KAH)
Nikunj Oza from NASA Ames gave this talk at the HPC User Forum. "This talk will give a broad overview of work at NASA in the space of data sciences, data mining, machine learning, and related areas. This will include work within the Data Sciences Group at NASA Ames, together with other groups at NASA and university and industry partners. We will delineate our thoughts on the roles of NASA, academia, and industry in advancing machine learning to help with NASA problems." The post NASA Perspectives on Deep Learning appeared first on insideHPC.
|
by Rich Brueckner on (#32K4Q)
Today Cray announced the Japan Advanced Institute of Science and Technology (JAIST) has put a Cray XC40 supercomputer into production. The Cray XC40 supercomputers incorporate the Aries high performance network interconnect for low latency and scalable global bandwidth, as well as the latest Intel Xeon processors, Intel Xeon Phi processors, and NVIDIA Tesla GPU accelerators. "Our new Cray XC40 supercomputer will support our mission of becoming a premier center of excellence in education and research." The post JAIST in Japan installs Cray XC40 Supercomputer appeared first on insideHPC.
|
by staff on (#32K1E)
With the Earth's population at 7 billion and growing, understanding population distribution is essential to meeting societal needs for infrastructure, resources and vital services. This article highlights how NVIDIA GPU-powered AI is accelerating mapping and analysis of population distribution around the globe. "If there is a disaster anywhere in the world," said Bhaduri, "as soon as we have imaging we can create very useful information for responders, empowering recovery in a matter of hours rather than days." The post GPUs Accelerate Population Distribution Mapping Around the Globe appeared first on insideHPC.
|
by staff on (#32H0Y)
Today Northrop Grumman Corporation announced it has entered into a definitive agreement to acquire Orbital ATK for approximately $7.8 billion in cash, plus the assumption of $1.4 billion in net debt. "Through our combination, customers will benefit from expanded capabilities, accelerated innovation and greater competition in critical global security domains. Our complementary portfolios and technology-focused cultures will yield significant value creation through revenue synergies associated with new opportunities, cost savings, operational synergies, and enhanced growth." The post Northrop Grumman to Acquire Orbital ATK for $9.2 Billion appeared first on insideHPC.
|
by Rich Brueckner on (#32FXJ)
Bob Sorensen from Hyperion Research describes an ongoing study on the Development Trends of Next-Generation Supercomputers. The project will gather information on pre-exascale and exascale systems today and through 2028 and build a database of technical information on the research and development efforts on these next-generation machines. The post Development Trends of Next-Generation Supercomputers appeared first on insideHPC.
|
by Rich Brueckner on (#32FXM)
In this special guest feature from Scientific Computing World, Robert Roe looks at research from the University of Alaska that is using HPC to change the way we look at the movement of ice sheets. "The computational muscle behind this research project comes from the UAF's Geophysical Institute, which houses two HPC systems: 'Chinook', an Intel-based cluster from Penguin Computing, and 'Fish', a Cray system installed in 2012 based on the Cray XK6m-200 that uses AMD processors." The post HPC Reveals Glacial Flow appeared first on insideHPC.
|