by Rich Brueckner on (#KPT8)
Today Atos announced that the company has installed the first Petascale supercomputer in Brazil. Designed by Bull, the "Santos Dumont" system will be the largest supercomputer in Latin America. "We are very proud to equip Brazil with a world-class, Petascale High-Performance Computing (HPC) infrastructure and to launch an R&D Center in Petrópolis that is fully integrated with our global R&D," said Philippe Vannier, Vice-President Executive and Chief Technology Officer at Atos. "With a presence in this country stretching back over more than 50 years, the collaborative ties that bind Bull and now Atos to Brazil in terms of leading-edge technologies are significant." The post Atos Deploys Petaflop Supercomputer in Brazil appeared first on insideHPC.
Inside HPC & AI News | High-Performance Computing & Artificial Intelligence
Link: https://insidehpc.com/
Feed: http://insidehpc.com/feed/
Updated: 2025-08-19 11:15
by Rich Brueckner on (#KM44)
"The AMD graphics cards are uniquely equipped with AMD Multiuser GPU technology embedded into the GPU, delivering consistent and predictable performance," said Sean Burke, AMD corporate vice president and general manager, Professional Graphics. "When these AMD GPUs are appropriately configured to the needs of an organization, end users get the same access to the GPU no matter their workload. Each user is provided with the virtualized performance to design, create and execute their workflows without any one user tying up the entire GPU." The post AMD Demos World’s First Hardware-Based Virtualized GPU Solution appeared first on insideHPC.
by Rich Brueckner on (#KM1S)
In this video, LLNL scientists discuss the challenges of debugging programs at scale on the Sequoia supercomputer, which has 1.6 million processors. "Bugs in parallel HPC applications are difficult to debug because errors propagate among compute nodes, programmers must debug thousands of nodes or more, and bugs might manifest only at large scale." The post Video: Debugging HPC Applications at Massive Scales appeared first on insideHPC.
by Rich Brueckner on (#KHHV)
This video celebrates the 50th anniversary of Britain's first supercomputer, the Ferranti Atlas. "When first switched on in December 1962, Atlas was the world's most powerful computer. Some of the software concepts it pioneered, like 'virtual memory', are among the most important breakthroughs in computer design and still used today." The post Video Retrospective: Ferranti Atlas – Britain’s First Supercomputer appeared first on insideHPC.
by Rich Brueckner on (#KHFY)
Stanford University is seeking an HPC Virtualization Specialist in our Job of the Week. The post Job of the Week: HPC Virtualization Specialist at Stanford University appeared first on insideHPC.
by Rich Brueckner on (#KESJ)
In this video from the Department of Energy Computational Science Graduate Fellowship meeting, Jarrod McClean from Harvard University presents: Quantum Computers and Quantum Chemistry. The post Video: Quantum Computers and Quantum Chemistry appeared first on insideHPC.
by staff on (#KEKP)
The Alan Turing Institute is the UK’s national institute for data science. It has marked its first few days of operations with the announcement of its new director, the confirmation of £10 million of research funding from Lloyd’s Register Foundation, a research partnership with GCHQ, collaboration with the EPSRC and Cray, and the commencement of its first research activities. The post Alan Turing Institute Hits the Ground Running for HPC & Data Science appeared first on insideHPC.
by Rich Brueckner on (#KEGF)
"Argonne National Laboratory is one of the laboratories helping to lead the exascale push for the nation with the DOE. We lead in a number of areas with software and storage systems and applied math. And our expertise is focused on those new ideas, those novel new things that will allow us to sort of leapfrog the standard slow evolution of technology and get something further out ahead, three years, five years out ahead. And that's where our research is focused." The post Video: Argonne’s Pete Beckman Describes the Challenges of Exascale appeared first on insideHPC.
by staff on (#KEEX)
Today Engility announced it has been awarded a prime position on a $25 million multiple award contract by the National Oceanic and Atmospheric Administration (NOAA) to provide broad spectrum IT support including high performance computing initiatives for the agency's Geophysical Fluid Dynamics Laboratory (GFDL). The post Engility to Support NOAA’s Geophysical Fluid Dynamics Laboratory appeared first on insideHPC.
by Rich Brueckner on (#KDSH)
In this podcast, the Radio Free HPC team previews three of the excellent Tutorial sessions coming up at SC15. "The SC tutorials program is one of the highlights of the SC Conference series, and it is one of the largest tutorial programs at any computing-related conference in the world. It offers attendees the chance to learn from and to interact with leading experts in the most popular areas of high performance computing (HPC), networking, and storage." The post Radio Free HPC Previews Tutorials at SC15 appeared first on insideHPC.
by staff on (#KB3S)
HPC and Beer have always had a certain affinity ever since the days when Cray Research would include a case of Leinenkugel's with every supercomputer. Now, Brian Caulfield from Nvidia writes that a Pennsylvania startup is using GPUs and Deep Learning technologies to enable brewers to make better beer. The post GPUs Power Analytical Flavor Systems for Brewing Better Beer appeared first on insideHPC.
by MichaelS on (#KB2G)
A convergence in the fields of High Performance Computing (HPC) and Big Data has led to new opportunities for software developers to create and deliver products that can help to analyze very large amounts of data. The HPC software ecosystem has, over the years, created and maintained sets of numerical libraries, communication APIs (MPI) and applications to make running HPC-type applications faster and simpler to design. Low-level libraries have been developed so that developers can concentrate on higher-level algorithms. Products such as the Intel Math Kernel Library (Intel MKL) have been highly tuned to take advantage of multiple cores and newer instruction sets. The post Data Analytics Requires New Libraries appeared first on insideHPC.
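A tuned library call versus a hand-written loop makes the point concrete. The sketch below is illustrative, not from the article; it uses NumPy, whose linear-algebra routines dispatch to an optimized BLAS (Intel MKL, when NumPy is built against it) so the developer can stay at the algorithm level.

```python
import numpy as np

# Hand-rolled matrix-vector product: clear, but leaves performance on the table.
def naive_matvec(a, x):
    n, m = a.shape
    y = [0.0] * n
    for i in range(n):
        for j in range(m):
            y[i] += a[i][j] * x[j]
    return y

rng = np.random.default_rng(0)
a = rng.standard_normal((64, 64))
x = rng.standard_normal(64)

# The library call dispatches to a tuned BLAS, exploiting multiple cores
# and newer vector instruction sets without any change to user code.
y_lib = a @ x
assert np.allclose(y_lib, naive_matvec(a, x))
```

The same division of labor applies to FFTs, solvers and the statistical kernels that data-analytics libraries layer on top.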
by Rich Brueckner on (#KB0N)
Today Intel announced a 10-year collaborative relationship with the Delft University of Technology and TNO, the Dutch Organization for Applied Research, to accelerate advancements in quantum computing. To achieve this goal, Intel will invest US$50 million and will provide significant engineering resources both on-site and at Intel, as well as technical support. "Quantum computing holds the promise of solving complex problems that are practically insurmountable today, including intricate simulations such as large-scale financial analysis and more effective drug development." The post Intel to Invest $50 Million in Quantum Computing appeared first on insideHPC.
by staff on (#KAYT)
European researchers are welcome to use the world’s fastest supercomputer, the Tianhe-2, to pursue their research in collaboration with Chinese scientists and HPC specialists. "Enough Ivy Bridge Xeon E5 2692 processors had already been delivered to allow the Tianhe-2 to be upgraded from its current 55 Petaflops peak performance to the 100 Petaflops mark." The post An Open Invitation to Work on the Tianhe-2 Supercomputer appeared first on insideHPC.
by staff on (#KA6Y)
A team at Oak Ridge has developed a set of automated calibration techniques for tuning residential and commercial building energy efficiency software models to match measured data. Their open source Autotune code is now available on GitHub. The post Autotune Code from ORNL Tunes Your Building Energy Efficiency appeared first on insideHPC.
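In miniature, calibration of this kind amounts to searching a model's parameter space for values that best reproduce measurements. The toy below is a hypothetical sketch in that spirit: the one-parameter "building model" and all names are invented here, and the real Autotune code on GitHub operates on full building-energy models, not this simplification.

```python
# Hypothetical sketch of model calibration against measured data.

def predict_energy_use(insulation_factor, outdoor_temps):
    # Toy model: heating energy grows with the indoor/outdoor temperature
    # gap (indoor assumed 20 C) and shrinks as insulation improves.
    return [max(0.0, (20.0 - t) / insulation_factor) for t in outdoor_temps]

def calibrate(measured, outdoor_temps, candidates):
    # Pick the candidate parameter minimizing sum-of-squares error
    # between model predictions and the measurements.
    def sse(factor):
        preds = predict_energy_use(factor, outdoor_temps)
        return sum((p - m) ** 2 for p, m in zip(preds, measured))
    return min(candidates, key=sse)

temps = [0.0, 5.0, 10.0, 15.0]
measured = predict_energy_use(2.5, temps)   # stand-in for meter readings
best = calibrate(measured, temps, [1.0, 1.5, 2.0, 2.5, 3.0])
assert best == 2.5
```

Real calibrations search many parameters at once, which is why ORNL automated the process.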
by staff on (#K8P9)
The XSEDE project and the University of California, Berkeley are offering an online course on parallel computing for graduate students and advanced undergraduates. The post XSEDE & UC Berkeley Offer Online Parallel Computing Course appeared first on insideHPC.
by staff on (#K82F)
Cisco UCS solutions allow for a faster and more optimized deployment of a computing infrastructure. This solution brief details how the Cisco UCS infrastructure can help your organization become productive more quickly and achieve business results without having to fit together various pieces of disparate hardware and software. The post Five Reasons To Deploy Cisco UCS Infrastructure appeared first on insideHPC.
by Rich Brueckner on (#K7TA)
Today Microsoft announced their GS-Series of premium VMs for compute-intensive workloads. "Powered by the Intel Xeon E5 v3 family processors, the GS-series can have up to 64TB of storage, provide 80,000 IOPs (storage I/Os per second) and deliver 2,000 MB/s of storage throughput. The GS-series offers the highest disk throughput, by more than double, of any VM offered by another hyperscale public cloud provider." The post Microsoft Boosts Azure with GS-Series VMs for Compute-intensive Workloads appeared first on insideHPC.
by Rich Brueckner on (#K7Q6)
"I will describe a decade-long, multi-disciplinary, multi-institutional effort spanning neuroscience, supercomputing and nanotechnology to build and demonstrate a brain-inspired computer and describe the architecture, programming model and applications. I also will describe future efforts in collaboration with DOE to build, literally, a "brain-in-a-box." The work was built on simulations conducted on Lawrence Livermore National Laboratory's Dawn and Sequoia HPC systems in collaboration with Lawrence Berkeley National Laboratory." The post Video: DARPA’s SyNAPSE and the Cortical Processor appeared first on insideHPC.
by Rich Brueckner on (#K7CB)
The HPC User Forum has posted its agendas for upcoming meetings in Europe next month. The post HPC User Forum Meetings Coming to Paris and Munich in October appeared first on insideHPC.
by staff on (#K7A5)
Today Mellanox announced that its Spectrum 10, 25, 40, 50 and 100 Gigabit Ethernet switches are now shipping. According to the company, Mellanox is the first vendor to deliver comprehensive end-to-end 10, 25, 40, 50 and 100 Gigabit Ethernet data center connectivity solutions. The post Mellanox Shipping Spectrum Open Ethernet 25/50/100 Gigabit Switch appeared first on insideHPC.
by Douglas Eadline on (#K76K)
When discussing GPU accelerators, the focus is often on the price-to-performance benefits to the end user. The true cost of managing and using GPUs goes far beyond the hardware price, however. Understanding and managing these costs helps provide more efficient and productive systems. The post Strategies for Managing High Performance GPU Clusters appeared first on insideHPC.
by staff on (#K3RR)
The ISC High Performance conference has issued its Call for Papers. As Europe’s most renowned forum for high performance computing, ISC 2016 will take place June 20-22, 2016 in Frankfurt, Germany. The post ISC 2016 Issues Call for Papers appeared first on insideHPC.
by staff on (#K3MF)
Pioneering a new consulting services model for strategy, marketing, and PR, OrionX today announced the appointment of Peter ffoulkes as Partner based in San Francisco. Mr. ffoulkes was most recently Research Director for Cloud Computing and Enterprise Platforms at 451 Research. The post Peter ffoulkes Joins OrionX Consulting appeared first on insideHPC.
by Rich Brueckner on (#K3JJ)
"Existing computational chemistry packages are tightly integrated, with few interchangeable components, data structures and algorithms. Interoperation between packages is an issue of particular interest to the chemistry community but is only slowly gaining traction. In this talk I will show how the Aquarius quantum chemistry framework takes the philosophy that abstraction layers can be created to interface with external programs and libraries that use expert knowledge of those systems to leverage their native interfaces. This philosophy is illustrated with the example of tensor contraction, where an interface layer has been created to dynamically build on the features of various external tensor libraries." The post Video: Doing Computational Chemistry with Square Pegs and Round Holes appeared first on insideHPC.
by staff on (#K35A)
The Intel Omni-Path Architecture (Intel® OPA) white paper details the many improvements that Intel OPA technology provides to the HPC community. In particular, HPC readers will appreciate how collective operations can be optimized based on message size, collective communicator size and topology using the point-to-point send and receive primitives. The post New Intel® Omni-Path White Paper Details Technology Improvements appeared first on insideHPC.
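As an illustration of building a collective from point-to-point primitives (a sketch of the general idea, not Intel's implementation), a binomial-tree broadcast reaches N ranks in ceil(log2 N) rounds of sends instead of N-1 sequential sends from the root:

```python
# Compute the point-to-point message schedule for a binomial-tree broadcast.

def binomial_bcast_schedule(nranks, root=0):
    # Returns (round, src, dst) triples; in each round, every rank that
    # already holds the data forwards it to one rank that does not.
    msgs = []
    have = {root}
    step, rnd = 1, 0
    while step < nranks:
        new = []
        for src in sorted(have):
            dst = src + step
            if dst < nranks and dst not in have:
                new.append((rnd, src, dst))
        have.update(d for _, _, d in new)
        msgs.extend(new)
        step *= 2
        rnd += 1
    return msgs

sched = binomial_bcast_schedule(8)
assert {d for _, _, d in sched} == {1, 2, 3, 4, 5, 6, 7}  # everyone reached
assert max(r for r, _, _ in sched) == 2                   # in log2(8) rounds
```

Real interconnect stacks pick among several such schedules (trees, pipelines, scatter-allgather) depending on message and communicator size, which is the tuning the white paper describes.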
by lewey on (#K2TQ)
It's me again--Dr. Lewey Anton. I’ve been commissioned by insideHPC to get the scoop on who’s jumping ship and moving on up in high performance computing. Familiar names in this edition include: David Barkai, Ben Bennet, Bob Buck, and Peter ffoulkes. The post HPC People on the Move: Back to School Edition appeared first on insideHPC.
by MichaelS on (#K0MJ)
Creating a large server farm with fast CPUs doesn’t map well to applications that require storage connectivity, as most do, or socket-to-socket communication within the overall system. Thus, a flexible and high-speed networking solution is critical to the overall performance of the computing system. The post Cray XC Series Networking appeared first on insideHPC.
by Rich Brueckner on (#K05H)
The Distributed European Computing Initiative (DECI) in Europe has issued its 13th Call for Proposals for HPC Compute Resources. "Administered by PRACE, DECI enables European researchers to obtain access to the most powerful national (Tier-1) computing resources in Europe regardless of their country of origin or employment and to enhance the impact of European science and technology at the highest level." The post DECI in Europe Issues Call for Proposals for HPC Compute Resources appeared first on insideHPC.
by Rich Brueckner on (#K0C1)
In this video, technicians install the first phase of a new Cray XC supercomputer at the U.K. Met Office. "Consisting of three phases spanning multiple years, the $128 million contract expands Cray's significant presence in the global weather and climate community, and is the largest supercomputer contract ever for Cray outside of the United States." The post Time-lapse Video: Installation of Cray Supercomputer at UK Met Office appeared first on insideHPC.
by Rich Brueckner on (#K02Q)
In this video from the SF Big Analytics Meetup, Bryan Catanzaro from Baidu presents: Why is HPC so important to AI? "We built Deep Speech because we saw the opportunity to re-conceive speech recognition in light of the new capabilities afforded by Deep Learning, to take advantage of even larger datasets to solve even harder problems." The post Video: Why is HPC so Important to AI? appeared first on insideHPC.
by Rich Brueckner on (#JZXH)
Today the HPC Advisory Council announced the return of the widely successful HPCAC-ISC Student Cluster Competition in next year’s ISC program of events. In a real-time competition, 11 teams of undergraduate students from around the world will build a small cluster of their own design on the ISC 2016 exhibit floor and race to demonstrate the greatest performance across a series of benchmarks and applications. The post Call for Student Cluster Teams at ISC 2016 appeared first on insideHPC.
by staff on (#JZCA)
Scientists from NERSC and Lawrence Berkeley National Laboratory are using a supercomputer to create one of the most complete, three-dimensional maps of the adolescent universe — using extremely faint light from galaxies 10.8 billion light years away. The post Edison Supercomputer Creates 3D Map of Adolescent Universe appeared first on insideHPC.
by staff on (#JX9M)
Machine learning is the science of getting computers to act without being explicitly programmed. The new R2D3 Blog offers an instructive Visual Introduction to Machine Learning. The post Interactive Design Powers Visual Introduction to Machine Learning appeared first on insideHPC.
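The R2D3 piece builds up decision trees; the stump below is a minimal, self-contained illustration of the same idea: a classification rule learned from labeled examples rather than hand-coded. The elevation numbers here are invented, loosely echoing the blog's San Francisco vs. New York example.

```python
# Fit a one-split decision stump, the building block of a decision tree.

def fit_stump(xs, labels):
    # Try every midpoint between consecutive sorted values; keep the
    # threshold whose rule (x >= threshold -> class 1) classifies the
    # most training examples correctly.
    best_thresh, best_correct = None, -1
    pts = sorted(xs)
    for lo, hi in zip(pts, pts[1:]):
        t = (lo + hi) / 2
        correct = sum((x >= t) == bool(y) for x, y in zip(xs, labels))
        if correct > best_correct:
            best_thresh, best_correct = t, correct
    return best_thresh

# Feature: home elevation in meters; label: 1 if "in San Francisco".
elev = [10, 15, 30, 80, 120, 200]
is_sf = [0, 0, 0, 1, 1, 1]
t = fit_stump(elev, is_sf)
assert 30 < t < 80   # the learned split separates the two classes
```

A full decision tree simply recurses, fitting new stumps on each side of every split.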
by Rich Brueckner on (#JX5K)
"For more than 40 years now, we have enjoyed a proud and storied history with Chippewa Falls, and the opening of our new manufacturing facility affirms our commitment to building our supercomputers in a town that is synonymous with Cray," said Peter Ungaro, president and CEO of Cray. "Maintaining direct control of our manufacturing process ensures our systems are built with the highest level of quality that customers expect in a Cray product." The post Video: A Rare Look Inside Cray Manufacturing in Chippewa Falls appeared first on insideHPC.
by Rich Brueckner on (#JTYC)
The 2015 PRC Lustre* Users Group conference has issued its Call for Papers. The event takes place Oct. 20 in Beijing. The post Lustre* Users Group in China Issues Call for Papers appeared first on insideHPC.
by Rich Brueckner on (#JTXD)
The Ohio Supercomputer Center is seeking a Systems Developer/Engineer in our Job of the Week. The post Job of the Week: Systems Developer/Engineer at Ohio Supercomputer Center appeared first on insideHPC.
by Rich Brueckner on (#JRFD)
In this video from the Barcelona Supercomputing Center, Big Data is presented as a key challenge for researchers studying global climate change. "Changes in the composition of the atmosphere can affect the habitability of the planet by modifying the air quality and altering long-term climate. Research in this area is devoted to the development, implementation and refinement of global and regional state-of-the-art models for short-term air quality forecasting and long-term climate predictions." The post Video: Big Data Powers Climate Research at BSC appeared first on insideHPC.
by Rich Brueckner on (#JR46)
In this video from the 2015 OLCF User Meeting, Buddy Bland from Oak Ridge presents: Present and Future Leadership Computers at OLCF. "As the home of Titan, the fastest supercomputer in the USA, OLCF has an exciting future ahead with the 2017 deployment of the Summit supercomputer. Summit will deliver more than five times the computational performance of Titan’s 18,688 nodes, using only approximately 3,400 nodes when it arrives in 2017." The post Video: Present and Future Leadership Computers at OLCF appeared first on insideHPC.
by staff on (#JR11)
Today GENCI announced a collaboration with IBM aimed at speeding up the path to exascale computing. "The collaboration, planned to run for at least 18 months, focuses on readying complex scientific applications for systems under development expected to achieve more than 100 petaflops, a solid step forward on the path to exascale. Working closely with supercomputing experts from IBM, GENCI will have access to some of the most advanced high performance computing technologies stemming from the rapidly expanding OpenPOWER ecosystem." The post GENCI to Collaborate with IBM in Race to Exascale appeared first on insideHPC.
by MichaelS on (#JQYC)
Applications that use 3D Finite Difference (3DFD) calculations are numerically intensive and can be optimized quite heavily to take advantage of accelerators that are available in today's systems. The performance of an implementation of numerical stencils can and should be optimized. Choices made when designing and implementing algorithms can affect the Arithmetic Intensity (AI), which is a measure of how efficient an implementation is, comparing the floating point operations performed against the memory accessed. The post Arithmetic Intensity of Stencil Operations appeared first on insideHPC.
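As a back-of-envelope illustration (the operation counts below are generic assumptions, not figures from the article), the arithmetic intensity of a 7-point 3D stencil can be estimated by dividing flops per output point by bytes moved:

```python
# Estimate arithmetic intensity (flops per byte) for a stencil update,
# assuming double-precision values (8 bytes each).

def stencil_ai(flops_per_point, reads_per_point, writes_per_point,
               bytes_per_value=8):
    bytes_moved = (reads_per_point + writes_per_point) * bytes_per_value
    return flops_per_point / bytes_moved

# 7-point stencil: 7 multiplies + 6 adds = 13 flops per output point.
# No cache reuse: 7 loads + 1 store.  Perfect reuse: 1 load + 1 store.
worst = stencil_ai(13, reads_per_point=7, writes_per_point=1)
best = stencil_ai(13, reads_per_point=1, writes_per_point=1)
assert worst < best   # reuse raises intensity, the goal of stencil tuning
```

Both figures land well below the flops-per-byte balance of modern accelerators, which is why stencil codes are typically memory-bound and why cache blocking matters so much.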
by Rich Brueckner on (#JQ9R)
"Ultimately, we must accept that research is best served through using a combination of open-source and proprietary software, through developing new software and through the use of existing software. This approach allows the research community to focus on what is optimal for scientific discovery: the one point on which everyone in this debate agrees." The post The Price of Open-source Software – a Joint Response appeared first on insideHPC.
by Rich Brueckner on (#JN7N)
In what has to be one of the most beautiful simulations I've ever seen, this video from the European Space Agency shows the simulated interaction of the solar wind with 67P/Churyumov-Gerasimenko, the famous comet targeted by the Rosetta mission. "The simulated conditions represent those expected at 1.3 AU from the Sun, close to perihelion, where the comet is strongly active." The post Video: Stunning Simulation Shows Comet in the Solar Wind appeared first on insideHPC.
by Rich Brueckner on (#JN2W)
Early Bird registration rates are now available for the ISC Cloud & Big Data Conference, which takes place Sept. 28-30 in Frankfurt, Germany. This year the event will kick off with one full day of workshops. The new program will highlight performance-demanding cloud and big data applications and technologies and will consist of three tracks: Business, Technology and Research. The post ISC Cloud & Big Data Conference to Focus on Business, Technology and Research appeared first on insideHPC.
by staff on (#JMTN)
Over at NERSC, Linda Vu writes that the SciDB open source database system is a powerful tool for helping scientists wrangle Big Data. "SciDB is an open source database system designed to store and analyze extremely large array-structured data—like pictures from light sources and telescopes, time-series data collected from sensors, spectral data produced by spectrometers and spectrographs, and graph-like structures that illustrate relationships between entities." The post Accelerating Science with SciDB from NERSC appeared first on insideHPC.
by Rich Brueckner on (#JMJ7)
"Sea level rise is one of the most visible signatures of our changing climate, and rising seas have profound impacts on our nation, our economy and all of humanity," said Michael Freilich, director of NASA's Earth Science Division. "By combining space-borne direct measurements of sea level with a host of other measurements from satellites and sensors in the oceans themselves, NASA scientists are not only tracking changes in ocean heights but are also determining the reasons for those changes." The post NASA Charts Sea Level Rise appeared first on insideHPC.
by Rich Brueckner on (#JMD6)
Today Rescale announced availability of its Europe region simulation and HPC platforms. As an HPC cloud provider, Rescale offers a software platform and hardware infrastructure for companies to perform scientific and engineering simulations. The post Rescale Launches Cloud HPC Platform in Europe appeared first on insideHPC.
by Rich Brueckner on (#JHRM)
Today Intel Corporation and BlueData announced a broad strategic technology and business collaboration, as well as an additional equity investment in BlueData from Intel Capital. BlueData is a Silicon Valley startup that makes it easier for companies to install Big Data infrastructure, such as Apache Hadoop and Spark, in their own data centers or in the cloud. The post Intel Invests in BlueData for Spinning Up Spark Clusters on the Fly appeared first on insideHPC.
by staff on (#JHAN)
Geert Wenes writes in the Cray Blog that the next generation of Grand Challenges will focus on critical workflows for Exascale. "For every historical HPC grand challenge application, there is now a critical dependency on a series of other processing and analysis steps, data movement and communications that goes well beyond the pre- and post-processing of yore. It is iterative, sometimes synchronous (in situ) and generally more on an equal footing with the "main" application." The post From Grand Challenges to Critical Workflows appeared first on insideHPC.
by Rich Brueckner on (#JH5X)
"Supercomputing should be available for everyone who wants it. With that mission in mind, a team of engineers created Parallella, an 18-core supercomputer that’s a little bigger than a credit card. Parallella is open source hardware; the circuit diagrams are on GitHub and the machine runs Linux. Icing on the cake: Parallella is the most energy efficient computer on the planet, and you can buy one for a hundred bucks. Why does parallel computing matter? How can developers use parallel computing to deliver better results for clients? Let’s explore these questions together." The post Video: Parallella – The Most Energy Efficient Supercomputer on the Planet appeared first on insideHPC.
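A classic first taste of the parallel computing the talk advocates is domain decomposition: split a numerical integration across workers and sum the partial results. The sketch below is illustrative only; it uses Python threads purely to show the decomposition pattern, whereas on a board like Parallella each slice would map onto a separate Epiphany core.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_pi(lo, hi, steps=100_000):
    # Midpoint-rule integral of 4/(1+x^2) over [lo, hi); the slices over
    # [0, 1] sum to an approximation of pi.
    h = (hi - lo) / steps
    return sum(4.0 / (1.0 + (lo + (i + 0.5) * h) ** 2) for i in range(steps)) * h

def parallel_pi(workers=4):
    # Domain decomposition: each worker integrates its own slice of [0, 1],
    # and the partial sums are combined at the end.
    bounds = [(w / workers, (w + 1) / workers) for w in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(lambda b: partial_pi(*b), bounds))

assert abs(parallel_pi() - 3.141592653589793) < 1e-9
```

The same pattern, independent slices plus a final reduction, scales from a credit-card board to the largest clusters.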