by staff on (#4ZPFA)
The United States is reclaiming a global top spot in high performance computing to support weather and climate forecasts. NOAA, part of the Department of Commerce, today announced a significant upgrade to compute capacity, storage space, and interconnect speed of its Weather and Climate Operational Supercomputing System. This upgrade keeps the agency’s supercomputing capacity on par with other leading weather forecast centers around the world. The post NOAA to triple weather and climate supercomputing capacity appeared first on insideHPC.
|
High-Performance Computing News Analysis | insideHPC
Link | https://insidehpc.com/ |
Feed | http://insidehpc.com/feed/ |
Updated | 2024-11-24 03:45 |
by Rich Brueckner on (#4ZPFB)
The Standard Performance Evaluation Corporation (SPEC) has finalized the election of new Open System Steering Committee (OSSC) executive members, which include Inspur, Intel, AMD, IBM, Oracle and three other companies. "It is worth noting that Inspur, a re-elected OSSC member, was also re-elected as the chair of the SPEC Machine Learning (SPEC ML) working group. The ML benchmark development plan proposed by Inspur has been approved by members; it aims to provide users with a standard for evaluating machine learning computing performance." The post Inspur Re-Elected as Member of SPEC OSSC and Chair of SPEC Machine Learning appeared first on insideHPC.
|
by Rich Brueckner on (#4ZP6Y)
Bill Savage from Intel gave this talk at the Intel HPC Developer Conference. "Learn about oneAPI, the new Intel-led industry initiative to deliver a high-performance unified programming model specification spanning CPU, GPU, FPGA, and other specialized architectures. It includes the Data Parallel C++ cross-architecture language, a set of libraries, and a low-level hardware interface. Intel oneAPI Beta products are also available for developers who want to try out the programming model and influence its evolution." The post oneAPI: Single Programming Model to Deliver Cross-Architecture Performance appeared first on insideHPC.
|
by Rich Brueckner on (#4ZMQQ)
The Swiss National Supercomputing Centre will host the 11th annual Swiss Conference and bi-annual HPCXXL Winter Meeting April 6-9 in Lugano, Switzerland. "Explore the domains and disciplines driving change and progress at an unprecedented pace and join us at the Swiss HPC Conference. Gather with fellow colleagues, a recognizable lineup of industry giants, startups, technology innovators and renowned subject matter experts to share insights on the tools, techniques and technologies that are bringing private and public research communities and interests together and inspiring entirely new possibilities." The post Swiss Conference & HPCXXL User Group Events Return to Lugano appeared first on insideHPC.
|
by staff on (#4ZMQS)
HLRS officially dedicated their new Hawk supercomputer this week at a ceremony in Stuttgart, Germany. With a peak performance of approximately 26 Petaflops, Hawk is an HPE Apollo 9000 System and is among the fastest supercomputers worldwide and the fastest general purpose system for scientific and industrial computing in Europe. "Computers like Hawk are tools for advanced research in the sciences and in industry," said Parliamentary State Secretary Dr. Michael Meister. "They enable excellent science and innovation, and solidify Germany's international position as a top location for supercomputing." The post HLRS Inaugurates Hawk Supercomputer from HPE appeared first on insideHPC.
|
by Rich Brueckner on (#4ZMGC)
In this interview, Dr. Meyer discusses AMAX’s focus on appliances for storage, cloud, and hyper-scale integration. Dr. Meyer goes into more depth about why training in AI is now moving to deploying models at the edge, and why 2nd Generation Intel Xeon Scalable processors can be a good fit for such tasks given their advancements in machine learning and security technologies. The post Podcast: Preparing for HPC & AI Convergence in the Enterprise appeared first on insideHPC.
|
by Rich Brueckner on (#4ZMGE)
Abe Stern from NVIDIA gave this talk at the ECSS Symposium. "We will introduce Numba and RAPIDS for GPU programming in Python. Numba allows us to write just-in-time compiled CUDA code in Python, giving us easy access to the power of GPUs from a powerful high-level language. RAPIDS is a suite of tools with a Python interface for machine learning and dataframe operations. Together, Numba and RAPIDS represent a potent set of tools for rapid prototyping, development, and analysis for scientific computing. We will cover the basics of each library and go over simple examples to get users started." The post CUDA-Python and RAPIDS for blazing fast scientific computing appeared first on insideHPC.
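The Numba workflow described above can be sketched with a plain-Python kernel. This `saxpy` example and its values are hypothetical, not from the talk; with Numba installed, decorating the function with `@numba.njit` (or rewriting it as a `@numba.cuda.jit` kernel) compiles the loop just in time. It is shown undecorated here so it runs anywhere:

```python
# Sketch of the kind of element-wise kernel Numba targets.
# With Numba available, adding @numba.njit above the def (or porting the
# loop to a @numba.cuda.jit kernel) compiles it to machine code at first
# call; the plain-Python version below shows the logic itself.

def saxpy(a, x, y):
    """Compute a*x + y element-wise (the classic SAXPY kernel)."""
    out = [0.0] * len(x)
    for i in range(len(x)):   # this loop is what Numba would compile
        out[i] = a * x[i] + y[i]
    return out

print(saxpy(2.0, [1.0, 2.0, 3.0], [10.0, 20.0, 30.0]))  # [12.0, 24.0, 36.0]
```

The same function body works unchanged under Numba's JIT, which is the appeal the talk describes: prototype in ordinary Python, then compile for CPU or GPU without rewriting the algorithm.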
|
by Rich Brueckner on (#4ZM53)
A new high efficiency cooling unit installed on the roof of Sandia National Laboratories’ supercomputer center saved 554,000 gallons of water during its first six months of operation last year, says David J. Martinez, engineering project lead for Sandia’s Infrastructure Computing Services. "The dramatic decrease in water use, important for a water-starved state, could be the model for cities and other large users employing a significant amount of water to cool thirsty supercomputer clusters springing up like mushrooms around the country," says Martinez. The post Super cooling unit saves water at Sandia HPC data center appeared first on insideHPC.
|
by Rich Brueckner on (#4ZK78)
Today Hyperion Research announced high-profile speakers from major banking and investment firms will highlight the agenda at the next HPC User Forum. Thomas Thurston, CTO of WR Hambrecht Ventures, and Brad Spiers, executive director at JP Morgan Chase will deliver keynote talks at the event, which takes place March 30-April 1 in Princeton, New Jersey. "The HPC User Forum meeting will also feature talks by U.S. and international experts on exascale computing and architectures, massive-scale analytics, AI for cyber operations, cancer research, fusion energy, seismology, HPC for small businesses, cloud computing, and quantum computing, along with technical updates from HPC vendors." The post HPC User Forum to Explore AI-HPDA Use In Banking and Investment Firms appeared first on insideHPC.
|
by staff on (#4ZJWN)
In this podcast, the Radio Free HPC team looks at how supercomputing can help predict and track the progress of the Covid-19 virus. "Shahin also wonders about the economic effect on the tech business as inventories dry up while producers are sidelined by the virus." The post Podcast: Tracking the Coronavirus with HPC appeared first on insideHPC.
|
by staff on (#4ZJWP)
Berkeley Lab has been at the forefront of efforts to design, build, and optimize energy-efficient hyperscale data centers. "In the march to exascale computing, there are real questions about the hard limits you run up against in terms of energy consumption and cooling loads," Elliott said. "NERSC is very interested in optimizing its facilities to be leaders in energy-efficient HPC." The post LBNL Breaks New Ground in Data Center Optimization appeared first on insideHPC.
|
by Rich Brueckner on (#4ZJWR)
GE Research has been awarded access to the world’s #1-ranked supercomputer to discover new ways to optimize the efficiency of jet engines and power generation equipment. Michal Osusky, the project’s leader from GE Research’s Thermosciences group, says access to the supercomputer and support team at OLCF will greatly accelerate learning insights for turbomachinery design improvements that lead to more efficient jet engines and power generation assets, stating, "We’re able to conduct experiments at unprecedented levels of speed, depth and specificity that allow us to perceive previously unobservable phenomena in how complex industrial systems operate. Through these studies, we hope to innovate new designs that enable us to propel the state of the art in turbomachinery efficiency and performance." The post GE Research Leverages World’s Top Supercomputer to Boost Jet Engine Efficiency appeared first on insideHPC.
|
by Rich Brueckner on (#4ZJWT)
Jane Wang from DeepMind gave this talk at NeurIPS 2019. "Building on the connection between biological and artificial reinforcement learning, our workshop will bring together leading and emergent researchers from Neuroscience, Psychology and Machine Learning to share: how neural and cognitive mechanisms can provide insights to tackle challenges in RL research and how machine learning advances can help further our understanding of the brain and behavior." The post Video: From Brains to Agents and Back appeared first on insideHPC.
|
by staff on (#4ZJFJ)
At the International Solid-State Circuits Conference this week, Intel presented a research paper demonstrating the technical details and experimental results of its new Horse Ridge cryogenic quantum computing control chip. "Building fault-tolerant, commercial-scale quantum computers requires a scalable architecture for both qubits and control electronics. Horse Ridge is a highly integrated System-on-a-Chip (SoC) that provides an elegant solution to enable control of multiple qubits with high fidelity—a major milestone on the path to quantum practicality." The post Intel Horse Ridge Chip Addresses Key Barriers to Quantum Scalability appeared first on insideHPC.
|
by staff on (#4ZH96)
If you are considering moving some of your HPC workload to the Cloud, nothing leads the way like a good set of case studies in your scientific domain. To this end, our good friends at the UberCloud have published a compendium entitled, Exploring Life Sciences in the Cloud. The document includes 36 CFD case studies summarizing HPC Cloud projects that the UberCloud has performed together with the engineering community over the last six years. "From the 220 cloud experiments we have done so far, we selected 15 case studies related to the life sciences. We document the results of these research teams, their applications, findings, challenges, lessons learned, and recommendations." The post UberCloud Publishes Compendium Of Case Studies in Life Sciences appeared first on insideHPC.
|
by staff on (#4ZH98)
Today Google Cloud announced the beta availability of N2D VMs on Google Compute Engine powered by 2nd Gen AMD EPYC processors. The N2D family of VMs is a great option for customers running general purpose and high-performance workloads requiring a balance of compute and memory. "AMD and Google have worked together closely on these initial VMs to help ensure Google Cloud customers have a high-performance and cost-effective experience across a variety of workloads, and we will continue to work together to provide that experience this year and beyond." The post AMD EPYC Cloud Adoption Grows with Google Cloud appeared first on insideHPC.
|
by staff on (#4ZH99)
Today Dell Technologies announced new solutions to help customers analyze data at the edge, outside of a traditional data center. With a host of new offerings—including new edge server designs, smaller modular data centers, enhanced telemetry management and a streaming analytics engine—customers are better positioned to realize the value of their data wherever it resides. "As we enter the next Data Decade, the challenge moves from keeping pace with volumes of data to gaining valuable insights from the many types of data and touchpoints across various edge locations to core data centers and public clouds," said Jeff Boudreau, president, Infrastructure Solutions Group, Dell Technologies. "We offer a portfolio that’s engineered to help customers address the constraints of edge operations and deliver analytics for greater business insights wherever their edge may be." The post New Servers from Dell Technologies analyze data wherever it resides appeared first on insideHPC.
|
by staff on (#4ZH9B)
The N8 Centre of Excellence in Computationally Intensive Research, N8 CIR, has been awarded £3.1m from the Engineering and Physical Sciences Research Council to establish a new Tier 2 computing facility in the north of England. This investment will be matched by £5.3m from the eight universities in the N8 Research Partnership which will fund operational costs and dedicated research software engineering support. "The new facility, known as the Northern Intensive Computing Environment or NICE, will be housed at Durham University and co-located with the existing STFC DiRAC Memory Intensive National Supercomputing Facility. NICE will be based on the same technology that is used in current world-leading supercomputers and will extend the capability of accelerated computing. The technology has been chosen to combine experimental, modelling and machine learning approaches and to bring these specialist communities together to address new research challenges." The post UK to establish Northern Intensive Computing Environment (NICE) appeared first on insideHPC.
|
by Rich Brueckner on (#4ZH06)
The DLR German Aerospace Center dedicated its new CARA supercomputer in Dresden on February 5, 2020. With 1.746 Petaflops of performance on the Linpack benchmark, the AMD-powered system from NEC is currently rated #221 on the TOP500. "With its almost 150,000 computing cores, CARA is one of the most powerful supercomputers available internationally for aerospace research," said Prof. Markus Henke from TU Dresden. The post AMD Powers CARA Supercomputer from NEC in Dresden appeared first on insideHPC.
|
by Rich Brueckner on (#4ZFV0)
The Supercomputing Asia 2020 conference has been cancelled in light of COVID-19 developments. "We regret to inform you that we will be cancelling the coming SupercomputingAsia Conference 2020 (SCA20), which was being scheduled for 24-27 February 2020 in Singapore. It was a difficult decision to make but one which we had weighed carefully. However, in the past few days, we have received your concerns and communications from some of our key partners, sponsors and Keynote speakers citing advisories from their home countries to avoid non-essential travel to Singapore." The post Supercomputing Asia 2020 Conference Cancelled due to COVID-19 Virus appeared first on insideHPC.
|
by Rich Brueckner on (#4ZFK7)
Today the UK announced plans to invest £1.2 billion for the world’s most powerful weather and climate supercomputer. The government investment will replace Met Office supercomputing capabilities over a 10-year period from 2022 to 2032. The current Met Office Cray supercomputers reach their end of life in late 2022. The first phase of the new supercomputer will increase the Met Office computing capacity by 6-fold alone. The post UK to invest £1.2 billion for Supercomputing Weather and Climate Science appeared first on insideHPC.
|
by Rich Brueckner on (#4ZFK9)
The UK Met Office has been awarded £4.1m by EPSRC to create Isambard 2, the largest Arm-based supercomputer in Europe. The powerful new £6.5m facility, to be hosted by the Met Office in Exeter and utilized by the universities of Bath, Bristol, Cardiff and Exeter, will double the size of GW4 Isambard, to 21,504 high performance cores and 336 nodes. "Isambard 2 will incorporate the latest novel technologies from HPE and new partner Fujitsu, including next-generation Arm CPUs in one of the world’s first A64fx machines from Cray." The post Isambard 2 at UK Met Office to be largest Arm supercomputer in Europe appeared first on insideHPC.
|
by staff on (#4ZFKB)
In this special guest feature from Scientific Computing World, Laurence Horrocks-Barlow from OCF predicts that containerization, cloud, and GPU-based workloads are all going to dominate the HPC environment in 2020. "Over the last year, we’ve seen a strong shift towards the use of cloud in HPC, particularly in the case of storage. Many research institutions are working towards a ‘cloud first’ policy, looking for cost savings in using the cloud rather than expanding their data centres with overheads, such as cooling, data and cluster management and certification requirements." The post Predictions for HPC in 2020 appeared first on insideHPC.
|
by staff on (#4ZFKD)
GigaIO has developed a new whitepaper to describe GigaIO FabreX, a fundamentally new network architecture that integrates computing, storage, and other communication I/O into a single-system cluster network, using industry standard PCIe (peripheral component interconnect express) technology. The post The GigaIO FabreX Network – New Frontiers in Networking For Big Data appeared first on insideHPC.
|
by staff on (#4ZEKD)
The ISC Workshop On In Situ Visualization 2020 has issued its Call for Participation. The event takes place June 25 in Frankfurt, Germany. "We encourage contributed talks on methods and workflows that have been used for large-scale parallel visualization, with a particular focus on the in situ case. Presentations on codes that closely couple numerical methods and visualization are particularly welcome." The post Call for Participation: ISC Workshop on In Situ Visualization 2020 appeared first on insideHPC.
|
by Rich Brueckner on (#4ZEKE)
Richard S. Sutton from DeepMind Alberta gave this talk at NeurIPS 2019. "In practice, I work primarily in reinforcement learning as an approach to artificial intelligence. I am exploring ways to represent a broad range of human knowledge in an empirical form--that is, in a form directly in terms of experience--and in ways of reducing the dependence on manual encoding of world state and knowledge." The post Video: Toward a General AI-Agent Architecture appeared first on insideHPC.
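As a minimal illustration of the trial-and-error learning loop reinforcement learning builds on (a hypothetical sketch, not material from Sutton's talk), an epsilon-greedy bandit agent learns which action pays off purely from reward feedback:

```python
import random

# Minimal reinforcement-learning loop: an epsilon-greedy agent learns
# which of two "arms" pays off more from reward feedback alone.
# Illustrative sketch only; the arm payoffs and parameters are invented.

def run_bandit(steps=5000, eps=0.1, seed=0):
    rng = random.Random(seed)
    true_means = [0.2, 0.8]        # arm 1 actually pays off more often
    q = [0.0, 0.0]                 # the agent's estimated value per arm
    n = [0, 0]                     # pull counts per arm
    for _ in range(steps):
        if rng.random() < eps:
            arm = rng.randrange(2)           # explore at random
        else:
            arm = 0 if q[0] >= q[1] else 1   # exploit current best estimate
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        n[arm] += 1
        q[arm] += (reward - q[arm]) / n[arm]  # incremental mean update
    return q

q = run_bandit()
print(q[1] > q[0])  # the agent's estimate for the better arm ends higher
```

The point of the sketch is the empirical form Sutton describes: the agent encodes nothing about the world up front and learns value estimates directly from experience.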
|
by Rich Brueckner on (#4ZDQJ)
Ken Raffenetti from Argonne gave this talk at ATPESC 2019. "The Argonne Training Program on Extreme-Scale Computing (ATPESC) provides intensive, two-week training on the key skills, approaches, and tools to design, implement, and execute computational science and engineering applications on current high-end computing systems and the leadership-class computing systems of the future." The post Video: Overview of HPC Interconnects appeared first on insideHPC.
|
by Rich Brueckner on (#4ZDQM)
CIRC at Washington State University is seeking an HPC Systems Administrator in our Job of the Week. "Ideal candidates should have in-depth experience with the provisioning and administration of HPC clusters. Applicants who have experience with CentOS or RHEL, high speed networking using Mellanox Infiniband, resource schedulers such as Slurm, automation tools such as SaltStack, and parallel file systems including BeeGFS and Spectrum Scale, are highly encouraged to apply." The post Job of the Week: HPC Systems Administrator at Washington State University appeared first on insideHPC.
|
by Rich Brueckner on (#4ZCFA)
Fostering STEM education is a key to the future of high performance computing. Along these lines, ISC 2020 HPC Career Day will give 200 job seekers interested in HPC the opportunity to participate in the ISC 2020 conference on Wednesday, June 24. "Organizations scouting for STEM talent are welcome to utilize this new program to make connections with students, early and mid-career professionals looking for exciting prospects within the areas of HPC, machine learning, and data analytics." The post ISC 2020 Launches HPC Career Day appeared first on insideHPC.
|
by staff on (#4ZCFC)
Purdue University’s CERIAS Center for Education and Research in Information Assurance and Security has announced the addition of a new laboratory facility that dramatically increases Purdue’s cyber-physical research, emulation, and analysis capabilities. "This new laboratory is a mirror of the facilities already within Sandia National Labs that have served as the platform for joint CERIAS and DOE research since 2017," said Theresa Mayer, executive vice president for research and partnerships at Purdue University. "The opening of SOL4CE at Purdue allows us to increase both the speed and impact of our national security research collaboration with Sandia National Labs." The post Purdue University to open Scalable Open Laboratory for Cyber Experimentation appeared first on insideHPC.
|
by Rich Brueckner on (#4ZC5G)
A partnership between XSEDE and the Institute for Research on Innovation and Science (IRIS) will examine how access to advanced research computing resources and services available via XSEDE affects the collaboration networks and scientific productivity of participating researchers. "IRIS will link the IRIS UMETRICS dataset containing transaction-level administrative data on sponsored research projects from dozens of the nation’s leading higher educational institutions to data from XSEDE allocations. This will result in a new way to examine how access to supercomputers influences the way researchers collaborate with colleagues and the productivity of individuals and research teams." The post IRIS and XSEDE to investigate the impact of research supercomputing appeared first on insideHPC.
|
by staff on (#4ZC5J)
Researchers at Argonne National Laboratory have developed a new molecular layer etching technique that could potentially enable the manufacture of increasingly small microelectronics. "Our ability to control matter at the nanoscale is limited by the kinds of tools we have to add or remove thin layers of material. Molecular layer etching (MLE) is a tool to allow manufacturers and researchers to precisely control the way thin materials, at microscopic and nanoscales, are removed," said lead author Matthias Young, an assistant professor at the University of Missouri and former postdoctoral researcher at Argonne. The post New Argonne etching technique could advance semiconductors appeared first on insideHPC.
|
by staff on (#4ZC5M)
In this video from SC19, Berkeley researchers visualize an entire brain at nanoscale resolution. The work was published in the journal Science. "At the core of the work is the combination of expansion microscopy and lattice light-sheet microscopy (ExLLSM) to capture large super-resolution image volumes of neural circuits using high-speed, nano-scale molecular microscopy." The post Visualizing an Entire Brain at Nanoscale Resolution appeared first on insideHPC.
|
by staff on (#4ZAGK)
In this special guest feature, Jeff Reser from SUSE describes how Linux and HPC are key enabling technologies behind the research and breakthroughs in Genomics. "The Human Genome Project is an excellent example of large-scale international cooperation. It took a closely-coordinated and collaborative team effort to complete. Once the human genome had been successfully sequenced and decoded, it was immediately made publicly available. Since then, new information has been regularly published and made freely available. Here at SUSE, we’re totally committed to this community-driven ‘open source’ ideal. It permeates everything we do." The post How HPC is Powering the Age of Genomic Big Data appeared first on insideHPC.
|
by Rich Brueckner on (#4ZAGN)
The US Department of Energy’s Exascale Computing Project (ECP) has announced the following staff changes within the Software Technology group. Lois Curfman McInnes from Argonne will replace Jonathan Carter as Deputy Director for Software Technology. Meanwhile Sherry Li is now team lead for Math Libraries. "We are fortunate to have such an incredibly seasoned, knowledgeable, and respected staff to help us lead the ECP efforts in bringing the nation’s first exascale computing software environment to fruition," said Mike Heroux from Sandia National Labs. The post Exascale Computing Project Announces Staff Changes Within Software Technology Group appeared first on insideHPC.
|
by Rich Brueckner on (#4ZAGP)
The U.S. Department of Energy’s Office of Science, under the leadership of Under Secretary of Energy Paul Dabbar, sponsored around 70 representatives from multiple government agencies and universities at the first Quantum Internet Blueprint Workshop, held in New York City Feb. 5-6. The primary goal of the workshop was to begin laying the groundwork for a nationwide entangled quantum Internet. The post DOE Workshop Begins Mapping the Future of Quantum Communications appeared first on insideHPC.
|
by staff on (#4ZAGR)
Researchers at SDSC and the Wisconsin IceCube Particle Astrophysics Center have successfully completed a second computational experiment using thousands of GPUs across Amazon Web Services, Microsoft Azure, and the Google Cloud Platform. "We drew several key conclusions from this second demonstration," said SDSC’s Sfiligoi. "We showed that the cloudburst run can actually be sustained during an entire workday instead of just one or two hours, and have moreover measured the cost of using only the two most cost-effective cloud instances for each cloud provider." The post Second GPU Cloudburst Experiment Paves the Way for Large-scale Cloud Computing appeared first on insideHPC.
|
by Rich Brueckner on (#4Z9E7)
Today award-winning Japanese HPC Cloud company XTREME-D announced an agreement with San Francisco-based Digital Realty, a leading global provider of data center, colocation, and interconnection solutions. MC Digital Realty will supply 10kW racks from its Digital Osaka 2 facility to host XTREME-D’s flagship product, XTREME-Stargate. XTREME-D has also formed a strategic alliance with Lenovo Enterprise Solutions Ltd. to be a supply partner for the product. The post XTREME-D Launches New HPC Infrastructure Services with Digital Realty appeared first on insideHPC.
|
by staff on (#4Z93S)
Powered by storage technology supplied by Microway, the Northeast Storage Exchange (NESE) is changing the way Boston-area universities approach research data storage. "Born out of a groundbreaking regional high-performance computing project, NESE aims to break further ground—to create a long-term, growing, self-sustaining data storage facility serving both regional researchers and national and international-scale science and engineering projects." The post Microway powers Shared Research Computing Storage Project in Massachusetts appeared first on insideHPC.
|
by Rich Brueckner on (#4Z8S6)
"The paper addresses the inherent limitations associated with today's most popular gradient-based methods, such as Adaptive Moment Estimation (ADAM) and Stochastic Gradient Descent (SGD), which incorporate backpropagation. MemComputing's approach instead aims towards a more global and parallelized optimization algorithm, achievable through its entirely new computing architecture." The post Whitepaper: Accelerate Training of Deep Neural Networks with MemComputing appeared first on insideHPC.
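For contrast with the global approach the whitepaper describes, here is a minimal sketch of the local, iterative updates that gradient-based methods like SGD perform. The toy objective and step size are invented for illustration and are not from the whitepaper:

```python
# Minimal gradient-descent sketch (illustrative only):
# minimize f(w) = (w - 3)^2, whose gradient is 2*(w - 3).
# Methods like SGD and ADAM take small local steps of this kind,
# which is the behavior MemComputing's global approach contrasts with.

def sgd(lr=0.1, steps=100, w=0.0):
    for _ in range(steps):
        grad = 2.0 * (w - 3.0)  # analytic gradient of (w - 3)^2
        w -= lr * grad          # local update step
    return w

w_final = sgd()
print(round(w_final, 4))  # converges toward the minimum at w = 3
```

On a convex toy problem like this, the local steps converge cleanly; the whitepaper's argument is that on rugged, non-convex deep-learning landscapes such local steps can stall, motivating a more global search.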
|
by staff on (#4Z8S8)
A research team of academics from the University of Leeds and University of Nottingham believes it has found a way of delivering ultra-fast modulation, by combining the power of acoustic and light waves. The findings were published in Nature Communications. "This result opens a new area for physics and engineering to come together in the exploration of the interaction of terahertz sound and light waves, which could have real technological applications." The post Using sound and light for ultra-fast data transfer appeared first on insideHPC.
|
by Rich Brueckner on (#4Z8SA)
The twelfth international Women in HPC workshop has issued its Call for Posters. The half-day WHPC workshop takes place June 25 at ISC20 in Frankfurt, Germany. "We are encouraging women who consider themselves to be ‘early career’ to participate, however this opportunity is open to everyone who feels they may benefit from presenting their work, irrespective of career stage." The post Call for Posters: Women in HPC Workshop at ISC20 appeared first on insideHPC.
|
by Rich Brueckner on (#4Z8SC)
Colin Sauze from Aberystwyth University gave this talk at FOSDEM 2020. "The motivation for this was to overcome key problems faced by new HPC users. The talk will also discuss some of the technical challenges in deploying an HPC environment to a Raspberry Pi and attempts to keep that environment as close to a ‘real’ HPC as possible. Methods to automate the installation process will also be covered." The post Introducing HPC with a Raspberry Pi Cluster appeared first on insideHPC.
|
by staff on (#4Z779)
Today OnScale announced their sponsorship of Revolution in Simulation, a collaborative community helping to increase the value of engineering simulation software investments through the democratization of simulation. OnScale is providing their expertise in CAE and funding to support the initiative. OnScale will participate as a moderator in the HPC topic as well as lead a new SME topical section coming soon at Rev-Sim.org on MultiPhysics Simulation. The post OnScale joins Revolution in Simulation community appeared first on insideHPC.
|
by Rich Brueckner on (#4Z77A)
To use quantum computers on a large scale, we need to improve the technology at their heart – qubits. Qubits are the quantum version of conventional computers’ most basic form of information, bits. The DOE’s Office of Science is supporting research into developing the ingredients and recipes to build these challenging qubits. The post Stepping up Qubit research at the DOE appeared first on insideHPC.
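To make the bit/qubit distinction above concrete, here is a minimal stdlib-only toy model of a single qubit as a pair of complex amplitudes (an illustrative sketch, not DOE material):

```python
import math

# A qubit's state is two amplitudes (a, b) with |a|^2 + |b|^2 = 1;
# measuring yields 0 with probability |a|^2 and 1 with probability |b|^2.
# Unlike a classical bit, both amplitudes can be nonzero at once.
# A Hadamard gate takes the |0> state into an equal superposition.

def hadamard(state):
    a, b = state
    s = 1.0 / math.sqrt(2.0)
    return (s * (a + b), s * (a - b))

zero = (1.0, 0.0)        # classical-like |0> state
plus = hadamard(zero)    # equal superposition of |0> and |1>
probs = (abs(plus[0]) ** 2, abs(plus[1]) ** 2)
print(probs)  # roughly (0.5, 0.5): either measurement outcome equally likely
```

Applying the Hadamard gate a second time returns the state to |0>, which is the kind of coherent, reversible behavior that makes physical qubits so much harder to engineer than classical bits.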
|
by Rich Brueckner on (#4Z6X2)
Twenty-two postdoctoral fellows from across the Computing Sciences Area shared the status of their current projects at the first CSA Postdoc Symposium, held January 30-31 at Berkeley Lab. Their presentations covered a broad range of research topics, including code optimization, machine/deep learning, network routing, modeling and simulation of complex scientific problems, exascale, and other next-generation computer architectures. The post Postdoc Symposium at Berkeley Lab Looks to Exascale for Modeling and Simulation appeared first on insideHPC.
|
by staff on (#4Z6X4)
In this Let's Talk Exascale Podcast, Stuart Slattery and Damien Lebrun-Grandie from ORNL describe how they are readying algorithms for next-generation supercomputers at the Department of Energy. "The mathematical library development portfolio of the Software Technology (ST) research focus area of the ECP provides general tools to implement complex algorithms. These algorithms are designed to scale up for supercomputers so that ECP teams can then use them to accelerate the development and improve the performance of science applications on DOE high-performance computing architectures." The post Podcast: Solving Multiphysics Problems at the Exascale Computing Project appeared first on insideHPC.
|
by staff on (#4Z6X6)
Europe has developed a strategy for exascale computing, through partnerships and collaboration of European HPC vendors, academic institutions and HPC centers. It aims to deliver exascale-class systems and place the continent in the top three powers for supercomputing and science and industry using HPC. "It is a major step forward for Europe to reach the next level of computing capacity; it will help us to advance in future-oriented technologies, like the Internet of Things (IoT), AI, robotics and data analytics." The post Exascale in Europe appeared first on insideHPC.
|
by Rich Brueckner on (#4Z5HG)
The University of Florida has chosen Qumulo’s distributed file system for its scalable capacity, real-time data analytics, and industry-recognized commitment to customer care. "We wanted a storage solution that would not only work for UF’s faculty but work for our research computing staff as well," said Erik Deumens, scientist and director, UF Information Technology – Research Computing. "Selecting Qumulo takes the guesswork out of our storage management and makes us more efficient when scheduling diagnostic operations. We find the Qumulo system to be rich in features and easy to work with. It is a very cost-effective solution so that we are making the best use of university funds." The post University of Florida Accelerates BioTech Research with Qumulo appeared first on insideHPC.
|
by Rich Brueckner on (#4Z5HH)
Today PASC20 announced that this year’s Public Lecture will be presented by Dalia Conde, Professor at the University of Southern Denmark and Director of Science at Species360. The lecture will focus on her team’s efforts in fighting one of the greatest current concerns of our global community: biodiversity loss. "In this keynote talk, we will unveil the results of a global initiative aiming to map, quantify and disseminate species open information to conservation policymakers globally. By developing partnerships to map information and generate development platforms, workflows and storage between open biodiversity repositories, we will outline how computational methods can be applied to novel scientific domains." The post PASC20 talk to explore Data Landscapes to Rescue Species from Extinction appeared first on insideHPC.
|