by Rich Brueckner on (#1T2J3)
Today Argonne announced that the Lab is leading a pair of newly funded applications projects for the Exascale Computing Project (ECP). The announcement comes on the heels of news that ECP has funded a total of 15 application development proposals for full funding and seven proposals for seed funding, representing teams from 45 research and academic organizations. The post Argonne to Develop Applications for ECP Exascale Computing Project appeared first on insideHPC.
|
High-Performance Computing News Analysis | insideHPC
Link | https://insidehpc.com/ |
Feed | http://insidehpc.com/feed/ |
Updated | 2024-11-06 03:45 |
by Rich Brueckner on (#1T2FW)
Ed Turkel and Percy Tzelnic from Dell Technologies presented this pair of talks at the HPC User Forum in Austin. "This week, Dell Technologies announced completion of the acquisition of EMC Corporation, creating a unique family of businesses that provides the essential infrastructure for organizations to build their digital future, transform IT and protect their most important asset, information. This combination creates a $74 billion market leader with an expansive technology portfolio." The post Dell & EMC in HPC – The Journey so far and the Road Ahead appeared first on insideHPC.
|
by Rich Brueckner on (#1T2BB)
"The University's researchers are making landmark discoveries in fields spanning human heritable disease, cancer, agriculture and biofuels manufacture – and they depend on our IT team to provide them with the fastest, most efficient data storage and compute systems to support their data-heavy work," said Professor David Abramson, University of Queensland Research Computing Center director. "Our IBM, SGI (DMF) and DDN-based data fabric allows us to deliver ultra-fast multi-site data access without requiring any extra intervention from researchers and helps us to ensure our scientists can focus their time on potentially life-saving discoveries." The post DDN Powers High Performance Data Storage Fabric at University of Queensland appeared first on insideHPC.
|
by staff on (#1T27E)
Today IBM unveiled a series of new servers designed to help propel cognitive workloads and to drive greater data center efficiency. Featuring a new chip, the Linux-based lineup incorporates innovations from the OpenPOWER community that deliver higher levels of performance and greater computing efficiency than available on any x86-based server. "Collaboratively developed with some of the world's leading technology companies, the new Power Systems are uniquely designed to propel artificial intelligence, deep learning, high performance data analytics and other compute-heavy workloads, which can help businesses and cloud service providers save money on data center costs." The post New OpenPOWER Servers Accelerate Deep Learning with NVLink appeared first on insideHPC.
|
by Rich Brueckner on (#1SZSR)
Paul Messina presented this talk at the HPC User Forum in Austin. "The Exascale Computing Project (ECP) is a collaborative effort of the Office of Science (DOE-SC) and the National Nuclear Security Administration (NNSA). As part of President Obama's National Strategic Computing initiative, ECP was established to develop a new class of high-performance computing systems that will be a thousand times more powerful than today's petaflop machines." The post Video: The ECP Exascale Computing Project appeared first on insideHPC.
|
by staff on (#1SZE1)
Today Dell Technologies announced completion of the acquisition of EMC Corporation, creating a unique family of businesses that provides the essential infrastructure for organizations to build their digital future, transform IT and protect their most important asset, information. This combination creates a $74 billion market leader with an expansive technology portfolio that solves complex problems for customers in the industry's fast-growing areas of hybrid cloud, software-defined data center, converged infrastructure, platform-as-a-service, data analytics, mobility and cybersecurity. The post It's "Day 1" for Dell Technologies with New Branding appeared first on insideHPC.
|
by Douglas Eadline on (#1SZA2)
Achieving better scalability and performance at Exascale will require full data reach. Without this capability, onload architectures force all data to move to the CPU before allowing any analysis. The ability to analyze data everywhere means that every active component in the cluster will contribute to the computing capabilities and boost performance. In effect, the interconnect will become its own "CPU" and provide in-network computing capabilities. The post Network Co-design as a Gateway to Exascale appeared first on insideHPC.
|
by staff on (#1SZ8A)
Today Lawrence Berkeley National Laboratory announced that LBNL scientists will lead or play key roles in developing 11 critical research applications for next-generation supercomputers as part of DOE's Exascale Computing Project (ECP). The post Berkeley Lab to Develop Key Applications for ECP Exascale Computing Project appeared first on insideHPC.
|
by Rich Brueckner on (#1SZ4V)
Earl Joseph presented this talk at the HPC User Forum in Austin. "HPC is still expected to be a strong growth market going forward. IDC is forecasting a 7.7 percent growth from 2015 to 2019. We're projecting the 2019 HPC Market will exceed $15 Billion." The post Video: IDC HPC Market Update appeared first on insideHPC.
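For context, the 2015 baseline implied by those two IDC figures can be back-computed with compound growth; the ~$11.1 billion 2015 base below is our inference from the quoted numbers, not a figure IDC published:

```python
# Back-compute the implied 2015 HPC market size from IDC's figures:
# 7.7% annual growth from 2015 to 2019, with 2019 exceeding $15B.
growth = 1.077
market_2019 = 15e9
years = 2019 - 2015  # four compounding periods

implied_2015 = market_2019 / growth ** years
print(f"Implied 2015 market: ${implied_2015 / 1e9:.1f}B")  # roughly $11.1B
```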
|
by Rich Brueckner on (#1SYHZ)
"Aligning ourselves with an organization like Microsoft further supports our mission of enabling companies to manage their application workloads whether on-premise or in a hybrid or cloud context like Microsoft Azure," said Gary Tyreman, CEO, Univa. "Univa is working hard to bridge the gap from traditional IT environments to the new hybrid world by helping enterprises reduce complexity and run workloads more efficiently in the cloud." The post Univa Joins up with Microsoft Enterprise Cloud Alliance appeared first on insideHPC.
|
by staff on (#1SYCY)
"These application development awards are a major first step toward achieving mission critical application readiness on the path to exascale," said ECP director Paul Messina. "A key element of the ECP's mission is to deliver breakthrough HPC modeling and simulation solutions that confidently deliver insight and predict answers to the most critical U.S. problems and challenges in scientific discovery, energy assurance, economic competitiveness, and national security," Messina said. "Application readiness is a strategic aspect of our project and foundational to the development of holistic, capable exascale computing environments." The post Exascale Computing Project (ECP) Awards $39.8 million for Application Development appeared first on insideHPC.
|
by Rich Brueckner on (#1SWEH)
Hailing from Norway, big-memory appliance maker Numascale has been a fixture at the ISC conference since the company's formation in 2008. At ISC 2016, Numascale was noticeably absent from the show and the word on the street was that the company was retooling their NumaConnect™ technology around NVMe. To learn more, we caught up with Einar Rustad, Numascale's CTO. The post Interview: Numascale to Partner with OEMs on Big Memory Server Technology appeared first on insideHPC.
|
by staff on (#1SW3Z)
Submissions are now open for the ISC 2017 conference Research Paper Sessions, which aim to provide first-class opportunities for engineers and scientists in academia, industry and government to present research that will shape the future of high performance computing. Submissions will be accepted through December 2, 2016. The post Call for Research Papers: ISC 2017 appeared first on insideHPC.
|
by Rich Brueckner on (#1SW08)
Today Movidius announced that the company is being acquired by Intel. The deep learning startup's mission is to give the power of sight to machines. The post Intel Acquires Movidius to Bolster AI appeared first on insideHPC.
|
by staff on (#1SVK0)
With Intel Scalable System Framework Architecture Specification and Reference Designs, the company is making it easier to accelerate the time to discovery through high-performance computing. The Reference Architectures (RAs) and Reference Designs take Intel Scalable System Framework to the next step—deploying it in ways that will allow users to confidently run their workloads and allow system builders to innovate and differentiate designs. The post Facilitate HPC Deployments with Reference Designs for Intel Scalable System Framework appeared first on insideHPC.
|
by Rich Brueckner on (#1SV7C)
"This release will allow aerospace stress analysts to do their tasks in a much more efficient manner. We really focused on understanding the desired workflows and creating an environment to easily move between CAD models, CAE models and results, and external tools such as Microsoft Excel," said Dr. Robert Yancey, Altair Vice President of Aerospace. "We look forward to working with our Aerospace customers to help them implement their workflows in the streamlined HyperWorks environment." The post Altair Updates HyperWorks for Aerospace appeared first on insideHPC.
|
by Rich Brueckner on (#1SRZP)
Charles W. Nakhleh from LANL presented this talk at the 2016 DOE NNSA SSGF Annual Program Review. "This talk will explore some of the future opportunities and exciting scientific and technological challenges in the National Nuclear Security Administration Stockpile Stewardship Program. The program's objective is to ensure that the nation's nuclear deterrent remains safe, secure and effective. Meeting that objective requires sustained excellence in a variety of scientific and engineering disciplines and has led to remarkable advances in theory, experiment and simulation." The post The Challenges and Rewards of Stockpile Stewardship appeared first on insideHPC.
|
by Rich Brueckner on (#1SRXB)
Applications are now open for the annual SuperComputing Camp in Colombia. The five-day camp takes place Oct. 16-21 at CIBioFI at Universidad del Valle in Santiago de Cali. The post SuperComputing Camp Returns to Colombia appeared first on insideHPC.
|
by staff on (#1SRTS)
The Piz Daint supercomputer at the Swiss National Supercomputing Centre (CSCS) is again assisting researchers in competition for the prestigious Gordon Bell prize. "Researchers led by Peter Vincent from Imperial College London have made this year's list of finalists for the Gordon Bell prize, with the backing of Piz Daint at the Swiss National Supercomputing Centre. The prize is awarded annually in November at SC, the world's largest conference on supercomputing. It honors the success of scientists who are able to achieve very high efficiencies for their research codes running on the fastest supercomputer architectures currently available." The post Powering Aircraft CFD with the Piz Daint Supercomputer appeared first on insideHPC.
|
by Rich Brueckner on (#1SNXR)
Rick Wagner from SDSC presented this talk at the 4th Annual MVAPICH User Group. "At SDSC, we have created a novel framework and infrastructure by providing virtual HPC clusters to projects using the NSF sponsored Comet supercomputer. Managing virtual clusters on Comet is similar to managing a bare-metal cluster in terms of processes and tools that are employed. This is beneficial because such processes and tools are familiar to cluster administrators." The post Video: User Managed Virtual Clusters in Comet appeared first on insideHPC.
|
by staff on (#1SNWJ)
Over at NASA, Michelle Moyers writes that the 2016 NASA Software of the Year Award has gone to Pegasus 5, a revolutionary CFD tool. "Developed in-house by a team led by aerospace engineer Stuart Rogers from NASA Ames, Pegasus 5 has been used for aerodynamic modeling and simulation by nearly every NASA program over the past 15 years, including the space shuttle, the next-generation Orion spacecraft and Space Launch System, and commercial crew programs." The post Pegasus 5 CFD Program Wins NASA's Software of the Year appeared first on insideHPC.
|
by Rich Brueckner on (#1SK79)
In this podcast, the Radio Free HPC team previews the HPC User Forum & StartupHPC Events coming up in the Fall of 2016. The post Radio Free HPC Previews the HPC User Forum & StartupHPC Events appeared first on insideHPC.
|
by Rich Brueckner on (#1SH44)
Today ALA Services announced it has acquired Adaptive Computing of Provo, Utah. "Adaptive Computing is adding proven growth expertise and infrastructure through this acquisition by ALA Services," said Marty Smuin, CEO of Adaptive Computing. "Arthur L. Allen brings deep insights and a proven process he has used to drive success. We look forward to accelerating our business and improving value to our customers by combining ALA's expertise with Adaptive Computing's leading technology and great position within the market." The post ALA Services Acquires Adaptive Computing appeared first on insideHPC.
|
by Rich Brueckner on (#1SG41)
"By modeling the power system in depth and detail, NREL has helped reset the conversation about how far we can go operationally with wind and solar in one of the largest power systems in the world," said the Energy Department's Charlton Clark, a DOE program manager for the study. "Releasing the production cost model, underlying data, and visualization tools alongside the final report reflects our commitment to giving power system planners, operators, regulators, and others the tools to anticipate and plan for operational and other important changes that may be needed in some cleaner energy futures." The post Supercomputing Alternative Energy in the Eastern Power Grid appeared first on insideHPC.
|
by Rich Brueckner on (#1SFYR)
Miles Lubin presented this talk at the CSGF Annual Program Review. "JuMP is an open-source software package in Julia for modeling optimization problems. In less than three years since its release, JuMP has received more than 50 citations and has been used in at least 10 universities for teaching. We tell the story of how JuMP was developed, explain the role of the DOE CSGF and high-performance computing, and discuss ongoing extensions to JuMP developed in collaboration with DOE labs." The post Video: JuMP – A Modeling Language for Mathematical Optimization appeared first on insideHPC.
|
by staff on (#1SFTH)
Scientists at the Energy Department's National Renewable Energy Laboratory (NREL) discovered a use for perovskites that could propel the development of quantum computing. "Considerable research at NREL and elsewhere has been conducted into the use of organic-inorganic hybrid perovskites as a solar cell. Perovskite systems have been shown to be highly efficient at converting sunlight to electricity. Experimenting on a lead-halide perovskite, NREL researchers found evidence the material could have great potential for optoelectronic applications beyond photovoltaics, including in the field of quantum computers." The post NREL Discovery Could Propel Quantum Computing appeared first on insideHPC.
|
by staff on (#1SFRQ)
Today, the National Geospatial-Intelligence Agency and NSF released 3-D topographic maps that show Alaska's terrain in greater detail than ever before. Powered by the Blue Waters supercomputer, the maps are the result of a White House Arctic initiative to inform better decision-making in the Arctic. "We can't live without Blue Waters now," said Paul Morin, head of the University of Minnesota's Polar Geospatial Center. "The supercomputer itself, the tools the Blue Waters team at NCSA developed, the techniques they've come up with in using this hardware. Blue Waters is changing the way digital terrain is made and that is changing how science is done in the Arctic." The post Supercomputing 3D Elevation Maps of Alaska on Blue Waters appeared first on insideHPC.
|
by staff on (#1SC4W)
Engineers at Sandia are developing new datacenter cooling technologies that could save millions of gallons of water nationwide. The post Saving Water with Sandia's New Datacenter Cooling Technology appeared first on insideHPC.
|
by Rich Brueckner on (#1SC0W)
James Reinders presented this talk at the 2016 Argonne Training Program on Extreme-Scale Computing. Reinders is the author of multiple books on parallel programming. His most recent book, Intel Xeon Phi Processor High Performance Programming: Knights Landing Edition, 2nd Edition, was co-authored by James Jeffers and Avinash Sodani. The post Video: SIMD, Vectorization, and Performance Tuning appeared first on insideHPC.
|
by Rich Brueckner on (#1SBVH)
EPSRC and Cray have signed an agreement to add a Cray XC40 Development System with Intel Xeon Phi processors to ARCHER, the UK National Supercomputing Service. "The new Development system will have a very similar environment to the main ARCHER system, including Cray's Aries interconnect, operating system and Cray tools, meaning that interested users will enjoy a straightforward transition." The post Cray to Add Intel Xeon Phi to ARCHER Supercomputing Service in the UK appeared first on insideHPC.
|
by Rich Brueckner on (#1SBSA)
"We have enhanced Bright Cluster Manager 7.3 so our customers can quickly and easily deploy new deep learning techniques to create predictive applications for fraud detection, demand forecasting, click prediction, and other data-intensive analyses," said Martijn de Vries, Chief Technology Officer of Bright Computing. "Going forward, customers using Bright to deploy and manage clusters for deep learning will not have to worry about finding, configuring, and deploying all of the dependent software components needed to run deep learning libraries and frameworks." The post New Bright for Deep Learning Solution Designed for Business appeared first on insideHPC.
|
by MichaelS on (#1SBH2)
"In the HPC domain, Python can be used to develop a wide range of applications. While tight loops may still need to be coded in C or FORTRAN, Python can still be used. As more systems become available with coprocessors or accelerators, Python can be used to offload the main CPU and take advantage of the coprocessor. pyMIC is a Python Offload Module for the Intel Xeon Phi Coprocessor and is available at popular open source code repositories." The post Python and HPC appeared first on insideHPC.
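The division of labor described above can be sketched in plain NumPy; this is an illustrative stand-in for the offload pattern, not the actual pyMIC API, and the `offload_daxpy` helper is hypothetical (here the "device" kernel simply runs on the host):

```python
import numpy as np

def offload_daxpy(a, x, y):
    """y <- a*x + y: the kind of tight numeric kernel one would hand
    to a coprocessor (or a compiled library) rather than write as a
    pure-Python loop. Hypothetical helper; the device is simulated
    on the host via NumPy's vectorized arithmetic."""
    return a * x + y

# Host side: Python prepares the data and orchestrates the work.
x = np.ones(1_000_000)
y = np.full(1_000_000, 2.0)
result = offload_daxpy(3.0, x, y)  # each element: 3*1 + 2 = 5
```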
|
by Rich Brueckner on (#1S8MS)
Today Dell Inc. and EMC Corp. announced that they intend to close the transaction to combine Dell and EMC on Wednesday, September 7, 2016. The name of the newly combined company will be Dell Technologies. The post Dell and EMC Transaction to Close on September 7, 2016 appeared first on insideHPC.
|
by Rich Brueckner on (#1S825)
"Bridges has enabled early scientific successes, for example in metagenomics, organic semiconductor electrochemistry, genome assembly in endangered species, and public health decision-making. Over 2,300 users currently have access to Bridges for an extremely wide range of research spanning neuroscience, machine learning, biology, the social sciences, computer science, engineering, and many other fields." The post Bridges Supercomputer Enters Production at PSC appeared first on insideHPC.
|
by Douglas Eadline on (#1S7WR)
The move to network offloading is the first step in co-designed systems. A large amount of overhead is required to service the huge number of packets required for modern data rates. This amount of overhead can significantly reduce network performance. Offloading network processing to the network interface card helped solve this bottleneck as well as some others. The post Co-Design Offloading appeared first on insideHPC.
|
by Rich Brueckner on (#1S7VE)
Paul Messina presented this talk at the 2016 Argonne Training Program on Extreme-Scale Computing. "The President's NSCI initiative calls for the development of Exascale computing capabilities. The U.S. Department of Energy has been charged with carrying out that role in an initiative called the Exascale Computing Project (ECP)." Messina has been tapped to lead the project, heading a team with representation from the six major participating DOE national laboratories: Argonne, Los Alamos, Lawrence Berkeley, Lawrence Livermore, Oak Ridge and Sandia. The project program office is located at Oak Ridge. The post Paul Messina Presents: A Path to Capable Exascale Computing appeared first on insideHPC.
|
by staff on (#1S7QS)
Today Mellanox announced that SysEleven in Germany used the company's 25/50/100GbE Open Ethernet solutions to build a new SSD-based, fully-automated cloud datacenter. "We chose the Mellanox suite of products because it allows us to fully automate our state-of-the-art Cloud data center," said Harald Wagener, CTO, SysEleven. "Mellanox solutions are highly scalable and cost effective, allowing us to leverage the company's best-in-class Ethernet technology that features the industry's best bandwidth with the flexibility of the OpenStack open architecture." The post Mellanox Ethernet Solutions Power Germany's Most Advanced Cloud Datacenter appeared first on insideHPC.
|
by staff on (#1S4S2)
The Department of Energy has funded $3.8 million for 13 new industry projects as part of its HPC4Mfg program. "We're excited about this second round of projects because companies are bringing forward challenges that we can help address, which result in advancing innovation in U.S. manufacturing and increasing our economic competitiveness," said LLNL mathematician Peg Folta, the director of the HPC4Mfg Program. The post DOE Funds 13 HPC4Mfg Clean Energy Projects appeared first on insideHPC.
|
by staff on (#1S4JT)
Indiana University plans to unveil three new HPC resources at a launch event on Sept. 1: Jetstream, Big Red II+, and Diet. "With these new systems, IU continues to provide our researchers the leading-edge computational tools needed for the scale of today's research problems," said Brad Wheeler, IU vice president for IT and CIO. "Each of these systems is quite distinct in its purpose to meet the needs of our researchers and students." The post Indiana University to Launch Three New HPC Systems appeared first on insideHPC.
|
by Rich Brueckner on (#1S4E3)
Thomas Schulthess presented this talk at the MVAPICH User Group. "Implementation of exascale computing will be different in that application performance is supposed to play a central role in determining the system performance, rather than just considering floating point performance of the high-performance Linpack benchmark. This immediately raises the question as to what the yardstick will be, by which we measure progress towards exascale computing. I will discuss what type of performance improvements will be needed to reach kilometer-scale global climate and weather simulations. This challenge will probably require more than exascale performance." The post Exascale Computing – What are the Goals and the Baseline? appeared first on insideHPC.
|
by staff on (#1S3ZW)
Seven women who work in IT departments at research institutions around the country have been selected to help build and operate the high performance SCinet conference network at SC16. The announcement came from the Women in IT Networking at SC program, also known as WINS. The post WINS Program Selects Seven Women to Help Build SCinet at SC16 appeared first on insideHPC.
|
by Rich Brueckner on (#1S3WE)
We'd like to invite our readers to participate in our new HPC Customer Experience Survey. It's an effort to better understand our readers and what is really happening out there in the world of High Performance Computing. "This survey should take less than 10 minutes to complete. All information you provide will be treated as private and kept confidential." The post Requesting Your Input on the HPC Customer Experience Survey appeared first on insideHPC.
|
by Rich Brueckner on (#1S3TJ)
Today SGI announced a significant investment in extreme scale software research at the Irish Centre for High-End Computing (ICHEC), a top European center. The investment highlights the commitment of SGI to the European software research community. These resources, including SGI application software and supercomputing hardware expertise, will assist scientists as they explore issues related to climate change, weather forecasting, and environmental research among many other topics. The post SGI Opens European Research Centre at ICHEC in Ireland appeared first on insideHPC.
|
by Rich Brueckner on (#1S0MT)
"Galaxies are complex—many physical processes operate simultaneously, and over a huge range of scales in space and time. As a result, accurately modeling the formation and evolution of galaxies over the lifetime of the universe presents tremendous technical challenges. In this talk I will describe some of the important unanswered questions regarding galaxy formation, discuss in general terms how we simulate the formation of galaxies on a computer, and present simulations (and accompanying published results) that the Enzo collaboration has recently done on the Blue Waters supercomputer. In particular, I will focus on the transition from metal-free to metal-enriched star formation in the universe, as well as the luminosity function of the earliest generations of galaxies and how we might observe it with the upcoming James Webb Space Telescope." The post Simulating the Earliest Generations of Galaxies with Enzo and Blue Waters appeared first on insideHPC.
|
by Rich Brueckner on (#1S0BZ)
Karl Schulz from Intel presented this talk at the 4th Annual MVAPICH User Group meeting. "Today, many supercomputing sites spend considerable effort aggregating a large suite of open-source projects on top of their chosen base Linux distribution in order to provide a capable HPC environment for their users. This presentation will introduce a new, open-source HPC community (OpenHPC) that is focused on providing HPC-centric package builds for a variety of common building-blocks in an effort to minimize duplication, implement integration testing to gain validation confidence, incorporate ongoing novel R&D efforts, and provide a platform to share configuration recipes from a variety of sites." The post OpenHPC – Community Building Blocks for HPC Systems appeared first on insideHPC.
|
by MichaelS on (#1S08N)
Coming in the second half of 2016: The HPE Apollo 6500 System provides the tools and the confidence to deliver high performance computing (HPC) innovation. The system consists of three key elements: The HPE ProLiant XL270 Gen9 Server tray, the HPE Apollo 6500 Chassis, and the HPE Apollo 6000 Power Shelf. Although final configurations and performance are not yet available, the system appears capable of delivering over 40 teraflop/s double precision, and significantly more in single or half precision modes. The post HPE Solutions for Deep Learning appeared first on insideHPC.
|
by Rich Brueckner on (#1S05F)
In this podcast, the Radio Free HPC team looks at why it's so difficult for new processor architectures to gain traction in HPC and the datacenter. Plus, we introduce a new regular feature for our show: The Catch of the Week. The post Radio Free HPC Looks at Alternative Processors for High Performance Computing appeared first on insideHPC.
|
by Rich Brueckner on (#1RXE3)
In this video, students describe their learning experience at the 2016 PRACE Summer of HPC program in Barcelona. "The PRACE Summer of HPC is a PRACE outreach and training program that offers summer placements at top HPC centers across Europe to late-stage undergraduates and early-stage postgraduate students. Up to twenty top applicants from across Europe will be selected to participate. Participants spend two months working on projects related to PRACE technical or industrial work and produce a report and a visualization or video of their results." The post Students Learn Supercomputing at the Summer of HPC in Barcelona appeared first on insideHPC.
|
by Rich Brueckner on (#1RXD8)
SC16 has extended the application deadline for its Impact Showcase, a forum designed to show attendees why HPC Matters in the real world. Submissions are now due Sept. 15. The post There's Still Time to Show Why HPC Matters in the SC16 Impact Showcase appeared first on insideHPC.
|
by staff on (#1RTXR)
Over at the SC16 Blog, JP Vetters writes that planning for the SCinet high-bandwidth conference network is a multiyear process. "The success of any large conference depends on the often unseen hard work of many. During the last quarter century, the SCinet team has strived to perfect its routine so that conference-goers can experience a smoothly run Show." The post SCinet Preps World's Fastest Network Infrastructure at SC16 appeared first on insideHPC.
|