by staff on (#Q8FS)
Today Cray announced the creation of the Cray Europe, Middle East and Africa (EMEA) Research Lab. The Cray EMEA Research Lab will foster deep technical collaborations with key customers and partners, and will serve as the focal point for the company’s technical engagements with the European HPC ecosystem. The post Cray Opens EMEA Research Lab in Bristol appeared first on insideHPC.
|
High-Performance Computing News Analysis | insideHPC
Link | https://insidehpc.com/ |
Feed | http://insidehpc.com/feed/ |
Updated | 2024-11-06 16:00 |
by Rich Brueckner on (#Q8P2)
SC16 has issued a Call for Proposals for a new initiative that aims to integrate aspects of past technical papers into the Student Cluster Competition. The post Call for Benchmark Proposals: SC16 Student Cluster Competition appeared first on insideHPC.
|
by Rich Brueckner on (#Q8HY)
"When looking to buy a used car, you kick the tires, make sure the radio works, check underneath for leaks, etc. You should be just as careful when deciding which nodes to use to run job scripts. At the NASA Advanced Supercomputing Facility (NAS), our prologue and epilogue have grown almost into an extension of the O/S to make sure resources that are nominally capable of running jobs are, in fact, able to run the jobs. This presentation describes the issues and solutions used by the NAS for this purpose." The post Video: Prologue O/S – Improving the Odds of Job Success appeared first on insideHPC.
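The idea of a prologue "kicking the tires" before a job runs can be sketched in a few lines. This is a hypothetical illustration, not NAS's actual prologue; the specific checks, thresholds, and helper names are assumptions.

```python
#!/usr/bin/env python3
# Hypothetical sketch of pre-job health checks a batch-system prologue
# might run before handing a node to a job. Thresholds are illustrative.
import os
import shutil
import sys

MIN_FREE_TMP_BYTES = 5 * 1024**3  # assumed threshold: 5 GB free in /tmp
MAX_LOAD_PER_CPU = 0.5            # assumed: node should be nearly idle

def node_is_healthy() -> bool:
    # 1. Scratch space: jobs fail in odd ways when /tmp is nearly full.
    if shutil.disk_usage("/tmp").free < MIN_FREE_TMP_BYTES:
        return False
    # 2. Stray load: leftover processes from a previous job inflate the
    #    1-minute load average well above what an idle node should show.
    load1, _, _ = os.getloadavg()
    if load1 > MAX_LOAD_PER_CPU * (os.cpu_count() or 1):
        return False
    return True

if __name__ == "__main__":
    # A nonzero prologue exit status typically tells the scheduler not to
    # start the job on this node (exact semantics depend on the scheduler).
    sys.exit(0 if node_is_healthy() else 1)
```

A real prologue would add many more checks (filesystem mounts, network interfaces, GPU state), but the shape is the same: cheap tests up front, nonzero exit on any failure.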
|
by Rich Brueckner on (#Q8E7)
I’ve been commissioned by insideHPC to get the scoop on who’s jumping ship and moving on up in high performance computing. Familiar names this week include Mary Bass, Wilf Pinfold, and Mike Vildibill. The post HPC People on the Move: October Edition appeared first on insideHPC.
|
by Rich Brueckner on (#Q826)
Professor Taisuke Boku from the University of Tsukuba presented this talk at the PBS User Group. "We have been operating a large-scale GPU cluster, HA-PACS, with 332 computation nodes equipped with 1,328 GPUs managed by the PBS Professional scheduler. The users are spread out across a wide variety of computational science fields, with resource requests ranging from a single node to full-scale parallel processing. There are also several categories of user groups with paid and free scientific projects. It is challenging to operate such a large system while maintaining a high utilization rate as well as fairness across these user groups. We have successfully kept job utilization above 85%-90% under multiple constraints." The post Case Study: PBS Pro on a Large Scale Scientific GPU Cluster appeared first on insideHPC.
|
by Rich Brueckner on (#Q7DC)
In this special guest feature, Kim McMahon from McMahon Consulting writes that, for High Performance Computing vendors, HPC marketing is a completely different animal than B2B. The post 10 Reasons HPC Marketing Differs from B2B appeared first on insideHPC.
|
by Rich Brueckner on (#Q59A)
Christopher Lynnes from NASA presented this talk at the HPC User Forum. "The Earth Observing System Data and Information System is a key core capability in NASA’s Earth Science Data Systems Program. It provides end-to-end capabilities for managing NASA’s Earth science data from various sources—satellites, aircraft, field measurements, and various other programs." The post Evolution of NASA Earth Science Data Systems in the Era of Big Data appeared first on insideHPC.
|
by Rich Brueckner on (#Q582)
"In business and commercial computing, momentum towards cloud and big data has already built up to the point where it is unstoppable. In technical computing, the growth of the Internet of Things is pressing towards convergence of technologies, but obstacles remain, in that HPC and big data have evolved different hardware and software systems, while OpenStack, the open source cloud computing platform, does not work well with HPC." The post Scientific Cloud Computing Lags Behind the Enterprise appeared first on insideHPC.
|
by Rich Brueckner on (#Q2TV)
Tommaso Cecchi from DDN presented this talk at the HPCAC Spain Conference. "IME unleashes a new I/O provisioning paradigm. This breakthrough, software defined storage application introduces a whole new tier of transparent, extendable, non-volatile memory (NVM), that provides game-changing latency reduction and greater bandwidth and IOPS performance for the next generation of performance hungry scientific, analytic and big data applications – all while offering significantly greater economic and operational efficiency than today’s traditional disk-based and all flash array storage approaches that are currently used to scale performance." The post Video: DDN Infinite Memory Engine IME appeared first on insideHPC.
|
by Rich Brueckner on (#Q2SZ)
NOAA is seeking an HPC Program Manager in our HPC Job of the Week. The post Job of the Week: HPC Program Manager at NOAA appeared first on insideHPC.
|
by Rich Brueckner on (#Q00W)
ISC 2016 has issued its Call for BoFs. "Like-minded ISC High Performance conference attendees come together in our informal Birds-of-a-Feather (BoF) sessions to discuss current HPC topics, network and share their thoughts and ideas. Each 60-minute BoF session addresses a different topic and is led by one or more individuals with expertise in the area. The ISC 2016 BoF sessions will be held from Monday, June 20 through Wednesday, June 22." The post ISC 2016 Issues Call for BoFs appeared first on insideHPC.
|
by Rich Brueckner on (#PZXW)
“Argonne National Laboratory is one of the labs helping to lead the exascale push for the nation with the DOE. We lead in a number of areas with software and storage systems and applied math. And our expertise is really focused on those new ideas, those novel new things that will allow us to sort of leapfrog the standard slow evolution of technology and get something further out ahead, three years, five years out ahead. And that’s where our research is focused.” The post Pete Beckman Presents: Exascale Architecture Trends appeared first on insideHPC.
|
by Rich Brueckner on (#PZTC)
HLRS in Stuttgart, Germany has upgraded its Hornet system to Hazel Hen, a 7.42 Petaflop Cray XC40 supercomputer. Twice as fast as its predecessor, Hazel Hen is now ready to support European scientific and industrial users in their pursuit of R&D breakthroughs. "In case you're wondering, HLRS chose the name Hazel Hen because it's the one animal that eats Hornets." The post With Hazel Hen Cray XC40, HLRS Upgrades to 7.42 Petaflops appeared first on insideHPC.
|
by Rich Brueckner on (#PZQA)
"As HPC resource requirements continue to increase, the need for finding economical solutions to handle the rising requirements increases as well. There are numerous ways to approach this challenge. For example, leveraging existing equipment, adding new or used equipment, and handling uncommon peak usage dynamically through cloud solutions managed by a central job management system can prove to be highly available and resource rich, while remaining economical. In this presentation we will discuss how Wayne State University implemented a combination of these approaches to dramatically increase our compute resources for the equivalent cost of only a few new servers." The post Maximizing HPC Compute Resources with Minimal Cost appeared first on insideHPC.
|
by Rich Brueckner on (#PZ3R)
XSEDE is now accepting 2016 Research Allocation Requests for the Bridges supercomputer. Available starting in January 2016 at the Pittsburgh Supercomputing Center, Bridges represents a new concept in high performance computing: a system designed to support familiar, convenient software and environments for both traditional and non-traditional HPC users. The post Submit Your 2016 Research Allocation Requests for the Bridges Supercomputer appeared first on insideHPC.
|
by Rich Brueckner on (#PWFX)
What can we do to help ocean coral survive global warming? In this TACC podcast, Jorge Salazar looks at how researchers are using the Stampede supercomputer to investigate how corals can genetically adapt to warmer waters. The post Podcast: Supercomputing Powers Efforts to Save Ocean Coral appeared first on insideHPC.
|
by Rich Brueckner on (#PW90)
"This webinar replay discusses the use of high performance computing (HPC) in the design of aircraft jet engines and gas turbines used to generate electrical power. HPC is the critical enabler in this process, but applying HPC effectively in an industrial design setting requires an integrated hardware/software solution and a clear understanding of how the value outweighs the costs. This webinar will share GE’s perspective on the successful deployment and utilization of HPC, offer examples of HPC’s impact on GE products, and discuss future trends." The post Video: HPC in the Design of Aircraft Engines appeared first on insideHPC.
|
by Rich Brueckner on (#PW5G)
Today HP and SanDisk announced a long-term partnership to collaborate on a new technology within the Storage Class Memory (SCM) category. The partnership will combine HP’s Memristor technology and expertise with SanDisk’s non-volatile ReRAM memory technology and manufacturing and design expertise to create new enterprise-wide solutions for Memory-driven Computing. The two companies will also partner in enhancing data center solutions with SSDs. The post HP and SanDisk to Team on Memory-Driven Computing appeared first on insideHPC.
|
by staff on (#PW40)
Today IBM announced the launch of a new LC series of servers that infuse technologies from members of the OpenPOWER Foundation and are part of IBM's Power Systems portfolio of servers. According to IBM, the new LC systems perform data analytics workloads faster and more cheaply than comparable x86-based servers. The post IBM Launches LC OpenPower Servers appeared first on insideHPC.
|
by Rich Brueckner on (#PW0W)
"The Robinhood Policy Engine is a versatile tool to manage the contents of large file systems. It maintains a replica of filesystem metadata in a database that can be queried at will. It makes it possible to schedule mass actions on filesystem entries by defining attribute-based policies." The post Lustre Video: Robinhood v3 Policy Engine and Beyond appeared first on insideHPC.
|
by MichaelS on (#PVZA)
The Morton order is a mapping of multidimensional data to one dimension that preserves locality of the data. This is also known as Z-order. "By using Morton ordering as an alternative to row-major or column-major data storage, significant speedups can be achieved on the Intel Xeon Phi coprocessor or Intel Xeon CPU when performing matrix multiplies or matrix transposes." The post Morton Ordering on the Intel Xeon Phi appeared first on insideHPC.
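The mapping itself is simple: interleave the bits of the row and column indices, so that elements close together in 2-D land close together in the 1-D ordering. A minimal sketch (the function names here are illustrative, not from the article):

```python
# Minimal 2-D Morton (Z-order) indexing: interleave the bits of row and
# column so nearby (row, col) pairs map to nearby 1-D positions.

def part1by1(n: int) -> int:
    """Spread the low 16 bits of n so a zero bit sits between each bit
    (bit k moves to bit 2k)."""
    n &= 0xFFFF
    n = (n | (n << 8)) & 0x00FF00FF
    n = (n | (n << 4)) & 0x0F0F0F0F
    n = (n | (n << 2)) & 0x33333333
    n = (n | (n << 1)) & 0x55555555
    return n

def morton2(row: int, col: int) -> int:
    """Morton index of (row, col): row bits in odd positions,
    col bits in even positions."""
    return (part1by1(row) << 1) | part1by1(col)

# The four elements of a 2x2 tile are contiguous in Morton order:
# morton2(0, 0) == 0, morton2(0, 1) == 1,
# morton2(1, 0) == 2, morton2(1, 1) == 3
```

Because every aligned 2x2, 4x4, 8x8, … tile occupies a contiguous run of Morton indices, a tiled matrix multiply or transpose touches memory in large sequential chunks, which is where the cache-friendliness the article describes comes from.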
|
by Rich Brueckner on (#PS34)
We'd like to invite our readers to participate in our new HPC & Large Enterprise Purchase Sentiment Survey. "It's designed to get a feel for the technology purchasing plans of HPC and large enterprise data centers. We'll also ask some questions about how your data center is approaching new technologies, usage models, and the like. Additionally, we'd like to know how you regard major vendors in the data center space." The post Requesting Your Input on the HPC & Large Enterprise Purchase Sentiment Survey appeared first on insideHPC.
|
by Rich Brueckner on (#PS1A)
“Although the use of GPUs has become widespread nowadays, including GPUs in current HPC clusters presents several drawbacks, mainly related to increased costs. In this talk we present how the use of remote GPU virtualization may overcome these drawbacks while noticeably increasing the overall cluster throughput. The talk presents real throughput measurements made using the rCUDA remote GPU virtualization middleware.” The post Video: Is Remote GPU Virtualization Useful? appeared first on insideHPC.
|
by Rich Brueckner on (#PRY7)
Sometimes the inbox for HPC news fills up faster than we can handle. In an effort to keep up, we've compiled noteworthy news into a Jeopardy-style speed round that phrases topics in the form of a question. The post HPC News Bytes for Oct. 7, 2015 appeared first on insideHPC.
|
by Rich Brueckner on (#PRJ4)
Joseph Lombardo from UNLV presented this talk at the PBS Works User Group. "Lombardo will highlight results from an Alzheimer’s research project that benefited from using PBS Professional. He will then describe the NSCEE’s new system at the Supernap and how this system can be used to advance research for HPC users in both academia/R&D and commercial industry. Lombardo will also highlight two emerging projects: the new School of Medicine and a new technology park." The post Leveraging HPC for Alzheimer’s Research and Beyond appeared first on insideHPC.
|
by staff on (#PRGH)
A new private cloud HPC system will soon benefit bioinformatics researchers in their work on bacterial pathogens. The Cloud Infrastructure for Microbial Bioinformatics (CLIMB) project, a collaboration between the University of Birmingham, the University of Warwick, Cardiff University, and Swansea University, will create a free-to-use, world-leading cyber infrastructure specifically designed for microbial bioinformatics research. The post Building the CLIMB Project – World’s Largest Single System for Microbial Bioinformatics appeared first on insideHPC.
|
by Rich Brueckner on (#PQME)
In this video from the Disruptive Technologies Session at the 2015 HPC User Forum, Intel's Ralph Biesemeyer presents: Intel 3D XPoint Technology. “For decades, the industry has searched for ways to reduce the lag time between the processor and data to allow much faster analysis,” said Rob Crooke, senior vice president and general manager of Intel’s Non-Volatile Memory Solutions Group. “This new class of non-volatile memory achieves this goal and brings game-changing performance to memory and storage solutions.” The post Video: Intel 3D XPoint Technology appeared first on insideHPC.
|
by Rich Brueckner on (#PMZW)
In this video from the LAD'15 Conference, Daniel Kobras from science+computing presents: Operational Characteristics of a ZFS-backed Lustre Filesystem. The post Operational Characteristics of a ZFS-backed Lustre Filesystem appeared first on insideHPC.
|
by staff on (#PMWA)
Today Seagate Technology announced it has completed its previously announced acquisition of Dot Hill Systems, makers of innovative software and hardware storage systems. The post Seagate Acquires Dot Hill Systems appeared first on insideHPC.
|
by Rich Brueckner on (#PMWB)
In this video from the recent Argonne Training Program on Extreme-Scale Computing event, Argonne computational scientist Dr. Marius Stan discusses Computational Science in Cinema. You may know Dr. Stan from the role he played as Bogdan Wolynetz, the car wash owner on the hit TV show Breaking Bad. The post Argonne’s Dr. Marius Stan from Breaking Bad on Computational Science in Cinema appeared first on insideHPC.
|
by staff on (#PMRP)
XSEDE's new Jetstream shared cloud resource is coming online early next year, but you can now apply for Jetstream research allocations. The post Apply Now for XSEDE’s Jetstream Shared Cloud Research Allocations appeared first on insideHPC.
|
by Rich Brueckner on (#PMJP)
Steve Conway from IDC describes high performance computing and how it really does impact our daily lives. "When you put HPC together with some of the most creative, scientific, engineering & business minds on the planet, magical things happen." The post IDC’s Steve Conway on Why HPC Matters appeared first on insideHPC.
|
by staff on (#PM6W)
With the explosion of data over the past few years, data storage has become a hot topic among corporate decision makers. It is no longer sufficient to have adequate space for the massive quantities of data that must be stored; it is just as critical that stored data be accessible without any bottlenecks that impede the ability to process and analyze data in real time. The post RDMA Enabling Storage Technology Revolution appeared first on insideHPC.
|
by Rich Brueckner on (#PHDW)
Hussein Harake from CSCS presented this talk at the HPC Advisory Council Spain Conference. “IME unleashes a new I/O provisioning paradigm. This breakthrough, software defined storage application introduces a whole new tier of transparent, extendable, non-volatile memory (NVM), that provides game-changing latency reduction and greater bandwidth and IOPS performance for the next generation of performance hungry scientific, analytic and big data applications – all while offering significantly greater economic and operational efficiency than today’s traditional disk-based and all flash array storage approaches that are currently used to scale performance.” The post Video: Infinite Memory Engine (IME) Burst Buffer Experience at CSCS appeared first on insideHPC.
|
by staff on (#PH8G)
NERSC has selected a number of HPC research projects to participate in the center’s new Burst Buffer Early User Program, where they will be able to test and run their codes using the new Burst Buffer feature on the center’s newest supercomputer, Cori. The post Users to Test DataWarp Burst Buffer on Cori Supercomputer appeared first on insideHPC.
|
by staff on (#PH37)
“We are excited that the H2020 SAGE Project gives us the opportunity to research and move HPC storage into the Exascale age,” said Ken Claffey, vice president and general manager, Seagate HPC systems business. “Seagate will contribute its unique skills and device technology to address the convergence of Exascale and Big Data, with an excellent selection of participants each bringing their own capabilities together to build the future of storage on an unprecedented scale.” The post Seagate to Lead Sage Storage Project for Exascale Horizon 2020 appeared first on insideHPC.
|
by Rich Brueckner on (#PH1S)
"For High Performance Computing users who leverage open-source Lustre software, a good file system for big data is now getting even better. Building on its substantial contributions to the Lustre community, Intel is rolling out new features that will make the file system more scalable, easier to use, and more accessible to enterprise customers." The post Video: Intel Commitment to Lustre appeared first on insideHPC.
|
by Rich Brueckner on (#PGYZ)
Can 3D-stacking technology topple the long-standing "memory wall" that's been holding back HPC application performance? A new paper from the Barcelona Supercomputing Center written in collaboration with experts from Chalmers University and Lawrence Livermore National Laboratory concludes that it will take more than just the simple replacement of conventional DIMMs with 3D-stacked devices. The post New Paper: Can 3D-Stacking Topple the Memory Wall? appeared first on insideHPC.
|
by Rich Brueckner on (#PGDK)
In this podcast, the Radio Free HPC team looks at the new round of Grand Challenges targeted by the National Strategic Computing Initiative (NSCI). The conversation was sparked by a Scientific Computing editorial by IBM’s Dave Turek. The post Radio Free HPC Looks at the Grand Challenges of NSCI appeared first on insideHPC.
|
by staff on (#PG8N)
ESnet has released open source code for building online Interactive Network Portals. "Now that the libraries are made available, the team hopes that other organizations will take the code, use it, add to it and work with ESnet to make the improvements available to the community." The post ESnet Releases Code for Building Online Interactive Network Portals appeared first on insideHPC.
|
by staff on (#PE66)
Tom Wilkie from Scientific Computing World reports on how China’s HPC vendors are seeking export markets with the support of their Government. With exhibits and various talks at ISC 2015, Inspur and Sugon were busy showcasing their technologies for the European market. The post Chinese HPC Vendors Look to Expand Overseas appeared first on insideHPC.
|
by Rich Brueckner on (#PE3T)
"DMF has been protecting data in some of the industry's largest virtualized environments all over the world, enabling them to maintain uninterrupted online access to data for more than 20 years. Some customers have installations with over 100PB of online data capacity and billions of files, which they are able to manage at a fraction of the cost of conventional online architectures." The post Video: DMF and Tiering Update appeared first on insideHPC.
|
by Rich Brueckner on (#PBSS)
SC15 has stepped up with a series of blog posts previewing the conference this year, an effort that seems much more engaging than the random press releases we've seen in the past. The post SC15 Blog Puts the Spotlight on Invited Talks appeared first on insideHPC.
|
by Rich Brueckner on (#PBQX)
"ThroughPuter PaaS is purpose-built for secure, dynamic cloud computing in the parallel processing era. ThroughPuter offers unique, realtime application load and type adaptive parallel processing: get the speed-up from parallel execution cost-efficiently, where and when any given program/task benefits most from the parallel processing resources. Addressing the parallel processing challenge takes a full programming-to-execution platform approach." The post Executing Multiple Dynamically Parallelized Programs on Dynamically Shared Cloud Processors appeared first on insideHPC.
|
by Rich Brueckner on (#P9J0)
The good folks from the StartupHPC-15 Conference have posted the agenda for their meeting at SC15. The day-long conference takes place Monday, Nov. 16 at the Capital Factory in downtown Austin, Texas. "Does your startup have ties to High Performance Computing? Please come, meet like-minded people, listen to industry notables, and help StartupHPC continue its efforts to build a support community." The post StartupHPC-15 Conference Posts Agenda for Austin Event appeared first on insideHPC.
|
by Rich Brueckner on (#P940)
“Andreas presents an overview of the features currently under development for the upcoming Lustre 2.8 and 2.9 releases. This includes Layout Enhancement, Progressive File Layouts, Data-on-MDT, and improved single-client metadata and IO performance. In addition, several Lustre-specific ZFS improvements are also under development that will be available in this timeframe.” The post Video: Lustre 2.9 and Beyond appeared first on insideHPC.
|
by Rich Brueckner on (#P8WX)
"The HP Apollo 8000 supercomputing platform approaches HPC from an entirely new perspective as the system is cooled directly with warm water. This is done through a “dry-disconnect” cooling concept that has been implemented with the simple but efficient use of heat pipes. Unlike cooling fans, which are designed for maximum load, the heat pipes can be optimized by administrators. The approach allows significantly greater performance density, cutting energy consumption in half and creating synergies with other building energy systems, relative to a strictly air-cooled system." The post Video: HP R&D and HPC appeared first on insideHPC.
|
by Rich Brueckner on (#P8V5)
"By adopting an MDX philosophy, engineers are able to test designs automatically from the early concept stages and against all of the physical factors that might influence a system’s performance. It assesses which set of design parameters will break a system, and which will improve it. This pushes back the simulation process to force engineers to question every assumption they have made within a design, and optimise it appropriately by assessing a simulation with multiple operating scenarios." The post Is MDX the Future of Engineering Simulation? appeared first on insideHPC.
|
by Rich Brueckner on (#P8S9)
“NOAA will acquire software engineering support and associated tools to re-architect NOAA’s applications to run efficiently on next generation fine-grain HPC architectures.” From a recent procurement document: “Finegrain architecture (FGA) is defined as: a processing unit that supports more than 60 concurrent threads in hardware (e.g. GPU or a large core-count device).” The post Video: NOAA Software Engineering for Novel Architectures (SENA) Project appeared first on insideHPC.
|
by staff on (#P607)
Today the Barcelona Supercomputing Center (BSC) announced the opening of its Performance Optimization and Productivity (POP) Center of Excellence. With POP, developers of applications that require high performance computing can now count on free advice from European experts to analyze the performance of their codes. The post BSC Opens Productivity Center of Excellence for HPC Apps appeared first on insideHPC.
|