by staff on (#MANP)
Over at Oak Ridge, Eric Gedenk writes that monitoring the status of complex supercomputer systems is an ongoing challenge. Now, Ross Miller from OLCF has developed DDNtool, which provides a single interface to 72 controllers in near real time. The post DDNtool Streamlines File System Monitoring at Oak Ridge appeared first on insideHPC.
|
High-Performance Computing News Analysis | insideHPC
Link | https://insidehpc.com/ |
Feed | http://insidehpc.com/feed/ |
Updated | 2024-11-26 10:00 |
by Rich Brueckner on (#MAKT)
Virginia Tech is seeking an HPC Systems Specialist in our Job of the Week. The post Job of the Week: HPC Systems Specialist at Virginia Tech appeared first on insideHPC.
|
by Rich Brueckner on (#M86K)
In this video from the Neuroinformatics 2015 Conference, Thomas Lippert from Jülich presents: Why Does the Human Brain Project Need HPC and Data Analytics Infrastructures? HBP, the Human Brain Project, is one of two European flagship projects foreseen to run for 10 years. The HBP aims to create an open, neuroscience-driven infrastructure for simulation and big-data-aided modeling and research with a credible user program. The post Thomas Lippert on Why the Human Brain Project Needs HPC and Data Analytics Infrastructures appeared first on insideHPC.
|
by staff on (#M84X)
Bull Atos will provide a second High Performance Computing cluster to the German Waterways Engineering and Research Institute (BAW). Following the installation of the first computer in 2012, the federal agency selected a Bull supercomputer for one of its HPC clusters dedicated to complex simulations. The new supercomputer will be operational in October, and under the new contract Bull will also provide maintenance services for five years. The post BAW in Germany to Install Additional Bull Supercomputer appeared first on insideHPC.
|
by Rich Brueckner on (#M5KA)
Zachary Cobell from ARCADIS-US presented this talk at the HPC User Forum. "As a global leader for designing sustainable coastlines and waterways, ARCADIS believes in developing multi-faceted, integrated solutions to restore, protect, and enhance sensitive coastal areas. We are working with the Army Corps and the state of Louisiana to design these projects with and from nature, in effect using nature as a dynamic engine." The post Video: HPC for the Louisiana Coastal Master Plan appeared first on insideHPC.
|
by Rich Brueckner on (#M56B)
The Center for Advanced Computing Systems has announced its agenda for the Directives and Tools for Accelerators Workshop. Also known as the Seismic Programming Shift Workshop, the event takes place Oct. 11-13 at the University of Houston. The post Satoshi Matsuoka to Keynote Workshop on Directives and Tools for Accelerators appeared first on insideHPC.
|
by staff on (#M54N)
Ace Computers in Illinois reports that the company is expanding its high performance computing footprint in the Oil & Gas industry. Not a stranger to this space, the company has more than 30 years of experience designing powerful, scalable computers for leading energy suppliers. The post Ace Computers Steps Up HPC for Oil & Gas appeared first on insideHPC.
|
by staff on (#M534)
Convergent Science reports that the company has adopted the Allinea Forge development tool suite. As the leader in internal combustion engine (ICE) simulation, Convergent Science is using Allinea to increase the capability and performance of the company's CONVERGE software. The post Allinea Forge Sparks Convergent Science Combustion Simulation appeared first on insideHPC.
|
by Rich Brueckner on (#M4D4)
Bo Ewald from D-Wave Systems presented this Disruptive Technologies talk at the HPC User Forum. "While we are only at the beginning of this journey, quantum computing has the potential to help solve some of the most complex technical, commercial, scientific, and national defense problems that organizations face. We expect that quantum computing will lead to breakthroughs in science, engineering, modeling and simulation, financial analysis, optimization, logistics, and national defense applications." The post Bo Ewald Presents: D-Wave Quantum Computing appeared first on insideHPC.
|
by Rich Brueckner on (#M1JJ)
"Starting in 2013, the SC conference organizers launched “HPC Matters†to encourage members of the computational sciences community to share their thoughts, vision, and experiences with how high performance computers are used to improve the lives of people all over the world in more simple terms. Four pillars provide structure to the program: Influencing Daily Lives; Science and Engineering; Economic Impact; and Education."The post Intel’s Diane Bryant to Keynote HPC Matters Plenary at SC15 appeared first on insideHPC.
|
by Rich Brueckner on (#M16P)
In this podcast, David Bump from Dell describes an upcoming workshop on large genomic data sets coming to La Jolla, California on Sept. 15, 2015. "Please join us for this one day workshop featuring presentations from Dell, Appistry, UNC Chapel Hill, Arizona State University, and TGen who all will share their cutting-edge results and best practices for helping labs process, manage, and analyze large genomic data sets. You will also hear from Intel and Nvidia on their latest HPC/Big Data technology innovations." The post Podcast: Dell Workshop on Large Genomic Data sets Coming to La Jolla Sept. 15 appeared first on insideHPC.
|
by MichaelS on (#M15D)
Through profiling, developers and users can identify an application’s hotspots in order to optimize certain sections of the code. In addition to locating where time is spent within an application, profiling tools can also reveal where there is little or no parallelism, along with a number of other factors that may affect performance. Performance tuning can help tremendously in many cases. The post Optimization Through Profiling appeared first on insideHPC.
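As a minimal sketch of the idea (not tied to any specific tool discussed in the article), Python's built-in cProfile module can show where a program spends its time; the functions below are hypothetical stand-ins for real application kernels.

```python
# Minimal profiling sketch using Python's built-in cProfile module.
# The workload is hypothetical; production HPC codes would profile
# their own kernels with tools such as gprof, VTune, or similar.
import cProfile
import pstats


def slow_sum(n):
    # Deliberately naive loop: the kind of "hotspot" a profiler flags.
    total = 0
    for i in range(n):
        total += i * i
    return total


def fast_sum(n):
    # Closed-form alternative; negligible time compared to slow_sum.
    return (n - 1) * n * (2 * n - 1) // 6


def main():
    slow_sum(2_000_000)
    fast_sum(2_000_000)


if __name__ == "__main__":
    profiler = cProfile.Profile()
    profiler.runcall(main)
    # Print the functions that consumed the most cumulative time.
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)
```

The profile output makes the imbalance obvious: nearly all the time lands in slow_sum, which is where tuning effort should go.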
|
by Rich Brueckner on (#M0P7)
Christopher Hill from MIT presented this talk at the HPC User Forum. "The MITgcm (MIT General Circulation Model) is a numerical model designed for study of the atmosphere, ocean, and climate. Its non-hydrostatic formulation enables it to simulate fluid phenomena over a wide range of scales; its adjoint capability enables it to be applied to parameter and state estimation problems. By employing fluid isomorphisms, one hydrodynamical kernel can be used to simulate flow in both the atmosphere and ocean." The post Video: HPC in Earth & Planetary Science using MITgcm appeared first on insideHPC.
|
by Rich Brueckner on (#KZ62)
Ohio State University has posted presentations from MUG'15, the MVAPICH User Group meeting. The event took place Aug. 19-21 in Columbus, Ohio. The post Presentations Posted from MUG’15 – MVAPICH User Group appeared first on insideHPC.
|
by staff on (#KY2P)
Today AMD announced the promotion of Raja Koduri to senior vice president and chief architect, Radeon Technologies Group, reporting to president and CEO Dr. Lisa Su. In his expanded role, Koduri is responsible for overseeing all aspects of graphics technologies used in AMD's APU, discrete GPU, semi-custom, and GPU compute products. The post AMD Forms Radeon Technologies Group for Graphics & Immersive Computing appeared first on insideHPC.
|
by staff on (#KXRV)
Today X-ISS rolled out CloudHPC, a consulting service created to guide organizations through the complex process of moving their HPC systems into the cloud environment. The post X-ISS Launches CloudHPC Service appeared first on insideHPC.
|
by staff on (#KXNG)
Today Cray announced a world record by scaling ANSYS Fluent to 129,000 compute cores. "Less than a year ago, ANSYS announced Fluent had scaled to 36,000 cores with the help of NCSA. While the nearly 4x increase over the previous record is significant, it tells only part of the story. ANSYS has broadened the scope of simulations allowing for applicability to a much broader set of real-world problems and products than any other company offers." The post Cray Scales Fluent to 129,000 Compute Cores appeared first on insideHPC.
|
by Rich Brueckner on (#KXJ7)
The first annual Intel HPC Developer Conference is coming to Austin Nov. 14-15 in conjunction with SC15. "The Intel® HPC Developer Conference will bring together developers from around the world to discuss code modernization in high performance computing. Learn what’s next in HPC, its technologies, and its impact on tomorrow’s innovations. Find the solutions to your biggest challenges at the Intel® HPC Developer Conference." The post Intel HPC Developer Conference Coming to SC15 appeared first on insideHPC.
|
by Douglas Eadline on (#KX5Q)
HPC developers want to write code and create new applications. The advanced nature of HPC often requires that this process be tied to the specific hardware and software environment present on a given HPC resource. Developers want to extract maximum performance from HPC hardware without getting mired in the complexities of software tool chains and dependencies. The post Best Practices for Maximizing GPU Resources in HPC Clusters appeared first on insideHPC.
|
by staff on (#KSDR)
Concerns over data center water usage have lately become topical both in the industry and in the general press. This is not a bad thing, as data center water usage is a legitimate concern. The reality is that the problem is rooted in today’s established approaches to data center cooling. The post Reducing Your Data Center “Water Guilt” appeared first on insideHPC.
|
by staff on (#KWW6)
In this special guest feature from Scientific Computing World, Tilo Wettig from the University of Regensburg in Germany describes the unusual design of a supercomputer dedicated to solving some of the most arcane issues in quantum physics. The post How the QPACE 2 Supercomputer is Solving Quantum Physics with Intel Xeon Phi appeared first on insideHPC.
|
by Rich Brueckner on (#KTRA)
In this video, Alexandru Iosup from TU Delft presents: Scalable High Performance Systems. "During this masterclass, Alexandru discussed several steps towards addressing interesting new challenges which emerge in the operation of the datacenters that form the infrastructure of cloud services, and in supporting the dynamic workloads of demanding users. If we succeed, we may not only enable the advent of big science and engineering, and the almost complete automation of many large-scale processes, but also reduce the ecological footprint of datacenters and the entire ICT industry." The post Video: Scalable High Performance Systems appeared first on insideHPC.
|
by Rich Brueckner on (#KSGH)
In this podcast, Jason Stowe from Cycle Computing describes how the Broad Institute is mapping cancer genes with CycleCloud. According to Stowe, Cycle Computing recently ran a 50,000+ core workload for the Broad Institute with low-cost Preemptible VMs on the Google Compute Engine, performing three decades of cancer research computations in a single afternoon. The post Podcast: Preemptible VMs Lower Cost of Cancer Research at Broad Institute appeared first on insideHPC.
|
by staff on (#KSAN)
Over at XSEDE, Scott Gibson writes that computational scientist Paul Delgado says the XSEDE Scholars Program helped him realize his dream of solving real-life problems. The post How the XSEDE Scholars Program Fosters Career Opportunities appeared first on insideHPC.
|
by Rich Brueckner on (#KS57)
After spending a lovely six straight weeks at home, I find myself marveling at how many conferences are in the queue this Fall leading up to SC15 in Austin. Starting this week at the HPC User Forum, insideHPC will be on the road, bringing you the very latest in high performance computing. The post Preview of Fall 2015 HPC Events appeared first on insideHPC.
|
by Rich Brueckner on (#KPYD)
In this video, Douglas P. Wade from NNSA describes the computational challenges the agency faces in the stewardship of the nation's nuclear stockpile. As the Acting Director of the NNSA Office of Advanced Simulation and Computing, Wade looks ahead to future systems on the road to exascale computing. The post Video: Looking to the Future of NNSA Supercomputing appeared first on insideHPC.
|
by Rich Brueckner on (#KPT8)
Today Atos announced that the company has installed the first Petascale supercomputer in Brazil. Designed by Bull, the "Santos Dumont" system will be the largest supercomputer in Latin America. "We are very proud to equip Brazil with a world-class, Petascale High-Performance Computing (HPC) infrastructure and to launch a R&D Center in Petrópolis that is fully integrated with our global R&D,” said Philippe Vannier, Vice-President Executive and Chief Technology Officer at Atos. “With a presence in this country stretching back over more than 50 years, the collaborative ties that bind Bull and now Atos to Brazil in terms of leading-edge technologies are significant.” The post Atos Deploys Petaflop Supercomputer in Brazil appeared first on insideHPC.
|
by Rich Brueckner on (#KM44)
"The AMD graphics cards are uniquely equipped with AMD Multiuser GPU technology embedded into the GPU delivering consistent and predictable performance," said Sean Burke, AMD corporate vice president and general manager, Professional Graphics. "When these AMD GPUs are appropriately configured to the needs of an organization, end users get the same access to the GPU no matter their workload. Each user is provided with the virtualized performance to design, create and execute their workflows without any one user tying up the entire GPU." The post AMD Demos World’s First Hardware-Based Virtualized GPU Solution appeared first on insideHPC.
|
by Rich Brueckner on (#KM1S)
In this video, LLNL scientists discuss the challenges of debugging programs at scale on the Sequoia supercomputer, which has 1.6 million processors. "Bugs in parallel HPC applications are difficult to debug because errors propagate among compute nodes, programmers must debug thousands of nodes or more, and bugs might manifest only at large scale." The post Video: Debugging HPC Applications at Massive Scales appeared first on insideHPC.
|
by Rich Brueckner on (#KHHV)
This video celebrates the 50th anniversary of Britain's first supercomputer, the Ferranti Atlas. "When first switched on in December 1962, Atlas was the world's most powerful computer. Some of the software concepts it pioneered, like 'virtual memory', are among the most important breakthroughs in computer design and still used today." The post Video Retrospective: Ferranti Atlas – Britain’s First Supercomputer appeared first on insideHPC.
|
by Rich Brueckner on (#KHFY)
Stanford University is seeking an HPC Virtualization Specialist in our Job of the Week. The post Job of the Week: HPC Virtualization Specialist at Stanford University appeared first on insideHPC.
|
by Rich Brueckner on (#KESJ)
In this video from the Department of Energy Computational Science Graduate Fellowship meeting, Jarrod McClean from Harvard University presents: Quantum Computers and Quantum Chemistry. The post Video: Quantum Computers and Quantum Chemistry appeared first on insideHPC.
|
by staff on (#KEKP)
The Alan Turing Institute is the UK’s national institute for data science. It has marked its first few days of operations with the announcement of its new director, the confirmation of £10 million of research funding from Lloyd’s Register Foundation, a research partnership with GCHQ, collaboration with the EPSRC and Cray, and the commencement of its first research activities. The post Alan Turing Institute Hits the Ground Running for HPC & Data Science appeared first on insideHPC.
|
by Rich Brueckner on (#KEGF)
"Argonne National Laboratory is one of the laboratories helping to lead the exascale push for the nation with the DOE. We lead in a numbers of areas with software and storage systems and applied math. And we're really focusing, our expertise is focusing on those new ideas, those novel new things that will allow us to sort of leapfrog the standard slow evolution of technology and get something further out ahead, three years, five years out ahead. And that's where our research is focused."The post Video: Argonne’s Pete Beckman Describes the Challenges of Exascale appeared first on insideHPC.
|
by staff on (#KEEX)
Today Engility announced it has been awarded a prime position on a $25 million multiple award contract by the National Oceanic and Atmospheric Administration (NOAA) to provide broad spectrum IT support including high performance computing initiatives for the agency's Geophysical Fluid Dynamics Laboratory (GFDL). The post Engility to Support NOAA’s Geophysical Fluid Dynamics Laboratory appeared first on insideHPC.
|
by Rich Brueckner on (#KDSH)
In this podcast, the Radio Free HPC team previews three of the excellent Tutorial sessions coming up at SC15. "The SC tutorials program is one of the highlights of the SC Conference series, and it is one of the largest tutorial programs at any computing-related conference in the world. It offers attendees the chance to learn from and to interact with leading experts in the most popular areas of high performance computing (HPC), networking, and storage." The post Radio Free HPC Previews Tutorials at SC15 appeared first on insideHPC.
|
by staff on (#KB3S)
HPC and beer have always had a certain affinity, ever since the days when Cray Research would include a case of Leinenkugel's with every supercomputer. Now, Brian Caulfield from Nvidia writes that a Pennsylvania startup is using GPUs and Deep Learning technologies to enable brewers to make better beer. The post GPUs Power Analytical Flavor Systems for Brewing Better Beer appeared first on insideHPC.
|
by MichaelS on (#KB2G)
A convergence in the fields of High Performance Computing (HPC) and Big Data has led to new opportunities for software developers to create and deliver products that can help to analyze very large amounts of data. Over the years, the HPC software ecosystem has created and maintained sets of numerical libraries, communication APIs (such as MPI), and applications that make HPC workloads faster to run and simpler to design. Low-level libraries have been developed so that developers can concentrate on higher-level algorithms. Products such as the Intel Math Kernel Library (Intel MKL) have been highly tuned to take advantage of multiple cores and newer instruction sets. The post Data Analytics Requires New Libraries appeared first on insideHPC.
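By way of illustration (not drawn from the original article), the sketch below shows the pattern of leaning on a tuned BLAS library rather than hand-written loops; whether the underlying BLAS is actually Intel MKL depends on how the local NumPy/SciPy stack was built, so treat that as an assumption.

```python
# Sketch: delegating dense linear algebra to a tuned BLAS (possibly
# Intel MKL, depending on how NumPy/SciPy were compiled/linked).
import numpy as np
from scipy.linalg import blas

n = 2048
a = np.random.rand(n, n)
b = np.random.rand(n, n)

# dgemm dispatches to the underlying BLAS library; on an MKL-linked
# build this call is multithreaded and vectorized automatically.
c = blas.dgemm(alpha=1.0, a=a, b=b)

# Equivalent high-level call; NumPy routes matmul through the same BLAS.
assert np.allclose(c, a @ b)
```

The point is the division of labor the blurb describes: the developer writes the high-level algorithm, while the low-level library handles cores and instruction sets.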
|
by Rich Brueckner on (#KB0N)
Today Intel announced a 10-year collaborative relationship with the Delft University of Technology and TNO, the Dutch Organization for Applied Research, to accelerate advancements in quantum computing. To achieve this goal, Intel will invest US$50 million and will provide significant engineering resources both on-site and at Intel, as well as technical support. "Quantum computing holds the promise of solving complex problems that are practically insurmountable today, including intricate simulations such as large-scale financial analysis and more effective drug development." The post Intel to Invest $50 Million in Quantum Computing appeared first on insideHPC.
|
by staff on (#KAYT)
European researchers are welcome to use the world’s fastest supercomputer, the Tianhe-2, to pursue their research in collaboration with Chinese scientists and HPC specialists. "Enough Ivy Bridge Xeon E5 2692 processors had already been delivered to allow the Tianhe-2 to be upgraded from its current 55 Petaflops peak performance to the 100 Petaflops mark." The post An Open Invitation to Work on the Tianhe-2 Supercomputer appeared first on insideHPC.
|
by staff on (#KA6Y)
A team at Oak Ridge has developed a set of automated calibration techniques for tuning residential and commercial building energy efficiency software models to match measured data. Their open source Autotune code is now available on GitHub. The post Autotune Code from ORNL Tunes Your Building Energy Efficiency appeared first on insideHPC.
|
by staff on (#K8P9)
The XSEDE project and the University of California, Berkeley are offering an online course on parallel computing for graduate students and advanced undergraduates. The post XSEDE & UC Berkeley Offer Online Parallel Computing Course appeared first on insideHPC.
|
by staff on (#K82F)
Cisco UCS solutions allow for a faster and more optimized deployment of a computing infrastructure. This solution brief details how the Cisco UCS infrastructure can help your organization become productive more quickly and achieve business results without having to be concerned with fitting together various pieces of disparate hardware and software. The post Five Reasons To Deploy Cisco UCS Infrastructure appeared first on insideHPC.
|
by Rich Brueckner on (#K7TA)
Today Microsoft announced its GS-series of premium VMs for compute-intensive workloads. "Powered by the Intel Xeon E5 v3 family processors, the GS-series can have up to 64TB of storage, provide 80,000 IOPs (storage I/Os per second) and deliver 2,000 MB/s of storage throughput. The GS-series offers the highest disk throughput, by more than double, of any VM offered by another hyperscale public cloud provider." The post Microsoft Boosts Azure with GS-Series VMs for Compute-intensive Workloads appeared first on insideHPC.
|
by Rich Brueckner on (#K7Q6)
"I will describe a decade-long, multi-disciplinary, multi-institutional effort spanning neuroscience, supercomputing and nanotechnology to build and demonstrate a brain-inspired computer and describe the architecture, programming model and applications. I also will describe future efforts in collaboration with DOE to build, literally, a “brain-in-a-boxâ€. The work was built on simulations conducted on Lawrence Livermore National Laboratory's Dawn and Sequoia HPC systems in collaboration with Lawrence Berkeley National Laboratory."The post Video: DARPA’s SyNAPSE and the Cortical Processor appeared first on insideHPC.
|
by Rich Brueckner on (#K7CB)
The HPC User Forum has posted its agendas for upcoming meetings in Europe next month. The post HPC User Forum Meetings Coming to Paris and Munich in October appeared first on insideHPC.
|
by staff on (#K7A5)
Today Mellanox announced that its Spectrum 10, 25, 40, 50 and 100 Gigabit Ethernet switches are now shipping. According to the company, Mellanox is the first vendor to deliver comprehensive end-to-end 10, 25, 40, 50 and 100 Gigabit Ethernet data center connectivity solutions. The post Mellanox Shipping Spectrum Open Ethernet 25/50/100 Gigabit Switch appeared first on insideHPC.
|
by Douglas Eadline on (#K76K)
When discussing GPU accelerators, the focus is often on the price-to-performance benefits to the end user. The true cost of managing and using GPUs goes far beyond the hardware price, however. Understanding and managing these costs helps provide more efficient and productive systems. The post Strategies for Managing High Performance GPU Clusters appeared first on insideHPC.
|
by staff on (#K3RR)
The ISC High Performance conference has issued its Call for Papers. As Europe’s most renowned forum for high performance computing, ISC 2016 will take place June 20-22, 2016 in Frankfurt, Germany. The post ISC 2016 Issues Call for Papers appeared first on insideHPC.
|
by staff on (#K3MF)
Pioneering a new consulting services model for strategy, marketing, and PR, OrionX today announced the appointment of Peter ffoulkes as Partner based in San Francisco. Mr. ffoulkes was most recently Research Director for Cloud Computing and Enterprise Platforms at 451 Research. The post Peter ffoulkes Joins OrionX Consulting appeared first on insideHPC.
|