by staff on (#17WSF)
Intel is expected to release production versions of its Knights Landing (KNL) 72-core coprocessor later in 2016. These next-generation coprocessors are shaping the physical design of the supercomputers now coming down the pike in a number of ways. One of the most dramatic changes is the significant increase in cooling requirements – these are high-wattage chips that run very hot and present some interesting engineering challenges for systems designers.The post Cooling Today’s Hot New Processors appeared first on insideHPC.
|
High-Performance Computing News Analysis | insideHPC
Link | https://insidehpc.com/ |
Feed | http://insidehpc.com/feed/ |
Updated | 2024-11-26 03:00 |
by Rich Brueckner on (#17XM6)
Today ISC 2016 announced that a research paper in the area of Message Passing Interface (MPI) performance has been selected to receive the 2016 Hans Meuer Award. The award will be presented at the ISC High Performance conference on Monday, June 20.The post Intel MPI Messaging Paper Wins ISC 2016 Hans Meuer Award appeared first on insideHPC.
|
by Rich Brueckner on (#17XDK)
Zaikun Xu from the Università della Svizzera Italiana presented this talk at the Switzerland HPC Conference. "In the past decade, deep learning as a life-changing technology, has gained a huge success on various tasks, including image recognition, speech recognition, machine translation, etc. Pioneered by several research groups, Deep learning is a renaissance of neural network in the Big data era."The post Tutorial on Deep Learning appeared first on insideHPC.
|
by staff on (#17X15)
Seagate Technology and Los Alamos National Laboratory are researching a new storage tier to enable massive data archiving for supercomputing. The joint effort is aimed at determining innovative new ways to keep massive amounts of stored data available for rapid access, while also minimizing power consumption and improving the quality of data-driven research. Under a Cooperative Research and Development Agreement, Seagate and Los Alamos are working together on power-managed disk and software solutions for deep data archiving, which represents one of the biggest challenges faced by organizations that must juggle increasingly massive amounts of data using very little additional energy.The post Seagate and LANL to Heat Up Data Archiving For Supercomputers appeared first on insideHPC.
|
by staff on (#17WXQ)
Today CoolIT Systems announced it has enabled Cascade Technologies to increase their compute density by 2.5 times within their existing floor space, rack space, and air conditioning capacity by deploying liquid cooling. “Partnering with CoolIT Systems solved our key requirements of more compute density without having to expand our floor space or AC capacity,” said Frank Ham, CEO at Cascade Technologies. “The liquid cooled solution surpasses our efficiency goals, allows us to pack a lot of compute into a small environment, and is impressively quiet.”The post Liquid Cooling Doubles Compute Capacity at Cascade Technologies appeared first on insideHPC.
|
by Rich Brueckner on (#17WTS)
DK Panda from Ohio State University presented this talk at the Switzerland HPC Conference. "This talk will focus on challenges in designing runtime environments for Exascale systems with millions of processors and accelerators to support various programming models. We will focus on MPI, PGAS (OpenSHMEM, CAF, UPC and UPC++) and Hybrid MPI+PGAS programming models by taking into account support for multi-core, high-performance networks, accelerators (GPUs and Intel MIC) and energy-awareness. Features and sample performance numbers from the MVAPICH2 libraries will be presented."The post High-Performance and Scalable Designs of Programming Models for Exascale Systems appeared first on insideHPC.
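As a rough, hedged illustration of the programming models the talk covers (this is not code from the talk or from the MVAPICH2 libraries, and the file name and build command are assumptions), the toy C/MPI program below passes a token around a ring with two-sided messaging and then combines the results with a single collective. PGAS models such as OpenSHMEM, UPC or CAF would express the same exchange with one-sided puts and gets into a partitioned global address space.

/* toy_mpi_ring.c - illustrative sketch only; build with an MPI wrapper,
   for example: mpicc toy_mpi_ring.c && mpirun -np 4 ./a.out */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, token, global_sum;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Two-sided message passing: pass a token around a ring of ranks. */
    int next = (rank + 1) % size;
    int prev = (rank + size - 1) % size;
    token = rank;
    MPI_Sendrecv_replace(&token, 1, MPI_INT, next, 0, prev, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    /* A collective: sum the received tokens across all ranks.
       A PGAS version would instead put the token directly into the
       neighbor's partition of a global address space. */
    MPI_Allreduce(&token, &global_sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum of tokens = %d\n", global_sum);

    MPI_Finalize();
    return 0;
}

Each rank receives its left neighbor's rank, so on P ranks the reported sum is P(P-1)/2; the point of the sketch is only to show the two communication styles side by side.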
|
by staff on (#17SRG)
Nominations are now open for the PRACE Ada Lovelace HPC Award. The new award recognizes women who are making outstanding contributions to HPC in Europe.The post Nominations Open for PRACE Ada Lovelace HPC Award appeared first on insideHPC.
|
by Rich Brueckner on (#17SM5)
Calista Redmond from IBM presented this talk at the Switzerland HPC Conference. "The OpenPOWER Foundation was founded in 2013 as an open technical membership organization that will enable data centers to rethink their approach to technology. Today, nearly 200 member companies are enabled to customize POWER CPU processors and system platforms for optimization and innovation for their business needs. These innovations include custom systems for large or warehouse scale data centers, workload acceleration through GPU, FPGA or advanced I/O, platform optimization for SW appliances, or advanced hardware technology exploitation. OpenPOWER members are actively pursuing all of these innovations and more and welcome all parties to join in moving the state of the art of OpenPOWER systems design forward."The post Industry Shifts to Open Infrastructure as OpenPOWER Foundation Gains Momentum appeared first on insideHPC.
|
by staff on (#17SJ5)
"The Techila user experience available in Google Cloud Launcher revolutionizes simulation and analysis. Techila’s patented end-to-end solution integrates the scalable power of Google Cloud Platform seamlessly into popular tools and environments on the user's own PC: MATLAB, Python, R programming language, and more. And what’s more, you don't need to buy MATLAB licenses for the computing environment,†says Rainer Wehkamp, CEO, Techila Technologies Ltd.The post Techila & Google Bringing on-demand HPC to Every Desk appeared first on insideHPC.
|
by staff on (#17SCY)
The STFC Hartree Centre in the UK will host a Hackathon for coders, developers, designers, entrepreneurs and start-ups in May. The event will take place May 18-20 at the Hartree Centre in Cheshire. In partnership with IBM Watson, the Hartree Hack will put the latest cognitive technologies directly into the hands of attendees. Participants will learn from the experts about what IBM Watson APIs (application programming interfaces) can offer them and how to use them, create their first cognitive app and compete to win £25k of support from STFC to propel their idea forward to a market reality over just three days.The post Learn the Latest Cognitive and Big Data Tools at the Hartree Hack appeared first on insideHPC.
|
by Rich Brueckner on (#17S97)
Michele de Lorenzi from the Swiss National Supercomputing Centre presented this talk at the 2016 HPC Advisory Council Switzerland Conference. "Founded in 1991, CSCS, the Swiss National Supercomputing Centre, develops and provides the key supercomputing capabilities required to solve important problems to science and/or society. The centre enables world-class research with a scientific user lab that is available to domestic and international researchers through a transparent, peer-reviewed allocation process. CSCS's resources are open to academia, and are available as well to users from industry and the business sector. The centre is operated by ETH Zurich and is located in Lugano."The post Video: Welcome to HPC in Switzerland appeared first on insideHPC.
|
by Rich Brueckner on (#17P6H)
Over at Enterprise Storage Forum, Henry Newman looks at why we should focus on how much work gets done rather than specifications as disk drives and SSDs get faster and faster. This is not a new rant for Henry, and in fact the importance of workflow over bandwidth or IOPs is the main theme at this year’s Mass Storage Systems and Technology Conference (MSST) coming up in May.The post Henry Newman on Why Workloads Matter More Than IOPS appeared first on insideHPC.
|
by Rich Brueckner on (#17P5R)
In this video, Al Roker from the Today Show looks at how Cray XC30 supercomputers give ECMWF more accurate forecasts than we get here in America. ECMWF uses advanced computer modeling techniques to analyze observations and predict future weather. Their assimilation system uses 40 million observations a day from more than 50 different instruments on satellites, and from many ground-based and airborne measurement systems.The post Video: Cray Powers More Accurate Forecasts at ECMWF appeared first on insideHPC.
|
by Rich Brueckner on (#17MFS)
A new DOE program designed to spur the use of high-performance supercomputers to advance U.S. manufacturing is now seeking a second round of proposals from industry to compete for approximately $3 million in new funding. “We are thrilled with the response from the U.S. manufacturing industry,” said LLNL mathematician Peg Folta, the director of the HPC4Mfg program. “This program lowers the barrier of entry for U.S. manufacturers to adopt HPC. It makes it easier for a company to use supercomputers by not only funding access to the HPC systems, but also to experts in the use of these systems to solve complex problems.”The post HPC4Mfg Seeks New Proposals from Industry appeared first on insideHPC.
|
by Rich Brueckner on (#17MF5)
Results are now in from Extreme Scaling Workshops held recently at the Gauss Centres for Supercomputing in Germany. With 20 participating teams, the workshops were designed to improve the computational efficiency of applications by expanding their parallel scalability across the hundreds of thousands of compute cores of the GCS supercomputers JUQUEEN and SuperMUC.The post GCS Centres Successfully Complete Extreme Scaling Workshops appeared first on insideHPC.
|
by Rich Brueckner on (#17J18)
MSC Software in Newport Beach, California is seeking an HPC Development Engineer in our Job of the Week. "The Development Engineer – HPC Scientist will create innovative parallel methods and implementations, primarily focused around the solution of linear equations providing an overall speedup to company-wide products. This role is part of a distributed and highly collaborative team of motivated HPC Scientists driven to create the fastest HPC solutions possible. The successful candidate will be involved in reviewing parallel method proposals from fellow group members for merit and estimating time for development. Initial focus will be on scalable distributed, direct methods for the solution of large sparse linear systems."The post Job of the Week: HPC Development Engineer at MSC Software appeared first on insideHPC.
|
by staff on (#17H5B)
Over at the OpenHPC Blog, Thomas Sterling from Indiana University describes why the Crest Project has joined the OpenHPC Community: "We want to make a specific contribution. By associating ourselves with an emergent framework, in which we could benefit from the work of many different people interested in different things but under a unifying guidance of scaffolding interfaces, we were able to achieve our objectives in low cost, HPC for end users. If, and I have to say if, OpenHPC does this right, you will provide that framework."The post Thomas Sterling Weighs in on the OpenHPC Community appeared first on insideHPC.
|
by Rich Brueckner on (#17GSZ)
In this video from the Open Server Summit, Dolly Wu from Inspur presents: Implementing Rack Scale Architectures with Open Hardware Designs. "There are several major rackscale open hardware platforms currently available, including OCP (Facebook Implementation), OCS (Microsoft Implementation), Scorpio Project Designs (Open Datacenter Designs for China), Intel RSA (Rack Scale Architecture). Datacenter and Cloud designers need to be aware of the differences among them and choose the one which is most suitable for their use case to deploy various types of private cloud, public cloud and hybrid cloud. We will invite Baidu as case study to speak about their datacenter deployments based on Scorpio Project Designs."The post Video: Implementing Rack Scale Architectures with Open Hardware Designs appeared first on insideHPC.
|
by Rich Brueckner on (#17GKK)
"EasyBuild, a software build and installation framework, can be used to automatically install software and generate environment modules. By using a hierarchical module naming scheme to offer environment modules to users in a more structured way, and providing Lmod, a modern tool for working with environment modules, we help typical users avoid common mistakes while giving power users the flexibility they demand. EasyBuild is developed by the High-Performance Computing team at Ghent University together with the members of the EasyBuild community, and is made available under the GNU General Public License (GPL) version 2."The post RCE Podcast Looks at EasyBuild Installation Framework appeared first on insideHPC.
|
by staff on (#175JG)
Even though it’s a new generation fabric, Intel OPA is still backwards compatible with the many applications in the HPC community that were written using the OpenFabrics Alliance* software stack for InfiniBand. So, existing InfiniBand users will be able to run their codes that are based on the OpenFabrics Enterprise Distribution (OFED) software on Intel OPA. Additionally, Intel has open sourced the key software elements of their fabric to allow integration of Intel OPA into the OFED stack, which several Linux* distributions include in their packages.The post Intel® Omni-Path Architecture—A Next-Generation HPC Fabric appeared first on insideHPC.
|
by staff on (#17E04)
“IBTA’s world-class compliance and interoperability program ensures the dependability of the evolving InfiniBand specification, which in turn broadens industry adoption and user confidence,” said Rupert Dance, co-chair of the IBTA Compliance and Interoperability Working Group (CIWG). “With the continued support of our members and partners, the IBTA is able to offer the industry invaluable resources to help guide critical decision making during deployment of InfiniBand or RoCE solutions.”The post IBTA Plugfest Expands EDR InfiniBand & RoCE Ecosystem appeared first on insideHPC.
|
by Rich Brueckner on (#17DWA)
Phil Pokorny from Penguin Computing presented this talk at the Open Compute Project Summit. "Tundra ES delivers the advantages of Open Computing in a single, cost-optimized, high-performance architecture. Organizations can integrate a wide variety of compute, accelerator, storage, network, software and cooling architectures in a vanity-free rack and sled solution. This allows them to build optimized Intel CPU, Phi, ARM or NVIDIA systems with the latest Penguin, Intel or Mellanox high-speed network technology for maximum performance."The post Video: Lessons Learned on the Road to Tundra ES Hardware appeared first on insideHPC.
|
by staff on (#17DRQ)
Today Nimbix and Bitfusion rolled out a new combined solution to offer more choices to application developers looking for high performance GPU accelerators on an on-demand basis. The Nimbix Cloud, powered by JARVICE, now integrates Bitfusion Boost to offer lower cost accelerator resources for developing compute hungry machine learning, analytics, and photorealistic rendering algorithms. “Nimbix has been about empowering developers to create accelerated applications in the cloud since day 1,” said Nimbix CTO Leo Reiter. “With this new combined solution, developers have more choices than ever before when it comes to performance and economics for the next generation of cloud computing workflows.”The post Nimbix & Bitfusion Deliver Affordable High Performance GPU Resources in the Cloud appeared first on insideHPC.
|
by staff on (#17DN4)
Manor Racing will use Rescale’s cloud high performance computing platform to enable trackside simulation on a whole new scale for the team. “The cloud market is moving at the speed of Formula 1. We are continually challenging ourselves to enable our customers to achieve better results faster by leveraging and deploying the latest technologies in computing hardware and simulation software to the industry leaders. We are now doing this with Manor Racing and not only deploying cutting edge technology, but also technology that delivers direct and tangible results through the sport. It’s very exciting for all of us.”The post Rescale Cloud Speeds Manor Racing Trackside Simulation appeared first on insideHPC.
|
by MichaelS on (#17DFM)
Threading and vectorization are two techniques known to increase application performance on modern CPUs and coprocessors, and applying them together can deliver more than either technique alone. However, a deep understanding of the application is needed in order to make the right decisions and to rewrite portions of the code to take advantage of these techniques. In cases where the developer is not familiar with the code, an automated tool such as the Intel Vectorization Advisor can assist.The post Modernizing Code with the Intel Vectorization Advisor appeared first on insideHPC.
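As a hedged sketch of combining the two techniques (the kernel, file name and build flags below are illustrative assumptions, not taken from the article), the C code applies OpenMP threading and SIMD vectorization to the same loop with the combined parallel for simd construct, which is the kind of opportunity a tool like the Vectorization Advisor helps locate.

/* saxpy.c - one loop, threaded across cores and vectorized within each core.
   Build, for example: gcc -fopenmp -O2 saxpy.c  (or icc -qopenmp -O2 saxpy.c) */
#include <stdio.h>
#include <stdlib.h>

void saxpy(float a, const float *x, float *y, size_t n)
{
    /* Distribute iterations across threads AND ask the compiler to
       vectorize each thread's chunk with SIMD instructions. */
    #pragma omp parallel for simd
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    size_t n = 1 << 20;
    float *x = malloc(n * sizeof *x);
    float *y = malloc(n * sizeof *y);
    for (size_t i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy(3.0f, x, y, n);
    printf("y[0] = %f\n", y[0]);   /* expect 5.0 */

    free(x);
    free(y);
    return 0;
}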
|
by Rich Brueckner on (#179Y8)
"This significant investment in our operational supercomputers equips us to handle the tidal wave of data that new observing platforms will generate and allows us to push our science and operations into exciting new territory," said Kathryn Sullivan, Ph.D., NOAA’s administrator. “The faster runs and better spatial and temporal resolution that Luna and Surge provide will allow NOAA to improve our environmental intelligence dramatically, giving the public faster and better predictions of weather, water and climate change. This enhanced environmental intelligence is vital to supporting the nation’s physical safety and economic security.â€The post Video: How New Cray Supercomputers at NOAA will Improve Forecasting appeared first on insideHPC.
|
by staff on (#179S4)
"Outside of a company’s product or service, the number one most important primary call-to-action elements for success are establishing marketing goals and messaging. This strategic step establishes the building blocks that lead to revenue, branding, and competitive edge. In this blog, I will give you and your team some things to think about so you can #JustStartToday with your strategy and messaging to move forward with success."The post Technical Marketing is more than Speeds and Feeds appeared first on insideHPC.
|
by staff on (#179QB)
Today DDN introduced the "industry’s fastest and most flexible" scale-out network attached storage (NAS) solution. As the newest product in the DDN GRIDScaler product family, the GS14K delivers the speed and scale that data intensive environments need to accelerate analytics, increase reliability and integrate into modern workflows such as Hadoop, OpenStack and scale-out NAS environments. The GS14K is offered as an All Flash Array or Hybrid Storage platform, delivering the advantages of NAS data access with the high performance benefits of parallel file systems to support today’s modern, big data demands in an economical, easy to manage appliance.The post DDN Rolls Out GRIDScaler GS14K High-Performance NAS appeared first on insideHPC.
|
by Rich Brueckner on (#179KT)
Dr. Rosa Badia from BSC/CNS presented this Invited Talk at SC15. "StarSs (Star superscalar) is a task-based family of programming models that is based on the idea of writing sequential code which is executed in parallel at run-time taking into account the data dependencies between tasks. The talk will describe the evolution of this programming model and the different challenges that have been addressed in order to consider different underlying platforms from heterogeneous platforms used in HPC to distributed environments, such as federated clouds and mobile systems."The post Video: Superscalar Programming Models – Making Applications Platform Agnostic appeared first on insideHPC.
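OpenMP 4.0 adopted task dependencies in the same spirit as StarSs/OmpSs, so the short hedged C sketch below uses the depend clause to illustrate the core idea: sequential-looking code whose parallelism the runtime extracts from declared data dependencies. This is an illustration of the concept only, not code from the talk or from the StarSs implementations.

/* task_deps.c - dataflow-style tasking in the StarSs spirit, written with
   OpenMP task dependencies. Build, for example: gcc -fopenmp task_deps.c */
#include <stdio.h>

int main(void)
{
    int a = 0, b = 0, c = 0;

    #pragma omp parallel
    #pragma omp single
    {
        /* The code reads sequentially; the runtime orders tasks by data deps. */
        #pragma omp task depend(out: a)
        a = 1;                              /* produces a */

        #pragma omp task depend(out: b)
        b = 2;                              /* produces b, may run alongside the task above */

        #pragma omp task depend(in: a, b) depend(out: c)
        c = a + b;                          /* runs only after both producers finish */

        #pragma omp taskwait
        printf("c = %d\n", c);              /* prints 3 */
    }
    return 0;
}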
|
by Rich Brueckner on (#179FM)
Manufacturing is enjoying an economic and technological resurgence with the help of high performance computing. In this insideHPC webinar, you’ll learn how the power of CAE and simulation is transforming the industry with faster time to solution, better quality, and reduced costs. "Altair Engineering has partnered with SGI on Hyperworks Unlimited Physical Appliance, a turnkey solution that simplifies high performance computing for CAE. With this easy-to-deploy, unified platform, users can tackle a broad spectrum of CAE applications and workloads."The post Computer Aided Engineering Leaps Forward with HPC for Manufacturing appeared first on insideHPC.
|
by staff on (#176FM)
Today FlyElephant announced a number of upgrades that allow users to work with private repositories, along with improved system security and task functionality. "FlyElephant is a platform for scientists, providing a computing infrastructure for calculations, helping to find partners for the collaboration on projects, and managing all data from one place. FlyElephant automates routine tasks and helps to focus on core research issues."The post FlyElephant Platform Adds Private Repositories appeared first on insideHPC.
|
by staff on (#176DP)
Today Italy's A3Cube announced the F-730 Family of EXA-Converged parallel systems built on Dell servers and achieving sub-microsecond latency through bare metal data access. “A3Cube’s EXA-Converged infrastructure represents the next step in the evolution of converged systems”, said Emilio Billi, A3Cube’s CTO, “while keeping and improving on the scalability and resilience of Hyper-Converged infrastructure. It is engineered to converge all system resources and provide parallel data access and inter node communication at the bare metal level, eliminating the need for, and the limits of, traditional Hyper-converged systems. The system can efficiently use all the fastest storage devices currently on the market or planned to come to market, and puts all existing solutions in the rear view mirror.”The post A3Cube Announces Exa-Converged Parallel Systems appeared first on insideHPC.
|
by staff on (#175S8)
Today Cadence announced a collaboration with Mellanox Technologies to demonstrate multi-lane interoperability between Mellanox's physical interface (PHY) IP for PCIe 4.0 technology and Cadence's 16Gbps multi-link and multi-protocol PHY IP implemented in TSMC's 16nm FinFET Plus (16FF+) process. Customers seeking to develop and deploy next-generation green data centers can now use a silicon-proven IP solution from Cadence for immediate integration and fastest market deployment. Cadence and Mellanox are scheduled to demonstrate electrical interoperability for PCIe 4.0 architecture between their respective PHY solutions at the 2016 TSMC Symposium on March 15, 2016 in Santa Clara, California.The post Mellanox & Cadence Demonstrate PCI Express 4.0 Multi-Lane PHY IP Interoperability appeared first on insideHPC.
|
by staff on (#175QG)
Today Bright Computing announced that it has teamed up with Germany-based ProfitBricks to provide a cutting edge elastic HPC solution to a Swiss university. “This is a unique example of how Bright Computing can help a company move their HPC requirement to the cloud,” said Lee Carter, VP EMEA at Bright Computing. “Bright enables the university to dynamically expand and contract the infrastructure needed to support their research projects, all at the click of a button. This ensures the university only pays for the computational resources it needs, when they need them, saving time and expense.”The post Elastic Computing Comes to Swiss University from ProfitBricks and Bright appeared first on insideHPC.
|
by Rich Brueckner on (#174ZR)
Submissions opened today for ACM SIGHPC/Intel Computational & Data Science Fellowships. Designed to increase the diversity of students pursuing graduate degrees in data science and computational science, the program will support students pursuing degrees at institutions anywhere in the world.The post Students: Apply for ACM SIGHPC/Intel Computational & Data Science Fellowships appeared first on insideHPC.
|
by Rich Brueckner on (#173BY)
"This meeting is open to all Dell HPC customers and partners. During the event, we will establish the Dell HPC Community as an independent, worldwide technical forum designed to facilitate the exchange of ideas among HPC professionals, researchers, computer scientists and engineers. Our core objective is to provide an environment in which members can candidly discuss industry trends and challenges, gather direct feedback and input from HPC professionals and influence the strategic direction and development of Dell HPC Systems and ecosystems."The post Inaugural Dell HPC Community Meeting Coming to Austin April 18-21 appeared first on insideHPC.
|
by Rich Brueckner on (#172G1)
"U.S. President Obama signed an Executive Order creating the National Strategic Computing Initiative (NSCI) on July 31, 2015. In the order, he directed agencies to establish and execute a coordinated Federal strategy in high-performance computing (HPC) research, development, and deployment. The NSCI is a whole-of-government effort to be executed in collaboration with industry and academia, to maximize the benefits of HPC for the United States. The Federal Government is moving forward aggressively to realize that vision. This presentation will describe the NSCI, its current status, and some of its implications for HPC in the U.S. for the coming decade."The post Video: The National Strategic Computing Initiative appeared first on insideHPC.
|
by Rich Brueckner on (#1721P)
Today Penguin Computing announced the availability of Cyber Dyne’s KIMEME software on the POD public HPC cloud service. “It’s now possible to submit and manage large DOEs and optimization simulations flawlessly in the cloud,” said Ernesto Mininno, CEO, Cyber Dyne. “These tasks are much easier and faster thanks to the computational power of Penguin Computing’s POD HPC services.”The post Cyber Dyne’s KIMEME Software Comes to Penguin On Demand appeared first on insideHPC.
|
by staff on (#1721R)
Today Nvidia announced that Brookhaven National Laboratory has been named a 2016 GPU Research Center. "The center will enable Brookhaven Lab to collaborate with Nvidia on the development of widely deployed codes that will benefit from more effective GPU use, and in the delivery of on-site GPU training to increase staff and guest researchers' proficiency," said Kerstin Kleese van Dam, director of CSI and chair of the Lab's Center for Data-Driven Discovery.The post Brookhaven Lab is the latest Nvidia GPU Research Center appeared first on insideHPC.
|
by staff on (#171Y9)
The CSIRO national science agency in Australia has teamed up with Dell to deliver a new HPC cluster called "Pearcey." The Pearcey cluster supports CSIRO research activities in a broad range of areas such as Bioinformatics, Fluid Dynamics and Materials Science. One CSIRO researcher benefiting from using Pearcey is Dr. Dayalan Gunasegaram, a CSIRO computational modeler who is using Pearcey for the modeling work behind the development of an improved nylon mesh for use in pelvic organ prolapse (POP) surgery, which has the potential to benefit the one in five Australian women that have surgery for the condition at some point in their lives.The post Dell Powers New Pearcey Cluster at CSIRO in Australia appeared first on insideHPC.
|
by Rich Brueckner on (#1713H)
In this podcast, the Radio Free HPC team looks at blockchain technology with Chris Skinner, author of ValueWeb – How FinTech firms are using mobile and blockchain technologies to create the Internet of Value. "The Internet of Value, or ValueWeb for short, allows machines to trade with machines and people with people, anywhere on this planet in real-time and for free. Using a combination of technologies from mobile devices and the bitcoin blockchain, fintech firms are building the ValueWeb. The question then is what this means for financial institutions, governments and citizens?"The post Radio Free HPC Looks at How Blockchain Will Create the Internet of Value appeared first on insideHPC.
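As a purely illustrative, hedged sketch of the hash-linking idea behind a blockchain (nothing below comes from the book or from Bitcoin itself, and the non-cryptographic FNV-1a hash merely stands in for SHA-256), the toy C program chains three value-transfer records so that altering any earlier record changes every later hash.

/* toy_chain.c - a toy hash-linked chain. Real blockchains add cryptographic
   hashing, signatures, proof mechanisms and distributed consensus. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

static uint64_t fnv1a(const void *data, size_t len, uint64_t seed)
{
    const unsigned char *p = data;
    uint64_t h = seed;
    for (size_t i = 0; i < len; i++) {
        h ^= p[i];
        h *= 1099511628211ULL;              /* FNV prime */
    }
    return h;
}

struct block {
    uint64_t prev_hash;                     /* link to the previous block */
    char     payload[64];                   /* e.g. a transfer of value */
    uint64_t hash;                          /* hash over prev_hash and payload */
};

static void seal(struct block *b, uint64_t prev_hash, const char *payload)
{
    b->prev_hash = prev_hash;
    snprintf(b->payload, sizeof b->payload, "%s", payload);
    /* Chaining the previous hash into this block's hash means tampering
       with any earlier block invalidates every block after it. */
    uint64_t h = fnv1a(&b->prev_hash, sizeof b->prev_hash,
                       14695981039346656037ULL /* FNV offset basis */);
    b->hash = fnv1a(b->payload, strlen(b->payload), h);
}

int main(void)
{
    struct block chain[3];
    seal(&chain[0], 0, "genesis");
    seal(&chain[1], chain[0].hash, "Alice pays Bob 5");
    seal(&chain[2], chain[1].hash, "Bob pays Carol 2");

    for (int i = 0; i < 3; i++)
        printf("block %d: prev=%016llx hash=%016llx %s\n", i,
               (unsigned long long)chain[i].prev_hash,
               (unsigned long long)chain[i].hash, chain[i].payload);
    return 0;
}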
|
by Rich Brueckner on (#16Z1Q)
Engineers at Ilmor are using novel CFD software from Convergent Science to dramatically reduce prototype build costs for Indycar engines. "Ilmor is the company behind the design of Chevrolet’s championship-winning Indycar engine that powered Scott Dixon to the 2015 title. In 2016, the engineering firm had the opportunity to find refinements in the cylinder head. The company chose CONVERGE, a novel CFD program specifically created to assist engine designers to optimize engine design, performance, and efficiency."The post Convergent Science Speeds Indycar Engine Development appeared first on insideHPC.
|
by staff on (#16YZA)
Originally developed by IBM in the 1950s for scientific and engineering applications, Fortran came to dominate this area of programming early on and has been in continuous use for over half a century in computationally intensive areas such as numerical weather prediction, finite element analysis, computational fluid dynamics, computational physics and computational chemistry. It is a popular language for high-performance computing and is used for programs that benchmark and rank the world's fastest supercomputers.The post Video: What’s New with Fortran? appeared first on insideHPC.
|
by Rich Brueckner on (#16WDT)
"The next flagship supercomputer in Japan, replacement of K supercomputer, is being designed toward general operation in 2020. Compute nodes, based on a manycore architecture, connected by a 6-D mesh/torus network is considered. A three level hierarchical storage system is taken into account. A heterogeneous operating system, Linux and a light-weight kernel, is designed to build suitable environments for applications. It can not be possible without codesign of applications that the system software is designed to make maximum utilization of compute and storage resources. "The post Video: System Software in Post K Supercomputer appeared first on insideHPC.
|
by Rich Brueckner on (#16WCJ)
Intel in Albuquerque is seeking a Senior Cluster Architect in our Job of the Week. "The Senior Cluster Architect will be responsible for design, support, and maintenance of HPC (High Performance Computing) cluster hardware and software for high availability, consistency, and optimized performance. This role is responsible for managing Linux systems."The post Job of the Week: Senior Cluster Architect at Intel in Albuquerque appeared first on insideHPC.
|
by Rich Brueckner on (#16SE2)
In this TACC Podcast, Jorge Salazar reports that scientists and engineers at the Texas Advanced Computing Center have created Wrangler, a new kind of supercomputer to handle Big Data.The post Podcast: Speeding Through Big Data with the Wrangler Supercomputer appeared first on insideHPC.
|
by Rich Brueckner on (#16SCM)
Today SimScale in Munich announced an online F1 workshop series focused on aerodynamics. The live online sessions will take place on Thursdays at 4:00 p.m. Central European Time, on March 17, 24 and 31. "Participants will get an overview of all the functionalities offered by the SimScale platform, while learning directly from top simulation experts through an interactive workshop and practical application of the simulation technology."The post SimScale Offers Online F1 Aerodynamics Workshop appeared first on insideHPC.
|
by Rich Brueckner on (#16S1E)
"This talk will address one of the main challenges in high performance computing which is the increased cost of communication with respect to computation, where communication refers to data transferred either between processors or between different levels of memory hierarchy, including possibly NVMs. I will overview novel communication avoiding numerical methods and algorithms that reduce the communication to a minimum for operations that are at the heart of many calculations, in particular numerical linear algebra algorithms. Those algorithms range from iterative methods as used in numerical simulations to low rank matrix approximations as used in data analytics. I will also discuss the algorithm/architecture matching of those algorithms and their integration in several applications."The post Video: Fast and Robust Communications Avoiding Algorithms appeared first on insideHPC.
|
by Rich Brueckner on (#16RYJ)
“As a research area, quantum computing is highly competitive, but if you want to buy a quantum computer then D-Wave Systems, founded in 1999, is the only game in town. Quantum computing is as promising as it is unproven. Quantum computing goes beyond Moore’s law since every quantum bit (qubit) doubles the computational power, similar to the famous wheat and chessboard problem. So the payoff is huge, even though it is expensive, unproven, and difficult to program.”The post Podcast: Through the Looking Glass at Quantum Computing appeared first on insideHPC.
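The growth claims in the quote can be made precise with two standard formulas (stated generally, not as anything specific to D-Wave's hardware). An $n$-qubit register spans a state space of dimension $2^n$, so adding one qubit doubles it:
\[ \dim \mathcal{H}_{n+1} = 2^{n+1} = 2 \cdot 2^{n}. \]
The wheat-and-chessboard story is the same geometric growth: doubling a single grain across the 64 squares gives
\[ \sum_{k=0}^{63} 2^{k} = 2^{64} - 1 \approx 1.8 \times 10^{19} \text{ grains.} \]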
|
by staff on (#16PJB)
Today Hewlett Packard Enterprise announced HPE Haven OnDemand, an innovative cloud platform that provides advanced machine learning APIs and services that enable developers, startups and enterprises to build data-rich mobile and enterprise applications. Delivered as a service on Microsoft Azure, HPE Haven OnDemand provides more than 60 APIs and services that deliver deep learning analytics on a wide range of data, including text, audio, image, social, web and video.The post Hewlett Packard Enterprise Announces Haven OnDemand Machine Learning-as-a-Service appeared first on insideHPC.
|