by Richard Friedman on (#3QRTW)
Intel® Integrated Performance Primitives (Intel IPP) offers the developer a highly optimized, production-ready library for lossless data compression/decompression that targets image, signal, and data processing, and cryptography applications. The Intel IPP optimized implementations of the common data compression algorithms are “drop-in” replacements for the original compression code. The post Data Compression Optimized with Intel® Integrated Performance Primitives appeared first on insideHPC.
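Since the article's "drop-in" claim means existing compression calls keep their standard API, a minimal sketch in C using the stock zlib interface illustrates the kind of code such an optimized library would replace transparently (the Intel IPP-specific build and link steps are an assumption, not covered in the excerpt):

/* Minimal sketch: standard zlib calls that an IPP-patched zlib would
 * accelerate as a drop-in replacement (assumption based on the article's
 * "drop-in" claim; Intel IPP build/link details are not shown here). */
#include <stdio.h>
#include <string.h>
#include <zlib.h>

int main(void) {
    const unsigned char input[] = "An example buffer to be compressed losslessly.";
    uLong in_len = (uLong)sizeof(input);

    /* Worst-case size of the compressed output. */
    uLongf comp_len = compressBound(in_len);
    unsigned char comp[256];

    if (compress2(comp, &comp_len, input, in_len, Z_BEST_COMPRESSION) != Z_OK) {
        fprintf(stderr, "compression failed\n");
        return 1;
    }

    unsigned char out[256];
    uLongf out_len = sizeof(out);
    if (uncompress(out, &out_len, comp, comp_len) != Z_OK) {
        fprintf(stderr, "decompression failed\n");
        return 1;
    }

    printf("original %lu bytes -> compressed %lu bytes -> restored %lu bytes\n",
           (unsigned long)in_len, (unsigned long)comp_len, (unsigned long)out_len);
    return memcmp(input, out, in_len) != 0;
}

Compiled against zlib (for example with -lz), these calls stay unchanged; per the article, relinking against the IPP-optimized implementation would be expected to accelerate them without source modifications.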
|
High-Performance Computing News Analysis | insideHPC
Link | https://insidehpc.com/ |
Feed | http://insidehpc.com/feed/ |
Updated | 2024-11-24 23:00 |
by staff on (#3QRTX)
This is the final post in a five-part series from a report exploring the potential of machine learning and a variety of computational approaches, including CPU, GPU and FPGA technologies. This article explores unified deep learning configurations and emerging applications. The post Unified Deep Learning Configurations and Emerging Applications appeared first on insideHPC.
|
by staff on (#3QHT9)
In this video, Doug Kothe from ORNL provides an update on the Exascale Computing Project. "With respect to progress, marrying high-risk exploratory and high-return R&D with formal project management is a formidable challenge. In January, through what is called DOE’s Independent Project Review, or IPR, process, we learned that we can indeed meet that challenge in a way that allows us to drive hard with a sense of urgency and still deliver on the essential products and solutions. In short, we passed the review with flying colors—and what’s especially encouraging is that the feedback we received tells us what we can do to improve." The post Video: Doug Kothe Looks Ahead at The Exascale Computing Project appeared first on insideHPC.
|
by staff on (#3QHHK)
Altair has acquired Germany-based FluiDyna GmbH, a renowned developer of NVIDIA CUDA and GPU-based Computational Fluid Dynamics (CFD) and numerical simulation technologies in which Altair made an initial investment in 2015. FluiDyna’s simulation software products ultraFluidX and nanoFluidX have been available to Altair’s customers through the Altair Partner Alliance and also offered as standalone licenses. "We are excited about FluiDyna and especially their work with NVIDIA technology for CFD applications," said James Scapa, Founder, Chairman, and CEO at Altair. "We believe the increased throughput and lower cost of GPU solutions is going to allow for a significant increase in simulations which can be used to further impact the design process." The post Altair acquires FluiDyna CFD Technology for GPUs appeared first on insideHPC.
|
by staff on (#3QHHN)
In this TACC Podcast, chemists at the University of California, San Diego describe how they used supercomputing to design a sheet of proteins that toggle between different states of porosity and density. This is a first in biomolecular design that combined experimental studies with computation done on supercomputers. "To meet these and other computational challenges, Paesani has been awarded supercomputer allocations through XSEDE, the Extreme Science and Engineering Discovery Environment, funded by the National Science Foundation." The post Podcast: Supercomputing the Emergence of Material Behavior appeared first on insideHPC.
|
by staff on (#3QHCM)
Today Silicon Valley startup Tachyum Inc. unveiled its new processor family – codenamed “Prodigy” – that combines the advantages of CPUs with GP-GPUs and specialized AI chips in a single universal processor platform. According to Tachyum, the new chip has "ten times the processing power per watt" and is capable of running the world’s most complex compute tasks. "With its disruptive architecture, Prodigy will enable a super-computational system for real-time full capacity human brain neural network simulation by 2020." The post New Tachyum Prodigy Chip has “More than 10x the Performance of Conventional Processors” appeared first on insideHPC.
|
by Rich Brueckner on (#3QH8S)
Today ACM announced that Dr. Satoshi Matsuoka will receive the annual HPDC Achievement Award for his pioneering research in the design, implementation, and application of high performance systems and software tools for parallel and distributed systems. "ACM HPDC is one of the top international conferences in the field of Computer Science / High Performance Calculation, and among them, I am delighted to have won the Society Career Award for the first time as a Japanese." The post Satoshi Matsuoka to receive High Performance Parallel Distributed Computation Achievement Award appeared first on insideHPC.
|
by staff on (#3QEXT)
Today Quantum Corporation named AutonomouStuff LLC as its primary partner for storage distribution in the automotive market, enabling them to deliver Quantum's comprehensive end-to-end storage solutions for both in-vehicle and data center environments. "Autonomous research generates an enormous volume of data which is vital to achieving the goal of a safe autonomous vehicle," said Bobby Hambrick, founder and CEO of AutonomouStuff. "Quantum multitier data storage kits powered by StorNext offer a highly scalable and economical solution to the data dilemma researchers face." The post Quantum Storage Solutions Power Self-driving Cars for AutonomouStuff appeared first on insideHPC.
|
by staff on (#3QEV3)
iRODS is taking an active role in the Lustre community. The iRODS Consortium recently signed on to Open Scalable File Systems, Inc. (OpenSFS), a nonprofit organization dedicated to the success of the Lustre file system, an open source parallel distributed file system used for computing on large-scale high performance computing clusters. The post iRODS Consortium adds Members and Joins OpenSFS appeared first on insideHPC.
|
by staff on (#3QEV4)
Today Equus Compute Solutions announced that Intel has identified Equus as an Intel Platinum 2018 Technology Provider. Furthermore, Intel has distinguished Equus as both a Cloud Data Center Specialist and an HPC Data Center Specialist. These distinctions were earned based on Equus application specific platforms, solutions, and staff training in these areas. "We are honored to be a 2018 Intel Platinum Technology Partner," said Costa Hasapopoulos, Equus President. "We are proud to offer our customers industry-leading data center solutions across a wide range of industries and applications. In all of these applications, Equus customizes Intel-based white box servers and storage offerings to enable flexible software-defined infrastructures." The post Equus Compute Solutions Named Intel 2018 Cloud and HPC Data Center Specialist appeared first on insideHPC.
|
by Rich Brueckner on (#3QER9)
"Scientifico ReFrame is a new framework for writing regression tests for HPC systems. The goal of the framework is to abstract away the complexity of the interactions with the system, separating the logic of a regression test from the low-level details, which pertain to the system configuration and setup. This allows users to write easily portable regression tests, focusing only on the functionality. The purpose of the tutorial will be to do a live demo of ReFrame and a hands-on session demonstrating how to configure it and how to use it."The post ReFrame: A Regression Testing Framework Enabling Continuous Integration of Large HPC Systems appeared first on insideHPC.
|
by staff on (#3QERB)
Today DDN announced a Parabricks technology solution that provides massive acceleration for analysis of human genomes. The breakthrough platform combines GPU supercomputing performance with DDN’s Parallel Flash Data Platforms for fastest time to results, and enables unprecedented capabilities for high-throughput genomics analysis pipelines. The joint solution also ensures full saturation of GPUs for maximum efficiency and provides analysis capabilities that previously required thousands of CPUs. The post DDN and Parabricks Accelerate Genome Analysis appeared first on insideHPC.
|
by staff on (#3QCTJ)
Today HPC Cloud provider Nimbix announced that their 2018 HPC Cloud Summit will take place June 6 in Silicon Valley. "We are bringing together the best and brightest minds in accelerated computing at the Computer History Museum, an institution dedicated to the preservation and celebration of computer history. Event sponsors include: Intel, Lenovo and Mellanox." The post Nimbix to Host the 2018 HPC Cloud Summit on June 6 in Silicon Valley appeared first on insideHPC.
|
by Rich Brueckner on (#3QC0T)
"Women are becoming a driving force in the open source community as the industry becomes more diverse and inclusive. However, a recent study found that only 3% of contributors in open source were women. As a community of women at Red Hat, we want to not only highlight how we contribute, but also inspire others to contribute. In the spirit of diversity, this panel will include women from different departments at Red Hat—including marketing, management, sales, consulting, and engineering—sharing the unique ways we help grow the open source community."The post Video: Women and Open Source appeared first on insideHPC.
|
by staff on (#3QBXK)
Today Asetek announced an order from Fujitsu, an established global data center OEM, for a new High Performance Computing system at a currently undisclosed location in Japan. This major installation will be implemented using Asetek's RackCDU liquid cooling solution throughout the cluster, which includes 1300 Direct-to-Chip (D2C) coolers for the cluster's compute nodes. "We are pleased to see the continuing success of Fujitsu using Asetek's technology in large HPC clusters around the world," said André Sloth Eriksen, CEO and founder of Asetek. The post Asetek Receives Order For New HPC Cluster From Fujitsu appeared first on insideHPC.
|
by staff on (#3QBXM)
One year after launching its cloud service, Nimbus, the Pawsey Supercomputing Centre has now expanded the service with NVIDIA GPU nodes. The GPUs are currently being installed, so the Pawsey cloud team have begun a Call for Early Adopters. "Launched in mid-2017 as a free cloud service for researchers who require flexible access to high-performance computing resources, Nimbus consists of AMD Opteron CPUs making up 3000 cores and 288 terabytes of storage. Now, Pawsey will be expanding its cloud infrastructure from purely a CPU based system to include GPUs; providing a new set of functionalities for researchers." The post Pawsey Centre adds NVIDIA Volta GPUs to its HPC Cloud appeared first on insideHPC.
|
by Rich Brueckner on (#3QBTF)
"The impact of AI will be visible in the software industry much sooner than the analog world, deeply affecting open source in general, as well as Red Hat, its ecosystem, and its userbase. This shift provides a huge opportunity for Red Hat to offer unique value to our customers. In this session, we'll provide Red Hat's general perspective on AI and how we are helping our customers benefit from AI."The post Red Hat’s AI Strategy appeared first on insideHPC.
|
by staff on (#3QBQN)
"The first version of the OpenMP application programming interface (API) was published in October 1997. In the 20 years since then, the OpenMP API and the slightly older MPI have become the two stable programming models that high-performance parallel codes rely on. MPI handles the message passing aspects and allows code to scale out to significant numbers of nodes, while the OpenMP API allows programmers to write portable code to exploit the multiple cores and accelerators in modern machines."The post Celebrating 20 Years of the OpenMP API appeared first on insideHPC.
|
by staff on (#3QBMW)
Intel is working with leaders in the field to eliminate today’s data processing bottlenecks. In this guest post from Intel, the company explores how BioScience is getting a leg up from order-of-magnitude computing progress. "Intel’s framework is designed to make HPC simpler, more affordable, and more powerful." The post BioScience gets a Boost Through Order-of-Magnitude Computing Gains appeared first on insideHPC.
|
by Rich Brueckner on (#3Q92W)
Peter Lindstrom from LLNL gave this talk at the Conference on Next Generation Arithmetic in Singapore. "We propose a modular framework for representing the real numbers that generalizes IEEE, POSITS, and related floating-point number systems, and which has its roots in universal codes for the positive integers such as the Elias codes. This framework unifies several known but seemingly unrelated representations within a single schema while also introducing new representations." The post Universal Coding of the Reals: Alternatives to IEEE Floating Point appeared first on insideHPC.
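For readers unfamiliar with the Elias codes cited as the roots of the framework, a small sketch in C of the classic Elias gamma code for positive integers shows the idea of a universal, self-delimiting representation (this is only the cited building block, not the generalized real-number scheme from the talk):

/* Classic Elias gamma code: a positive integer n is written as
 * floor(log2(n)) zero bits followed by the binary digits of n. */
#include <stdio.h>

static void elias_gamma(unsigned int n, char *out) {
    int bits = 0;
    for (unsigned int t = n; t > 1; t >>= 1)
        bits++;                        /* bits = floor(log2(n)) */

    int pos = 0;
    for (int i = 0; i < bits; i++)
        out[pos++] = '0';              /* unary length prefix */
    for (int i = bits; i >= 0; i--)    /* binary digits of n, MSB first */
        out[pos++] = ((n >> i) & 1u) ? '1' : '0';
    out[pos] = '\0';
}

int main(void) {
    char code[64];
    for (unsigned int n = 1; n <= 8; n++) {
        elias_gamma(n, code);
        printf("%u -> %s\n", n, code); /* e.g. 1 -> 1, 2 -> 010, 5 -> 00101 */
    }
    return 0;
}

Because the length prefix tells the decoder how many digit bits follow, codewords of different lengths can be concatenated and still parsed unambiguously — the property that makes such codes "universal."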
|
by staff on (#3Q8ZS)
Today DDN announced that Harvard University’s Faculty of Arts and Sciences Research Computing (FASRC) has deployed DDN’s GRIDScaler GS7KX parallel file system appliance with 1PB of storage. The installation has sped the collection of images detailing synaptic connectivity in the brain’s cerebral cortex. "DDN’s scale-out, parallel architecture delivers the performance we need to keep stride with the rapid pace of scientific research and discovery at Harvard," said Scott Yockel, Ph.D., director of research computing at Harvard’s FAS Division of Science. "The storage just runs as it’s supposed to, so there’s no contention for resources and no complaints from our users, which empowers us to focus on the research." The post DDN GridScaler Powers Neuroscience and Behavioral Research at Harvard appeared first on insideHPC.
|
by staff on (#3Q8ZV)
The Ohio Supercomputer Center has updated its innovative web-based portal for accessing high performance computing services. As part of this effort, OSC recently announced the release of Open OnDemand 1.3, the first version using its new RPM Package Manager, or Red Hat Package Manager, a common standard for distributing Linux software. "Our continuing development of Open OnDemand is aimed at making the package easier to use and more powerful at the same time," said David Hudak, interim executive director of OSC. "Open OnDemand 1.3's RPM Package Manager simplifies the installation and updating of OnDemand versions and enables OSC to do more releases more frequently." The post Ohio Supercomputer Center Upgrades HPC Access Portal appeared first on insideHPC.
|
by Rich Brueckner on (#3Q8WQ)
Jimmy Daley from HPE gave this talk at the HPC User Forum in Tucson. "High performance clusters are all about high speed interconnects. Today, these clusters are often built out of a mix of copper and active optical cables. While optical is the future, the cost of active optical cables is 4x - 6x that of copper cables. In this talk, Jimmy Daley looks at the tradeoffs system architects need to make to meet performance requirements at reasonable cost." The post HPE: Design Challenges at Scale appeared first on insideHPC.
|
by MichaelS on (#3Q8RK)
Harp-DAAL is a framework developed at Indiana University that brings together the capabilities of big data (Hadoop) and techniques that have previously been adopted for high performance computing. Together, employees can become more productive and gain deeper insights into massive amounts of data. The post High Performance Big Data Computing Using Harp-DAAL appeared first on insideHPC.
|
by staff on (#3Q65E)
The Barcelona Supercomputing Center (BSC) is a partner in the MED-GOLD project, which will foster the creation of highly specialized climate services for wine, olive oil, and durum crops, providing indicators to optimize agricultural management practices in relation to the impact of global warming. "Wine, olive oil and durum wheat products are staples of the Mediterranean diet. Their production rates and quality are highly dependent on weather and climate. These essential features are not guaranteed under future climate change conditions, which are expected to increase vulnerability to crop failure and pest damage." The post BSC Powers Climate Services for Agriculture in Europe appeared first on insideHPC.
|
by Rich Brueckner on (#3Q62J)
Dr. Mark Mattingley-Scott from IBM gave this talk at the Swiss HPC Conference. "Quantum Computing is here, right now - and we are at the start of a new way of computing, which will impact us the way the revolution started by Shockley, Bardeen and Brattain did in 1947. In this talk I will introduce Quantum Computing, its principles, capabilities and challenges and provide you with the insight you need to decide how you should engage with this revolutionary technology." The post Quantum Computing: Its Principles, Capabilities and Challenges appeared first on insideHPC.
|
by staff on (#3Q5ZG)
In this episode of Let's Talk Exascale, Mike Heroux from Sandia National Labs describes the Exascale Computing Project’s Software Development Kit, an organizational approach to reduce the complexity of the project management of ECP software technology. "My hope is that as we create these SDKs and bring these independently developed products together under a collaborative umbrella, that instead of saying that each of these individual products is available independently, we can start to say that an SDK is available." The post Let’s Talk Exascale: Making Software Development more Efficient appeared first on insideHPC.
|
by staff on (#3Q5X0)
Today Parallel Works Inc. announced that the company is partnering with R-Systems to launch the PWRS HPC Access Portal. The Portal will empower scientists, engineers, and data analysts with the tools to supercharge their computational studies. “We deliver flexible solutions to meet our clients’ on-premise, off-premise or Hybrid HPC requirements. Our partnership with Parallel Works increases R Systems’ arsenal of solutions and proves once again that we offer more than cores to our clients.” The post R-Systems to Launch PWRS HPC Access Portal appeared first on insideHPC.
|
by staff on (#3Q3VC)
CIARA just announced new AMD-based systems for ultra-low latency, High Frequency Trading and Blockchain solutions. “With the adoption of new technologies such as large core count processors and the usage of ECC memory, the path for all financial enterprises to reap the benefits of safe hardware acceleration without compromising reliability is getting easier,” said Patrick Scateni, Vice President of Enterprise and Performance Group at CIARA. “The joint solutions coming from CIARA and AMD will bring high-performance and broader choice of compute platforms to the FSI market.” The post CIARA steps up to High Frequency Trading and Blockchain with AMD appeared first on insideHPC.
|
by staff on (#3Q3SH)
In this episode of the AI Podcast, Bryan Catanzaro from NVIDIA discusses some of the latest developments at NVIDIA Research. "The goal of NVIDIA research is to figure out what things are going to change the future of the company, and then build prototypes that show the company how to do that," says Catanzaro. "And AI is a good example of that." The post AI Podcast Looks at Recent Developments at NVIDIA Research appeared first on insideHPC.
|
by Rich Brueckner on (#3Q21X)
In this video from Google I/O 2018, Debbie Bard from NERSC describes Deep Learning at scale for cosmology research. "Debbie Bard is acting group lead for the Data Science Engagement Group at the National Energy Research Scientific Computing Center (NERSC) at Berkeley National Lab. A native of the UK, her career spans research in particle physics, cosmology and computing on both sides of the Atlantic." The post Deep Learning at Scale for Cosmology Research appeared first on insideHPC.
|
by Rich Brueckner on (#3Q1ZT)
The Aerospace Corporation in Virginia is seeking a High Performance Software Developer in our Job of the Week. "You will code high-performance technical computing applications in the areas of space system modeling, architecture performance assessment, and mission planning and scheduling." The post Job of the Week: High Performance Software Developer at The Aerospace Corporation appeared first on insideHPC.
|
by staff on (#3PZTM)
Today McObject announced a new version of its eXtremeDB Financial Edition for HPC database management system. Designed for speed and efficiency, the new version offers significant speed improvements, building on the performance of the previous record-setting version, along with a suite of other benefits such as an ultra-fast and flexible market data feed handler. "This version of eXtremeDB comes with a number of benefits, and of course delivers excellent performance," said Steve Graves, CEO and co-founder of McObject. "Speed has always been of critical importance for the financial markets, so we’re very pleased with the improvements this version delivers. It also offers unprecedented flexibility for HPC (high performance computing) with its wide range of math functions which support in-chip analytics." The post New Update Speeds eXtremeDB Financial Edition for HPC appeared first on insideHPC.
|
by staff on (#3PZQS)
Today Neurala announced a breakthrough update to its award-winning Lifelong Deep Neural Network (Lifelong-DNN) technology. The update allows for a significant reduction in training time compared to traditional DNN—20 seconds versus 15 hours—a reduction in overall data needs, and the ability for deep learning neural networks to learn without the risk of forgetting previous knowledge—with or without the cloud. "It takes a very long time to train a traditional DNN on a dataset, and, once that happens, it must be completely re-trained if even a single piece of new information is added. Our technology allows for a massive reduction in the time it takes to train a neural network and all but eliminates the time it takes to add new information," said Anatoli Gorshechnikov, CTO and co-founder of Neurala. "Our Lifelong-DNN is the only AI solution that allows for incremental learning and is the breakthrough that companies across many industries have needed to make deep learning useful for their customers." The post Neurala Reduces Training Time for Deep Neural Network Technology appeared first on insideHPC.
|
by staff on (#3PZMM)
Today the PASC18 conference announced that this year’s panel discussion will focus on the central theme of the conference: “Fast and Big Data, Fast and Big Computation.” Are these two worlds evolving and converging together? Or is HPC facing a game-changing moment as the appetite for computation in the scientific computing community and industry is for a different type of computation than what we're used to? The post PASC18 Panel to Focus on Fast and Big Data, Fast and Big Computation appeared first on insideHPC.
|
by Rich Brueckner on (#3PZHG)
In this video from the 2018 Swiss HPC Conference, Peter Hopton from the EuroEXA project shares the problems and the solutions that are being developed in the EuroEXA co-design project. "EuroEXA hardware designers work together with system software experts optimizing the entire stack from language runtimes to low-level kernel drivers, and application developers that bring in a rich mix of key HPC applications from across climate/weather, physical/energy and life-science/bioinformatics domains to enable efficient system co-design and maximize the impact of the project." The post Video: Building Computing and Data Centres for Exascale in the EU appeared first on insideHPC.
|
by staff on (#3PY00)
"XTREME-Stargate is a small set-top linux appliance that can easily connect to HPC cloud with basic setup over a web portal using the XTREME-DNA interface. It functions as a “super head node†for HPC cloud clusters, providing access to on-premise, private, and public cloud without integration headaches, and allowing connections to baremetal cloud (either shared or dedicated), in addition to the public cloud vendors such as Azure and AWS that have always been accessible via XTREME-DNA."The post XTREME-D to Launch Gateway Appliance for Secure HPC Cloud Access appeared first on insideHPC.
|
by Rich Brueckner on (#3PWZA)
In this eeNews report, Mateo Valero, Director of the Barcelona Supercomputing Center, explains how the RISC-V architecture can play a major role in new supercomputer architectures. Valero was the keynote speaker at the recent RISC-V Workshop in Barcelona. "Born in academia and research, RISC-V ISA delivers a new level of free, extensible software and hardware freedom on architecture, paving the way for the next 50 years of computing design and innovation." The post Mateo Valero on how RISC-V can play a major role in New Supercomputer Architectures appeared first on insideHPC.
|
by staff on (#3PWWN)
Today ArrayFire announced the release of ArrayFire v3.6, the company's open source library of parallel computing functions supporting CUDA, OpenCL, and CPU devices. This new version of ArrayFire includes several new features that improve the performance and usability for applications in machine learning, computer vision, signal processing, statistics, finance, and more. "We use ArrayFire to run the low level parallel computing layer of SDL Neural Machine Translation Products," said William Tambellini, Senior Software Developer at SDL. "ArrayFire flexibility, robustness and dedicated support makes it a powerful tool to support the development of Deep Learning Applications." The post ArrayFire Releases v3.6 Parallel Libraries appeared first on insideHPC.
|
by Rich Brueckner on (#3PWWQ)
Steve Conway from Hyperion Research gave this talk at the HPC User Forum. "We humans don’t fully understand how humans think. When it comes to deep learning, humans also don’t understand yet how computers think. That’s a big problem when we’re entrusting our lives to self-driving vehicles or to computers that diagnose serious diseases, or to computers installed to protect national security. We need to find a way to make these “black box” computers transparent." The post The Need for Deep Learning Transparency appeared first on insideHPC.
|
by staff on (#3PWQ2)
In this episode of Let's Talk Exascale, Jackie Chen from Sandia National Laboratories describes the Combustion-Pele project, which uses predictive simulation for the development of cleaner-burning engines. "Almost all practical combustors operate under extremely high turbulence levels to increase the rate of combustion providing high efficiency, but there are still outstanding challenges in understanding how turbulence affects auto-ignition." The post Let’s Talk Exascale: Transforming Combustion Science and Technology appeared first on insideHPC.
|
by staff on (#3PWMJ)
At the Microsoft Build conference held this week, Microsoft announced Azure Machine Learning Hardware Accelerated Models powered by Project Brainwave integrated with the Microsoft Azure Machine Learning SDK. In this configuration, customers gain access to industry-leading artificial intelligence inferencing performance for their models using Azure’s large-scale deployments of Intel FPGA (field programmable gate array) technology. "With today’s announcement, customers can now utilize Intel’s FPGA and Intel Xeon technologies to use Microsoft’s stream of AI breakthroughs on both the cloud and the edge." The post Intel FPGAs Power Realtime AI in the Azure cloud appeared first on insideHPC.
|
by staff on (#3PT6X)
Today Viking Technology announced its new VT-PM8 and VT-PM16 persistent memory drives that deliver performance and unlimited write endurance similar to that of DRAM, while simultaneously providing the data persistence desired for enterprise applications. VT-PM drives, part of Viking's family of persistent memory technology products, are 2.5 inch U.2 NVMe PCIe Gen3 drives built with architecture from Radian Memory Systems Incorporated. The post Viking Technology Introduces NVMe Persistent Memory Drives appeared first on insideHPC.
|
by Siddhartha Jana on (#3PT3Q)
The HPML 2018 High Performance Machine Learning Workshop has issued its Call for Papers. The event takes place September 24 in Lyon, France. "This workshop is intended to bring together the Machine Learning (ML), Artificial Intelligence (AI) and High Performance Computing (HPC) communities. In recent years, much progress has been made in Machine Learning and Artificial Intelligence in general." The post Call for Papers: High Performance Machine Learning Workshop – HPML 2018 appeared first on insideHPC.
|
by staff on (#3PT19)
"With the 19th Call for Large-Scale Projects, the GCS steering committee granted a total of more than 1 billion core hours to 17 ambitious research projects. The research teams represent a wide range of scientific disciplines, including astrophysics, atomic and nuclear physics, biology, condensed matter physics, elementary particle physics, meteorology, and scientific engineering, among others."The post Gauss Centre in Germany Allocates 1 Billion Computing Core Hours for Science appeared first on insideHPC.
|
by staff on (#3PSXR)
The PRACE initiative continues to sponsor ground-breaking research in Europe. "In the 16th PRACE Call for Project Access, Spain has allocated 470 million core hours on MareNostrum to 17 projects led by scientists from different European countries. With this allocation, this is the second time in a row that this unique supercomputer – which is installed in a chapel – has been the largest contributor in number of core hours in the last two PRACE Calls for Proposals." The post MareNostrum provides 470 million core hours to European scientists appeared first on insideHPC.
|
by Rich Brueckner on (#3PSTV)
Jeff Stuecheli from IBM gave this talk at the HPC User Forum in Tucson. "Built from the ground-up for data intensive workloads, POWER9 is the only processor with state-of-the-art I/O subsystem technology, including next generation NVIDIA NVLink, PCIe Gen4, and OpenCAPI." The post POWER9 for AI & HPC appeared first on insideHPC.
|
by staff on (#3PPZ5)
Today D-Wave Systems launched its new Quadrant business unit, formed to provide machine learning services that make state-of-the-art deep learning accessible to companies across a wide range of industries and application areas. Quadrant's algorithms enable accurate discriminative learning (predicting outputs from inputs) using less data by constructing generative models which jointly model both inputs and outputs. "Quadrant is a natural extension of the scientific and technological advances from D-Wave as we continue to explore new applications for our quantum systems." The post D-Wave Launches Quadrant Business Unit for Machine Learning appeared first on insideHPC.
|
by staff on (#3PPW2)
Today Univa announced that Mellanox has selected Univa’s Navops Launch to extend its on-premise EDA cluster to the cloud, providing Mellanox with cost-effective, on-demand capacity. "Mellanox Technologies provides high-performance solutions to a range of customers looking to innovate with intelligent, interconnected solutions for servers, storage and hyper-converged infrastructure," said Doron Sayag, IT enterprise computing services senior manager at Mellanox Technologies. "Integrating an enterprise-grade cluster management tool like Navops Launch into our own on-premise data center allowed us to better address peak performance needs with seamless bursting of our HPC cluster to the cloud during tape-outs. By operating more efficiently, we get to provide our customers with exceptional products for years to come." The post Univa Navops Launch powers Cloudbursting for Mellanox Hybrid Cloud appeared first on insideHPC.
|
by Rich Brueckner on (#3PPW4)
Saverio Proto from SWITCH gave this talk at the Swiss HPC Conference. "At SWITCH we are looking to provide a container platform as a Service solution. We are working on Kubernetes leveraging the OpenStack cloud provider integration. In this talk we show how to re-use the existing keystone credentials to access the K8s cluster, how to obtain PVCs using the Cinder storage class and many other nice integration details." The post Kubernetes as a Service Built on OpenStack appeared first on insideHPC.
|