by Rich Brueckner on (#3NHQQ)
Ilkay Altintas from the San Diego Supercomputer Center gave this talk at the HPC User Forum. "WIFIRE is an integrated system for wildfire analysis, with specific regard to changing urban dynamics and climate. The system integrates networked observations such as heterogeneous satellite data and real-time remote sensor data, with computational techniques in signal processing, visualization, modeling, and data assimilation to provide a scalable method to monitor such phenomena as weather patterns that can help predict a wildfire's rate of spread." The post The Use of HPC to Model the California Wildfires appeared first on insideHPC.
|
High-Performance Computing News Analysis | insideHPC
Link | https://insidehpc.com/ |
Feed | http://insidehpc.com/feed/ |
Updated | 2024-11-05 03:45 |
by staff on (#3NFPE)
This is the first in a five-part series from a report exploring the potential of unified deep learning with CPU, GPU and FPGA technologies. This post explores the machine learning potential of taking a combined approach to these technologies. The post The Machine Learning Potential of a Combined Tech Approach appeared first on insideHPC.
|
by staff on (#3NF0Y)
"Unveiled today by the DOE, E3SM is a state-of-the-science modeling project that uses the world's fastest computers to more accurately understand how Earth's climate work and can evolve into the future. The goal: to support DOE's mission to plan for robust, efficient, and cost-effective energy infrastructures now, and into the distant future."The post Earth-modeling System steps up to Exascale appeared first on insideHPC.
|
by Rich Brueckner on (#3NEVX)
Carl Williams from NIST gave this talk at the HPC User Forum in Tucson. "Quantum information science research at NIST explores ways to employ phenomena exclusive to the quantum world to measure, encode and process information for useful purposes, from powerful data encryption to computers that could solve problems intractable with classical computers." The post Quantum Computing at NIST appeared first on insideHPC.
|
by Rich Brueckner on (#3NEPN)
In this podcast, the Radio Free HPC team looks at the Department of Energy’s new RFP for Exascale Computers. "As far as predictions go, Dan thinks one machine will go to IBM and the other will go to Intel. Rich thinks HPE will win one of the bids with an ARM-based system designed around The Machine memory-centric architecture. They have a wager, so listen in to find out where the smart money is." The post Radio Free HPC Looks at the New Coral-2 RFP for Exascale Computers appeared first on insideHPC.
|
by staff on (#3NEPP)
To remain competitive, companies, academic institutions, and government agencies must tap the data available to them to empower scientific breakthroughs and drive greater business agility. This guest post explores how Intel’s scalable and efficient HPC technology portfolio accelerates today’s diverse workloads. The post Intel HPC Technology: Fueling Discovery and Insight with a Common Foundation appeared first on insideHPC.
|
by Rich Brueckner on (#3NCQG)
Abhinav Thota from Indiana University gave this talk at the 2018 Swiss HPC Conference. "Container use is becoming more widespread in the HPC field. There are various reasons for this, including the broadening of the user base and applications of HPC. One of the popular container tools on HPC is Singularity, an open source project coming out of the Berkeley Lab. In this talk, we will introduce Singularity, discuss how users of Indiana University are using it and share our experience supporting it. This talk will include a brief demonstration as well." The post Containers Using Singularity on HPC appeared first on insideHPC.
|
by staff on (#3NCP2)
Over at the SC18 Blog, Stephen Lien Harrell from Purdue writes that the conference will host a workshop on the hot topic of Reproducibility. Their Call for Submissions is out with a deadline of August 19, 2018.
|
by Rich Brueckner on (#3NAPC)
D.E. Shaw Research is seeking an HPC System Administrator in our Job of the Week. "Our research effort is aimed at achieving major scientific advances in the field of biochemistry and fundamentally transforming the process of drug discovery." The post Job of the Week: HPC System Administrator at D.E. Shaw Research appeared first on insideHPC.
|
by staff on (#3NAM5)
Dag Lohmann from KatRisk gave this talk at the HPC User Forum in Tucson. "In 2012, a small Berkeley, California, startup called KatRisk set out to improve the quality of worldwide flood risk maps. The team wanted to create large-scale, high-resolution maps to help insurance companies evaluate flood risk on the scale of city blocks and buildings, something that had never been done. Through the OLCF’s industrial partnership program, KatRisk received 5 million processor hours on Titan." The post Using the Titan Supercomputer to Develop 50,000 Years of Flood Risk Scenarios appeared first on insideHPC.
|
by Rich Brueckner on (#3N8GC)
Fujitsu reports that the company has significantly boosted the performance of the RAIDEN supercomputer. RAIDEN is a computer system for artificial intelligence research originally deployed in 2017 at the RIKEN Center for Advanced Intelligence Project (AIP Center). "The upgraded RAIDEN has increased its performance by a considerable margin, moving from an initial total theoretical computational performance of 4 AI Petaflops to 54 AI Petaflops, placing it in the top tier of Japan's systems. In having built this system, Fujitsu demonstrates its commitment to support cutting-edge AI research in Japan." The post Fujitsu Upgrades RAIDEN at RIKEN Center for Advanced Intelligence Project appeared first on insideHPC.
|
by Rich Brueckner on (#3N8CX)
In this video from the GPU Technology Conference, John Stone from the University of Illinois describes how container technology in the NVIDIA GPU Cloud helps the University distribute accelerated applications for science and engineering. "Containers are a way of packaging up an application and all of its dependencies in such a way that you can install them collectively on a cloud instance or a workstation or a compute node. And it doesn't require the typical amount of system administration skills and involvement to put one of these containers on a machine." The post Why UIUC Built HPC Application Containers for NVIDIA GPU Cloud appeared first on insideHPC.
|
by Rich Brueckner on (#3N8AN)
Christine Goulet from the Southern California Earthquake Center gave this talk at the HPC User Forum in Tucson. "SCEC coordinates fundamental research on earthquake processes using Southern California as its principal natural laboratory. The SCEC community advances earthquake system science through synthesizing knowledge of earthquake phenomena through physics-based modeling, including system-level hazard modeling and communicating our understanding of seismic hazards to reduce earthquake risk and promote community resilience." The post Video: HPC Use for Earthquake Research appeared first on insideHPC.
|
by staff on (#3N87Z)
Registration for the ISC STEM Student Day program is now open. As part of the ISC 2018 conference, the full-day program is free of charge and takes place June 27 in Frankfurt, Germany. "We have created a program to welcome science, technology, engineering, and mathematics (STEM) students into the world of HPC, demonstrate how technical skills in this area can propel your future career, introduce you to the current job landscape, and show you what the HPC workforce will look like in 2020 and beyond." The post Students: Sign up now for ISC STEM Student Day in Frankfurt appeared first on insideHPC.
|
by staff on (#3N5GC)
Over at Intel, Scott Cyphers writes that the company has open-sourced nGraph, a framework-neutral Deep Neural Network (DNN) model compiler that can target a variety of devices. With nGraph, data scientists can focus on data science rather than worrying about how to adapt their DNN models to train and run efficiently on different devices. Continue reading below for highlights of our engineering challenges and design decisions, and see GitHub, our documentation, and our SysML paper for additional details. The post Intel Open Sources nGraph Deep Neural Network model for Multiple Devices appeared first on insideHPC.
|
by staff on (#3N5AH)
"The supercomputer JUQUEEN, the one-time reigning power in Europe’s high-performance computing industry, is ceding its place to its successor, the Jülich Wizard for European Leadership Science. Called JUWELS for short, the supercomputer is the culmination of the joint efforts of more than 16 European partners in the EU-funded DEEP projects since 2011. Once completed, JUWELS will consist of three fully integrated modules able to carry out demanding simulations and scientific tasks."The post JUWELS Supercomputer in Germany to be based on Modular Supercomputing appeared first on insideHPC.
|
by staff on (#3N5AK)
In this special guest feature, Mahesh Pancholi from OCF writes that many universities are now engaging in cloud bursting and are regularly taking advantage of public cloud infrastructures that are widely available from large companies like Amazon, Google and Microsoft. "By bursting into the public cloud, the university can offer the latest and greatest technologies as part of its Research Computing Service for all its researchers." The post Universities step up to Cloud Bursting appeared first on insideHPC.
|
by Rich Brueckner on (#3N534)
"What if I told you there was a way to allow your customers and colleagues to run their HPC jobs inside the Docker containers they're already creating? or an easily learned, easily employed method for consistently reproducing a particular application environment across numerous Linux distributions and platforms? There is. In this talk/tutorial session, we'll explore the problem domain and all the previous solutions, and then we'll discuss and demo Charliecloud, a simple, streamlined container runtime that fills the gap between Docker and HPC -- without requiring HPC Admins to lift a finger!"The post Charliecloud: Unprivileged Containers for User-Defined Software Stacks appeared first on insideHPC.
|
by staff on (#3N4ZQ)
In this episode of Let's Talk Exascale, Charlie Catlett from Argonne National Laboratory and the University of Chicago describes how extreme scale HPC will be required to better build Smart Cities. "Urbanization is a bigger set of challenges in the developing world than in the developed world, but it’s still a challenge for us in US and European cities and Japan." The post Exascale Computing for Long Term Design of Urban Systems appeared first on insideHPC.
|
by staff on (#3N2CX)
Today the RSC Group from Russia announced the deployment of the world's first 100% ‘hot water’ liquid cooled supercomputer at the Joint Institute for Nuclear Research (JINR) in Dubna. "It's great to note that we launch the new heterogeneous supercomputer named after professor Govorun at JINR’s Information Technology Laboratory in the year of the 60th anniversary of commissioning of the first Ural-1 supercomputer at our institute in 1958. Our scientists and research groups now have a powerful and modern tool that will greatly accelerate theoretical and experimental research of nuclear physics and condensed matter physics," said Vladimir Vasilyevich Korenkov, Director of the Information Technology Laboratory of the Joint Institute for Nuclear Research. The post Russian RSC Group deploys ‘hot water’ cooled supercomputer at JINR appeared first on insideHPC.
|
by staff on (#3N2A2)
Wahid Bhimji from NERSC gave this talk at the 2018 HPC User Forum in Tucson. "Machine Learning and Deep Learning are increasingly used to analyze scientific data, in fields as diverse as neuroscience, climate science and particle physics. In this page you will find links to examples of scientific use cases using deep learning at NERSC, information about what deep learning packages are available at NERSC, and details of how to scale up your deep learning code on Cori to take advantage of the compute power available from Cori's KNL nodes." The post Video: Addressing Key Science Challenges with Adversarial Neural Networks appeared first on insideHPC.
|
by staff on (#3N2A4)
Today the Exascale Computing Project appointed David Kepczynski from GE Global Research as the new chair of the ECP Industry Council. "We are thrilled that Dave Kepczynski has agreed to take the leadership reins for the ECP’s Industry Council," ECP Director Doug Kothe said. "He has been an active member of the Industry Council since day one, and his experience and vision pertaining to the potential impact of exascale on U.S. industries is invaluable to our mission." Kothe added, "We wish to thank Michael McQuade for his pioneering leadership role with this external advisory group, and we wish him well with his future plans." The post David Kepczynski from GE to Chair ECP Industry Council appeared first on insideHPC.
|
by Rich Brueckner on (#3N1YT)
Robert Henschel from Indiana University gave this talk at the Swiss HPC Conference. "In this talk, I will present an overview of the High Performance Group as well as SPEC’s benchmarking philosophy in general. Most everyone knows SPEC for the SPEC CPU benchmarks that are heavily used when comparing processor performance, but the High Performance Group specifically focusses on whole system benchmarking utilizing the parallelization paradigms common in HPC, like MPI, OpenMP and OpenACC." The post Introducing the SPEC High Performance Group and HPC Benchmark Suites appeared first on insideHPC.
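For context on the parallelization paradigms these suites exercise, here is a minimal hybrid MPI + OpenMP kernel of the kind a whole-system benchmark might time. It is an illustrative sketch only, not code from any SPEC suite; the compiler and launcher choices (e.g. mpicc -fopenmp, mpirun) are assumptions.

```c
/* Illustrative hybrid MPI + OpenMP kernel -- not code from any SPEC suite.
 * Each MPI rank sums part of a series with its OpenMP threads, then the
 * partial sums are combined across the whole system. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, nranks;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    const long n = 1000000;
    double local = 0.0;

    /* Node-level parallelism: OpenMP threads share this rank's slice. */
    #pragma omp parallel for reduction(+:local)
    for (long i = rank; i < n; i += nranks)
        local += 1.0 / (double)(i + 1);

    /* System-level parallelism: MPI combines the per-rank results. */
    double global = 0.0;
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum = %f (ranks=%d, threads/rank=%d)\n",
               global, nranks, omp_get_max_threads());

    MPI_Finalize();
    return 0;
}
```

A run like this stresses both the interconnect and the per-node cores at once, which is the whole-system view the High Performance Group's suites are built around.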
|
by staff on (#3N1YW)
Cray is the first system vendor to offer an optimized programming environment for AMD EPYC processors, which is a distinct advantage. "Cray's decision to offer the AMD EPYC processors in the Cray CS500 product line expands its market opportunities by offering buyers an important new choice," said Steve Conway, senior vice president of research at Hyperion Research. The post Cray Adopts AMD EPYC Processors for Supercomputing appeared first on insideHPC.
|
by staff on (#3MZM6)
Today the University of Bristol announced an initiative to accelerate the adoption of
|
by staff on (#3MZBQ)
Today StorONE announced that it has partnered with Mellanox Technologies to leverage each other’s technological approaches in order to create powerful, scalable and flexible storage solutions. These solutions achieve enterprise-class functionality, high performance and high capacity at the industry’s lowest total cost of ownership. "Modern software-defined storage solutions require high-performance, programmable and intelligent networks," said Motti Beck, Senior Director of Enterprise Market Development at Mellanox. "Combining StorONE’s TRU STORAGE software with Mellanox Ethernet fabric storage solutions improves the simplicity, cost and efficiency of enterprise storage systems, supports enterprises’ mission critical storage features at wire speed and ensures the best end-user experience available." The post StorONE and Mellanox Build Wire-Speed TRU Storage Solutions appeared first on insideHPC.
|
by staff on (#3MZ8Q)
Today E8 Storage announced availability of InfiniBand support for its high performance, NVMe storage solutions. The move comes as a direct response to HPC customers that wish to take advantage of the high speed, low latency throughput of InfiniBand for their data hungry applications. E8 Storage support for InfiniBand will be seamless for customers who now have the flexibility to connect via Ethernet or InfiniBand when paired with Mellanox ConnectX InfiniBand/VPI adapters. "Today we demonstrate once again that E8 Storage’s architecture can expand, evolve and always extract the full potential of flash performance," comments Zivan Ori, co-founder and CEO of E8 Storage. "Partnering with market leaders like Mellanox that deliver the very best network connectivity technology ensures we continue to meet and, frequently, exceed the needs of our HPC customers even in their most demanding environments." The post E8 Storage steps up to HPC with InfiniBand Support appeared first on insideHPC.
|
by Rich Brueckner on (#3MZ5Z)
Adrian Tate from Cray and Stig Telfer from StackHPC gave this talk at the 2018 Swiss HPC Conference. "This talk will describe how Cray, StackHPC and the HBP co-designed a next-generation storage system based on Ceph, exploiting complex memory hierarchies and enabling next-generation mixed workload execution. We will describe the challenges, show performance data and detail the ways that a similar storage setup may be used in HPC systems of the future." The post Ceph on the Brain: Storage and Data-Movement Supporting the Human Brain Project appeared first on insideHPC.
|
by staff on (#3MYZ8)
The DOE Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program is now seeking proposals for high-impact, computationally intensive research campaigns in a broad array of science, engineering and computer science domains. "From April 16 to June 22, 2018, INCITE’s open call provides an opportunity for researchers to pursue transformational advances in science and technology through large allocations of computer time and supporting resources at the Argonne Leadership Computing Facility (ALCF) and the Oak Ridge Leadership Computing Facility (OLCF). The ALCF and OLCF are DOE Office of Science User Facilities. Open to researchers from academia, industry and government agencies, the INCITE program will award 50 percent of the allocable time on DOE’s leadership-class supercomputers: the ALCF’s Mira and Theta systems and the OLCF’s Summit and Titan systems." The post DOE INCITE Program Seeks Advanced Computational Research Proposals for 2019 appeared first on insideHPC.
|
by staff on (#3MW1W)
Australia’s Pawsey Supercomputing Centre is hosting a GPU Hackathon this week in Perth, Australia. "The GPU Hackathon is a free event taking place at Esplanade Hotel in Fremantle, from Monday 16 April to Friday 20 April. Six teams from Australia, the United States, and Europe, are gathering in Perth for this 5-day event to adapt their applications for GPU architectures." The post Pawsey Supercomputing Centre Hosts GPU Hackathon this week appeared first on insideHPC.
|
by Rich Brueckner on (#3MVW2)
The results from our HPC & AI perception survey are here. "90 percent of all respondents felt that their business will ultimately be impacted by AI. Although almost all respondents see AI as playing a role in the future of the business, the survey also revealed the top three industries that will see the most impact. Healthcare came in first, followed by life sciences, and finance/transportation tied for third place. The possibilities of AI are seemingly endless. And the shift has already begun." The post Industry Insights: Download the Results of our AI & HPC Perceptions Survey appeared first on insideHPC.
|
by staff on (#3MVSR)
Researchers at the Atos Quantum Laboratory have successfully modeled ‘quantum noise’ and as a result, simulation is more realistic than ever before, and is closer to fulfilling researchers’ requirements. "We are thrilled by the remarkable progress that the Atos Quantum program has delivered as of today," said Thierry Breton, Chairman and CEO of Atos. The post Atos Quantum Learning Machine can now simulate real Qubits appeared first on insideHPC.
|
by Rich Brueckner on (#3MVPV)
Gilles Fourestey from EPFL gave this talk at the Swiss HPC Conference. "LENSTOOL is a gravitational lensing software that models mass distribution of galaxies and clusters. It is used to obtain sub-percent precision measurements of the total mass in galaxy clusters and constrain the dark matter self-interaction cross-section, a crucial ingredient to understanding its nature." The post Scratch to Supercomputers: Bottoms-up Build of Large-scale Computational Lensing Software appeared first on insideHPC.
|
DDN Builds New Engineering Facility in Colorado focused on AI, Cloud, and Enterprise Data Challenges
by staff on (#3MVM3)
Today DDN announced the opening of a new facility in Colorado Springs, Colorado, including a significant expansion of lab, testing and benchmarking facilities. The enhanced capabilities will enable DDN to accelerate development efforts and increase in-house capabilities to mimic customer applications and workflows. "Our Enterprise, AI, HPC and Cloud customers have always relied upon us to develop the world’s leading data storage solutions at-scale, and for our long-term focus and sustained investments in research, technology and innovation," said Alex Bouzari, chief executive officer, chairman and co-founder of DDN. "We are excited to add our new Colorado Springs facility to the DDN R&D centers worldwide and to expand our team of very talented engineers and technologists who will continue to drive innovation for our customers in the years to come." The post DDN Builds New Engineering Facility in Colorado focused on AI, Cloud, and Enterprise Data Challenges appeared first on insideHPC.
|
by Rich Brueckner on (#3MSJ9)
Alberto Madonna gave this talk at the Swiss HPC Conference. "In this work we present an extension to the container runtime of Shifter that provides containerized applications with a mechanism to access GPU accelerators and specialized networking from the host system, effectively enabling performance portability of containers across HPC resources. The presented extension makes it possible to rapidly deploy high-performance software on supercomputers from containerized applications that have been developed, built, and tested in non-HPC commodity hardware, e.g. the laptop or workstation of a researcher." The post Shifter – Docker Containers for HPC appeared first on insideHPC.
|
by staff on (#3MSGT)
Altair software is now part of the Inspire Unlimited software-as-a-service offering available on the Azure cloud. "Unlike the HyperWorks Unlimited Appliance, where performance is based on the number of nodes, Inspire’s scale requirements are based on the number of simultaneous users; there could be 1,000 engineers working together at a time," says Sam Mahalingam from Altair. "We felt that the HPC environment in Azure was architected to meet the type of back-end requirements we needed for Inspire." Altair uses Microsoft Azure Virtual Machines, with NV instances powered by NVIDIA Tesla M60 GPUs. The post Altair Steps up to Azure Cloud with Inspire Unlimited appeared first on insideHPC.
|
by Rich Brueckner on (#3MQTF)
Haodong Tang from Intel gave this talk at the 2018 Open Fabrics Workshop. "An efficient network messenger is critical for today’s scale-out storage systems. Ceph is one of the most popular distributed storage systems, providing scalable and reliable object, block and file storage services. As the explosive growth of Big Data continues, there are strong demands for leveraging Ceph to build high-performance and ultra-low latency storage solutions in cloud and big data environments. The traditional TCP/IP cannot satisfy this requirement, but Remote Direct Memory Access (RDMA) can." The post Accelerating Ceph with RDMA and NVMe-oF appeared first on insideHPC.
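To make the TCP/IP-versus-RDMA contrast concrete: an RDMA transport must register its buffers with the adapter up front so the NIC can move data without kernel copies. The sketch below shows that registration step using libibverbs; it is a minimal illustration assuming an RDMA-capable adapter is present, not code from Ceph's async messenger.

```c
/* Minimal libibverbs sketch: register a buffer so an RDMA-capable NIC can
 * access it directly, bypassing the kernel TCP/IP stack. Not Ceph code --
 * just the memory-registration step every RDMA transport relies on. */
#include <infiniband/verbs.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int num = 0;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) { fprintf(stderr, "no RDMA devices found\n"); return 1; }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);              /* protection domain */

    size_t len = 1 << 20;
    void *buf = malloc(len);

    /* Pin and register the buffer; the returned keys let the local NIC and
     * a remote peer read or write it without extra CPU copies. */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    printf("registered %zu bytes, lkey=0x%x rkey=0x%x\n", len, mr->lkey, mr->rkey);

    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```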
|
by staff on (#3MQP5)
NCSA researchers are using AI technologies to detect gravitational waves. The work is described in a new article in Physical Review D this month. "This article shows that we can automatically detect and group together noise anomalies in data from the LIGO detectors by using artificial intelligence algorithms based on neural networks that were already pre-trained to classify images of real-world objects," said research scientist Eliu Huerta. The post Using AI to detect Gravitational Waves with the Blue Waters Supercomputer appeared first on insideHPC.
|
by Rich Brueckner on (#3MN7Q)
Brian Barrett from Amazon gave this talk at the 2018 OpenFabrics Workshop. "As network performance becomes a larger bottleneck in application performance, AWS is investing in improving HPC network performance. Our initial investment focused on improving performance in open source MPI implementations, with positive results. Recently, however, we have pivoted to focusing on using libfabric to improve point-to-point performance." The post Amazon and Libfabric: A case study in flexible HPC Infrastructure appeared first on insideHPC.
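As a rough illustration of what targeting libfabric looks like at the API level, the sketch below enumerates the fabric providers visible on a node with fi_getinfo, the usual first step before an MPI implementation opens endpoints. The API version requested is an assumption, and this is not AWS's integration code.

```c
/* Minimal libfabric sketch: list the fabric providers available on this
 * node. Provider discovery is the first step before opening endpoints.
 * The requested API version (1.5) is an assumption. */
#include <rdma/fabric.h>
#include <stdio.h>

int main(void)
{
    struct fi_info *info = NULL;

    /* Ask libfabric for every provider/endpoint combination it supports. */
    int ret = fi_getinfo(FI_VERSION(1, 5), NULL, NULL, 0, NULL, &info);
    if (ret) {
        fprintf(stderr, "fi_getinfo failed: %d\n", ret);
        return 1;
    }

    for (struct fi_info *cur = info; cur; cur = cur->next)
        printf("provider: %-12s fabric: %s\n",
               cur->fabric_attr->prov_name, cur->fabric_attr->name);

    fi_freeinfo(info);
    return 0;
}
```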
|
by Rich Brueckner on (#3MN5A)
"Supercomputing capability plays a key role in ECMWF’s success and in its ability to implement its strategic vision to 2025. In order to meet the required flexibility for future growth, ECMWF’s data centre is being relocated to Bologna, Italy and will be operational in 2020. The new Director of Computing will be responsible for delivering this challenging transition, fit-out and transformation, whilst ensuring that ECMWF’s computing capability continues to support some of the most critical scientific advances of our time."The post Job of the Week: Director of Computing at ECMWF appeared first on insideHPC.
|
by Lisa King on (#3MN0V)
The mission of the eScience Centre is to support research through the development and provision of advanced ICT infrastructure, services and expertise. We provide services and expertise in the areas of High Performance Computing, Cloud Services, Data Services and e-science. We support large research, educational and government institutions, as well as the business community. The […] The post Architects for Research Infrastructure appeared first on insideHPC.
|
by Rich Brueckner on (#3MMWA)
DK Panda from Ohio State University gave this talk at the Swiss HPC Conference. "This talk will provide an overview of challenges in accelerating Hadoop, Spark, and Memcached on modern HPC clusters. An overview of RDMA-based designs for Hadoop (HDFS, MapReduce, RPC and HBase), Spark, Memcached, Swift, and Kafka using native RDMA support for InfiniBand and RoCE will be presented. Enhanced designs for these components to exploit NVM-based in-memory technology and parallel file systems (such as Lustre) will also be presented." The post Exploiting HPC Technologies for Accelerating Big Data Processing and Associated Deep Learning appeared first on insideHPC.
|
by staff on (#3MMWB)
Today PRACE announced that Prof. Dr. Xiaoxiang Zhu, German Aerospace Center (DLR) and Technical University of Munich (TUM), Germany, is the winner of the 2018 PRACE Ada Lovelace Award for HPC for her outstanding contributions and impact on HPC in Europe. "Prof Zhu and her team (SiPEO) develop explorative algorithms to improve information retrieval from remote sensing data, in particular those from the current and next generation of Earth observation missions." The post Dr. Xiaoxiang Zhu wins 2018 PRACE Ada Lovelace Award for HPC appeared first on insideHPC.
|
by Rich Brueckner on (#3MJHA)
Christoph Lameter from Jump Trading LLC gave this talk at the OpenFabrics Workshop. "Recently new types of memory have shown up like HBM (High Bandwidth Memory), Optane, 3DXpoint, NVDIMM, NVME and various "nonvolatile" types of memory. This talk gives a brief rundown on what is available and gives some examples of how the vendors enable the actual use of this memory in the operating system (e.g. DAX and filesystems) and then shows how an application would make use of this memory. In particular, we will be looking at what considerations are important for the use of RDMA to those memory devices." The post New Types of Memory, their support in Linux, and how to use them via RDMA appeared first on insideHPC.
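As a hedged example of the application-level usage the talk refers to, the sketch below maps a file on a DAX-mounted persistent-memory filesystem with MAP_SYNC, so ordinary loads and stores reach the media directly with no page cache in between. The mount path is hypothetical, and MAP_SYNC requires a DAX filesystem on a reasonably recent Linux kernel.

```c
/* Sketch: map a file on a DAX-mounted persistent-memory filesystem so that
 * plain loads and stores reach the media directly. The path is hypothetical;
 * MAP_SYNC needs a DAX filesystem and a recent Linux kernel. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Fallback definitions (values from the Linux UAPI headers) in case the
 * installed libc headers predate these flags. */
#ifndef MAP_SHARED_VALIDATE
#define MAP_SHARED_VALIDATE 0x03
#endif
#ifndef MAP_SYNC
#define MAP_SYNC 0x80000
#endif

int main(void)
{
    size_t len = 4096;
    int fd = open("/mnt/pmem/example.dat", O_RDWR | O_CREAT, 0644);
    if (fd < 0 || ftruncate(fd, len) != 0) { perror("open/ftruncate"); return 1; }

    /* MAP_SHARED_VALIDATE makes the kernel reject MAP_SYNC if the
     * underlying filesystem cannot honor it. */
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_SHARED_VALIDATE | MAP_SYNC, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(p, "persistent hello");   /* a plain store, no write() syscall */

    munmap(p, len);
    close(fd);
    return 0;
}
```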
|
by Rich Brueckner on (#3MJEH)
In this video from the 2018 GPU Technology Conference, Prof. Taisuke Boku from the University of Tsukuba & JCAHPC and Duncan Poole, President of OpenACC, describe how OpenACC is accelerating science. "Dr. Boku and his team are working on OpenACC and XcalableACC for accelerated computing. XcalableACC is an extension of XMP for accelerated clusters using OpenACC. Programmers can develop an application by inserting XMP and OpenACC directives into a sequential program." The post Developing Faster Algorithms with OpenACC and XcalableACC appeared first on insideHPC.
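For readers new to the directive model described here, the sketch below is a minimal OpenACC example: the sequential loop is untouched except for the directive, which asks the compiler to offload it to an accelerator. The compiler invocation is an assumption (for example, something like pgcc -acc with an OpenACC-capable compiler).

```c
/* Minimal OpenACC example: a SAXPY-style loop annotated with a directive.
 * Without an OpenACC compiler the pragma is simply ignored and the loop
 * runs sequentially on the CPU. */
#include <stdio.h>

#define N 1000000

int main(void)
{
    static float x[N], y[N];
    for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    /* Copy x to the device, copy y both ways, and run the loop there. */
    #pragma acc parallel loop copyin(x[0:N]) copy(y[0:N])
    for (int i = 0; i < N; i++)
        y[i] = 2.0f * x[i] + y[i];

    printf("y[0] = %f\n", y[0]);
    return 0;
}
```

XcalableACC layers XMP data-distribution directives on top of the same idea to spread such loops across an accelerated cluster.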
|
by Rich Brueckner on (#3MJ8Z)
Dave Turek from IBM gave this talk at the Swiss HPC Conference. "There is a shift underway where HPC is beginning to be addressed with novel techniques and technologies including cognitive and analytic approaches to HPC problems and the arrival of the first quantum systems. This talk will showcase how IBM is merging cognitive, analytics, and quantum with classic simulation and modeling to create a new path for computational science." The post The Transformation of HPC: Simulation and Cognitive Methods in the Era of Big Data appeared first on insideHPC.
|
by Rich Brueckner on (#3MJ6D)
The 34th International Conference on Massive Storage Systems and Technologies (MSST 2018) has posted its Speaker Agenda. The event takes place May 14-16 in Santa Clara, California. "Join the discussion on webscale IT, and the demand on storage systems from IoT, healthcare, scientific research, and the continuing stream of smart applications (apps) for mobile devices." The post Agenda Posted for Mass Storage Conference in Santa Clara appeared first on insideHPC.
|
by staff on (#3MFJV)
Today Intel announced top-tier OEM adoption of Intel’s field programmable gate array (FPGA) acceleration in their server lineups. This is the first major use of reprogrammable silicon chips to help speed up mainstream applications for the modern data center. "We are at the horizon of a new era of data center computing as Dell EMC and Fujitsu put the power and flexibility of Intel FPGAs in mainstream server products," said Reynette Au, vice president of marketing for the Intel Programmable Solutions Group. "We're enabling our customers and partners to create a rich set of high-performance solutions at scale by delivering the benefits of hardware performance, all in a software development environment." The post Intel FPGAs Go Mainstream for Enterprise Workloads appeared first on insideHPC.
|
by Rich Brueckner on (#3MFFT)
Today RAID Incorporated updated its ARI-400 Series of storage solutions. The newest release features declustered RAID, which allows for mixing and matching of multiple disk capacities and greatly reduces rebuild times, a welcome technology as larger disks continue to be released into the market. The post ARI-400 Series Features Declustered RAID Technology appeared first on insideHPC.
|
by staff on (#3MFFW)
"HPC organizations that utilize cloud service providers (AWS, Azure, Google, Oracle, etc.) in conjunction with Adaptive Computing’s NODUS Cloud Bursting Solution can significantly reduce their on-premise cluster sizes and costs by as much as 40-50 percent, and burst the rest of their HPC workload to the cloud, by implementing this hybrid approach."The post Reduce Costs with Adaptive Computing’s NODUS Cloud Bursting Solution appeared first on insideHPC.
|