by veronicahpc1 on (#1WYES)
Supercomputing developers and experts from around the globe will converge on Salt Lake City, Utah for the 2016 Intel® HPC Developer Conference on November 12-13, just prior to SC16. Conference attendance is free; however, those interested in attending should register quickly, as Intel is expecting a big response, reflecting the broadening demand for HPC learning opportunities among technical developers. Read on to learn about the incredible presenter lineup this year. The post 2016 Intel HPC Developer Conference Addresses In-Demand Topics appeared first on insideHPC.
|
High-Performance Computing News Analysis | insideHPC
Link | https://insidehpc.com/ |
Feed | http://insidehpc.com/feed/ |
Updated | 2024-11-25 21:45 |
by staff on (#1WYP9)
This may indeed be the year of artificial intelligence, when the technology came into its own for mainstream businesses. "But will other companies understand if AI has value for them? Perhaps a better question is 'Why now?' This question centers on both the opportunity and why many companies are scared about missing out." The post insideHPC Readers: Weigh in on Why AI is Taking Off Now appeared first on insideHPC.
|
by staff on (#1WVB6)
Two University of Wyoming graduate students earned a trip to the SC16 conference in November by virtue of winning the poster contest at the recent Rocky Mountain Advanced Computing Consortium (RMACC) High Performance Computing Symposium. "I hope to receive good exposure to the most recent advancements in the field of high-performance computing," Kommera says. The post Winning Posters on GPU Programming Send UW Students to SC16 appeared first on insideHPC.
|
by staff on (#1WV6N)
The Ohio Supercomputer Center has joined the CaRC Consortium, an NSF-funded research coordination network. The post OSC Joins CaRC Research Coordination Network appeared first on insideHPC.
|
by Rich Brueckner on (#1WV2Y)
"This tutorial, part of the SC16 State of the Practice, will guide attendees through the process of purchasing and deploying an HPC system. It will cover the whole process: engaging with stakeholders to secure funding, requirements capture, market survey, specification of the tender/request-for-proposal documents, engaging with suppliers, evaluating proposals, and managing the installation. Attendees will learn how to specify what they want, yet enable the suppliers to provide innovative solutions beyond their specification in both technology and price; how to demonstrate to stakeholders that the solution selected is best value for money; and the common risks, pitfalls and mitigation strategies essential to achieve an on-time and on-quality installation process." The post Preview: SC16 Tutorial on How to Buy a Supercomputer appeared first on insideHPC.
|
by Rich Brueckner on (#1WTZ3)
Today Mellanox announced the availability of a standard Linux kernel driver for the company's Open Ethernet Spectrum switch platforms. Developed within the large Linux community, the new driver enables standard Linux operating systems and off-the-shelf Linux-based applications to operate on the switch, including L2 and L3 switching. Open Ethernet provides data centers with the flexibility to choose the best hardware platform and the best software platform, resulting in optimized data center performance and higher return on investment. The post Mellanox Deploys Standard Linux Operating Systems over Ethernet Switches appeared first on insideHPC.
|
by MichaelS on (#1WTES)
From Megaflops to Gigaflops to Teraflops to Petaflops, and soon to Exaflops, the march of HPC is always on and moving ahead. This whitepaper details some of the technical challenges that will need to be addressed in the coming years in order to get to exascale computing. The post Exascale – A Race to the Future of HPC appeared first on insideHPC.
|
by Rich Brueckner on (#1WQXB)
Registration is now open for the Dell HPC Community event at SC16. The event takes place Nov. 12 at the Radisson Hotel in Salt Lake City. "The Dell HPC Community events feature keynote presentations by HPC experts and working group sessions to discuss best practices in the use of Dell HPC Systems." The post Registration Opens for Dell HPC Community Event at SC16 appeared first on insideHPC.
|
by Rich Brueckner on (#1WQTT)
"We are still in the first minutes of the first day of the Intelligence revolution. In this keynote, Dr. Joseph Sirosh will present five solutions (and their implementations) that the intelligent cloud delivers. Sirosh shares five cloud AI patterns that his team presented at the Summit. These five patterns are really about ways to bring data and learning together in cloud services, to infuse intelligence." The post Video: Azure – the Cloud Supercomputer for AI appeared first on insideHPC.
|
by staff on (#1WN8D)
"It's often a challenge to test the scalability of system software components before a large deployment, particularly if you need low-level hardware access," said Dan Stanzione, Executive Director at TACC and a Co-PI on the Chameleon project. "Chameleon was designed for just these sorts of cases – when your local test hardware is inadequate, and you are testing something that would be difficult to test in the commercial cloud – like replacing the available file system. Projects like Slash2 can use Chameleon to make tomorrow's cloud systems better than today's." The post Chameleon Testbed Blazes New Trails for Cloud HPC at TACC appeared first on insideHPC.
|
by Rich Brueckner on (#1WN5Z)
Adrian Jackson from EPCC at the University of Edinburgh presented this tutorial to ARCHER users. "We have been working for a number of years on porting computational simulation applications to the KNC, with varying success. We were keen to test this new processor with its promise of 3x serial performance compared to the KNC and 5x the memory bandwidth of normal processors (using the high-bandwidth MCDRAM memory attached to the chip)." The post Video: Intel Xeon Phi (KNL) Processor Overview appeared first on insideHPC.
|
by Rich Brueckner on (#1WJAY)
Sure, your code seems fast, but how do you know if you are leaving potential performance on the table? Recognized HPC experts Georg Hager and Gerhard Wellein will teach a tutorial on Node-Level Performance Engineering at SC16. The session will take place 8:30am-5:00pm on Sunday, Nov. 13 in Salt Lake City. The post SC16 Tutorial to Focus on Node-Level Performance Engineering appeared first on insideHPC.
|
by staff on (#1WJ82)
"Our high-performance computing solutions enable deep learning, engineering, and scientific fields to scale out their compute clusters to accelerate their most demanding workloads and achieve the fastest time-to-results with maximum performance per watt, per square foot, and per dollar," said Charles Liang, President and CEO of Supermicro. "With our latest innovations incorporating the new NVIDIA P100 processors in performance- and density-optimized 1U and 4U architectures with NVLink, our customers can accelerate their applications and innovations to address the most complex real-world problems." The post Supermicro Rolls Out New Servers with Tesla P100 GPUs appeared first on insideHPC.
|
by Rich Brueckner on (#1WJ16)
In this video from the 2016 Argonne Training Program on Extreme-Scale Computing, Mark Miller from LLNL leads a panel discussion on experiences in extreme-scale HPC with FASTMath team members. "The FASTMath SciDAC Institute is developing and deploying scalable mathematical algorithms and software tools for reliable simulation of complex physical phenomena and collaborating with U.S. Department of Energy (DOE) domain scientists to ensure the usefulness and applicability of our work. The focus of our work is strongly driven by the requirements of DOE application scientists who work extensively with mesh-based, continuum-level models or particle-based techniques." The post Video: Experiences in eXtreme Scale HPC appeared first on insideHPC.
|
by staff on (#1WHX5)
Today XSEDE announced it has awarded 30,000 core-hours of supercomputing time on the Bridges supercomputer to the North Carolina School of Science and Mathematics (NCSSM). Funded with a $9.65M NSF grant, Bridges contains a large number of research-grade software packages for science and engineering, including codes for computational chemistry, computational biology, and computational physics, along with specialty codes such as computational fluid dynamics. "NCSSM research students often pursue interdisciplinary research projects that involve computational and/or laboratory work in chemistry, physics, and other fields," said Jon Bennett, instructor of physics and faculty mentor for physics research. "The availability of supercomputer computational resources would greatly expand the range and depth of projects that are possible for these students." The post Bridges Supercomputer to Power Research at North Carolina School of Science and Mathematics appeared first on insideHPC.
|
by staff on (#1WHKQ)
Today Amazon Web Services announced the availability of P2 instances, a new GPU instance type for Amazon Elastic Compute Cloud designed for compute-intensive applications that require massive parallel floating point performance, including artificial intelligence, computational fluid dynamics, computational finance, seismic analysis, molecular modeling, genomics, and rendering. With up to 16 NVIDIA Tesla K80 GPUs, P2 instances are the most powerful GPU instances available in the cloud. The post GPUs Power New AWS P2 Instances for Science & Engineering in the Cloud appeared first on insideHPC.
|
by staff on (#1WE7W)
Men still outnumber women in STEM training and employment, and engineering leaders are working to bring awareness to that diversity gap and the opportunities it presents. SC16 is calling upon all organizations to look at the diversity landscape and publish that data. "Of course, we are supporting programs that empower more girls to study and pursue STEM degrees and careers. Getting more girls through the educational and training pipeline is a great first step, but it's just the beginning." The post Supercomputing Experts Lend Expertise to Address STEM Gender Gap appeared first on insideHPC.
|
by staff on (#1WE61)
A huge barrier in converting cellulose polymers to biofuel lies in removing other biomass polymers that subvert this chemical process. To overcome this hurdle, large-scale computational simulations are picking apart lignin, one of those inhibiting polymers, and its interactions with cellulose and other plant components. The results point toward ways to optimize biofuel production and […] The post Supercomputing Plant Polymers for Biofuels appeared first on insideHPC.
|
by staff on (#1WDXN)
Today Penguin Computing announced Scyld Cloud Workstation 3.0, a 3D-accelerated remote desktop solution which provides true multi-user remote desktop collaboration for cloud-based Linux and Windows desktops. "Unlike other remote desktop solutions, collaboration via Scyld Cloud Workstation is more like sitting in-person with other engineers because a user can hand off control of their desktop to simplify collaboration on a project," said Victor Gregorio, Vice President and General Manager, Cloud Services, Penguin Computing. "Scyld Cloud Workstation brings collaboration to life, providing a much more thorough and proficient interaction among researchers and engineers working together on a remote desktop. Ultimately, this allows customers a more efficient means to leverage cloud-based desktop solutions." The post Penguin Computing Adds Remote Desktop Collaboration to Scyld Cloud Workstation appeared first on insideHPC.
|
by Rich Brueckner on (#1WDXQ)
SC16 will continue its HPC Matters Plenary session series this year with a panel discussion on HPC and precision medicine. The event will take place at 5:30 pm on Monday, Nov. 14, just prior to the exhibits opening gala. "The success of all of these research programs hinges on harnessing the power of HPC to analyze volumes of complex genomics and other biological datasets that simply can't be processed by humans alone. The challenge for our community will be to develop the computing tools and services needed to transform how we think about disease and bring us closer to the precision medicine future." The post SC16 Plenary Session to Focus on HPC and Precision Medicine appeared first on insideHPC.
|
by MichaelS on (#1WDP5)
With the introduction of the Intel Scalable System Framework, the Intel Xeon Phi processor can speed up Finite Element Analysis significantly. Using highly tuned math libraries such as the Intel Math Kernel Library (Intel MKL), FEA applications can execute math routines in parallel on the Intel Xeon Phi processor. The post Accelerating Finite Element Analysis with Intel Xeon Phi appeared first on insideHPC.
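The computational core of a finite element analysis is assembling and solving a sparse linear system K u = f, and it is exactly this step that tuned parallel libraries like Intel MKL accelerate. As an illustrative sketch only (pure Python for clarity, using a simple tridiagonal solver in place of a library call), here is that solve step for a 1D model problem:

```python
# Illustrative sketch: the heart of FEA is solving K u = f. In production
# this is handed to a tuned parallel library (e.g. Intel MKL); here a plain
# Thomas algorithm solves the tridiagonal system from a 1D model problem.

def solve_tridiagonal(lower, diag, upper, rhs):
    """Thomas algorithm for a tridiagonal system; O(n) time."""
    n = len(diag)
    c, d = [0.0] * n, [0.0] * n
    c[0] = upper[0] / diag[0]
    d[0] = rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - lower[i] * c[i - 1]
        c[i] = upper[i] / m if i < n - 1 else 0.0
        d[i] = (rhs[i] - lower[i] * d[i - 1]) / m
    x = [0.0] * n
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - c[i] * x[i + 1]
    return x

# 1D Poisson problem -u'' = 1 on (0,1), u(0)=u(1)=0, linear elements:
# stiffness matrix is (1/h) * tridiag(-1, 2, -1), load vector is h per node.
n = 99
h = 1.0 / (n + 1)
lower = [-1.0 / h] * n
diag = [2.0 / h] * n
upper = [-1.0 / h] * n
rhs = [h] * n

u = solve_tridiagonal(lower, diag, upper, rhs)

# Exact solution is u(x) = x(1-x)/2; nodal values are exact for this element.
x50 = 50 * h
print(abs(u[49] - x50 * (1 - x50) / 2))  # ~0 (machine precision)
```

In a real FEA code the matrix is large, sparse, and multi-dimensional, and the solve is dispatched to threaded library routines, which is where the Xeon Phi's parallelism pays off.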
|
by Rich Brueckner on (#1WE3Y)
Nikos Trikoupis from the City University of New York gave this talk at the HPC User Forum in Austin. "We focus on measuring the aggregate throughput delivered by 12 Intel SSD DC P3700 for NVMe cards installed on the SGI UV 300 scale-up system in the CUNY High Performance Computing Center. We establish a performance baseline for a single SSD. The 12 SSDs are assembled into a single RAID-0 volume using Linux Software RAID and the XVM Volume Manager. The aggregate read and write throughput is measured against different configurations that include the XFS and the GPFS file systems." The post Video: Analysis of SSDs on SGI UV 300 appeared first on insideHPC.
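The "baseline for a single SSD" step in the talk boils down to throughput = bytes written / elapsed time. A toy sketch of that measurement is below; note this is not the talk's methodology, and real storage benchmarks (e.g. fio) use direct I/O, varied block sizes, and queue depths to bypass page-cache effects that buffered writes like this one are subject to:

```python
# Toy single-device throughput baseline: time a sequential write and report
# MB/s. Buffered I/O can overstate device speed; fsync forces data out so
# the number is at least tied to real storage work.
import os
import tempfile
import time

CHUNK = 4 * 1024 * 1024          # 4 MiB per write call
TOTAL = 64 * 1024 * 1024         # 64 MiB total (small, for illustration)

buf = os.urandom(CHUNK)
with tempfile.NamedTemporaryFile(delete=False) as f:
    start = time.perf_counter()
    written = 0
    while written < TOTAL:
        f.write(buf)
        written += CHUNK
    f.flush()
    os.fsync(f.fileno())         # ensure the data actually reached the device
    elapsed = time.perf_counter() - start
    path = f.name

os.unlink(path)
mb_per_s = (written / (1024 * 1024)) / elapsed
print(f"wrote {written // (1024 * 1024)} MiB in {elapsed:.2f}s -> {mb_per_s:.0f} MB/s")
```

Once the single-device number is established, a striped RAID-0 volume like the one in the talk should in principle scale aggregate throughput toward 12x the baseline, and the gap from that ideal is what the filesystem comparison exposes.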
|
by Rich Brueckner on (#1WAG4)
"The POWER8 with NVIDIA NVLink processor enables incredible speed of data transfer between CPUs and GPUs, ideal for emerging workloads like AI, machine learning and advanced analytics," said Rick Newman, Director of OpenPOWER Strategy & Market Development Europe. "The open and collaborative spirit of innovation within the OpenPOWER Foundation enables companies like E4 to leverage new technology and build cutting-edge solutions to help clients grappling with the massive amounts of data in today's technology environment." The post E4 Computer Engineering Rolls Out GPU-accelerated OpenPOWER server appeared first on insideHPC.
|
by Rich Brueckner on (#1WAEP)
Today Nvidia announced the general availability of the CUDA 8 toolkit for GPU developers. "A crucial goal for CUDA 8 is to provide support for the powerful new Pascal architecture, the first incarnation of which was launched at GTC 2016: Tesla P100," said Nvidia's Mark Harris in a blog post. "One of NVIDIA's goals is to support CUDA across the entire NVIDIA platform, so CUDA 8 supports all new Pascal GPUs, including Tesla P100, P40, and P4, as well as NVIDIA Titan X, and Pascal-based GeForce, Quadro, and DrivePX GPUs." The post Nvidia Releases CUDA 8 appeared first on insideHPC.
|
by staff on (#1WA9V)
Today Allinea Software announced availability of its new software release, version 6.1, which offers full support for programming parallel code on Nvidia's Pascal GPU architecture with CUDA 8. "The addition of Allinea tools into the mix is an exciting one, enabling teams to accurately measure GPU utilization, employ smart optimization techniques and quickly develop new CUDA 8 code that is bug- and bottleneck-free," said Mark O'Connor, VP of Product Management at Allinea. The post Allinea Adds CUDA 8 Support for GPU Developers appeared first on insideHPC.
|
by Rich Brueckner on (#1WA41)
Today at GTC Europe, Nvidia unveiled Xavier, an all-new SoC based on the company's next-gen Volta GPU, which will be the processor in future self-driving cars. According to Nvidia CEO Jen-Hsun Huang, the ARM-based Xavier will feature unprecedented performance and energy efficiency, while supporting deep-learning features important to the automotive market. A single Xavier-based AI car supercomputer will be able to replace today's fully configured DRIVE PX 2 with two Parker SoCs and two Pascal GPUs. The post Video: Nvidia Unveils ARM-Powered SoC with Volta GPU appeared first on insideHPC.
|
by staff on (#1W9YF)
"Our customers are looking for a highly integrated server adapter that solves their pressing need for network performance, efficiency and security," said Gilad Shainer, vice president of marketing, Mellanox Technologies. "The Innova adapter provides IPsec offload to deliver complete end-to-end security for traffic moving within the data center. Combined with the intelligent network offload and acceleration engines, Innova IPsec is the ideal solution for cloud, telecommunication, Web 2.0, high-performance compute and storage infrastructures." The post Mellanox Rolls Out New Innova IPsec 10/40G Ethernet Adapters appeared first on insideHPC.
|
by Rich Brueckner on (#1W9WC)
In this video from LUG 2016 in Australia, Chakravarthy Nagarajan from Intel presents: An Optimized Entry Level Lustre Solution in a Small Form Factor. "Our goal was to provide an entry level Lustre storage solution in a high density form factor, with a low cost, small footprint, all integrated with Intel Enterprise Edition for Lustre* software." The post Video: An Optimized Entry Level Lustre Solution in a Small Form Factor appeared first on insideHPC.
|
by staff on (#1W6EV)
"We are at an inflection point in the big data era," said Bob Picciano, senior vice president, IBM Analytics. "We know that users spend up to 80 percent of their time on data preparation, no matter the task, even when they are applying the most sophisticated AI. Project DataWorks helps transform this challenge by bringing together all data sources on one common platform, enabling users to get the data ready for insight and action, faster than ever before." The post IBM Unveils Project DataWorks for AI-Powered Decision-Making appeared first on insideHPC.
|
by Rich Brueckner on (#1W6D0)
Registration is now open for HP-CAST at SC16. The event takes place Nov. 11-12 in Salt Lake City. The post Registration Opens for HP-CAST at SC16 appeared first on insideHPC.
|
by staff on (#1W643)
Today Rogue Wave Software announced it is working with IBM to help make open source software (OSS) support more available. This will help provide comprehensive, enterprise-grade technical support for OSS packages. "With our ten-year history in open source, organizations can feel confident in our ability to resolve issues," said Richard Sherrard, director of product management at Rogue Wave Software. "We have tier-3 and -4 enterprise architects that offer round-the-clock support for entire ecosystems. We are long-standing experts when it comes to OSS and proud to be working with IBM." The post Rogue Wave Improves Support for Open Source Software with IBM appeared first on insideHPC.
|
by staff on (#1W62B)
Today D-Wave Systems announced details of its most advanced quantum computing system, featuring a new 2000-qubit processor. The announcement is being made at the company's inaugural users group conference in Santa Fe, New Mexico. The new processor doubles the number of qubits over the previous generation D-Wave 2X system, enabling larger problems to be solved and extending D-Wave's significant lead over all quantum computing competitors. The new system also introduces control features that allow users to tune the quantum computational process to solve problems faster and find more diverse solutions when they exist. In early tests these new features have yielded performance improvements of up to 1000 times over the D-Wave 2X system. The post D-Wave Systems Previews 2000-Qubit Quantum Computer appeared first on insideHPC.
|
by Rich Brueckner on (#1W5Y9)
"The demands of cloud-based business models require service providers to pack more efficient computational capability into their infrastructure," said Monika Biddulph, general manager, systems and software group, ARM. "Our new CoreLink system IP for SoCs, based on the ARMv8-A architecture, delivers the flexibility to seamlessly integrate heterogeneous computing and acceleration to achieve the best balance of compute density and workload optimization within fixed power and space constraints." The post ARM Releases CoreLink Interconnect appeared first on insideHPC.
|
by Rich Brueckner on (#1W5TJ)
"Starting in 2015, Oak Ridge National Laboratory partnered with the University of Tennessee to offer a minor-degree program in data center technology and management, one of the first offerings of its kind in the country. ORNL staff members developed the senior-level course in collaboration with UT College of Engineering professor Mark Dean after an ORNL strategic partner identified a need for employees who could bridge both the facilities and operational aspects of running a data center. In addition to developing the course curriculum, ORNL staff members are also serving as guest lecturers." The post Video: How ORNL is Bridging the Gap between Computing and Facilities appeared first on insideHPC.
|
by Rich Brueckner on (#1W34E)
"Deep learning developers and researchers want to train neural networks as fast as possible. Right now we are limited by computing performance," said Dr. Diamos. "The first step in improving performance is to measure it, so we created DeepBench and are opening it up to the deep learning community. We believe that tracking performance on different hardware platforms will help processor designers better optimize their hardware for deep learning applications." The post Baidu Research Announces DeepBench Benchmark for Deep Learning appeared first on insideHPC.
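DeepBench's "measure it first" idea comes down to timing the core operations behind neural-network training, chiefly dense matrix multiplies, and reporting achieved FLOP rates. The miniature sketch below shows only that bookkeeping (flops = 2·n³ for an n×n GEMM, rate = flops/seconds); DeepBench itself runs vendor libraries at realistic layer sizes, which this pure-Python loop does not pretend to do:

```python
# Miniature benchmark-style measurement: time a dense matrix multiply and
# report achieved GFLOP/s. Pure Python and a tiny size, for illustration only.
import random
import time

n = 64                                     # real benchmarks use layer-sized GEMMs
A = [[random.random() for _ in range(n)] for _ in range(n)]
B = [[random.random() for _ in range(n)] for _ in range(n)]

start = time.perf_counter()
C = [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
     for i in range(n)]
elapsed = time.perf_counter() - start

flops = 2 * n ** 3                         # one multiply + one add per inner step
print(f"{flops / elapsed / 1e9:.4f} GFLOP/s at n={n}")
```

Running the same measurement across hardware platforms, with the operation sizes a given network actually uses, is exactly the comparison DeepBench standardizes.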
|
by Rich Brueckner on (#1W2MJ)
Oak Ridge National Lab is hosting a 3-day GPU Mini-hackathon led by experts from the OLCF and Nvidia. The event takes place Nov. 1-3 in Knoxville, Tennessee. "General-purpose Graphics Processing Units (GPGPUs) potentially offer exceptionally high memory bandwidth and performance for a wide range of applications. The challenge in utilizing such accelerators has been the difficulty in programming them. This event will introduce you to GPU programming techniques." The post Register Now for GPU Mini-Hackathon at ORNL Nov. 1-3 appeared first on insideHPC.
|
by staff on (#1W2AN)
"As more organizations turn to high performance computing to process large data sets, demand is growing for scalable and secure data centre solutions. The source, availability and reliability of the power grid infrastructure is becoming a critical factor in a data centre site selection decision," said Jeff Monroe, CEO at Verne Global. "Verne Global is able to deliver EI a forward-thinking path for growth with a solution that combines unparalleled costs savings with operational efficiencies to support their data-intensive research." The post Earlham Institute Tests Green HPC from Verne Global in Iceland appeared first on insideHPC.
|
by Rich Brueckner on (#1W254)
Today DDN Japan announced that the University of Tokyo and the Joint Center for Advanced High Performance Computing (JCAHPC) have selected DDN's burst buffer solution "IME14K" for their new Reedbush supercomputer. "Many problems in science and research today are located at the intersections of HPC and Big Data, and storage and I/O are increasingly important components of any large compute infrastructure." The post University of Tokyo to Deploy IME14K Burst Buffer on Reedbush Supercomputer appeared first on insideHPC.
|
by Rich Brueckner on (#1W1S1)
In this podcast, the Radio Free HPC team discusses Henry Newman's recent editorial calling for a self-descriptive data format that will stand the test of time. Henry contends that we seem headed for massive data loss unless we act. The post Radio Free HPC Looks for the Forever Data Format appeared first on insideHPC.
|
by staff on (#1W1AW)
Today Quantum Corp. announced that two of Europe's premier research institutions are using the company's StorNext workflow storage as the foundation for managing their growing data and enabling a range of scientific initiatives. "With the StorNext platform, we have removed barriers to research," said Thomas Disper, CISO and Head of IT, Max Planck Institute for Chemistry. "It allows us to provide a lot more capacity quickly and easily. We don't need to give research teams data limits, and storage for new projects can be ready in an afternoon." The post Quantum Powers Petascale Storage at Two Major European Research Institutions appeared first on insideHPC.
|
by staff on (#1VYX8)
Over at the ANSYS Blog, Tony DeVarco writes that the company worked with SGI to break a world record for HPC scalability. "Breaking last year's 129,024-core record by more than 16,000 cores, SGI was able to run the ANSYS-provided 830 million cell gas combustor model from 1,296 to 145,152 CPU cores. This reduces the total solver wall clock time to run a single simulation from 20 minutes on 1,296 cores to a mere 13 seconds on 145,152 cores, achieving an overall scaling efficiency of 83%." The post SGI and ANSYS Achieve New World Record in HPC appeared first on insideHPC.
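The quoted numbers can be sanity-checked directly: parallel scaling efficiency is measured speedup divided by the ideal (linear) speedup from adding cores. Using the rounded 20-minute and 13-second figures gives roughly 82%, consistent with the quoted 83% (the small gap presumably comes from rounding of the published timings):

```python
# Parallel scaling efficiency = measured speedup / ideal speedup.
base_cores, base_time = 1_296, 20 * 60     # 20 minutes, in seconds
big_cores, big_time = 145_152, 13          # 13 seconds

speedup = base_time / big_time             # ~92x faster wall clock
ideal = big_cores / base_cores             # 112x more cores
efficiency = speedup / ideal

print(f"speedup {speedup:.1f}x over {ideal:.0f}x more cores "
      f"-> {efficiency:.0%} efficiency")
```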
|
by staff on (#1VYV4)
The National Computational Infrastructure in Canberra, Australia's national advanced computing facility, is the first Australian institution to deploy the latest generation of Intel Xeon Phi processors, formerly code-named Knights Landing. "NCI is leading efforts in the scientific community to tune applications for Intel Xeon Phi processors," explains Dr. Muhammad Atif, NCI's HPC Systems and Cloud Services Manager. "We have identified a large number of applications that will benefit from this hardware and software paradigm, including those applications in the domains of computational physics, computational chemistry and climate research." The post Intel Xeon Phi Boosts Supercomputing at NCI in Australia appeared first on insideHPC.
|
by Rich Brueckner on (#1VW16)
Maria Chan from NST presented this talk at Argonne Out Loud. "People eagerly anticipate environmental benefits from advances in clean energy technologies, such as advanced batteries for electric cars and thin-film solar cells. Optimizing these technologies for peak performance requires an atomic-level understanding of the designer materials used to make them. But how is that achieved? Maria Chan will explain how computer modeling is used to investigate and even predict how materials behave and change, and how researchers use this information to help improve the materials' performance. She will also discuss the open questions, challenges, and future strategies for using computation to advance energy materials." The post Video: Using HPC to build Clean Energy Technologies appeared first on insideHPC.
|
by Rich Brueckner on (#1VW18)
The European Fortissimo Project has issued its Second Call for Proposals. Fortissimo is a collaborative project that enables European SMEs to be more competitive globally through the use of simulation services running on High Performance Computing Cloud infrastructure. The post Call for Proposals: Fortissimo Project appeared first on insideHPC.
|
by staff on (#1VS0J)
Today Cadence Design Systems announced several important deliveries in its collaboration with TSMC to advance 7nm FinFET designs for mobile and high-performance computing platforms. Working together, Cadence and TSMC have developed some of the first design IP offerings for the 7nm process, offering early IP access to protocols that are optimized for and most relevant to mobile and HPC applications. The post Cadence and TSMC Advance Towards 7nm FinFET Designs appeared first on insideHPC.
|
by Rich Brueckner on (#1VRYV)
The DDN User Group meeting is returning to Salt Lake City for SC16. The meeting takes place Monday, Nov. 14 from 2pm-6pm at the Radisson Hotel near the convention center. The post DDN User Group Coming to SC16 Nov. 14 appeared first on insideHPC.
|
by Rich Brueckner on (#1VRKT)
Larry Smarr presented this talk as part of NCSA's 30th Anniversary Celebration. "For the last thirty years, NCSA has played a critical role in bringing computational science and scientific visualization to the national user community. I will embed those three decades in the 50 year period 1975 to 2025, beginning with my solving Einstein's equations for colliding black holes on the megaFLOPs CDC 6600 and ending with the exascale supercomputer. This 50 years spans a period in which we will have seen a one trillion-fold increase in supercomputer speed." The post Larry Smarr Presents: 50 Years of Supercomputing appeared first on insideHPC.
|
by staff on (#1VRJE)
This week Minimal Metrics announced an early-adopter program for PerfMiner, which uses lightweight, pervasive performance data collection technology, automates its collection, and mines the data for key performance indicators. These indicators were developed through Minimal Metrics' extensive experience tuning HPC and enterprise application performance, and are presented in an audience-specific, drill-down hierarchy that provides accountability for site productivity down to the performance of individual application threads. The post Minimal Metrics Releases PerfMiner Parallel Optimization Tool appeared first on insideHPC.
|
by Rich Brueckner on (#1VQS6)
In this video from ISC 2016, Tim Carroll describes how Cycle Computing is working with Dell Technologies to deliver more science for more users. Cycle Computing's CycleCloud software suite is the leading cloud orchestration, provisioning, and data management platform for Big Compute, Big Data, and large technical computing applications running on any public, private, or internal environment. The post Video: Cycle Computing Works with Dell to Deliver More Science for More Users appeared first on insideHPC.
|
by staff on (#1VN19)
Today Verne Global announced Volkswagen is moving more than 1 MW of high performance computing applications to the company's datacenter in Iceland. The company will take advantage of Verne Global's hybrid data center approach – with variable resiliency and flexible density – to support HPC applications in its continuous quest to develop cutting-edge cars and automotive technology. The post Volkswagen Moves HPC Workloads to Verne Global in Iceland appeared first on insideHPC.
|