by Rich Brueckner on (#11ANK)
Today Bright Computing announced that Italy-based Do IT Systems has signed up to the Bright Partner Program. “Together, Bright Computing and Do IT Systems offer a compelling proposition for Italian customers that require high quality HPC solutions and remote HPC management,” said Roberto Strano, Technical Manager at Do IT Systems. “We have been very impressed with the Bright software, and we are confident that it will enable our customers to develop and manage HPC clusters in an affordable and efficient way.” The post Do IT Systems Joins Bright Partner Program appeared first on insideHPC.
|
High-Performance Computing News Analysis | insideHPC
Link | https://insidehpc.com/ |
Feed | http://insidehpc.com/feed/ |
Updated | 2024-11-06 12:30 |
by Rich Brueckner on (#11A46)
In this video, Ruchir Puri, an IBM Fellow at the IBM Thomas J. Watson Research Center, talks about building large-scale big data systems and delivering real-time solutions such as using machine learning to predict drug reactions. “There is a need for systems that provide greater speed to insight -- for data and analytics workloads to help businesses and organizations make sense of the data, to outthink competitors as we usher in a new era of Cognitive Computing.” The post Video: Accelerating Cognitive Workloads with Machine Learning appeared first on insideHPC.
|
by Rich Brueckner on (#119HZ)
The Supercomputing Frontiers 2016 conference has issued its Call for Papers. Held in conjunction with the launch of the new Singapore National Supercomputing Center, the event takes place March 15-18. “Supercomputing Frontiers 2016 is Singapore’s annual international conference that provides a platform for thought leaders from both academia and industry to interact and discuss visionary ideas, important global trends and substantial innovations in supercomputing. You are invited to submit a 4-page extended abstract by February 8, 2016.” The post Call for Papers: Supercomputing Frontiers in Singapore appeared first on insideHPC.
|
by staff on (#119DZ)
Today the ACTnowHPC on demand cloud cluster joined the UberCloud online marketplace, where engineers, scientists and their service providers discover, try, and buy computing power and software in the cloud as a service. ACTnowHPC enables users to harness the power of high performance computing resources on a pay-as-you-go basis without having to invest in the hardware and infrastructure. “We welcome ACTnowHPC to the UberCloud marketplace, the online platform where hardware and software providers exhibit their cloud-related HPC products as a service,” said Wolfgang Gentzsch, Cofounder and President of UberCloud. “Now Advanced Clustering Technologies’ customers can buy and access HPC for their peak demand projects in an extremely user-friendly way.” The post ACTnowHPC Joins UberCloud Marketplace appeared first on insideHPC.
|
by staff on (#118TY)
Today the Texas Advanced Computing Center (TACC) announced that the Lonestar 5 supercomputer is in full production and is ready to contribute to advancing science across the state of Texas. Managed by TACC, the center's second petaflop system is primed to be a leading computing resource for the engineering and science research community. “An analysis of strong-scaling on Lonestar 5 shows gains over other comparable resources,” said Scott Waibel, a graduate student in the Department of Geological Sciences at Portland State University. “Lonestar 5 provides the perfect high performance computing resource for our efforts.” The post TACC’s Lonestar 5 Begins Full Production appeared first on insideHPC.
|
by Rich Brueckner on (#11733)
The HPC Advisory Council Stanford Conference 2016 has posted its speaker agenda. The event will take place Feb 24-25, 2016 on the Stanford University campus at the new Jen-Hsun Huang Engineering Center. “The HPC Advisory Council Stanford Conference 2016 will focus on High-Performance Computing usage models and benefits, the future of supercomputing, latest technology developments, best practices and advanced HPC topics. In addition, there will be a strong focus on new topics such as Machine Learning and Big Data. The conference is open to the public free of charge and will bring together system managers, researchers, developers, computational scientists and industry affiliates.” The post Agenda Posted for HPC Advisory Council Stanford Conference 2016 appeared first on insideHPC.
|
by Rich Brueckner on (#116E4)
In this Intel Chip Chat podcast, Dan Ferber, Open Source Server Based Storage Technologist at Intel, and Ross Turk, Director of Product Marketing for Red Hat, describe how Ceph plays a critical role in delivering the full enterprise capability of OpenStack. Ross explains how Ceph allows you to build storage using open source software and standard servers and disks, providing a lot of flexibility and enabling you to easily scale out storage. By lowering hardware costs, lowering the vendor lock-in threshold, and enabling customers to fix and enhance their own code, open source and software defined storage (SDS) solutions are enabling the future of next generation storage. The post Podcast: Ceph and the Future of Software Defined Storage appeared first on insideHPC.
|
by staff on (#1165E)
Today HPC cloud provider Rescale announced that Zack Smocha has joined the company as vice president of product marketing, bringing two decades of experience in cloud and enterprise software marketing to his new role. As a member of the Rescale executive team, Smocha will play a key role in supporting Rescale’s mission to provide the leading cloud-based high performance computing (HPC) and simulation platform, empowering the world’s engineers, scientists, developers and IT professionals to design innovative products and transform IT into unified, agile environments. The post Zack Smocha Joins Rescale as VP of Product Marketing appeared first on insideHPC.
|
by staff on (#115YT)
Today Samsung Electronics announced that it has begun mass producing the industry’s first 4-gigabyte DRAM package based on the second-generation High Bandwidth Memory (HBM2) interface, for use in high performance computing, advanced graphics and network systems, as well as enterprise servers. Samsung’s new HBM solution will offer unprecedented DRAM performance – more than seven times faster than the current DRAM performance limit, allowing faster responsiveness for high-end computing tasks including parallel computing, graphics rendering and machine learning. The post Samsung Mass Producing HBM2 – World’s Fastest DRAM appeared first on insideHPC.
|
by staff on (#115W2)
Today Allinea announced that Oak Ridge National Laboratory has deployed its code performance profiler, Allinea MAP, at scale on the Titan supercomputer. Allinea MAP enables developers of software for supercomputers of all sizes to produce faster code. Its deployment on Titan will help to use the system’s 299,008 CPU cores and 18,688 GPUs more efficiently. Software teams at Oak Ridge are also preparing for the arrival of the next generation supercomputer, the Summit pre-Exascale system – which will be capable of over 150 PetaFLOPS in 2018. The post Allinea Scalable Profiler Speeds Application Readiness for Summit Supercomputer at Oak Ridge appeared first on insideHPC.
|
by staff on (#115HW)
Although liquid cooling is considered by many to be the future for data centers, the fact remains that some do not yet need to make a full transformation to liquid cooling. Others are restricted until the next budget cycle. Whatever the reason, new technologies like Internal Loop are more affordable than liquid cooling and can replace less efficient air coolers. This enables HPC data centers to still utilize the highest performing CPUs and GPUs. The post Enhanced Air Cooling with Internal Loop appeared first on insideHPC.
by Rich Brueckner on (#113PV)
The 2016 OpenFabrics Workshop has extended the deadline for its Call for Sessions to Feb. 1, 2016. The event takes place April 4-8, 2016 in Monterey, California. “The Workshop is the premier event for collaboration between OpenFabrics Software (OFS) producers and those whose systems and applications depend on the technology. Every year, the workshop generates lively exchanges among Alliance members, developers and users who all share a vested interest in high performance networks.” The post Open Fabrics Workshop Extends Call for Sessions Deadline to Feb 1 appeared first on insideHPC.
by staff on (#112GZ)
“Scientific research is dependent on maintaining and advancing a wide variety of software. However, software development, production, and maintenance are people-intensive; software lifetimes are long compared to hardware; and the value of software is often underappreciated. Because software is not a one-time effort, it must be sustained, meaning that it must be continually updated to work in environments that are changing and to solve changing problems. Software that is not maintained will either simply stop working, or will stop being useful.” The post Why Sustainable Software Needs a Change in the Culture of Science appeared first on insideHPC.
|
by Rich Brueckner on (#112FH)
Tejas Karmarkar from Microsoft presented this talk at SC15. “Azure provides on-demand compute resources that enable you to run large parallel and batch compute jobs in the cloud. Extend your on-premises HPC cluster to the cloud when you need more capacity, or run work entirely in Azure. Scale easily and take advantage of advanced networking features such as RDMA to run true HPC applications using MPI to get the results you want, when you need them.” The post Video: Microsoft Azure for Engineering Analysis and Simulation appeared first on insideHPC.
|
by staff on (#10ZM7)
Although the cloud has become an accepted part of commercial and consumer computing, science and engineering have been less welcoming to the concept. This could be on the point of changing with the announcement this month that the ESI Group will be delivering advanced engineering simulation in the cloud, across multiple physics and engineering disciplines. The post ESI Opens Datacenter at Teratec for Engineering in the Cloud appeared first on insideHPC.
|
by Rich Brueckner on (#10ZK4)
In this video from the Intel HPC Developer Conference at SC15, Kevin O'Leary from Intel presents: Vectorization Advisor in Action for Computer-Aided Formulation. “The talk will focus on a step-by-step walkthrough of optimizations for an industry code by using the new Vectorization Advisor (as part of Intel® Advisor XE 2016). Using this tool, HPC experts at UK Daresbury Lab were able to spot new SIMD modernization and optimization opportunities in the DL_MESO application - an industry engine currently used by “computer-aided formulation” companies like Unilever.” The post Video: Vectorization Advisor in Action for Computer-Aided Formulation appeared first on insideHPC.
|
by Rich Brueckner on (#10WVN)
The 32nd International Conference on Massive Storage Systems and Technology (MSST 2016) has issued its Call for Participation & Papers. The event takes place April 30 - May 6 in Santa Clara, CA. “The Program Committee requests presentation proposals on issues in designing, building, maintaining, and migrating large-scale systems that implement databases and other kinds of large, typically persistent, web-scale stores (HSM, NoSQL, key-value stores, etc.), and archives at scales of tens of petabytes to exabytes and beyond.” The post Call for Papers: International Conference on Massive Storage Systems and Technology appeared first on insideHPC.
|
by Rich Brueckner on (#10WTX)
Lawrence Livermore National Lab is seeking an HPC Compiler & Tools Engineer in our Job of the Week. “As a member of the Development Environment Group in the Livermore Computing (LC) supercomputing center, you will work as a software developer specializing in compilers and application development tools for supporting High Performance Computing (HPC). You will work with scientific computing teams, the open source software community, and HPC vendor partners on the development of enabling technologies for the state-of-the-art platforms currently in use and under procurement.” The post Job of the Week: HPC Compiler & Tools Engineer at LLNL appeared first on insideHPC.
|
by staff on (#10T1G)
Today Intersect360 Research released its eighth 2015 Site Budget Allocation Map, a look at how HPC sites divide and spend their budgets. The post 2015 Site Budget Map Reflects HPC Spending appeared first on insideHPC.
|
by Rich Brueckner on (#10SZZ)
“Computers are an invaluable tool for most scientific fields. They are used to process measurement data and to build simulation models of, for example, the climate or the universe. Brian Vinter talks about what makes a computer a supercomputer, and why it is so hard to build and program supercomputers.” The post Video: An Overview of Supercomputing appeared first on insideHPC.
|
by Rich Brueckner on (#10SQA)
Today Baidu’s Silicon Valley AI Lab (SVAIL) released Warp-CTC open source software for the machine learning community. Warp-CTC is an implementation of the CTC algorithm for CPUs and NVIDIA GPUs. “According to SVAIL, Warp-CTC is 10-400x faster than current implementations. It makes end-to-end deep learning easier and faster so researchers can make progress more rapidly.” The post Accelerating Machine Learning with Open Source Warp-CTC appeared first on insideHPC.
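For context on what Warp-CTC accelerates: the CTC (Connectionist Temporal Classification) loss sums the probability of a label sequence over all alignments of network outputs to that sequence, computed with a forward dynamic program. The sketch below is a minimal, unoptimized NumPy illustration of that forward pass; the function name and layout are illustrative, not Warp-CTC's API.

```python
import numpy as np

def ctc_forward(probs, label, blank=0):
    """Probability of `label` given per-timestep class probabilities
    `probs` (shape T x V), summed over all valid CTC alignments."""
    # Extend the label with blanks: [b, l1, b, l2, ..., b]
    ext = [blank]
    for c in label:
        ext += [c, blank]
    S, T = len(ext), probs.shape[0]

    alpha = np.zeros((T, S))
    alpha[0, 0] = probs[0, ext[0]]        # start on the leading blank
    if S > 1:
        alpha[0, 1] = probs[0, ext[1]]    # or on the first symbol
    for t in range(1, T):
        for s in range(S):
            a = alpha[t - 1, s]                       # stay on the same state
            if s > 0:
                a += alpha[t - 1, s - 1]              # advance one state
            # skip over a blank, allowed only between distinct symbols
            if s > 1 and ext[s] != blank and ext[s] != ext[s - 2]:
                a += alpha[t - 1, s - 2]
            alpha[t, s] = a * probs[t, ext[s]]
    # Valid alignments end on the last symbol or the trailing blank
    return alpha[T - 1, S - 1] + (alpha[T - 1, S - 2] if S > 1 else 0.0)

# Two timesteps, vocabulary {blank, 'a'}, uniform probabilities: the
# alignments "aa", "a-", "-a" all collapse to "a", each with p = 0.25.
y = np.full((2, 2), 0.5)
print(ctc_forward(y, [1]))  # 0.75
```

Warp-CTC's speedups come from doing this recurrence (and its gradient) in parallel across timesteps, label states, and batch on CPUs or GPUs, rather than in a Python-level double loop as above.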
|
by Rich Brueckner on (#10SNG)
The fastest supercomputers are built with the fastest microprocessor chips, which in turn are built upon the fastest switching technology. But even the best semiconductors are reaching their limits as more is demanded of them. In the closing months of the year came news of several developments that could break through silicon’s performance barrier and herald an age of smaller, faster, lower-power chips. It is possible that they could be commercially viable in the next few years. The post In Search Of: A Quantum Leap in Processors appeared first on insideHPC.
|
by Rich Brueckner on (#10P9R)
In this video, Nick Nystrom from PSC describes the new Bridges Supercomputer. Bridges sports a unique architecture featuring Hewlett Packard Enterprise (HPE) large-memory servers including HPE Integrity Superdome X, HPE ProLiant DL580, and HPE Apollo 2000. Bridges is interconnected by Intel Omni-Path Architecture fabric, deployed in a custom topology for Bridges’ anticipated workloads. The post Video: Bridges Supercomputer to be a Flexible Resource for Data Analytics appeared first on insideHPC.
|
by staff on (#10P5Z)
Géant, Europe’s collaboration on e-infrastructure and services for research and education, and operator of the network that interconnects Europe’s National Research and Education Networks (NRENs), has extended its 100 Gigabit Ethernet (GbE) network into data centers. The post Géant Expands Europe’s National Research and Education Network appeared first on insideHPC.
|
by Rich Brueckner on (#10P47)
Researchers are using XSEDE compute resources to study how lasers can be used to make useful materials. In this podcast, Dr. Zhigilei discusses the practical applications of zapping surfaces with short laser pulses. Laser ablation, which refers to the ejection of materials from the irradiated target, generates chemical-free nanoparticles that can be used in medical applications, for example. The post Podcast: Simulating how Lasers can Transform Materials appeared first on insideHPC.
|
by staff on (#10P00)
Today Seagate announced that the French Alternative Energies and Atomic Energy Commission (CEA) has selected the Seagate ClusterStor L300 for its GS1K HPC storage needs. GS1K is the next-generation supercomputing data management infrastructure for CEA’s Military Applications Division. The post Seagate to Power CEA Supercomputing Data Management Infrastructure appeared first on insideHPC.
|
by Rich Brueckner on (#10JHK)
Today Allinea reports that developers of Roxar Software Solutions at Emerson Process Management used Allinea Forge to increase the performance of their Tempest MORE next-generation reservoir simulator by 30 percent. The post Allinea Tools Help Deliver 30% Performance Boost in Reservoir Simulation appeared first on insideHPC.
|
by staff on (#10JBC)
“The path to Exascale computing is clearly paved with Co-Design architecture. By using a Co-Design approach, the network infrastructure becomes more intelligent, which reduces the overhead on the CPU and streamlines the process of passing data throughout the network. A smart network is the only way that HPC data centers can deal with the massive demands to scale, to deliver constant performance improvements, and to handle exponential data growth.” The post InfiniBand Enables Intelligent Networks appeared first on insideHPC.
|
by MichaelS on (#10M5Z)
“In GPAW, the high level nature of Python allows developers to design the algorithms, while C can be implemented for numerically intensive portions of the application through the use of highly optimized math kernels. In this application, the Python portions of the code are serial, which makes offloading to the Intel Xeon Phi coprocessor not feasible. However, an interface has been developed, pyMIC, which allows the application to launch kernels and control data transfers to the coprocessor.” The post Python for HPC and the Intel Xeon Phi appeared first on insideHPC.
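The division of labor described here - serial, high-level Python steering compiled numeric kernels - can be sketched without pyMIC itself. In the illustration below, NumPy's C-backed array operations stand in for the optimized kernel layer (in GPAW that layer is hand-written C, optionally offloaded to the coprocessor via pyMIC); the function names and the Jacobi example are hypothetical, chosen only to show the pattern.

```python
import numpy as np

def jacobi_step(u):
    """Numeric-intensive kernel: one Jacobi relaxation sweep over the
    interior of a 2D grid. In GPAW-style codes this layer would be
    compiled C (or offloaded); here NumPy plays that role."""
    out = u.copy()
    out[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                              + u[1:-1, :-2] + u[1:-1, 2:])
    return out

def solve(u, tol=1e-6, max_iters=10_000):
    """High-level driver: plain serial Python handles control flow and
    convergence logic, while each sweep runs in the fast kernel."""
    for i in range(max_iters):
        nxt = jacobi_step(u)
        if np.max(np.abs(nxt - u)) < tol:
            return nxt, i
        u = nxt
    return u, max_iters

# Laplace problem: fixed boundary of 1.0 on the top edge, 0 elsewhere.
grid = np.zeros((16, 16))
grid[0, :] = 1.0
result, iters = solve(grid)
```

The design point is that the serial Python layer never touches individual array elements, so its overhead is negligible; all the arithmetic lives in the kernel, which is exactly the part pyMIC lets an application move onto the Xeon Phi.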
|
by Rich Brueckner on (#10J78)
A new paper from ORNL's Sparsh Mittal presents a survey of approximate computing techniques. Recently published in ACM Computing Surveys 2016, A Survey Of Techniques for Approximate Computing reviews nearly 85 papers on this increasingly hot topic. The post New Paper Surveys Approximate Computing Techniques appeared first on insideHPC.
|
by staff on (#10J3X)
Today DDN announced that it fortified its position as the HPC storage market leader during 2015. According to a recently published Intersect360 Research HPC User Site Census report, DDN has the largest share of installed systems at surveyed HPC sites. The post Survey Shows DDN as HPC Storage Market Leader appeared first on insideHPC.
|
by Rich Brueckner on (#10J0S)
Today the ASC Student Supercomputer Challenge (ASC16) announced details from their Preliminary Contest on January 6. College students from around the world were asked to design a high performance computer system that optimizes HPCG and MASNUM_WAM applications under 3000W as well as to conduct a DNN performance optimization on a standalone hybrid CPU+MIC platform. All system designs along with the result and the code of the optimization application are to be submitted by March 2. The post Deep Learning, Ocean Modeling, and HPCG Come to ASC16 Student Supercomputer Challenge appeared first on insideHPC.
|
by Rich Brueckner on (#10HCC)
The 2016 ACM International Conference on Computing Frontiers has issued its Call for Papers. The event takes place May 16-18 in Como, Italy. “We seek contributions that push the envelope in a wide range of computing topics, from more traditional research in architecture and systems to new technologies and devices. We seek contributions on novel computing paradigms, computational models, algorithms, application paradigms, development environments, compilers, operating environments, computer architecture, hardware substrates, memory technologies, and smarter life applications.” The post Call for Papers: ACM International Conference on Computing Frontiers appeared first on insideHPC.
|
by Rich Brueckner on (#10ENT)
Today Fujitsu Limited announced it has developed the world's largest magnetic-reversal simulator. Developed in joint research with the National Institute for Materials Science (NIMS), the simulator runs on the famous K computer using a mesh covering more than 300 million micro-regions. Based on the large-scale magnetic-reversal simulation technology first developed in 2013, this new development offers a faster calculation algorithm and more efficient massive parallel processing. The post K Computer Runs World’s Largest-Scale Magnetic-Reversal Simulator appeared first on insideHPC.
|
by Rich Brueckner on (#10EJB)
Today the Brookhaven National Laboratory announced that it has expanded its Computational Science Initiative (CSI). The programs within this initiative leverage computational science, computer science, and mathematics expertise and investments across multiple research areas at the Laboratory, including the flagship facilities that attract thousands of scientific users each year, further establishing Brookhaven as a leader in tackling the “big data” challenges at experimental facilities and expanding the frontiers of scientific discovery. The post Brookhaven Lab Expands Computational Science Initiative appeared first on insideHPC.
|
by Rich Brueckner on (#10EDK)
Today NOAA reported a nearly four-fold increase in computing capacity to improve U.S. forecasting in 2016. NOAA’s Weather and Climate Operational Supercomputer System is now running at record speed, with the capacity to process and analyze earth observations at quadrillions of calculations per second to support weather, water and climate forecast models. This investment to advance the field of meteorology and improve global forecasts secures the U.S. reputation as a world leader in atmospheric and water prediction sciences and services. The post NOAA Upgrades Supercomputers to 5.78 Petaflops appeared first on insideHPC.
|
by Rich Brueckner on (#10EBS)
Today Zadara Storage announced that PCPC Direct has added the Zadara Storage Virtual Private Storage Array (VPSA) solution to their product portfolio to provide enterprise-grade storage-as-a-service to their HPC customers in a wide variety of markets. “PCPC Direct has an 18-year history of providing IT operations and managed services to some of the most demanding customers in the HPC market. PCPC Direct provides the underlying infrastructure for companies developing scientific calculations in the aerospace industry, cutting-edge research in higher education and medical research, seismic research for multi-national energy companies and Next-Generation Sequencing (NGS) for genomic research projects.” The post Zadara Storage-as-a-Service comes to PCPC Direct appeared first on insideHPC.
|
by MichaelS on (#10E9W)
Parallel file systems have become the norm for HPC environments. While typically used in high-end simulations, these parallel file systems can greatly affect performance, and thus the customer experience, when using analytics from leading organizations such as SAS. This whitepaper is an excellent summary of how parallel file systems can enhance the workflow and insight that SAS Analytics provides. The post Faster SAS Analytics Using DDN Storage Solutions appeared first on insideHPC.
|
by Rich Brueckner on (#10DRM)
Today FlyElephant announced new tools, a series of webinars, and the formation of a community around the platform. FlyElephant is a platform that provides scientists with computing infrastructure for calculations, automates routine tasks, and allows them to focus on the core issues of their research. The post FlyElephant Announces New Tools for Scientific Computing Data Management appeared first on insideHPC.
|
by staff on (#10BJ0)
Today the Texas Advanced Computing Center (TACC) announced it has joined the iRODS Partner Program. The post TACC Joins iRODS Partners Program appeared first on insideHPC.
|
by Rich Brueckner on (#10B3G)
Sean Hefty from Intel presented this talk at the Intel HPC Developer Conference at SC15. “OpenFabrics Interfaces (OFI) is a framework focused on exporting fabric communication services to applications. OFI is best described as a collection of libraries and applications used to export fabric services. The key components of OFI are: application interfaces, provider libraries, kernel services, daemons, and test applications. Libfabric is a core component of OFI. It is the library that defines and exports the user-space API of OFI, and is typically the only software that applications deal with directly. It works in conjunction with provider libraries, which are often integrated directly into libfabric.” The post Video: A Brief Introduction to OpenFabrics appeared first on insideHPC.
|
by Rich Brueckner on (#10AZM)
Today DDN announced that the National Center for Atmospheric Research (NCAR) has selected DDN’s new SFA14K high-performance hyper-converged storage platform to drive the performance and deliver the capacity needed for scientific breakthroughs in climate, weather and atmospheric-related science to power its Cheyenne supercomputer. “Having a centralized, large-scale storage resource delivers a real benefit to our scientists,” said Anke Kamrath, director of the operations and services division at NCAR’s computing lab. “With the new system, we have a balance between capacity and performance so that researchers will be able to start looking at the model output immediately without having to move data around. Now, they’ll be able to get right down to the work of analyzing the results and figuring out what the models reveal.” The post DDN Storage Enables NCAR to Advance Weather and Climate Science appeared first on insideHPC.
|
by Rich Brueckner on (#10AXW)
The HiPEAC16 High Performance and Embedded Architecture and Compilation conference returns to Prague next week. With three keynote talks, 33 workshops, nine tutorials, and 37 papers, the three-day conference takes place January 18-20. The post HiPEAC16 Conference Returns to Prague January 18-20 appeared first on insideHPC.
|
by Rich Brueckner on (#10AHK)
Today the National Center for Atmospheric Research announced that it has selected SGI to build one of the world’s most advanced compute systems used to develop models for predicting the impact of climate change and severe weather events on both a global and local scale. As part of a new procurement coming online in 2017, an SGI ICE XA system named “Cheyenne” will perform some of the world’s most data intensive calculations for weather and climate modeling to improve the resolution and precision by orders of magnitude. As a result, NCAR’s scientists will provide more actionable projections about the impact of climate change for specific regions and assist agencies throughout the world in developing more accurate weather predictions on a local and global scale. The post NCAR Selects SGI Supercomputer for Advanced Modeling and Research appeared first on insideHPC.
|
by staff on (#107ZW)
Today Comsol announced the availability of COMSOL Multiphysics software on the Rescale Cloud simulation platform. “For customers seeking HPC resources for bigger analyses, this important initiative with Rescale allows our users to take full advantage of both the COMSOL Multiphysics software and Rescale's secure and flexible simulation environments,” said Phil Kinnane, COMSOL's VP of Business Development. The post COMSOL Multiphysics Comes to Rescale Cloud Simulation Platform appeared first on insideHPC.
|
by Rich Brueckner on (#107YQ)
In this video from the Intel HPC Developer Conference at SC15, James Reinders hosts an Intel Black Belt discussion on Code Modernization. “Modern high performance computers are built with a combination of resources including: multi-core processors, many core processors, large caches, high speed memory, high bandwidth inter-processor communications fabric, and high speed I/O capabilities. High performance software needs to be designed to take full advantage of these resources. Whether re-architecting and/or tuning existing applications for maximum performance or architecting new applications for existing or future machines, it is critical to be aware of the interplay between programming models and the efficient use of these resources. Consider this a starting point for information regarding Code Modernization. When it comes to performance, your code matters!” The post Video: Intel Black Belt Discussion on HPC Code Modernization appeared first on insideHPC.
|
by Rich Brueckner on (#105CP)
“The virtually infinite scale of cloud compute resources is now within easy reach from either existing network-attached or object-based storage. No longer is the location of your storage a roadblock to reaping the ease, timeliness, and cost offered by cloud compute services. Avere’s Enterprise Cloud Bursting solution utilizes the Virtual FXT Edge filer (vFXT) which puts high-performance, scalable NAS where you need it to enable massive compute on-demand for enterprise apps with simple installation and zero hardware maintenance. Avere makes your NAS data accessible to cloud compute without experiencing latency or requiring that your data be moved to the cloud.” The post Video: Overcoming Storage Roadblocks in HPC Clouds appeared first on insideHPC.
|
by Rich Brueckner on (#104NF)
Dell in Austin is seeking an HPC Benchmarking Principal Engineer in our Job of the Week. The post Job of the Week: HPC Benchmarking Principal Engineer at Dell in Austin appeared first on insideHPC.
|
by staff on (#102S5)
The National Science Foundation’s premier data management platform for the life sciences has rebranded, shedding the project’s original label of iPlant Collaborative and donning the new name CyVerse. The rebrand emphasizes the project’s capacity to provide data management and computation services beyond plant sciences, for collaborations across scientific disciplines. The post NSF’s iPlant Rebrands to CyVerse, Provides Data Management and Computation Across Scientific Disciplines appeared first on insideHPC.
|
by Rich Brueckner on (#102D5)
In this video from the Dell booth at SC15, Addison Snell from Intersect360 Research discusses why HPC is now important to a broader group of use cases, digging deep into overviews of HPC for research, life sciences and manufacturing. Participants learned more about why HPC, Big Data, and Cloud are converging. The post Video: Intersect360 Research Describes HPC Market Trends at SC15 appeared first on insideHPC.
|