by staff on (#11PZ5)
Over at the UberCloud, Wolfgang Gentzsch writes that, despite the ever-increasing complexity of CAE tools, hardware, and system components, engineers have never been this close to ubiquitous CAE as a common tool for every engineer. The post How Containers will Enable Ubiquitous CAE as a Common Tool for Every Engineer appeared first on insideHPC.
|
High-Performance Computing News Analysis | insideHPC
Link | https://insidehpc.com/ |
Feed | http://insidehpc.com/feed/ |
Updated | 2024-11-26 06:30 |
by Rich Brueckner on (#11PXZ)
In this video from the HPC in the Cloud Educational Series, Marco Novaes, Solutions Engineer with the Google Cloud Platform team, explains how the Broad Institute was able to use Google Pre-Emptible VMs to leverage over 50,000 cores to advance cancer research. "Cancer researchers saw value in a highly-complex genome analysis, but even though they already had powerful processing systems in-house, running the analysis would take months or more. We thought this would be a perfect opportunity to utilize Google Compute Engine's Preemptible VMs to further their cancer research, which was a natural part of our mission. And now that Preemptible VMs are generally available, we're excited to tell you about this work." The post Video: Using Google Compute Engine Pre-Emptible VMs for Cancer Research appeared first on insideHPC.
|
by staff on (#11ME9)
The Health Cyberinfrastructure Division of the San Diego Supercomputer Center (SDSC) is participating in a multi-million dollar project with City of Hope Cancer Center to create a research cyberinfrastructure that includes a secure, cloud-based data management platform. The post SDSC Health Division to Create Cancer Research Infrastructure appeared first on insideHPC.
|
by Rich Brueckner on (#11MC4)
Engility in California is seeking an HPC Program Director in our Job of the Week. "Engility is recruiting for a Sr. HPC Program Director to lead a large Scientific Program within the government space. This individual will manage a large team, including technical, functional, and administrative staff. The program will focus on updating and implementing technologies in the customer space with the goal of seamless integration with minimal downtime. This is an HPC-focused program that will be responsible for the full lifecycle of HPC services, from technology identification and road mapping, HPC architecture design, development, acquisition, integration, and testing through 24/7 operations, including user support. The PM will work proactively with the customer and stakeholders to develop work statements, prioritize personnel and resource deployment, and create and manage complex multi-year budgets and schedules, including risk and opportunity management." The post Job of the Week: HPC Program Director at Engility appeared first on insideHPC.
|
by Rich Brueckner on (#11HMF)
Flow Science has just released FLOW-3D/MP v6.1, the high-performance computing version of its flagship CFD software, FLOW-3D. Enhancements include active simulation control, batch post-processing, and report generation. "Our 5-6 day simulations became 15-18 hour simulations using FLOW-3D/MP running on our cluster with InfiniBand interconnect," said Dr. Justin Crapps of Los Alamos National Laboratory. "Decreased simulation time allows us to investigate more design options and additional physics/phenomenological complexity." The post FLOW-3D Release Scales CFD up to 512 Cores appeared first on insideHPC.
|
by Rich Brueckner on (#11HHP)
"Intel's next-generation Xeon Phi processor family x200 product (code-named Knights Landing) brings in a new memory technology, a high-bandwidth on-package memory called Multi-Channel DRAM (MCDRAM), in addition to the traditional DDR4. MCDRAM is a high-bandwidth (~4x more than DDR4), low-capacity (up to 16GB) memory packaged with the Knights Landing silicon. MCDRAM can be configured as a third-level cache (memory-side cache), as a distinct NUMA node (allocatable memory), or somewhere in between. With the different memory modes by which the system can be booted, it becomes very challenging from a software perspective to understand the best mode suitable for an application." The post Video: MCDRAM (High Bandwidth Memory) on Knights Landing appeared first on insideHPC.
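The mode tradeoff described in the talk comes down to capacity versus transparency. A toy helper (our own illustration, not from the talk) encodes the usual rule of thumb: compute-bound codes gain little from MCDRAM, a bandwidth-bound working set that fits in 16GB favors flat mode, and anything larger favors cache mode.

```python
MCDRAM_GB = 16  # on-package MCDRAM capacity quoted above

def suggest_mode(working_set_gb, bandwidth_bound):
    """Rule-of-thumb memory-mode choice for a single dominant application."""
    if not bandwidth_bound:
        # Compute-bound codes gain little from MCDRAM's extra bandwidth.
        return "flat mode, allocate from DDR4"
    if working_set_gb <= MCDRAM_GB:
        # Hot data fits: place it explicitly in the MCDRAM NUMA node.
        return "flat mode, allocate hot data from MCDRAM"
    # Working set exceeds MCDRAM: let it act as a memory-side cache.
    return "cache mode"

print(suggest_mode(8, True))   # flat mode, allocate hot data from MCDRAM
print(suggest_mode(64, True))  # cache mode
```

In a real flat-mode setup, the "allocate from MCDRAM" step would go through the memkind library or `numactl` rather than a heuristic like this.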
|
by staff on (#11H09)
Today AliCloud signed a strategic partnership with Nvidia to provide the first GPU-based cloud HPC platform in China. The partnership also plans to provide emerging companies support in areas of HPC and deep learning with comprehensive GPU (Graphics Processing Unit) computing. "Innovative companies in deep learning are one of our most important user communities," said Zhang Wensong, chief scientist of AliCloud. "Together with Nvidia, AliCloud will use its strength in public cloud computing and experiences accumulated in HPC to offer emerging companies in deep learning greater support in the future." The post AliCloud & Nvidia to Expand HPC and Deep Learning Market appeared first on insideHPC.
|
by staff on (#11GYB)
Bright Cluster Manager Version 7.2 is out today, a new release that "extends insight, integration, and ease-of-use for managing clustered and cloud-based IT infrastructures." The new release incorporates a wide range of new features and significantly enhanced monitoring capabilities. "Bright Computing has always prided itself on upgrading its product offerings to respond to new technological trends and user feedback," said Martijn de Vries, Chief Technology Officer of Bright Computing. "The enhancements we have made in Version 7.2 address recent technology trends, such as the rapid adoption of containers to drive IT efficiency, and support our customers' ongoing need to stay on top of their dynamic, complex, and converging IT infrastructures." The post Bright Cluster Manager 7.2 adds Support for Docker and Intel Omni-Path appeared first on insideHPC.
|
by Rich Brueckner on (#11G9Y)
Today Adaptive Computing announced it has set a new record in High Throughput Computing (HTC) in collaboration with Supermicro, a leader in high-performance green computing solutions. Supermicro SuperServers, custom optimized for Nitro, the new high throughput resource manager from Adaptive Computing, were able to launch up to 530 tasks per second per core on Supermicro's low-latency UP SuperServer and over 17,600 tasks per second on its 4-Way SuperServer. This record-breaking throughput can accelerate financial risk analysis, EDA regression tests, life sciences research, and other data analysis-driven projects. It can expedite the process of gaining critical insights, thereby delivering products and services to market faster. The post Adaptive Computing Achieves Record High Throughput with Supermicro Systems appeared first on insideHPC.
|
by john kirkley on (#11DNE)
The consensus of the panel was that making full use of Intel SSF requires system thinking at the highest level. This entails deep collaboration with the company's application end-user customers as well as with its OEM partners, who have to design, build and support these systems at the customer site. Mark Seager commented: "For the high-end we're going after density and (solving) the power problem to create very dense solutions that, in many cases, are water-cooled going forward. We are also asking how can we do a less dense design where cost is more of a driver." In the latter case, lower end solutions can relinquish some scalability features while still retaining application efficiency. The post The Death and Life of Traditional HPC appeared first on insideHPC.
|
by staff on (#11DCN)
Compute Canada is partnering with the Social Sciences and Humanities Research Council (SSHRC) to launch the first ever Human Dimensions Open Data Challenge. This challenge, led by social sciences and humanities researchers, will see research teams compete against one another using open-data sets to develop systems, processes, or fully-functional technology applications that address the human dimensions of key challenges in the natural resources and energy sectors. The Ontario Centres of Excellence and ThinkData Works have also partnered on this project to provide additional resources and support. The post Compute Canada Sponsors Human Dimensions Open Data Challenge appeared first on insideHPC.
|
by Rich Brueckner on (#11D93)
Application deadlines are fast approaching for the Blue Waters Graduate Program and the International Summer School on HPC Challenges in Computational Sciences. The post Apply now for HPC Summer School & Blue Waters Graduate Program appeared first on insideHPC.
|
by MichaelS on (#11D5T)
Matrix multiplies can be decomposed into tiles and executed very fast on the latest generations of coprocessors. Intel has developed the hStreams library that supports task concurrency on heterogeneous platforms. The concurrency may be across nodes (Xeon, KNC, KNL-SB, KNL-LB); within a node for small matrix operations; and in the overlapping of computation and communication, particularly for tiled solutions. It relieves the user of complexity in dealing with thread affinitization, offloading, memory types, and memory affinitization. The post Heterogeneous Streams with Intel Xeon Phi appeared first on insideHPC.
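The tile decomposition mentioned above can be sketched in plain Python: the output matrix is split into tiles, and each tile update becomes an independent task submitted to a pool. This is an illustration of the decomposition and task-concurrency idea only, not the hStreams API.

```python
# Sketch: decompose C = A @ B into tiles and run tile updates as concurrent tasks.
from concurrent.futures import ThreadPoolExecutor

def matmul_tiled(A, B, tile=2):
    n, k, m = len(A), len(B), len(B[0])
    C = [[0.0] * m for _ in range(n)]

    def tile_task(i0, j0):
        # Each task owns one (i0, j0) output tile of C and accumulates
        # over all inner-dimension tiles, so updates never race.
        for p0 in range(0, k, tile):
            for i in range(i0, min(i0 + tile, n)):
                for p in range(p0, min(p0 + tile, k)):
                    a = A[i][p]
                    for j in range(j0, min(j0 + tile, m)):
                        C[i][j] += a * B[p][j]

    with ThreadPoolExecutor() as ex:
        futures = [ex.submit(tile_task, i0, j0)
                   for i0 in range((0), n, tile)
                   for j0 in range(0, m, tile)]
        for f in futures:
            f.result()  # propagate any exception from a task
    return C
```

In a real heterogeneous setup the tiles would additionally be scheduled across host and coprocessor streams, overlapping the data transfer for one tile with the computation of another.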
|
by Rich Brueckner on (#11ANK)
Today Bright Computing announced that Italy-based Do IT Systems has signed up to the Bright Partner Program. "Together, Bright Computing and Do IT Systems offer a compelling proposition for Italian customers that require high quality HPC solutions and remote HPC management," said Roberto Strano, Technical Manager at Do IT Systems. "We have been very impressed with the Bright software, and we are confident that it will enable our customers to develop and manage HPC clusters in an affordable and efficient way." The post Do IT Systems Joins Bright Partner Program appeared first on insideHPC.
|
by Rich Brueckner on (#11A46)
In this video, Ruchir Puri, an IBM Fellow at the IBM Thomas J. Watson Research Center, talks about building large-scale big data systems and delivering real-time solutions such as using machine learning to predict drug reactions. "There is a need for systems that provide greater speed to insight -- for data and analytics workloads to help businesses and organizations make sense of the data, to outthink competitors as we usher in a new era of Cognitive Computing." The post Video: Accelerating Cognitive Workloads with Machine Learning appeared first on insideHPC.
|
by Rich Brueckner on (#119HZ)
The Supercomputing Frontiers 2016 conference has issued its Call for Papers. Held in conjunction with the launch of the new Singapore National Supercomputing Center, the event takes place March 15-18. "Supercomputing Frontiers 2016 is Singapore's annual international conference that provides a platform for thought leaders from both academia and industry to interact and discuss visionary ideas, important global trends and substantial innovations in supercomputing. You are invited to submit a 4-page extended abstract by February 8, 2016." The post Call for Papers: Supercomputing Frontiers in Singapore appeared first on insideHPC.
|
by staff on (#119DZ)
Today the ACTnowHPC on demand cloud cluster joined the UberCloud online marketplace, where engineers, scientists and their service providers discover, try, and buy computing power and software in the cloud as a service. ACTnowHPC enables users to harness the power of high performance computing resources on a pay-as-you-go basis without having to invest in the hardware and infrastructure. "We welcome ACTnowHPC to the UberCloud marketplace, the online platform where hardware and software providers exhibit their cloud-related HPC products as a service," said Wolfgang Gentzsch, Cofounder and President of UberCloud. "Now Advanced Clustering Technologies' customers can buy and access HPC for their peak demand projects in an extremely user-friendly way." The post ACTnowHPC Joins UberCloud Marketplace appeared first on insideHPC.
|
by staff on (#118TY)
Today the Texas Advanced Computing Center (TACC) announced that the Lonestar 5 supercomputer is in full production and is ready to contribute to advancing science across the state of Texas. Managed by TACC, the center's second petaflop system is primed to be a leading computing resource for the engineering and science research community. "An analysis of strong-scaling on Lonestar 5 shows gains over other comparable resources," said Scott Waibel, a graduate student in the Department of Geological Sciences at Portland State University. "Lonestar 5 provides the perfect high performance computing resource for our efforts." The post TACC's Lonestar 5 Begins Full Production appeared first on insideHPC.
|
by Rich Brueckner on (#11733)
The HPC Advisory Council Stanford Conference 2016 has posted its speaker agenda. The event will take place Feb 24-25, 2016 on the Stanford University campus at the new Jen-Hsun Huang Engineering Center. "The HPC Advisory Council Stanford Conference 2016 will focus on High-Performance Computing usage models and benefits, the future of supercomputing, latest technology developments, best practices and advanced HPC topics. In addition, there will be a strong focus on new topics such as Machine Learning and Big Data. The conference is open to the public free of charge and will bring together system managers, researchers, developers, computational scientists and industry affiliates." The post Agenda Posted for HPC Advisory Council Stanford Conference 2016 appeared first on insideHPC.
|
by Rich Brueckner on (#116E4)
In this Intel Chip Chat podcast, Dan Ferber, Open Source Server Based Storage Technologist at Intel, and Ross Turk, Director of Product Marketing for Red Hat, describe how Ceph plays a critical role in delivering the full enterprise capability of OpenStack. Ross explains how Ceph allows you to build storage using open source software and standard servers and disks, providing a lot of flexibility and enabling you to scale out storage easily. By lowering hardware costs, lowering the vendor lock-in threshold, and enabling customers to fix and enhance their own code, open source and software defined storage (SDS) solutions are enabling the future of next generation storage. The post Podcast: Ceph and the Future of Software Defined Storage appeared first on insideHPC.
|
by staff on (#1165E)
Today HPC cloud provider Rescale announced that Zack Smocha has joined the company as vice president of product marketing, bringing two decades of experience in cloud and enterprise software marketing to his new role. As a member of the Rescale executive team, Smocha will play a key role in supporting Rescale's mission to provide the leading cloud-based high performance computing (HPC) and simulation platform, empowering the world's engineers, scientists, developers, and IT professionals to design innovative products and transform IT into unified, agile environments. The post Zack Smocha Joins Rescale as VP of Product Marketing appeared first on insideHPC.
|
by staff on (#115YT)
Today Samsung Electronics announced that it has begun mass producing the industry's first 4-gigabyte DRAM package based on the second-generation High Bandwidth Memory (HBM2) interface, for use in high performance computing, advanced graphics and network systems, as well as enterprise servers. Samsung's new HBM solution will offer unprecedented DRAM performance, more than seven times faster than the current DRAM performance limit, allowing faster responsiveness for high-end computing tasks including parallel computing, graphics rendering and machine learning. The post Samsung Mass Producing HBM2, the World's Fastest DRAM, appeared first on insideHPC.
|
by staff on (#115W2)
Today Allinea announced that Oak Ridge National Laboratory has deployed its code performance profiler Allinea MAP at scale on the Titan supercomputer. Allinea MAP enables developers of software for supercomputers of all sizes to produce faster code. Its deployment on Titan will help to use the system's 299,008 CPU cores and 18,688 GPUs more efficiently. Software teams at Oak Ridge are also preparing for the arrival of the next generation supercomputer, the Summit pre-Exascale system, which will be capable of over 150 PetaFLOPS in 2018. The post Allinea Scalable Profiler Speeds Application Readiness for Summit Supercomputer at Oak Ridge appeared first on insideHPC.
|
by staff on (#115HW)
Although liquid cooling is considered by many to be the future for data centers, the fact remains that there are some who do not yet need to make a full transformation to liquid cooling. Others are restricted until the next budget cycle. Whatever the reason, new technologies like Internal Loop are more affordable than liquid cooling and can replace less efficient air coolers. This enables HPC data centers to still utilize the highest performing CPUs and GPUs. The post Enhanced Air Cooling with Internal Loop appeared first on insideHPC.
|
by Rich Brueckner on (#113PV)
The 2016 OpenFabrics Workshop has extended the deadline for its Call for Sessions to Feb. 1, 2016. The event takes place April 4-8, 2016 in Monterey, California. "The Workshop is the premier event for collaboration between OpenFabrics Software (OFS) producers and those whose systems and applications depend on the technology. Every year, the workshop generates lively exchanges among Alliance members, developers and users who all share a vested interest in high performance networks." The post Open Fabrics Workshop Extends Call for Sessions Deadline to Feb 1 appeared first on insideHPC.
|
by staff on (#112GZ)
"Scientific research is dependent on maintaining and advancing a wide variety of software. However, software development, production, and maintenance are people-intensive; software lifetimes are long compared to hardware; and the value of software is often underappreciated. Because software is not a one-time effort, it must be sustained, meaning that it must be continually updated to work in environments that are changing and to solve changing problems. Software that is not maintained will either simply stop working, or will stop being useful." The post Why Sustainable Software Needs a Change in the Culture of Science appeared first on insideHPC.
|
by Rich Brueckner on (#112FH)
Tejas Karmarkar from Microsoft presented this talk at SC15. "Azure provides on-demand compute resources that enable you to run large parallel and batch compute jobs in the cloud. Extend your on-premises HPC cluster to the cloud when you need more capacity, or run work entirely in Azure. Scale easily and take advantage of advanced networking features such as RDMA to run true HPC applications using MPI to get the results you want, when you need them." The post Video: Microsoft Azure for Engineering Analysis and Simulation appeared first on insideHPC.
|
by staff on (#10ZM7)
Although the cloud has become an accepted part of commercial and consumer computing, science and engineering have been less welcoming to the concept. This could be on the point of changing, however, with the announcement this month that the ESI Group will be delivering advanced engineering simulation in the cloud, across multiple physics and engineering disciplines. The post ESI Opens Datacenter at Teratec for Engineering in the Cloud appeared first on insideHPC.
|
by Rich Brueckner on (#10ZK4)
In this video from the Intel HPC Developer Conference at SC15, Kevin O'Leary from Intel presents: Vectorization Advisor in Action for Computer-Aided Formulation. "The talk will focus on a step-by-step walkthrough of optimizations for an industry code by using the new Vectorization Advisor (as part of Intel® Advisor XE 2016). Using this tool, HPC experts at UK Daresbury Lab were able to spot new SIMD modernization and optimization opportunities in the DL_MESO application - an industry engine currently used by "computer-aided formulation" companies like Unilever." The post Video: Vectorization Advisor in Action for Computer-Aided Formulation appeared first on insideHPC.
|
by Rich Brueckner on (#10WVN)
The 32nd International Conference on Massive Storage Systems and Technology (MSST 2016) has issued its Call for Participation & Papers. The event takes place April 30 - May 6 in Santa Clara, CA. "The Program Committee requests presentation proposals on issues in designing, building, maintaining, and migrating large-scale systems that implement databases and other kinds of large, typically persistent, web-scale stores (HSM, NoSQL, key-value stores, etc.), and archives at scales of tens of petabytes to exabytes and beyond." The post Call for Papers: International Conference on Massive Storage Systems and Technology appeared first on insideHPC.
|
by Rich Brueckner on (#10WTX)
Lawrence Livermore National Lab is seeking an HPC Compiler & Tools Engineer in our Job of the Week. "As a member of the Development Environment Group in the Livermore Computing (LC) supercomputing center, will work as a software developer specializing in compilers and application development tools for supporting High Performance Computing (HPC). Will work with scientific computing teams, the open source software community, and HPC vendor partners on the development of enabling technologies for the state-of-the-art platforms currently in use and under procurement." The post Job of the Week: HPC Compiler & Tools Engineer at LLNL appeared first on insideHPC.
|
by staff on (#10T1G)
Today Intersect360 Research released its eighth 2015 Site Budget Allocation Map, a look at how HPC sites divide and spend their budgets. The post 2015 Site Budget Map Reflects HPC Spending appeared first on insideHPC.
|
by Rich Brueckner on (#10SZZ)
"Computers are an invaluable tool for most scientific fields. They are used to process measurement data and to build simulation models of, for example, the climate or the universe. Brian Vinter talks about what makes a computer a supercomputer, and why it is so hard to build and program supercomputers." The post Video: An Overview of Supercomputing appeared first on insideHPC.
|
by Rich Brueckner on (#10SQA)
Today Baidu's Silicon Valley AI Lab (SVAIL) released Warp-CTC open source software for the machine learning community. Warp-CTC is an implementation of the CTC algorithm for CPUs and NVIDIA GPUs. "According to SVAIL, Warp-CTC is 10-400x faster than current implementations. It makes end-to-end deep learning easier and faster so researchers can make progress more rapidly." The post Accelerating Machine Learning with Open Source Warp-CTC appeared first on insideHPC.
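At the heart of the CTC loss that Warp-CTC accelerates is a forward (alpha) dynamic-programming recursion over a blank-interleaved label sequence. A minimal pure-Python sketch of that recursion follows; it is illustrative only, and Warp-CTC's actual API and GPU kernels differ.

```python
import math

def ctc_forward(log_probs, labels, blank=0):
    """log_probs: T x V per-frame log-probabilities; labels: target sequence
    (no blanks). Returns the log-probability of the labelling under CTC."""
    # Interleave blanks: l' = [blank, l1, blank, l2, ..., blank]
    ext = [blank]
    for l in labels:
        ext += [l, blank]
    S, T = len(ext), len(log_probs)
    NEG = float("-inf")

    def logadd(a, b):
        if a == NEG: return b
        if b == NEG: return a
        m = max(a, b)
        return m + math.log(math.exp(a - m) + math.exp(b - m))

    # alpha[s] = log-prob of all alignment prefixes ending at extended symbol s
    alpha = [NEG] * S
    alpha[0] = log_probs[0][ext[0]]
    if S > 1:
        alpha[1] = log_probs[0][ext[1]]
    for t in range(1, T):
        new = [NEG] * S
        for s in range(S):
            a = alpha[s]                      # stay on the same symbol
            if s > 0:
                a = logadd(a, alpha[s - 1])   # advance by one symbol
            # skip a blank between two distinct non-blank labels
            if s > 1 and ext[s] != blank and ext[s] != ext[s - 2]:
                a = logadd(a, alpha[s - 2])
            new[s] = a + log_probs[t][ext[s]]
        alpha = new
    # valid alignments end on the last label or the trailing blank
    return logadd(alpha[-1], alpha[-2] if S > 1 else NEG)

uniform = [[math.log(1 / 3)] * 3 for _ in range(2)]
print(math.exp(ctc_forward(uniform, [1])))  # 1/3: three alignments, each (1/3)^2
```

Implementations like Warp-CTC run this recursion (and its backward counterpart) in log space across whole minibatches, which is where the reported speedups come from.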
|
by Rich Brueckner on (#10SNG)
The fastest supercomputers are built with the fastest microprocessor chips, which in turn are built upon the fastest switching technology. But even the best semiconductors are reaching their limits as more is demanded of them. In the closing months of this year came news of several developments that could break through silicon's performance barrier and herald an age of smaller, faster, lower-power chips. It is possible that they could be commercially viable in the next few years. The post In Search Of: A Quantum Leap in Processors appeared first on insideHPC.
|
by Rich Brueckner on (#10P9R)
In this video, Nick Nystrom from PSC describes the new Bridges Supercomputer. Bridges sports a unique architecture featuring Hewlett Packard Enterprise (HPE) large-memory servers including HPE Integrity Superdome X, HPE ProLiant DL580, and HPE Apollo 2000. Bridges is interconnected by Intel Omni-Path Architecture fabric, deployed in a custom topology for Bridges' anticipated workloads. The post Video: Bridges Supercomputer to be a Flexible Resource for Data Analytics appeared first on insideHPC.
|
by staff on (#10P5Z)
Géant, Europe's collaboration on e-infrastructure and services for research and education, and operator of the network that interconnects Europe's National Research and Education Networks (NRENs), has extended its 100 Gigabit Ethernet (GbE) network into data centers. The post Géant Expands Europe's National Research and Education Network appeared first on insideHPC.
|
by Rich Brueckner on (#10P47)
Researchers are using XSEDE compute resources to study how lasers can be used to make useful materials. In this podcast, Dr. Zhigilei discusses the practical applications of zapping surfaces with short laser pulses. Laser ablation, which refers to the ejection of materials from the irradiated target, generates chemical-free nanoparticles that can be used in medical applications, for example. The post Podcast: Simulating how Lasers can Transform Materials appeared first on insideHPC.
|
by staff on (#10P00)
Today Seagate announced that the French Alternative Energies and Atomic Energy Commission (CEA) has selected the Seagate ClusterStor L300 for its GS1K HPC storage needs. GS1K is the next-generation supercomputing data management infrastructure for CEA's Military Applications Division. The post Seagate to Power CEA Supercomputing Data Management Infrastructure appeared first on insideHPC.
|
by Rich Brueckner on (#10JHK)
Today Allinea reports that developers of Roxar Software Solutions at Emerson Process Management used Allinea Forge to increase the performance of their Tempest MORE next-generation reservoir simulator by 30 percent. The post Allinea Tools Help Deliver 30% Performance Boost in Reservoir Simulation appeared first on insideHPC.
|
by staff on (#10JBC)
"The path to Exascale computing is clearly paved with Co-Design architecture. By using a Co-Design approach, the network infrastructure becomes more intelligent, which reduces the overhead on the CPU and streamlines the process of passing data throughout the network. A smart network is the only way that HPC data centers can deal with the massive demands to scale, to deliver constant performance improvements, and to handle exponential data growth." The post InfiniBand Enables Intelligent Networks appeared first on insideHPC.
|
by MichaelS on (#10M5Z)
"In GPAW, the high-level nature of Python allows developers to design the algorithms, while C can be used for the numeric-intensive portions of the application through highly optimized math kernels. In this application, the Python portions of the code are serial, which makes offloading to the Intel Xeon Phi coprocessor not feasible. However, an interface has been developed, pyMIC, which allows the application to launch kernels and control data transfers to the coprocessor." The post Python for HPC and the Intel Xeon Phi appeared first on insideHPC.
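The launch-a-kernel and transfer-data pattern described above can be mimicked in a few lines. In this sketch a single worker thread stands in for the coprocessor and the "kernel" is plain Python; pyMIC's real interface (offload streams, buffer transfers, compiled kernels) differs, so treat this purely as an illustration of the pattern.

```python
# Illustration of the offload pattern only: a worker thread stands in for the
# coprocessor, and the "kernel" is plain Python rather than an optimized
# C/MKL routine on the device.
from concurrent.futures import ThreadPoolExecutor

def kernel_dot(x, y):
    # Numeric-intensive "kernel" that would run on the coprocessor.
    return sum(a * b for a, b in zip(x, y))

def host_program():
    x = [1.0, 2.0, 3.0]   # data the host would transfer to the device
    y = [4.0, 5.0, 6.0]
    with ThreadPoolExecutor(max_workers=1) as device:
        fut = device.submit(kernel_dot, x, y)  # "launch" the kernel
        return fut.result()                    # fetch the result back

print(host_program())  # 32.0
```

The host code stays serial, mirroring the GPAW situation: only the explicitly launched kernels run on the "device", and all data movement is under the host's control.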
|
by Rich Brueckner on (#10J78)
A new paper from ORNL's Sparsh Mittal presents a survey of approximate computing techniques. Recently published in ACM Computing Surveys 2016, A Survey Of Techniques for Approximate Computing reviews nearly 85 papers on this increasingly hot topic. The post New Paper Surveys Approximate Computing Techniques appeared first on insideHPC.
|
by staff on (#10J3X)
Today DDN announced that it fortified its position as the HPC storage market leader during 2015. According to a recently published Intersect360 Research HPC User Site Census report, DDN has the largest share of installed systems at surveyed HPC sites. The post Survey Shows DDN as HPC Storage Market Leader appeared first on insideHPC.
|
by Rich Brueckner on (#10J0S)
Today the ASC Student Supercomputer Challenge (ASC16) announced details from their Preliminary Contest on January 6. College students from around the world were asked to design a high performance computer system that optimizes HPCG and MASNUM_WAM applications under 3000W, as well as to conduct a DNN performance optimization on a standalone hybrid CPU+MIC platform. All system designs, along with the results and code of the optimization application, are to be submitted by March 2. The post Deep Learning, Ocean Modeling, and HPCG Come to ASC16 Student Supercomputer Challenge appeared first on insideHPC.
|
by Rich Brueckner on (#10HCC)
The 2016 ACM International Conference on Computing Frontiers has issued its Call for Papers. The event takes place May 16-18 in Como, Italy. "We seek contributions that push the envelope in a wide range of computing topics, from more traditional research in architecture and systems to new technologies and devices. We seek contributions on novel computing paradigms, computational models, algorithms, application paradigms, development environments, compilers, operating environments, computer architecture, hardware substrates, memory technologies, and smarter life applications." The post Call for Papers: ACM International Conference on Computing Frontiers appeared first on insideHPC.
|
by Rich Brueckner on (#10ENT)
Today Fujitsu Limited announced it has developed the world's largest magnetic-reversal simulator. Developed in joint research with the National Institute for Materials Science (NIMS), the simulator runs on the famous K computer using a mesh covering more than 300 million micro-regions. Based on the large-scale magnetic-reversal simulation technology first developed in 2013, this new development offers a faster calculation algorithm and more efficient massive parallel processing. The post K Computer Runs World's Largest-Scale Magnetic-Reversal Simulator appeared first on insideHPC.
|
by Rich Brueckner on (#10EJB)
Today the Brookhaven National Laboratory announced that it has expanded its Computational Science Initiative (CSI). The programs within this initiative leverage computational science, computer science, and mathematics expertise and investments across multiple research areas at the Laboratory, including the flagship facilities that attract thousands of scientific users each year, further establishing Brookhaven as a leader in tackling the "big data" challenges at experimental facilities and expanding the frontiers of scientific discovery. The post Brookhaven Lab Expands Computational Science Initiative appeared first on insideHPC.
|
by Rich Brueckner on (#10EDK)
Today NOAA reported a nearly four-fold increase in computing capacity to innovate U.S. forecasting in 2016. NOAA's Weather and Climate Operational Supercomputer System is now running at record speed, with the capacity to process and analyze earth observations at quadrillions of calculations per second to support weather, water and climate forecast models. This investment to advance the field of meteorology and improve global forecasts secures the U.S. reputation as a world leader in atmospheric and water prediction sciences and services. The post NOAA Upgrades Supercomputers to 5.78 Petaflops appeared first on insideHPC.
|
by Rich Brueckner on (#10EBS)
Today Zadara Storage announced that PCPC Direct has added the Zadara Storage Virtual Private Storage Array (VPSA) solution to their product portfolio to provide enterprise-grade storage-as-a-service to their HPC customers in a wide variety of markets. "PCPC Direct has an 18-year history of providing IT operations and managed services to some of the most demanding customers in the HPC market. PCPC Direct provides the underlying infrastructure for companies developing scientific calculations in the aerospace industry, cutting-edge research in higher education and medical research, seismic research for multi-national energy companies and Next-Generation Sequencing (NGS) for genomic research projects." The post Zadara Storage-as-a-Service comes to PCPC Direct appeared first on insideHPC.
|