by staff on (#2KVQE)
Simon Fraser University (SFU), Compute Canada and WestGrid have announced the launch of Cedar, the most powerful academic supercomputer in Canada and a major update to the country's HPC resources. Housed in the new data centre at SFU’s Burnaby Campus, Cedar will serve Canadian researchers across the country in all scientific disciplines by providing expanded compute, storage and cloud resources. The post Cedar Supercomputer Comes to Canada appeared first on insideHPC.
|
High-Performance Computing News Analysis | insideHPC
Link | https://insidehpc.com/ |
Feed | http://insidehpc.com/feed/ |
Updated | 2024-11-25 11:15 |
by Rich Brueckner on (#2KVP1)
"In this talk we will discuss a workflow for building and testing Docker containers and their deployment on an HPC system using Shifter. Docker is widely used by developers as a powerful tool for standardizing the packaging of applications across multiple environments, which greatly eases the porting efforts. On the other hand, Shifter provides a container runtime that has been specifically built to fit the needs of HPC. We will briefly introduce these tools while discussing the advantages of using these technologies to fulfill the needs of specific workflows for HPC, e.g., security, high-performance, portability and parallel scalability."The post HPC Workflows Using Containers appeared first on insideHPC.
|
by staff on (#2KR7B)
Over at the Google Blog, Alex Barrett writes that an MIT math professor recently broke the record for the largest-ever Compute Engine cluster, with 220,000 cores on Preemptible VMs. According to Google, this is the largest known HPC cluster to ever run in the public cloud.The post MIT Professor Runs Record Google Compute Engine job with 220K Cores appeared first on insideHPC.
|
by staff on (#2KR3E)
Today the Gauss Centre for Supercomputing in Germany announced that Prof. Dr. Michael M. Resch is the new chairman of the GCS Board of Directors. “Over the coming years, GCS is devoted to keeping its leading European position in HPC,” Resch said. "With all the challenges of architectural diversity and varying user requirements, we strongly believe that GCS will face the challenge and deliver performance not just in terms of flops, but more importantly in terms of best solutions and practices for our scientific and industrial users." The post Michael Resch Named Chairman of GCS in Germany appeared first on insideHPC.
|
by Rich Brueckner on (#2KR12)
Dr. Eng Lim Goh from Hewlett Packard Enterprise gave this talk at the HPC User Forum. "SGI’s highly complementary portfolio, including its in-memory high-performance data analytics technology and leading high-performance computing solutions will extend and strengthen HPE’s current leadership position in the growing mission critical and high-performance computing segments of the server market."The post Dr. Eng Lim Goh presents: HPC & AI Technology Trends appeared first on insideHPC.
|
by MichaelS on (#2KQYY)
"Just as developers need tools to understand the performance of a CPU intensive application in order to modify the code for higher performance, so do those that develop interactive 3D computer graphics applications. An excellent tool for t this purpose is the Intel Graphics Performance Analyzer set. This tool, which is free to download, can help the developer understand at a very low level how the application is performing, from a number of aspects."The post Intel® Graphics Performance Analyzer for Faster Graphics Performance appeared first on insideHPC.
|
by Rich Brueckner on (#2KKVF)
"Starting today, Intel will contribute all Lustre features and enhancements to the open source community. This will mean that we will no longer provide Intel-branded releases of Lustre, and instead align our efforts and support around the community release." In related news, former Whamcloud CEO and Lustre team leader Brent Gorda has left the Intel Lustre team for another management position.The post Intel to Open Source All Lustre Code as Brent Gorda Moves On appeared first on insideHPC.
|
by staff on (#2KKS4)
Today PSSC Labs announced it has refreshed its CBeST (Complete Beowulf Software Toolkit) cluster management package. CBeST is already a proven platform deployed on over 2200 PowerWulf Clusters to date, and with this refresh PSSC Labs is adding a host of new features and upgrades to ensure users have everything needed to manage, monitor, maintain and upgrade their HPC cluster. "PSSC Labs is unique in that we manufacture all of our own hardware and develop our own cluster management toolkits in house. While other companies simply cobble together third party hardware and software, PSSC Labs custom builds every HPC cluster to achieve performance and reliability boosts of up to 15%," said Alex Lesser, Vice President of PSSC Labs. The post PSSC Labs Updates CBeST Cluster Management Software appeared first on insideHPC.
|
by Rich Brueckner on (#2KKKJ)
Paul Messina from Argonne presented this talk at the HPC User Forum in Santa Fe. "The Exascale Computing Project (ECP) was established with the goals of maximizing the benefits of HPC for the United States and accelerating the development of a capable exascale computing ecosystem. The ECP is a collaborative effort of two U.S. Department of Energy organizations – the Office of Science (DOE-SC) and the National Nuclear Security Administration (NNSA)."The post Update on the Exascale Computing Project (ECP) appeared first on insideHPC.
|
by Beth Harlen on (#2KKCN)
This Rock Stars of HPC series is about the men and women who are changing the way the HPC community develops, deploys, and operates supercomputers, and about the social and economic impact of their discoveries. "As the lead developer of the VMD molecular visualization and analysis tool, John Stone’s code is used by more than 100,000 researchers around the world. He’s also a CUDA Fellow, helping to bring HPC to the masses with accelerated computing. In this way and many others, John Stone is certainly one of the Rock Stars of HPC." The post Rock Stars of HPC: John Stone appeared first on insideHPC.
|
by Rich Brueckner on (#2KG4F)
Dan Olds from OrionX.net presented this talk at the Switzerland HPC Conference. "Dan Olds will present recent research into the history of High Performance Interconnects (HPI), the current state of the HPI market, where HPIs are going in the future, and how customers should evaluate HPI options today. This will be a highly informative and interactive session."The post High Performance Interconnects – Assessments, Rankings and Landscape appeared first on insideHPC.
|
by staff on (#2KG2Y)
Today the Rescale HPC Cloud introduced the ScaleX Labs with Intel Xeon Phi processors and Intel Omni-Path Fabric managed by R Systems. The collaboration brings lightning-fast, next-generation computation to Rescale’s cloud platform for big compute, ScaleX Pro. "We are proud to provide a remote access platform for Intel’s latest processors and interconnect, and appreciate the committed cooperation of our partners at R Systems," said Rescale CEO Joris Poort. “Our customers care about both performance and convenience, and the ScaleX Labs with Intel Xeon Phi processors brings them both in a single cloud HPC solution at a price point that works for everyone.” The post Rescale Announces ScaleX Labs with Intel Xeon Phi and Omni-Path appeared first on insideHPC.
|
by staff on (#2KFRD)
"With the increasing challenges in conventional approaches to improving memory capacity and power efficiency, our early research indicates that a significant change in the operating temperature of DRAM using cryogenic techniques may become essential in future memory systems,†said Dr. Gary Bronner, vice president of Rambus Labs. “Our strategic partnership with Microsoft has enabled us to identify new architectural models as we strive to develop systems utilizing cryogenic memory. The expansion of this collaboration will lead to new applications in high-performance supercomputers and quantum computers.â€The post Rambus Collaborates with Microsoft on Cryogenic Memory appeared first on insideHPC.
|
by staff on (#2KFKZ)
Today the European PRACE initiative announced that their 14th Call for Proposals yielded 113 eligible proposals, of which 59 were awarded a total of close to 2 billion core hours. This brings the total number of projects awarded by PRACE to 524. Taking into account the three multi-year projects from the 12th Call that were renewed and the 10 million core hours reserved for Centres of Excellence, the total amount of core hours awarded by PRACE rises to more than 14 billion. "The 59 newly awarded projects are led by principal investigators from 15 different European countries. In addition, two projects are led by PIs from New Zealand and the USA." The post Call for Proposals reflects magnitude of PRACE 2 appeared first on insideHPC.
|
by Rich Brueckner on (#2KEWM)
In this video from the Switzerland HPC Conference, Jeffrey Stuecheli from IBM presents: Open CAPI, A New Standard for High Performance Attachment of Memory, Acceleration, and Networks. "OpenCAPI sets a new standard for the industry, providing a high bandwidth, low latency open interface design specification. This session will introduce the new standard and its goals. This includes details on how the interface protocol provides unprecedented latency and bandwidth to attached devices." The post Open CAPI: A New Standard for High Performance Attachment of Memory, Acceleration, and Networks appeared first on insideHPC.
|
by Rich Brueckner on (#2KCCG)
"HPC software is becoming increasingly complex. The space of possible build configurations is combinatorial, and existing package management tools do not handle these complexities well. Because of this, most HPC software is built by hand. This talk introduces "Spack", an open-source tool for scientific package management which helps developers and cluster administrators avoid having to waste countless hours porting and rebuilding software." A tutorial video on using Spack is also included.The post SPACK: A Package Manager for Supercomputers, Linux, and MacOS appeared first on insideHPC.
|
by staff on (#2KCAV)
Today IBM announced that it will offer the Anaconda Open Data Science platform on IBM Cognitive Systems. Anaconda will also integrate with the PowerAI software distribution for machine learning and deep learning that makes it simple and fast to take advantage of Power performance and GPU optimization for data intensive cognitive workloads. "Anaconda is an important capability for developers building cognitive solutions, and now it’s available on IBM’s high performance deep learning platform," said Bob Picciano, senior vice president of Cognitive Systems. “Anaconda on IBM Cognitive Systems empowers developers and data scientists to build and deploy deep learning applications that are ready to scale.” The post Anaconda Open Data Science Platform comes to IBM Cognitive Systems appeared first on insideHPC.
|
by Rich Brueckner on (#2KC9A)
Today the Department of Energy’s Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program announced it is accepting proposals for high-impact, computationally intensive research campaigns in a broad array of science, engineering, and computer science domains. DOE’s Office of Science plans to award over 6 billion supercomputer processor-hours at Argonne National Laboratory and […]The post DOE’s INCITE Program Seeks Advanced Computational Research Proposals for 2018 appeared first on insideHPC.
|
by staff on (#2KC78)
Today IBM announced that the first annual OpenPOWER Foundation Developer Congress will take place May 22-25 in San Francisco. With a focus on machine learning, the conference will continue to foster collaboration within the foundation to satisfy the performance demands of today’s computing market. The post OpenPOWER Developer Congress Event to Focus on Machine Learning appeared first on insideHPC.
|
by staff on (#2KC5K)
"Baidu and NVIDIA are long-time partners in advancing the state of the art in AI,†said Ian Buck, general manager of Accelerated Computing at NVIDIA. “Baidu understands that enterprises need GPU computing to process the massive volumes of data needed for deep learning. Through Baidu Cloud, companies can quickly convert data into insights that lead to breakthrough products and services.â€The post Baidu Deep Learning Service adds Latest NVIDIA Pascal GPUs appeared first on insideHPC.
|
by staff on (#2K9A7)
In this AI Podcast, Mark Michalski from the Massachusetts General Hospital Center for Clinical Data Science discusses how AI is being used to advance medicine. "Medicine — particularly radiology and pathology — has become more data-driven. The Massachusetts General Hospital Center for Clinical Data Science — led by Mark Michalski — promises to accelerate that, using AI technologies to spot patterns that can improve the detection, diagnosis and treatment of diseases." The post Podcast: How AI Can Improve the Diagnosis and Treatment of Diseases appeared first on insideHPC.
|
by Rich Brueckner on (#2K993)
Luigi Brochard from Lenovo gave this talk at the Switzerland HPC Conference. "High performance computing is converging more and more with big data and its related infrastructure requirements in the field. Lenovo is investing in developing systems designed to resolve today's and future problems in a more efficient way and respond to the demands of the industrial and research application landscape." The post Lenovo HPC Strategy Update appeared first on insideHPC.
|
by Rich Brueckner on (#2K6TY)
"The basic idea of deep learning is to automatically learn to represent data in multiple layers of increasing abstraction, thus helping to discover intricate structure in large datasets. NVIDIA has invested in SaturnV, a large GPU-accelerated cluster, (#28 on the November 2016 Top500 list) to support internal machine learning projects. After an introduction to deep learning on GPUs, we will address a selection of open questions programmers and users may face when using deep learning for their work on these clusters."The post Deep Learning on the SaturnV Cluster appeared first on insideHPC.
|
by Rich Brueckner on (#2K6RW)
The TERATEC Forum has posted their Agenda for their upcoming June meeting. With technical workshops, plenary sessions and a vendor exhibit, the event takes place June 27-28 at the Ecole Polytechnique campus in Palaiseau, France. "Our objective is to bring together all decision makers and experts in the field of digital simulation and Big Data, from the industrial and technological world and the world of research."The post Agenda Posted for June Teratec Forum in France appeared first on insideHPC.
|
by Rich Brueckner on (#2K3K9)
"MeteoSwiss, the Swiss national weather forecast institute, has selected densely populated accelerator servers as their primary system to compute weather forecast simulation. Servers with multiple accelerator devices that are primarily connected by a PCI-Express (PCIe) network achieve a significantly higher energy efficiency. Memory transfers between accelerators in such a system are subjected to PCIe arbitration policies. In this paper, we study the impact of PCIe topology and develop a congestion-aware performance model for PCIe communication. We present an algorithm for computing congestion factors of every communication in a congestion graph that characterizes the dynamic usage of network resources by an application."The post A PCIe Congestion-Aware Performance Model for Densely Populated Accelerator Servers appeared first on insideHPC.
|
by staff on (#2K3EQ)
Today Engility announced that the company will bring its world-class high performance computing capabilities to bear as it competes to win NASA’s Advanced Computing Services contract. "HPC is a strategic, enabling capability for NASA," said Lynn Dugle, CEO of Engility. “Engility’s cadre of renowned computational scientists and HPC experts, coupled with our proven high performance data analytics solutions, will help increase NASA’s science and engineering capabilities.” The post Engility Pursues NASA Advanced Computing Services Contract appeared first on insideHPC.
|
by staff on (#2K3BS)
In this special guest feature, James Reinders discusses the use of the Intel® Advanced Vector Extensions 512 (Intel® AVX-512), covering a variety of vectorization techniques for accessing the performance of Intel AVX-512. The post Intel Xeon Phi Processor Intel AVX-512 Programming in a Nutshell appeared first on insideHPC.
|
by Rich Brueckner on (#2K36C)
Registration is now open for the PASC17 conference, which takes place the week after ISC in Lugano, Switzerland. "The PASC17 Conference is pleased to announce that a preliminary program is available online and that registration is now open. The PASC Conference is an interdisciplinary event in high performance computing that brings together domain science, applied mathematics and computer science – where computer science is focused on enabling the realization of scientific computation." The post Registration Opens for PASC17 Conference in Lugano appeared first on insideHPC.
|
by Rich Brueckner on (#2JZXK)
The OpenSFS Lustre community has posted the Agenda for their upcoming LUG 2017 conference. The event takes place May 30 – June 2 in Bloomington, Indiana. The Lustre User Group (LUG) conference is the industry’s primary venue for discussion and seminars on the Lustre parallel file system and other open source file system technologies. LUG provides […] The post Agenda Posted for LUG 2017 in Bloomington appeared first on insideHPC.
|
by Rich Brueckner on (#2JZT7)
In this video from the Switzerland HPC Conference, Rich Brueckner from insideHPC moderates a panel discussion on Exascale Computing. "The Exascale Computing Project in the USA is tasked with developing a set of advanced supercomputers with 50x better performance than today's fastest machines on real applications. This panel discussion will look at the challenges, gaps, and probable pathways forward in this monumental endeavor." Panelists: Gilad Shainer, HPC Advisory Council
|
by Rich Brueckner on (#2JZ8S)
"High performance computing is rapidly finding new uses in many applications and businesses, enabling the creation of disruptive products and services. Huawei, a global leader in information and communication technologies, brings a broad spectrum of innovative solutions to HPC. This talk examines Huawei's world class HPC solutions and explores creative new ways to solve HPC problems."The post A Fresh Look at HPC from Huawei Enterprise appeared first on insideHPC.
by Richard Friedman on (#2JYVC)
Discovering where the performance bottlenecks are and knowing what to do about it can be a mysterious and complex art, needing some very sophisticated performance analysis tools for success. That’s where Intel® VTune™ Amplifier XE 2017, part of Intel Parallel Studio XE, comes in.The post Intel® VTune™ Amplifier Turns Raw Profiling Data Into Performance Insights appeared first on insideHPC.
|
by Rich Brueckner on (#2JW0F)
"Over the last decade, CUDA and the underlying GPU hardware architecture have continuously gained popularity in various high-performance computing application domains such as climate modeling, computational chemistry, or machine learning. Despite this popularity, we lack a single coherent programming model for GPU clusters. We therefore introduce the dCUDA programming model, which implements device-side remote memory access."The post dCUDA: Distributed GPU Computing with Hardware Overlap appeared first on insideHPC.
|
by staff on (#2JVQ1)
Today Spectra Logic announced that Globus has completed client certification for its Spectra BlackPearl Converged Storage System. BlackPearl allows customers in university, high performance computing (HPC) and research organizations to seamlessly store data to disk, tape and cloud storage using a unified interface provided by Globus. Globus software-as-a-service (SaaS) simplifies file transfer, sharing and data publication for geographically diverse research communities worldwide. Spectra’s BlackPearl Converged Storage System integrates efficiently with the Globus service and delivers a fully integrated storage solution built to transfer data between Globus users, reducing overall data management time, cost and complexities.The post Spectra BlackPearl Gets Globus Client Certification appeared first on insideHPC.
|
by staff on (#2JVFG)
"One of the challenges we face in enabling big data science is providing safe and affordable storage at petabyte scale,†said Dr. Marek Michalewicz, deputy director of ICM. “When the Avere and Western Digital team arrived, we were able to count time-to-completion in hours. Together, Avere and Western Digital let us take advantage of object storage efficiencies to support demand, delivering the capacity, performance and manageability that the OCEAN data center environment requires.â€The post Avere Powers Active Archive Object Storage at University of Warsaw appeared first on insideHPC.
|
by Rich Brueckner on (#2JVE3)
General Atomics in San Diego is seeking a Software Developer for Exascale Computing in our Job of the Week. "This position independently leads the design, development and verification of novel scientific software for high-fidelity physics simulations on unique high performance computational hardware including Exascale-class systems." The post Job of the Week: Software Developer for Exascale at General Atomics appeared first on insideHPC.
|
by staff on (#2JV6S)
The LAD'17 Lustre Administrators and Developers Conference has issued its Call for Proposals. The event takes place Oct. 3-4 in Paris. "We are inviting community members to send proposals for presentations at this event. No proceedings are required, just an abstract of a 30-minute (technical) presentation. Topics may include (but are not limited to): site updates or future projects, Lustre administration, monitoring and tools, Lustre feature overview, Lustre client performance, benefits of hardware evolution to Lustre, comparisons between Lustre and other parallel file systems, Lustre and Exascale I/O, etc." The post Call for Proposals: LAD’17 in Paris appeared first on insideHPC.
|
by staff on (#2JRK3)
During the first week of April, Eni fired up its new HPC cluster in the Green Data Center in Ferrera Erbognone, Italy. Known as HPC3, the new 5.8 Petaflop cluster will allow Eni to fully support all the activities in the Energy Exploration and Production sector. "The start-up of the new HPC3 supercomputer and its follow-on HPC4 will enable Eni to deploy the most advanced and sophisticated proprietary codes developed by our research for the E&P activities," said Eni CEO Claudio Descalzi. "These technologies will provide Eni with unprecedented accuracy and resolution in seismic imaging, geological modeling and reservoir dynamic simulation, allowing us to further accelerate overall cycle times in the upstream process and to sustain the E&P performances."The post Eni in Italy fires up 5.8 Petaflop HPC3 Cluster for Oil & Gas appeared first on insideHPC.
|
by Rich Brueckner on (#2JRDK)
"This talk will focus on challenges in designing programming models and runtime environments for Exascale systems with millions of processors and accelerators to support various programming models. We will focus on MPI+X (PGAS - OpenSHMEM/UPC/CAF/UPC++, OpenMP, and CUDA) programming models by taking into account support for multi-core systems (KNL and OpenPower), high-performance networks, GPGPUs (including GPUDirect RDMA), and energy-awareness."The post High-Performance and Scalable Designs of Programming Models for Exascale Systems appeared first on insideHPC.
|
by staff on (#2JR9M)
Today DDN announced new feature enhancements to its WOS object storage platform that include increased data protection and multi-site connectivity options. With its newest capabilities, DDN WOS provides the lowest data protection overhead available in the market. It also delivers control for local-only rebuilds for higher uptime and lower performance impact of hardware failures, along with multi-site collaboration, distribution and disaster recovery that is faster and more efficient than public cloud solutions.The post DDN Adds Cost-Effective Data Protection to WOS Object Storage appeared first on insideHPC.
|
by staff on (#2JR0T)
Genomic sequencing has progressed so rapidly that researchers can now analyze the genetic profiles of healthy individuals to uncover mutations that will almost certainly lead to a genetic condition. These breakthroughs are demonstrating that the future of genomic medicine will focus not just on the ability to reactively treat diseases, but on predicting and preventing them before they occur.The post Next-Generation Sequencing Improving Precision Medicine appeared first on insideHPC.
|
by Rich Brueckner on (#2JQKG)
"Nimbix has tremendous experience in GPU cloud computing, going all the way back to NVIDIA’s Fermi architecture,†said Steve Hebert, CEO of Nimbix. “We are looking forward to accelerating deep learning and analytics applications for customers seeking the latest generation GPU technology available in a public cloud.â€The post Tesla P100 GPUs Speed Cloud-Based Deep Learning on the Nimbix Cloud appeared first on insideHPC.
|
by Rich Brueckner on (#2JMQV)
Sean Hefty from Intel presented this talk at the OpenFabrics Workshop. "With its initial release two years ago, libfabric advanced the state of fabric software interfaces. One of the promises of OFI was extensibility: adapting to increased demands of fabric services from applications. This session explores the first major enhancements to the libfabric API in response to user demands and learnings."The post Video: Advancing Open Fabrics Interfaces appeared first on insideHPC.
|
by staff on (#2JMEG)
Today Asetek announced an order from one of its existing OEM partners for its RackCDU D2C (Direct-to-Chip) liquid cooling solution. The order is part of a new installation for an undisclosed HPC customer.The post Asetek Lands Another RackCDU D2C Order for New HPC Installation appeared first on insideHPC.
|
by Rich Brueckner on (#2JMB8)
Costas Bekas from IBM Research Zurich presented this talk at the Switzerland HPC Conference. "IBM Research builds applications that enable humans to collaborate with powerful AI technologies to discover, analyze and tackle the world’s greatest challenges. Humans are on the cusp of augmenting their lives in extraordinary ways with AI. At IBM Research Labs around the globe, we envision and develop next-generation systems that work side-by side with humans, accelerating our ability to create, learn, make decisions and think."The post Visionary Perspective: Foundations of Cognitive Computing appeared first on insideHPC.
|
by Rich Brueckner on (#2JKYJ)
In this podcast, the Radio Free HPC team discusses the upcoming MSST Mass Storage Conference with Program Chair Matthew O'Keefe from Oracle. "Since the conference was founded by the leading national laboratories, MSST has been a venue for massive-scale storage system designers and implementers, storage architects, researchers, and vendors to share best practices and discuss building and securing the world's largest storage systems for HPC, web-scale systems, and enterprises."The post Radio Free HPC Previews the MSST Mass Storage Conference appeared first on insideHPC.
|
by Rich Brueckner on (#2JJ79)
In this video, Dana Brunson from Oklahoma State describes the mission of the Oklahoma High Performance Computing Center. Formed in 2007, the HPCC facilitates computational and data-intensive research across a wide variety of disciplines by providing students, faculty and staff with cyberinfrastructure resources, cloud services, education and training, bioinformatics assistance, proposal support and collaboration.The post Cowboy Supercomputer Powers Research at Oklahoma State appeared first on insideHPC.
|
by Rich Brueckner on (#2JHYV)
Appentra and CESGA are organizing a GPU Hackathon in Spain. The event takes place May 29 - June 1 at the Galicia Supercomputing Center in Santiago de Compostela. The event is free, capacity is limited, and sessions will be held in Spanish, with priority given to national participants. The post GPU Hackathon Coming to Spain End of May appeared first on insideHPC.
|
by Rich Brueckner on (#2JGDK)
Ira Weiny from Intel presented this talk at the OpenFabrics Workshop. "Individual node configuration when managing thousands or tens of thousands of nodes in a cluster can be a daunting challenge. Two key daemons that aid the management of individual nodes in a large fabric are now part of the rdma-core package: IBACM and rdma-ndd." The post Managing Node Configuration with 1000s of Nodes appeared first on insideHPC.
|
by staff on (#2JGCV)
The latest industrial vehicles – as in other areas of automotive design – often involve high-tech components, from composite materials to assisted-driving and vehicle automation systems, which require significantly more complex simulation. Automotive design tasks frequently deal with contradictory requirements of this kind: "make something stronger while making it lighter," explained Sjodin. "Simulations here can be invaluable since modern tools can be set up to sweep over a large range of cases, or to automatically optimize for a certain objective." The post Driving Change with Multiphysics appeared first on insideHPC.
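The "sweep over a large range of cases, or automatically optimize for a certain objective" pattern can be sketched in a few lines; the closed-form cantilever model below is a stand-in for a real multiphysics solver call, and every number in it is a made-up assumption:

```python
# Illustrative design study: find the thinnest (lightest) cantilever that still
# meets a stiffness target. The closed-form beam model stands in for a call to
# a real multiphysics solver; all numbers are assumptions.
import numpy as np

LENGTH, WIDTH, E, LOAD = 1.0, 0.05, 70e9, 100.0   # m, m, Pa (aluminium), N
MAX_DEFLECTION = 0.005                             # m
DENSITY = 2700.0                                   # kg/m^3

def deflection(thickness):
    # Tip deflection of an end-loaded cantilever: d = P*L^3 / (3*E*I).
    inertia = WIDTH * thickness**3 / 12.0
    return LOAD * LENGTH**3 / (3.0 * E * inertia)

def mass(thickness):
    return DENSITY * LENGTH * WIDTH * thickness

# 1. Sweep a range of candidate thicknesses and keep the feasible ones.
candidates = np.linspace(0.005, 0.05, 200)
feasible = [t for t in candidates if deflection(t) <= MAX_DEFLECTION]
best = min(feasible)
print(f"sweep: lightest feasible design is {best*1000:.1f} mm thick, "
      f"{mass(best):.2f} kg")

# 2. Refine by bisecting on the constraint boundary ("optimize" the thickness).
lo, hi = 0.005, best
for _ in range(50):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if deflection(mid) > MAX_DEFLECTION else (lo, mid)
print(f"refined optimum: {hi*1000:.2f} mm thick, {mass(hi):.2f} kg")
```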
|