by Rich Brueckner on (#1QX5W)
In this video, D-Wave Systems Founder Eric Ladizinsky presents: The Coming Quantum Computing Revolution. "Despite the incredible power of today’s supercomputers, there are many complex computing problems that can’t be addressed by conventional systems. Our need to better understand everything, from the universe to our own DNA, leads us to seek new approaches to answer the most difficult questions. While we are only at the beginning of this journey, quantum computing has the potential to help solve some of the most complex technical, commercial, scientific, and national defense problems that organizations face."The post Video: The Coming Quantum Computing Revolution appeared first on insideHPC.
|
High-Performance Computing News Analysis | insideHPC
Link | https://insidehpc.com/ |
Feed | http://insidehpc.com/feed/ |
Updated | 2024-11-25 21:45 |
by MichaelS on (#1QX0F)
"The major functionality of the Intel Xeon Phi coprocessor is a chip that does the heavy computation. The current version utilizes up to 16 channels of GDDR5 memory. An interesting notes is that up to 32 memory devices can be used, by using both sides of the motherboard to hold the memory. This doubles the effective memory availability as compared to more conventional designs."The post Intel Xeon Phi Coprocessor Design appeared first on insideHPC.
|
by Douglas Eadline on (#1QTMJ)
A single issue has always defined the history of HPC systems: performance. While offloading and co-design may seem like new approaches to computing, they actually have been used, to a lesser degree, in the past as a way to enhance performance. Current co-design methods are now going deeper into cluster components than was previously possible. These new capabilities extend from the local cluster nodes into the "computing network."The post Designing Machines Around Problems: The Co-Design Push to Exascale appeared first on insideHPC.
|
by Rich Brueckner on (#1QSXX)
In this video from the 2016 Blue Waters Symposium, Andriy Kot from NCSA presents: Parallel I/O Best Practices.The post Video: Parallel I/O Best Practices appeared first on insideHPC.
|
by Rich Brueckner on (#1QSVW)
In this TACC Podcast, Researchers describe how XSEDE supercomputing resources are helping them grow a better soybean through the SoyKB project based at the University of Missouri-Columbia. "The way resequencing is conducted is to chop the genome in many small pieces and see the many, many combinations of small pieces," said Xu. "The data are huge, millions of fragments mapped to a reference. That's actually a very time consuming process. Resequencing data analysis takes most of our computing time on XSEDE."The post Podcast: Supercomputing Better Soybeans appeared first on insideHPC.
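The chop-and-map idea described in the quote can be sketched in a few lines. This is a toy illustration only, not the SoyKB pipeline: real resequencing tools perform indexed, approximate alignment over millions of reads, and the function names here are invented.

```python
# Toy sketch of resequencing: chop a sample sequence into short
# fragments, then map each fragment back to a reference by exact
# substring search. Production aligners (e.g. BWA, Bowtie) work very
# differently at scale; this only shows the concept.

def chop(genome: str, k: int) -> list[str]:
    """Split a sequence into consecutive fragments of length k."""
    return [genome[i:i + k] for i in range(0, len(genome) - k + 1, k)]

def map_fragments(fragments: list[str], reference: str) -> dict[str, int]:
    """Map each fragment to its first offset in the reference (-1 if absent)."""
    return {frag: reference.find(frag) for frag in fragments}

if __name__ == "__main__":
    reference = "ACGTACGGTACCTTAGACGT"
    sample = reference[4:16]   # pretend this is the resequenced sample
    frags = chop(sample, 4)
    print(map_fragments(frags, reference))  # fragment -> offset in reference
```

Even this toy version hints at the cost Xu describes: every fragment triggers a scan of the reference, which is why indexed data structures and large XSEDE allocations are needed at real scale.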
|
by MichaelS on (#1QSQD)
Deep learning solutions are typically part of a broader high performance analytics function in for-profit enterprises, with a requirement to deliver a fusion of business and data requirements. In addition to supporting large-scale deployments, industrial solutions typically require portability, support for a range of development environments, and ease of use.The post Software Framework for Deep Learning appeared first on insideHPC.
|
by staff on (#1QSMB)
NOAA and its partners have developed a new forecasting tool to simulate how water moves throughout the nation’s rivers and streams, paving the way for the biggest improvement in flood forecasting the country has ever seen. Launched today and run on NOAA’s powerful new Cray XC40 supercomputer, the National Water Model uses data from more than 8,000 U.S. Geological Survey gauges to simulate conditions for 2.7 million locations in the contiguous United States. The model generates hourly forecasts for the entire river network. Previously, NOAA was only able to forecast streamflow for 4,000 locations every few hours.The post Supercomputers Power NOAA Flood Forecasting Tool appeared first on insideHPC.
|
by MichaelS on (#1QS9B)
This very interesting whitepaper explains how selecting a proper parallel file system for your application can increase the performance of complex simulations and reduce time to completion.The post Faster and More Accurate Exploration using Shared Storage with Parallel Access appeared first on insideHPC.
|
by Rich Brueckner on (#1QPGG)
Today SC16 announced that the conference will feature 38 high-quality workshops to complement the overall Technical Program events, expand the knowledge base of its subject area, and extend its impact by providing greater depth of focus.The post SC16 to Feature 38 HPC Workshops appeared first on insideHPC.
|
by Rich Brueckner on (#1QP6X)
Today the U.S. Department of Energy announced that it will invest $16 million over the next four years to accelerate the design of new materials through use of supercomputers. “Our simulations will rely on current petascale and future exascale capabilities at DOE supercomputing centers. To validate the predictions about material behavior, we’ll conduct experiments and use the facilities of the Advanced Photon Source, Spallation Neutron Source and the Nanoscale Science Research Centers.”The post DOE to Invest $16 Million in Supercomputing Materials appeared first on insideHPC.
|
by Rich Brueckner on (#1QP43)
"Few fields are moving faster right now than deep learning," writes Buck. "Today’s neural networks are 6x deeper and more powerful than just a few years ago. There are new techniques in multi-GPU scaling that offer even faster training performance. In addition, our architecture and software have improved neural network training time by over 10x in a year by moving from Kepler to Maxwell to today’s latest Pascal-based systems, like the DGX-1 with eight Tesla P100 GPUs. So it’s understandable that newcomers to the field may not be aware of all the developments that have been taking place in both hardware and software."The post Nvidia Disputes Intel’s Maching Learning Performance Claims appeared first on insideHPC.
|
by Rich Brueckner on (#1QNX9)
Today Cycle Computing announced its continued involvement in optimizing research spearheaded by NASA’s Center for Climate Simulation (NCCS) and the University of Minnesota. Currently, a biomass measurement effort is underway in a coast-to-coast band of Sub-Saharan Africa. An over 10 million square kilometer region of Africa’s trees, a swath of acreage bigger than the entirety […]The post NASA Optimizes Climate Impact Research with Cycle Computing appeared first on insideHPC.
|
by Rich Brueckner on (#1QNVK)
"In order to address data intensive workloads in need of higher performance for storage, TYAN takes full advantage of Intel NVMe technology to highlight hybrid storage configurations. TYAN server solutions with NVMe support can not only boost storage performance over the PCIe interface but provide storage flexibility for customers through scale-out architecture†said Danny Hsu, Vice President of MiTAC Computing Technology Corporation's TYAN Business Unit.The post TYAN Showcases NVMe Servers and Storage Platforms at IDF 2016 appeared first on insideHPC.
|
by staff on (#1QNGE)
With the release of high wattage processors, liquid cooling is becoming a necessity for HPC data centers. Liquid cooling’s ability to provide the direct removal of heat from these high wattage components within the servers is well established. However, there are sometimes concerns from facilities management that need to be addressed prior to liquid cooling’s introduction to the data center.The post Making it Easy to Introduce Liquid Cooling to the Data Center appeared first on insideHPC.
|
by Rich Brueckner on (#1QJQ8)
LANL reports that a moment of inspiration during a wiring diagram review has saved more than $2 million in material and labor costs for the Trinity supercomputer at Los Alamos National Laboratory.The post Trinity Supercomputer Wiring Reconfiguration Saves Millions appeared first on insideHPC.
|
by Rich Brueckner on (#1QJNQ)
In this Intel Chip Chat Podcast, Nidhi Chappell, the Director of Machine Learning Strategy at Intel discusses the company's planned acquisition of Nervana Systems to further drive Intel’s capabilities in the artificial intelligence (AI) field. "We will apply Nervana’s software expertise to further optimize the Intel Math Kernel Library and its integration into industry standard frameworks. Nervana’s Engine and silicon expertise will advance Intel’s AI portfolio and enhance the deep learning performance and TCO of our Intel Xeon and Intel Xeon Phi processors."The post Podcast: Intel Steps Up to Machine Learning with Nervana Systems Acquisition appeared first on insideHPC.
|
by Rich Brueckner on (#1QJGY)
Peter Ungaro presented this talk at the 2016 Blue Waters Symposium. "Built by Cray, Blue Waters is one of the most powerful supercomputers in the world, and is the fastest supercomputer on a university campus. Scientists and engineers across the country use the computing and data power of Blue Waters to tackle a wide range of challenging problems, from predicting the behavior of complex biological systems to simulating the evolution of the cosmos."The post Pete Ungaro Presents: Blue Waters & the Cray Roadmap appeared first on insideHPC.
|
by Rich Brueckner on (#1QJBA)
"I am honored to have been asked to drive NCSA’s continuing mission as a world-class, integrative center for transdisciplinary convergent research, education, and innovation," said Gropp. "Embracing advanced computing and domain collaborations across the University of Illinois at Urbana-Champaign campus and ensuring scientific communities have access to advanced digital resources will be at the heart of these efforts."The post Bill Gropp Named Acting Director of NCSA appeared first on insideHPC.
|
by Rich Brueckner on (#1QJ7F)
Researchers at the University of Oxford have achieved a quantum logic gate with record-breaking 99.9% precision, reaching the benchmark required theoretically to build a quantum computer. "An analogy from conventional computing hardware would be that we have finally worked out how to build a transistor with good enough performance to make logic circuits, but the technology for wiring thousands of those transistors together to build an electronic computer is still in its infancy."The post University of Oxford Develops Logic Gate for Quantum Computing appeared first on insideHPC.
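A quick, illustrative calculation (ours, not from the article) shows why 99.9% precision is such an important benchmark: without error correction, the success probability of a quantum circuit decays exponentially with gate count, so gate fidelity sets a hard ceiling on usable circuit depth.

```python
# Illustrative arithmetic, not from the article: if each gate succeeds
# with probability p, an uncorrected circuit of n gates succeeds with
# probability p**n. At 99.9% per-gate fidelity, a 1,000-gate circuit
# still fails most of the time, which is why reaching the theoretical
# fault-tolerance threshold matters.

def circuit_success(p_gate: float, n_gates: int) -> float:
    """Probability that every one of n_gates gates succeeds."""
    return p_gate ** n_gates

for n in (10, 100, 1000):
    print(f"{n:5d} gates at 99.9% fidelity -> {circuit_success(0.999, n):.3f}")
```

The exponential falloff (roughly 0.990, 0.905, and 0.368 for 10, 100, and 1,000 gates) is the reason the field pairs high-fidelity gates with quantum error correction rather than relying on raw precision alone.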
|
by Rich Brueckner on (#1QFJY)
"High performance computing has transformed how science and engineering research is conducted. Answering a question in 30 minutes that used to take 6 months can quickly change the way one asks questions. Large computing facilities provide access to some of the world’s largest computing, data, and network resources in the world. Indeed, the DOE complex has the highest concentration of supercomputing capability in the world. However, by nature of their existence, making use of the largest computers in the world can be a challenging and unique task. This talk will discuss how supercomputers are unique and explain how that impacts their use."The post Video: How the HPC Environment is Different from the Desktop (and Why) appeared first on insideHPC.
|
by Rich Brueckner on (#1QAX1)
In this podcast, the Radio Free HPC team looks at HPE's pending acquisition of SGI. "Will the acquisition be good for SGI and HP customers? Our RFHPC team is in unprecedented agreement that indeed it will. The key, however, to HPE's success will be keeping the SGI people. Rich thinks this acquisition will potentially give HPE the engineering talent it needs to compete with Cray at the high end of the market."The post Radio Free HPC Looks at HPE’s Pending Acquisition of SGI appeared first on insideHPC.
|
by Rich Brueckner on (#1QD34)
Nikkei in Japan writes that the Post K supercomputer is facing 1-2 year delay for deployment as part of the Flagship2020 project. Originally targeted for completion in 2020, the ARM-based Post K supercomputer has a performance target of being 100 times faster than the original K computer within a power envelope that will only be 3-4 times that of its predecessor. Nikkei cites semiconductor development issues as the reason for the project delay.The post Post K Supercomputer Delayed in Japan appeared first on insideHPC.
|
by Rich Brueckner on (#1Q9S4)
"Between 2011 and 2016, eight projects, with a total budget of more than €50 Million, were selected for this first push in the direction of the next- generation supercomputer: CRESTA, DEEP and DEEP-ER, EPiGRAM, EXA2CT, Mont- Blanc (I + II) and Numexas. The challenges they addressed in their projects were manifold: innovative approaches to algorithm and application development, system software, energy efficiency, tools and hardware design took centre stage."The post New Report Looks at European Exascale Projects appeared first on insideHPC.
|
by Rich Brueckner on (#1Q9P5)
Ed Seidel from NCSA presented this talk at The Digital Future conference in Berlin. "The National Center for Supercomputing Applications (NCSA) is a hub of transdisciplinary research and digital scholarship where University of Illinois faculty, staff, and students, and collaborators from around the globe, unite to address research grand challenges for the benefit of science and society. NCSA is also an engine of economic impact for the state and the nation, helping companies address computing and data challenges and providing hands-on training for undergraduate and graduate students and post-docs."The post Video: The Impact of the Computing and Data Revolution on Science & Society appeared first on insideHPC.
|
by Rich Brueckner on (#1Q9MP)
Nvidia is expanding its popular GPU Technology Conference to eight cities worldwide. "We’re broadening the reach of GTC with a series of conferences in eight cities across four continents, bringing the latest industry trends to major technology centers around the globe. Beijing, Taipei, Amsterdam, Melbourne, Tokyo, Seoul, Washington, and Mumbai will all host GTCs. Each will showcase technology from NVIDIA and our partners across the fields of deep learning, autonomous driving and virtual reality. Several events in the series will also feature keynote presentations by NVIDIA CEO and co-founder Jen-Hsun Huang."The post Nvidia Expands GTC to Eight Global Events appeared first on insideHPC.
|
by Rich Brueckner on (#1Q7EY)
Today SGI announced that it has signed a definitive agreement to be acquired by Hewlett Packard Enterprise (HPE) for $7.75 per share in cash, a transaction valued at approximately $275 million, net of cash and debt. "At HPE, we are focused on empowering data-driven organizations," said Antonio Neri, executive vice president and general manager, Enterprise Group, Hewlett Packard Enterprise. "SGI's innovative technologies and services, including its best-in-class big data analytics and high performance computing solutions, complement HPE's proven data center solutions designed to create business insight and accelerate time to value for customers."The post Hewlett Packard Enterprise to Acquire SGI appeared first on insideHPC.
|
by staff on (#1Q6S5)
"Supermicro RSD is architected to dramatically improve CPU and storage utilization rates, agility and efficiency in the datacenter,†stated Charles Liang, President and CEO of Supermicro. “When combined with our leadership position in the newest technologies such as U.2 NVMe, and in upcoming fabric technologies like Red Rock Canyon and PCI-E switches, Supermicro RSD will provide datacenters with unparalleled competitive advantages, especially when implemented with the new Ruler form factor high capacity flash storage.â€The post Supermicro to Unveil RSD Rack Scale Design at IDF appeared first on insideHPC.
|
by Rich Brueckner on (#1Q6P7)
IDC has published the preliminary agenda for the next international HPC User Forum. The event will take place Sept. 29-30 in Oxford, UK.The post Agenda Posted for HPC User Forum in Oxford appeared first on insideHPC.
|
by staff on (#1Q6KT)
Today One Stop Systems (OSS) introduced a pair of high-speed networked storage appliances that support high-performance, shared storage services. "The OSS approach optimizes the hardware for the environment and optimizes the software for the application in the Flash Storage Array for Networks product line (FSAn). This hardware and software optimization in the FSAn product line provides the best ROI in any environment by minimizing hardware and license costs through advance array-level optimizations while maximizing the utilization of the flash array through VSI and VDI application support."The post OSS Introduces Flash Appliances appeared first on insideHPC.
|
by Rich Brueckner on (#1Q6D4)
The flagship supercomputer at the Swiss National Supercomputing Centre (CSCS), Piz Daint, named after a mountain in the Alps, currently delivers 7.8 petaflops of compute performance, or 7.8 quadrillion mathematical calculations per second. A recently announced upgrade will double its peak performance, thanks to a refresh using the latest Intel Xeon CPUs and 4,500 Nvidia Tesla P100 GPUs.The post Creating Balance in HPC on the Piz Daint Supercomputer appeared first on insideHPC.
|
by MichaelS on (#1Q6BJ)
"High performance systems now typically a host processor and a coprocessor. The role of the coprocessor is to provide the developer and the user the ability to significantly speed up simulations if the algorithm that is used can run with a high degree of parallelization and can take advantage of an SIMD architecture. The Intel Xeon Phi coprocessor is an example of a coprocessor that is used in many HPC systems today."The post Intel Xeon Phi Coprocessor Architecture appeared first on insideHPC.
|
by Rich Brueckner on (#1Q39P)
NCI in Australia has issued its Call for Participation for the Down-Under version of the 2016 Lustre User Group. The event will be held Sept. 7-8 on the campus of The Australian National University in Canberra, ACT Australia. "LUG 2016 will be a dynamic two day workshop that will explore improvements in the performance and flexibility of the Lustre file system for supporting diverse workloads. This will be a great opportunity for the Lustre community to discuss the challenges associated with enhancing Lustre for diverse applications, the technological advances necessary, and the associated ecosystem."The post Call for Participation: Lustre User Group at NCI in Australia appeared first on insideHPC.
|
by Rich Brueckner on (#1Q2X1)
In this video, Dan Stanzione from TACC describes how the Stampede II supercomputer will drive computational science. "Announced in June, a $30 million NSF award to the Texas Advanced Computing Center will be used to acquire and deploy a new large scale supercomputing system, Stampede II, as a strategic national resource to provide high-performance computing capabilities for thousands of researchers across the U.S. This award builds on technology and expertise from the Stampede system first funded by NSF in 2011 and will deliver a peak performance of up to 18 Petaflops, over twice the overall system performance of the current Stampede system."The post Video: Stampede II Supercomputer to Advance Computational Science at TACC appeared first on insideHPC.
|
by Rich Brueckner on (#1Q2SC)
Today One Stop Systems announced the 4U Flash Storage Array with Mangstor MX6300 NVMe SSDs. OSS' FSAe-4 can accommodate 32 of the MX6300 providing up to 172TB of shared Flash storage. The FSAe-4 is a fully redundant, hot serviceable configuration with 4 independent 1U servers attached to the PCIe expansion chassis. The expansion system can support Ethernet (RoCE) or Infiniband fabrics and network speeds up to 100Gb/s.The post Mangstor MX6300 NVMe SSDs Power One Stop Systems FSAe-4 Flash Storage Array appeared first on insideHPC.
|
by Rich Brueckner on (#1Q2R3)
Today Intel announced plans to acquire startup Nervana Systems as part of an effort to bolster the company's artificial intelligence capabilities. "Nervana has a fully-optimized software and hardware stack for deep learning," said Intel's Diane Bryant in a blog post. "Their IP and expertise in accelerating deep learning algorithms will expand Intel’s capabilities in the field of AI. We will apply Nervana’s software expertise to further optimize the Intel Math Kernel Library and its integration into industry standard frameworks."The post Intel to Bolster Machine Learning with Nervana Acquisition appeared first on insideHPC.
|
by MichaelS on (#1Q2KW)
The recent introduction of new high end processors from Intel combined with accelerator technologies such as NVIDIA Tesla GPUs and Intel Xeon Phi provide the raw ‘industry standard’ materials to cobble together a test platform suitable for small research projects and development. When combined with open source toolkits some meaningful results can be achieved, but wide scale enterprise deployment in production environments raises the infrastructure, software and support requirements to a completely different level.The post Components For Deep Learning appeared first on insideHPC.
|
by Rich Brueckner on (#1PZE7)
"The ExaFlash Platform is an historic achievement that will reshape the storage and data center industries," said Thomas Isakovich, CEO and Founder of Nimbus Data. "It offers unprecedented scale (from terabytes to exabytes), record-smashing efficiency (95% lower power and 50x greater density than existing all-flash arrays), and a breakthrough price point (a fraction of the cost of existing all-flash arrays). ExaFlash brings the all-flash data center dream to reality and will help empower humankind’s innovation for decades to come."The post Nimbus Data Rolls Out ExaFlash Storage Platform appeared first on insideHPC.
|
by Rich Brueckner on (#1PZ60)
"We’ve seen the rapid evolution of SSDs and have been contributing to the NVMe over Fabrics standard and community drivers,†said Michael Kagan, CTO at Mellanox Technologies. “Because faster storage requires faster networks, we designed the highest-speeds and most intelligent offloads into both our ConnectX-5 and BlueField families. This lets us connect many SSDs directly to the network at full speed, without the need to dedicate many CPU cores to managing data movement, and we provide a complete end-to-end networking solution with the highest-performing 25, 50, and 100GbE switches and cables as well.â€The post New Mellanox Networking Solutions Accelerate NVMe Over Fabrics appeared first on insideHPC.
|
by staff on (#1PZ42)
Today the Ethernet Alliance unveiled the agenda for its 2016 Technology Exploration Forum (TEF 2016). At the center of the day’s agenda is Ethernet’s quickening journey through its next decade of continuous technology evolution and growth as the marketplace continues to change. TEF 2016: The Road to Ethernet 2026 is scheduled for September 29, 2016, at the Santa Clara County Convention Center, Santa Clara, Calif.The post Ethernet Alliance Technology Forum Looks to 2026 appeared first on insideHPC.
|
by staff on (#1PZ2K)
Today Seagate announced two new flash innovations that extend the limits of storage computing performance in enterprise data centers to unprecedented levels. The new products include a 60 terabyte SAS solid-state-drive — the largest SSD ever demonstrated — and the 8TB Nytro XP7200 NVMe SSD. These two new products represent the high performance end of Seagate’s Enterprise portfolio – a complete ecosystem of HDD, SSD and storage system products designed to help customers manage the deluge of data they face and move the right data where it’s needed fast to meet rapidly evolving business priorities and market demands.The post Seagate Unveils 60 Terabyte SSD appeared first on insideHPC.
|
by staff on (#1PZ0W)
"Fujitsu Laboratories has newly developed parallelization technology to efficiently share data between machines, and applied it to Caffe, an open source deep learning framework widely used around the world. Fujitsu Laboratories evaluated the technology on AlexNet, where it was confirmed to have achieved learning speeds with 16 and 64 GPUs that are 14.7 and 27 times faster, respectively, than a single GPU. These are the world's fastest processing speeds(2), representing an improvement in learning speeds of 46% for 16 GPUs and 71% for 64 GPUs."The post Fujitsu Develops High-Speed Software for Deep Learning appeared first on insideHPC.
|
by staff on (#1PYR0)
Advancements in video technology have slowly pushed applications like video editing, video rendering and video storage into the High Performance Computing world. There are many different video editing programs that can cut, trim, re-sequence, and add sound, transitions and special effects to video. But with the introduction of 4K/8K video, a simple laptop isn’t powerful enough on its own anymore, especially for online editing.The post High Performance 4K Video Storage, Editing and Rendering appeared first on insideHPC.
|
by staff on (#1PWB8)
Today One Stop Systems introduced the Magma StorageBox 1000 PCIe expansion system. Targeted for HPC applications, the SBB1000 provides up to 25.6TB of NVMe SSD direct attached storage through eight 2.5" drives to any standard server.The post OSS Rolls Out Magma StorageBox 1000 PCIe Expansion Chassis appeared first on insideHPC.
|
by staff on (#1PW3J)
Today HPC cloud provider Nimbix announced a significant increase in their presence in the machine learning market space as more customers are using their JARVICE platform to help address the need for an easier, more cost efficient way of working with machine learning. "The Nimbix Cloud was a great choice for our research tasks in conversational AI. They are one of the first cloud services to provide NVIDIA Tesla K80 GPUs that were essential for computing neural networks that are implemented as part of Luka's AI," said Phil Dudchuck, Co-Founder at Luka.ai.The post Nimbix Speeds Cloud-based Machine Learning appeared first on insideHPC.
|
by Rich Brueckner on (#1PW0N)
Today Netlist announced the first public demonstration of its HybriDIMM Storage Class Memory (SCM) product at the upcoming Flash Memory Summit. Using an industry standard DDR4 LRDIMM interface, HybriDIMM is the first SCM product to operate in current Intel x86 servers without BIOS and hardware changes, and the first unified DRAM-NAND solution that scales memory to terabyte storage capacities and accelerates storage to nanosecond memory speeds.The post Netlist HybriDIMM Memory Unifies DRAM-NAND appeared first on insideHPC.
|
by Rich Brueckner on (#1PVZ7)
Is Machine Learning more of a Data Movement problem than a Processing problem? In this podcast, the Radio Free HPC team looks at use cases for Machine Learning where data locality is critical for performance. "Most of the Machine Learning stories we hear involve a central data repository. Henry says he is not hearing enough about how Machine Learning is going to deal with the problem of massive data streams from things like sensors. Such data, he contends, will have to be processed at the source."The post Radio Free HPC Looks at Machine Learning and Data Locality appeared first on insideHPC.
|
by staff on (#1PVSE)
Today E8 Storage launched the storage industry’s first-ever centralized, highly available rack scale flash appliance based on Non-Volatile Memory express (NVMe) drives. The E8-D24 is the first array that combines the high performance of NVMe drives, the high availability and reliability of centralized storage, and the high scalability of scale-out solutions.The post E8 Storage Launches NVMe Rack Scale Flash Appliance appeared first on insideHPC.
|
by staff on (#1PVQ4)
Today the Green500 released their listing of the world's most energy efficient supercomputers. "Japan’s research institution RIKEN once again captured the top spot with its Shoubu supercomputer. With a rating of 6673.84 MFLOPS/Watt, Shoubu edged out another RIKEN system, Satsuki, the number 2 system that delivered 6195.22 MFLOPS/Watt. Both are “ZettaScaler” supercomputers, employing Intel Xeon processors and PEZY-SCnp manycore accelerators."The post Riken’s Shoubu Supercomputer Leads Green500 List appeared first on insideHPC.
|
by Rich Brueckner on (#1PS7D)
AMD’s motivation for developing these open-source GPU tools is based on an opportunity to remove the added complexity of proprietary programming frameworks to GPU application development. "If successful, these tools – or similar versions – could help to democratize GPU application development, removing the need for proprietary frameworks, which then makes the HPC accelerator market much more competitive for smaller players. For example, HPC users could potentially use these tools to convert CUDA code into C++ and then run it on an Intel Xeon co-processor."The post AMD Boltzmann Initiative Promotes HPC Freedom of Choice appeared first on insideHPC.
|
by Rich Brueckner on (#1PS40)
In this video from The Digital Future conference in Berlin, Leslie Greengard from the Simons Center for Data Analysis presents: Modeling Physical Systems in Complex Geometry. Greengard is an American mathematician, physician and computer scientist. He is co-inventor of the fast multipole method, recognized as one of the top-ten algorithms of computing.The post Leslie Greengard Presents: Modeling Physical Systems in Complex Geometry appeared first on insideHPC.
|