by Rich Brueckner on (#2939Q)
In this podcast, the Radio Free HPC team looks at D-Wave's new open source software for quantum computing. The software is available on GitHub along with a whitepaper written by Cray Research alums Mike Booth and Steve Reinhardt. "The new tool, qbsolv, enables developers to build higher-level tools and applications leveraging the quantum computing power of systems provided by D-Wave, without the need to understand the complex physics of quantum computers." The post Radio Free HPC Looks at New Open Source Software for Quantum Computing appeared first on insideHPC.
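The article does not show qbsolv's interface, but the problem class it targets, quadratic unconstrained binary optimization (QUBO), is easy to illustrate. Below is a hypothetical brute-force Python sketch of QUBO minimization; qbsolv's value lies in decomposing instances far too large for this approach so they can run on D-Wave hardware. The `solve_qubo` helper and the tiny `Q` matrix are illustrative, not part of qbsolv.

```python
from itertools import product

def solve_qubo(Q, n):
    """Exhaustively minimize a QUBO: find binary x minimizing
    sum over (i, j) of Q[i, j] * x[i] * x[j].
    Only feasible for tiny n; qbsolv exists for much larger problems."""
    best_x, best_e = None, float("inf")
    for bits in product((0, 1), repeat=n):
        e = sum(coeff * bits[i] * bits[j] for (i, j), coeff in Q.items())
        if e < best_e:
            best_x, best_e = bits, e
    return best_x, best_e

# Tiny example: each variable alone lowers the energy by 1,
# but turning both on incurs a +2 coupling penalty.
Q = {(0, 0): -1, (1, 1): -1, (0, 1): 2}
x, e = solve_qubo(Q, 2)   # optimal energy is -1, with exactly one bit set
```

The same dictionary-of-couplings formulation is what higher-level tools build on: express a domain problem as a `Q` matrix, then hand it to a solver.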
|
High-Performance Computing News Analysis | insideHPC
Link | https://insidehpc.com/ |
Feed | http://insidehpc.com/feed/ |
Updated | 2024-11-25 16:30 |
by Douglas Eadline on (#292TH)
"The move away from the traditional single processor/memory design has fostered new programming paradigms that address multiple processors (cores). Existing single core applications need to be modified to use extra processors (and accelerators). Unfortunately there is no single portable and efficient programming solution that addresses both scale-up and scale-out systems." The post Scaling Software for In-Memory Computing appeared first on insideHPC.
|
by Rich Brueckner on (#28ZRM)
In this visualization, ocean temperatures and salinity are tracked over the course of a year. Based on data from global climate models, these visualizations aid our understanding of the physical processes that create the Earth's climate, and inform predictions about future changes in climate. "The water's saltiness, or salinity, plays a significant role in this ocean heat engine, Harrison said. Salt makes the water denser, helping it to sink. As the atmosphere warms due to global climate change, melting ice sheets have the potential to release tremendous amounts of fresh water into the oceans." The post Video: Tracing Ocean Salinity for Global Climate Models appeared first on insideHPC.
|
by Rich Brueckner on (#28ZP9)
Registration is now open for the 2017 Rice Oil & Gas HPC Conference. The event takes place March 15-16 in Houston, Texas. "Join us for the 10th anniversary of the Rice Oil & Gas HPC Conference. OG HPC is the premier meeting place for networking and discussion focused on computing and information technology challenges and needs in the oil and gas industry." The post Registration Opens for Rice Oil & Gas HPC Conference appeared first on insideHPC.
|
by Rich Brueckner on (#28WR6)
"The University of Colorado, Boulder supports researchers' large-scale computational needs with their newly optimized high performance computing system, Summit. Summit is designed with advanced computation, network, and storage architectures to deliver accelerated results for a large range of HPC and big data applications. Summit is built on Dell EMC PowerEdge Servers, Intel Omni-Path Architecture Fabric and Intel Xeon Phi Knights Landing processors." The post Dell EMC Powers Summit Supercomputer at CU Boulder appeared first on insideHPC.
|
by Rich Brueckner on (#28WNQ)
Bennett Aerospace has an opening for a highly motivated Research Scientist and Computational Chemist for the Army Corps of Engineers (USACE), Engineer Research and Development Center (ERDC), Environmental Laboratory (EL), Environmental Processes Branch (EP-P), Environmental Genomics and Systems Biology Team (EGSB) in execution of its mission. The candidate will be a Bennett Aerospace employee performing services for ERDC in Vicksburg, MS. The post Job of the Week: Computational Chemist at Bennett Aerospace appeared first on insideHPC.
|
by Rich Brueckner on (#28S7Z)
In this video, researchers at NASA Ames explore the aerodynamics of a popular example of a small, battery-powered drone, a modified DJI Phantom 3 quadcopter. "The Phantom relies on four whirring rotors to generate enough thrust to lift it and any payload it's carrying off the ground. Simulations revealed the complex motions of air due to interactions between the vehicle's rotors and X-shaped frame during flight. As an experiment, researchers added four more rotors to the vehicle to study the effect on the quadcopter's performance. This configuration produced a nearly twofold increase in the amount of thrust." The post Supercomputing Drone Aerodynamics appeared first on insideHPC.
|
by Rich Brueckner on (#28S3M)
In this AI Podcast, Lynn Richards, president and CEO of the Congress for New Urbanism and Charles Marohn, president and co-founder of Strong Towns, describe how AI will reshape our cities. "AI will do much more than automate driving. It promises to help create more liveable cities. And help put expensive infrastructure where we need it most." The post Podcast: How Deep Learning Will Reshape Our Cities appeared first on insideHPC.
|
by Rich Brueckner on (#28S1V)
"STFC Hartree Centre needed a powerful, flexible server system that could drive research in energy efficiency as well as economic impact for its clients. By extending its System x platform with NeXtScale System, Hartree Centre can now move to exascale computing, support sustainable energy use and help its clients gain a competitive advantage." Sophisticated data processes are now integral to all areas of research and business. Whether you are new to discovering the potential of supercomputing, data analytics and cognitive techniques, or are already using them, Hartree's easy-to-use portfolio of advanced computing facilities, software tools and know-how can help you create better research outcomes that are also faster and cheaper than traditional research methods. The post Video: Lenovo Powers Manufacturing Innovation at Hartree Centre appeared first on insideHPC.
|
by staff on (#28RT5)
"Intel recently announced the first product release of its High Performance Python distribution powered by Anaconda. The product provides a prebuilt, easy-to-install Intel Architecture (IA) optimized Python for numerical and scientific computing, data analytics, HPC and more. It's a free, drop-in replacement for existing Python distributions that requires no changes to Python code. Yet benchmarks show big Intel Xeon processor performance improvements and even bigger Intel Xeon Phi processor performance improvements." The post IA Optimized Python Rocks in Production appeared first on insideHPC.
|
by staff on (#28MYP)
"Bridges' new nodes add large-memory and GPU resources that enable researchers who have never used high-performance computing to easily scale their applications to tackle much larger analyses," says Nick Nystrom, principal investigator in the Bridges project and Senior Director of Research at PSC. "Our goal with Bridges is to transform researchers' thinking from 'What can I do within my local computing environment?' to 'What problems do I really want to solve?'" The post Upgraded Bridges Supercomputer Now in Production appeared first on insideHPC.
|
by staff on (#28MW9)
"The DS8880 All-Flash family is targeted at users that have experienced poor storage performance due to latency, low server utilization, high energy consumption, low system availability and high operating costs. These same users have been listening, learning and understand the data value proposition of being a cognitive business," said Ed Walsh, general manager, IBM Storage and Software Defined Infrastructure. "In the coming year we expect an awakening by companies to the opportunity that cognitive applications, and hybrid cloud enablement, bring them in a data driven marketplace." The post IBM Rolls Out All-flash Storage for Cognitive Workloads appeared first on insideHPC.
|
by staff on (#28MTP)
"Just as a software ecosystem helped to create the immense computing industry that exists today, building a quantum computing industry will require software accessible to the developer community," said Bo Ewald, president, D-Wave International Inc. "D-Wave is building a set of software tools that will allow developers to use their subject-matter expertise to build tools and applications that are relevant to their business or mission. By making our tools open source, we expand the community of people working to solve meaningful problems using quantum computers." The post D-Wave Releases Open Quantum Software Environment appeared first on insideHPC.
|
by Richard Friedman on (#28MP1)
While HPC developers worry about squeezing out the ultimate performance while running an application on dedicated cores, Intel TBB tackles a problem that HPC users never worry about: how can you make parallelism work well when you share the cores that you run upon? This is more of a concern if you're running that application on a many-core laptop or workstation than a dedicated supercomputer, because who knows what else will be running on those shared cores. Intel Threading Building Blocks reduces the delays caused by other applications by utilizing a task-stealing scheduler. This is the real magic of TBB. The post A Decade of Multicore Parallelism with Intel TBB appeared first on insideHPC.
|
by Rich Brueckner on (#28MF4)
Today Cray announced the appointment of Stathis Papaefstathiou to the position of senior vice president of research and development. Papaefstathiou will be responsible for leading the software and hardware engineering efforts for all of Cray's research and development projects. "At our core, we are an engineering company, and we're excited to have Stathis' impressive and diverse technical expertise in this key leadership position at Cray," said Peter Ungaro, president and CEO of Cray. "Leveraging the growing convergence of supercomputing and big data, Stathis will help us continue to build unique and innovative products for our broadening customer base." The post Cray Appoints Stathis Papaefstathiou as SVP of Research and Development appeared first on insideHPC.
|
by Peter ffoulkes on (#28MBR)
"With three primary network technology options widely available, each with advantages and disadvantages in specific workload scenarios, the choice of solution partner that can deliver the full range of choices together with the expertise and support to match technology solution to business requirement becomes paramount." The post Selecting HPC Network Technology appeared first on insideHPC.
|
by staff on (#28BX3)
In this week's Sponsored Post, Nicolas Dube of Hewlett Packard Enterprise outlines the future of HPC and the role and challenges of exascale computing in this evolution. The HPE approach to exascale is geared to breaking the dependencies that come with outdated protocols. Exascale computing will allow users to process data, run systems, and solve problems at a totally new scale, which will become increasingly important as the world's problems grow ever larger and more complex. The post Exascale Computing: A Race to the Future of HPC appeared first on insideHPC.
|
by staff on (#28J2C)
Each year the OpenFabrics Alliance (OFA) hosts an annual workshop devoted to advancing the state of the art in networking. "One secret to the enduring success of the workshop is the OFA's emphasis on hosting an interactive, community-driven event. To continue that trend, we are once again reaching out to the community to create a rich program that addresses topics important to the networking industry. We're looking for proposals for workshop sessions." The post OpenFabrics Alliance Workshop 2017 – Call for Sessions Open appeared first on insideHPC.
|
by Rich Brueckner on (#28GXC)
In this video, Jonathan Allen from LLNL describes how Lawrence Livermore's supercomputers are playing a crucial role in advancing cancer research and treatment. "A historic partnership between the Department of Energy (DOE) and the National Cancer Institute (NCI) is applying the formidable computing resources at Livermore and other DOE national laboratories to advance cancer research and treatment. Announced in late 2015, the effort will help researchers and physicians better understand the complexity of cancer, choose the best treatment options for every patient, and reveal possible patterns hidden in vast patient and experimental data sets." The post Video: Livermore HPC Takes Aim at Cancer appeared first on insideHPC.
|
by Rich Brueckner on (#28GNK)
Oak Ridge National Laboratory reports that its team of experts is playing leading roles in the DOE's recently established Exascale Computing Project (ECP), a multi-lab initiative responsible for developing the strategy, aligning the resources, and conducting the R&D necessary to achieve the nation's imperative of delivering exascale computing by 2021. "ECP's mission is to ensure all the necessary pieces are in place for the first exascale systems – an ecosystem that includes applications, software stack, architecture, advanced system engineering and hardware components – to enable fully functional, capable exascale computing environments critical to scientific discovery, national security, and a strong U.S. economy." The post Oak Ridge Plays key role in Exascale Computing Project appeared first on insideHPC.
|
by Rich Brueckner on (#28GKD)
"The PRACE Summer of HPC is an outreach and training program that offers summer placements at top High Performance Computing centers across Europe to late-stage undergraduates and early-stage postgraduate students. Up to twenty top applicants from across Europe will be selected to participate. Participants will spend two months working on projects related to PRACE technical or industrial work and produce a report and a visualization or video of their results." The post Apply Now for Summer of HPC 2017 in Barcelona appeared first on insideHPC.
|
by Rich Brueckner on (#28GF6)
"For many urban questions, however, new data sources will be required with greater spatial and/or temporal resolution, driving innovation in the use of sensors in mobile devices as well as embedding intelligent sensing infrastructure in the built environment. Collectively, these data sources also hold promise to begin to integrate computational models associated with individual urban sectors such as transportation, building energy use, or climate. Catlett will discuss the work that Argonne National Laboratory and the University of Chicago are doing in partnership with the City of Chicago and other cities through the Urban Center for Computation and Data, focusing in particular on new opportunities related to embedded systems and computational modeling." The post Understanding Cities through Computation, Data Analytics, and Measurement appeared first on insideHPC.
|
by Rich Brueckner on (#28CKS)
In this video, Maurizio Davini from the University of Pisa describes how the University works with Dell EMC and Intel to test new technologies and to integrate and optimize HPC systems with Intel HPC Orchestrator software. "We believe these two companies are at the forefront of innovation in high performance computing," said University CTO Davini. "We also share a common goal of simplifying HPC to support a broader range of users." The post Intel HPC Orchestrator Powers Research at University of Pisa appeared first on insideHPC.
|
by Rich Brueckner on (#28CBS)
"Atos is determined to solve the technical challenges that arise in life sciences projects, to help scientists to focus on making breakthroughs and forget about technicalities. We know that one size doesn't fit all and that is the reason why we studied carefully The Pirbright Institute's challenges to design a customized and unique architecture. It is a pleasure for us to work with Pirbright and to contribute in some way to reduce the impact of viral diseases," says Natalia Jiménez, WW Life Sciences lead at Atos. The post Bull Atos Powers New Genomics Supercomputer at Pirbright Institute appeared first on insideHPC.
|
by Rich Brueckner on (#28C7A)
Today Mellanox announced that Spectrum Ethernet switches and ConnectX-4 100Gb/s Ethernet adapters have been selected by Baidu, the leading Chinese language Internet search provider, for Baidu's Machine Learning platforms. The need for higher data speed and most efficient data movement placed Spectrum and RDMA-enabled ConnectX-4 adapters as key components to enable world leading machine learning […] The post Mellanox Ethernet Accelerates Baidu Machine Learning appeared first on insideHPC.
|
by staff on (#28C5F)
"The recent announcement of HDR InfiniBand included the three required network elements to achieve full end-to-end implementation of the new technology: ConnectX-6 host channel adapters, Quantum switches and the LinkX family of 200Gb/s cables. The newest generations of InfiniBand bring the game changing capabilities of In-Network Computing and In-Network Memory to further enhance the new paradigm of Data-Centric data centers – for High-Performance Computing, Machine Learning, Cloud, Web2.0, Big Data, Financial Services and more – dramatically increasing network scalability and introducing new accelerations for storage platforms and data center security." The post HDR InfiniBand Technology Reshapes the World of High-Performance and Machine Learning Platforms appeared first on insideHPC.
|
by staff on (#288CV)
The Penn State Institute for CyberScience (ICS) is hosting a series of free training workshops on high-performance computing techniques. These workshops are sponsored by the Extreme Science and Engineering Discovery Environment (XSEDE). The first workshop will be 11 a.m. to 5 p.m. on Jan. 17 in 118 Wagner Building, University Park. The post Penn State Institute for CyberScience to Host HPC Workshops appeared first on insideHPC.
|
by Rich Brueckner on (#288CW)
Today the Canada Foundation for Innovation announced an award of $69,455,000 through its Major Science Initiative Fund for the Compute Canada project. This award will be used to continue the operation of the national advanced research computing platform that serves more than 10,000 researchers at universities, post-secondary institutions and research institutions across Canada. The post Compute Canada Receives Funding for National Supercomputing Platform appeared first on insideHPC.
|
by Rich Brueckner on (#28892)
The Pacific Northwest National Laboratory (PNNL) is seeking a Research Scientist for High Performance Computing in our Job of the Week. "The HPC group is seeking a Scientist to actively participate in challenging software and hardware research projects that will impact future High Performance Computing systems as well as constituent technologies. In particular, the researcher will be involved in research into data analytics, large-scale computation, programming models, and introspective run-time systems. The successful researcher will join a vibrant research group whose core capabilities are in Modeling and Simulation, System Software and Applications, and Advanced Architectures." The post Job of the Week: Research Scientist for HPC at PNNL appeared first on insideHPC.
|
by Rich Brueckner on (#2883G)
In this podcast, the Radio Free HPC team shares the things we're looking forward to in 2017. "From the iPhone 8 to storage innovations and camera technologies in the fight against crime, it is looking like 2017 is going to be a great year. It will also be back to the future with the return of specialized processing devices for specific application workloads and the continuing technology wars between processors and GPUs and Omni-Path vs InfiniBand." The post Radio Free HPC Looks Forward to New Technologies in 2017 appeared first on insideHPC.
|
by Douglas Eadline on (#287S5)
The two methods of scaling processors are based on the method used to scale the memory architecture and are called scale-out and scale-up. Beyond the basic processor/memory architecture, accelerators and parallel file systems are also used to provide scalable performance. "High performance scale-up designs for scaling hardware require that programs have concurrent sections that can be distributed over multiple processors. Unlike the distributed memory systems described below, there is no need to copy data from system to system because all the memory is globally usable by all processors." The post Scaling Hardware for In-Memory Computing appeared first on insideHPC.
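The scale-up/scale-out distinction can be sketched in plain Python: threads in one process share a single address space, so every worker reads the same array in place, while a scale-out job must hand each worker its own partition and combine partial results, with the combine step standing in for MPI messages. This is an illustrative sketch under those assumptions, not code from the article.

```python
from concurrent.futures import ThreadPoolExecutor

data = list(range(100_000))

# Scale-up style: all workers read the SAME in-memory list; nothing is
# copied, because threads share one globally usable memory.
def shared_sum(lo, hi):
    return sum(data[i] for i in range(lo, hi))

with ThreadPoolExecutor(max_workers=4) as pool:
    chunks = [(i * 25_000, (i + 1) * 25_000) for i in range(4)]
    total_shared = sum(pool.map(lambda c: shared_sum(*c), chunks))

# Scale-out style: each "node" gets an explicit copy of its partition and
# returns a partial result; summing the partials stands in for MPI messages.
def node_sum(partition):   # partition is a private copy, as on a cluster node
    return sum(partition)

partitions = [data[i * 25_000:(i + 1) * 25_000] for i in range(4)]
total_distributed = sum(node_sum(p) for p in partitions)
```

Both paths compute the same total; the difference is whether data had to be copied and partial results exchanged, which is exactly the cost scale-out programming models manage.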
|
by staff on (#284N4)
In this special guest feature from Scientific Computing World, Cray's Barry Bolding gives some predictions for the supercomputing industry in 2017. "2016 saw the introduction or announcement of a number of new and innovative processor technologies from leaders in the field such as Intel, Nvidia, ARM, AMD, and even from China. In 2017 we will continue to see capabilities evolve, but as the demand for performance improvements continues unabated and CMOS struggles to drive performance improvements we'll see processors becoming more and more power hungry." The post Barry Bolding from Cray Shares Four Predictions for HPC in 2017 appeared first on insideHPC.
|
by Rich Brueckner on (#284HY)
In this episode of the This Week in Machine Learning podcast, Xavier Amatriain from Quora discusses the process of engineering practical machine learning systems. Amatriain is a former machine learning researcher who went on to lead the recommender systems team at Netflix, and is now the vice president of engineering at Quora, the Q&A site. "What the heck is a multi-arm bandit and how can it help us?" The post Podcast: Engineering Practical Machine Learning Systems appeared first on insideHPC.
|
by staff on (#281NR)
A new study led by a research scientist at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) highlights a literally shady practice in plant science that has in some cases underestimated plants' rate of growth and photosynthesis, among other traits. "More standardized fieldwork, in parallel with new computational tools and theoretical work, will contribute to better global plant models," Keenan said. The post Supercomputing Sheds Light on Leaf Study appeared first on insideHPC.
|
by Rich Brueckner on (#281K8)
Dr. Maria Klawe gave this Invited Talk at SC16. "Like many other computing research areas, women and other minority groups are significantly under-represented in supercomputing. This talk discusses successful strategies for significantly increasing the number of women and students of color majoring in computer science and explores how these strategies might be applied to supercomputing." The post Video: Diversity and Inclusion in Supercomputing appeared first on insideHPC.
|
by Rich Brueckner on (#27XV4)
In this video, a new NASA supercomputer simulation of the planet and debris disk around the nearby star Beta Pictoris reveals that the planet's motion drives spiral waves throughout the disk, a phenomenon that greatly increases collisions among the orbiting debris. Patterns in the collisions and the resulting dust appear to account for many observed features that previous research has been unable to fully explain. The post Video: How an Exoplanet Makes Waves appeared first on insideHPC.
|
by Rich Brueckner on (#27XQM)
A new site developed by Tin H compares the HPC virtualization capabilities of Docker, Singularity, Shifter, and Univa Grid Engine Container Edition. "They bring the benefits of containers to the HPC world and some provide very similar features. The subtleties are in their implementation approach. MPI may be the place with the biggest difference." The post New Site Compares Docker, Singularity, Shifter, and Univa Grid Engine Container Edition appeared first on insideHPC.
|
by Rich Brueckner on (#27XEJ)
Thomas Schulthess from CSCS gave this Invited Talk at SC16. "Experience with today's platforms shows that there can be an order of magnitude difference in performance within a given class of numerical methods – depending only on choice of architecture and implementation. This raises the question of what our baseline is, over which the performance improvements of Exascale systems will be measured. Furthermore, how close will these Exascale systems bring us to delivering on application goals, such as kilometer-scale global climate simulations or high-throughput quantum simulations for materials design? We will discuss specific examples from meteorology and materials science." The post Reflecting on the Goal and Baseline for Exascale Computing appeared first on insideHPC.
|
by Rich Brueckner on (#27XAH)
In this presentation from Nvidia, top AI experts from the world's most influential companies weigh in on predicted advances for AI in 2017. "In 2017, intelligence will trump speed. Over the last several decades, nations have competed on speed, intent to build the world's fastest supercomputer," said Ian Buck, VP of Accelerated Computing at Nvidia. "In 2017, the race will shift. Nations of the world will compete on who has the smartest supercomputer, not solely the fastest." The post Experts Weigh in on 2017 Artificial Intelligence Predictions appeared first on insideHPC.
|
by Rich Brueckner on (#27SRV)
Are you planning for ISC 2017? The deadlines for submissions are fast approaching. The conference takes place June 18 - 22, 2017 in Frankfurt, Germany. "Participation in these sessions and programs will help enrich your experience at the conference, not to mention provide exposure for your work to some of the most discerning HPC practitioners and business people in the industry. We also want to remind you that it's the active participation of the community that helps make ISC High Performance such a worthwhile event for all involved." The post Submission Deadlines for ISC 2017 are Fast Approaching appeared first on insideHPC.
|
by staff on (#27SND)
Today AMD unveiled preliminary details of its forthcoming GPU architecture, Vega. Conceived and executed over five years, the Vega architecture enables new possibilities in PC gaming, professional design and machine intelligence that traditional GPU architectures have not been able to address effectively. "It is incredible to see GPUs being used to solve gigabyte-scale data problems in gaming to exabyte-scale data problems in machine intelligence. We designed the Vega architecture to build on this ability, with the flexibility to address the extraordinary breadth of problems GPUs will be solving not only today but also five years from now. Our high-bandwidth cache is a pivotal disruption that has the potential to impact the whole GPU market," said Raja Koduri, senior vice president and chief architect, Radeon Technologies Group, AMD. The post AMD Unveils Vega GPU Architecture with HBM Memory appeared first on insideHPC.
|
by Rich Brueckner on (#27SCP)
The University of Connecticut has partnered with Dell EMC and Intel to create a high performance computing cluster that students and faculty can use in their research. With this HPC Cluster, UConn researchers can solve problems that are computationally intensive or involve massive amounts of data in a matter of days or hours, instead of weeks. The HPC cluster operated on the Storrs campus features 6,000 CPU cores, a high-speed fabric interconnect, and a parallel file system. Since 2011, it has been used by over 500 researchers, from each of the university's schools and colleges, for over 40 million hours of scientific computation. The post Dell EMC Powers HPC at University of Connecticut appeared first on insideHPC.
|
by MichaelS on (#27S6J)
"Managing the work on each node can be referred to as domain parallelism. During the run of the application, the work assigned to each node can be generally isolated from other nodes. The node can work on its own and needs little communication with other nodes to perform the work. The tool needed for this is MPI for the developer, but applications can also take advantage of frameworks such as Hadoop and Spark (for big data analytics). Managing the work for each core or thread requires one more level of control. This type of work will typically invoke a large number of independent tasks that must then share data between the tasks." The post Programming for High Performance Processors appeared first on insideHPC.
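As a hypothetical sketch of that domain decomposition, the 1-D smoothing below splits a field into two subdomains; each can be processed independently, and the single halo cell passed to each side is the only data a real MPI rank would need from its neighbor. The `smooth_domain` helper and the data are illustrative, not from the article.

```python
def smooth_domain(chunk, left_halo, right_halo):
    """Average each cell with its neighbors inside one subdomain.
    The one-cell halos are the only data exchanged between domains."""
    padded = [left_halo] + chunk + [right_halo]
    return [(padded[i - 1] + padded[i] + padded[i + 1]) / 3.0
            for i in range(1, len(padded) - 1)]

field = [float(i) for i in range(12)]
mid = len(field) // 2

# Domain decomposition: each subdomain plus the halo values it needs.
# On a cluster, each call would run on its own node and MPI would carry
# the halo values; plain function calls stand in for that here.
left = smooth_domain(field[:mid], field[0], field[mid])
right = smooth_domain(field[mid:], field[mid - 1], field[-1])
smoothed = left + right
```

Because each subdomain depends on its neighbors only through the halo cells, the two calls can run concurrently with almost no communication, which is exactly the property that makes domain parallelism scale.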
|
by staff on (#27S2W)
A team of international scientists has found a way to make memory chips perform computing tasks, which is traditionally done by computer processors like those made by Intel and Qualcomm. This means data could now be processed in the same spot where it is stored, leading to much faster and thinner mobile devices and computers. This type of chip is one of the fastest memory modules that will soon be available commercially. The post New ReRAM Memory Can Process Data Where it Lives appeared first on insideHPC.
|
by Rich Brueckner on (#27N47)
Are you shopping for public cloud services? A new Public Cloud Services Comparison site gives a service- and feature-level mapping between the three major public clouds: Amazon Web Services, Microsoft Azure, and Google Cloud. Published by Ilyas F, a Cloud Solution Architect at Xebia Group, the Public Cloud Services Comparison is a handy reference manual to help anyone quickly learn the corresponding features and services between clouds. The post New Site Lists all Comparable Features from AWS, Azure, and Google Cloud appeared first on insideHPC.
|
by Rich Brueckner on (#27MY7)
In this a16z Podcast, Murray Shanahan, Azeem Azhar, and Tom Standage discuss the past, present, and future of A.I. as well as how it fits (or doesn't fit) with machine learning and deep learning. "Where are we now in the A.I. evolution? What players do we think will lead, if not win, the current race? And how should we think about issues such as ethics and automation of jobs without descending into obvious extremes? All this and more, including a surprise easter egg in Ex Machina shared by Shanahan, whose work influenced the movie." The post a16z Podcast Looks at Artificial Intelligence and the Space of Possible Minds appeared first on insideHPC.
|
by Rich Brueckner on (#27MQF)
"Run your Windows and Linux HPC applications using high performance A8 and A9 compute instances on Azure, and take advantage of a backend network with MPI latency under 3 microseconds and non-blocking 32 Gbps throughput. This backend network includes remote direct memory access (RDMA) technology on Windows and Linux that enables parallel applications to scale to thousands of cores. Azure provides you with high memory and HPC-class CPUs to help you get results fast. Scale up and down based upon what you need and pay only for what you use to reduce costs." The post Video: Azure High Performance Computing appeared first on insideHPC.
|
by Rich Brueckner on (#27MJQ)
Tamara Kolda from Sandia gave this Invited Talk at SC16. "Scientists are drowning in data. The scientific data produced by high-fidelity simulations and high-precision experiments are far too massive to store. For instance, a modest simulation on a 3D grid with 500 grid points per dimension, tracking 100 variables for 100 time steps, yields 5TB of data. Working with this massive data is unwieldy and it may not be retained for future analysis or comparison. Data compression is a necessity, but there are surprisingly few options available for scientific data." The post Parallel Multiway Methods for Compression of Massive Data and Other Applications appeared first on insideHPC.
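The quoted 5 TB figure checks out if one assumes 4-byte single-precision values; the precision is an assumption here, since the abstract does not state it:

```python
grid_points = 500 ** 3    # 3-D grid, 500 points per dimension
variables = 100           # variables tracked at each grid point
time_steps = 100
bytes_per_value = 4       # assumption: single precision (float32)

total_bytes = grid_points * variables * time_steps * bytes_per_value
terabytes = total_bytes / 1e12   # 5.0, matching the talk's 5 TB
```

Doubling to 8-byte double precision would already push the same "modest" run to 10 TB, which is the storage pressure motivating the compression methods in the talk.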
|
by Peter ffoulkes on (#27MAQ)
The TOP500 list is a very good proxy for how different interconnect technologies are being adopted for the most demanding workloads, which is a useful leading indicator for enterprise adoption. The essential takeaway is that the world's leading and most esoteric systems are currently dominated by vendor specific technologies. The Open Fabrics Alliance (OFA) will be increasingly important in the coming years as a forum to bring together the leading high performance interconnect vendors and technologies to deliver a unified, cross-platform, transport-independent software stack. The post HPC Networking Trends in the TOP500 appeared first on insideHPC.
|
by staff on (#27GJK)
Singapore-based publisher Asian Scientist has launched Supercomputing Asia, a new print title dedicated to tracking the latest developments in high performance computing across the region and making supercomputing accessible to the layman. "Aside from well-established supercomputing powerhouses like Japan and emerging new players like China, Asian countries like Singapore and South Korea have recognized the transformational power of supercomputers and invested accordingly. We hope that this new publication will provide a unique insight into the exciting developments in this region," said Dr. Rebecca Tan, Managing Editor of Supercomputing Asia. The post Asia's First Supercomputing Magazine Launches from Singapore appeared first on insideHPC.
|