by Rich Brueckner on (#JRFD)
In this video from the Barcelona Supercomputing Center, Big Data is presented as a key challenge for researchers studying global climate change. "Changes in the composition of the atmosphere can affect the habitability of the planet by modifying the air quality and altering long-term climate. Research in this area is devoted to the development, implementation and refinement of global and regional state-of-the-art models for short-term air quality forecasting and long-term climate predictions." The post Video: Big Data Powers Climate Research at BSC appeared first on insideHPC.
|
High-Performance Computing News Analysis | insideHPC
Link | https://insidehpc.com/ |
Feed | http://insidehpc.com/feed/ |
Updated | 2024-11-06 16:00 |
by Rich Brueckner on (#JR46)
In this video from the 2015 OLCF User Meeting, Buddy Bland from Oak Ridge presents: Present and Future Leadership Computers at OLCF. "As the home of Titan, the fastest supercomputer in the USA, OLCF has an exciting future ahead with the 2017 deployment of the Summit supercomputer. Summit will deliver more than five times the computational performance of Titan’s 18,688 nodes, using only approximately 3,400 nodes when it arrives in 2017." The post Video: Present and Future Leadership Computers at OLCF appeared first on insideHPC.
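The quoted figures imply a striking per-node jump. A quick back-of-envelope calculation, using only the approximate node counts and the "more than five times" figure from the announcement above, makes that concrete:

```python
# Back-of-envelope: per-node speedup implied by the quoted figures.
titan_nodes = 18688   # Titan's node count, quoted above
summit_nodes = 3400   # approximate Summit node count, quoted above
perf_ratio = 5.0      # "more than five times" Titan's performance

per_node_gain = perf_ratio * titan_nodes / summit_nodes
print(f"Each Summit node ~ {per_node_gain:.0f}x a Titan node")  # ~27x
```

So each Summit node would need to do roughly the work of 27 Titan nodes, which is why the announcement leads with the node-count reduction rather than raw flops.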
|
by staff on (#JR11)
Today GENCI announced a collaboration with IBM aimed at speeding up the path to exascale computing. "The collaboration, planned to run for at least 18 months, focuses on readying complex scientific applications for systems under development expected to achieve more than 100 petaflops, a solid step forward on the path to exascale. Working closely with supercomputing experts from IBM, GENCI will have access to some of the most advanced high performance computing technologies stemming from the rapidly expanding OpenPOWER ecosystem." The post GENCI to Collaborate with IBM in Race to Exascale appeared first on insideHPC.
|
by MichaelS on (#JQYC)
Applications that use 3D Finite Difference (3DFD) calculations are numerically intensive and can be optimized quite heavily to take advantage of the accelerators available in today's systems. The performance of a numerical stencil implementation can and should be optimized. Choices made when designing and implementing algorithms affect the Arithmetic Intensity (AI), a measure of how efficient an implementation is, obtained by comparing its floating-point operations to its memory accesses. The post Arithmetic Intensity of Stencil Operations appeared first on insideHPC.
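As a rough sketch of the idea, arithmetic intensity is just a flops-to-bytes ratio. The per-point counts below are illustrative assumptions for a 7-point stencil in double precision, not figures from the article:

```python
# Arithmetic Intensity (AI) = floating-point operations / bytes moved.
# Counts are illustrative for a 7-point 3D stencil; real values depend
# on the implementation, vectorization, and cache behavior.
flops_per_point = 13   # assumed: 7 multiplies + 6 adds per grid point
bytes_per_point = 16   # assumed: one 8-byte load + one 8-byte store,
                       # with neighbor loads hitting in cache

ai = flops_per_point / bytes_per_point
print(f"AI = {ai:.2f} flops/byte")  # low AI -> likely memory-bound
```

An AI well below the machine's flops-per-byte balance point suggests the stencil is memory-bound, which is exactly the design trade-off the article discusses.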
|
by Rich Brueckner on (#JQ9R)
"Ultimately, we must accept that research is best served through using a combination of open-source and proprietary software, through developing new software and through the use of existing software. This approach allows the research community to focus on what is optimal for scientific discovery: the one point on which everyone in this debate agrees." The post The Price of Open-source Software – a Joint Response appeared first on insideHPC.
|
by Rich Brueckner on (#JN7N)
In what has to be one of the most beautiful simulations I've ever seen, this video from the European Space Agency shows the simulated interaction of solar winds with 67P/Churyumov-Gerasimenko, the famous comet targeted by the Rosetta mission. "The simulated conditions represent those expected at 1.3 AU from the Sun, close to perihelion, where the comet is strongly active." The post Video: Stunning Simulation Shows Comet in the Solar Wind appeared first on insideHPC.
|
by Rich Brueckner on (#JN2W)
Early Bird registration rates are now available for ISC Cloud & Big Data Conference, which takes place Sept. 28-30 in Frankfurt, Germany. This year the event will kick off with one full day of workshops. The new program will highlight performance demanding cloud and big data applications and technologies and will consist of three tracks: Business, Technology and Research. The post ISC Cloud & Big Data Conference to Focus on Business, Technology and Research appeared first on insideHPC.
|
by staff on (#JMTN)
Over at NERSC, Linda Vu writes that the SciDB open source database system is a powerful tool for helping scientists wrangle Big Data. "SciDB is an open source database system designed to store and analyze extremely large array-structured data—like pictures from light sources and telescopes, time-series data collected from sensors, spectral data produced by spectrometers and spectrographs, and graph-like structures that illustrate relationships between entities." The post Accelerating Science with SciDB from NERSC appeared first on insideHPC.
|
by Rich Brueckner on (#JMJ7)
"Sea level rise is one of the most visible signatures of our changing climate, and rising seas have profound impacts on our nation, our economy and all of humanity," said Michael Freilich, director of NASA's Earth Science Division. "By combining space-borne direct measurements of sea level with a host of other measurements from satellites and sensors in the oceans themselves, NASA scientists are not only tracking changes in ocean heights but are also determining the reasons for those changes." The post NASA Charts Sea Level Rise appeared first on insideHPC.
|
by Rich Brueckner on (#JMD6)
Today Rescale announced availability of its Europe region simulation and HPC platforms. As an HPC cloud provider, Rescale offers a software platform and hardware infrastructure for companies to perform scientific and engineering simulations. The post Rescale Launches Cloud HPC Platform in Europe appeared first on insideHPC.
|
by Rich Brueckner on (#JHRM)
Today Intel Corporation and BlueData announced a broad strategic technology and business collaboration, as well as an additional equity investment in BlueData from Intel Capital. BlueData is a Silicon Valley startup that makes it easier for companies to install Big Data infrastructure, such as Apache Hadoop and Spark, in their own data centers or in the cloud. The post Intel Invests in BlueData for Spinning Up Spark Clusters on the Fly appeared first on insideHPC.
|
by staff on (#JHAN)
Geert Wenes writes in the Cray Blog that the next generation of Grand Challenges will focus on critical workflows for Exascale. "For every historical HPC grand challenge application, there is now a critical dependency on a series of other processing and analysis steps, data movement and communications that goes well beyond the pre- and post-processing of yore. It is iterative, sometimes synchronous (in situ) and generally more on an equal footing with the “main” application." The post From Grand Challenges to Critical Workflows appeared first on insideHPC.
|
by Rich Brueckner on (#JH5X)
"Supercomputing should be available for everyone who wants it. With that mission in mind, a team of engineers created Parallella, an 18-core supercomputer that’s a little bigger than a credit card. Parallella is open source hardware; the circuit diagrams are on GitHub and the machine runs Linux. Icing on the cake: Parallella is the most energy efficient computer on the planet, and you can buy one for a hundred bucks. Why does parallel computing matter? How can developers use parallel computing to deliver better results for clients? Let’s explore these questions together." The post Video: Parallella – The Most Energy Efficient Supercomputer on the Planet appeared first on insideHPC.
|
by Rich Brueckner on (#JH14)
"Within the next 12 months, China expects to be operating not one but two 100 Petaflop computers, each containing (different) Chinese-made processors, and both coming on stream about a year before the United States’ 100 Petaflop machines being developed under the Coral initiative. Ironically, the CPU for one machine appears very similar to a technology abandoned by the USA in 2007, and the US Government, through its export embargo, has encouraged China to develop its own accelerator for the other machine." The post China May Develop Two 100 Petaflop Machines Within a Year appeared first on insideHPC.
|
by staff on (#JG95)
The National Science Foundation has awarded the San Diego Supercomputer Center (SDSC) a one-year extension to continue operating its Gordon supercomputer, providing continued access to the cluster for a wide range of researchers with data-intensive projects. The post SDSC Gets One-year Extension for Gordon Supercomputer appeared first on insideHPC.
|
by Rich Brueckner on (#JE7B)
Today Intel released Intel Parallel Studio XE 2016, the next iteration of its developer toolkit for HPC and technical computing applications. This release introduces the Intel Data Analytics Acceleration Library, a library for big data developers that turns large data clusters into meaningful information with advanced analytics algorithms. The post Intel Updates Developer Toolkit with Data Analytics Acceleration Library appeared first on insideHPC.
|
by Rich Brueckner on (#JE4A)
Dell has posted the Agenda for their upcoming Genomics Data Workshop. Entitled "Enabling discovery and product innovation with Dell HPC and Big Data Solutions," the event will take place Sept. 15 in La Jolla, CA. The post Dell Posts Agenda for La Jolla Genomics Data Workshop appeared first on insideHPC.
|
by Rich Brueckner on (#JDYP)
"CDSW's organizers are professional programmers and data scientists and several of us have experience teaching data science in more traditional university and corporate settings. Our talk will describe how 'democratized' data science is similar to — and sometimes extremely different from — these more traditional approaches. We will talk about some of the challenges we have faced and highlight some of our most inspirational successes." The post Video: Democratizing Data Science appeared first on insideHPC.
|
by staff on (#JDT3)
Today the InfiniBand Trade Association (IBTA) announced the completion of the first Plugfest for RDMA over Converged Ethernet (RoCE) solutions and the publication of the RoCE Interoperability List on the IBTA website. Fifteen member companies participated, bringing their RoCE adapters, cables and switches for testing to the event. Products that successfully passed the testing have been added to the RoCE Interoperability List. The post IBTA Publishes RoCE Interoperability List from Plugfest appeared first on insideHPC.
|
by staff on (#JDT5)
Today Mellanox announced its EDR 100Gb/s InfiniBand solutions have been selected by the KTH Royal Institute of Technology for use in their PDC Center for High Performance Computing. Mellanox’s robust and flexible EDR InfiniBand solution offers higher interconnect speed, lower latency and smart accelerations to maximize efficiency and will enable the PDC Center to achieve world-leading data center performance across a variety of applications, including advanced modeling of climate change, brain functions and protein-drug interactions. The post KTH in Sweden Moves to EDR 100Gb/s InfiniBand appeared first on insideHPC.
|
by Rich Brueckner on (#JCXJ)
"Researchers at the U.S. Department of Energy’s Argonne National Laboratory will be testing the limits of computing horsepower this year with a new simulation project from the Virtual Engine Research Institute and Fuels Initiative (VERIFI) that will harness 60 million computer core hours to dispel those uncertainties and pave the way to more effective engine simulations." The post Pushing the Boundaries of Combustion Simulation with Mira appeared first on insideHPC.
|
by Rich Brueckner on (#JAXB)
In the first of what will likely be a series of announcements from the Hot Chips conference this week, Phytium Technologies revealed details of its Mars 64-core ARMv8 processors. The post Phytium Shows off Mars ARMv8 Processor at Hot Chips appeared first on insideHPC.
|
by Rich Brueckner on (#JARM)
Bright Computing has announced an expansion of its global network of channel partners, local resellers, local system integrators, as well as original equipment manufacturers (OEMs). The relationships are leading to collaborations resulting in increased business opportunities in its core Western, Central and Eastern European markets, Russia, the Middle East, and Africa. The post Bright Computing Grows its Channel for Cluster Management appeared first on insideHPC.
|
by Rich Brueckner on (#JAHS)
"Climate change – or as Doug Sisterson, research meteorologist at Argonne National Laboratory, prefers to call it, climate disruption – is probably the greatest challenge we face in modern society, yet many of us don’t fully understand the causes or the consequences. Washington Governor Jay Inslee famously stated: “We’re the first generation to feel the impact of climate change and the last generation that can do something about it.” The post Video: Climate Change – Fact, Fiction, and What You Can Do appeared first on insideHPC.
|
by Rich Brueckner on (#JAAW)
In this special guest feature, Robert Roe from Scientific Computing World explores the efforts made by top HPC centers to scale software codes to the extreme levels necessary for exascale computing. "The speed with which supercomputers process useful applications is more important than rankings on the TOP500, experts told the ISC High Performance Conference in Frankfurt last month." The post Experts Focus on Code Efficiency at ISC 2015 appeared first on insideHPC.
|
by staff on (#JA59)
Today Dell announced a new business unit aligned around hyperscale datacenters. "The Datacenter Scalable Solutions (DSS) group is designed to meet the specific needs of web tech, telecommunications service providers, hosting companies, oil and gas, and research organizations. These businesses often have high-volume technology needs and supply chain requirements in order to deliver business innovation. With a new operating model built on agile, scalable, and repeatable processes, Dell can now uniquely provide this set of customers with the technology they need, purposefully designed to their specifications, and delivered when they want it." The post Dell Opens Line of Business for Hyperscale Datacenters appeared first on insideHPC.
|
by Rich Brueckner on (#J7KE)
"Despite the growing abundance of powerful tools, building and deploying machine-learning frameworks into production continues to be a major challenge, in both science and industry. I'll present some particular pain points and cautions for practitioners as well as recent work addressing some of the nagging issues. I advocate for a systems view, which, when expanded beyond the algorithms and codes to the organizational ecosystem, places some interesting constraints on the teams tasked with development and stewardship of ML products." The post Video: A Systems View of Machine Learning appeared first on insideHPC.
|
by Rich Brueckner on (#J7HG)
SC15 just announced the finalists for the ACM Gordon Bell Prize in High Performance Computing. The $10,000 prize will be presented to the winner at the conference, which takes place Nov. 15-20 in Austin. The post SC15 Announces Finalists for Gordon Bell Prize appeared first on insideHPC.
|
by staff on (#J5DB)
Knowing how the weather will behave in the near future is indispensable for countless human endeavors. Now, researchers at ECMWF are leveraging the computational power of the Titan supercomputer at Oak Ridge to improve weather forecasting. The post Titan Supercomputer Powers the Future of Forecasting appeared first on insideHPC.
|
by Rich Brueckner on (#J5CE)
DDN is seeking an HPC Systems Engineer for Oil & Gas in our Job of the Week. The post Job of the Week: HPC Systems Engineer for Oil & Gas at DDN appeared first on insideHPC.
|
by staff on (#J334)
"SUPER builds on past successes and now includes research into performance auto-tuning, energy efficiency, resilience, multi-objective optimization, and end-to-end tool integration. Leading the project dovetails neatly with Oliker’s research interests, which include optimization of scientific methods on emerging multi-core systems, ultra-efficient designs of domain-optimized computational platforms and performance evaluation of extreme-scale applications on leading supercomputers." The post SUPER Project Aims at Efficient Supercomputing for Scientists appeared first on insideHPC.
|
by Rich Brueckner on (#J2PJ)
"Software and computers are everywhere, revolutionizing every field around us. But the majority of schools don't teach computer science. Code.org believes every student should have the opportunity to shape the 21st century and wants to turn this problem around. This is just the beginning of a bold vision to bring this foundational field to every K-12 public school by 2020." The post Video: Computer Science – America’s Untapped Opportunity appeared first on insideHPC.
|
by Rich Brueckner on (#J2J4)
With summer winding down, SC15 is just around the corner. With a smaller exhibit space than in previous years, SC15 Exhibits Chair Trey Breckenridge faced a number of challenges going into this year's Supercomputing conference. In this interview from the SC15 Blog, Breckenridge gives us a preview of what looks to be another great exhibition. The post Interview: An Exhibits Preview of SC15 appeared first on insideHPC.
|
by Rich Brueckner on (#J2EX)
In this video from IDF 2015, Intel and Oregon Health & Science University (OHSU) announce the Collaborative Cancer Cloud, a precision medicine analytics platform that allows hospitals and research institutions to securely share patient genomic, imaging, and clinical data for potentially lifesaving discoveries. The post Video: Intel Announces Collaborative Cancer Cloud appeared first on insideHPC.
|
by Rich Brueckner on (#HZRP)
The Austin Business Journal reports that actor Alan Alda will keynote the SC15 conference. "Alda is a seven-time Emmy winner and science enthusiast. Most widely known for his role as Captain Hawkeye Pierce on M*A*S*H, Alda has also hosted “Scientific American Frontiers” on PBS for 11 years and worked on programs such as “The Human Spark.” The post Alan Alda to Keynote SC15 appeared first on insideHPC.
|
by Rich Brueckner on (#HZ1F)
Today D-Wave Systems announced the general availability of the D-Wave 2X quantum computing system. The D-Wave 2X features a 1000+ qubit quantum processor and numerous design improvements that result in larger problem sizes, faster performance and higher precision. At 1000+ qubits, the D-Wave 2X quantum processor evaluates all 2^1000 possible solutions simultaneously as it converges on optimal or near optimal solutions, more possibilities than there are particles in the observable universe. No conventional computer of any kind could represent this many possibilities simultaneously, further illustrating the powerful nature of quantum computation. The post D-Wave 2X Quantum Computer Goes GA with 1000+ Qubits appeared first on insideHPC.
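The "more possibilities than particles" comparison is easy to sanity-check against the commonly cited order-of-magnitude estimate of roughly 10^80 particles in the observable universe:

```python
# Compare 2**1000 candidate solutions with a rough ~10**80 estimate
# of the number of particles in the observable universe.
solutions = 2 ** 1000
particles = 10 ** 80   # common order-of-magnitude estimate

print(len(str(solutions)))     # 2**1000 has 302 decimal digits
print(solutions > particles)   # True: vastly more solutions
```

A 302-digit number dwarfs an 81-digit one, so the claim holds with enormous room to spare.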
|
by Rich Brueckner on (#HYZV)
In this video, Fermilab’s Dr. Don Lincoln gives us a sense of just how much data is involved and the incredible computer resources that make the LHC possible. "The LHC is the world’s highest energy particle accelerator and scientists use it to record an unprecedented amount of data. This data is recorded in electronic format and it requires an enormous computational infrastructure to convert the raw data into conclusions about the fundamental rules that govern matter." The post Video: High Performance Computing for the LHC appeared first on insideHPC.
|
by MichaelS on (#HYY6)
"OpenCL is a fairly new programming model that is designed to help programmers get the most out of a variety of processing elements in heterogeneous environments. Many benchmarks that are available have demonstrated that excellent performance can be obtained over a wide variety of devices. Rather than lock an application into one specific accelerator, by using OpenCL, applications can be run on a number of different architectures with each showing excellent speedups over a native (host cpu) implementation." The post OpenCL for Performance appeared first on insideHPC.
|
by staff on (#HQNT)
Many universities, private research labs and government research agencies have begun using High Performance Computing (HPC) servers, compute accelerators and flash storage arrays to accelerate a wide array of research across disciplines in math, science and engineering. These labs utilize GPUs for parallel processing and flash memory for storing large datasets. Many universities have HPC labs that are available for students and researchers to share resources in order to analyze and store vast amounts of data more quickly. The post Research Demands More Compute Power and Faster Storage for Complex Computational Applications appeared first on insideHPC.
|
by staff on (#HY91)
Over at TACC, Jorge Salazar writes that new supercomputer simulations are helping doctors improve the repair and replacement of heart valves. "New supercomputer models have come closer than ever to capturing the behavior of normal human heart valves and their replacements, according to recent studies by groups including scientists at the Institute for Computational Engineering and Sciences (ICES) at The University of Texas at Austin and the Department of Mechanical Engineering at Iowa State University." The post Podcast: Supercomputing the Human Heart appeared first on insideHPC.
|
by staff on (#HW72)
Today Norway's Dolphin Interconnect Solutions demonstrated a record low latency of 300 nanoseconds at IDF 2015. Dolphin achieved this record by adding Intel Xeon Non-Transparent Bridging (NTB) support to its existing PCI Express network product. In addition, Dolphin announced a new PCIe 3.0 host adapter, the PXH810 Host Adapter, which achieves 540 nanoseconds of latency at 64Gbps wire speeds. The post Dolphin Demos 300ns Latency Across PCI Express at IDF appeared first on insideHPC.
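To put those numbers in perspective, a quick bandwidth-delay calculation using the quoted 540 ns and 64 Gbps figures shows how little data can even be in flight during a single latency window, which is why shaving nanoseconds matters so much for small-message traffic:

```python
# Back-of-envelope bandwidth-delay product for the PXH810 figures
# quoted above (illustrative, not a vendor benchmark).
latency_s = 540e-9       # 540 nanoseconds of latency
wire_rate_bps = 64e9     # 64 Gbps wire speed

bytes_in_flight = wire_rate_bps * latency_s / 8
print(f"~{bytes_in_flight:.0f} bytes in flight per latency window")
```

Only about 4 KB fits in one latency window, so for the small transfers typical of clustered applications, latency rather than wire speed dominates performance.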
|
by staff on (#HVXP)
KAUST in Saudi Arabia has been named as the latest Intel Parallel Computing Center. "The new PCC aims to provide scalable software kernels common to scientific simulation codes that will adapt well to future architectures, including a scheduled upgrade of KAUST’s globally Top10 Intel-based Cray XC40 system. In the spirit of co-design, Intel PCC at KAUST will also provide feedback that could influence architectural design trade-offs." The post KAUST is the Latest Intel Parallel Computing Center appeared first on insideHPC.
|
by Rich Brueckner on (#HVTR)
"At IDF, Intel introduced Intel Optane technology, which is based on the revolutionary 3D XPoint non-volatile memory media and combined with the company's advanced system memory controller, interface hardware and software IP, to unleash vast performance potential in a range of forthcoming products. Intel Optane technology will first come to market in a new line of high-endurance, high-performance Intel SSDs beginning in 2016. The new class of memory technology will also power a new line of Intel DIMMs designed for Intel's next-generation data center platforms." The post Video: First Look at Intel Optane Non-volatile Memory appeared first on insideHPC.
|
by staff on (#HVQH)
"The range of cooling options now available is testimony to engineering ingenuity. HPC centers can choose between air, oil, dielectric fluid, or water as the heat-transfer medium. Opting for something other than air means that single or two-phase flow could be available, opening up the possibilities of convective or evaporative cooling and thus saving the cost of pumping the fluid round the system." The post Innovation Keeps Supercomputers Cool appeared first on insideHPC.
|
by staff on (#HVF2)
Today Seagate announced plans to buy data storage system maker Dot Hill Systems for approximately $645 million. The post Seagate Announces Plans to Buy Dot Hill Systems appeared first on insideHPC.
|
by Rich Brueckner on (#HTV7)
In this video from the AIAA Aviation Conference 2015, panelists discuss Supercomputing: Roadmap and its Future Role in Aerospace Engineering. "Supercomputing has made significant contributions in aerospace engineering in recent decades, including advances in computational fluid dynamics that have fundamentally altered the way aircraft are designed. And the relentless growth in high-performance computing power holds promise of huge leaps in engine performance and other aerospace technology." The post Video: Panel Discussion on Supercomputing for Aerospace appeared first on insideHPC.
|
by staff on (#HRDN)
Staff from the Oak Ridge Leadership Computing Facility (OLCF), a US Department of Energy (DOE) Office of Science User Facility, held prominent roles in two recent committee meetings—one focused on OpenACC and one devoted to OpenMP. The post ORNL Hosts Meetings on OpenACC & OpenMP appeared first on insideHPC.
|
by Rich Brueckner on (#HR16)
The First International Workshop on Heterogeneous High-performance Reconfigurable Computing (H2RC'15) has issued its Call for Submissions. Held in conjunction with SC15, the event will take place Nov. 15 in Austin, Texas. The post Call for Submissions: Workshop on Heterogeneous High-performance Reconfigurable Computing appeared first on insideHPC.
|
by staff on (#HQZC)
Today Asetek announced an order for its RackCDU data center liquid cooling system placed by FORMAT Sp. Ltd, an IT solutions provider located in Poland. Building on the success of previous smaller orders, FORMAT has ordered six RackCDU units with cooling loops for a total of 471 compute nodes, to be delivered in Q3. The order will result in revenue to Asetek in the range of $100k. The post FORMAT in Poland to Deploy RackCDU Liquid Cooling Systems appeared first on insideHPC.
|
by Rich Brueckner on (#HQVV)
Today Cray announced that the Danish Meteorological Institute (DMI) has purchased a Cray XC supercomputer and a Cray Sonexion 2000 storage system. Through an arrangement with the Icelandic Meteorological Office (IMO), the system will be installed at the IMO datacenter in Reykjavik, Iceland for year-round power and cooling efficiency. The post Danish Meteorological to Install First Cray in Iceland appeared first on insideHPC.
|