by Rich Brueckner on (#2A78D)
In this special guest feature, Tim Gillett from Scientific Computing World interviews Norbert Attig and Thomas Eickermann from the Jülich Supercomputing Centre about how JSC is tackling high performance computing challenges. The post Scaling HPC at the Jülich Supercomputing Centre appeared first on insideHPC.
|
High-Performance Computing News Analysis | insideHPC
Link | https://insidehpc.com/ |
Feed | http://insidehpc.com/feed/ |
Updated | 2024-11-05 19:30 |
by Rich Brueckner on (#2A73E)
The HPC for Energy (HPC4E) project is organizing a workshop entitled HPC Roadmap for Energy Industry. Hosted by INRIA, the event takes place Feb 1 at the French Institute for Research in Computer Science and Automation. "Energy is one of the current priorities for EU-Brazil cooperation. The main objective is to develop high-performance simulation tools that go beyond the state-of-the-art to help the energy industry respond to both future energy demands and carbon-related environmental issues." The post INRIA to Host Workshop on HPC Roadmap for Energy Industry appeared first on insideHPC.
|
by Rich Brueckner on (#2A6XE)
Today TransparentBusiness announced the appointment of Jorge Luis Titinger as its Chief Strategy Officer. Mr. Titinger is best known as the former CEO of SGI, a global leader in HPC, which was recently acquired by Hewlett Packard Enterprise. "I'm pleased to join the company, which has established itself as a leader in remote work process management and coordination," said Jorge Titinger. "I believe TransparentBusiness can help accelerate the adoption of a distributed workforce; this can result in significant bottom line benefits for the companies that embrace this new direction and bring the work to where the talent is." The post Former SGI CEO Jorge Titinger Joins TransparentBusiness as Chief Strategy Officer appeared first on insideHPC.
|
by Rich Brueckner on (#2A34A)
In this silent video from the Blue Brain Project at SC16, 865 segments from a rodent brain are simulated with isosurfaces generated from Allen Brain Atlas image stacks. For this INCITE project, researchers from École Polytechnique Fédérale de Lausanne will use the Mira supercomputer at Argonne to advance the understanding of these fundamental mechanisms of the brain’s neocortex. The post Video: Stunning Simulation from Blue Brain Project at SC16 appeared first on insideHPC.
|
by staff on (#2A2XS)
Today Rescale announced the company is expanding to Europe with a new office in Munich, Germany. "Europe has always been a crucial market for Rescale," said Rescale co-founder and CEO Joris Poort. "We are thrilled to be establishing a solid regional foundation for sales and support for our customers in Europe. Wolfgang Dreyer’s HPC expertise and deep familiarity with the region will be a tremendous asset to help serve our European customers." The post Wolfgang Dreyer to Head Up New Rescale Office in Munich appeared first on insideHPC.
|
by Rich Brueckner on (#2A2S0)
Nor-Tech reports that Caltech is upgrading its Nor-Tech demo cluster with Intel Xeon Phi. The demo cluster is a no-cost, no-strings opportunity for current and prospective clients to test-drive simulation applications on a cutting-edge Nor-Tech HPC cluster equipped with Intel Xeon Phi and other high-demand platforms installed and configured. Users can also integrate their existing platforms into the demo cluster. The post Caltech Upgrading Demo Cluster with Intel Xeon-Phi x200 Processor appeared first on insideHPC.
|
by staff on (#2A2MC)
The European PRACE organization is now accepting applications for the following expense-paid educational programs: the 2017 International Summer School on HPC Challenges in Computational Sciences and the PRACE Summer of HPC 2017 program. The post Apply Now for PRACE HPC Summer School Programs appeared first on insideHPC.
|
by staff on (#2A2GQ)
"D-Wave's leap from 1000 qubits to 2000 qubits is a major technical achievement and an important advance for the emerging field of quantum computing," said Earl Joseph, IDC program vice president for high performance computing. "D-Wave is the only company with a product designed to run quantum computing problems, and the new D-Wave 2000Q system should be even more interesting to researchers and application developers who want to explore this revolutionary new approach to computing." The post D-Wave Rolls Out 2000 Qubit System appeared first on insideHPC.
|
by Rich Brueckner on (#29YR2)
Matthias Troyer from ETH Zurich presented this talk at a recent Microsoft Research event. "Given limitations to the scaling for simulating the full Coulomb Hamiltonian on quantum computers, a hybrid approach – deriving effective models from density functional theory codes and solving these effective models by quantum computers – seems to be a promising way to proceed for calculating the electronic structure of correlated materials on a quantum computer." The post Video: A Hybrid Approach to Strongly Correlated Materials appeared first on insideHPC.
|
by staff on (#29YJR)
"GIGABYTE servers - across standard, Open Compute Platform (OCP) and rack scale form factors - deliver exceptional value, performance and scalability for multi-tenant cloud and virtualized enterprise datacenters," said Etay Lee, GM of GIGABYTE Technology's Server Division. "The addition of QLogic 10GbE and 25GbE FastLinQ Ethernet NICs in OCP and Standard form factors will enable delivery on all of the tenets of open standards, while enabling key virtualization technologies like SR-IOV and full offloads for overlay networks using VxLAN, NVGRE and GENEVE." The post GIGABYTE Selects Cavium QLogic FastLinQ Ethernet Solutions appeared first on insideHPC.
|
by staff on (#29YJS)
"Computers as we know them are disappearing from view," asserts Koen De Bosschere, Professor at the Engineering Faculty of Ghent University, Belgium, and Coordinator of the HiPEAC network. "The evolution from desktop PC will not stop at smartphone and tablet: the devices and systems that will allow us to automate key infrastructures, such as transport, power grids and monitoring of medical conditions, are bringing us into the age of artificial intelligence. This does not mean man-sized robots, but smart devices that we program and then interact with, such as intelligent personal assistants and self-driving vehicles." The post HiPEAC Vision Report Advocates Reinvention of Computing appeared first on insideHPC.
|
by Rich Brueckner on (#29Y4M)
In this podcast, the Radio Free HPC Team looks at Cray's new ARM-based Isambard supercomputer that will soon be deployed in the UK. After that, we discuss how Persistent Memory will change the way vendors architect systems for Big Data workloads. The post Radio Free HPC Looks at the New Isambard Supercomputer from Cray appeared first on insideHPC.
|
by Douglas Eadline on (#29Y3E)
The move away from the traditional single processor/memory design has fostered new programming paradigms that address multiple processors (cores). Existing single core applications need to be modified to use extra processors (and accelerators). Unfortunately, there is no single portable and efficient programming solution that addresses both scale-up and scale-out systems. The post Five Ways Scale-Up Systems Save Money and Improve TCO appeared first on insideHPC.
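The retrofit described above, taking a serial loop and restructuring it to use extra processors, can be sketched with a hypothetical Python example (not from the article; standard library only). Python threads share the GIL, so a real CPU-bound speedup would need processes or a compiled runtime such as OpenMP, but the restructuring pattern is the same:

```python
from concurrent.futures import ThreadPoolExecutor

def serial_sum_of_squares(data):
    # Original single-core version: one loop over all elements.
    return sum(x * x for x in data)

def parallel_sum_of_squares(data, workers=4):
    # Scale-up rewrite: split the data into chunks, hand each chunk to a
    # worker, then combine the partial results. This is the structural
    # change the excerpt says every single-core application needs.
    step = max(1, (len(data) + workers - 1) // workers)
    chunks = [data[i:i + step] for i in range(0, len(data), step)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(serial_sum_of_squares, chunks))

assert serial_sum_of_squares(range(100)) == parallel_sum_of_squares(list(range(100)))
```

The same decompose/compute/combine shape carries over to scale-out (MPI-style) code, but there the chunks live on different nodes, which is why no single solution covers both cases cleanly.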
|
by staff on (#29V0P)
Today Appentra announced it has joined the OpenPOWER Foundation, an open development community based on the POWER microprocessor architecture. Founded in 2012, Appentra is a technology company providing software tools for guided parallelization in high-performance computing and HPC-like technologies. "The development model of the OpenPOWER Foundation is one that elicits collaboration and represents a new way of exploiting and innovating around processor technology," says Calista Redmond, Director of OpenPOWER Global Alliances at IBM. "With the Power architecture designed for Big Data and Cloud, new OpenPOWER Foundation members like Appentra will be able to add their own innovations on top of the technology to create new applications that capitalize on emerging workloads." The post Appentra Joins OpenPOWER Foundation for Auto-Parallelization appeared first on insideHPC.
|
by staff on (#29TZ3)
Scientists typically understand data through graphs and visualizations. But is it possible to use sound to interpret complex information? This video from Georgia Tech's Asegun Henry shows the sonification of the vibrations of an atom in crystalline silicon. "If you look at the data, it looks like white noise," Henry said. "We decided to sonify the data, and as soon as we listened to it, we could hear the pattern." The post Video: Sonifying Simulations appeared first on insideHPC.
|
by Rich Brueckner on (#29QVJ)
"Guided by the principles of interactive supercomputing, Lincoln Laboratory was responsible for a lot of the early work on machine learning and neural networks. We now have a world-class group investigating speech and video processing as well as machine language topics including theoretical foundations, algorithms and applications. In the process, we are changing the way we go about computing. Over the years we have tended to assign a specific system to service a discrete market, audience or project. But today those once highly specialized systems are becoming increasingly heterogeneous. Users are interacting with computational resources that exhibit a high degree of autonomy. The system, not the user, decides on the computer hardware and software that will be used for the job." The post Video: A Look at the Lincoln Laboratory Supercomputing Center appeared first on insideHPC.
|
by Rich Brueckner on (#29QTH)
"Mellanox Technologies is looking for a talented engineer to lead datacenter application performance optimization and benchmarking over Mellanox networking products. This individual will primarily work with marketing and engineering to execute low-level and application level benchmarks focused on High Performance Computing (HPC) open source and ISV applications, in addition to providing software and hardware optimization recommendations. In addition, this individual will work closely with hardware and software partners, and customers, to benchmark Mellanox products under different system configurations and workloads." The post Job of the Week: HPC Application Performance Engineer at Mellanox appeared first on insideHPC.
|
by staff on (#29M7S)
Today the PASC17 Conference announced that this year’s plenary presentation will be entitled "Unlocking the Mysteries of the Universe with Supercomputers." The plenary presentation will be given by Katrin Heitmann, Senior Member of the Computation Institute at the University of Chicago and the Kavli Institute for Cosmological Physics, USA. The post PASC17 Plenary to Focus on Supercomputing Cosmology appeared first on insideHPC.
|
by Rich Brueckner on (#29KQY)
"As data proliferation continues to explode, computing architectures are struggling to get the right data to the processor efficiently, both in terms of time and power. But what if the best solution to the problem is not faster data movement, but new architectures that can essentially move the processing instructions into the data? Persistent memory arrays present just such an opportunity. Like any significant change, however, there are challenges and obstacles that must be overcome. Industry veteran Steve Pawlowski will outline a vision for the future of computing and why persistent memory systems have the potential to be more revolutionary than perhaps anyone imagines." The post Video: How Persistent Memory Will Bring an Entirely New Structure to Large Data Computing appeared first on insideHPC.
|
by staff on (#29KNG)
Today Atos announced the first installation of its Bull sequana X1000 new-generation supercomputer system in the UK at the Hartree Centre. Funded by the UK government, the Science and Technology Facilities Council (STFC) Hartree Centre is a high performance computing and data analytics research facility. Described as "the world’s most efficient supercomputer," Bull sequana is an exascale-class computer capable of processing a billion billion operations per second while consuming 10 times less energy than current systems. The post Atos Delivers Bull sequana Supercomputer to Hartree Centre appeared first on insideHPC.
|
by Rich Brueckner on (#29KH9)
In this video, Rich Brueckner from insideHPC moderates a panel discussion on Code Modernization. "SC15 luminary panelists reflect on collaboration with Intel and how building on hardware and software standards facilitates performance on parallel platforms with greater ease and productivity. By sharing their experiences modernizing code, we hope to shed light on what you might see from modernizing your own code." The post Video: Modern Code – Making the Impossible Possible appeared first on insideHPC.
|
by Rich Brueckner on (#29GSZ)
"We will be fully honoring all IDC HPC contracts and deliverables, and will continue our HPC operations as before," said Earl Joseph of IDC. "Because the HPC group conducts sensitive business with governments, the group is being separated prior to the deal closing. It will be operated under new ownership that will be independent from the buyer of IDC to ensure that the group can continue to fully support government research requirements. The HPC group will continue to do business as usual, including research reports, client studies, and the HPC User Forums." The post China Oceanwide to Acquire IDG & IDC appeared first on insideHPC.
|
by Rich Brueckner on (#29FY9)
Today Altair announced plans to build and offer HPC solutions on the Oracle Cloud Platform. This follows Oracle’s decision to name Altair PBS Works as its preferred workload management solution for Oracle Cloud customers. "This move signals a big shift in strategy for Oracle, a company that abandoned the HPC market after it acquired Sun Microsystems in 2010." The post Oracle Cloud to add PBS Works for Technical Computing appeared first on insideHPC.
|
by staff on (#29FWB)
The SC17 conference is now accepting proposals for independently planned full- or half-day workshops. SC17 will be held Nov. 12-17 in Denver. The post Call for Proposals: SC17 Workshops appeared first on insideHPC.
|
by MichaelS on (#29FT8)
"OpenMP, Fortran 2008 and TBB are standards that can help to create parallel areas of an application. MKL could also be considered part of this family, because it uses OpenMP within the library. OpenMP is well known, has been used for quite some time, and continues to be enhanced. Some estimates suggest that as many as 75% of compute cycles are spent running Fortran applications. Thus, in order to modernize some of the most significant number crunchers today, Fortran 2008 should be investigated. TBB is for C++ applications only, and does not require compiler modifications. An additional benefit of using OpenMP and Fortran 2008 is that these are standards, which allows code to be more portable." The post Managing Lots of Tasks for Intel Xeon Phi appeared first on insideHPC.
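The tasking model the excerpt discusses can be illustrated with a hypothetical sketch (standard-library Python, not from the article; OpenMP tasks and TBB have no direct Python equivalent, so `concurrent.futures` stands in for the task scheduler):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def process(item):
    # Stand-in for one unit of work that an OpenMP task or a TBB task
    # would execute on a core of a many-core processor like Xeon Phi.
    return item * item

def run_tasks(items, workers=4):
    # Submit one task per item; the runtime schedules tasks onto the
    # worker pool and we collect results as tasks finish, in any order.
    results = []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(process, it) for it in items]
        for fut in as_completed(futures):
            results.append(fut.result())
    return sorted(results)

print(run_tasks(range(5)))  # [0, 1, 4, 9, 16]
```

The key property, shared with OpenMP tasking and TBB, is that the programmer only declares independent units of work; the runtime decides which core runs each one and when, which is what lets many small tasks keep a many-core chip busy.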
|
by staff on (#29FP8)
Following a call for proposals issued last October, NERSC has selected six science application teams to participate in the NERSC Exascale Science Applications Program for Data (NESAP for Data) program. "We're very excited to welcome these new data-intensive science application teams to NESAP," said Rollin Thomas, a big data architect in NERSC’s Data Analytics and Services group who is coordinating NESAP for Data. "NESAP's tools and expertise should help accelerate the transition of these data science codes to KNL. But I'm also looking forward to uncovering and understanding the new performance and scalability challenges that are sure to arise along the way." The post NERSC Selects Six Teams for Exascale Science Applications Program appeared first on insideHPC.
|
by Peter ffoulkes on (#29F8M)
The Dell EMC HPC Innovation Lab, substantially powered by Intel, has been established to provide customers with best practices for configuring and tuning systems and their applications for optimal performance and efficiency through blogs, whitepapers and other resources. "Dell is utilizing the lab’s world-class infrastructure to characterize performance behavior and to test and validate upcoming technologies." The post Speeding Workloads at the Dell EMC HPC Innovation Lab appeared first on insideHPC.
|
by staff on (#29BGC)
"We are excited to have the benefit of Dr. Taufer’s leadership for SC19," says John West, director of strategic initiatives at the Texas Advanced Computing Center and chair of the SC Steering Committee. "This conference has a unique role in our community, and we depend upon the energy, drive, and dedication of talented leaders to keep SC fresh and relevant after nearly 30 years of continuous operation. The Steering Committee also wants to express its gratitude for the commitment that the University of Delaware is making by supporting Michela in this demanding service role." The post Michela Taufer from University of Delaware to Chair SC19 appeared first on insideHPC.
|
by staff on (#29BC7)
"Delivering optimized technology capabilities to different communities is key to a successful public cloud offering," said Nimbix Chief Technology Officer Leo Reiter. "With this unified approach, Nimbix delivers discrete product capabilities to different audiences while maximizing value to all parties with the underlying power of the JARVICE platform." The post Nimbix Expands Cloud Offerings to Enterprises and Developers appeared first on insideHPC.
|
by staff on (#29B8R)
Taking place in Stockholm from 23-25 January, the 12th HiPEAC conference will bring together Europe’s top thinkers on computer architecture and compilation to tackle the key issues facing the computing systems on which we depend. HiPEAC17 will see the launch of the HiPEAC Vision 2017, a technology roadmap which lays out how technology affects our lives and how it can, and should, respond to the challenges facing European society and economies, such as the aging population, climate change and shortages in the ICT workforce. The post HiPEAC17 to Focus on Memory Systems, Energy Efficiency and Cybersecurity appeared first on insideHPC.
|
by staff on (#29B6K)
"This is an exciting time because the whole HPC landscape is changing with manycore, which is a big change for our users," said Gerber, who joined NERSC’s User Services Group in 1996 as a postdoc, having earned his PhD in physics from the University of Illinois. "Users are facing a big challenge; they have to be able to exploit the architectural features on Cori (NERSC’s newest supercomputing system), and the HPC Department plays a critical role in helping them do this." The post Richard Gerber to Head NERSC’s HPC Department appeared first on insideHPC.
|
by staff on (#29B0F)
"Many supercomputer users, like the big DOE labs, are implementing these next generation systems. They are now engaged in significant code modernization efforts to adapt their key present and future applications to the new processing paradigm, and to bring their internal and external users up to speed. For some in the HPC community, this creates unanticipated challenges along with great opportunities." The post MIT Lincoln Laboratory Takes the Mystery Out of Supercomputing appeared first on insideHPC.
|
by Rich Brueckner on (#297JQ)
"From new cloud offerings on AWS and Azure, to Summit and Sierra, the 150+ PF supercomputers being built by the US in 2017, new AI workloads are driving the rapid growth of GPU accelerated HPC systems. For years, HPC simulations have generated ever increasing amounts of big data, a trend further accelerated by GPU computing. With GPU Deep Learning and other AI approaches, a larger amount of big data than ever can now be used to advance scientific discovery." The post Video: AI – The Next HPC Workload appeared first on insideHPC.
|
by staff on (#297CS)
The Xinhua news agency reports that China is planning to develop a prototype exascale supercomputer by the end of 2017. "A complete computing system of the exascale supercomputer and its applications can only be expected in 2020, and will be 200 times more powerful than the country's first petaflop computer Tianhe-1, recognized as the world's fastest in 2010," said Zhang Ting, application engineer with the Tianjin-based National Supercomputer Center, when attending the sixth session of the 16th Tianjin Municipal People's Congress Tuesday. The post China to Develop Exascale Prototype in 2017 appeared first on insideHPC.
|
by staff on (#29779)
"This is an exciting time in high performance computing," said Prof Simon McIntosh-Smith, leader of the project and Professor of High Performance Computing at the University of Bristol. "Scientists have a growing choice of potential computer architectures to choose from, including new 64-bit ARM CPUs, graphics processors, and many-core CPUs from Intel. Choosing the best architecture for an application can be a difficult task, so the new Isambard GW4 Tier 2 HPC service aims to provide access to a wide range of the most promising emerging architectures, all using the same software stack." The post Cray to Develop ARM-based Isambard Supercomputer for UK Met Office appeared first on insideHPC.
|
by staff on (#2973V)
Today DataDirect Networks announced a joint sales and marketing agreement with Inspur, a leading China-based cloud computing and total solutions and services provider, in which the companies will leverage their core strengths and powerful computing technologies to offer industry-leading high-performance computing solutions to HPC customers worldwide. "DDN is delighted to expand our work with Inspur globally and to build upon the joint success we have achieved in China," said Larry Jones, DDN’s partner manager for the Inspur relationship. "DDN’s leadership in massively scalable, high-performance storage solutions, combined with Inspur’s global data center and cloud computing solutions, offer customers extremely efficient, world-class infrastructure options." The post DDN and Inspur Sign Agreement for Joint Sales & Marketing appeared first on insideHPC.
|
by Rich Brueckner on (#2970E)
In this video, Prof. Dr.-Ing. André Brinkmann from the JGU datacenter describes the Mogon II cluster, a 580 Teraflop system currently ranked #265 on the TOP500. "Built by MEGWARE in Germany, the Mogon II system consists of 814 individual nodes each equipped with 2 Intel 2630v4 CPUs and connected via OmniPath 50Gbits (fat-tree). Each CPU has 10 cores, giving a total of 16280 cores." The post Video: A Look at the Mogon II HPC Cluster at Johannes Gutenberg University appeared first on insideHPC.
|
by staff on (#28PF1)
"As a bridge to that future, this two-week program fills many gaps that exist in the training computational scientists typically receive through formal education or shorter courses. The 2017 ATPESC program will be held at a new location from previous years, at the Q Center, one of the largest conference facilities in the Midwest, located just outside Chicago." The post Registration Open for Argonne Training Program on Extreme-Scale Computing appeared first on insideHPC.
|
by staff on (#293DC)
Today the Mont-Blanc European project announced it has selected Cavium’s ThunderX2 ARM server processor to power its new HPC prototype. The new Mont-Blanc prototype will be built by Atos, the coordinator of phase 3 of Mont-Blanc, using its Bull expertise and products. The platform will leverage the infrastructure of the Bull sequana pre-exascale supercomputer range for network, management, cooling, and power. Atos and Cavium signed an agreement to collaborate to develop this new platform, thus making Mont-Blanc an Alpha-site for ThunderX2. The post Bull Atos to Build HPC Prototype for Mont-Blanc Project using Cavium ThunderX2 Processor appeared first on insideHPC.
|
by Rich Brueckner on (#2939Q)
In this podcast, the Radio Free HPC team looks at D-Wave's new open source software for quantum computing. The software is available on GitHub along with a whitepaper written by Cray Research alums Mike Booth and Steve Reinhardt. "The new tool, qbsolv, enables developers to build higher-level tools and applications leveraging the quantum computing power of systems provided by D-Wave, without the need to understand the complex physics of quantum computers." The post Radio Free HPC Looks at New Open Source Software for Quantum Computing appeared first on insideHPC.
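For readers unfamiliar with the problem class qbsolv targets: a QUBO (quadratic unconstrained binary optimization) instance asks for the binary vector that minimizes a quadratic energy function. This hypothetical brute-force sketch (plain Python, not D-Wave's API) shows the formulation on a toy instance:

```python
from itertools import product

def qubo_energy(Q, x):
    """Energy of binary vector x under QUBO matrix Q, given as a
    dict mapping index pairs (i, j) to coefficients."""
    return sum(w * x[i] * x[j] for (i, j), w in Q.items())

def brute_force_qubo(Q, n):
    """Exhaustively find a lowest-energy assignment of n binary variables.
    Feasible only for tiny n; solvers like qbsolv exist because real
    instances are far too large for enumeration."""
    best = min(product((0, 1), repeat=n), key=lambda x: qubo_energy(Q, x))
    return best, qubo_energy(Q, best)

# Toy instance: minimize -x0 - x1 + 2*x0*x1, i.e. reward setting each
# variable but penalize setting both. Best energy is -1, with exactly
# one variable set.
Q = {(0, 0): -1, (1, 1): -1, (0, 1): 2}
print(brute_force_qubo(Q, 2))  # ((0, 1), -1)
```

Hardware like the D-Wave 2000Q samples low-energy states of exactly this kind of objective; qbsolv's contribution is decomposing large QUBOs into subproblems small enough for the quantum annealer.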
|
by Douglas Eadline on (#292TH)
"The move away from the traditional single processor/memory design has fostered new programming paradigms that address multiple processors (cores). Existing single core applications need to be modified to use extra processors (and accelerators). Unfortunately there is no single portable and efficient programming solution that addresses both scale-up and scale-out systems." The post Scaling Software for In-Memory Computing appeared first on insideHPC.
|
by Rich Brueckner on (#28ZRM)
In this visualization, ocean temperatures and salinity are tracked over the course of a year. Based on data from global climate models, these visualizations aid our understanding of the physical processes that create the Earth's climate, and inform predictions about future changes in climate. "The water's saltiness, or salinity, plays a significant role in this ocean heat engine, Harrison said. Salt makes the water denser, helping it to sink. As the atmosphere warms due to global climate change, melting ice sheets have the potential to release tremendous amounts of fresh water into the oceans." The post Video: Tracing Ocean Salinity for Global Climate Models appeared first on insideHPC.
|
by Rich Brueckner on (#28ZP9)
Registration is now open for the 2017 Rice Oil & Gas HPC Conference. The event takes place March 15-16 in Houston, Texas. "Join us for the 10th anniversary of the Rice Oil & Gas HPC Conference. OG HPC is the premier meeting place for networking and discussion focused on computing and information technology challenges and needs in the oil and gas industry." The post Registration Opens for Rice Oil & Gas HPC Conference appeared first on insideHPC.
|
by Rich Brueckner on (#28WR6)
"The University of Colorado, Boulder supports researchers’ large-scale computational needs with their newly optimized high performance computing system, Summit. Summit is designed with advanced computation, network, and storage architectures to deliver accelerated results for a large range of HPC and big data applications. Summit is built on Dell EMC PowerEdge Servers, Intel Omni-Path Architecture Fabric and Intel Xeon Phi Knights Landing processors." The post Dell EMC Powers Summit Supercomputer at CU Boulder appeared first on insideHPC.
|
by Rich Brueckner on (#28WNQ)
Bennett Aerospace has an opening for a highly motivated Research Scientist and Computational Chemist for the Army Corps of Engineers (USACE), Engineer Research and Development Center (ERDC), Environmental Laboratory (EL), Environmental Processes Branch (EP-P), Environmental Genomics and Systems Biology Team (EGSB) in execution of its mission. The candidate will be a Bennett Aerospace employee performing services for ERDC in Vicksburg, MS. The post Job of the Week: Computational Chemist at Bennett Aerospace appeared first on insideHPC.
|
by Rich Brueckner on (#28S7Z)
In this video, researchers at NASA Ames explore the aerodynamics of a popular example of a small, battery-powered drone, a modified DJI Phantom 3 quadcopter. "The Phantom relies on four whirring rotors to generate enough thrust to lift it and any payload it’s carrying off the ground. Simulations revealed the complex motions of air due to interactions between the vehicle’s rotors and X-shaped frame during flight. As an experiment, researchers added four more rotors to the vehicle to study the effect on the quadcopter’s performance. This configuration produced a nearly twofold increase in the amount of thrust." The post Supercomputing Drone Aerodynamics appeared first on insideHPC.
|
by Rich Brueckner on (#28S3M)
In this AI Podcast, Lynn Richards, president and CEO of the Congress for New Urbanism, and Charles Marohn, president and co-founder of Strong Towns, describe how AI will reshape our cities. "AI will do much more than automate driving. It promises to help create more liveable cities. And help put expensive infrastructure where we need it most." The post Podcast: How Deep Learning Will Reshape Our Cities appeared first on insideHPC.
|
by Rich Brueckner on (#28S1V)
"STFC Hartree Centre needed a powerful, flexible server system that could drive research in energy efficiency as well as economic impact for its clients. By extending its System x platform with NeXtScale System, Hartree Centre can now move to exascale computing, support sustainable energy use and help its clients gain a competitive advantage." Sophisticated data processes are now integral to all areas of research and business. Whether you are new to discovering the potential of supercomputing, data analytics and cognitive techniques, or are already using them, Hartree's easy-to-use portfolio of advanced computing facilities, software tools and know-how can help you create better research outcomes that are also faster and cheaper than traditional research methods. The post Video: Lenovo Powers Manufacturing Innovation at Hartree Centre appeared first on insideHPC.
|
by staff on (#28RT5)
"Intel recently announced the first product release of its High Performance Python distribution powered by Anaconda. The product provides a prebuilt, easy-to-install Intel Architecture (IA) optimized Python for numerical and scientific computing, data analytics, HPC and more. It’s a free, drop-in replacement for existing Python distributions that requires no changes to Python code. Yet benchmarks show big Intel Xeon processor performance improvements and even bigger Intel Xeon Phi processor performance improvements." The post IA Optimized Python Rocks in Production appeared first on insideHPC.
|
by staff on (#28MYP)
"Bridges’ new nodes add large-memory and GPU resources that enable researchers who have never used high-performance computing to easily scale their applications to tackle much larger analyses," says Nick Nystrom, principal investigator in the Bridges project and Senior Director of Research at PSC. "Our goal with Bridges is to transform researchers’ thinking from ‘What can I do within my local computing environment?’ to ‘What problems do I really want to solve?’" The post Upgraded Bridges Supercomputer Now in Production appeared first on insideHPC.
|