by staff on (#2AFZX)
In this RCE Podcast, Brock Palen and Jeff Squyres speak with the creators of iRODS: Jason Coposky and Terrell Russell. Also known as the Integrated Rule-Oriented Data System, iRODS open source data management software is used by research organizations and government agencies worldwide. "iRODS virtualizes data storage resources, so users can take control of their data, regardless of where and on what device the data is stored. The development infrastructure supports exhaustive testing on supported platforms. The plugin architecture supports microservices, storage systems, authentication, networking, databases, rule engines, and an extensible API." The post RCE Podcast Looks at iRODS Data Management Software appeared first on insideHPC.
|
by Rich Brueckner on (#2AFVA)
"This talk reports efforts on refactoring and optimizing the climate and weather forecasting programs – CAM and WRF – on Sunway TaihuLight. To map the large code base to the millions of cores on the Sunway system, OpenACC-based refactoring was taken as the major approach, with source-to-source translator tools applied to exploit the most suitable parallelism for the CPE cluster and to fit the intermediate variable into the limited on-chip fast buffer."The post PASC17 to Feature Talk by Gordon Bell Prize Winner Haohuan Fu appeared first on insideHPC.
|
by Rich Brueckner on (#2AFQ8)
"Nanomagnetic devices may allow memory and logic functions to be combined in novel ways. And newer, perhaps more promising device concepts continue to emerge. At the same time, research in new architectures has also grown. Indeed, at the leading edge, researchers are beginning to focus on co-optimization of new devices and new architectures. Despite the growing research investment, the landscape of promising research opportunities outside the “FET devices and circuits box†is still largely unexplored."The post Beyond Exascale: Emerging Devices and Architectures for Computing appeared first on insideHPC.
|
by staff on (#2AFHC)
In this special guest feature, James Reinders looks at Intel Xeon Phi processors from a programmer's perspective. "How does a programmer think of Intel Xeon Phi processors? In this brief article, I will convey how I, as a programmer, think of them. In subsequent articles, I will dive a bit more into details of various programming modes, and techniques employed for some key applications. In this article, I will endeavor to not stray into deep details – but rather offer an approachable perspective on how to think about programming for Intel Xeon Phi processors." The post Intel Xeon Phi Processor Programming in a Nutshell appeared first on insideHPC.
|
by staff on (#2AC6D)
Today IBM announced that its PowerAI distribution for popular open source Machine Learning and Deep Learning frameworks on the POWER8 architecture now supports the TensorFlow 0.12 framework that was originally created by Google. TensorFlow support through IBM PowerAI provides enterprises with another option for fast, flexible, and production-ready tools and support for developing advanced machine learning products and systems. The post IBM Adds TensorFlow Support for PowerAI Deep Learning appeared first on insideHPC.
|
by Rich Brueckner on (#2ABPY)
Cheyenne is a new 5.34-petaflops, high-performance computer built for NCAR by SGI. Cheyenne will be a critical tool for researchers across the country studying climate change, severe weather, geomagnetic storms, seismic activity, air quality, wildfires, and other important geoscience topics. In this video, Brian Vanderwende from UCAR describes typical workflows in the NCAR/CISL Cheyenne HPC environment as well as performance […] The post Video: Introduction to the Cheyenne Supercomputer appeared first on insideHPC.
|
by Rich Brueckner on (#2ABGD)
"CUDA C++ is just one of the ways you can create massively parallel applications with CUDA. It lets you use the powerful C++ programming language to develop high performance algorithms accelerated by thousands of parallel threads running on GPUs. Many developers have accelerated their computation- and bandwidth-hungry applications this way, including the libraries and frameworks that underpin the ongoing revolution in artificial intelligence known as Deep Learning."The post CUDA Made Easy: An Introduction appeared first on insideHPC.
|
by staff on (#2ABBP)
In this WUOT podcast, Jack Wells from ORNL describes how the Titan supercomputer helps advance science. "The world’s third-most powerful supercomputer is located in Oak Ridge, and though it bears the imposing name TITAN, its goals and capabilities are more quotidian than dystopian. After that, WUOT's Megan Jamerson tells us about a project at ORNL that uses TITAN to help humans digest vast sums of information from medical reports. If successful, the project could create new understandings about the demographics of cancer." The post Podcast: Supercomputing Cancer Research and the Human Brain appeared first on insideHPC.
|
by Richard Friedman on (#2AB5P)
"Many of the libraries developed in the 70s and 80s for core linear algebra and scientific math computation, such as BLAS, LAPACK, FFT, are still in use today with C, C++, Fortran, and even Python programs. With MKL, Intel has engineered a ready-to-use, royalty-free library that implements these numerical algorithms optimized specifically to take advantage of the latest features of Intel chip architectures. Even the best compiler can’t compete with the level of performance possible from a hand-optimized library. Any application that already relies on the BLAS or LAPACK functionality will achieve better performance on Intel and compatible architectures just by downloading and re-linking with Intel MKL."The post Achieving High-Performance Math Processing with Intel MKL 2017 appeared first on insideHPC.
|
by Rich Brueckner on (#2A7GG)
Graduate students and postdoctoral scholars from institutions in Canada, Europe, Japan and the United States are invited to apply for the eighth International Summer School on HPC Challenges in Computational Sciences, to be held June 25-30, 2017, in Boulder, Colorado. The post Apply Now for 2017 International Summer School on HPC Challenges appeared first on insideHPC.
|
by Rich Brueckner on (#2A7D4)
Today RSC Group from Russia announced that the company has achieved the highest Elite status in the Intel Solutions for Lustre Reseller Program. Only 9 Intel partners in Europe currently have this status. The post Russian RSC Group Becomes Elite Intel Lustre Reseller appeared first on insideHPC.
|
by Rich Brueckner on (#2A78D)
In this special guest feature, Tim Gillett from Scientific Computing World interviews Norbert Attig and Thomas Eickermann from the Jülich Supercomputing Centre about how JSC is tackling high performance computing challenges. The post Scaling HPC at the Jülich Supercomputing Centre appeared first on insideHPC.
|
by Rich Brueckner on (#2A73E)
The HPC for Energy (HPC4E) project is organizing a workshop entitled HPC Roadmap for Energy Industry. Hosted by INRIA, the event takes place Feb 1 at the French Institute for Research in Computer Science and Automation. "Energy is one of the current priorities for EU-Brazil cooperation. The main objective is to develop high-performance simulation tools that go beyond the state-of-the-art to help the energy industry respond to both future energy demands and carbon-related environmental issues." The post INRIA to Host Workshop on HPC Roadmap for Energy Industry appeared first on insideHPC.
|
by Rich Brueckner on (#2A6XE)
Today TransparentBusiness announced the appointment of Jorge Luis Titinger as its Chief Strategy Officer. Mr. Titinger is best known as the former CEO of SGI, a global leader in HPC, which was recently acquired by Hewlett Packard Enterprise. "I'm pleased to join the company which has established itself as a leader in remote work process management and coordination," said Jorge Titinger. "I believe TransparentBusiness can help accelerate the adoption of a distributed workforce; this can result in significant bottom line benefits for the companies that embrace this new direction and bring the work to where the talent is." The post Former SGI CEO Jorge Titinger Joins TransparentBusiness as Chief Strategy Officer appeared first on insideHPC.
|
by Rich Brueckner on (#2A34A)
In this silent video from the Blue Brain Project at SC16, 865 segments from a rodent brain are simulated with isosurfaces generated from Allen Brain Atlas image stacks. For this INCITE project, researchers from École Polytechnique Fédérale de Lausanne will use the Mira supercomputer at Argonne to advance the understanding of these fundamental mechanisms of the brain’s neocortex. The post Video: Stunning Simulation from Blue Brain Project at SC16 appeared first on insideHPC.
|
by staff on (#2A2XS)
Today Rescale announced the company is expanding to Europe with a new office in Munich, Germany. "Europe has always been a crucial market for Rescale," said Rescale co-founder and CEO Joris Poort. "We are thrilled to be establishing a solid regional foundation for sales and support for our customers in Europe. Wolfgang Dreyer’s HPC expertise and deep familiarity with the region will be a tremendous asset to help serve our European customers." The post Wolfgang Dreyer to Head Up New Rescale Office in Munich appeared first on insideHPC.
|
by Rich Brueckner on (#2A2S0)
Nor-Tech reports that Caltech is upgrading its Nor-Tech demo cluster with Intel Xeon Phi. The demo cluster is a no-cost, no-strings opportunity for current and prospective clients to test-drive simulation applications on a cutting-edge Nor-Tech HPC equipped with Intel Xeon Phi and other high-demand platforms installed and configured. Users can also integrate their existing platforms into the demo cluster. The post Caltech Upgrading Demo Cluster with Intel Xeon-Phi x200 Processor appeared first on insideHPC.
|
by staff on (#2A2MC)
The European PRACE organization is now accepting applications for the following expense-paid educational programs: The 2017 International Summer School on HPC Challenges in Computational Sciences and the PRACE Summer of HPC 2017 program. The post Apply Now for PRACE HPC Summer School Programs appeared first on insideHPC.
|
by staff on (#2A2GQ)
"D-Wave's leap from 1000 qubits to 2000 qubits is a major technical achievement and an important advance for the emerging field of quantum computing," said Earl Joseph, IDC program vice president for high performance computing. "D-Wave is the only company with a product designed to run quantum computing problems, and the new D-Wave 2000Q system should be even more interesting to researchers and application developers who want to explore this revolutionary new approach to computing."The post D-Wave Rolls Out 2000 Qubit System appeared first on insideHPC.
|
by Rich Brueckner on (#29YR2)
Matthias Troyer from ETH Zurich presented this talk at a recent Microsoft Research event. "Given limitations to the scaling for simulating the full Coulomb Hamiltonian on quantum computers, a hybrid approach – deriving effective models from density functional theory codes and solving these effective models by quantum computers – seems to be a promising way to proceed for calculating the electronic structure of correlated materials on a quantum computer." The post Video: A Hybrid Approach to Strongly Correlated Materials appeared first on insideHPC.
|
by staff on (#29YJR)
"GIGABYTE servers - across standard, Open Compute Platform (OCP) and rack scale form factors - deliver exceptional value, performance and scalability for multi-tenant cloud and virtualized enterprise datacenters," said Etay Lee, GM of GIGABYTE Technology's Server Division. "The addition of QLogic 10GbE and 25GbE FastLinQ Ethernet NICs in OCP and Standard form factors will enable delivery on all of the tenets of open standards, while enabling key virtualization technologies like SR-IOV and full offloads for overlay networks using VxLAN, NVGRE and GENEVE."The post GIGABYTE Selects Cavium QLogic FastLinQ Ethernet Solutions appeared first on insideHPC.
|
by staff on (#29YJS)
"Computers as we know them are disappearing from view,’ asserts Koen De Bosschere, Professor at the Engineering Faculty of Ghent University, Belgium, and Coordinator of the HiPEAC network. "The evolution from desktop PC will not stop at smartphone and tablet: the devices and systems that will allow us to automate key infrastructures, such as transport, power grids and monitoring of medical conditions, are bringing us into the age of artificial intelligence. This does not mean man-sized robots, but smart devices that we program and then interact with, such as intelligent personal assistants and self-driving vehicles."The post HiPEAC Vision Report Advocates Reinvention of Computing appeared first on insideHPC.
|
by Rich Brueckner on (#29Y4M)
In this podcast, the Radio Free HPC Team looks at Cray's new ARM-based Isambard supercomputer that will soon be deployed in the UK. After that, we discuss how Persistent Memory will change the way vendors architect systems for Big Data workloads. The post Radio Free HPC Looks at the New Isambard Supercomputer from Cray appeared first on insideHPC.
|
by Douglas Eadline on (#29Y3E)
The move away from the traditional single processor/memory design has fostered new programming paradigms that address multiple processors (cores). Existing single core applications need to be modified to use extra processors (and accelerators). Unfortunately, there is no single portable and efficient programming solution that addresses both scale-up and scale-out systems. The post Five Ways Scale-Up Systems Save Money and Improve TCO appeared first on insideHPC.
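As a concrete example of why two models are typically combined today, the sketch below mixes MPI (scale-out across nodes) with OpenMP (scale-up within a node). It is an illustrative assumption on my part, not taken from the article; the output format and counts are arbitrary.

    // Illustrative hybrid MPI + OpenMP program: MPI ranks scale out across
    // nodes, OpenMP threads scale up within each node. Two separate models
    // are needed because no single portable solution covers both today.
    #include <mpi.h>
    #include <omp.h>
    #include <cstdio>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank = 0, nranks = 1;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        #pragma omp parallel
        {
            #pragma omp critical  // serialize printing so lines do not interleave
            std::printf("rank %d/%d, thread %d/%d\n",
                        rank, nranks, omp_get_thread_num(), omp_get_num_threads());
        }

        MPI_Finalize();
        return 0;
    }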
|
by staff on (#29V0P)
Today Appentra announced it has joined the OpenPOWER Foundation, an open development community based on the POWER microprocessor architecture. Founded in 2012, Appentra is a technology company providing software tools for guided parallelization in high-performance computing and HPC-like technologies. "The development model of the OpenPOWER Foundation is one that elicits collaboration and represents a new way in exploiting and innovating around processor technology," says Calista Redmond, Director of OpenPOWER Global Alliances at IBM. "With the Power architecture designed for Big Data and Cloud, new OpenPOWER Foundation members like Appentra will be able to add their own innovations on top of the technology to create new applications that capitalize on emerging workloads." The post Appentra Joins OpenPOWER Foundation for Auto-Parallelization appeared first on insideHPC.
|
by staff on (#29TZ3)
Scientists typically understand data through graphs and visualizations. But is it possible to use sound to interpret complex information? This video from Georgia Tech's Asegun Henry shows the sonification of the vibrations of an atom in crystalline silicon. "If you look at the data, it looks like white noise," Henry said. "We decided to sonify the data, and as soon as we listened to it, we could hear the pattern." The post Video: Sonifying Simulations appeared first on insideHPC.
|
by Rich Brueckner on (#29QVJ)
"Guided by the principles of interactive supercomputing, Lincoln Laboratory was responsible for a lot of the early work on machine learning and neural networks. We now have a world-class group investigating speech and video processing as well as machine language topics including theoretical foundations, algorithms and applications. In the process, we are changing the way we go about computing. Over the years we have tended to assign a specific systems to service a discrete market, audience or project. But today those once highly specialized systems are becoming increasingly heterogeneous. Users are interacting with computational resources that exhibit a high degree of autonomy. The system, not the user, decides on the computer hardware and software that will be used for the job."The post Video: A Look at the Lincoln Laboratory Supercomputing Center appeared first on insideHPC.
|
by Rich Brueckner on (#29QTH)
"Mellanox Technologies is looking for a talented engineer to lead datacenter application performance optimization and benchmarking over Mellanox networking products. This individual will primarily work with marketing and engineering to execute low-level and application level benchmarks focused on High Performance Computing (HPC) open source and ISV applications in addition to providing software and hardware optimization recommendations. In addition, this individual will work closely with hardware and software partners, and customers to benchmark Mellanox products under different system configurations and workloads."The post Job of the Week: HPC Application Performance Engineer at Mellanox appeared first on insideHPC.
|
by staff on (#29M7S)
Today the PASC17 Conference announced that this year’s plenary presentation will be entitled "Unlocking the Mysteries of the Universe with Supercomputers." The plenary presentation will be given by Katrin Heitmann, Senior Member of the Computation Institute at the University of Chicago and the Kavli Institute for Physical Cosmology, USA. The post PASC17 Plenary to Focus on Supercomputing Cosmology appeared first on insideHPC.
|
by Rich Brueckner on (#29KQY)
"As data proliferation continues to explode, computing architectures are struggling to get the right data to the processor efficiently, both in terms of time and power. But what if the best solution to the problem is not faster data movement, but new architectures that can essentially move the processing instructions into the data? Persistent memory arrays present just such an opportunity. Like any significant change, however, there are challenges and obstacles that must be overcome. Industry veteran Steve Pawlowski will outline a vision for the future of computing and why persistent memory systems have the potential to be more revolutionary than perhaps anyone imagines."The post Video: How Persistent Memory Will Bring an Entirely New Structure to Large Data Computing appeared first on insideHPC.
|
by staff on (#29KNG)
Today Atos announced the first installation of its Bull sequana X1000 new-generation supercomputer system in the UK at the Hartree Centre. Founded by the UK government, the Science and Technology Facilities Council (STFC) Hartree Centre is a high performance computing and data analytics research facility. Described as "the world’s most efficient supercomputer," Bull sequana is an exascale-class computer capable of processing a billion billion operations per second while consuming 10 times less energy than current systems. The post Atos Delivers Bull sequana Supercomputer to Hartree Centre appeared first on insideHPC.
|
by Rich Brueckner on (#29KH9)
In this video, Rich Brueckner from insideHPC moderates a panel discussion on Code Modernization. "SC15 luminary panelists reflect on collaboration with Intel and how building on hardware and software standards facilitates performance on parallel platforms with greater ease and productivity. By sharing their experiences modernizing code we hope to shed light on what you might see from modernizing your own code." The post Video: Modern Code – Making the Impossible Possible appeared first on insideHPC.
|
by Rich Brueckner on (#29GSZ)
"We will be fully honoring all IDC HPC contracts and deliverables, and will continue our HPC operations as before," said Earl Joseph of IDC. "Because the HPC group conducts sensitive business with governments, the group is being separated prior to the deal closing. It will be operated under new ownership that will be independent from the buyer of IDC to ensure that the group can continue to fully support government research requirements. The HPC group will continue to do business as usual, including research reports, client studies, and the HPC User Forums."The post China Oceanwide to Acquire IDG & IDC appeared first on insideHPC.
|
by Rich Brueckner on (#29FY9)
Today Altair announced plans to build and offer HPC solutions on the Oracle Cloud Platform. This follows Oracle’s decision to name Altair PBS Works as its preferred workload management solution for Oracle Cloud customers. "This move signals a big shift in strategy for Oracle, a company that abandoned the HPC market after it acquired Sun Microsystems in 2010." The post Oracle Cloud to add PBS Works for Technical Computing appeared first on insideHPC.
|
by staff on (#29FWB)
The SC17 conference is now accepting proposals for independently planned full- or half-day workshops. SC17 will be held Nov. 12-17 in Denver. The post Call for Proposals: SC17 Workshops appeared first on insideHPC.
|
by MichaelS on (#29FT8)
"OpenMP, Fortran 2008 and TBB are standards that can help to create parallel areas of an application. MKL could also be considered to be part of this family, because it uses OpenMP within the library. OpenMP is well known and has been used for quite some time and is continues to be enhanced. Some estimates are as high as 75 % of cycles used are for Fortran applications. Thus, in order to modernize some of the most significant number crunchers today, Fortran 2008 should be investigated. TBB is for C++ applications only, and does not require compiler modifications. An additional benefit to using OpenMP and Fortran 2008 is that these are standards, which allows code to be more portable."The post Managing Lots of Tasks for Intel Xeon Phi appeared first on insideHPC.
|
by staff on (#29FP8)
Following a call for proposals issued last October, NERSC has selected six science application teams to participate in the NERSC Exascale Science Applications Program for Data (NESAP for Data) program. "We're very excited to welcome these new data-intensive science application teams to NESAP," said Rollin Thomas, a big data architect in NERSC’s Data Analytics and Services group who is coordinating NESAP for Data. "NESAP's tools and expertise should help accelerate the transition of these data science codes to KNL. But I'm also looking forward to uncovering and understanding the new performance and scalability challenges that are sure to arise along the way." The post NERSC Selects Six Teams for Exascale Science Applications Program appeared first on insideHPC.
|
by Peter ffoulkes on (#29F8M)
The Dell EMC HPC Innovation Lab, substantially powered by Intel, has been established to provide customers with best practices for configuring and tuning systems and their applications for optimal performance and efficiency through blogs, whitepapers and other resources. "Dell is utilizing the lab’s world-class infrastructure to characterize performance behavior and to test and validate upcoming technologies." The post Speeding Workloads at the Dell EMC HPC Innovation Lab appeared first on insideHPC.
|
by staff on (#29BGC)
"We are excited to have the benefit of Dr. Taufer’s leadership for SC19,†says John West, director of strategic initiatives at the Texas Advanced Computing Center and chair of the SC Steering Committee. “This conference has a unique role in our community, and we depend upon the energy, drive, and dedication of talented leaders to keep SC fresh and relevant after nearly 30 years of continuous operation. The Steering Committee also wants to express its gratitude for the commitment that the University of Delaware is making by supporting Michela in this demanding service role.â€The post Michela Taufer from University of Delaware to Chair SC19 appeared first on insideHPC.
|
by staff on (#29BC7)
"Delivering optimized technology capabilities to different communities is key to a successful public cloud offeringâ€, said Nimbix Chief Technology Officer Leo Reiter. “With this unified approach, Nimbix delivers discrete product capabilities to different audiences while maximizing value to all parties with the underlying power of the JARVICE platform.â€The post Nimbix Expands Cloud Offerings to Enterprises and Developers appeared first on insideHPC.
|
by staff on (#29B8R)
Taking place in Stockholm from 23-25 January, the 12th HiPEAC conference will bring together Europe’s top thinkers on computer architecture and compilation to tackle the key issues facing the computing systems on which we depend. HiPEAC17 will see the launch of the HiPEAC Vision 2017, a technology roadmap which lays out how technology affects our lives and how it can, and should, respond to the challenges facing European society and economies, such as the aging population, climate change and shortages in the ICT workforce. The post HiPEAC17 to Focus on Memory Systems, Energy Efficiency and Cybersecurity appeared first on insideHPC.
|
by staff on (#29B6K)
"This is an exciting time because the whole HPC landscape is changing with manycore, which is a big change for our users," said Gerber, who joined NERSC’s User Services Group in 1996 as a postdoc, having earned his PhD in physics from the University of Illinois. "Users are facing a big challenge; they have to be able to exploit the architectural features on Cori (NERSC’s newest supercomputing system), and the HPC Department plays a critical role in helping them do this." The post Richard Gerber to Head NERSC’s HPC Department appeared first on insideHPC.
|
by staff on (#29B0F)
"Many supercomputer users, like the big DOE labs, are implementing these next generation systems. They are now engaged in significant code modernization efforts to adapt their key present and future applications to the new processing paradigm, and to bring their internal and external users up to speed. For some in the HPC community, this creates unanticipated challenges along with great opportunities."The post MIT Lincoln Laboratory Takes the Mystery Out of Supercomputing appeared first on insideHPC.
|
by Rich Brueckner on (#297JQ)
"From new cloud offerings on AWS and Azure, to Summit and Sierra, the 150+ PF supercomputers being built by the US in 2017, new AI workloads are driving the rapid growth of GPU accelerated HPC systems. For years, HPC simulations have generated ever increasing amounts of big data, a trend further accelerated by GPU computing. With GPU Deep Learning and other AI approaches, a larger amount of big data than ever can now be used to advance scientific discovery." The post Video: AI – The Next HPC Workload appeared first on insideHPC.
|
by staff on (#297CS)
The Xinhua news agency reports that China is planning to develop a prototype exascale supercomputer by the end of 2017. "A complete computing system of the exascale supercomputer and its applications can only be expected in 2020, and will be 200 times more powerful than the country's first petaflop computer Tianhe-1, recognized as the world's fastest in 2010," said Zhang Ting, application engineer with the Tianjin-based National Supercomputer Center, when attending the sixth session of the 16th Tianjin Municipal People's Congress Tuesday. The post China to Develop Exascale Prototype in 2017 appeared first on insideHPC.
|
by staff on (#29779)
"This is an exciting time in high performance computing," said Prof Simon McIntosh-Smith, leader of the project and Professor of High Performance Computing at the University of Bristol. "Scientists have a growing choice of potential computer architectures to choose from, including new 64-bit ARM CPUs, graphics processors, and many-core CPUs from Intel. Choosing the best architecture for an application can be a difficult task, so the new Isambard GW4 Tier 2 HPC service aims to provide access to a wide range of the most promising emerging architectures, all using the same software stack." The post Cray to Develop ARM-based Isambard Supercomputer for UK Met Office appeared first on insideHPC.
|
by staff on (#2973V)
Today DataDirect Networks announced a joint sales and marketing agreement with Inspur, a leading China-based, cloud-computing and total-solution-and-services provider, in which the companies will leverage their core strengths and powerful computing technologies to offer industry-leading high-performance computing solutions to HPC customers worldwide. "DDN is delighted to expand our work with Inspur globally and to build upon the joint success we have achieved in China," said Larry Jones, DDN’s partner manager for the Inspur relationship. "DDN’s leadership in massively scalable, high-performance storage solutions, combined with Inspur’s global data center and cloud computing solutions, offer customers extremely efficient, world-class infrastructure options." The post DDN and Inspur Sign Agreement for Joint Sales & Marketing appeared first on insideHPC.
|
by Rich Brueckner on (#2970E)
In this video, Prof. Dr.-Ing. André Brinkmann from the JGU datacenter describes the Mogon II cluster, a 580 Teraflop system currently ranked #265 on the TOP500. "Built by MEGWARE in Germany, the Mogon II system consists of 814 individual nodes each equipped with 2 Intel 2630v4 CPUs and connected via OmniPath 50Gbits (fat-tree). Each CPU has 10 cores, giving a total of 16280 cores." The post Video: A Look at the Mogon II HPC Cluster at Johannes Gutenberg University appeared first on insideHPC.
|
by staff on (#28PF1)
"As a bridge to that future, this two-week program fills many gaps that exist in the training computational scientists typically receive through formal education or shorter courses. The 2017 ATPESC program will be held at a new location from previous years, at the Q Center, one of the largest conference facilities in the Midwest, located just outside Chicago."The post Registration Open for Argonne Training Program on Extreme-Scale Computing appeared first on insideHPC.
|
by staff on (#293DC)
Today the Mont-Blanc European project announced it has selected Cavium’s ThunderX2 ARM server processor to power its new HPC prototype. The new Mont-Blanc prototype will be built by Atos, the coordinator of phase 3 of Mont-Blanc, using its Bull expertise and products. The platform will leverage the infrastructure of the Bull sequana pre-exascale supercomputer range for network, management, cooling, and power. Atos and Cavium signed an agreement to collaborate to develop this new platform, thus making Mont-Blanc an Alpha-site for ThunderX2. The post Bull Atos to Build HPC Prototype for Mont-Blanc Project using Cavium ThunderX2 Processor appeared first on insideHPC.
|