by Rich Brueckner on (#29QVJ)
"Guided by the principles of interactive supercomputing, Lincoln Laboratory was responsible for a lot of the early work on machine learning and neural networks. We now have a world-class group investigating speech and video processing as well as machine language topics including theoretical foundations, algorithms and applications. In the process, we are changing the way we go about computing. Over the years we have tended to assign a specific systems to service a discrete market, audience or project. But today those once highly specialized systems are becoming increasingly heterogeneous. Users are interacting with computational resources that exhibit a high degree of autonomy. The system, not the user, decides on the computer hardware and software that will be used for the job."The post Video: A Look at the Lincoln Laboratory Supercomputing Center appeared first on insideHPC.
Inside HPC & AI News | High-Performance Computing & Artificial Intelligence
Link | https://insidehpc.com/
Feed | http://insidehpc.com/feed/
Updated | 2025-08-16 18:45
by Rich Brueckner on (#29QTH)
"Mellanox Technologies is looking for a talented engineer to lead datacenter application performance optimization and benchmarking over Mellanox networking products. This individual will primarily work with marketing and engineering to execute low-level and application level benchmarks focused on High Performance Computing (HPC) open source and ISV applications in addition to providing software and hardware optimization recommendations. In addition, this individual will work closely with hardware and software partners, and customers to benchmark Mellanox products under different system configurations and workloads."The post Job of the Week: HPC Application Performance Engineer at Mellanox appeared first on insideHPC.
by staff on (#29M7S)
Today the PASC17 Conference announced that this year’s plenary presentation will be entitled "Unlocking the Mysteries of the Universe with Supercomputers." The plenary presentation will be given by Katrin Heitmann, Senior Member of the Computation Institute at the University of Chicago and the Kavli Institute for Physical Cosmology, USA. The post PASC17 Plenary to Focus on Supercomputing Cosmology appeared first on insideHPC.
by Rich Brueckner on (#29KQY)
"As data proliferation continues to explode, computing architectures are struggling to get the right data to the processor efficiently, both in terms of time and power. But what if the best solution to the problem is not faster data movement, but new architectures that can essentially move the processing instructions into the data? Persistent memory arrays present just such an opportunity. Like any significant change, however, there are challenges and obstacles that must be overcome. Industry veteran Steve Pawlowski will outline a vision for the future of computing and why persistent memory systems have the potential to be more revolutionary than perhaps anyone imagines."The post Video: How Persistent Memory Will Bring an Entirely New Structure to Large Data Computing appeared first on insideHPC.
by staff on (#29KNG)
Today Atos announced the first installation of its Bull sequana X1000 new-generation supercomputer system in the UK at the Hartree Centre. Funded by the UK government, the Science and Technology Facilities Council (STFC) Hartree Centre is a high performance computing and data analytics research facility. Described as "the world’s most efficient supercomputer," Bull sequana is an exascale-class computer capable of processing a billion billion operations per second while consuming 10 times less energy than current systems. The post Atos Delivers Bull sequana Supercomputer to Hartree Centre appeared first on insideHPC.
by Rich Brueckner on (#29KH9)
In this video, Rich Brueckner from insideHPC moderates a panel discussion on Code Modernization. "SC15 luminary panelists reflect on collaboration with Intel and how building on hardware and software standards facilitates performance on parallel platforms with greater ease and productivity. By sharing their experiences modernizing code we hope to shed light on what you might see from modernizing your own code."The post Video: Modern Code – Making the Impossible Possible appeared first on insideHPC.
by Rich Brueckner on (#29GSZ)
"We will be fully honoring all IDC HPC contracts and deliverables, and will continue our HPC operations as before," said Earl Joseph of IDC. "Because the HPC group conducts sensitive business with governments, the group is being separated prior to the deal closing. It will be operated under new ownership that will be independent from the buyer of IDC to ensure that the group can continue to fully support government research requirements. The HPC group will continue to do business as usual, including research reports, client studies, and the HPC User Forums."The post China Oceanwide to Acquire IDG & IDC appeared first on insideHPC.
by Rich Brueckner on (#29FY9)
Today Altair announced plans to build and offer HPC solutions on the Oracle Cloud Platform. This follows Oracle’s decision to name Altair PBS Works as its preferred workload management solution for Oracle Cloud customers. "This move signals a big shift in strategy for Oracle, a company that abandoned the HPC market after it acquired Sun Microsystems in 2010."The post Oracle Cloud to add PBS Works for Technical Computing appeared first on insideHPC.
by staff on (#29FWB)
The SC17 conference is now accepting proposals for independently planned full- or half-day workshops. SC17 will be held Nov. 12-17 in Denver.The post Call for Proposals: SC17 Workshops appeared first on insideHPC.
by MichaelS on (#29FT8)
"OpenMP, Fortran 2008 and TBB are standards that can help to create parallel areas of an application. MKL could also be considered to be part of this family, because it uses OpenMP within the library. OpenMP is well known and has been used for quite some time and is continues to be enhanced. Some estimates are as high as 75 % of cycles used are for Fortran applications. Thus, in order to modernize some of the most significant number crunchers today, Fortran 2008 should be investigated. TBB is for C++ applications only, and does not require compiler modifications. An additional benefit to using OpenMP and Fortran 2008 is that these are standards, which allows code to be more portable."The post Managing Lots of Tasks for Intel Xeon Phi appeared first on insideHPC.
by staff on (#29FP8)
Following a call for proposals issued last October, NERSC has selected six science application teams to participate in the NERSC Exascale Science Applications Program for Data (NESAP for Data) program. "We're very excited to welcome these new data-intensive science application teams to NESAP," said Rollin Thomas, a big data architect in NERSC’s Data Analytics and Services group who is coordinating NESAP for Data. "NESAP's tools and expertise should help accelerate the transition of these data science codes to KNL. But I'm also looking forward to uncovering and understanding the new performance and scalability challenges that are sure to arise along the way." The post NERSC Selects Six Teams for Exascale Science Applications Program appeared first on insideHPC.
by Peter ffoulkes on (#29F8M)
The Dell EMC HPC Innovation Lab, substantially powered by Intel, has been established to provide customers with best practices for configuring and tuning systems and their applications for optimal performance and efficiency through blogs, whitepapers and other resources. "Dell is utilizing the lab’s world-class infrastructure to characterize performance behavior and to test and validate upcoming technologies." The post Speeding Workloads at the Dell EMC HPC Innovation Lab appeared first on insideHPC.
by staff on (#29BGC)
"We are excited to have the benefit of Dr. Taufer’s leadership for SC19,†says John West, director of strategic initiatives at the Texas Advanced Computing Center and chair of the SC Steering Committee. “This conference has a unique role in our community, and we depend upon the energy, drive, and dedication of talented leaders to keep SC fresh and relevant after nearly 30 years of continuous operation. The Steering Committee also wants to express its gratitude for the commitment that the University of Delaware is making by supporting Michela in this demanding service role.â€The post Michela Taufer from University of Delaware to Chair SC19 appeared first on insideHPC.
by staff on (#29BC7)
"Delivering optimized technology capabilities to different communities is key to a successful public cloud offeringâ€, said Nimbix Chief Technology Officer Leo Reiter. “With this unified approach, Nimbix delivers discrete product capabilities to different audiences while maximizing value to all parties with the underlying power of the JARVICE platform.â€The post Nimbix Expands Cloud Offerings to Enterprises and Developers appeared first on insideHPC.
by staff on (#29B8R)
Taking place in Stockholm from 23-25 January, the 12th HiPEAC conference will bring together Europe’s top thinkers on computer architecture and compilation to tackle the key issues facing the computing systems on which we depend. HiPEAC17 will see the launch of the HiPEAC Vision 2017, a technology roadmap which lays out how technology affects our lives and how it can, and should, respond to the challenges facing European society and economies, such as the aging population, climate change and shortages in the ICT workforce.The post HiPEAC17 to Focus on Memory Systems, Energy Efficiency and Cybersecurity appeared first on insideHPC.
by staff on (#29B6K)
"This is an exciting time because the whole HPC landscape is changing with manycore, which is a big change for our users," said Gerber, who joined NERSC’s User Services Group in 1996 as a postdoc, having earned his PhD in physics from the University of Illinois. "Users are facing a big challenge; they have to be able to exploit the architectural features on Cori (NERSC’s newest supercomputing system), and the HPC Department plays a critical role in helping them do this." The post Richard Gerber to Head NERSC’s HPC Department appeared first on insideHPC.
by staff on (#29B0F)
"Many supercomputer users, like the big DOE labs, are implementing these next generation systems. They are now engaged in significant code modernization efforts to adapt their key present and future applications to the new processing paradigm, and to bring their internal and external users up to speed. For some in the HPC community, this creates unanticipated challenges along with great opportunities."The post MIT Lincoln Laboratory Takes the Mystery Out of Supercomputing appeared first on insideHPC.
by Rich Brueckner on (#297JQ)
"From new cloud offerings on AWS and Azure, to Summit and Sierra, the 150+ PF supercomputers being built by the US in 2017, new AI workloads are driving the rapid growth of GPU accelerated HPC systems. For years, HPC simulations have generated ever increasing amounts of big data, a trend further accelerated by GPU computing. With GPU Deep Learning and other AI approaches, a larger amount of big data than ever can now be used to advance scientific discovery." The post Video: AI – The Next HPC Workload appeared first on insideHPC.
by staff on (#297CS)
The Xinhua news agency reports that China is planning to develop a prototype exascale supercomputer by the end of 2017. "A complete computing system of the exascale supercomputer and its applications can only be expected in 2020, and will be 200 times more powerful than the country's first petaflop computer Tianhe-1, recognized as the world's fastest in 2010," said Zhang Ting, application engineer with the Tianjin-based National Supercomputer Center, when attending the sixth session of the 16th Tianjin Municipal People's Congress Tuesday.The post China to Develop Exascale Prototype in 2017 appeared first on insideHPC.
by staff on (#29779)
"This is an exciting time in high performance computing," said Prof Simon McIntosh-Smith, leader of the project and Professor of High Performance Computing at the University of Bristol. "Scientists have a growing choice of potential computer architectures to choose from, including new 64-bit ARM CPUs, graphics processors, and many-core CPUs from Intel. Choosing the best architecture for an application can be a difficult task, so the new Isambard GW4 Tier 2 HPC service aims to provide access to a wide range of the most promising emerging architectures, all using the same software stack." The post Cray to Develop ARM-based Isambard Supercomputer for UK Met Office appeared first on insideHPC.
by staff on (#2973V)
Today DataDirect Networks announced a joint sales and marketing agreement with Inspur, a leading China-based cloud computing and total solutions and services provider, in which the companies will leverage their core strengths and powerful computing technologies to offer industry-leading high-performance computing solutions to HPC customers worldwide. "DDN is delighted to expand our work with Inspur globally and to build upon the joint success we have achieved in China," said Larry Jones, DDN’s partner manager for the Inspur relationship. "DDN’s leadership in massively scalable, high-performance storage solutions, combined with Inspur’s global data center and cloud computing solutions, offer customers extremely efficient, world-class infrastructure options." The post DDN and Inspur Sign Agreement for Joint Sales & Marketing appeared first on insideHPC.
by Rich Brueckner on (#2970E)
In this video, Prof. Dr.-Ing. André Brinkmann from the JGU datacenter describes the Mogon II cluster, a 580 Teraflop system currently ranked #265 on the TOP500. "Built by MEGWARE in Germany, the Mogon II system consists of 814 individual nodes each equipped with two Intel Xeon E5-2630 v4 CPUs and connected via OmniPath 50Gbits (fat-tree). Each CPU has 10 cores, giving a total of 16280 cores." The post Video: A Look at the Mogon II HPC Cluster at Johannes Gutenberg University appeared first on insideHPC.
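The core count quoted above follows directly from the node configuration; a trivial check of the arithmetic:

```cpp
// 814 nodes x 2 CPUs per node x 10 cores per CPU = 16,280 cores.
#include <cstdio>
int main() {
    const int nodes = 814, cpus_per_node = 2, cores_per_cpu = 10;
    std::printf("total cores = %d\n", nodes * cpus_per_node * cores_per_cpu);  // prints 16280
    return 0;
}
```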
by staff on (#28PF1)
"As a bridge to that future, this two-week program fills many gaps that exist in the training computational scientists typically receive through formal education or shorter courses. The 2017 ATPESC program will be held at a new location from previous years, at the Q Center, one of the largest conference facilities in the Midwest, located just outside Chicago."The post Registration Open for Argonne Training Program on Extreme-Scale Computing appeared first on insideHPC.
by staff on (#293DC)
Today the Mont-Blanc European project announced it has selected Cavium’s ThunderX2 ARM server processor to power its new HPC prototype. The new Mont-Blanc prototype will be built by Atos, the coordinator of phase 3 of Mont-Blanc, using its Bull expertise and products. The platform will leverage the infrastructure of the Bull sequana pre-exascale supercomputer range for network, management, cooling, and power. Atos and Cavium signed an agreement to collaborate to develop this new platform, thus making Mont-Blanc an Alpha-site for ThunderX2. The post Bull Atos to Build HPC Prototype for Mont-Blanc Project using Cavium ThunderX2 Processor appeared first on insideHPC.
by Rich Brueckner on (#2939Q)
In this podcast, the Radio Free HPC team looks at D-Wave's new open source software for quantum computing. The software is available on github along with a whitepaper written by Cray Research alums Mike Booth and Steve Reinhardt. "The new tool, qbsolv, enables developers to build higher-level tools and applications leveraging the quantum computing power of systems provided by D-Wave, without the need to understand the complex physics of quantum computers."The post Radio Free HPC Looks at New Open Source Software for Quantum Computing appeared first on insideHPC.
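For context on what a tool like qbsolv actually consumes, D-Wave's software works on problems posed in QUBO (quadratic unconstrained binary optimization) form. The sketch below builds a tiny, made-up 3-variable QUBO and brute-forces it purely to show the problem shape; it does not use the qbsolv API, which is what you would reach for once problems grow beyond toy size.

```cpp
// A made-up 3-variable QUBO, minimized by brute force purely to show the
// problem form that QUBO solvers such as qbsolv (and D-Wave systems) accept:
//   E(x) = sum_{i<=j} Q[i][j] * x_i * x_j,  with x_i in {0, 1}
#include <cstdio>

int main() {
    constexpr int n = 3;
    const double Q[n][n] = {{-1.0,  2.0,  0.0},   // upper-triangular coefficients
                            { 0.0, -1.0,  2.0},   // (values invented for the example)
                            { 0.0,  0.0, -1.0}};

    double best_energy = 0.0;   // energy of the all-zero assignment
    int best_bits = 0;
    for (int bits = 0; bits < (1 << n); ++bits) {   // enumerate all 2^n assignments
        double energy = 0.0;
        for (int i = 0; i < n; ++i)
            for (int j = i; j < n; ++j)
                energy += Q[i][j] * ((bits >> i) & 1) * ((bits >> j) & 1);
        if (energy < best_energy) { best_energy = energy; best_bits = bits; }
    }
    std::printf("minimum energy %.1f at x = {%d, %d, %d}\n", best_energy,
                best_bits & 1, (best_bits >> 1) & 1, (best_bits >> 2) & 1);
    return 0;
}
```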
by Douglas Eadline on (#292TH)
"The move away from the traditional single processor/memory design has fostered new programming paradigms that address multiple processors (cores). Existing single core applications need to be modified to use extra processors (and accelerators). Unfortunately there is no single portable and efficient programming solution that addresses both scale-up and scale-out systems."The post Scaling Software for In-Memory Computing appeared first on insideHPC.
by Rich Brueckner on (#28ZRM)
In this visualization, ocean temperatures and salinity are tracked over the course of a year. Based on data from global climate models, these visualizations aid our understanding of the physical processes that create the Earth's climate, and inform predictions about future changes in climate. "The water's saltiness, or salinity, plays a significant role in this ocean heat engine, Harrison said. Salt makes the water denser, helping it to sink. As the atmosphere warms due to global climate change, melting ice sheets have the potential to release tremendous amounts of fresh water into the oceans."The post Video: Tracing Ocean Salinity for Global Climate Models appeared first on insideHPC.
by Rich Brueckner on (#28ZP9)
Registration is now open for the 2017 Rice Oil & Gas HPC Conference. The event takes place March 15-16 in Houston, Texas. "Join us for the 10th anniversary of the Rice Oil & Gas HPC Conference. OG HPC is the premier meeting place for networking and discussion focused on computing and information technology challenges and needs in the oil and gas industry."The post Registration Opens for Rice Oil & Gas HPC Conference appeared first on insideHPC.
by Rich Brueckner on (#28WR6)
"The University of Colorado, Boulder supports researchers’ large-scale computational needs with their newly optimized high performance computing system, Summit. Summit is designed with advanced computation, network, and storage architectures to deliver accelerated results for a large range of HPC and big data applications. Summit is built on Dell EMC PowerEdge Servers, Intel Omni-Path Architecture Fabric and Intel Xeon Phi Knights Landing processors."The post Dell EMC Powers Summit Supercomputer at CU Boulder appeared first on insideHPC.
by Rich Brueckner on (#28WNQ)
Bennett Aerospace has an opening for a highly motivated Research Scientist and Computational Chemist for the Army Corps of Engineers (USACE), Engineer Research and Development Center (ERDC), Environmental Laboratory (EL), Environmental Processes Branch (EP-P), Environmental Genomics and Systems Biology Team (EGSB) in execution of its mission. The candidate will be a Bennett Aerospace employee performing services for ERDC in Vicksburg, MS. The post Job of the Week: Computational Chemist at Bennett Aerospace appeared first on insideHPC.
by Rich Brueckner on (#28S7Z)
In this video, researchers at NASA Ames explore the aerodynamics of a popular example of a small, battery-powered drone, a modified DJI Phantom 3 quadcopter. "The Phantom relies on four whirring rotors to generate enough thrust to lift it and any payload it’s carrying off the ground. Simulations revealed the complex motions of air due to interactions between the vehicle’s rotors and X-shaped frame during flight. As an experiment, researchers added four more rotors to the vehicle to study the effect on the quadcopter’s performance. This configuration produced a nearly twofold increase in the amount of thrust."The post Supercomputing Drone Aerodynamics appeared first on insideHPC.
by Rich Brueckner on (#28S3M)
In this AI Podcast, Lynn Richards, president and CEO of the Congress for New Urbanism and Charles Marohn, president and co-founder of Strong Towns, describe how AI will reshape our cities. "AI will do much more than automate driving. It promises to help create more liveable cities. And help put expensive infrastructure where we need it most."The post Podcast: How Deep Learning Will Reshape Our Cities appeared first on insideHPC.
by Rich Brueckner on (#28S1V)
"STFC Hartree Centre needed a powerful, flexible server system that could drive research in energy efficiency as well as economic impact for its clients. By extending its System x platform with NeXtScale System, Hartree Centre can now move to exascale computing, support sustainable energy use and help its clients gain a competitive advantage." Sophisticated data processes are now integral to all areas of research and business. Whether you are new to discovering the potential of supercomputing, data analytics and cognitive techniques, or are already using them, Hartree's easy to use portfolio of advanced computing facilities, software tools and know-how can help you create better research outcomes that are also faster and cheaper than traditional research methods.The post Video: Lenovo Powers Manufacturing Innovation at Hartree Centre appeared first on insideHPC.
by staff on (#28RT5)
"Intel recently announced the first product release of its High Performance Python distribution powered by Anaconda. The product provides a prebuilt easy-to-install Intel Architecture (IA) optimized Python for numerical and scientific computing, data analytics, HPC and more. It’s a free, drop in replacement for existing Python distributions that requires no changes to Python code. Yet benchmarks show big Intel Xeon processor performance improvements and even bigger Intel Xeon Phi processor performance improvements."The post IA Optimized Python Rocks in Production appeared first on insideHPC.
by staff on (#28MYP)
"Bridges’ new nodes add large-memory and GPU resources that enable researchers who have never used high-performance computing to easily scale their applications to tackle much larger analyses,†says Nick Nystrom, principal investigator in the Bridges project and Senior Director of Research at PSC. “Our goal with Bridges is to transform researchers’ thinking from ‘What can I do within my local computing environment?’ to ‘What problems do I really want to solve?’â€The post Upgraded Bridges Supercomputer Now in Production appeared first on insideHPC.
by staff on (#28MW9)
"The DS8880 All-Flash family is targeted at users that have experienced poor storage performance due to latency, low server utilization, high energy consumption, low system availability and high operating costs. These same users have been listening, learning and understand the data value proposition of being a cognitive business,†said Ed Walsh, general manager, IBM Storage and Software Defined Infrastructure. “In the coming year we expect an awakening by companies to the opportunity that cognitive applications, and hybrid cloud enablement, bring them in a data driven marketplace.â€The post IBM Rolls Out All-flash Storage for Cognitive Workloads appeared first on insideHPC.
by staff on (#28MTP)
"Just as a software ecosystem helped to create the immense computing industry that exists today, building a quantum computing industry will require software accessible to the developer community,†said Bo Ewald, president, D-Wave International Inc. “D-Wave is building a set of software tools that will allow developers to use their subject-matter expertise to build tools and applications that are relevant to their business or mission. By making our tools open source, we expand the community of people working to solve meaningful problems using quantum computers.â€The post D-Wave Releases Open Quantum Software Environment appeared first on insideHPC.
by Richard Friedman on (#28MP1)
While HPC developers worry about squeezing out the ultimate performance while running an application on dedicated cores, Intel TBB tackles a problem that HPC users never worry about: how can you make parallelism work well when you share the cores that you run on? This is more of a concern if you’re running that application on a many-core laptop or workstation rather than on a dedicated supercomputer, because who knows what else will be running on those shared cores. Intel Threading Building Blocks reduces the delays caused by other applications by utilizing a work-stealing task scheduler. This is the real magic of TBB. The post A Decade of Multicore Parallelism with Intel TBB appeared first on insideHPC.
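A minimal example of the model being described: work is handed to TBB as tasks over a range, and the library's scheduler decides which thread executes what, stealing chunks from busy threads when others go idle. The vectors and their sizes here are arbitrary.

```cpp
// Minimal TBB sketch: express the work as tasks over a range and let the
// work-stealing scheduler map them onto whatever cores are actually free.
#include <cstdio>
#include <vector>
#include <tbb/blocked_range.h>
#include <tbb/parallel_for.h>

int main() {
    const std::size_t n = 1u << 22;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

    tbb::parallel_for(tbb::blocked_range<std::size_t>(0, n),
        [&](const tbb::blocked_range<std::size_t>& r) {
            // Each task handles a chunk; idle threads steal chunks from busy ones.
            for (std::size_t i = r.begin(); i != r.end(); ++i)
                c[i] = a[i] + b[i];
        });

    std::printf("c[0] = %.1f\n", c[0]);   // expect 3.0
    return 0;
}
```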
by Rich Brueckner on (#28MF4)
Today Cray announced the appointment of Stathis Papaefstathiou to the position of senior vice president of research and development. Papaefstathiou will be responsible for leading the software and hardware engineering efforts for all of Cray’s research and development projects. "At our core, we are an engineering company, and we’re excited to have Stathis’ impressive and diverse technical expertise in this key leadership position at Cray," said Peter Ungaro, president and CEO of Cray. "Leveraging the growing convergence of supercomputing and big data, Stathis will help us continue to build unique and innovative products for our broadening customer base." The post Cray Appoints Stathis Papaefstathiou as SVP of Research and Development appeared first on insideHPC.
by Peter ffoulkes on (#28MBR)
"With three primary network technology options widely available, each with advantages and disadvantages in specific workload scenarios, the choice of solution partner that can deliver the full range of choices together with the expertise and support to match technology solution to business requirement becomes paramount."The post Selecting HPC Network Technology appeared first on insideHPC.
by staff on (#28BX3)
In this week’s Sponsored Post, Nicolas Dube of Hewlett Packard Enterprise outlines the future of HPC and the role and challenges of exascale computing in this evolution. The HPE approach to exascale is geared to breaking the dependencies that come with outdated protocols. Exascale computing will allow users to process data, run systems, and solve problems at a totally new scale, which will become increasingly important as the world’s problems grow ever larger and more complex.The post Exascale Computing: A Race to the Future of HPC appeared first on insideHPC.
by staff on (#28J2C)
The OpenFabrics Alliance (OFA) hosts an annual workshop devoted to advancing the state of the art in networking. "One secret to the enduring success of the workshop is the OFA’s emphasis on hosting an interactive, community-driven event. To continue that trend, we are once again reaching out to the community to create a rich program that addresses topics important to the networking industry. We’re looking for proposals for workshop sessions." The post OpenFabrics Alliance Workshop 2017 – Call for Sessions Open appeared first on insideHPC.
by Rich Brueckner on (#28GXC)
In this video, Jonathan Allen from LLNL describes how Lawrence Livermore’s supercomputers are playing a crucial role in advancing cancer research and treatment. "A historic partnership between the Department of Energy (DOE) and the National Cancer Institute (NCI) is applying the formidable computing resources at Livermore and other DOE national laboratories to advance cancer research and treatment. Announced in late 2015, the effort will help researchers and physicians better understand the complexity of cancer, choose the best treatment options for every patient, and reveal possible patterns hidden in vast patient and experimental data sets."The post Video: Livermore HPC Takes Aim at Cancer appeared first on insideHPC.
by Rich Brueckner on (#28GNK)
Oak Ridge National Laboratory reports that its team of experts is playing leading roles in the recently established DOE Exascale Computing Project (ECP), a multi-lab initiative responsible for developing the strategy, aligning the resources, and conducting the R&D necessary to achieve the nation’s imperative of delivering exascale computing by 2021. "ECP’s mission is to ensure all the necessary pieces are in place for the first exascale systems – an ecosystem that includes applications, software stack, architecture, advanced system engineering and hardware components – to enable fully functional, capable exascale computing environments critical to scientific discovery, national security, and a strong U.S. economy." The post Oak Ridge Plays Key Role in Exascale Computing Project appeared first on insideHPC.
by Rich Brueckner on (#28GKD)
"The PRACE Summer of HPC is an outreach and training program that offers summer placements at top High Performance Computing centers across Europe to late-stage undergraduates and early-stage postgraduate students. Up to twenty top applicants from across Europe will be selected to participate. Participants will spend two months working on projects related to PRACE technical or industrial work and produce a report and a visualization or video of their results."The post Apply Now for Summer of HPC 2017 in Barcelona appeared first on insideHPC.
by Rich Brueckner on (#28GF6)
"For many urban questions, however, new data sources will be required with greater spatial and/or temporal resolution, driving innovation in the use of sensors in mobile devices as well as embedding intelligent sensing infrastructure in the built environment. Collectively, these data sources also hold promise to begin to integrate computational models associated with individual urban sectors such as transportation, building energy use, or climate. Catlett will discuss the work that Argonne National Laboratory and the University of Chicago are doing in partnership with the City of Chicago and other cities through the Urban Center for Computation and Data, focusing in particular on new opportunities related to embedded systems and computational modeling."The post Understanding Cities through Computation, Data Analytics, and Measurement appeared first on insideHPC.
by Rich Brueckner on (#28CKS)
In this video, Maurizio Davini from the University of Pisa describes how the University works with Dell EMC and Intel to test new technologies, integrate and optimize HPC systems with Intel HPC Orchestrator software. "We believe these two companies are at the forefront of innovation in high performance computing," said University CTO Davini. "We also share a common goal of simplifying HPC to support a broader range of users." The post Intel HPC Orchestrator Powers Research at University of Pisa appeared first on insideHPC.
by Rich Brueckner on (#28CBS)
"Atos is determined to solve the technical challenges that arise in life sciences projects, to help scientists to focus on making breakthroughs and forget about technicalities. We know that one size doesn’t fit all and that is the reason why we studied carefully The Pirbright Institute’s challenges to design a customized and unique architecture. It is a pleasure for us to work with Pirbright and to contribute in some way to reduce the impact of viral diseases," says Natalia Jiménez, WW Life Sciences lead at Atos. The post Bull Atos Powers New Genomics Supercomputer at Pirbright Institute appeared first on insideHPC.
by Rich Brueckner on (#28C7A)
Today Mellanox announced that Spectrum Ethernet switches and ConnectX-4 100Gb/s Ethernet adapters have been selected by Baidu, the leading Chinese language Internet search provider, for Baidu’s Machine Learning platforms. The need for higher data speed and most efficient data movement placed Spectrum and RDMA-enabled ConnectX-4 adapters as key components to enable world leading machine learning […]The post Mellanox Ethernet Accelerates Baidu Machine Learning appeared first on insideHPC.
by staff on (#28C5F)
"The recent announcement of HDR InfiniBand included the three required network elements to achieve full end-to-end implementation of the new technology: ConnectX-6 host channel adapters, Quantum switches and the LinkX family of 200Gb/s cables. The newest generations of InfiniBand bring the game changing capabilities of In-Network Computing and In-Network Memory to further enhance the new paradigm of Data-Centric data centers – for High-Performance Computing, Machine Learning, Cloud, Web2.0, Big Data, Financial Services and more – dramatically increasing network scalability and introducing new accelerations for storage platforms and data center security."The post HDR InfiniBand Technology Reshapes the World of High-Performance and Machine Learning Platforms appeared first on insideHPC.