by staff on (#2PSX9)
Hewlett Packard Enterprise today introduced the world’s largest single-memory computer, the latest milestone in The Machine research project. "The prototype unveiled today contains 160 terabytes (TB) of memory, capable of simultaneously working with the data held in every book in the Library of Congress five times over—or approximately 160 million books. It has never been possible to hold and manipulate whole data sets of this size in a single-memory system, and this is just a glimpse of the immense potential of Memory-Driven Computing." The post HPE Introduces the World’s Largest Single-memory Computer appeared first on insideHPC.
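As a quick sanity check on that comparison, the arithmetic works out to roughly one megabyte per book. A minimal sketch; the per-book size and the Library of Congress count are our inference from HPE's figures, not numbers HPE published:

```cpp
#include <cstdio>

int main() {
    // Back-of-envelope check of HPE's comparison (assumed figures):
    // 160 TB of memory vs. roughly 160 million books implies about
    // 1 MB of text per book, and a Library of Congress of about
    // 32 million books (160 million divided by "five times over").
    const double memory_tb = 160.0;
    const double books = 160e6;
    const double mb_per_book = memory_tb * 1e6 / books;  // 1 TB = 1e6 MB
    std::printf("~%.1f MB per book; LoC ~ %.0f million books\n",
                mb_per_book, books / 5 / 1e6);
    return 0;
}
```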
|
High-Performance Computing News Analysis | insideHPC
Link | https://insidehpc.com/ |
Feed | http://insidehpc.com/feed/ |
Updated | 2024-11-25 11:15 |
by Rich Brueckner on (#2PSRN)
"The DEEP-ER project has created far-reaching impact. Its results have led to widespread innovation and substantially reinforced the position of European industry and academia in HPC. We are more than happy that we are granted the opportunity to continue our DEEP projects journey and generalize the Cluster-Booster approach to create a truly Modular Supercomputing system,†says Prof. Dr. Thomas Lippert, Head of Jülich Supercomputing Centre and Scientific Coordinator of the DEEP-ER project.The post DEEP-ER Project Paves the Way to Future Supercomputers appeared first on insideHPC.
|
by staff on (#2PSM3)
Today D-Wave Systems announced that it has received up to $50 million in funding from PSP Investments. This facility brings D-Wave’s total funding to approximately US$200 million. The new capital is expected to enable D-Wave to deploy its next-generation quantum computing system with more densely-connected qubits, as well as platforms and products for machine learning applications. "This commitment from PSP Investments is a strong validation of D-Wave’s leadership in quantum computing," said Vern Brownell, CEO of D-Wave. "While other organizations are researching quantum computing and building small prototypes in the lab, the support of our customers and investors enables us to deliver quantum computing technology for real-world applications today. In fact, we’ve already demonstrated practical uses of quantum computing with innovative companies like Volkswagen. This new investment provides a solid base as we build the next generation of our technology." The post D-Wave Lands $50M Funding for Next Generation Quantum Computers appeared first on insideHPC.
|
by Rich Brueckner on (#2PNVH)
We are very excited to bring you this livestream of the 2017 MSST Conference in Santa Clara. We'll be broadcasting all the talks Wednesday, May 17, starting at 8:30am PDT. The post Livestream: 2017 MSST Mass Storage Conference appeared first on insideHPC.
|
by MichaelS on (#2PSBH)
This is the fourth entry in an insideHPC series that explores the HPC transition to the cloud and what your business needs to know about this evolution. The series, also available as a complete Guide, covers cloud computing for HPC, why the OS is important when running HPC applications, OpenStack fundamentals and more. The post Why the OS is So Important when Running HPC Applications appeared first on insideHPC.
|
by Sarah Rubenoff on (#2PP5Z)
Many more companies are turning to in-memory computing (IMC) as they struggle to analyze and process increasingly large amounts of data. That said, it’s often hard to make sense of the growing world of IMC products and solutions. A recent white paper from GridGain aims to help businesses decide which solution best matches their specific needs. The post Why IMC is Right for Today’s Fast-Data and Big-Data Applications appeared first on insideHPC.
|
by staff on (#2PP61)
Today Cray and the Markley Group announced a partnership to provide supercomputing-as-a-service solutions. "The need for supercomputers has never been greater," said Patrick W. Gilmore, chief technology officer at Markley. "For the life sciences industry especially, speed to market is critical. By making supercomputing and big data analytics available in a hosted model, Markley and Cray are providing organizations with the opportunity to reap significant benefits, both economically and operationally." The post Cray and Markley Group to Offer Supercomputing as a Service appeared first on insideHPC.
|
by staff on (#2PNXX)
Today Fujitsu and 1QB Information Technologies Inc. announced that they are collaborating on quantum-inspired technology in the field of artificial intelligence, focusing on the areas of combinatorial optimization and machine learning. The companies will work together in both the Japanese and global markets to develop applications which address industry problems using AI developed for use with quantum computers. The post Fujitsu and 1QBit Collaborate on Quantum-Inspired AI Cloud Service appeared first on insideHPC.
|
by staff on (#2PN8J)
Next-generation sequencing methods are empowering doctors and researchers to improve their ability to treat diseases, predict and prevent diseases before they occur, and personalize treatments to specific patient profiles. "With this increase in knowledge comes a tidal wave of data. Genomic data is growing so quickly that scientists are predicting that this data will soon take the lead as the largest data category in the world, eventually creating more digital information than astronomy, particle physics and even popular Internet sites like YouTube." The post Genome Analytics Driving a Healthcare Revolution appeared first on insideHPC.
|
by staff on (#2PHMW)
Today Penguin Computing announced support for Singularity containers on its Penguin Computing On-Demand (POD) HPC Cloud and Scyld ClusterWare HPC management software. "Our researchers are excited about using Singularity on POD," said Jon McNally, Chief HPC Architect at ASU Research Computing. "Portability and the ability to reproduce an environment is key to peer-reviewed research. Unlike other container technologies, Singularity allows them to run at speed and scale." The post Penguin Computing Adds Support for Singularity Containers on POD HPC Cloud appeared first on insideHPC.
|
by staff on (#2PHFJ)
Today Mellanox announced that the University of Waterloo selected Mellanox EDR 100G InfiniBand solutions to accelerate their new supercomputer. The new supercomputer will support a broad and diverse range of academic and scientific research in mathematics, astronomy, science, the environment and more. "The growing demands for research and supporting more complex simulations led us to look for the most advanced, efficient, and scalable HPC platforms," said John Morton, technical manager for SHARCNET. "We have selected the Mellanox InfiniBand solutions because their smart acceleration engines enable high performance, efficiency and robustness for our applications." The post Mellanox InfiniBand to Power Science at University of Waterloo appeared first on insideHPC.
|
by staff on (#2PHDJ)
Today the Gauss Centre for Supercomputing (GCS) in Germany approved 30 large-scale projects as part of their 17th call for large-scale proposals. Combined, these projects received 2.1 billion core hours, marking the highest total ever delivered by the three GCS centres. "GCS awards large-scale allocations to researchers studying earth and climate sciences, chemistry, particle physics, materials science, astrophysics, and scientific engineering, among other research areas of great importance to society." The post Gauss Centre in Germany Awards 2.1 Billion Core Hours for Science appeared first on insideHPC.
|
by Rich Brueckner on (#2PH9A)
In this podcast, the Radio Free HPC team looks at Volta, Nvidia's new GPU architecture that delivers up to 5x the performance of its predecessor. "At the GPU Technology Conference, Nvidia CEO Jen-Hsun Huang introduced a lineup of new Volta-based AI supercomputers including a powerful new version of our DGX-1 deep learning appliance; announced the Isaac robot-training simulator; unveiled the NVIDIA GPU Cloud platform, giving developers access to the latest, optimized deep learning frameworks; and unveiled a partnership with Toyota to help build a new generation of autonomous vehicles." The post Radio Free HPC Looks at the New Volta GPUs for HPC & AI appeared first on insideHPC.
|
by staff on (#2PH44)
The Intel Omni-Path Architecture is the next-generation fabric for high-performance computing. In this feature, we will focus on what it takes for a supercomputer application to scale well by taking full advantage of processor features. "To keep these apps working at their peak means not letting them starve for the next bit or byte or calculated result—for whatever reason." The post Intel Omni-Path Architecture: What’s a Supercomputer App Want? appeared first on insideHPC.
|
by Rich Brueckner on (#2PED6)
The InfiniBand Trade Association (IBTA) has updated their InfiniBand Roadmap. With HDR 200 Gb/sec technologies shipping this year, the roadmap looks out to an XDR world where server connectivity reaches 1000 Gb/sec. "The IBTA's InfiniBand roadmap is continuously developed as a collaborative effort from the various IBTA working groups. Members of the IBTA working groups include leading enterprise IT vendors who are actively contributing to the advancement of InfiniBand. The roadmap details 1x, 4x, and 12x port widths with bandwidths reaching 600Gb/s data rate HDR in 2017. The roadmap is intended to keep the rate of InfiniBand performance increase in line with systems-level performance gains." The post InfiniBand Roadmap Foretells a World Where Server Connectivity is at 1000 Gb/sec appeared first on insideHPC.
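The roadmap's headline numbers are simple lane arithmetic: a port's data rate is the per-lane rate times the port width. A minimal sketch, assuming the commonly cited EDR (25 Gb/s) and HDR (50 Gb/s) per-lane data rates; the XDR lane rate is not stated in the article and is simply backed out of the 1000 Gb/s projection for a 4x port:

```cpp
#include <cstdio>

int main() {
    // Per-lane data rates in Gb/s. EDR and HDR are commonly cited
    // roadmap figures; the XDR value is only what the article's
    // 1000 Gb/s-per-4x-port projection implies (1000 / 4 lanes).
    struct Gen { const char* name; int per_lane_gbps; };
    const Gen gens[] = { {"EDR", 25}, {"HDR", 50}, {"XDR (implied)", 250} };
    const int widths[] = {1, 4, 12};  // port widths listed on the roadmap

    for (const Gen& g : gens)
        for (int w : widths)
            std::printf("%-14s %2dx = %4d Gb/s\n",
                        g.name, w, w * g.per_lane_gbps);
    return 0;
}
```

Running it reproduces the announcement's figures: HDR 4x at 200 Gb/s shipping this year, HDR 12x at 600 Gb/s, and an implied XDR 4x at 1000 Gb/s.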
|
by Rich Brueckner on (#2PEBJ)
"In this new world, every citizen needs data science literacy. UC Berkeley is leading the way on broad curricular immersion with data science, and other universities will soon follow suit. The definitive data science curriculum has not been written, but the guiding principles are computational thinking, statistical inference, and making decisions based on data. “Bootcamp†courses don't take this approach, focusing mostly on technical skills (programming, visualization, using packages). At many computer science departments, on the other hand, machine-learning courses with multiple pre-requisites are only accessible to majors. The key of Berkeley’s model is that it truly aims to be “Data Science for All.â€The post Lorena Barba Presents: Data Science for All appeared first on insideHPC.
|
by Rich Brueckner on (#2PBKV)
Darren Cepulis from ARM gave this talk at the HPC User Forum. "ARM delivers enabling technology behind HPC. The 64-bit design of the ARMv8-A architecture combined with Advanced SIMD vectorization are ideal to enable large scientific computing calculations to be executed efficiently on ARM HPC machines. In addition, ARM and its partners are working to ensure that all the software tools and libraries, needed by both users and systems administrators, are provided in readily available, optimized packages." The post Video: ARM HPC Ecosystem appeared first on insideHPC.
|
by staff on (#2PBGB)
"IBM PowerAI on Power servers with GPU accelerators provides at least twice the performance of our x86 platform; everything is faster and easier: adding memory, setting up new servers and so on," said current PowerAI customer Ari Juntunen, CTO at Elinar Oy Ltd. "As a result, we can get new solutions to market very quickly, protecting our edge over the competition. We think that the combination of IBM Power and PowerAI is the best platform for AI developers in the market today. For AI, speed is everything — nothing else comes close in our opinion." The post IBM’s New PowerAI Software Speeds Deep Learning appeared first on insideHPC.
|
by Rich Brueckner on (#2P84E)
In this podcast, Marc Hamilton from Nvidia describes how the new Volta GPUs will power the next generation of systems for HPC and AI. According to Nvidia, the Tesla V100 accelerator is the world’s highest performing parallel processor, designed to power the most computationally intensive HPC, AI, and graphics workloads. The post Podcast: Marc Hamilton on how Volta GPUs will Power Next-Generation HPC and AI appeared first on insideHPC.
|
by Rich Brueckner on (#2P7WX)
Liqid Inc. has fully integrated GPU support into the Liqid Composable Infrastructure (CI) Platform. "Liqid’s CI Platform is the first solution to support GPUs as a dynamic, assignable, bare-metal resource. With the addition of graphics processing, the Liqid CI Platform delivers the industry’s most fully realized approach to composable infrastructure architecture." The post Liqid Delivers Composable Infrastructure Solution for Dynamic GPU Resource Allocation appeared first on insideHPC.
|
by Rich Brueckner on (#2P7RE)
The MSST Mass Storage Conference in Silicon Valley is just a few days away, and the agenda is packed with High Performance Computing topics. In one of the invited talks, Kimberly Keeton from Hewlett Packard Enterprise will speak on Memory Driven Computing. We caught up with Kimberly to learn more. The post Memory Driven Computing in the Spotlight at MSST Conference Next Week appeared first on insideHPC.
|
by Rich Brueckner on (#2P4ZY)
This week at the GPU Technology Conference, Nvidia CEO Jensen Huang on Wednesday launched Volta, a new GPU architecture that delivers 5x the performance of its predecessor. "Over the course of two hours, Huang introduced a lineup of new Volta-based AI supercomputers including a powerful new version of our DGX-1 deep learning appliance; announced the Isaac robot-training simulator; unveiled the NVIDIA GPU Cloud platform, giving developers access to the latest, optimized deep learning frameworks; and unveiled a partnership with Toyota to help build a new generation of autonomous vehicles." The post Nvidia Unveils GPUs with Volta Architecture appeared first on insideHPC.
|
by Richard Friedman on (#2P3TW)
Parallel STL now makes it possible to transform existing sequential C++ code to take advantage of the threading and vectorization capabilities of modern hardware architectures. It does this by extending the C++ Standard Template Library with an execution policy argument that specifies the degree of threading and vectorization for each algorithm used. The post C++ Parallel STL Introduced in Intel Parallel Studio XE 2018 Beta appeared first on insideHPC.
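To make the execution-policy mechanism concrete, here is a minimal sketch in the standardized C++17 spelling (std::execution); the exact namespaces in the Parallel Studio XE 2018 Beta implementation may differ from what eventually landed in the standard:

```cpp
#include <algorithm>
#include <execution>
#include <vector>

int main() {
    std::vector<double> v(1'000'000, 1.0);

    // Sequential: the classic STL call, no policy argument.
    std::for_each(v.begin(), v.end(), [](double& x) { x *= 2.0; });

    // Parallel and vectorized: the policy argument is the only change.
    // std::execution::par requests threading; par_unseq additionally
    // permits vectorization within each thread.
    std::for_each(std::execution::par_unseq, v.begin(), v.end(),
                  [](double& x) { x *= 2.0; });
    return 0;
}
```

With GCC or Clang this typically requires -std=c++17 and linking against Intel TBB, which supplies the threading runtime behind the parallel policies.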
|
by Rich Brueckner on (#2P0B2)
Over at TACC, Aaron Dubrow writes that researchers are using TACC supercomputers to improve, plan, and understand the basic science of radiation therapy. "The science of calculating and assessing the radiation dose received by the human body is known as dosimetry – and here, as in many areas of science, advanced computing plays an important role." The post Supercomputing High Energy Cancer Treatments appeared first on insideHPC.
|
by Rich Brueckner on (#2P096)
Bob Wisniewski from Intel presents: OpenHPC: A Cohesive and Comprehensive System Software Stack. "OpenHPC is a collaborative, community effort that initiated from a desire to aggregate a number of common ingredients required to deploy and manage High Performance Computing (HPC) Linux clusters including provisioning tools, resource management, I/O clients, development tools, and a variety of scientific libraries." The post OpenHPC: A Comprehensive System Software Stack appeared first on insideHPC.
|
by staff on (#2P07A)
Today the Living Computers: Museum + Labs added a pair of Cray supercomputers to its permanent collection. The Cray-1 supercomputer went on display at Living Computers this week and will be joined by the Cray-2 supercomputer later this year. Living Computers intends to recommission the Cray-2 and make it available to the public. "I honestly can’t overstate how important these two supercomputers are to computing history, and I am thrilled to be adding them to our collection," says Lath Carlson, Executive Director of Living Computers. "Bringing these milestones back from the depths of storage has been an incredible journey, and we look forward to making them available to the public." Living Computers will also host a private reception for the Cray User Group on May 9th to celebrate the new home of these Cray supercomputers. The post Living Computers Museum Adds Two Iconic Cray Systems appeared first on insideHPC.
|
by staff on (#2P03F)
Today Bright Computing announced the release of Bright Cluster Manager 8.0 and Bright OpenStack 8.0 with advanced, integrated solutions to improve ease-of-use and management of HPC and Big Data clusters as well as private and public cloud environments. "In our latest software release, we incorporated many new features that our users have requested," said Martijn de Vries, Chief Technology Officer of Bright Computing. De Vries continues, "We’ve made significant improvements that provide greater ease-of-use for systems administrators as well as end-users when creating and managing their cluster and cloud environments. Our goal is to increase productivity to decrease the time to results." The post Bright Cluster Manager 8.0 Release Sets New Standard for Automation appeared first on insideHPC.
|
by staff on (#2NZTY)
This week at the GPU Technology Conference, Nvidia and the Barcelona Supercomputing Center demonstrated an interactive visualization of a cardiac computational model that shows the potential of HPC-based simulation codes and GPU-accelerated clusters to simulate the human cardiovascular system. "This demonstration brings together the Alya simulation code and NVIDIA IndeX scalable software to implement an in-situ visualization solution for electromechanical simulations of the BSC cardiac computational model. While Alya simulates the electromechanical cardiac propagation, NVIDIA IndeX is used for immediate in-situ visualization. The in-situ visualization allows researchers to interact with the data on the fly, giving a better insight into the simulations." The post BSC and NVIDIA Showcase Interactive Simulation of Human Body appeared first on insideHPC.
|
by Rich Brueckner on (#2NWMD)
W. Joe Allen from TACC gave this talk at the HPC User Forum. "The Agave Platform brings the power of high-performance computing into the clinic," said William (Joe) Allen, a life science researcher for TACC and lead author on the paper. "This gives radiologists and other clinical staff the means to provide real-time quality control, precision medicine, and overall better care to the patient." The post Leveraging HPC for Real-Time Quantitative Magnetic Resonance Imaging appeared first on insideHPC.
|
by staff on (#2NWJT)
Today, DOE announced nearly $3.9 million for 13 projects designed to stimulate the use of high performance supercomputing in U.S. manufacturing. The Office of Energy Efficiency and Renewable Energy (EERE) Advanced Manufacturing Office's High Performance Computing for Manufacturing (HPC4Mfg) program enables innovation in U.S. manufacturing through the adoption of high performance computing to advance applied science and technology relevant to manufacturing. HPC4Mfg aims to increase the energy efficiency of manufacturing processes, advance energy technology, and reduce energy's impact on the environment through innovation. The post DOE to Fund HPC for Manufacturing appeared first on insideHPC.
|
by staff on (#2NWH7)
Today SkyScale announced the launch of its world-class, ultra-fast multi-GPU hardware platforms in the cloud, available for lease to customers desiring the fastest performance available as a service anywhere on the globe. "Employing OSS compute and flash storage systems gives SkyScale customers the overwhelming competitive advantage in the cloud they have been asking for," said Tim Miller, President of SkyScale. "The systems we deploy at SkyScale in the cloud are identical to the hundreds of systems in the field and in the sky, trusted in the most rigorous defense, space, and medical deployments with cutting-edge, rock-solid performance, stability, and reliability. By making these systems accessible on a time-rental basis, developers have the advantage of using the most sophisticated systems available to run their algorithms without having to own them, and they can scale up or down as needs change. SkyScale is the only service that gives customers the opportunity to lease time on these costly systems, saving time-to-deployment and money." The post SkyScale Announces "World’s Fastest Cloud Computing Service" appeared first on insideHPC.
|
by staff on (#2NW5V)
One Stop Systems announced plans to exhibit a wide array of its high-density GPU appliances at the GPU Technology Conference in San Jose. "One Stop Systems offers a wide variety of GPU appliances with different power density solutions to support a range of customer needs," said Steve Cooper, OSS CEO. "Customers can choose a solution that fits their rack space and budget while still ensuring they get the compute power they need for their application. OSS GPU Appliances allow for tremendous performance gains in many applications like deep learning, oil and gas exploration, financial calculations, and medical devices. As GPU technology continues to improve, OSS products are immediately able to accommodate the newest and most powerful GPUs." The post One Stop Systems Showcases High-Density GPU Appliances at GTC 2017 appeared first on insideHPC.
|
by staff on (#2NRE3)
Today Infervision introduced its innovative deep learning solution to help radiologists identify suspicious lesions and nodules in lung cancer patients faster than ever before. The Infervision AI platform is the world’s first to reshape the workflow of radiologists, and it is already showing dramatic results at several top hospitals in China. The post AI Technology from China Helps Radiologists Detect Lung Cancer appeared first on insideHPC.
|
by staff on (#2NRC8)
Today MapD Technologies released the MapD Core database to the open source community under the Apache 2 license, seeding a new generation of data applications. "Open source is sparking innovation for data science and analytics developers," said Greg Papadopoulos, venture partner at New Enterprise Associates (NEA). "An open-source GPU-powered SQL database will make entirely new applications possible, especially in machine learning where GPUs have had such an enormous impact. We're incredibly proud to partner with the MapD team as they take this pivotal step." The post MapD Open Sources High-Speed GPU-Powered Database appeared first on insideHPC.
|
by staff on (#2NR8Y)
Today IBM announced its development of Non-Volatile Memory Express (NVMe) solutions to provide clients the ability to significantly lower latencies in an effort to speed data to and from storage solutions and systems. "IBM's developers are re-tooling the end-to-end storage stack to support this new, faster interconnect protocol to boost the experience of everyone consuming the massive amounts of data now being perpetuated across cloud services, retail, banking, travel and other industries." The post IBM Supports New Faster Protocols for NVMe Flash Storage appeared first on insideHPC.
|
by Rich Brueckner on (#2NR5V)
In this podcast, the Radio Free HPC team reviews the results from the ASC17 Student Cluster Competition finals in Wuxi, China. "As the world's largest supercomputing competition, ASC17 received applications from 230 universities around the world, 20 of which got through to the final round held this week at the National Supercomputing Center in Wuxi after the qualifying rounds. During the final round, the university student teams were required to independently design a supercomputing system under the precondition of a limited 3000W power consumption." The post Radio Free HPC Reviews the ASC17 Student Cluster Competition appeared first on insideHPC.
|
by Rich Brueckner on (#2NR0R)
In this slidecast, Ronald P. Luijten from IBM Research in Zurich presents: DOME 64-bit μDataCenter. "I like to call it a datacenter in a shoebox. With the combination of power and energy efficiency, we believe the microserver will be of interest beyond the DOME project, particularly for cloud data centers and Big Data analytics applications." The post Leaping Forward in Energy Efficiency with the DOME 64-bit μDataCenter appeared first on insideHPC.
|
by Rich Brueckner on (#2NNB2)
"Focused on High Performance and Scientific Computing at Novartis Institutes for Biomedical Research (NIBR), in Basel Switzerland, Nick Holway and his team provide HPC resources and services, including programming and consultancy for the innovative research organization. Supporting more than 6,000 scientists, physicians and business professionals from around the world focused on developing medicines and devices that can produce positive real-world outcomes for patients and healthcare providers, Nick also contributes expertise in bioinformatics, image processing and data science in support of the researchers and their works."The post Video: HPC at NIBR appeared first on insideHPC.
|
by Rich Brueckner on (#2NN9P)
The Naval Air Systems Command is seeking a Computer Scientist in our Job of the Week. The post Job of the Week: Computer Scientist at Naval Air Systems Command appeared first on insideHPC.
|
by Rich Brueckner on (#2NJGG)
"Big data analytics, machine learning and deep learning are among the most rapidly growing workloads in the data center. These workloads have the compute performance requirements of traditional technical computing or high performance computing, coupled with a much larger volume and velocity of data. Conventional data center architectures have not kept up with the needs for these workloads. To address these new client needs, IBM has adopted an innovative, open business model through its OpenPOWER initiative."The post Video: IBM Datacentric Servers & OpenPOWER appeared first on insideHPC.
|
by Rich Brueckner on (#2NJFE)
The 25th International Symposium on High Performance Interconnects (HotI 2017) has issued its Call for Papers. The event takes place August 29-30 at the Ericsson Campus in Santa Clara, California. "Hot Interconnects is the premier international forum for researchers and developers of state-of-the-art hardware and software architectures and implementations for interconnection networks of all scales, ranging from multi-core on-chip interconnects to those within systems, clusters, data centers, and clouds. This yearly conference is attended by leaders in industry and academia. The atmosphere provides for a wealth of opportunities to interact with individuals at the forefront of this field." The post Call for Papers: Hot Interconnects appeared first on insideHPC.
|
by staff on (#2NF9F)
Today SC17 announced that 16 teams will take part in the Student Cluster Competition. Hailing from across the U.S., as well as Asia and Europe, the student teams will race to build HPC clusters and run a full suite of applications in the space of just a few days. The post Sixteen Teams to Compete in SC17 Student Cluster Competition appeared first on insideHPC.
|
by staff on (#2NF48)
In this TACC podcast, host Jorge Salazar interviews Xian-He Sun, Distinguished Professor of Computer Science at the Illinois Institute of Technology. Computer scientists working in his group are bridging the file system gap with a cross-platform Hadoop reader called PortHadoop, short for portable Hadoop. "We tested our PortHadoop-R strategy on Chameleon. In fact, the speedup is 15 times faster," said Xian-He Sun. "It's quite amazing." The post Podcast: PortHadoop Speeds Data Movement for Science appeared first on insideHPC.
|
by Rich Brueckner on (#2NF29)
"Hyperion research (previously the HPC team at IDC) has launched a program to both collect this data and recognize noteworthy achievements using High Performance Computing (HPC) resources. We are interested in Innovation and/or ROI examples from today or dating back as far as 10 years. Please complete and submit a separate application form for each ROI / Innovation success story."The post Nominate Your Customers for the 2017 HPC Innovation Awards appeared first on insideHPC.
|
by Rich Brueckner on (#2NEYJ)
"The interconnect is going to be a key enabling technology for exascale systems. This is why one of the cornerstones of Bull’s exascale program is the development of our own new-generation interconnect. The Bull eXascale Interconnect or BXI introduces a paradigm shift in terms of performance, scalability, efficiency, reliability and quality of service for extreme workloads."The post Slidecast: BXI – Bull eXascale Interconnect appeared first on insideHPC.
|
by Rich Brueckner on (#2NBMJ)
Peter Braam is well known in the HPC community for his early work on Lustre and other projects such as the SKA telescope's Science Data Processor. As one of the featured speakers at the upcoming MSST Mass Storage Conference, Braam will describe how his startup, Campaign Storage, provides tools for massively parallel data movement between the new low-cost, industry-standard campaign storage tier and premium storage for performance or availability. The post Interview: Peter Braam on How Campaign Storage Bridges the Small & Big, Fast & Slow appeared first on insideHPC.
|
by Rich Brueckner on (#2NBGZ)
Hewlett Packard Enterprise has posted the preliminary agenda for HP-CAST. As HPE's user group meeting for high performance computing, the event takes place June 16-17 in Frankfurt, just prior to ISC 2017. "The High Performance Consortium for Advanced Scientific and Technical Computing (HP-CAST) users group works to increase the capabilities of Hewlett Packard Enterprise solutions for large-scale, scientific and technical computing. HP-CAST provides guidance to Hewlett Packard Enterprise on the essential development and support issues for such systems. HP-CAST meetings typically include corporate briefings and presentations by HPE executives and technical staff (under NDA), and discussions of customer issues related to high-performance technical computing." The post Agenda Posted for HP-CAST at ISC 2017 appeared first on insideHPC.
|
by Rich Brueckner on (#2NBDQ)
Martin Hilgeman from Dell gave this talk at the Switzerland HPC Conference. "With all the advances in massively parallel and multi-core computing with CPUs and accelerators, it is often overlooked whether the computational work is being done in an efficient manner. This efficiency is largely being determined at the application level and therefore puts the responsibility of sustaining a certain performance trajectory into the hands of the user. This presentation shows the well-known laws of parallel performance from the perspective of a system builder." The post Trends in Systems and How to Get Efficient Performance appeared first on insideHPC.
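The slides themselves aren't reproduced here, but the well-known laws in question are presumably Amdahl's and Gustafson's. A minimal sketch of both, for a hypothetical code in which a fraction p of the work parallelizes across n processors:

```cpp
#include <cstdio>

// Amdahl's law: fixed problem size; the serial fraction (1 - p) caps speedup.
double amdahl(double p, int n)    { return 1.0 / ((1.0 - p) + p / n); }

// Gustafson's law: problem size grows with n; scaled speedup is more forgiving.
double gustafson(double p, int n) { return (1.0 - p) + p * n; }

int main() {
    const double p = 0.95;  // assume 95% of the work parallelizes
    const int ns[] = {8, 64, 512};
    for (int n : ns)
        std::printf("n = %3d: Amdahl %6.2fx, Gustafson %6.2fx\n",
                    n, amdahl(p, n), gustafson(p, n));
    return 0;
}
```

Even with 95% of the work parallel, Amdahl's speedup saturates near 1/(1-p) = 20x no matter how many processors are added, which is precisely the sense in which efficiency is decided at the application level.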
|
by staff on (#2NBAG)
Today Fastdata.io announced it has raised a total of $1.5 million from NVIDIA and other investors. The company is introducing "the world’s fastest and most efficient stream processing software engine" to meet the critical and growing need for efficient, real-time big data processing. Fastdata.io will use the financing to invest in product development, marketing and talent acquisition. The post Nvidia Funds AI Startup Fastdata.io appeared first on insideHPC.
|
by staff on (#2NB8R)
The San Diego Supercomputer Center has been granted a supplemental award from the National Science Foundation to double the number of GPUs on its petascale-level Comet supercomputer. "This expansion is reflective of a wider adoption of GPUs throughout the scientific community, which is being driven in large part by the availability of community-developed applications that have been ported to and optimized for GPUs," said SDSC Director Michael Norman, who is also the principal investigator for the Comet program. The post Comet Supercomputer Doubles Down on Nvidia Tesla P100 GPUs appeared first on insideHPC.
|