by staff on (#4NZVA)
Today Mellanox announced that its RDMA (Remote Direct Memory Access) networking solutions for VMware vSphere enable virtualized Machine Learning solutions that achieve higher GPU utilization and efficiency. "As Moore's Law has slowed, traditional CPU and networking technologies are no longer sufficient to support the emerging machine learning workloads," said Kevin Deierling, vice president of marketing, Mellanox Technologies. "Using hardware compute accelerators such as NVIDIA T4 GPUs and Mellanox's RDMA networking solutions has proven to boost application performance in virtualized deployments." The post Mellanox Powers Virtualized Machine Learning with VMware and NVIDIA appeared first on insideHPC.
|
High-Performance Computing News Analysis | insideHPC
Link | https://insidehpc.com/ |
Feed | http://insidehpc.com/feed/ |
Updated | 2024-11-24 09:00 |
by staff on (#4NZVB)
Today Supermicro extended its vSAN system portfolio and introduced a new enterprise-class vSAN solution -- Ultra SuperServer -- to its broad portfolio of fully configured, ready-to-deploy server systems. Supermicro solutions, coupled with industry-proven vSAN, provide turnkey solutions for the hyper-converged infrastructure marketplace. "The Supermicro 2U/1U Ultra SuperServers are configurable with support for 20, 10, 4, or 2 hot-swappable NVMe drives and leverage 2nd Gen Intel Xeon Scalable processors and Intel Optane DC SSDs. The BigTwin, a high-density multi-node (2U 4-node) system, is optimized for mission-critical applications, supporting up to 6TB of memory per node, and is configured for hyper-converged infrastructure. Both systems are ideal for specific workloads, offering operational simplicity, scalability, low total cost of ownership (TCO), and resource savings for intelligent enterprise deployments." The post Supermicro Launches New High-Performance vSAN Solution appeared first on insideHPC.
|
by staff on (#4NZQG)
Atos and India's C-DAC have signed a Cooperation Agreement to work together in the areas of Quantum Computing, Artificial Intelligence, and Exascale Computing. "Building on our position as the leading technology provider globally for Supercomputing, AI, Quantum Computing and others, this agreement is a significant step forward in our strategic relationship. This will strengthen the R&D activities between France and India, with C-DAC and Atos significantly contributing to technology development and national economic growth." The post Atos and C-DAC to Collaborate on AI, Quantum Computing, and Exascale in India appeared first on insideHPC.
|
by staff on (#4NZJJ)
In this special guest feature, Dan Olds from OrionX.net writes that IBM's ambitions in the HPC market have fallen short. "Without more commitment and support, IBM’s POWER processor runs the risk of becoming this generation’s version of the DEC Alpha CPU. It was a fantastic processor and ran rings around competitors, but it ultimately failed because of a lack of marketing and operating environment support from DEC." The post IBM’s HPC Dreams are Tattered appeared first on insideHPC.
|
by Rich Brueckner on (#4NZJK)
Miguel Terol from Lenovo gave this talk at HPCKP'19. "Technology players are refining their chip and platform designs to enable much denser systems. The trade-off of this trend is chips are getting more and more power hungry, and cooling those components becomes a challenge in terms of sustainability, either for the environment or the economy. In this talk we will present the high density technology landscape and different approaches to address the cooling challenges." The post Cooling Challenges for Ultra-high Density Compute Clusters appeared first on insideHPC.
|
by Rich Brueckner on (#4NY08)
Gilad Shainer from Mellanox gave this talk at the MVAPICH User Group. "In-Network Computing transforms the data center interconnect into a "distributed CPU" and "distributed memory," making it possible to overcome performance barriers and to enable faster and more scalable data analysis. These technologies are in use at some of the recent large-scale supercomputers around the world, including the top TOP500 platforms. The session will discuss the InfiniBand In-Network Computing technology and performance results, as well as a view of the future roadmap." The post Video: InfiniBand In-Network Computing Technology and Roadmap appeared first on insideHPC.
|
by Rich Brueckner on (#4NXWZ)
In this Conversations in the Cloud podcast, Esther Baldwin from Intel describes how the convergence of HPC and AI is driving innovation. "On the topic of HPC & AI converged clusters, there’s a perception that if you want to do AI, you must stand up a separate cluster, which Esther notes is not true. Existing HPC customers can do AI on their existing infrastructure with solutions like HPC & AI converged clusters." The post Podcast: HPC & AI Convergence Enables AI Workload Innovation appeared first on insideHPC.
|
by staff on (#4NWFV)
In this video, PCI-SIG President and Board Member Al Yanes shares an overview of the PCI Express 5.0 and 6.0 specifications. "With the PCIe 6.0 specification, PCI-SIG aims to answer the demands of such hot markets as Artificial Intelligence, Machine Learning, networking, communication systems, storage, High-Performance Computing, and more." The post Video: PCI Express 6.0 Specification to Reach 64 GigaTransfers/sec appeared first on insideHPC.
|
by Rich Brueckner on (#4NWDD)
Pawsey’s Supercomputing Team is responsible for providing the infrastructure to fulfill the needs of the Australian research community and for engaging with that community to make the best use of the infrastructure. You will be part of this highly skilled team of professional specialists and developers, and you will work collaboratively with researchers to assist them in exploiting the vast opportunities afforded by the infrastructure operated in the Pawsey centre. The post Job of the Week: Supercomputing Applications Specialist at CSIRO appeared first on insideHPC.
|
by staff on (#4NTJV)
The good folks at Basement Supercomputing have published a white paper comparing one of the company's on-prem Limulus personal HPC appliances to Amazon EC2 cloud instances. The economics are quite interesting. "In this careful study, equivalent EC2 cluster instances were configured using current pricing (Spring 2019). A comparison of Basement Supercomputing Limulus appliance workstations with the Amazon EC2 (Elastic Compute Cloud) is presented. The capabilities of both approaches are discussed, along with a detailed comparison of two Limulus appliance designs." The post White Paper: Basement Supercomputer beats Amazon EC2 on Cost and Performance appeared first on insideHPC.
|
by staff on (#4NTDP)
Quantum Computing holds tremendous promise, but it could also put today's cryptography-based security systems at risk. Enter IBM, which just announced plans to provide quantum-safe cryptography services on the IBM public cloud in 2020. The company is now offering a Quantum Risk Assessment from IBM Security to help customers assess their risk in the quantum world. Additionally, IBM cryptographers have prototyped the world's first quantum-computing-safe enterprise-class tape, an important step before commercialization. The post IBM Cloud to Provide Quantum-safe Cryptography in 2020 appeared first on insideHPC.
|
by Rich Brueckner on (#4NTDR)
Robert Harrison from Brookhaven gave this talk at the MVAPICH User Group. "MADNESS, TESSE/EPEXA, and MolSSI are three quite different large and long-lived projects that provide different perspectives and driving needs for the future of message passing. All three of these projects employ MPI and have a vested interest in computation at all scales, spanning the classroom to future exascale systems." The post Video: Three Perspectives on Message Passing appeared first on insideHPC.
|
by Rich Brueckner on (#4NRAK)
A recent report from Hyperion Research indicates that as much as 10 percent of HPC workloads are already in the cloud and that 70 percent of HPC centers are running some jobs in public clouds. Are these numbers indicative of what you're seeing in your workplace? There is one quick way to find out: take our HPC Cloud Survey. The post Silence the Critics: Take our Quick Survey on HPC Cloud appeared first on insideHPC.
|
by staff on (#4NREX)
In this Conversations in the Cloud podcast, Lee Carter from Bright Computing describes the company's comprehensive software for deploying, managing, and monitoring clustered infrastructure, in the data center or the cloud. "Looking forward, Lee speculates that the future of AI will see new technologies like autonomous cars using complex simulation and modeling in small-scale systems, continuing the legacy of HPC." The post Podcast: Reducing Complexity with Cluster Management from Bright Computing appeared first on insideHPC.
|
by staff on (#4NR4K)
In this special guest feature, Calista Redmond writes that the European Processor Initiative is designing an HPC accelerator based on RISC-V. "The accelerator will be designed for high throughput and power efficiency within the general purpose processor (GPP) chip. The EPI explains that using RISC-V enables the program to leverage "open source resources at [the] hardware architecture level and software level, as well as ensuring independence from non-European patented computing technologies." The post How the European Processor Initiative is Leveraging RISC-V for the Future of Supercomputing appeared first on insideHPC.
|
by Rich Brueckner on (#4NQZD)
A new HPE supercomputer at NASA’s Ames Research Center will run modeling and simulation workloads for lunar landings. The 3.69 Petaflop "Aitken" system is a custom-designed supercomputer that will support modeling and simulations of entry, descent, and landing (EDL) for the agency’s missions and the Artemis program, which aims to land the next humans on the lunar South Pole region by 2024. The post Aitken Supercomputer from HPE to Support NASA Moon Missions appeared first on insideHPC.
|
by Rich Brueckner on (#4NQZF)
Christian Kniep gave this talk at HPCKP'19. "This talk will dissect the convergence by refreshing the audience’s memory on what containerization is about, segueing into why AI/ML workloads are eventually leading to fully fledged HPC applications and how this will inform the way forward. In conclusion, Christian will discuss the three main challenges in container technology -- `Hardware Access`, `Data Access`, and `Distributed Computing` -- and how they can be tackled by the power of open source, while focusing on the first." The post Containerized Convergence of Big Data and Big Compute appeared first on insideHPC.
|
by staff on (#4NNS7)
Today Xilinx announced the expansion of its 16 nanometer (nm) Virtex UltraScale+ family to now include the world’s largest FPGA — the Virtex UltraScale+ VU19P. With 35 billion transistors, the VU19P provides the highest logic density and I/O count on a single device ever built, enabling emulation and prototyping of tomorrow’s most advanced ASIC and SoC technologies, as well as test, measurement, compute, networking, aerospace and defense-related applications. The post Video: Xilinx Unveils World’s Largest FPGA appeared first on insideHPC.
|
by staff on (#4NNKB)
This week at The Linux Foundation Open Source Summit, IBM announced it is opening the POWER Instruction Set Architecture (ISA). "The opening of the Power ISA, an architecture with a long and distinguished history, will help the open hardware movement continue to gain momentum," said Mateo Valero, Director of Barcelona Supercomputing Center. "BSC, which has collaborated with IBM for more than two decades, is excited that IBM's announcements today provide additional options to initiatives pursuing innovative new processor and accelerator development with freedom of action." The post IBM Opens POWER Instruction Set Architecture appeared first on insideHPC.
|
by staff on (#4NNEC)
The CANcer Distributed Learning Environment, or CANDLE, is a cross-cutting initiative of the Joint Design of Advanced Computing Solutions for Cancer collaboration and is supported by DOE’s Exascale Computing Project (ECP). CANDLE is building a scalable deep learning environment to run on DOE’s most powerful supercomputers. The goal is to have an easy-to-use environment that can take advantage of the full power of these systems to find the optimal deep-learning models for making predictions in cancer. The post Exascale CANDLE Project to Fight Against Cancer appeared first on insideHPC.
|
by Rich Brueckner on (#4NNEE)
In this podcast, the Radio Free HPC team looks into the AMD Rome CPU, a beast that brings back the glory days of Opteron, establishing itself as the chip to have and AMD as the company to beat. After that, they kick off their new regular segment: Henry Newman’s Feel-Good Security Corner. The post Podcast: AMD is Back to Glory Days with Rome CPU appeared first on insideHPC.
|
by Rich Brueckner on (#4NNEF)
Dan Stanzione from TACC gave this talk at the MVAPICH User Group. "In this talk, I will describe the main components of the award: the Phase 1 system, "Frontera", the plans for facility operations and scientific support for the next five years, and the plans to design a Phase 2 system in the mid-2020s to be the NSF Leadership system for the latter half of the decade, with capabilities 10x beyond Frontera. The talk will also discuss the key role MVAPICH and InfiniBand play in the project, and why the workload for HPC still can't fit effectively on the cloud without advanced networking support." The post Frontera: The Next Generation NSF HPC Resource, and Why HPC Still isn’t the Cloud appeared first on insideHPC.
|
by staff on (#4NKAP)
Today NetApp announced the NetApp EF600 storage array. The EF600 is an end-to-end NVMe midrange array that accelerates access to data and empowers companies to rapidly develop new insights for performance-sensitive workloads. "The storage industry is currently transitioning from the SAS to the NVMe protocol, which significantly increases the speed of access to data," said Tim Stammers, senior analyst, 451 Research. "But conventional storage systems do not fully exploit NVMe performance, because of latencies imposed by their main controllers. NetApp’s E-Series systems were designed to address this architectural issue and are already used widely in performance-sensitive applications. The EF600 sets a new level of performance for the E-Series by introducing end-to-end support for NVMe, and should be considered by IT organizations looking for high-speed storage to serve analytics and other data-intensive applications." The post NetApp EF600 Storage Array Speeds HPC and Analytics appeared first on insideHPC.
|
by staff on (#4NK0E)
Engineers are unlocking increased compute capacity to achieve advancements in 5G, autonomous systems, electric vehicles, and other global megatrends thanks to ANSYS Cloud HPC, powered by Microsoft Azure. Available directly within ANSYS engineering simulation software, ANSYS Cloud is helping organizations rapidly run high-fidelity simulations, shortening development cycles and accelerating time to market. "Organizations benefit from Azure's vast number of on-demand compute cores to run large parallel and tightly coupled simulations, enabled by infrastructure specifically designed for HPC featuring RDMA InfiniBand. With Azure, ANSYS customers get performance without sacrificing security, as sensitive proprietary data remains highly secure and protected by technologies that prohibit unauthorized access." The post ANSYS Cloud HPC Increases Simulation throughput for Hundreds of Organizations appeared first on insideHPC.
|
by staff on (#4NK0G)
Today at Hot Chips 2019, Intel revealed new details of upcoming high-performance AI accelerators: Intel Nervana neural network processors, with the NNP-T for training and the NNP-I for inference. Intel engineers also presented technical details on hybrid chip packaging technology, Intel Optane DC persistent memory, and chiplet technology for optical I/O. insideHPC has all the details here, in one place. The post Intel Talks at Hot Chips gear up for "AI Everywhere" appeared first on insideHPC.
|
by staff on (#4NJTT)
Today Sylabs announced that Singularity Enterprise is now generally available as a self-hosted offering, making it faster and easier for businesses to adopt containerization across their production environments. In private beta since April of this year, Singularity Enterprise has grabbed the attention of DevOps and IT infrastructure teams at leading businesses and government organizations for expediting containerized workloads from development into production. The post Singularity Enterprise to Accelerate Adoption of Containers with Cryptographically Verifiable Trust appeared first on insideHPC.
|
by Rich Brueckner on (#4NJTV)
Karl Schultz from the Oden Institute gave this talk at HPCKP'19. "Formed initially in November 2015 and formalized as a Linux Foundation project in June 2016, OpenHPC has been adding new software components and now supports multiple OSes and architectures. This presentation will present an overview of the project, currently available software, and highlight more recent changes along with general project updates and future plans." The post OpenHPC: Community Building Blocks for HPC Systems appeared first on insideHPC.
|
by staff on (#4NG8C)
Today AI startup Cerebras Systems unveiled the largest chip ever built. "The Cerebras Wafer-Scale Engine (WSE) is the largest chip ever built. 56x larger than any other chip, the WSE delivers more compute, more memory, and more communication bandwidth. This enables AI research at previously-impossible speeds and scale." The post Cerebras Systems Unveils the Industry’s First Trillion Transistor Chip for AI appeared first on insideHPC.
|
by staff on (#4NG3R)
Today UPMEM announced a Processing-in-Memory (PIM) acceleration solution that allows big data and AI applications to run 20 times faster and with 10 times less energy. Instead of moving massive amounts of data to CPUs, the silicon-based technology from UPMEM puts CPUs right in the middle of data, saving time and improving efficiency. By allowing compute to take place directly in the memory chips where data already resides, data-intensive applications can be substantially accelerated. The post UPMEM Puts CPUs Inside Memory to Allow Apps to Run 20 Times Faster appeared first on insideHPC.
|
by staff on (#4NG3T)
In this video, Oded Green from NVIDIA unboxes a DGX-1 supercomputer at the College of Computing Data Center at Georgia Tech. "And while the DGX-1 arriving at Georgia Tech for student use is exciting enough, there is cause for more celebration, as a DGX Station also arrived this year as part of a new NVIDIA Artificial Intelligence Lab (NVAIL) grant awarded to CSE. The NVAIL grant focuses on developing multi-GPU graph analytics, and the DGX Station is constructed specifically for data science and artificial intelligence development." The post Video: Unboxing the NVIDIA DGX-1 Supercomputer at Georgia Tech appeared first on insideHPC.
|
by staff on (#4NG3W)
In this special guest feature, Jorge Salazar from TACC writes that researchers are using XSEDE supercomputers to better understand shock turbulence interactions. "We proposed that, instead of treating the shock as a discontinuity, one needs to account for its finite thickness, as in real life, which may be involved as a governing parameter in, for example, amplification factors," Donzis said. The post Simulating Shock Turbulence Interactions on Stampede II appeared first on insideHPC.
|
by staff on (#4NEND)
Today Micron Technology announced advancements in DRAM scaling, making Micron the first memory company to begin mass production of 16Gb DDR4 products using 1z nm process technology. "The optimized balance between power and performance will be a key differentiator for applications including, among others, artificial intelligence, autonomous vehicles, 5G, mobile devices, graphics, gaming, network infrastructure and servers." The post Micron Starts Volume Production of 1z Nanometer DRAM Process Node appeared first on insideHPC.
|
by staff on (#4NEFQ)
Using the Titan supercomputer at Oak Ridge National Laboratory, a team of astrophysicists created a set of galactic wind simulations of the highest resolution ever performed. The simulations will allow researchers to gather and interpret more accurate, detailed data that elucidates how galactic winds affect the formation and evolution of galaxies. The post Supercomputing Galactic Winds with Cholla appeared first on insideHPC.
|
by staff on (#4NDKZ)
A collaboration that includes researchers from NERSC was recently honored with an HPC Innovation Excellence Award for their work on "Physics-Based Unsupervised Discovery of Coherent Structures in Spatiotemporal Systems." The award was presented in June by Hyperion Research during the ISC19 meeting in Frankfurt, Germany. The post HPC Innovation Excellence Award Showcases Physics-based Scientific Discovery appeared first on insideHPC.
|
by Rich Brueckner on (#4NDM1)
New York University is seeking a Senior HPC Specialist in our Job of the Week. "In this role, you will provide technical leadership in the design, development, installation, and maintenance of hardware and software for the central High-Performance Computing systems and/or research computing services at New York University. You will plan, design, and install Linux operating system hardware, cluster management software, scientific computing software, and/or network services." The post Job of the Week: Senior HPC Specialist at New York University appeared first on insideHPC.
|
by staff on (#4NDM2)
Today Cloudian announced that the University of Leicester has deployed the company’s HyperStore object storage system as the foundation of a revamped backup platform. The new solution requires 50% less space and, once fully implemented, is expected to save approximately 25% in data storage costs. The post University of Leicester Adopts Cloudian Object Storage for Backup appeared first on insideHPC.
|
by staff on (#4NDM4)
Today ACM announced that Milinda Shayamal Fernando of the University of Utah and Staci Smith of the University of Arizona are the recipients of the 2019 ACM-IEEE CS George Michael Memorial HPC Fellowships. "The fellowship honors exceptional PhD students throughout the world whose research focus is on high performance computing applications, networking, storage or large-scale data analytics using the most powerful computers that are currently available. The Fellowship includes a $5,000 honorarium and travel expenses to attend SC19 in Denver, where the Fellowships will be formally presented." The post ACM Announces Winners of 2019 ACM-IEEE CS George Michael Memorial HPC Fellowships appeared first on insideHPC.
|
by staff on (#4NDM6)
Today GRC announced the launch of a new micro-modular data center solution, the ICEraQ Micro, a rapidly deployable self-contained 24U server rack that can support up to 50kW of critical IT load and is nearly half the cost of other micro-modular solutions. "ICEraQ Micro gives IT professionals the freedom to easily add high-density computing, virtually anywhere." The post GRC Launches Immersion Cooled Micro-Modular Data Center Solution appeared first on insideHPC.
|
by Rich Brueckner on (#4NDM8)
Tom Fisher gave this talk at the Samsung Forum. "Big Data is experiencing a second revolution. This talk will address what’s happened, how it happened, and what big data is bridging to. Enterprise companies have to make business-critical decisions in the coming years, and the marketplace is not clear. The recent changes in the Big Data market will be reviewed, as well as the effects on the related ecosystem. The goal of this presentation is to provide insights to engineers, data engineers, and data scientists to better navigate a rapidly moving landscape." The post Video: Big Data is Dead, Long Live Its Replacement appeared first on insideHPC.
|
by staff on (#4NBQX)
Supercomputing in Australia is slated to get a major boost this November with the deployment of the Fujitsu-made "Gadi" supercomputer, estimated to be 10 times faster than its predecessor. NCI will use Altair’s PBS Works software suite — including Altair Control, Altair Access, Altair Monitor, and Altair PBS Professional — to optimize job scheduling, manage workloads, and perform detailed analysis on the new Gadi system. "This new machine will keep Australian research and the 5,000 researchers who use it, at the cutting edge. It will help us get smarter with our big data. It will add even more brawn to the considerable brains already tapping into NCI." The post Altair PBS Works to Optimize Gadi Supercomputer at NCI appeared first on insideHPC.
|
by Rich Brueckner on (#4NBJB)
In this video, Forrest Norrod from AMD welcomes Peter Ungaro from Cray to discuss how 2nd Generation AMD EPYC processors will drive new levels of performance for HPC. The AMD EPYC 7002 Series Processors are the first x86 server processors featuring a 7nm hybrid multi-die design and PCIe Gen4. With up to 64 high-performance cores per SOC, 2nd Gen AMD EPYC Processors deliver world-record performance on industry benchmarks. They are available in the Cray CS500 cluster and Shasta supercomputers. The post Video: Cray Steps up with 2nd Generation AMD EPYC Processors for HPC appeared first on insideHPC.
|
by staff on (#4NBJD)
Today SIGHPC announced that Trilce Estrada is the 2019 ACM SIGHPC Emerging Woman Leader in Technical Computing award winner. Dr. Estrada is an associate professor in the Department of Computer Science at the University of New Mexico. She is recognized for her innovative and transformative deployment of machine learning for knowledge discovery in molecular dynamic simulations and in situ analytics. "Her contributions in computational chemistry have transformed emerging paradigms into successful platforms for scientific discovery. She is the recipient of numerous grants and awards, including an NSF CAREER Award for enabling distributed and in-situ analysis for multidimensional structured data." The post Trilce Estrada wins 2019 ACM SIGHPC Emerging Woman Leader in Technical Computing Award appeared first on insideHPC.
|
by staff on (#4NBJF)
Today HPC cloud provider Nimbix announced the launch of HyperHub, a point-and-click catalog of HPC and accelerated applications. With the new self-service marketplace, engineers and scientists can select from a growing ecosystem of prebuilt apps and workflows and run them on any device, cloud, or on-premises infrastructure, anywhere in the world. "By providing access to a growing catalog of consumable, run-anywhere apps, HyperHub enables engineers and scientists to shorten their time-to-value when tackling complex, compute-intensive tasks like simulation and AI. HyperHub’s open marketplace structure also encourages ISVs and partners to continue building and publishing additional high-quality HPC apps for these professionals to use." The post Nimbix Launches HyperHub Catalog of Cloud-enabled Applications appeared first on insideHPC.
|
by Rich Brueckner on (#4NBE9)
In this podcast, HPE Distinguished Technologist Kim Keeton describes the concept of Memory-Driven Computing and how it relates to traditional high performance computing. In terms of application areas, Kim also explains her perspective on blockchain and self-driving cars. "Memory-Driven Computing sets itself apart by giving every processor in a system access to a giant shared pool of memory - a sharp departure from today's systems where relatively small amounts of memory are tethered to each processor. The resulting inefficiencies limit performance." The post Podcast: Memory-Driven Computing appeared first on insideHPC.
|
by staff on (#4N9SA)
Penguin Computing just announced the availability of AMD EPYC 7002 Series Processors for Penguin Computing’s Altus server platform. AMD EPYC 7002 Series Processors are expected to deliver up to 2X the performance-per-socket and up to 4X peak FLOPS per-socket over AMD EPYC 7001 Series Processors. These advantages enable customers to transform their infrastructure with the right resources to drive performance and reduce bottlenecks. "We’ve been waiting for this processor, which enables us to deliver breakthrough performance in solutions designed for AI and HPC workloads. In particular, we expect the EPYC 7002 to utilize PCIe Gen 4 to bolster workloads that had been bottlenecked by the bandwidth of PCIe Gen 3." The post Penguin Computing Expands Altus Product Family with AMD EPYC 7002 appeared first on insideHPC.
|
by staff on (#4N9GE)
Today Sylabs announced the Beta 1 release of Singularity Desktop for macOS, which allows Linux containers to be designed, built, tested, and signed/verified on macOS. Designed to meet the needs of High Performance Computing, Singularity provides a single universal on-ramp from developers’ workstations to local resources, the cloud, and all the way to the edge. "At the inaugural meeting of the Singularity User Group (SUG) this past March at SDSC in San Diego, we officially embarked upon an important journey — a journey whose outcome is to ultimately transform that ‘attractive spectator’ (a.k.a. your macOS laptop or desktop) into a bona fide platform for computing. The transformation is being realized through the introduction of Singularity Desktop — software that allows users to design, build, test, and sign/verify Linux-based Singularity containers on macOS. Thus the purpose of this post is to hereby announce the next milestone in this important journey: availability of the first beta release of Singularity Desktop." The post Singularity Desktop Beta comes to macOS appeared first on insideHPC.
|
by staff on (#4N9B4)
Today SC19 announced the winners of the Test of Time Award. The annual award recognizes an outstanding paper that has deeply influenced the HPC discipline. We are pleased to announce the selection of the SC08 paper, "Benchmarking GPUs to Tune Dense Linear Algebra," by Vasily Volkov (NVIDIA) and James Demmel (UC Berkeley) as the SC19 ToTA winner. The paper was deemed deserving of the SC19 ToTA due to its first-of-its-kind vision of GPU architectures as a vector machine. By building on this vision, Volkov and Demmel defined techniques to achieve greater efficiency and performance. The post Volkov and Demmel Paper on GPUs Wins SC19 Test of Time Award appeared first on insideHPC.
|
by staff on (#4N964)
In this video, NVIDIA's Bryan Catanzaro explains how recent breakthroughs in natural language understanding bring us one step closer to conversational AI. Today NVIDIA announced breakthroughs in language understanding that allow businesses to engage more naturally with customers using real-time conversational AI. "NVIDIA's groundbreaking work accelerating these models allows organizations to create new, state-of-the-art services that can assist and delight their customers in ways never before imagined." The post Video: NVIDIA Accelerates Conversational AI appeared first on insideHPC.
|
by staff on (#4N966)
In this podcast, Daniel Kasen from LBNL and Bronson Messer of ORNL discuss advancing cosmology through EXASTAR, part of the Exascale Computing Project. "We want to figure out how space and time get warped by gravitational waves, how neutrinos and other subatomic particles were produced in these explosions, and how they sort of lead us down to a chain of events that finally produced us." The post Podcast: ExaStar Project Seeks Answers in Cosmos appeared first on insideHPC.
|
by staff on (#4N466)
Because HPC technologies today offer substantially more power and speed than their legacy predecessors, enterprises and research institutions benefit from combining AI and HPC workloads on a single system. This sponsored post from Intel explores the ins and outs of running AI and HPC workloads together on existing infrastructure, and how organizations can gain rapid insights and experience faster time-to-market with advanced architecture technologies. The post Running AI and HPC Workloads Together on Existing Infrastructure Enhances Return on System Investments appeared first on insideHPC.
|