by Rich Brueckner on (#3KHCM)
In this video from GTC 2018, Adel El-Hallak from IBM describes how IBM and NVIDIA are partnering to build the largest supercomputers in the world so that data scientists and application developers are no longer limited by device memory. Between IBM and NVIDIA, you can capitalize on the 32GB of memory in each Volta GPU and the entire system as a whole. The post Video: IBM Brings NVIDIA Volta to Supercharge Discoveries appeared first on insideHPC.
|
High-Performance Computing News Analysis | insideHPC
Link | https://insidehpc.com/ |
Feed | http://insidehpc.com/feed/ |
Updated | 2024-11-05 03:45 |
by staff on (#3KH9T)
Over at the NVIDIA blog, Jamie Beckett writes that the new European Extremely Large Telescope, or E-ELT, will capture images 15 times sharper than the dazzling shots the Hubble telescope has beamed to Earth for the past three decades. Researchers "are running GPU-powered simulations to predict how different configurations of E-ELT will affect image quality. Changes to the angle of the telescope’s mirrors, different numbers of cameras and other factors could improve image quality." The post Why the World’s Largest Telescope Relies on GPUs appeared first on insideHPC.
|
by Rich Brueckner on (#3KEJY)
In this video, NVIDIA CEO Jensen Huang unveils the DGX-2 supercomputer. Combined with a fully optimized, updated suite of NVIDIA deep learning software, DGX-2 is purpose-built for data scientists pushing the outer limits of deep learning research and computing. "Watch to learn how we’ve created the first 2 petaFLOPS deep learning system, using NVIDIA NVSwitch to combine the power of 16 V100 GPUs for 10X the deep learning performance." The post Video: NVIDIA Unveils DGX-2 Supercomputer appeared first on insideHPC.
|
by staff on (#3KECF)
Today DDN announced that its EXAScaler DGX solution accelerated client has been fully integrated with the NVIDIA DGX Architecture. "By supplying this groundbreaking level of performance, DDN enables customers to greatly accelerate their Machine Learning initiatives, reducing load wait times of large datasets to mere seconds for faster training turnaround." The post DDN feeds NVIDIA DGX Servers 33GB/s for Machine Learning appeared first on insideHPC.
|
by Rich Brueckner on (#3KCXV)
In this video from the 2018 GPU Technology Conference, Ziv Kalmanovich from VMware and Fred Devoir from NVIDIA describe how they are working together to bring the benefits of virtualization to GPU workloads. "For cloud environments based on vSphere, you can deploy a machine learning workload yourself using GPUs via the VMware DirectPath I/O or vGPU technology." The post Video: VMware powers HPC Virtualization at NVIDIA GPU Technology Conference appeared first on insideHPC.
|
by staff on (#3KBX4)
Today Liqid and Inspur announced that the two companies will offer a joint solution designed specifically for advanced, GPU-intensive applications and workflows. “Our goal is to work with the industry’s most innovative companies to build an adaptive data center infrastructure for the advancement of AI, scientific discovery, and next-generation GPU-centric workloads,” said Sumit Puri, CEO of Liqid. “Liqid is honored to be partnering with data center leaders Inspur Systems and NVIDIA to deliver the most advanced composable GPU platform on the market with Liqid’s fabric technology.” The post Liqid and Inspur team up for Composable GPU-Centric Rack-Scale Solutions appeared first on insideHPC.
|
by Rich Brueckner on (#3KBSX)
Today Cray announced it is adding new options to its line of CS-Storm GPU-accelerated servers as well as improved fast-start AI configurations, making it easier for organizations implementing AI to get started on their journey with AI proof-of-concept projects and pilot-to-production use. “As companies approach AI projects, choices in system size and configuration play a crucial role,” said Fred Kohout, Cray’s senior vice president of products and chief marketing officer. “Our customers look to Cray Accel AI offerings to leverage our supercomputing expertise, technologies and best practices. Whether an organization wants a starter system for model development and testing, or a complete system for data preparation, model development, training, validation and inference, Cray Accel AI configurations provide customers a complete supercomputer system.” The post Cray rolls out new Cray Artificial Intelligence Offerings appeared first on insideHPC.
|
by Rich Brueckner on (#3KBD8)
"Quantum computing has recently become a topic that has captured the minds and imagination of a much wider audience. Dr. Jerry Chow joined CloudFest to speak to the near future of quantum computing and insights into the IBM Q Experience, which since May 2016 has placed a rudimentary quantum computer on the Cloud for anyone and everyone to access." The post Video: Enabling Quantum Computing Over the Cloud appeared first on insideHPC.
|
by staff on (#3KBDA)
“The complexities of big data and data science models, particularly in data-intensive fields such as life sciences, telecommunications, cybersecurity, financial services and retail, require purpose-built database applications, compute systems and storage platforms. We are excited to partner with DDN and bring the benefits of its unsurpassed expertise in large-scale, high-performance computing environments to our customers.” The post DDN partners with SQream for “World’s Fastest Big Data Analytics” appeared first on insideHPC.
|
by staff on (#3K94M)
Today BOXX Technologies announced the new APEXX W3 compact workstation featuring an Intel Xeon W processor, four dual slot NVIDIA GPUs, and other innovative features for accelerating HPC applications. "Available with an Intel Xeon W CPU (up to 18 cores) in a compact chassis, the remarkably quiet APEXX W3 is ideal for data scientists, enabling deep learning development at the user’s deskside. Capable of supporting up to four NVIDIA Quadro GV100 graphics cards, the workstation helps users rapidly iterate and test code prior to large-scale DL deployments while also being ideal for GPU-accelerated rendering. At GTC, APEXX W3 will demonstrate V-Ray rendering with NVIDIA OptiX AI-accelerated denoiser technology." The post New BOXX Deep Learning Workstation has 4 NVIDIA GPUs and 18-core Xeon Processors appeared first on insideHPC.
|
by Rich Brueckner on (#3K8VM)
Today NVIDIA unveiled the NVIDIA DGX-2: the "world's largest GPU." Ten times faster than its predecessor, the DGX-2 is the first single server capable of delivering two petaflops of computational power. DGX-2 has the deep learning processing power of 300 servers occupying 15 racks of datacenter space, while being 60x smaller and 18x more power efficient. The post NVIDIA Announces DGX-2 as the “First 2 Petaflop Deep Learning System” appeared first on insideHPC.
|
by staff on (#3K8R1)
Today NVIDIA announced the Quadro GV100 GPU. With innovative packaging, the Quadro GV100 comprises two Volta GPUs in the same chassis -- linked with NVIDIA's new NVLink 2 interconnect. “The new AI-dedicated Tensor Cores have dramatically increased the performance of our models and the speedier NVLink allows us to efficiently scale multi-GPU simulations.” The post NVIDIA rolls out GV100 “Dual-Volta” GPU for Workstations appeared first on insideHPC.
|
by staff on (#3K8KW)
Today ISC 2018 announced that Dr. Keren Bergman from Columbia University will give a keynote on the latest developments in silicon photonics. The event takes place June 24-28 in Frankfurt. The post Dr. Keren Bergman and Thomas Sterling to Keynote ISC 2018 appeared first on insideHPC.
|
by staff on (#3K8GX)
Today Nyriad and ThinkParQ announced a partnership to develop a certification program for high performance, resilient storage systems that combine BeeGFS with NSULATE, Nyriad’s solution for GPU-accelerated storage-processing. “We believe this is the beginning of a fantastic partnership between two innovative software companies with similar roots in the high performance computing community,” said ThinkParQ CEO Frank Herold. “BeeGFS was developed at the Fraunhofer Center for High Performance Computing, while Nyriad’s NSULATE was originally developed from a partnership with the International Centre for Radio Astronomy Research in Australia. We want to bring our expertise to the wider storage industry by creating new standards for performance and reliability suitable for the coming generation of exascale systems.” The post Nyriad and ThinkParQ Announce Partnership to Certify GPU-accelerated Storage appeared first on insideHPC.
|
by Rich Brueckner on (#3K8GZ)
NVIDIA is hosting its annual GPU Technology Conference this week. Watch the livestream as NVIDIA CEO Jensen Huang delivers the opening keynote to officially kick off the event with a focus on AI and Deep Learning. The event takes place in Silicon Valley at 9AM (Pacific Time) today. The post Video Replay: GTC 2018 Keynote with Jensen Huang appeared first on insideHPC.
|
by Rich Brueckner on (#3K832)
Rob Davis from Mellanox gave this talk at the 2018 OCP Summit. "There is a new very high performance open source SSD interface called NVMe over Fabrics now available to expand the capabilities of networked storage solutions. It is an extension of the local NVMe SSD interface developed a few years ago, driven by the need for a faster interface for SSDs. Similar to the way the native disk drive SCSI protocol was networked with Fibre Channel 20 years ago, this technology enables NVMe SSDs to be networked and shared with their native protocol. By utilizing ultra-low latency RDMA technology to achieve data sharing across a network without sacrificing the local performance characteristics of NVMe SSDs, true composable infrastructure is now possible." The post NVMe Over Fabrics: High Performance SSDs Networked for Composable Infrastructure appeared first on insideHPC.
|
by staff on (#3K5E5)
Today Nyriad and Advanced HPC announced their partnership for a new NVIDIA GPU-accelerated storage system that achieves data protection levels well beyond any RAID solution. “Nyriad and Advanced HPC have brought together a hardware and software reference implementation around a GPU to mitigate rebuild times, enable large-scale RAID systems to run at full speed while degraded, reduce failures, and increase overall reliability,” said Christopher M. Sullivan, Assistant Director for Biocomputing at Oregon State University’s Center for Genome Research and Biocomputing (CGRB). “We look forward to their continued technology support and innovative approach that keeps CGRB at the forefront of computational research groups.” The post GPU-accelerated Storage System goes “Beyond RAID” appeared first on insideHPC.
|
by Rich Brueckner on (#3K5B6)
Jack Wells from ORNL gave this talk at the 2018 OpenPOWER Summit. "The Summit supercomputer coming to Oak Ridge is the next leap in leadership-class computing systems for open science. Summit will have a hybrid architecture, and each node will contain multiple IBM POWER9 CPUs and NVIDIA Volta GPUs all connected together with NVIDIA’s high-speed NVLink. Each node will have over half a terabyte of coherent memory (high bandwidth memory + DDR4) addressable by all CPUs and GPUs plus 800GB of non-volatile RAM that can be used as a burst buffer or as extended memory." The post Video: Powering the Road to National HPC Leadership appeared first on insideHPC.
|
by staff on (#3K52A)
Today One Stop Systems expanded its line of rack scale NVIDIA GPU accelerator products with the introduction of GPUltima-CI. "The GPUltima-CI power-optimized rack can be configured with up to 32 dual Intel Xeon Scalable Architecture compute nodes, 64 network adapters, 48 NVIDIA Volta GPUs, and 32 NVMe drives on a 128Gb PCIe switched fabric, and can support tens of thousands of composable server configurations per rack. Using one or many racks, the OSS solution contains the necessary resources to compose any combination of GPU, NIC and storage resources as may be required in today’s mixed workload data center." The post One Stop Systems Launches Rack Scale GPU Accelerator System appeared first on insideHPC.
|
by john kirkley on (#3K4ZQ)
The Broad Institute of MIT and Harvard, in collaboration with Intel, is playing a major role in accelerating genomic analysis. This guest post from Intel explores how the two are working together to 'reach levels of analysis that were not possible before.' The post Broad Institute and Intel Advance Genomics appeared first on insideHPC.
|
by Rich Brueckner on (#3K30D)
Greg Casey from Dell gave this talk at the 2018 OCP Summit. "Gen-Z is different. It is a high-bandwidth, low-latency fabric with separate media and memory controllers that can be realized inside or beyond traditional chassis limits. It treats all components as memory (so-called memory-semantic communications), and it moves data between them with minimal overhead and latency. It thus takes full advantage of emerging persistent memory (memory accessed over the data bus at memory speeds). It can also handle other compute elements, such as GPUs, FPGAs, and ASIC or coprocessor-based accelerators." The post Video: Gen-Z High-Performance Interconnect for the Data-Centric Future appeared first on insideHPC.
|
by staff on (#3K2YE)
In this special guest feature, Dr. Eng Lim Goh from HPE shares the endless possibilities of supercomputing and AI, and the challenges that stand in the way. "Can AI systems continue to learn and evolve based on historical data, and then predict – with breakneck speed – the exact time to buy or sell stocks? Of course they can – and with precision that comes close to replicating other predictive models, and usually in a fraction of the time." The post A World of Opportunities for HPC and AI appeared first on insideHPC.
|
by Rich Brueckner on (#3K0YV)
Thor Sewell from Intel gave this talk at the Rice Oil & Gas conference. "We are seeing the exciting convergence of artificial intelligence and modeling & simulation workflows as well as the movement towards cloud computing and Exascale. This session will cover these significant trends, the technical challenges that must be overcome, and how to prepare for this next level of computing." The post Video: Convergence of AI & HPC appeared first on insideHPC.
|
by Rich Brueckner on (#3K0WX)
RedLine Performance Solutions in Herndon, VA is seeking an HPC Engineer in our Job of the Week. "RedLine Performance Solutions has been in the HPC solutions engineering services business for approximately 17 years and is consistently determined to keep the "bar of excellence" quite high for new hires. This enables RedLine to accomplish what other firms cannot and promotes a high level of staff retention. We offer services ranging from full life cycle HPC systems engineering to remote managed services to HPC program analysis. We are located in the Washington, DC area and are looking for an HPC Engineer to join us." The post Job of the Week: HPC Engineer at RedLine Performance Solutions appeared first on insideHPC.
|
by staff on (#3JYQW)
Over at the All Things Distributed blog, Werner Vogels writes that the new Amazon SageMaker is designed for building machine learning algorithms that can handle an infinite amount of data. "To handle unbounded amounts of data, our algorithms adopt a streaming computational model. In the streaming model, the algorithm only passes over the dataset one time and assumes a fixed-memory footprint. This memory restriction precludes basic operations like storing the data in memory, random access to individual records, shuffling the data, reading through the data several times, etc." The post Amazon SageMaker goes for “Infinitely Scalable” Machine Learning appeared first on insideHPC.
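The streaming model Vogels describes (a single pass over the data with a fixed-memory footprint) can be illustrated with a classic online algorithm. The sketch below is not SageMaker's implementation, just a minimal example of the idea: Welford's algorithm computes the mean and variance of an unbounded stream while storing only three scalars.

```python
def streaming_mean_variance(stream):
    """Single-pass mean/variance in constant memory (Welford's algorithm)."""
    n = 0
    mean = 0.0
    m2 = 0.0  # running sum of squared deviations from the current mean
    for x in stream:
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)
    variance = m2 / n if n else 0.0
    return mean, variance

# The iterator stands in for an unbounded data source; memory use does
# not grow with the length of the stream.
mean, var = streaming_mean_variance(iter(range(1, 101)))
print(round(mean, 2), round(var, 2))  # 50.5 833.25
```

Note that the restrictions in the quote above follow directly from this shape: the loop never stores records, never revisits one, and never shuffles.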
|
by staff on (#3JYJJ)
Today the European Commission announced more details on the European Processor Initiative to co-design, develop and bring to market a low-power microprocessor. "This technology, with drastically better performance and power, is one of the core elements needed for the development of the European Exascale machine. We expect to achieve unprecedented levels of performance at very low power, and EPI’s HPC and automotive industrial partners are already considering the EPI platform for their product roadmaps," said Philippe Notton, the EPI General Manager. The post European Processor Initiative to develop chip for future supercomputers appeared first on insideHPC.
|
by staff on (#3JYD2)
"MAX is a one-stop exchange for data scientists and AI developers to consume models created using their favorite machine learning engines, like TensorFlow, PyTorch, and Caffe2, and provides a standardized approach to classify, annotate, and deploy these models for prediction and inferencing, including an increasing number of models that can be deployed and customized in IBM’s recently announced AI application development platform, Watson Studio." The post IBM Launches MAX – an App Store for Machine Learning Models appeared first on insideHPC.
|
by Rich Brueckner on (#3JY7H)
Jessica Pointing from MIT gave this talk at the IBM Think conference. "Because atoms and subatomic particles behave in strange and complex ways, classical physics cannot explain their quantum behavior. However, when the behavior is harnessed effectively, systems become far more powerful than classical computers… quantum powerful." The post A Primer on Quantum Computing… with Doughnuts! appeared first on insideHPC.
|
by staff on (#3JVKE)
“We anticipate that the Grand Unified File Index will have a big impact on the ability for many levels of users to search data and get a fast response,” said Gary Grider, division leader for High Performance Computing at Los Alamos. “Compared with other methods, the Grand Unified File Index has the advantages of not requiring the system administrator to do the query, and it honors the user access controls allowing users and admins to use the same indexing system,” he said. The post Los Alamos Releases File Index Product to Open Source appeared first on insideHPC.
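The two properties Grider highlights (users run their own queries, and results honor file permissions) can be sketched in miniature. The toy below is not the Grand Unified File Index design, which uses per-directory databases and is far more sophisticated; the function names and flat-list index here are purely illustrative of how a query can filter on the caller's own access rights instead of requiring an administrator.

```python
import os

def build_index(root):
    """Walk a tree once and record (path, size, uid) for each regular file.
    A flat list stands in for a real index's per-directory databases."""
    index = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # file vanished or metadata unreadable; skip it
            index.append((path, st.st_size, st.st_uid))
    return index

def query(index, min_size=0):
    """Any user can run this: os.access() drops entries the calling user
    may not read, so the query honors access controls by construction."""
    return [(path, size) for path, size, _uid in index
            if size >= min_size and os.access(path, os.R_OK)]
```

For example, `query(build_index("/scratch"), min_size=1 << 30)` would list only the caller-readable files of at least 1 GiB, without any administrator involvement.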
|
by staff on (#3JVKG)
Scientists at Argonne are helping to develop better batteries for our electronic devices. The goal is to develop beyond-lithium-ion batteries that are even more powerful, cheaper, safer and longer lived. “The energy storage capacity was about three times that of a lithium-ion battery, and five times should be easily possible with continued research. This first demonstration of a true lithium-air battery is an important step toward what we call beyond-lithium-ion batteries.” The post Argonne Helps to Develop all-new Lithium-air Batteries appeared first on insideHPC.
|
by Richard Friedman on (#3JVGZ)
Recent Intel® enhancements to Java enable faster and better numerical computing. In particular, the Java Virtual Machine (JVM) now uses the Fused Multiply Add (FMA) instructions on Intel® Xeon Phi™ processors with Intel® Advanced Vector Extensions (Intel® AVX) to implement the OpenJDK 9 Math.fma() API. This gives significant performance improvements for matrix multiplications, the most basic computation found in most HPC, Machine Learning, and AI applications. The post Intel AVX Gives Numerical Computations in Java a Big Boost appeared first on insideHPC.
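The reason FMA matters so much for matrix multiplication is that the innermost loop is nothing but a chain of multiply-add steps, exactly the operation a single FMA instruction performs (a*b + c with one rounding instead of two). A minimal sketch of that accumulation pattern, written in Python only to make the structure explicit; on FMA-capable hardware a compiler or JIT maps each step of the equivalent compiled loop onto one instruction:

```python
def matmul(A, B):
    """Naive matrix multiply over lists of lists. Each innermost step is
    one multiply-add, the operation an FMA unit fuses into a single
    instruction with a single rounding."""
    n, k, m = len(A), len(B), len(B[0])
    C = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            acc = 0.0
            for p in range(k):
                acc = acc + A[i][p] * B[p][j]  # fused as fma(a, b, acc)
            C[i][j] = acc
    return C

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19.0, 22.0], [43.0, 50.0]]
```

Besides halving the instruction count, the single rounding per step also slightly improves the accuracy of long dot-product accumulations.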
|
by Rich Brueckner on (#3JVE2)
In this video from the 2018 Rice Oil & Gas Conference, Doug Kothe from ORNL provides an update on the Exascale Computing Project. "The quest to develop a capable exascale ecosystem is a monumental effort that requires the collaboration of government, academia, and industry. Achieving exascale will have profound effects on the American people and the world—improving the nation’s economic competitiveness, advancing scientific discovery, and strengthening our national security." The post The U.S. Exascale Computing Project: Status and Plans appeared first on insideHPC.
|
by Rich Brueckner on (#3JRVP)
Over at the Lenovo Blog, Dr. Bhushan Desam writes that the company just updated its LiCO tools to accelerate AI deployment and development for Enterprise and HPC implementations. "LiCO simplifies resource management and makes launching AI training jobs in clusters easy. LiCO currently supports multiple AI frameworks, including TensorFlow, Caffe, Intel Caffe, and MXNet. Additionally, multiple versions of those AI frameworks can easily be maintained and managed using Singularity containers. This consequently provides agility for IT managers to support development efforts for multiple users and applications simultaneously." The post Lenovo Updates LiCO Tools to Accelerate AI Deployment appeared first on insideHPC.
|
by staff on (#3JRKR)
Today GIGABYTE Technology announced the availability of ThunderXStation: the industry’s first 64-bit Armv8 workstation platform based on Cavium’s flagship ThunderX2 processor. “ThunderXStation is an ideal platform for Arm software developers across networking, embedded, mobile, and IoT verticals. We are delighted to be working closely with GIGABYTE on this, and we look forward to supporting them on a number of innovative new platforms.” The post New Arm-based Workstation Opens the Doors for HPC Developers appeared first on insideHPC.
|
by staff on (#3JRKT)
Today Penguin Computing announced that Director of Advanced Solutions, Kevin Tubbs, Ph.D., will be speaking at NVIDIA’s GPU Technology Conference (GTC) on best practices in artificial intelligence. "On the second day of the conference, Tubbs will lead the “Best Practices in Designing and Deploying End-to-End HPC and AI Solutions” session. The focus of the session will be challenges faced by organizations looking to build AI systems and the design principles and technologies that have proven successful in Penguin Computing AI deployments for customers in the Top 500." The post Penguin Computing to share AI Best Practices at GPU Technology Conference appeared first on insideHPC.
|
by staff on (#3JRAE)
Today Hewlett Packard Enterprise announced new offerings to help customers ramp up, optimize and scale artificial intelligence usage across business functions to drive outcomes such as better demand forecasting, improved operational efficiency and increased sales. “HPE is best positioned to help customers make AI work for their enterprise, regardless of where they are in their AI adoption. While others provide AI components, we provide complete AI solutions from strategic advisory to purpose-built technology, operational support and a strong AI partner ecosystem to tailor the right AI solution for each organization.” The post HPE Launches Vertical AI Solutions appeared first on insideHPC.
|
by Rich Brueckner on (#3JR4Z)
Talia Gershon from the Thomas J. Watson Research Center gave this talk at the 2018 IBM Think conference. "There is a whole class of problems that are too difficult for even the largest and most powerful computers that exist today to solve. These exponential challenges include everything from simulating the complex interactions of atoms and molecules to optimizing supply chains. But quantum computers could enable us to solve these problems, unleashing untold opportunities for business." The post Video: IBM Quantum Computing will be “Mainstream in Five Years” appeared first on insideHPC.
|
by MichaelS on (#3JJSM)
Visualizing the results of a simulation can give new insight into complex scientific problems. Interactive viewing of entire datasets can lead to earlier understanding of the challenge at hand and can enhance the understanding of complex phenomena. With the release of HPC Visualization Containers on the NVIDIA GPU Cloud, it has become much easier to get a visualization system up and production-ready quicker than ever before. The post NVIDIA Makes Visualization Easier in the Cloud appeared first on insideHPC.
|
by staff on (#3JP2J)
"IBM's goal is to make it easier for you to build your deep learning models. Deep Learning as a Service has unique features, such as Neural Network Modeler, to lower the barrier to entry for all users, not just a few experts. The enhancements live within Watson Studio, our cloud-native, end-to-end environment for data scientists, developers, business analysts and SMEs to build and train AI models that work with structured, semi-structured and unstructured data — while maintaining an organization’s existing policy/access rules around the data." The post IBM Launches Deep Learning as a Service appeared first on insideHPC.
|
by staff on (#3JNZG)
Mariam Kiran is using an early-career research award from DOE’s Office of Science to develop methods combining machine-learning algorithms with parallel computing to optimize computational networks. “This type of science and the problems it can address can make a real impact,” Kiran says. “That’s what excites me about research – that we can improve or provide solutions to real-world problems.” The post Overcoming Roadblocks in Computational Networks appeared first on insideHPC.
|
by staff on (#3JNWX)
In this video, researchers from IBM Research in Zurich describe how the new IBM Snap Machine Learning (Snap ML) software was able to beat the training-time record previously set with TensorFlow. "This training time is 46x faster than the best result that has been previously reported, which used TensorFlow on Google Cloud Platform to train the same model in 70 minutes." The post Video: IBM Sets Record TensorFlow Performance with new Snap ML Software appeared first on insideHPC.
|
by Rich Brueckner on (#3JNKX)
Ahmed Hashmi from BP gave this talk at the Rice Oil & Gas conference. “The Oil and Gas High Performance Computing Conference, hosted annually at Rice University, is the premier meeting place for networking and discussion focused on computing and information technology challenges and needs in the oil and gas industry. High-end computing and information technology continues to stand out across the industry as a critical business enabler and differentiator with a relatively well understood return on investment. However, challenges such as constantly changing technology landscape, increasing focus on software and software innovation, and escalating concerns around workforce development still remain.” The post Video: High Power Algorithms, High Performance Computing appeared first on insideHPC.
|
by staff on (#3JJQB)
Today Univa announced a new global partnership and reseller agreement with UberCloud. Under terms of the agreement, UberCloud, a leading HPC cloud provider, will resell Univa Grid Engine and related Univa products to UberCloud's growing community of HPC customers. "By deploying packaged HPC applications in containers, and managing them with Univa Grid Engine, users become productive immediately. They can focus on their work rather than spending time troubleshooting complicated HPC software stacks." The post Univa partners with UberCloud appeared first on insideHPC.
|
by staff on (#3JJQD)
“Optalysys has, for the first time ever, applied optical processing to the highly complex and computationally demanding area of CNNs with initial accuracy rates of over 70%. Through our uniquely scalable and highly efficient optical approach, we are developing models that will offer whole new levels of capability, not only cloud-based but also opening up the extraordinary potential of CNNs to mobile systems.” The post Optalysys Speeds Deep Learning with Optical Processing appeared first on insideHPC.
|
by Rich Brueckner on (#3JJJA)
In this video from the 2018 Rice Oil & Gas Conference, Addison Snell from Intersect360 Research leads a panel discussion on Exascale computing. "High-end computing and information technology continues to stand out across the industry as a critical business enabler and differentiator with a relatively well understood return on investment. However, challenges such as constantly changing technology landscape, increasing focus on software and software innovation, and escalating concerns around workforce development still remain." The post Panel Discussion: Delivering Exascale Computing for the Oil and Gas Industry appeared first on insideHPC.
|
by staff on (#3JJEP)
Today Nimbus Data announced the ExaDrive DC100, the largest capacity (100 terabytes) solid state drive (SSD) ever produced. Featuring more than 3x the capacity of the closest competitor, the ExaDrive DC100 also draws 85% less power. “The ExaDrive DC100 meets these challenges for both data center and edge applications, offering unmatched capacity in an ultra-low power design.” The post Nimbus Data launches 100 Terabyte SSD appeared first on insideHPC.
|
by Rich Brueckner on (#3JJER)
In this podcast, the Radio Free HPC team goes off the supercomputing rails a bit with a discussion on digital immortality. "A new company called Nectome will reportedly archive your mind for future uploading to a machine. While the price of $10K seems reasonable enough, they do have to kill you to complete the process." The post Radio Free HPC Looks at Immortality through Nectome’s Mind Archival appeared first on insideHPC.
|
by Sarah Rubenoff on (#3JJA1)
The world of today’s HPC computing is driven by the ever-increasing generation and consumption of digital information. And the ability to analyze this rapidly growing pool of data, and extrapolate meaningful insights, gives modern businesses a competitive edge. Download the full report, “Introducing 200G HDR InfiniBand Solutions,” to learn how Mellanox Technologies’ end-to-end 200G HDR InfiniBand solution is helping enable the next generation of data centers. The post What a 200G HDR InfiniBand Solution Means for Today’s Advanced Data Centers appeared first on insideHPC.
|
by staff on (#3JGBN)
Today MathWorks rolled out Release 2018a with a range of new capabilities in MATLAB and Simulink. "R2018a includes two new products, Predictive Maintenance Toolbox for designing and testing condition monitoring and predictive maintenance algorithms, and Vehicle Dynamics Blockset for modeling and simulating vehicle dynamics in a virtual 3D environment. In addition to new features in MATLAB and Simulink, and the new products, this release also includes updates and bug fixes to 94 other products." The post MATLAB adds new capabilities with Release R2018a appeared first on insideHPC.
|
by Rich Brueckner on (#3JGA7)
Wonchan Lee, Todd Warszawski, and Karthik Murthy gave this talk at the Stanford HPC Conference. "Legion is an exascale-ready parallel programming model that simplifies the mapping of a complex, large-scale simulation code on a modern heterogeneous supercomputer. Legion relieves scientists and engineers of several burdens: they no longer need to determine which tasks depend on other tasks, specify where calculations will occur, or manage the transmission of data to and from the processors. In this talk, we will focus on three aspects of the Legion programming system, namely, dynamic tracing, projection functions, and vectorization." The post Advances in the Legion Programming Model appeared first on insideHPC.
|