by MichaelS on (#3KQVB)
Setting up an environment for High Performance Computing (HPC), especially one using GPUs, can be daunting. There can be multiple dependencies, a number of supporting libraries required, and complex installation instructions. NVIDIA has made this easier with the announcement and release of HPC Application Containers with the NVIDIA GPU Cloud. The post NVIDIA Makes GPU Computing Easier in the Cloud appeared first on insideHPC.
|
High-Performance Computing News Analysis | insideHPC
Link | https://insidehpc.com/ |
Feed | http://insidehpc.com/feed/ |
Updated | 2024-11-25 00:45 |
by staff on (#3KQP1)
A more flexible, application-centric, datacenter architecture is required to meet the needs of rapidly changing HPC applications and hardware. In this guest post, Katie Rivera of One Stop Systems explores how rack-scale composable infrastructure can be utilized for mixed workload data centers. The post Rack Scale Composable Infrastructure for Mixed Workload Data Centers appeared first on insideHPC.
|
by Rich Brueckner on (#3KQP3)
Satoshi Matsuoka from Tokyo Tech writes that he is taking on a new role at RIKEN to foster the deployment of the Post-K computer. "From April 1st I have become the Director of Riken Center for Computational Science, to lead the K-Computer & Post-K development, and the next gen HPC research. Riken R-CCS Director is my main job, but I also retain my Professorship at Tokyo Tech. and lead my lab there & also lead a group for AIST-Tokyo Tech joint RWBC-OIL." The post Satoshi Matsuoka Moves to RIKEN Center for Computational Science appeared first on insideHPC.
|
by Rich Brueckner on (#3KQKM)
In this video from the GPU Technology Conference, Marc Hamilton from NVIDIA describes the new DGX-2 supercomputer with the NVSwitch interconnect. "NVIDIA NVSwitch is the first on-node switch architecture to support 16 fully-connected GPUs in a single server node and drive simultaneous communication between all eight GPU pairs at an incredible 300 GB/s each. These 16 GPUs can be used as a single large-scale accelerator with 0.5 Terabytes of unified memory space and 2 petaFLOPS of deep learning compute power. With NVSwitch, we have 2.4 terabytes a second bisection bandwidth, 24 times what you would have with two DGX-1s." The post Inside the new NVIDIA DGX-2 Supercomputer with NVSwitch appeared first on insideHPC.
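A quick arithmetic check of the quoted figures (a sketch; the eight-pair topology detail is taken from NVIDIA's description above):

```python
# NVSwitch bisection bandwidth, per the figures quoted above:
# 16 GPUs split into two halves of 8, and each of the 8 GPU pairs
# straddling the bisection communicates at 300 GB/s.
pairs_across_bisection = 8
per_pair_gb_per_s = 300

bisection_gb_per_s = pairs_across_bisection * per_pair_gb_per_s
print(bisection_gb_per_s)        # 2400 GB/s, i.e. the quoted 2.4 TB/s

# The quote calls this 24x two DGX-1s, which implies a baseline of:
print(bisection_gb_per_s // 24)  # 100 GB/s between two DGX-1s
```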
|
by Rich Brueckner on (#3KQH2)
In this video from GTC 2018, Alexander St. John from Nyriad demonstrates how the company's NSULATE software running on Advanced HPC gear provides extreme data protection for HPC data. As we watch, he removes a dozen SSDs from a live filesystem -- and it keeps on running! The post RAID No More: GPUs Power NSULATE for Extreme HPC Data Protection appeared first on insideHPC.
|
by staff on (#3KNPG)
HP Z Workstations, with new NVIDIA technology, are ideal for local processing at the edge of the network – giving developers more control, better performance and added security over cloud-based solutions. “Products like the HP Z8, the most powerful workstation for ML development, coupled with the new NVIDIA Quadro GV100, the HP ML Developer Portal and our expanded services offerings will undoubtedly fast-track the adoption of machine learning.” The post New HP Z8 is “World’s Most Powerful Workstation for Machine Learning Development” appeared first on insideHPC.
|
by Rich Brueckner on (#3KNMK)
The HPC application containers available on NVIDIA GPU Cloud (NGC) drastically improve ease of application deployment, while delivering optimized performance. The containers include HPC applications such as NAMD, GROMACS, and Relion. NGC gives researchers and scientists the flexibility to run HPC application containers on NVIDIA Pascal and NVIDIA Volta-powered systems including Quadro-powered workstations, NVIDIA DGX Systems, and HPC clusters. The post Video: Deploy HPC Applications Faster with NVIDIA GPU Cloud appeared first on insideHPC.
|
by Rich Brueckner on (#3KKSH)
Mike Ignatowski from AMD gave this talk at the Rice Oil & Gas conference. "We have reached the point where further improvements in CMOS technology and CPU architecture are producing diminishing benefits at increasing costs. Fortunately, there is a great deal of room for improvement with specialized processing, including GPUs and other emerging accelerators. In addition, there are exciting new developments in memory technology and architecture coming down the development pipeline." The post Video: The Challenge of Heterogeneous Compute & Memory Systems appeared first on insideHPC.
|
by Rich Brueckner on (#3KKNY)
The eScience Center at the University of Southern Denmark (SDU) is seeking Software/Hardware Architects for Research Infrastructure. "The eScience Center is now expanding its staff in all its core areas and expects to fill 6 or more positions. We address stimulating and interesting technological challenges, and offer a research-like environment where we encourage the study of innovative solutions. We firmly believe in open-source and web technology as means to reach our goals." The post Jobs of the Week: Software/Hardware Architects for Research Infrastructure at University of Southern Denmark appeared first on insideHPC.
|
by staff on (#3KHM3)
In this Let's Talk Exascale podcast, Lois Curfman McInnes from Argonne National Laboratory describes the Extreme-scale Scientific Software Development Kit (xSDK) for ECP, which is working toward a software ecosystem for high-performance numerical libraries. "The project is motivated by the need for next-generation science applications to use and build on diverse software capabilities that are developed by different groups." The post Let’s Talk Exascale: Software Ecosystem for High-Performance Numerical Libraries appeared first on insideHPC.
|
by staff on (#3KHFJ)
The Eleventh International Workshop on Parallel Programming Models and Systems Software for High-End Computing (P2S2) has issued its Call for Papers. The event takes place on August 22 in Eugene, Oregon. "The goal of this workshop is to bring together researchers and practitioners in parallel programming models and systems software for high-end computing architectures. Please join us in a discussion of new ideas, experiences, and the latest trends in these areas at the workshop. If you're working on #HPC programming models, performance modeling, storage or interconnect system software, large-scale scheduling or task coordination, consider submitting to the P2S2 2018 workshop!" The post Call for Papers: International Workshop on Parallel Programming Models in Oregon appeared first on insideHPC.
|
by Rich Brueckner on (#3KHCM)
In this video from GTC 2018, Adel El-Hallak from IBM describes how IBM and NVIDIA are partnering to build the largest supercomputers in the world so that data scientists and application developers are not limited by device memory. Together, IBM and NVIDIA let you capitalize on the Volta's 32GB of memory and on the system as a whole. The post Video: IBM Brings NVIDIA Volta to Supercharge Discoveries appeared first on insideHPC.
|
by staff on (#3KH9T)
Over at the NVIDIA blog, Jamie Beckett writes that the new European Extremely Large Telescope, or E-ELT, will capture images 15 times sharper than the dazzling shots the Hubble telescope has beamed to Earth for nearly three decades. "[Researchers] are running GPU-powered simulations to predict how different configurations of E-ELT will affect image quality. Changes to the angle of the telescope’s mirrors, different numbers of cameras and other factors could improve image quality." The post Why the World’s Largest Telescope Relies on GPUs appeared first on insideHPC.
|
by Rich Brueckner on (#3KEJY)
In this video, NVIDIA CEO Jensen Huang unveils the DGX-2 supercomputer. Combined with a fully optimized, updated suite of NVIDIA deep learning software, DGX-2 is purpose-built for data scientists pushing the outer limits of deep learning research and computing. "Watch to learn how we’ve created the first 2 petaFLOPS deep learning system, using NVIDIA NVSwitch to combine the power of 16 V100 GPUs for 10X the deep learning performance." The post Video: NVIDIA Unveils DGX-2 Supercomputer appeared first on insideHPC.
|
by staff on (#3KECF)
Today DDN announced that its EXAScaler DGX solution accelerated client has been fully integrated with the NVIDIA DGX Architecture. "By supplying this groundbreaking level of performance, DDN enables customers to greatly accelerate their Machine Learning initiatives, reducing load wait times of large datasets to mere seconds for faster training turnaround." The post DDN feeds NVIDIA DGX Servers 33GB/s for Machine Learning appeared first on insideHPC.
|
by Rich Brueckner on (#3KCXV)
In this video from the 2018 GPU Technology Conference, Ziv Kalmanovich from VMware and Fred Devoir from NVIDIA describe how they are working together to bring the benefits of virtualization to GPU workloads. "For cloud environments based on vSphere, you can deploy a machine learning workload yourself using GPUs via the VMware DirectPath I/O or vGPU technology." The post Video: VMware powers HPC Virtualization at NVIDIA GPU Technology Conference appeared first on insideHPC.
|
by staff on (#3KBX4)
Today Liqid and Inspur announced that the two companies will offer a joint solution designed specifically for advanced, GPU-intensive applications and workflows. "Our goal is to work with the industry’s most innovative companies to build an adaptive data center infrastructure for the advancement of AI, scientific discovery, and next-generation GPU-centric workloads," said Sumit Puri, CEO of Liqid. "Liqid is honored to be partnering with data center leaders Inspur Systems and NVIDIA to deliver the most advanced composable GPU platform on the market with Liqid’s fabric technology." The post Liqid and Inspur team up for Composable GPU-Centric Rack-Scale Solutions appeared first on insideHPC.
|
by Rich Brueckner on (#3KBSX)
Today Cray announced it is adding new options to its line of CS-Storm GPU-accelerated servers as well as improved fast-start AI configurations, making it easier for organizations implementing AI to get started on their journey with AI proof-of-concept projects and pilot-to-production use. "As companies approach AI projects, choices in system size and configuration play a crucial role," said Fred Kohout, Cray’s senior vice president of products and chief marketing officer. "Our customers look to Cray Accel AI offerings to leverage our supercomputing expertise, technologies and best practices. Whether an organization wants a starter system for model development and testing, or a complete system for data preparation, model development, training, validation and inference, Cray Accel AI configurations provide customers a complete supercomputer system." The post Cray rolls out new Cray Artificial Intelligence Offerings appeared first on insideHPC.
|
by Rich Brueckner on (#3KBD8)
"Quantum computing has recently become a topic that has captured the minds and imagination of a much wider audience. Dr. Jerry Chow joined CloudFest to speak to the near future of quantum computing and insights into the IBM Q Experience, which since May 2016 has placed a rudimentary quantum computer on the Cloud for anyone and everyone to access."The post Video: Enabling Quantum Computing Over the Cloud appeared first on insideHPC.
|
by staff on (#3KBDA)
“The complexities of big data and data science models, particularly in data-intensive fields such as life sciences, telecommunications, cybersecurity, financial services and retail, require purpose-built database applications, compute systems and storage platforms. We are excited to partner with DDN and bring the benefits of its unsurpassed expertise in large-scale, high-performance computing environments to our customers.” The post DDN partners with SQream for “World’s Fastest Big Data Analytics” appeared first on insideHPC.
|
by staff on (#3K94M)
Today BOXX Technologies announced the new APEXX W3 compact workstation featuring an Intel Xeon W processor, four dual slot NVIDIA GPUs, and other innovative features for accelerating HPC applications. "Available with an Intel Xeon W CPU (up to 18 cores) in a compact chassis, the remarkably quiet APEXX W3 is ideal for data scientists, enabling deep learning development at the user’s deskside. Capable of supporting up to four NVIDIA Quadro GV100 graphics cards, the workstation helps users rapidly iterate and test code prior to large-scale DL deployments while also being ideal for GPU-accelerated rendering. At GTC, APEXX W3 will demonstrate V-Ray rendering with NVIDIA OptiX AI-accelerated denoiser technology." The post New BOXX Deep Learning Workstation has 4 NVIDIA GPUs and 18-core Xeon Processors appeared first on insideHPC.
|
by Rich Brueckner on (#3K8VM)
Today NVIDIA unveiled the NVIDIA DGX-2: the "world's largest GPU." Ten times faster than its predecessor, the DGX-2 is the first single server capable of delivering two petaflops of computational power. DGX-2 has the deep learning processing power of 300 servers occupying 15 racks of datacenter space, while being 60x smaller and 18x more power efficient. The post NVIDIA Announces DGX-2 as the “First 2 Petaflop Deep Learning System” appeared first on insideHPC.
|
by staff on (#3K8R1)
Today NVIDIA announced the Quadro GV100 GPU. With innovative packaging, the Quadro GV100 comprises two Volta GPUs in the same chassis -- linked with NVIDIA's new NVLink 2 interconnect. "The new AI-dedicated Tensor Cores have dramatically increased the performance of our models and the speedier NVLink allows us to efficiently scale multi-GPU simulations." The post NVIDIA rolls out GV100 “Dual-Volta” GPU for Workstations appeared first on insideHPC.
|
by staff on (#3K8KW)
Today ISC 2018 announced that Dr. Keren Bergman from Columbia University will give a keynote on the latest developments in silicon photonics. The event takes place June 24-28 in Frankfurt. The post Dr. Keren Bergman and Thomas Sterling to Keynote ISC 2018 appeared first on insideHPC.
|
by staff on (#3K8GX)
Today Nyriad and ThinkParQ announced a partnership to develop a certification program for high performance, resilient storage systems that combine BeeGFS with NSULATE, Nyriad’s solution for GPU-accelerated storage-processing. "We believe this is the beginning of a fantastic partnership between two innovative software companies with similar roots in the high performance computing community," said ThinkParQ CEO Frank Herold. "BeeGFS was developed at the Fraunhofer Center for High Performance Computing, while Nyriad’s NSULATE was originally developed from a partnership with the International Centre for Radio Astronomy Research in Australia. We want to bring our expertise to the wider storage industry by creating new standards for performance and reliability suitable for the coming generation of exascale systems." The post Nyriad and ThinkParQ Announce Partnership to Certify GPU-accelerated Storage appeared first on insideHPC.
|
by Rich Brueckner on (#3K8GZ)
NVIDIA is hosting their annual GPU Technology Conference this week. "Watch the livestream as NVIDIA’s CEO, Jensen Huang, delivers the opening keynote to officially kick off the event with a focus on AI and Deep Learning. The event takes place in Silicon Valley at 9AM (Pacific Time) today." The post Video Replay: GTC 2018 Keynote with Jensen Huang appeared first on insideHPC.
|
by Rich Brueckner on (#3K832)
Rob Davis from Mellanox gave this talk at the 2018 OCP Summit. "There is a new very high performance open source SSD interface called NVMe over Fabrics now available to expand the capabilities of networked storage solutions. It is an extension of the local NVMe SSD interface developed a few years ago, driven by the need for a faster interface for SSDs. Similar to the way native disk drive SCSI protocol was networked with Fibre Channel 20 years ago, this technology enables NVMe SSDs to be networked and shared with their native protocol. By utilizing ultra-low latency RDMA technology to achieve data sharing across a network without sacrificing the local performance characteristics of NVMe SSDs, true composable infrastructure is now possible." The post NVMe Over Fabrics High performance SSDs Networked for Composable Infrastructure appeared first on insideHPC.
|
by staff on (#3K5E5)
Today Nyriad and Advanced HPC announced their partnership for a new NVIDIA GPU-accelerated storage system that achieves data protection levels well beyond any RAID solution. "Nyriad and Advanced HPC have brought together a hardware and software reference implementation around a GPU to mitigate rebuild times, enable large-scale RAID systems to run at full speed while degraded, reduce failures, and increase overall reliability," said Christopher M. Sullivan, Assistant Director for Biocomputing at Oregon State University’s Center for Genome Research and Biocomputing (CGRB). "We look forward to their continued technology support and innovative approach that keeps CGRB at the forefront of computational research groups." The post GPU-accelerated Storage System goes “Beyond RAID” appeared first on insideHPC.
|
by Rich Brueckner on (#3K5B6)
Jack Wells from ORNL gave this talk at the 2018 OpenPOWER Summit. "The Summit supercomputer coming to Oak Ridge is the next leap in leadership-class computing systems for open science. Summit will have a hybrid architecture, and each node will contain multiple IBM POWER9 CPUs and NVIDIA Volta GPUs all connected together with NVIDIA’s high-speed NVLink. Each node will have over half a terabyte of coherent memory (high bandwidth memory + DDR4) addressable by all CPUs and GPUs plus 800GB of non-volatile RAM that can be used as a burst buffer or as extended memory." The post Video: Powering the Road to National HPC Leadership appeared first on insideHPC.
|
by staff on (#3K52A)
Today One Stop Systems expanded its line of rack scale NVIDIA GPU accelerator products with the introduction of GPUltima-CI. "The GPUltima-CI power-optimized rack can be configured with up to 32 dual Intel Xeon Scalable Architecture compute nodes, 64 network adapters, 48 NVIDIA Volta GPUs, and 32 NVMe drives on a 128Gb PCIe switched fabric, and can support tens of thousands of composable server configurations per rack. Using one or many racks, the OSS solution contains the necessary resources to compose any combination of GPU, NIC and storage resources as may be required in today’s mixed workload data center." The post One Stop Systems Launches Rack Scale GPU Accelerator System appeared first on insideHPC.
|
by john kirkley on (#3K4ZQ)
The Broad Institute of MIT and Harvard, in collaboration with Intel, is playing a major role in accelerating genomic analysis. This guest post from Intel explores how the two are working together to 'reach levels of analysis that were not possible before.' The post Broad Institute and Intel Advance Genomics appeared first on insideHPC.
|
by Rich Brueckner on (#3K30D)
Greg Casey from Dell gave this talk at the 2018 OCP Summit. "Gen-Z is different. It is a high-bandwidth, low-latency fabric with separate media and memory controllers that can be realized inside or beyond traditional chassis limits. It treats all components as memory (so-called memory-semantic communications), and it moves data between them with minimal overhead and latency. It thus takes full advantage of emerging persistent memory (memory accessed over the data bus at memory speeds). It can also handle other compute elements, such as GPUs, FPGAs, and ASIC or coprocessor-based accelerators." The post Video: Gen-Z High-Performance Interconnect for the Data-Centric Future appeared first on insideHPC.
|
by staff on (#3K2YE)
In this special guest feature, Dr. Eng Lim Goh from HPE shares the endless possibilities of supercomputing and AI, and the challenges that stand in the way. "Can AI systems continue to learn and evolve based on historical data, and then predict – with breakneck speed – the exact time to buy or sell stocks? Of course they can – and with precision that comes close to replicating other predictive models, and usually in a fraction of the time." The post A World of Opportunities for HPC and AI appeared first on insideHPC.
|
by Rich Brueckner on (#3K0YV)
Thor Sewell from Intel gave this talk at the Rice Oil & Gas conference. "We are seeing the exciting convergence of artificial intelligence and modeling & simulation workflows as well as the movement towards cloud computing and Exascale. This session will cover these significant trends, the technical challenges that must be overcome, and how to prepare for this next level of computing." The post Video: Convergence of AI & HPC appeared first on insideHPC.
|
by Rich Brueckner on (#3K0WX)
RedLine Performance Solutions in Herndon, VA is seeking an HPC Engineer in our Job of the Week. "RedLine Performance Solutions has been in the HPC solutions engineering services business for approximately 17 years and is consistently determined to keep the "bar of excellence" quite high for new hires. This enables RedLine to accomplish what other firms cannot and promotes a high level of staff retention. We offer services ranging from full life cycle HPC systems engineering to remote managed services to HPC program analysis. We are located in the Washington, DC area and are looking for an HPC Engineer to join us." The post Job of the Week: HPC Engineer at RedLine Performance Solutions appeared first on insideHPC.
|
by staff on (#3JYQW)
Over at the All Things Distributed blog, Werner Vogels writes that the new Amazon SageMaker is designed for building machine learning algorithms that can handle an infinite amount of data. "To handle unbounded amounts of data, our algorithms adopt a streaming computational model. In the streaming model, the algorithm only passes over the dataset one time and assumes a fixed-memory footprint. This memory restriction precludes basic operations like storing the data in memory, random access to individual records, shuffling the data, reading through the data several times, etc." The post Amazon SageMaker goes for “Infinitely Scalable” Machine Learning appeared first on insideHPC.
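The single-pass, fixed-memory model described in the quote can be illustrated with a classic streaming statistic. This is a generic sketch (not SageMaker code): Welford's online algorithm computes mean and variance in one pass over the stream using O(1) memory.

```python
class StreamingStats:
    """One-pass, fixed-memory mean/variance (Welford's online algorithm)."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the mean

    def update(self, x):
        # Each record is seen exactly once; no storage, shuffling, or re-reads.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):
        return self.m2 / self.n if self.n else 0.0

stats = StreamingStats()
for x in [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]:
    stats.update(x)

print(stats.mean, stats.variance)  # 5.0 4.0
```

However long the stream, the state is just three numbers, which is exactly the fixed-memory footprint the quote describes.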
|
by staff on (#3JYJJ)
Today the European Commission announced more details on the European Processor Initiative to co-design, develop and bring to market a low-power microprocessor. "This technology, with drastically better performance and power, is one of the core elements needed for the development of the European Exascale machine. We expect to achieve unprecedented levels of performance at very low power, and EPI’s HPC and automotive industrial partners are already considering the EPI platform for their product roadmaps," said Philippe Notton, the EPI General Manager. The post European Processor Initiative to develop chip for future supercomputers appeared first on insideHPC.
|
by staff on (#3JYD2)
"MAX is a one-stop exchange for data scientists and AI developers to consume models created using their favorite machine learning engines, like TensorFlow, PyTorch, and Caffe2, and provides a standardized approach to classify, annotate, and deploy these models for prediction and inferencing, including an increasing number of models that can be deployed and customized in IBM’s recently announce AI application development platform, Watson Studio."The post IBM Launches MAX – an App Store for Machine Learning Models appeared first on insideHPC.
|
by Rich Brueckner on (#3JY7H)
Jessica Pointing from MIT gave this talk at the IBM Think conference. "Because atoms and subatomic particles behave in strange and complex ways, classical physics cannot explain their quantum behavior. However, when the behavior is harnessed effectively, systems become far more powerful than classical computers… quantum powerful." The post A Primer on Quantum Computing… with Doughnuts! appeared first on insideHPC.
|
by staff on (#3JVKE)
"We anticipate that the Grand Unified File Index will have a big impact on the ability for many levels of users to search data and get a fast response,†said Gary Grider, division leader for High Performance Computing at Los Alamos. “Compared with other methods, the Grand Unified File Index has the advantages of not requiring the system administrator to do the query, and it honors the user access controls allowing users and admins to use the same indexing system,†he said.The post Los Alamos Releases File Index Product to Open Source appeared first on insideHPC.
|
by staff on (#3JVKG)
Scientists at Argonne are helping to develop better batteries for our electronic devices. The goal is to develop beyond-lithium-ion batteries that are even more powerful, cheaper, safer and longer lived. "The energy storage capacity was about three times that of a lithium-ion battery, and five times should be easily possible with continued research. This first demonstration of a true lithium-air battery is an important step toward what we call beyond-lithium-ion batteries." The post Argonne Helps to Develop all-new Lithium-air Batteries appeared first on insideHPC.
|
by Richard Friedman on (#3JVGZ)
Recent Intel® enhancements to Java enable faster and better numerical computing. In particular, the Java Virtual Machine (JVM) now uses the Fused Multiply Add (FMA) instructions on Intel® Xeon Phi™ processors with Intel Advanced Vector Extensions (Intel AVX) to implement the OpenJDK 9 Math.fma() API. This gives significant performance improvements for matrix multiplications, the most basic computation found in most HPC, Machine Learning, and AI applications. The post Intel AVX Gives Numerical Computations in Java a Big Boost appeared first on insideHPC.
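Besides speed, FMA computes a×b+c with a single rounding step. The snippet below emulates that behavior with exact rational arithmetic (Python standing in for the Java context; this is not a hardware FMA) to show the low-order term that a separate multiply-then-add throws away:

```python
from fractions import Fraction

def fma(a, b, c):
    # Emulate fused multiply-add: form a*b + c exactly, round ONCE to a double.
    return float(Fraction(a) * Fraction(b) + Fraction(c))

a = b = 1.0 + 2.0**-52   # 1 plus one unit in the last place
c = -(1.0 + 2.0**-51)    # exactly cancels the *rounded* product a*b

# Unfused: a*b rounds to 1 + 2**-51 first, so the 2**-104 cross term is lost.
print(a * b + c)         # 0.0
# Fused: the exact product survives until the single final rounding.
print(fma(a, b, c))      # 2**-104, about 4.9e-32
```

The same effect is why `Math.fma(a, b, c)` in Java is specified as a single correctly rounded operation rather than as `a * b + c`.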
|
by Rich Brueckner on (#3JVE2)
In this video from the 2018 Rice Oil & Gas Conference, Doug Kothe from ORNL provides an update on the Exascale Computing Project. "The quest to develop a capable exascale ecosystem is a monumental effort that requires the collaboration of government, academia, and industry. Achieving exascale will have profound effects on the American people and the world—improving the nation’s economic competitiveness, advancing scientific discovery, and strengthening our national security." The post The U.S. Exascale Computing Project: Status and Plans appeared first on insideHPC.
|
by Rich Brueckner on (#3JRVP)
Over at the Lenovo Blog, Dr. Bhushan Desam writes that the company just updated its LiCO tools to accelerate AI deployment and development for Enterprise and HPC implementations. "LiCO simplifies resource management and makes launching AI training jobs in clusters easy. LiCO currently supports multiple AI frameworks, including TensorFlow, Caffe, Intel Caffe, and MXNet. Additionally, multiple versions of those AI frameworks can easily be maintained and managed using Singularity containers. This consequently provides agility for IT managers to support development efforts for multiple users and applications simultaneously." The post Lenovo Updates LiCO Tools to Accelerate AI Deployment appeared first on insideHPC.
|
by staff on (#3JRKR)
Today GIGABYTE Technology announced the availability of ThunderXStation: the industry’s first 64-bit Armv8 workstation platform based on Cavium’s flagship ThunderX2 processor. “ThunderXStation is an ideal platform for Arm software developers across networking, embedded, mobile, and IoT verticals. We are delighted to be working closely with GIGABYTE on this, and we look forward to supporting them on a number of innovative new platforms.” The post New Arm-based Workstation Opens the Doors for HPC Developers appeared first on insideHPC.
|
by staff on (#3JRKT)
Today Penguin Computing announced that Director of Advanced Solutions, Kevin Tubbs, Ph.D., will be speaking at NVIDIA’s GPU Technology Conference (GTC) on best practices in artificial intelligence. "On the second day of the conference, Tubbs will lead the “Best Practices in Designing and Deploying End-to-End HPC and AI Solutions” session. The focus of the session will be challenges faced by organizations looking to build AI systems and the design principles and technologies that have proven successful in Penguin Computing AI deployments for customers in the Top 500." The post Penguin Computing to share AI Best Practices at GPU Technology Conference appeared first on insideHPC.
|
by staff on (#3JRAE)
Today Hewlett Packard Enterprise announced new offerings to help customers ramp up, optimize and scale artificial intelligence usage across business functions to drive outcomes such as better demand forecasting, improved operational efficiency and increased sales. “HPE is best positioned to help customers make AI work for their enterprise, regardless of where they are in their AI adoption. While others provide AI components, we provide complete AI solutions from strategic advisory to purpose-built technology, operational support and a strong AI partner ecosystem to tailor the right AI solution for each organization.” The post HPE Launches Vertical AI Solutions appeared first on insideHPC.
|
by Rich Brueckner on (#3JR4Z)
Talia Gershon from the Thomas J. Watson Research Center gave this talk at the 2018 IBM Think conference. "There is a whole class of problems that are too difficult for even the largest and most powerful computers that exist today to solve. These exponential challenges include everything from simulating the complex interactions of atoms and molecules to optimizing supply chains. But quantum computers could enable us to solve these problems, unleashing untold opportunities for business." The post Video: IBM Quantum Computing will be “Mainstream in Five Years” appeared first on insideHPC.
|
by MichaelS on (#3JJSM)
Visualizing the results of a simulation can give new insight into complex scientific problems. Interactive viewing of entire datasets can lead to earlier understanding of the challenge at hand and can enhance the understanding of complex phenomena. With the release of HPC Visualization Containers on the NVIDIA GPU Cloud, it has become much easier to get a visualization system up and production-ready, far more quickly than ever before. The post NVIDIA Makes Visualization Easier in the Cloud appeared first on insideHPC.
|
by staff on (#3JP2J)
"IBM's goal is to make it easier for you to build your deep learning models. Deep Learning as a Service has unique features, such as Neural Network Modeler, to lower the barrier to entry for all users, not just a few experts. The enhancements live within Watson Studio, our cloud-native, end-to-end environment for data scientists, developers, business analysts and SMEs to build and train AI models that work with structured, semi-structured and unstructured data — while maintaining an organization’s existing policy/access rules around the data."The post IBM Launches Deep Learning as a Service appeared first on insideHPC.
|