by Rich Brueckner on (#4DYZA)
Adel El Hallak and Philip Rogers from NVIDIA gave this talk at the GPU Technology Conference. "Whether it's for AI, data science and analytics, or HPC, GPU-Accelerated software can make possible the previously impossible. But it's well known that these cutting edge software tools are often complex to use, hard to manage, and difficult to deploy. We'll explain how NGC solves these problems and gives users a head start on their projects by simplifying the use of GPU-Optimized software."The post Simplifying AI, Data Science, and HPC Workloads with NVIDIA GPU Cloud appeared first on insideHPC.
|
High-Performance Computing News Analysis | insideHPC
Link | https://insidehpc.com/ |
Feed | http://insidehpc.com/feed/ |
Updated | 2024-11-02 13:15 |
by staff on (#4DW3G)
HPC is no longer just HPC, but rather a mix of workloads that instantiate the convergence of AI, traditional HPC modeling and simulation, and HPDA (High Performance Data Analytics). Exit the traditional HPC center that just runs modeling and simulation and enter the world that must support the convergence of HPC-AI-HPDA computing, and sometimes with specialized hardware. In this sponsored post, Intel explores how HPC is becoming "more than just HPC."The post Intel Addresses the Convergence of AI, Analytic, and Traditional HPC Workloads appeared first on insideHPC.
|
by staff on (#4DWCX)
Today IBM announced the expansion of the IBM Q Network to include a number of global universities. The expanding academic network is designed to accelerate joint research in quantum computing, and develop curricula to help prepare students for careers that will be influenced by this next era of computing, across science and business. "Today, IBM is announcing Florida State University, the University of Notre Dame, Virginia Tech, Stony Brook University, and the University of Tokyo will have direct access to IBM Q's most-advanced commercially available quantum computing systems for teaching, and faculty and student research projects that advance quantum information science and explore early applications, as academic partners."The post IBM Q Network Expands to Drive Educational Opportunities in Quantum Computing appeared first on insideHPC.
|
by staff on (#4DW7W)
In this podcast, the Radio Free HPC team discusses how the news of the cool visualization of an actual black hole leads to interesting issues in HPC land. "The real point: the daunting 1.75 PB of raw data from each telescope meant a lot of physical drives that had to be flown to the data center. Henry leads a discussion about the race between bandwidth and data size."The post Podcast: Seeing the Black Hole with Big Data appeared first on insideHPC.
|
by Rich Brueckner on (#4DW7Y)
Wolfgang Gentzsch from the UberCloud gave this talk at the HPC User Forum. "The concept of personalized medicine has its roots deep in genomic research. Indeed, the successful completion of the Human Genome Project in 2003 marked a critical milestone for the field. That project took $3 billion over 13 years. Today, thanks to technological progress, a similar sequencing task would take only about $4,000 and a few weeks. Such computational power is possible thanks to cloud technology, which eliminates the barriers to high-performance computing by removing software and hardware constraints."The post Personalized Healthcare with High Performance Computing in the Cloud appeared first on insideHPC.
|
by staff on (#4DTRQ)
Today Xilinx announced that it has entered into a definitive agreement to acquire Solarflare Communications. "The acquisition will enable Xilinx to combine its industry-leading FPGA, MPSoC and ACAP solutions with Solarflare's ultra-low latency network interface card (NIC) technology and Onload application acceleration software, to enable new converged SmartNIC solutions, accelerating Xilinx's 'data center first' strategy and transition to a platform company." The post Xilinx to Acquire Solarflare appeared first on insideHPC.
|
by staff on (#4DTGV)
DOE’s Office of Electricity has selected eight projects to receive nearly $7 million in total to explore the use of big data, artificial intelligence, and machine learning technologies to improve existing knowledge and discover new insights and tools for better grid operation and management. DOE’s Office of Science announced a plan to provide $13 million in total funding for new research aimed at improving A.I. as a tool of scientific investigation and prediction.The post DOE Announces $20 Million in AI Research Funding appeared first on insideHPC.
|
by Rich Brueckner on (#4DT2D)
Addison Snell from Intersect360 Research gave this talk at the Swiss HPC Conference. "Intersect360 Research returns with an annual deep dive into the trends, technologies and usage models that will be propelling the HPC community through 2017 and beyond. Emerging areas of focus and opportunities to expand will be explored along with insightful observations needed to support measurably positive decision making within your operations." The post Addison Snell presents: The New HPC appeared first on insideHPC.
|
by staff on (#4DSWS)
Today Supermicro announced that its new 4-socket servers featuring 2nd Gen Intel Xeon Scalable processors and Intel Optane DC persistent memory are now shipping in volume. "Using 2nd Generation Intel Xeon Scalable processors and Intel Optane DC persistent memory, these servers deliver tremendous performance boost for in-memory computing all at a dramatically lower cost compared to the previous generation architectures."The post Supermicro Shipping new Scale-Up In-Memory Computing Platforms with Intel Xeon Scalable processors appeared first on insideHPC.
|
by staff on (#4DSWV)
Today NEC-X launched the Vector Engine Data Acceleration Center (VEDAC) at its Silicon Valley facility. This new VEDAC is one of the company’s many offerings to innovators, makers and change agents. The NEC X organization is focused on fostering big data innovations using NEC’s emerging technologies while tapping into Silicon Valley’s rich ecosystem. "We are gratified to see the developing innovations that are taking advantage of the cutting-edge technologies from NEC’s laboratories." The post NEC-X Opens Vector Engine Data Acceleration Center in Silicon Valley appeared first on insideHPC.
|
by Rich Brueckner on (#4DSR6)
In this podcast, Eric Thune from Velocity Compute describes how the company's PeerCache software optimizes data flow for HPC Cloud Bursting. "By using PeerCache to deliver hybrid cloud bursting, development teams can quickly extend their existing on-premise compute to burst into the cloud for elastic compute power. Your on-premise workflows will run identically in the cloud, without the need for retooling, and the workflow is then moved back to your on-premises servers until the next time you have a peak load."The post Velocity Compute: PeerCache for HPC Cloud Bursting appeared first on insideHPC.
|
by staff on (#4DQG3)
In this TACC Podcast, Dan Stanzione and Doug James from the Texas Advanced Computing Center discuss the thorny issue of reproducibility in HPC. "Computational reproducibility is a subset of the broader and even harder topic of scientific reproducibility," said Dan Stanzione, TACC's executive director. "If we can't get the exact same answer bit-for-bit, then what's close enough? What's a scientifically valid way to represent that? That's what we're after."The post TACC Podcast Looks at the Challenges of Computational Reproducibility appeared first on insideHPC.
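The bit-for-bit question Stanzione raises comes up because floating-point addition is not associative: a parallel reduction that merely changes the order of a sum can change the answer. A minimal Python illustration (the values are contrived for effect):

```python
# Floating-point addition is not associative, so a parallel reduction that
# reorders a sum can change the result bit-for-bit.
values = [1e20, 1.0, -1e20]

left_to_right = (values[0] + values[1]) + values[2]  # the 1.0 is absorbed: 1e20 + 1.0 rounds to 1e20
reordered     = (values[0] + values[2]) + values[1]  # the large terms cancel first, preserving the 1.0

print(left_to_right)  # 0.0
print(reordered)      # 1.0
```

The same effect, at far smaller magnitudes, is why two runs of the same code on different node counts can disagree in the last bits.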
|
by staff on (#4DQBC)
Today Sylabs announced the release of SingularityPRO 3.1 in what the company is calling a watershed moment for enterprise customers everywhere. "SingularityPRO 3.1 is the most highly anticipated release of our enterprise software ever,†said Gregory Kurtzer, founder and CEO of Sylabs. “With this release, we’re rapidly advancing container science, making it a truly opportune time for those seeking to containerize the most demanding enterprise performance computing workloads in the most trusted way."The post Sylabs boosts HPC Containers with SingularityPRO 3.1 appeared first on insideHPC.
|
by staff on (#4DQBD)
The Forum Teratec in France has posted their speaker agenda. With over 1300 attendees, the event takes place June 11-12 in Palaiseau. "The Forum Teratec is the premier international meeting for all players in HPC, Simulation, Big Data and Machine Learning (AI). It is a unique place of exchange and sharing for professionals in the sector. Come and discover the innovations that will revolutionize practices in industry and in many other fields of activity."The post Agenda Posted: Forum Teratec in France appeared first on insideHPC.
|
by staff on (#4DQ6H)
Researchers at Los Alamos National Laboratory have created the largest simulation to date of an entire gene of DNA, a feat that required one billion atoms to model and will help researchers to better understand and develop cures for diseases like cancer. "It is important to understand DNA at this level of detail because we want to understand precisely how genes turn on and off," said Karissa Sanbonmatsu, a structural biologist at Los Alamos. "Knowing how this happens could unlock the secrets to how many diseases occur." The post Video: LANL Creates first Billion-atom Biomolecular Simulation appeared first on insideHPC.
|
by staff on (#4DNH7)
On April 19, researchers gathered at the Ohio Supercomputer Center for the Statewide Users Group (SUG) spring conference to collaborate and share ideas with peers and OSC staff. "SUG encompasses all OSC clients and receives direction from the SUG executive committee, a volunteer group composed of the Ohio university faculty who provide OSC’s leadership with program and policy advice and direction to ensure a productive environment for research."The post Ohio Supercomputer Center hosts Statewide User Group appeared first on insideHPC.
|
by Rich Brueckner on (#4DNDJ)
In this video, researchers investigate the millennial-scale vulnerability of the Antarctic Ice Sheet (AIS) due solely to the loss of its ice shelves. Starting at the present-day, the AIS evolves for 1000 years, exposing the floating ice shelves to an extreme thinning rate, which results in their complete collapse. The visualizations show the first 500 […]The post Video: Simulations of Antarctic Meltdown should send chills on Earth Day appeared first on insideHPC.
|
by staff on (#4DN9B)
Researchers from the University of California at Santa Barbara are using TACC supercomputers to study the bioelectric effects of cells to develop new anti-cancer strategies. "For us, this research would not have been possible without XSEDE because such simulations require over 2,000 cores for 24 hours and terabytes of data to reach time scales and length scales where the collective interactions between cells manifest themselves as a pattern," Gibou said. "It helped us observe a surprising structure for the behavior of the aggregate out of the inherent randomness." The post Supercomputing Bioelectric Fields in the Fight Against Cancer appeared first on insideHPC.
|
by Rich Brueckner on (#4DN53)
Mark Wilkinson from DiRAC gave this talk at the Swiss HPC Conference. "DiRAC is the integrated supercomputing facility for theoretical modeling and HPC-based research in particle physics, astrophysics, cosmology, and nuclear physics, all areas in which the UK is world-leading. DiRAC provides a variety of compute resources, matching machine architecture to the algorithm design and requirements of the research problems to be solved." The post 40 Powers of 10 – Simulating the Universe with the DiRAC HPC Facility appeared first on insideHPC.
|
by Sarah Rubenoff on (#4DN1J)
Adoption of GPU-accelerated computing can offer oil and gas firms significant ROI today and pave the way to gain additional advantage from future technical developments. To stay competitive, these companies need to be able to derive insights from petabytes of sensor, geolocation, weather, drilling, and seismic data in milliseconds. A new white paper from Penguin Computing explores how GPUs are spurring innovation and changing how hydrocarbon businesses address data processing needs.The post GPUs for Oil and Gas Firms: Deriving Insights from Petabytes of Data appeared first on insideHPC.
|
by staff on (#4DKT8)
The Taurus Group in the Netherlands has acquired European HPC specialist ClusterVision. "The ability to bring ClusterVision into the portfolio is very important for our HPC strategy and future growth," the company says. "We have a long history in the distribution of storage, networks and compute. We believe that the integration of closely linked corporate verticals will ultimately bring significant scale, synergy and a thriving circular economy to the entire group." The post Taurus Europe Acquires ClusterVision appeared first on insideHPC.
|
by Rich Brueckner on (#4DKQN)
"High performance computing is all about scale and speed. And when you're backed by Google Cloud’s powerful and flexible infrastructure, you can solve problems faster, reduce queue times for large batch workloads, and relieve compute resource limitations. In this session, we'll discuss why GCP is a great platform to run high-performance computing workloads. We'll present best practices, architectural patterns, and how PSO can help your journey. We'll conclude by demo'ing the deployment of an autoscaling batch system in GCP."The post Video: High Performance Computing on the Google Cloud Platform appeared first on insideHPC.
|
by Rich Brueckner on (#4DJ97)
In this video, Curtis Anderson from Panasas describes how different NAS architectures optimize data flow to bring competitive advantage to your business. "You have a vision: to use high performance computing applications to help people, revolutionize your industry, or change the world. You don’t want to worry if your storage system is up to the task. As the only plug-and-play parallel storage file system in the market, Panasas helps you move beyond storage so you can focus on your big ideas and supercharge innovation."The post Video: Why Not all NAS Architectures can keep up with HPC appeared first on insideHPC.
|
by Rich Brueckner on (#4DJ6Q)
D.E. Shaw Research is seeking an HPC System Administrator in our Job of the Week. "Exceptional sysadmins sought to manage systems, storage, and network infrastructure for a New York–based interdisciplinary research group. Ideal candidates should have strong fundamental knowledge of Linux concepts such as file systems, networking, and processes in addition to practical experience administering Linux systems. Relevant areas of expertise might include large-installation systems administration experience and strong programming and scripting ability, but specific knowledge of and level of experience in any of these areas is less critical than exceptional intellectual ability." The post Job of the Week: HPC System Administrator at D.E. Shaw Research appeared first on insideHPC.
|
by staff on (#4DDRX)
Developers are increasingly besieged by the big data deluge. Intel Distribution for Python uses tried-and-true libraries like the Intel Math Kernel Library (Intel MKL) and the Intel Data Analytics Acceleration Library to make Python code scream right out of the box – no recoding required. Intel highlights some of the benefits dev teams can expect in this sponsored post. The post Making Python Fly: Accelerate Performance Without Recoding appeared first on insideHPC.
|
by staff on (#4DGAX)
The Hot Interconnects conference has issued its Call for Papers. The event takes place August 14-16 in Silicon Valley. "Hot Interconnects is the premier international forum for researchers and developers of state-of-the-art hardware and software architectures and implementations for interconnection networks of all scales, ranging from multi-core on-chip interconnects to those within systems, clusters, datacenters and Clouds. This yearly conference is attended by leaders in industry and academia. The atmosphere provides for a wealth of opportunities to interact with individuals at the forefront of this field."The post Call for Papers: Hot Interconnects Conference in Silicon Valley appeared first on insideHPC.
|
by Rich Brueckner on (#4DGAY)
In this special guest feature from the SC19 Blog, Charity Plata from Brookhaven National Lab catches up with Dr. Lin Gan from Tsinghua University, whose outstanding work in HPC has been recognized with a number of awards including the Gordon Bell Prize. As a highly awarded young researcher who already has been acknowledged for "outstanding, influential, and potentially long-lasting contributions" in HPC, Gan shares his thoughts on future supercomputers and what it means to say, "HPC Is Now." The post Dr. Lin Gan Reflects on the SC19 Theme: HPC is Now appeared first on insideHPC.
|
by staff on (#4DG67)
Today the European PRACE initiative announced that Dr Debora Sijacki from the University of Cambridge will receive the 2019 PRACE Ada Lovelace Award for HPC for her outstanding contributions to and impact on high performance computing in Europe. As a computational cosmologist she has achieved numerous high-impact results in astrophysics based on numerical simulations on state-of-the-art supercomputers.The post Dr Debora Sijacki wins 2019 PRACE Ada Lovelace Award for HPC appeared first on insideHPC.
|
by Rich Brueckner on (#4DG69)
Rahul Ramachandran from NASA gave this talk at the HPC User Forum. "NASA’s Earth Science Division (ESD) missions help us to understand our planet’s interconnected systems, from a global scale down to minute processes. ESD delivers the technology, expertise and global observations that help us to map the myriad connections between our planet’s vital processes and the effects of ongoing natural and human-caused changes."The post Evolving NASA’s Data and Information Systems for Earth Science appeared first on insideHPC.
|
by Rich Brueckner on (#4DE35)
Rick Wagner from Globus gave this talk at the Singularity User Group. "We package the imSim software inside a Singularity container so that it can be developed independently, packaged to include all dependencies, trivially scaled across thousands of computing nodes, and seamlessly moved between computing systems. To date, the simulation workflow has consumed more than 30M core hours using 4K nodes (256K cores) on Argonne’s Theta supercomputer and 2K nodes (128K cores) on NERSC’s Cori supercomputer." The post Video: Managing large-scale cosmology simulations with Parsl and Singularity appeared first on insideHPC.
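The portability claim above comes down to wrapping each simulation task in a container invocation, so the identical command runs on any system that has the image. A minimal Python sketch; the image name, tool name, and flags here are illustrative assumptions, not the actual imSim interface:

```python
# Sketch: build the argv for running a tool inside a Singularity image, so the
# same command line works unchanged on a laptop, Theta, or Cori. The image
# name ("imsim.sif") and the tool's flags below are hypothetical.
def containerized_command(image, tool, *args):
    """Return the argv list for running `tool` inside a Singularity image."""
    return ["singularity", "exec", image, tool, *args]

cmd = containerized_command("imsim.sif", "imsim", "--instcat", "catalog.txt")
print(" ".join(cmd))  # singularity exec imsim.sif imsim --instcat catalog.txt
```

A workflow manager such as Parsl then fans commands like this out across thousands of nodes, which is what keeps the per-task definition system-independent.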
|
by Rich Brueckner on (#4DE37)
 In this video from the GPU Technology Conference, Dan Olds from OrionX discusses the human impact of AI with Greg Schmidt from HPE. The industry buzz about artificial intelligence and deep learning typically focuses on hardware, software, frameworks, performance, and the lofty business plans that will be enabled by this new technology. What we don’t […]The post Video: The Human Side of AI appeared first on insideHPC.
|
by staff on (#4DDY7)
Young people looking to further their careers in HPC are encouraged to sign up for the ISC STEM Student Day program. As part of the ISC High Performance Conference coming to Frankfurt in June, this program offers undergraduate and graduate students an early insight into the field of high performance computing as well as an opportunity to meet the important players in the sector.The post Sign up for ISC STEM Student Day appeared first on insideHPC.
|
by staff on (#4DBR0)
North Carolina State University researchers have developed a technique that reduces training time for deep learning networks by more than 60 percent without sacrificing accuracy, accelerating the development of new artificial intelligence applications. "One of the biggest challenges facing the development of new AI tools is the amount of time and computing power it takes to train deep learning networks to identify and respond to the data patterns that are relevant to their applications. We’ve come up with a way to expedite that process, which we call Adaptive Deep Reuse. We have demonstrated that it can reduce training times by up to 69 percent without accuracy loss." The post Adaptive Deep Reuse Technique cuts AI Training Time by more than 60 Percent appeared first on insideHPC.
|
by staff on (#4DBK1)
Spectra Logic is teaming with Arcitecta to tackle the massive datasets used in life sciences. The two companies will showcase their joint solutions at the BioIT World conference this week in Boston. "Addressing the needs of the life sciences market with reliable data storage lies at the heart of the Spectra and Arcitecta relationship," said Spectra CTO Matt Starr. "This joint solution enables customers to better manage their data and metadata by optimizing multiple storage targets, retrieving data efficiently and tracking content and resources." The post Spectra Logic and Arcitecta team up for Genomics Data Management appeared first on insideHPC.
|
by staff on (#4DBK3)
Today DownUnder GeoSolutions (DUG) announced that tanks are arriving at Skybox Houston for "Bubba," its huge geophysically-configured supercomputer. "DUG will cool the massive Houston supercomputer using their innovative immersion cooling system that has computer nodes fully submerged in specially-designed tanks filled with polyalphaolefin dielectric fluid. This month, the first of these 722 tanks have been arriving in shipping containers at the facility in Houston."The post DUG Installs Immersive Cooling for Bubba Supercomputer in Houston appeared first on insideHPC.
|
by staff on (#4DBCV)
Jack Dongarra from the University of Tennessee has been named a Foreign Fellow of the Royal Society, joining previously inducted icons of science such as Isaac Newton, Charles Darwin, Albert Einstein, and Stephen Hawking. "This honor is both humbling because of others who have been so recognized and gratifying for the acknowledgement of the research and work I have done," Dongarra said. "I’m deeply grateful for this recognition." The post Jack Dongarra Named a Foreign Fellow of the Royal Society appeared first on insideHPC.
|
by staff on (#4DBCX)
In this vintage video, Intel launches the Paragon line of supercomputers, a series of massively parallel systems produced in the 1990s. In 1993, Sandia National Laboratories installed an Intel XP/S 140 Paragon supercomputer, which claimed the No. 1 position on the June 1994 TOP500 list. "With 3,680 processors, the system ran the Linpack benchmark at 143.40 Gflop/s. It was the first massively parallel processor supercomputer to be indisputably the fastest system in the world."The post Vintage Video: The Paragon Supercomputer – A Product of Partnership appeared first on insideHPC.
|
by Sarah Rubenoff on (#4D8VR)
In today's markets, a successful HPC cluster can be a formidable competitive advantage, and many organizations are turning to these systems to stay competitive. That said, these systems are inherently complex and must be built, deployed and managed properly to realize their full potential. A new report from Bright Computing explores best practices for HPC clusters. The post Best Practices for Building, Deploying & Managing HPC Clusters appeared first on insideHPC.
|
by Rich Brueckner on (#4D98K)
Kelly Gaither from TACC gave this talk at the HPC User Forum. "Computing4Change is a competition empowering people to create change through computing. You may have seen articles on the anticipated shortfall of engineers, computer scientists, and technology designers to fill open jobs. Numbers from the Report to the President in 2012 (President Obama’s Council of Advisors on Science and Technology) show a shortfall of one million available workers to fill STEM-related jobs by 2020."The post The Computing4Change Program takes on STEM and Workforce Issues appeared first on insideHPC.
|
by staff on (#4D95G)
Today Quobyte announced that the company's Data Center File System is the first distributed file system to offer a TensorFlow plug-in, providing increased throughput performance and linear scalability for ML-powered applications to enable faster training across larger data sets while achieving higher-accuracy results. "By providing the first distributed file system with a TensorFlow plug-in, we are ensuring as much as a 30 percent faster throughput performance improvement for ML training workflows, helping companies better meet their business objectives through improved operational efficiency," said Bjorn Kolbeck, Quobyte CEO. The post Quobyte Distributed File System adds TensorFlow Plug-In for Machine Learning appeared first on insideHPC.
|
by Rich Brueckner on (#4D90B)
Mark Govett from NOAA gave this talk at the GPU Technology Conference. "We'll discuss the revolution in computing, modeling, data handling and software development that's needed to advance U.S. weather-prediction capabilities in the exascale computing era. Creating prediction models to cloud-resolving 1 KM-resolution scales will require an estimated 1,000-10,000 times more computing power, but existing models can't exploit exascale systems with millions of processors. We'll examine how weather-prediction models must be rewritten to incorporate new scientific algorithms, improved software design, and use new technologies such as deep learning to speed model execution, data processing, and information processing."The post Video: Advancing U.S. Weather Prediction Capabilities with Exascale HPC appeared first on insideHPC.
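The quoted 1,000-10,000x figure is consistent with simple grid-refinement arithmetic: refining horizontal resolution by a factor r multiplies the number of grid columns by r squared, and the CFL stability limit shrinks the allowable time step by roughly r, so cost grows like r cubed. A quick check in Python, assuming (for illustration only) a ~13 km starting resolution for today's global models:

```python
# Back-of-the-envelope check of the 1,000-10,000x estimate for 1 km weather
# models: cost ~ r^3, where r is the horizontal refinement factor (r^2 more
# grid columns, ~r times more time steps from the CFL condition).
# The 13 km baseline below is an illustrative assumption, not from the talk.
def cost_factor(current_km, target_km):
    r = current_km / target_km
    return r ** 3

print(round(cost_factor(13.0, 1.0)))  # 2197, inside the quoted range
```

Extra vertical levels, more expensive physics, and I/O would push the factor higher, toward the top of the quoted range.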
|
by staff on (#4D90D)
Today Wolfram Research released Version 12 of Mathematica for advanced data science and computational discovery. "After three decades of continuous R&D and the introduction of Mathematica Version 1.0, Wolfram Research has released its most powerful software offering with Version 12 of Wolfram Language, the symbolic backbone of Mathematica. The latest version includes over a thousand new functions and features for multiparadigm data science, automated machine learning, and blockchain manipulation for modern software development and technical computing."The post Wolfram Research Releases Mathematica Version 12 for Advanced Data Science appeared first on insideHPC.
|
by Rich Brueckner on (#4D6Q9)
In this Big Compute podcast, Gabriel Broner hosts Mike Hollenbeck, founder and CTO at Optisys. Optisys is a startup that is changing the antenna industry. Using HPC in the cloud and 3D printing they are able to design customized antennas which are much smaller, lighter and higher performing than traditional antennas.The post Podcast: Rescale powers Innovation in Antenna Design appeared first on insideHPC.
|
by Rich Brueckner on (#4D6J9)
The good folks at Sylabs have added plugin support to Singularity, an open source-based container platform designed for scientific and HPC environments. As this post from February 2018 indicates, plugin support in Singularity has been on our minds for some time. After the successful reimplementation of the Singularity core in a combination of the Go […]The post Video: Singularity adds Plugin Support appeared first on insideHPC.
|
by Rich Brueckner on (#4D6JA)
The EuroMPI conference has issued its Call for Papers. The event takes place September 10-13 in Zurich, Switzerland. "The EuroMPI conference has been, since 1994, the preeminent meeting for users, developers and researchers to interact and discuss new developments and applications of message-passing parallel computing, in particular in and related to the Message Passing Interface (MPI). This includes parallel programming interfaces, libraries and languages, architectures, networks, algorithms, tools, applications, and High Performance Computing with particular focus on quality, portability, performance and scalability." The post Call for Papers: EuroMPI Conference in Zurich appeared first on insideHPC.
|
by Rich Brueckner on (#4D6JC)
Today Univa launched HPC Cloud Migration, a new portal for news, events, and resources centered on High Performance Computing. "The high-performance computing industry is at an exciting growth stage, fueled by new application deployment models (such as AI and machine learning), new cloud-service offerings and advances in management software. HPC is all about scale and speed, in which users are pushing to accelerate their most complex HPC workloads’ time to completion. As a result, IT organizations are looking to maximize their HPC computing resources for harnessing the cloud." The post Univa Launches HPC Cloud Migration News Portal appeared first on insideHPC.
|
by staff on (#4D6DN)
Today Fujitsu announced that it has completed the design of the Post-K supercomputer for deployment at RIKEN in Japan. While full production of the machine is not scheduled until 2021-2022, Fujitsu disclosed plans to productize the Post-K technologies and begin global sales in the second half of fiscal 2019. "Reaching the production milestone marks a significant achievement for Post-K and we are excited to see the potential for broader deployment of Arm-based Fujitsu technologies in support of HPC and AI applications." The post Fujitsu to Productize Post-K Supercomputer Technologies appeared first on insideHPC.
|
by Rich Brueckner on (#4D4YJ)
In this video from the HPC User Forum, Henry Newman from Seagate Government Solutions leads a panel discussion on Metadata and Archiving at Scale. "Metadata is the key to keeping track of all this unstructured scientific data. It is 'data about data.' It makes scientific data easy to find, track, share, move and manage – at low cost. Unfortunately, today’s high capacity storage systems only provide a bare-bones system consisting of as little as file name, owner and creation/access timestamps. Data intensive scientific workflows need supplemental enhanced metadata, along with access rights and security safeguards." The post Panel Discussion: Metadata and Archiving at Scale appeared first on insideHPC.
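One way to picture the "supplemental enhanced metadata" idea is a sidecar record that extends the bare filesystem fields with scientific context and access rights. A sketch in Python; the field names below are illustrative, not a standard schema:

```python
# Sketch of an enhanced-metadata record: the bare-bones filesystem fields
# (name, owner, timestamp) plus the scientific context and access rights the
# panel argues workflows actually need. Field names here are hypothetical.
import json
import time

def enhanced_metadata(path, owner, instrument, experiment, access):
    return {
        "file": path,              # bare-bones filesystem fields...
        "owner": owner,
        "created": time.strftime("%Y-%m-%d"),
        "instrument": instrument,  # ...plus scientific context
        "experiment": experiment,
        "access": access,          # rights and safeguards
    }

record = enhanced_metadata("run42/output.h5", "jdoe", "cryo-EM",
                           "ribosome-2019", ["group:bio", "read-only"])
print(json.dumps(record, indent=2))
```

In practice such records would live in a searchable index alongside the archive, which is what makes the data findable at low cost.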
|
by staff on (#4D4YM)
Today AI startup Wave Computing announced its new TritonAI 64 platform, which integrates a triad of powerful technologies into a single, future-proof intellectual property (IP) licensable solution. Wave’s TritonAI 64 platform delivers 8-to-32-bit integer-based support for high-performance AI inferencing at the edge now, with bfloat16 and 32-bit floating point-based support for edge training in the future.The post Wave Computing Launches TritonAI 64 Platform for High-Speed Inferencing appeared first on insideHPC.
|
by staff on (#4D37Q)
The UT Southwestern Medical Center in Dallas is seeking a Computational Scientist in our Job of the Week. "The Computational Scientist will support faculty and students in adapting computational strategies to the specific features of the HPC infrastructure. The successful candidate will work with a range of systems and technologies such as compute cluster, parallel file systems, high speed interconnects, GPU-based computing and database servers."The post Job of the Week: Computational Scientist at UT Southwestern Medical Center appeared first on insideHPC.
|