High-Performance Computing News Analysis | insideHPC

Link: https://insidehpc.com/
Feed: http://insidehpc.com/feed/
Updated: 2024-11-04 17:30
Asetek Receives Order For New HPC Cluster From Fujitsu
Today Asetek announced an order from Fujitsu, an established global data center OEM, for a new High Performance Computing system at a currently undisclosed location in Japan. This major installation will be implemented using Asetek's RackCDU liquid cooling solution throughout the cluster, which includes 1,300 Direct-to-Chip (D2C) coolers for the cluster's compute nodes. "We are pleased to see the continuing success of Fujitsu using Asetek's technology in large HPC clusters around the world," said André Sloth Eriksen, CEO and founder of Asetek.
Pawsey Centre adds NVIDIA Volta GPUs to its HPC Cloud
One year after launching its cloud service, Nimbus, the Pawsey Supercomputing Centre has now expanded with NVIDIA GPU nodes. The GPUs are currently being installed, so the Pawsey cloud team have begun a Call for Early Adopters. "Launched in mid-2017 as a free cloud service for researchers who require flexible access to high-performance computing resources, Nimbus consists of AMD Opteron CPUs making up 3000 cores and 288 terabytes of storage. Now, Pawsey will be expanding its cloud infrastructure from purely a CPU based system to include GPUs; providing a new set of functionalities for researchers."
Red Hat’s AI Strategy
"The impact of AI will be visible in the software industry much sooner than the analog world, deeply affecting open source in general, as well as Red Hat, its ecosystem, and its userbase. This shift provides a huge opportunity for Red Hat to offer unique value to our customers. In this session, we'll provide Red Hat's general perspective on AI and how we are helping our customers benefit from AI."The post Red Hat’s AI Strategy appeared first on insideHPC.
Celebrating 20 Years of the OpenMP API
"The first version of the OpenMP application programming interface (API) was published in October 1997. In the 20 years since then, the OpenMP API and the slightly older MPI have become the two stable programming models that high-performance parallel codes rely on. MPI handles the message passing aspects and allows code to scale out to significant numbers of nodes, while the OpenMP API allows programmers to write portable code to exploit the multiple cores and accelerators in modern machines."The post Celebrating 20 Years of the OpenMP API appeared first on insideHPC.
BioScience gets a Boost Through Order-of-Magnitude Computing Gains
Intel is working with leaders in the field to eliminate today’s data processing bottlenecks. In this guest post from Intel, the company explores how BioScience is getting a leg up from order-of-magnitude computing progress. "Intel’s framework is designed to make HPC simpler, more affordable, and more powerful."
Universal Coding of the Reals: Alternatives to IEEE Floating Point
Peter Lindstrom from LLNL gave this talk at the Conference on Next Generation Arithmetic in Singapore. "We propose a modular framework for representing the real numbers that generalizes IEEE, POSITS, and related floating-point number systems, and which has its roots in universal codes for the positive integers such as the Elias codes. This framework unifies several known but seemingly unrelated representations within a single schema while also introducing new representations."
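As background on the universal codes the abstract mentions, here is a small illustrative Elias gamma encoder (our sketch of the classic code, not Lindstrom's framework; the sample values are arbitrary): a unary prefix encodes the bit length of a positive integer, followed by its binary digits.

    #include <stdio.h>
    #include <stdint.h>

    /* Write the Elias gamma code of n (n >= 1) into buf as '0'/'1' characters. */
    static void elias_gamma(uint32_t n, char *buf) {
        int nbits = 0;
        for (uint32_t t = n; t > 0; t >>= 1)
            nbits++;                                       /* number of bits in n */
        int pos = 0;
        for (int i = 0; i < nbits - 1; i++)
            buf[pos++] = '0';                              /* unary length prefix */
        for (int i = nbits - 1; i >= 0; i--)
            buf[pos++] = ((n >> i) & 1) ? '1' : '0';       /* binary value, MSB first */
        buf[pos] = '\0';
    }

    int main(void) {
        uint32_t samples[] = {1, 2, 5, 17, 100};
        char buf[80];
        int count = sizeof samples / sizeof samples[0];
        for (int i = 0; i < count; i++) {
            elias_gamma(samples[i], buf);
            printf("%3u -> %s\n", samples[i], buf);        /* e.g. 5 -> 00101 */
        }
        return 0;
    }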
DDN GridScaler Powers Neuroscience and Behavioral Research at Harvard
Today DDN announced that Harvard University’s Faculty of Arts and Sciences Research Computing (FASRC) has deployed DDN’s GRIDScaler GS7KX parallel file system appliance with 1PB of storage. The installation has sped the collection of images detailing synaptic connectivity in the brain’s cerebral cortex. "DDN’s scale-out, parallel architecture delivers the performance we need to keep stride with the rapid pace of scientific research and discovery at Harvard,” said Scott Yockel, Ph.D., director of research computing at Harvard’s FAS Division of Science. “The storage just runs as it’s supposed to, so there’s no contention for resources and no complaints from our users, which empowers us to focus on the research.”
Ohio Supercomputer Center Upgrades HPC Access Portal
The Ohio Supercomputer Center has updated its innovative web-based portal for accessing high performance computing services. As part of this effort, OSC recently announced the release of Open OnDemand 1.3, the first version distributed using the RPM Package Manager (Red Hat Package Manager), a common standard for distributing Linux software. "Our continuing development of Open OnDemand is aimed at making the package easier to use and more powerful at the same time," said David Hudak, interim executive director of OSC. "Open OnDemand 1.3's RPM Package Manager simplifies the installation and updating of OnDemand versions and enables OSC to do more releases more frequently."
HPE: Design Challenges at Scale
Jimmy Daley from HPE gave this talk at the HPC User Forum in Tucson. "High performance clusters are all about high speed interconnects. Today, these clusters are often built out of a mix of copper and active optical cables. While optical is the future, the cost of active optical cables is 4x - 6x that of copper cables. In this talk, Jimmy Daley looks at the tradeoffs system architects need to make to meet performance requirements at reasonable cost."
High Performance Big Data Computing Using Harp-DAAL
Harp-DAAL is a framework developed at Indiana University that brings together the capabilities of big data (Hadoop) and techniques that have previously been adopted for high performance computing. By combining the two, users can become more productive and gain deeper insights into massive amounts of data.
BSC Powers Climate Services for Agriculture in Europe
The Barcelona Supercomputing Center (BSC) is a partner in the MED-GOLD project, which will foster the creation of highly specialized climate services for wine, olive oil, and durum wheat crops, providing indicators to optimize agricultural management practices in relation to the impact of global warming. "Wine, olive oil and durum wheat products are staples of the Mediterranean diet. Their production rates and quality are highly dependent on weather and climate. These essential features are not guaranteed under future climate change conditions, which are expected to increase vulnerability to crop failure and pest damage."
Quantum Computing: Its Principles, Capabilities and Challenges
Dr. Mark Mattingley-Scott from IBM gave this talk at the Swiss HPC Conference. "Quantum Computing is here, right now - and we are at the start of a new way of computing, which will impact us the way the revolution started by Shockley, Bardeen and Brattain did in 1947. In this talk I will introduce Quantum Computing, its principles, capabilities and challenges and provide you with the insight you need to decide how you should engage with this revolutionary technology."
Let’s Talk Exascale: Making Software Development more Efficient
In this episode of Let's Talk Exascale, Mike Heroux from Sandia National Labs describes the Exascale Computing Project’s Software Development Kit, an organizational approach to reducing the complexity of managing ECP software technology projects. "My hope is that as we create these SDKs and bring these independently developed products together under a collaborative umbrella, that instead of saying that each of these individual products is available independently, we can start to say that an SDK is available."
R-Systems to Launch PWRS HPC Access Portal
Today Parallel Works Inc. announced that the company is partnering with R-Systems to launch the PWRS HPC Access Portal. The Portal will empower scientists, engineers, and data analysts with the tools to supercharge their computational studies. “We deliver flexible solutions to meet our clients’ on-premise, off-premise or Hybrid HPC requirements. Our partnership with Parallel Works increases R Systems’ arsenal of solutions and proves once again that we offer more than cores to our clients.”
CIARA steps up to High Frequency Trading and Blockchain with AMD
CIARA just announced new AMD-based systems for ultra-low-latency High Frequency Trading and Blockchain solutions. "With the adoption of new technologies such as large core count processors and the usage of ECC memory, the path for all financial enterprises to reap the benefits of safe hardware acceleration without compromising reliability is getting easier,” said Patrick Scateni, Vice President of Enterprise and Performance Group at CIARA. “The joint solutions coming from CIARA and AMD will bring high performance and a broader choice of compute platforms to the FSI market.”
AI Podcast Looks at Recent Developments at NVIDIA Research
In this episode of the AI Podcast, Bryan Catanzaro from NVIDIA discusses some of the latest developments at NVIDIA research. "The goal of NVIDIA research is to figure out what things are going to change the future of the company, and then build prototypes that show the company how to do that,” says Catanzaro. “And AI is a good example of that.”
Deep Learning at Scale for Cosmology Research
In this video from Google I/O 2018, Debbie Bard from NERSC describes Deep Learning at scale for cosmology research. "Debbie Bard is acting group lead for the Data Science Engagement Group at the National Energy Research Scientific Computing Center (NERSC) at Berkeley National Lab. A native of the UK, her career spans research in particle physics, cosmology and computing on both sides of the Atlantic."
Job of the Week: High Performance Software Developer at The Aerospace Corporation
The Aerospace Corporation in Virginia is seeking a High Performance Software Developer in our Job of the Week. "You will code high-performance technical computing applications in the areas of space system modeling, architecture performance assessment, and mission planning and scheduling."
New Update Speeds eXtremeDB Financial Edition for HPC
Today McObject announced a new version of its eXtremeDB Financial Edition for HPC database management system. Designed for speed and efficiency, the new version offers significant speed improvements, building on the performance of the previous record-setting version, along with a suite of other benefits such as an ultra-fast and flexible market data feed handler. "This version of eXtremeDB comes with a number of benefits, and of course delivers excellent performance," said Steve Graves, CEO and co-founder of McObject. "Speed has always been of critical importance for the financial markets, so we’re very pleased with the improvements this version delivers. It also offers unprecedented flexibility for HPC (high performance computing) with its wide range of math functions which support in-chip analytics.”
Neurala Reduces Training Time for Deep Neural Network Technology
Today Neurala announced a breakthrough update to its award-winning Lifelong Deep Neural Network (Lifelong-DNN) technology. The update allows for a significant reduction in training time compared to traditional DNN—20 seconds versus 15 hours—a reduction in overall data needs, and the ability for deep learning neural networks to learn without the risk of forgetting previous knowledge—with or without the cloud. "It takes a very long time to train a traditional DNN on a dataset, and, once that happens, it must be completely re-trained if even a single piece of new information is added. Our technology allows for a massive reduction in the time it takes to train a neural network and all but eliminates the time it takes to add new information,” said Anatoli Gorshechnikov, CTO and co-founder of Neurala. “Our Lifelong-DNN is the only AI solution that allows for incremental learning and is the breakthrough that companies across many industries have needed to make deep learning useful for their customers.”
PASC18 Panel to Focus on Fast and Big Data, Fast and Big Computation
Today the PASC18 conference announced that this year’s panel discussion will focus on the central theme of the conference: “Fast and Big Data, Fast and Big Computation.” Are these two worlds evolving and converging together? Or is HPC facing a game-changing moment as the appetite of the scientific computing community and industry shifts toward a different type of computation than what we're used to?
Video: Building Computing and Data Centres for Exascale in the EU
In this video from the 2018 Swiss HPC Conference, Peter Hopton from the EuroEXA project shares the problems and the solutions that are being developed in the EuroEXA co-design project. "EuroEXA hardware designers work together with system software experts optimizing the entire stack from language runtimes to low-level kernel drivers, and application developers that bring in a rich mix of key HPC applications from across climate/weather, physical/energy and life-science/bioinformatics domains to enable efficient system co-design and maximize the impact of the project."
XTREME-D to Launch Gateway Appliance for Secure HPC Cloud Access
"XTREME-Stargate is a small set-top linux appliance that can easily connect to HPC cloud with basic setup over a web portal using the XTREME-DNA interface. It functions as a “super head node” for HPC cloud clusters, providing access to on-premise, private, and public cloud without integration headaches, and allowing connections to baremetal cloud (either shared or dedicated), in addition to the public cloud vendors such as Azure and AWS that have always been accessible via XTREME-DNA."The post XTREME-D to Launch Gateway Appliance for Secure HPC Cloud Access appeared first on insideHPC.
Mateo Valero on how RISC-V can play a major role in New Supercomputer Architectures
In this eeNews report, Mateo Valero, Director of the Barcelona Supercomputing Center, explains how the RISC-V architecture can play a major role in new supercomputer architectures. Valero was the keynote speaker at the recent RISC-V Workshop in Barcelona. "Born in academia and research, RISC-V ISA delivers a new level of free, extensible software and hardware freedom on architecture, paving the way for the next 50 years of computing design and innovation."
ArrayFire Releases v3.6 Parallel Libraries
Today ArrayFire announced the release of ArrayFire v3.6, the company's open source library of parallel computing functions supporting CUDA, OpenCL, and CPU devices. This new version of ArrayFire includes several new features that improve the performance and usability for applications in machine learning, computer vision, signal processing, statistics, finance, and more. "We use ArrayFire to run the low level parallel computing layer of SDL Neural Machine Translation Products," said William Tambellini, Senior Software Developer at SDL. "ArrayFire's flexibility, robustness and dedicated support make it a powerful tool to support the development of Deep Learning Applications.”
The Need for Deep Learning Transparency
Steve Conway from Hyperion Research gave this talk at the HPC User Forum. "We humans don’t fully understand how humans think. When it comes to deep learning, humans also don’t understand yet how computers think. That’s a big problem when we’re entrusting our lives to self-driving vehicles or to computers that diagnose serious diseases, or to computers installed to protect national security. We need to find a way to make these “black box” computers transparent."
Let’s Talk Exascale: Transforming Combustion Science and Technology
In this episode of Let's Talk Exascale, Jackie Chen from Sandia National Laboratories describes the Combustion-Pele project, which uses predictive simulation for the development of cleaner-burning engines. "Almost all practical combustors operate under extremely high turbulence levels to increase the rate of combustion, providing high efficiency, but there are still outstanding challenges in understanding how turbulence affects auto-ignition."
Intel FPGAs Power Realtime AI in the Azure cloud
At the Microsoft Build conference held this week, Microsoft announced Azure Machine Learning Hardware Accelerated Models powered by Project Brainwave integrated with the Microsoft Azure Machine Learning SDK. In this configuration, customers gain access to industry-leading artificial intelligence inferencing performance for their models using Azure’s large-scale deployments of Intel FPGA (field programmable gate array) technology. "With today’s announcement, customers can now utilize Intel’s FPGA and Intel Xeon technologies to use Microsoft’s stream of AI breakthroughs on both the cloud and the edge."
Viking Technology Introduces NVMe Persistent Memory Drives
Today Viking Technology announced its new VT-PM8 and VT-PM16 persistent memory drives that deliver performance and unlimited write endurance similar to that of DRAM, while simultaneously providing the data persistence desired for enterprise applications. VT-PM drives, part of Viking's family of persistent memory technology products, are 2.5 inch U.2 NVMe PCIe Gen3 drives built with architecture from Radian Memory Systems Incorporated.
Call for Papers: High Performance Machine Learning Workshop – HPML 2018
The HPML 2018 High Performance Machine Learning Workshop has issued its Call for Papers. The event takes place September 24 in Lyon, France. "This workshop is intended to bring together the Machine Learning (ML), Artificial Intelligence (AI) and High Performance Computing (HPC) communities. In recent years, much progress has been made in Machine Learning and Artificial Intelligence in general."
Gauss Centre in Germany Allocates 1 Billion Computing Core Hours for Science
"With the 19th Call for Large-Scale Projects, the GCS steering committee granted a total of more than 1 billion core hours to 17 ambitious research projects. The research teams represent a wide range of scientific disciplines, including astrophysics, atomic and nuclear physics, biology, condensed matter physics, elementary particle physics, meteorology, and scientific engineering, among others."The post Gauss Centre in Germany Allocates 1 Billion Computing Core Hours for Science appeared first on insideHPC.
MareNostrum provides 470 million core hours to European scientists
The PRACE initiative continues to sponsor ground-breaking research in Europe. "In the 16th PRACE Call for Project Access, Spain has allocated 470 million core hours on MareNostrum to 17 projects led by scientists from different European countries. With this allocation, this is the second time in a row that this unique supercomputer – which is installed in a chapel – has been the largest contributor in number of core hours in the last two PRACE Calls for Proposals."
POWER9 for AI & HPC
Jeff Stuecheli from IBM gave this talk at the HPC User Forum in Tucson. "Built from the ground-up for data intensive workloads, POWER9 is the only processor with state-of-the-art I/O subsystem technology, including next generation NVIDIA NVLink, PCIe Gen4, and OpenCAPI."
D-Wave Launches Quadrant Business Unit for Machine Learning
Today D-Wave Systems launched its new Quadrant business unit, formed to provide machine learning services that make state-of-the-art deep learning accessible to companies across a wide range of industries and application areas. Quadrant's algorithms enable accurate discriminative learning (predicting outputs from inputs) using less data by constructing generative models which jointly model both inputs and outputs. "Quadrant is a natural extension of the scientific and technological advances from D-Wave as we continue to explore new applications for our quantum systems.”
Univa Navops Launch powers Cloudbursting for Mellanox Hybrid Cloud
Today Univa announced that Mellanox has selected Univa’s Navops Launch to extend its on-premise EDA cluster to the cloud, providing Mellanox with cost-effective, on-demand capacity. "Mellanox Technologies provides high-performance solutions to a range of customers looking to innovate with intelligent, interconnected solutions for servers, storage and hyper-converged infrastructure,” said Doron Sayag, IT enterprise computing services senior manager at Mellanox Technologies. “Integrating an enterprise-grade cluster management tool like Navops Launch into our own on-premise data center allowed us to better address peak performance needs with seamless bursting of our HPC cluster to the cloud during tape-outs. By operating more efficiently, we get to provide our customers with exceptional products for years to come.”
Kubernetes as a Service Built on OpenStack
Saverio Proto from SWITCH gave this talk at the Swiss HPC Conference. "At SWITCH we are looking to provide a container platform as a Service solution. We are working on Kubernetes leveraging the OpenStack cloud provider integration. In this talk we show how to re-use the existing Keystone credentials to access the K8s cluster, how to obtain PVCs using the Cinder storage class and many other nice integration details."
ClusterVision to build Scandinavia’s Most Powerful Supercomputer
Today Sweden's National Supercomputing Centre (NSC) at Linköping University announced it has awarded ClusterVision a contract to build its new flagship cluster, Tetralith. Available to all researchers in Sweden, the 4 Petaflop machine will be Scandinavia's most powerful yet with 60,544 cores based on Intel Xeon.
Lustre 2.11.0 Released
Today OpenSFS announced the release of Lustre 2.11.0, the fastest and most scalable parallel file system. OpenSFS, founded in 2010 to advance Lustre development, is the premier non-profit organization promoting the use of Lustre and advancing its capabilities through coordinated releases of the Lustre file system.
Video: Benchmarking as the Answer to HPC Performance and Architecture Questions
"This presentation offers an impartial look at benchmarking of HPC systems by Andrew Jones (@hpcnotes), NAG VP Strategic HPC Consulting & Services. This is described as a must-see for anyone involved in high performance or scientific computing."The post Video: Benchmarking as the Answer to HPC Performance and Architecture Questions appeared first on insideHPC.
New AI Performance Milestones with NVIDIA Volta GPU Tensor Cores
Over at the NVIDIA blog, Loyd Case shares some recent advancements that deliver dramatic performance gains on GPUs to the AI community. "We have achieved record-setting ResNet-50 performance for a single chip and single server with these improvements. Recently, fast.ai also announced their record-setting performance on a single cloud instance. A single V100 Tensor Core GPU achieves 1,075 images/second when training ResNet-50, a 4x performance increase compared to the previous generation Pascal GPU."
Radio Free HPC Does the Math on pending CORAL-2 Exascale Machines
In this podcast, the Radio Free HPC team takes a look at daunting performance targets for the DOE’s CORAL-2 RFP for Exascale Computers. “So, 1.5 million TeraFlops divided by 7.8 Teraflops per GPU is how many individual accelerators you need, and that’s 192,307. Now, multiply that by 300 watts per accelerator, and it is clear we are going to need something all-new to get where we want to go.”
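To make the quoted arithmetic easy to check, here is a small sketch (our illustration, using only the 1.5 exaflops, 7.8 TF and 300 W figures from the podcast; variable names are ours) that reproduces the accelerator count and adds up the implied power draw.

    #include <stdio.h>

    int main(void) {
        const double target_tflops = 1.5e6;   /* 1.5 exaflops expressed in teraflops */
        const double gpu_tflops    = 7.8;     /* assumed peak per accelerator */
        const double gpu_watts     = 300.0;   /* assumed power per accelerator */

        long gpus = (long)(target_tflops / gpu_tflops);   /* 192,307, as quoted */
        double megawatts = gpus * gpu_watts / 1.0e6;      /* roughly 57.7 MW for GPUs alone */

        printf("accelerators needed: %ld\n", gpus);
        printf("accelerator power:   %.1f MW\n", megawatts);
        return 0;
    }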
Cavium ThunderX2 Processor goes GA for HPC and Beyond
Today Cavium announced the General Availability of ThunderX2, Cavium's second generation of Armv8-A SoC processors. "Integrating ThunderX2 into the HPE Apollo 70 Servers is another example of HPE's leadership in driving innovation and superior technical solutions into the HPC server market. The ThunderX2 processor provides excellent compute and memory performance that is critical for our HPE Apollo 70 customers and the applications they depend on."
DNN Implementation, Optimization, and Challenges
This is the third in a five-part series that explores the potential of unified deep learning with CPU, GPU and FPGA technologies. This post explores DNN implementation, optimization and challenges.
Supercomputing How Cancer Spreads through Superdiffusion
Over at the University of Texas at Austin, Marc Airhart writes that researchers are using TACC supercomputers to better understand the physics behind the spread of cancer. "Having a physicist working on cancer can provide a new perspective into how a tumor evolves," said Abdul Malmi-Kakkada, a postdoctoral researcher who led the project, along with postdoctoral researcher Xin Li, and professor and chair of chemistry Dave Thirumalai. "And rather than only looking at genetics or biology, trying to attack the problem of cancer from different perspectives can hopefully lead to a better understanding."
HPC in Ontario, Canada
Dr. Chris Loken gave this talk at the HPC User Forum. "We collaborate with our partners to centralize strategy and planning for Ontario’s advanced computing assets, including hardware, software, data management, storage, security, connectivity and Highly Qualified Personnel. Together, we strive to address concerns about Ontario’s capacity to supply advanced computing at the level required for leading research and enabling industrial competitiveness."
Quobyte Joins STAC Benchmark Council to help Financial Services Solve Storage Challenges
Today hyperscale storage provider Quobyte announced that it has accepted an invitation to join the Securities Technology Analysis Center (STAC) Benchmark Council, an influential group of technologists in the finance industry. "Quobyte provides massively scalable software storage to allow the industry to take advantage of the insights they can gain from the extensive amount of historical trading data they have collected,” said Björn Kolbeck, Quobyte’s CEO and Co-Founder. “We’ve found that Quobyte storage software running on a cluster of industry standard whitebox servers can significantly reduce time spent on backtesting — literally converting time saved into money.”
Job of the Week: Quantitative Software Engineer for High Frequency Trading
Two Sigma Investments in New York is seeking a Quantitative Software Engineer for High Frequency Trading in our Job of the Week. "We are seeking leading software engineers to join our dynamic, fast-paced high frequency team. Equal parts code experts and mathematical thinkers, our quantitative software engineers bring together technical and analytical expertise to grapple with difficult computational and data-related problems directly and implement efficient and innovative solutions. Entailing a sophisticated knowledge of algorithms, statistics, and high-performance computing, this effort has attracted ACM programming competition finalists, Top Coder contenders, and other highly technical, competitive problem solvers."
BSC to host the RISC-V Workshop on the Road to European Processor Initiative
The Barcelona Supercomputing Center will host the RISC-V Workshop next week, a gathering of the open source processor design community to share RISC-V updates, projects and implementations. Founded in 2015, the RISC-V Foundation comprises more than 100 members building the first open, collaborative community of software and hardware innovators powering innovation at the edge forward. BSC is promoting the adoption of RISC-V as a key partner of the European Processor Initiative, the consortium to design and develop Europe's low-power processors and related technologies for extreme-scale, high-performance computing, which will be funded by the European Commission under the Horizon 2020 program.
Fast.ai Trains Neural Net in Record Time with Super Convergence
Over at the Fast.AI Blog, Jeremy Howard writes that his startup has achieved an amazing deep learning benchmark milestone – the ability to do an ImageNet training in 3 hours for just $25. "His recent discovery of an extraordinary phenomenon he calls super convergence shows that it is possible to train deep neural networks 5-10x faster than previously known methods, which has the potential to revolutionize the field."
HLRS and Wuhan to Collaborate on Exascale Computing
The High-Performance Computing Center Stuttgart (HLRS) and the Supercomputing Center of Wuhan University have announced plans to cooperate on technology and training projects. "HLRS and the Supercomputing Center at Wuhan University plan to exchange scientists and to focus on key research topics in high-performance computing. Both sides will also share experience in installing large-scale computing systems, particularly because both Wuhan and Stuttgart aim to develop exascale systems."