by staff on (#1WHX5)
Today XSEDE announced it has awarded 30,000 core-hours of supercomputing time on the Bridges supercomputer to the North Carolina School of Science and Mathematics (NCSSM). Funded with a $9.65M NSF grant, Bridges contains a large number of research-grade software packages for science and engineering, including codes for computational chemistry, computational biology, and computational physics, along with specialty codes such as computational fluid dynamics. "NCSSM research students often pursue interdisciplinary research projects that involve computational and/or laboratory work in chemistry, physics, and other fields," said Jon Bennett, instructor of physics and faculty mentor for physics research. "The availability of supercomputer computational resources would greatly expand the range and depth of projects that are possible for these students." The post Bridges Supercomputer to Power Research at North Carolina School of Science and Mathematics appeared first on insideHPC.
|
High-Performance Computing News Analysis | insideHPC
Link | https://insidehpc.com/ |
Feed | http://insidehpc.com/feed/ |
Updated | 2024-11-06 02:15 |
by staff on (#1WHKQ)
Today Amazon Web Services announced the availability of P2 instances, a new GPU instance type for Amazon Elastic Compute Cloud designed for compute-intensive applications that require massive parallel floating point performance, including artificial intelligence, computational fluid dynamics, computational finance, seismic analysis, molecular modeling, genomics, and rendering. With up to 16 NVIDIA Tesla K80 GPUs, P2 instances are the most powerful GPU instances available in the cloud. The post GPUs Power New AWS P2 Instances for Science & Engineering in the Cloud appeared first on insideHPC.
|
by staff on (#1WE7W)
Men still outnumber women in STEM training and employment, and engineering leaders are working to bring awareness to that diversity gap and the opportunities it presents. SC16 is calling upon all organizations to look at the diversity landscape and publish that data. "Of course, we are supporting programs that empower more girls to study and pursue STEM degrees and careers. Getting more girls through the educational and training pipeline is a great first step, but it's just the beginning." The post Supercomputing Experts Lend Expertise to Address STEM Gender Gap appeared first on insideHPC.
|
by staff on (#1WE61)
A huge barrier in converting cellulose polymers to biofuel lies in removing other biomass polymers that subvert this chemical process. To overcome this hurdle, large-scale computational simulations are picking apart lignin, one of those inhibiting polymers, and its interactions with cellulose and other plant components. The results point toward ways to optimize biofuel production and […] The post Supercomputing Plant Polymers for Biofuels appeared first on insideHPC.
|
by staff on (#1WDXN)
Today Penguin Computing announced Scyld Cloud Workstation 3.0, a 3D-accelerated remote desktop solution which provides true multi-user remote desktop collaboration for cloud-based Linux and Windows desktops. "Unlike other remote desktop solutions, collaboration via Scyld Cloud Workstation is more like sitting in-person with other engineers because a user can hand off control of their desktop to simplify collaboration on a project," said Victor Gregorio, Vice President and General Manager, Cloud Services, Penguin Computing. "Scyld Cloud Workstation brings collaboration to life, providing a much more thorough and proficient interaction among researchers and engineers working together on a remote desktop. Ultimately, this allows customers a more efficient means to leverage cloud-based desktop solutions." The post Penguin Computing Adds Remote Desktop Collaboration to Scyld Cloud Workstation appeared first on insideHPC.
|
by Rich Brueckner on (#1WDXQ)
SC16 will continue its HPC Matters Plenary session series this year with a panel discussion on HPC and Precision Medicine. The event will take place at 5:30 pm on Monday, Nov. 14, just prior to the exhibits opening gala. "The success of all of these research programs hinges on harnessing the power of HPC to analyze volumes of complex genomics and other biological datasets that simply can’t be processed by humans alone. The challenge for our community will be to develop the computing tools and services needed to transform how we think about disease and bring us closer to the precision medicine future." The post SC16 Plenary Session to Focus on HPC and Precision Medicine appeared first on insideHPC.
|
by MichaelS on (#1WDP5)
With the introduction of the Intel Scalable System Framework, the Intel Xeon Phi processor can speed up Finite Element Analysis significantly. Using highly tuned math libraries such as the Intel Math Kernel Library (Intel MKL), FEA applications can execute math routines in parallel on the Intel Xeon Phi processor. The post Accelerating Finite Element Analysis with Intel Xeon Phi appeared first on insideHPC.
|
by Rich Brueckner on (#1WE3Y)
Nikos Trikoupis from the City University of New York gave this talk at the HPC User Forum in Austin. "We focus on measuring the aggregate throughput delivered by 12 Intel SSD DC P3700 for NVMe cards installed on the SGI UV 300 scale-up system in the CUNY High Performance Computing Center. We establish a performance baseline for a single SSD. The 12 SSDs are assembled into a single RAID-0 volume using Linux Software RAID and the XVM Volume Manager. The aggregate read and write throughput is measured against different configurations that include the XFS and the GPFS file systems." The post Video: Analysis of SSDs on SGI UV 300 appeared first on insideHPC.
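As a back-of-the-envelope sketch of the methodology (the numbers below are hypothetical illustrations, not figures from the talk), the measured aggregate throughput of a striped array can be judged against the ideal of linear RAID-0 scaling from the single-SSD baseline:

```python
def raid0_scaling_efficiency(baseline_mbps, n_devices, measured_mbps):
    """Fraction of ideal linear RAID-0 scaling actually achieved.

    baseline_mbps: measured throughput of a single device
    n_devices:     number of devices striped into the RAID-0 volume
    measured_mbps: measured aggregate throughput of the array
    """
    ideal_mbps = baseline_mbps * n_devices
    return measured_mbps / ideal_mbps

# Hypothetical example: a 2,500 MB/s per-SSD read baseline across 12
# devices gives a 30,000 MB/s ceiling; a measured 27,000 MB/s aggregate
# would therefore be 90% of linear scaling.
print(raid0_scaling_efficiency(2500, 12, 27000))  # 0.9
```

Comparing XFS and GPFS configurations against the same ceiling, as the talk describes, shows how much of the raw device bandwidth each file system preserves.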
|
by Rich Brueckner on (#1WAG4)
"The POWER8 with NVIDIA NVLink processor enables incredible speed of data transfer between CPUs and GPUs ideal for emerging workloads like AI, machine learning and advanced analyticsâ€, said Rick Newman, Director of OpenPOWER Strategy & Market Development Europe. “The open and collaborative spirit of innovation within the OpenPOWER Foundation enables companies like E4 to leverage new technology and build cutting edge solutions to help clients grappling with the massive amounts of data in today’s technology environment.â€The post E4 Computer Engineering Rolls Out GPU-accelerated OpenPOWER server appeared first on insideHPC.
|
by Rich Brueckner on (#1WAEP)
Today Nvidia announced the general availability of the CUDA 8 toolkit for GPU developers. "A crucial goal for CUDA 8 is to provide support for the powerful new Pascal architecture, the first incarnation of which was launched at GTC 2016: Tesla P100," said Nvidia’s Mark Harris in a blog post. "One of NVIDIA’s goals is to support CUDA across the entire NVIDIA platform, so CUDA 8 supports all new Pascal GPUs, including Tesla P100, P40, and P4, as well as NVIDIA Titan X, and Pascal-based GeForce, Quadro, and DrivePX GPUs." The post Nvidia Releases CUDA 8 appeared first on insideHPC.
|
by staff on (#1WA9V)
Today Allinea Software announced availability of its new software release, version 6.1, which offers full support for programming parallel code with CUDA 8 on Nvidia's Pascal GPU architecture. "The addition of Allinea tools into the mix is an exciting one, enabling teams to accurately measure GPU utilization, employ smart optimization techniques and quickly develop new CUDA 8 code that is bug and bottleneck free," said Mark O’Connor, VP of Product Management at Allinea. The post Allinea Adds CUDA 8 Support for GPU Developers appeared first on insideHPC.
|
by Rich Brueckner on (#1WA41)
Today at GTC Europe, Nvidia unveiled Xavier, an all-new SoC based on the company's next-gen Volta GPU that will serve as the processor in future self-driving cars. According to Nvidia CEO Jen-Hsun Huang, the ARM-based Xavier will feature unprecedented performance and energy efficiency, while supporting deep-learning features important to the automotive market. A single Xavier-based AI car supercomputer will be able to replace today’s fully configured DRIVE PX 2 with two Parker SoCs and two Pascal GPUs. The post Video: Nvidia Unveils ARM-Powered SoC with Volta GPU appeared first on insideHPC.
|
by staff on (#1W9YF)
"Our customers are looking for a highly integrated server adapter that solves their pressing need for network performance, efficiency and security,†said Gilad Shainer, vice president of marketing, Mellanox Technologies. “The Innova adapter provides IPsec offload to deliver complete end-to-end security for traffic moving within the data center. Combined with the intelligent network offload and acceleration engines, Innova IPsec is the ideal solution for cloud, telecommunication, Web 2.0, high-performance compute and storage infrastructures.â€The post Mellanox Roll Out New Innova IPsec 10/40G Ethernet Adapters appeared first on insideHPC.
|
by Rich Brueckner on (#1W9WC)
In this video from LUG 2016 in Australia, Chakravarthy Nagarajan from Intel presents: An Optimized Entry Level Lustre Solution in a Small Form Factor. "Our goal was to provide an entry level Lustre storage solution in a high density form factor, with a low cost, small footprint, all integrated with Intel Enterprise Edition for Lustre* software." The post Video: An Optimized Entry Level Lustre Solution in a Small Form Factor appeared first on insideHPC.
|
by staff on (#1W6EV)
"We are at an inflection point in the big data era,†said Bob Picciano, senior vice president, IBM Analytics. “We know that users spend up to 80 percent of their time on data preparation, no matter the task, even when they are applying the most sophisticated AI. Project DataWorks helps transform this challenge by bringing together all data sources on one common platform, enabling users to get the data ready for insight and action, faster than ever before.â€The post IBM Unveils Project DataWorks for AI-Powered Decision-Making appeared first on insideHPC.
|
by Rich Brueckner on (#1W6D0)
Registration is now open for HP-CAST at SC16. The event takes place Nov. 11-12 in Salt Lake City. The post Registration Opens for HP-CAST at SC16 appeared first on insideHPC.
|
by staff on (#1W643)
Today Rogue Wave Software announced it is working with IBM to help make open source software (OSS) support more available. This will help provide comprehensive, enterprise-grade technical support for OSS packages. "With our ten-year history in open source, organizations can feel confident in our ability to resolve issues," said Richard Sherrard, director of product management at Rogue Wave Software. "We have tier-3 and 4 enterprise architects that offer round-the-clock support for entire ecosystems. We are long-standing experts when it comes to OSS and proud to be working with IBM." The post Rogue Wave Improves Support for Open Source Software with IBM appeared first on insideHPC.
|
by staff on (#1W62B)
Today D-Wave Systems announced details of its most advanced quantum computing system, featuring a new 2000-qubit processor. The announcement is being made at the company’s inaugural users group conference in Santa Fe, New Mexico. The new processor doubles the number of qubits over the previous generation D-Wave 2X system, enabling larger problems to be solved and extending D-Wave’s significant lead over all quantum computing competitors. The new system also introduces control features that allow users to tune the quantum computational process to solve problems faster and find more diverse solutions when they exist. In early tests these new features have yielded performance improvements of up to 1000 times over the D-Wave 2X system. The post D-Wave Systems Previews 2000-Qubit Quantum Computer appeared first on insideHPC.
|
by Rich Brueckner on (#1W5Y9)
"The demands of cloud-based business models require service providers to pack more efficient computational capability into their infrastructure,†said Monika Biddulph, general manager, systems and software group, ARM. “Our new CoreLink system IP for SoCs, based on the ARMv8-A architecture, delivers the flexibility to seamlessly integrate heterogeneous computing and acceleration to achieve the best balance of compute density and workload optimization within fixed power and space constraints.â€The post ARM Releases CoreLink Interconnect appeared first on insideHPC.
|
by Rich Brueckner on (#1W5TJ)
"Starting in 2015, Oak Ridge National Laboratory partnered with the University of Tennessee to offer a minor-degree program in data center technology and management, one of the first offerings of its kind in the country. ORNL staff members developed the senior-level course in collaboration with UT College of Engineering professor Mark Dean after an ORNL strategic partner identified a need for employees who could bridge both the facilities and operational aspects of running a data center. In addition to developing the course curriculum, ORNL staff members are also serving as guest lecturers."The post Video: How ORNL is Bridging the Gap between Computing and Facilities appeared first on insideHPC.
|
by Rich Brueckner on (#1W34E)
"Deep learning developers and researchers want to train neural networks as fast as possible. Right now we are limited by computing performance," said Dr. Diamos. "The first step in improving performance is to measure it, so we created DeepBench and are opening it up to the deep learning community. We believe that tracking performance on different hardware platforms will help processor designers better optimize their hardware for deep learning applications."The post Baidu Research Announces DeepBench Benchmark for Deep Learning appeared first on insideHPC.
|
by Rich Brueckner on (#1W2MJ)
Oak Ridge National Lab is hosting a 3-day GPU Mini-hackathon led by experts from the OLCF and Nvidia. The event takes place Nov. 1-3 in Knoxville, Tennessee. "General-purpose Graphics Processing Units (GPGPUs) potentially offer exceptionally high memory bandwidth and performance for a wide range of applications. The challenge in utilizing such accelerators has been the difficulty in programming them. This event will introduce you to GPU programming techniques." The post Register Now for GPU Mini-Hackathon at ORNL Nov. 1-3 appeared first on insideHPC.
|
by staff on (#1W2AN)
"As more organizations turn to high performance computing to process large data sets, demand is growing for scalable and secure data centre solutions. The source, availability and reliability of the power grid infrastructure is becoming a critical factor in a data centre site selection decision,†said Jeff Monroe, CEO at Verne Global. “Verne Global is able to deliver EI a forward-thinking path for growth with a solution that combines unparalleled costs savings with operational efficiencies to support their data-intensive research.â€The post Earlham Institute Tests Green HPC from Verne Global in Iceland appeared first on insideHPC.
|
by Rich Brueckner on (#1W254)
Today DDN Japan announced that the University of Tokyo and the Joint Center for Advanced High Performance Computing (JCAHPC) have selected DDN’s burst buffer solution "IME14K" for their new Reedbush supercomputer. "Many problems in science and research today are located at the intersections of HPC and Big Data, and storage and I/O are increasingly important components of any large compute infrastructure." The post University of Tokyo to Deploy IME14K Burst Buffer on Reedbush Supercomputer appeared first on insideHPC.
|
by Rich Brueckner on (#1W1S1)
In this podcast, the Radio Free HPC team discusses Henry Newman’s recent editorial calling for a self-descriptive data format that will stand the test of time. Henry contends that we seem to be headed for massive data loss unless we act. The post Radio Free HPC Looks for the Forever Data Format appeared first on insideHPC.
|
by staff on (#1W1AW)
Today Quantum Corp. announced that two of Europe’s premier research institutions are using the company’s StorNext workflow storage as the foundation for managing their growing data and enabling a range of scientific initiatives. "With the StorNext platform, we have removed barriers to research," said Thomas Disper, CISO and Head of IT, Max Planck Institute for Chemistry. "It allows us to provide a lot more capacity quickly and easily. We don’t need to give research teams data limits, and storage for new projects can be ready in an afternoon." The post Quantum Powers Petascale Storage at Two Major European Research Institutions appeared first on insideHPC.
|
by staff on (#1VYX8)
Over at the ANSYS Blog, Tony DeVarco writes that the company worked with SGI to break a world record for HPC scalability. "Breaking last year’s 129,024-core record by more than 16,000 cores, SGI was able to run the ANSYS-provided 830-million-cell gas combustor model from 1,296 to 145,152 CPU cores. This reduces the total solver wall clock time to run a single simulation from 20 minutes for 1,296 cores to a mere 13 seconds using 145,152 cores, achieving an overall scaling efficiency of 83%." The post SGI and ANSYS Achieve New World Record in HPC appeared first on insideHPC.
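Those figures check out against the standard strong-scaling efficiency formula, efficiency = (T_base × N_base) / (T_scaled × N_scaled) — a quick sanity check, not part of the ANSYS/SGI write-up:

```python
def strong_scaling_efficiency(t_base, n_base, t_scaled, n_scaled):
    """Strong scaling: achieved speedup divided by the ideal speedup
    (the ratio of core counts)."""
    speedup = t_base / t_scaled
    ideal_speedup = n_scaled / n_base
    return speedup / ideal_speedup

# 20 minutes (1,200 s) on 1,296 cores down to 13 s on 145,152 cores
eff = strong_scaling_efficiency(1200, 1296, 13, 145152)
print(f"{eff:.0%}")  # ~82%, consistent with the reported 83% given
                     # rounding in the 13-second timing
```

Any speedup figure quoted alongside a core-count increase can be sanity-checked the same way.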
|
by staff on (#1VYV4)
The National Computational Infrastructure in Canberra, Australia’s national advanced computing facility, is the first Australian institution to deploy the latest generation of Intel Xeon Phi processors, formerly code named Knights Landing. "NCI is leading efforts in the scientific community to tune applications for Intel Xeon Phi processors," explains Dr Muhammad Atif, NCI’s HPC Systems and Cloud Services Manager. "We have identified a large number of applications that will benefit from this hardware and software paradigm, including those applications in the domains of computational physics, computational chemistry and climate research." The post Intel Xeon Phi Boosts Supercomputing at NCI in Australia appeared first on insideHPC.
|
by Rich Brueckner on (#1VW16)
Maria Chan from NST presented this talk at Argonne Out Loud. "People eagerly anticipate environmental benefits from advances in clean energy technologies, such as advanced batteries for electric cars and thin-film solar cells. Optimizing these technologies for peak performance requires an atomic-level understanding of the designer materials used to make them. But how is that achieved? Maria Chan will explain how computer modeling is used to investigate and even predict how materials behave and change, and how researchers use this information to help improve the materials' performance. She will also discuss the open questions, challenges, and future strategies for using computation to advance energy materials." The post Video: Using HPC to build Clean Energy Technologies appeared first on insideHPC.
|
by Rich Brueckner on (#1VW18)
The European Fortissimo Project has issued its Second Call for Proposals. Fortissimo is a collaborative project that enables European SMEs to be more competitive globally through the use of simulation services running on High Performance Computing Cloud infrastructure. The post Call for Proposals: Fortissimo Project appeared first on insideHPC.
|
by staff on (#1VS0J)
Today Cadence Design Systems announced several important deliveries in its collaboration with TSMC to advance 7nm FinFET designs for mobile and high-performance computing platforms. Working together, Cadence and TSMC have developed some of the first design IP offerings for the 7nm process, offering early IP access to protocols that are optimized for and most relevant to mobile and HPC applications. The post Cadence and TSMC Advance Towards 7nm FinFET Designs appeared first on insideHPC.
|
by Rich Brueckner on (#1VRYV)
The DDN User Group meeting is returning to Salt Lake City for SC16. The meeting takes place Monday, Nov. 14 from 2pm-6pm at the Radisson Hotel near the convention center. The post DDN User Group Coming to SC16 Nov. 14 appeared first on insideHPC.
|
by Rich Brueckner on (#1VRKT)
Larry Smarr presented this talk as part of NCSA's 30th Anniversary Celebration. "For the last thirty years, NCSA has played a critical role in bringing computational science and scientific visualization to the national user community. I will embed those three decades in the 50 year period 1975 to 2025, beginning with my solving Einstein's equations for colliding black holes on the megaFLOPs CDC 6600 and ending with the exascale supercomputer. This 50 years spans a period in which we will have seen a one trillion-fold increase in supercomputer speed." The post Larry Smarr Presents: 50 Years of Supercomputing appeared first on insideHPC.
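The trillion-fold claim is easy to verify: a megaflops-class machine like the CDC 6600 performs on the order of 10^6 floating-point operations per second, while an exascale system targets 10^18:

```python
cdc6600_flops = 1e6    # megaflops era: ~10^6 floating-point ops/sec
exascale_flops = 1e18  # exascale target: 10^18 floating-point ops/sec

ratio = exascale_flops / cdc6600_flops
print(f"{ratio:.0e}")  # 1e+12 -> a trillion-fold increase
```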
|
by staff on (#1VRJE)
This week Minimal Metrics announced an early-adopter program for PerfMiner, which uses lightweight, pervasive performance-data collection technology, automates that collection, and mines the data for key performance indicators. These indicators were developed through Minimal Metrics’ extensive experience tuning HPC and enterprise application performance, and are presented in an audience-specific, drill-down hierarchy that provides accountability for site productivity down to the performance of individual application threads. The post Minimal Metrics Releases PerfMiner Parallel Optimization Tool appeared first on insideHPC.
|
by Rich Brueckner on (#1VQS6)
In this video from ISC 2016, Tim Carroll describes how Cycle Computing is working with Dell Technologies to deliver more science for more users. Cycle Computing’s CycleCloud software suite is the leading cloud orchestration, provisioning, and data management platform for Big Compute, Big Data, and large technical computing applications running on any public, private, or internal environment. The post Video: Cycle Computing Works with Dell to Deliver More Science for More Users appeared first on insideHPC.
|
by staff on (#1VN19)
Today Verne Global announced Volkswagen is moving more than 1 MW of high performance computing applications to the company’s datacenter in Iceland. The company will take advantage of Verne Global’s hybrid data center approach – with variable resiliency and flexible density – to support HPC applications in its continuous quest to develop cutting-edge cars and automotive technology. The post Volkswagen Moves HPC Workloads to Verne Global in Iceland appeared first on insideHPC.
|
by Rich Brueckner on (#1VMX3)
Today ArrayFire released the latest version of its open source library of parallel computing functions supporting CUDA, OpenCL, and CPU devices. ArrayFire v3.4 improves features and performance for applications in machine learning, computer vision, signal processing, statistics, finance, and more. The post ArrayFire v3.4 Parallel Computing Library Speeds Machine Learning appeared first on insideHPC.
|
by MichaelS on (#1VMKC)
"Fortran has been proven to be extremely resilient to new developments that have appeared in other programming languages over the years. New versions continue to be available and associated with ANSI standards, so that an application written for one operating system should be able to be compiled and run with different compilers on different operating systems. The latest version is Fortran 2008, with the next version reportedly to be available as Fortran 2015, in 2018."The post Fortran for HPC appeared first on insideHPC.
|
by Rich Brueckner on (#1VMQV)
In this video from GTC 2016 in Taiwan, Nvidia CEO Jen-Hsun Huang unveils technology that will accelerate the deep learning revolution that is sweeping across industries. "AI computing will let us create machines that can learn and behave as humans do. It’s the reason why we believe this is the beginning of the age of AI." The post Video: The Deep Learning AI Revolution appeared first on insideHPC.
|
by Rich Brueckner on (#1VKVT)
The George Washington University is seeking a Simulation IT Specialist in our Job of the Week. GW’s Clinical Learning and Simulation Skills (CLASS) Center provides one of the most innovative educational environments in the nation. "The CLASS Center is seeking a Simulation IT Specialist to be part of the dynamic team that develops and delivers simulation scenarios for GW learners. The Simulation IT Specialist will oversee the technology infrastructure for the CLASS Center by ensuring all AV and IT equipment is integrated seamlessly." The post Job of the Week: Simulation IT Specialist at George Washington University appeared first on insideHPC.
|
by Rich Brueckner on (#1VH91)
In this video from the 2016 HPC User Forum in Austin, Earl Joseph describes IDC's new Exascale Tracking Study. The project will monitor the many Exascale projects around the world. The post IDC to Launch New Exascale Tracking Study appeared first on insideHPC.
|
by staff on (#1VH59)
In this RCE Podcast, Brock Palen and Jeff Squyres speak with Gregory Kurtzer about Singularity, a container solution for HPC and research environments. "Singularity allows a non-privileged user to 'swap out' the operating system on the host for one they control. So if the host system is running RHEL6 but your application runs in Ubuntu, you can create an Ubuntu image, install your applications into that image, copy the image to another host, and run your application on that host in its native Ubuntu environment." The post RCE Podcast Looks at Singularity Container Solution for HPC appeared first on insideHPC.
|
by staff on (#1VH3F)
"PSyclone was developed for the UK Met Office and is now a part of the build system for Dynamo, the dynamical core currently in development for the Met Office’s ‘next generation’ weather and climate model software. By generating the complex code needed to make use of thousands of processors, PSyclone leaves the Met Office scientists free to concentrate on the science aspects of the model. This means that they will not have to change their code from something that works on a single processing unit (or core) to something that runs on many thousands of cores."The post PSyclone Software Eases Weather and Climate Forecasting appeared first on insideHPC.
|
by Rich Brueckner on (#1VGY2)
The CloudLightning Project in Europe has published preliminary results from a survey on Barriers to Using HPC in the Cloud. "Trust in cloud computing would appear to be a significant barrier to adopting cloud computing for HPC workloads. Data management concerns dominate the responses." The post CloudLightning Report Looks at Barriers to HPC in the Cloud appeared first on insideHPC.
|
by staff on (#1VGT4)
Today TYAN announced support and availability of the NVIDIA Tesla P100, P40 and P4 GPU accelerators with the new NVIDIA Pascal architecture. Incorporating NVIDIA’s state-of-the-art technologies allows TYAN to offer HPC users exceptional performance and features for data-intensive applications. The post TYAN Adds Support for NVIDIA Tesla P100, P40 and P4 GPUs appeared first on insideHPC.
|
by staff on (#1VD5A)
Today Cycle Computing announced the Cloud-Agnostic Glossary, a solution brief written by Cycle Computing executives to help customers understand the different terms that different providers use and how they relate. "Technology keeps evolving, terms keep changing, and because of this, we were inspired to stop and take a moment to develop a glossary to keep track of meanings in real time, and according to vendor," said Jason Stowe, CEO, Cycle Computing. "We ended up with this great solution brief, worthy of reading and sharing. It's a useful document that we plan to update regularly." The post Cycle Computing Publishes Cloud-Agnostic Glossary appeared first on insideHPC.
|
by Rich Brueckner on (#1VD1M)
In this video, Better Markets CEO Dennis Kelleher discusses the progress of the Consolidated Audit Trail (CAT), a proposed SEC supercomputer that will be used to track orders and peer into dark pools. While this sounds like a good idea, Kelleher describes the conflicts of interest inherent in the proposal process the SEC is using for CAT. Kelleher is the CEO of Better Markets, a non-profit, non-partisan, and independent organization founded in the wake of the 2008 financial crisis to promote the public interest in the financial markets. The post Video: CAT Supercomputer to Track Dark Pools on Wall Street appeared first on insideHPC.
|
by staff on (#1VCVM)
Submissions are now open for ISC 2017 tutorial and workshop proposals. The ISC 2017 conference takes place June 18-22, 2017 in Frankfurt, Germany. The post Call For Proposals: ISC 2017 Tutorials and Workshops appeared first on insideHPC.
|
by Rich Brueckner on (#1VCMJ)
"PushToCompute is the easiest and most advanced DevOps pipeline for high performance applications available todayâ€, said Nimbix CTO Leo Reiter. “It seamlessly enables serverless computing of even the most complex workflows, greatly simplifying application deployment at scale, and eliminating the need for any platform orchestration or user interface work. Developers simply focus on their specific functionality, rather than on building cloud capabilities into their applications.â€The post Nimbix Cloud Adds Docker Integration to JARVICE appeared first on insideHPC.
|
by MichaelS on (#1VCFM)
With a massive surge in genomics research, the ability to quickly process very large amounts of data is now required for any organization involved in genomics. While the cost of sequencing has been reduced significantly, the amount of data produced has increased as well. This article describes next generation sequencing and how a combination of hardware and innovative software can decrease the time needed to sequence genomes. The post Next Generation Sequencing appeared first on insideHPC.
|