by Rich Brueckner on (#WN56)
"Ngenea’s blazingly-fast on-premises storage stores frequently accessed active data on the industry’s leading high performance file system, IBM Spectrum Scale (GPFS). Less frequently accessed data, including backup, archival data and data targeted to be shared globally, is directed to cloud storage based on predefined policies such as age, time of last access, frequency of access, project, subject, study or data source. Ngenea can direct data to specific cloud storage regions around the world to facilitate remote low latency data access and empower global collaboration."The post Video: General Atomics Delivers Data-Aware Cloud Storage Gateway with ArcaStream appeared first on insideHPC.
|
High-Performance Computing News Analysis | insideHPC
Link | https://insidehpc.com/ |
Feed | http://insidehpc.com/feed/ |
Updated | 2024-11-26 08:15 |
by Rich Brueckner on (#WHW9)
"Modern Cosmology and Plasma Physics codes are capable of simulating trillions of particles on petascale systems. Each time step generated from such simulations is on the order of 10s of TBs. Summarizing and analyzing raw particle data is challenging, and scientists often focus on density structures for follow-up analysis. We develop a highly scalable version of the clustering algorithm DBSCAN and apply it to the largest particle simulation datasets. Our system, called BD-CATDS, is the first one to perform end-to-end clustering analysis of trillion particle simulation output. We demonstrate clustering analysis of a 1.4 Trillion particle dataset from a plasma physics simulation, and a 10,240^3 particle cosmology simulation utilizing ~100,000 cores in 30 minutes. BD-CATS has enabled scientists to ask novel questions about acceleration mechanisms in particle physics, and has demonstrated qualitatively superior results in cosmology. Clustering is an example of one scientific data analytics problem. This talk will conclude with a broad overview of other leading data analytics challenges across scientific domains, and joint efforts between NERSC and Intel Research to tackle some of these challenges."The post Video: High Performance Clustering for Trillion Particle Simulations appeared first on insideHPC.
|
by Rich Brueckner on (#WHRT)
"Avere Systems is now offering you ultimate cloud flexibility. We've now announced full support across three different cloud service providers: Google, Amazon, and Azure. And now we offer customers to be able to move their compute workloads across all these different cloud providers and store data across different cloud providers to achieve ultimate flexibility."The post Avere Enables Cloudbursting Across Different Providers at SC15 appeared first on insideHPC.
|
by MichaelS on (#WHNJ)
"Many optimizations can be performed on an application that is QCD based and can take advantage of the Intel Xeon Phi coprocessor as well. With pre-fetching, SMT threading and other optimizations as well as using the Intel Xeon Phi coprocessor, the performance gains were quite significant. An initial test, using single precision on the base Sandy Bridge systems, the test case was showing about 128 Gflops. However, using the Intel Xeon Phi coprocessor, the performance jumped to over 320 Gflops."The post QCD Optimization on Intel Xeon Phi appeared first on insideHPC.
|
by staff on (#WHNM)
At SC15, 1degreenorth announced plans to build an on-demand High Performance Computing Big Data Analytics (“HPC-BDA”) infrastructure at the National Supercomputing Center (NSCC) Singapore. The prototype will be used for experimentation and proof-of-concept projects by the big data and data science community in Singapore. The post 1degreenorth to Prototype HPDA Infrastructure in Singapore appeared first on insideHPC.
|
by Rich Brueckner on (#WHHT)
"HPC is no longer a tool only for the most sophisticated researchers. We’re taking what we’ve learned from working with some of the most advanced, sophisticated universities and research institutions and customizing that for delivery to mainstream enterprises,†said Jim Ganthier, vice president and general manager, Engineered Solutions and Cloud, Dell. “As the leading provider of systems in this space, Dell continues to break down barriers and democratize HPC. We’re seeing customers in even more industry verticals embrace its power.â€ovations, by partnering on ecosystems, by partnering in terms of labs, we're going to be able to take all of that wonderful opportunity and make it readily available to enterprises of all classes."The post Intel® Scalable System Framework and Dell’s Strategic Focus on HPC appeared first on insideHPC.
|
by Rich Brueckner on (#WDVZ)
"We've tailored our story for the HPC developers here, who are really worried about applications and performance of applications. What's really happened traditionally is that the single-threaded applications had not really been able to take advantage of the multi-core processor-based server platforms. So they've not really been getting the optimized platform and they've been leaving money on the table, so to speak. Because when you can optimize your applications for parallelism, you can take advantage of these multi-processor server platform. And you can get sometimes up to 10x performance boost, maybe sometime 100x, we've seen some financial services applications, or 3x for chemistry types of simulations as an example."The post Hewlett Packard Enterprise Showcases Benefits of Code Modernization appeared first on insideHPC.
|
by staff on (#W9TV)
Altair has announced that it will provide an open source licensing option of PBS Professional® (PBS Pro). PBS Pro will become available under two different licensing options: for commercial installations and as an Open Source Initiative compliant version. Altair will work closely with Intel and the Linux Foundation’s OpenHPC Collaborative Project to integrate the open source version of PBS Pro. The post Altair to Open Source PBS Professional HPC Technology in 2016 appeared first on insideHPC.
|
by Rich Brueckner on (#W6J8)
In this video from SC15, Patrick McGinn from CoolIT Systems describes the company's latest advancements in industry leading liquid cooling solutions for HPC data center systems. “The adoption from vendors and end users for liquid cooling is growing rapidly with the rising demands in rack density and efficiency requirements,” said Geoff Lyon, CEO/CTO of CoolIT Systems, who also chairs The Green Grid’s Liquid Cooling Work Group. “CoolIT Systems is responding to these demands with our world leading enterprise level liquid cooling solutions.” The post CoolIT Systems Takes Liquid Cooling for HPC Data Centers to the Next Level appeared first on insideHPC.
|
by staff on (#WE87)
"The challenges facing informatics systems in the pharmaceutical industry’s R&D laboratories are changing. The number of large-scale computational problems in the life sciences is growing and they will need more high-performance solutions than their predecessors. But the response has to be nuanced: HPC is not a cure-all for the computational problems of pharma R&D. Some applications are better suited to the use of HPC than others and its deployment needs to be considered right at the beginning of experimental design."The post How Can HPC Best Help Pharma R&D? appeared first on insideHPC.
|
by Rich Brueckner on (#WE65)
In this video from SC15, Rich Brueckner from insideHPC moderates a panel discussion on the NSCI initiative. "As a coordinated research, development, and deployment strategy, NSCI will draw on the strengths of departments and agencies to move the Federal government into a position that sharpens, develops, and streamlines a wide range of new 21st century applications. It is designed to advance core technologies to solve difficult computational problems and foster increased use of the new capabilities in the public and private sectors." The post Video: Dell Panel Discussion on the NSCI initiative from SC15 appeared first on insideHPC.
|
by staff on (#WE2F)
Registration is now open for the 2016 OpenPOWER Summit, which will take place April 5-7 in San Jose, California in conjunction with the GPU Technology Conference. With a conference theme of "Revolutionizing the Datacenter," the event has issued its Call for Speakers and Exhibits. The post OpenPOWER Summit Returns to GTC in San Jose in April, 2016 appeared first on insideHPC.
|
by Rich Brueckner on (#WE0H)
In this video from SC15, Rich Brueckner from insideHPC talks to contestants in the Student Cluster Competition. Using hardware loaners from various vendors and Allinea performance tools, nine teams went head-to-head to build the fastest HPC cluster. The post Student Cluster Teams Learn the Tools of Performance Tuning at SC15 appeared first on insideHPC.
|
by Rich Brueckner on (#WDYX)
Today Green Revolution Cooling announced that their immersive cooling technology is used by the most efficient supercomputers in the world as ranked in the latest Green500 list. For the third consecutive year, the Green Revolution Cooling-powered Tsubame-KFC supercomputer at Tokyo Institute of Technology achieved top honors, ranking as the most efficient commercially available setup, and second overall. The post Green Revolution Cooling Helps Tsubame-KFC Supercomputer Top the Green500 appeared first on insideHPC.
|
by staff on (#WANZ)
Today the ISC 2016 conference issued its Call for Proposals for its new PhD Forum, which will allow PhD students to present their research results in a setting that sparks the exchange of scientific knowledge and lively discussions. The call is now open and interested students are encouraged to submit their proposals by February 15, 2016. The post PhD Forum Coming to ISC 2016 appeared first on insideHPC.
|
by staff on (#WAGZ)
Today the HPC Advisory Council announced that 12 university teams from around the world will compete in the HPCAC-ISC 2016 Student Cluster Competition at the ISC 2016 conference next June in Frankfurt. The post 12 International Student Teams to Face Off at HPCAC-ISC 2016 Student Cluster Competition appeared first on insideHPC.
|
by Rich Brueckner on (#WABC)
"What we're showcasing this year is - what we're jokingly calling - face-melting performance. What we're trying to do is make extreme performance available at a very aggressive price point, and at a very aggressive space point, for end users. So, what we've been doing and what we've been working on for the past couple of months has been, basically, building an NVMe-type unit. This NVMe unit connects flash devices through a PCIe interface to the processor complex."The post “Face-Melting Performance†with the Forte Hyperconverged NVMe Appliance from Scalable Informatics appeared first on insideHPC.
|
by Rich Brueckner on (#WA86)
Today Seagate and Newisys announced a new flash storage architecture capable of 1 Terabyte/sec performance. Designed for HPC applications, the "industry’s fastest flash storage design" comprises 21 Newisys NSS-2601 servers with dual NSS-HWxEA Storage Server Modules deployed with Seagate’s newest SAS 1200.2 SSD drives. These devices can be combined in a single 42U rack to achieve block I/O performance of 1TB/s with 5PB of storage. Each Newisys 2U server with 60 Seagate SSDs is capable of achieving bandwidth of 49GB/s. The post Seagate and Newisys Demonstrate 1 TB/s Flash Architecture appeared first on insideHPC.
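The quoted figures are consistent with each other, assuming the 42U rack is filled with 2U servers; a quick check:

```python
# Sanity check of the rack-level math in the announcement.
rack_units = 42
units_per_server = 2
servers = rack_units // units_per_server     # 21 servers per rack
gb_per_sec_per_server = 49                   # per 2U Newisys server
total = servers * gb_per_sec_per_server      # 1029 GB/s, i.e. ~1 TB/s
print(f"{servers} servers x {gb_per_sec_per_server} GB/s = {total} GB/s")
```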
|
by Rich Brueckner on (#WA66)
In this video from SC15, Bill Mannel from HPE, Charlie Wuischpard from Intel, and Nick Nystrom from the Pittsburgh Supercomputing Center discuss their collaboration for High Performance Computing. Early next year, Hewlett Packard Enterprise will deploy the Bridges supercomputer based on Intel technology for breakthrough data centric computing at PSC. "Welcome to Bridges, a new concept in HPC - a system designed to support familiar, convenient software and environments for both traditional and non-traditional HPC users. It is a richly connected set of interacting systems offering a flexible mix of gateways (web portals), Hadoop and Spark ecosystems, batch processing and interactivity." The post Hewlett Packard Enterprise, Intel and PSC: Driving Innovation in HPC appeared first on insideHPC.
|
by MichaelS on (#W9G2)
Genome sequencing is a technology that can take advantage of the growing capability of today's modern HPC systems. Dell is leading the charge in the area of personalized medicine by providing highly tuned systems to perform genomic sequencing and data management. The whitepaper, The insideHPC Guide to Genomics, is an overview of how Dell is providing state-of-the-art solutions to the life science industry. The post Genomics and HPC appeared first on insideHPC.
|
by Rich Brueckner on (#W7HA)
"We're now providing LSF in the Cloud as a service to our customers because their workloads are getting larger over time, they're converging-- HPC is not converging with Analytics and even tough they provision for their average load, they can never provision for the spikes or for new projects. So we're helping our clients out by providing the services in the Cloud, where they can get LSF or Platform Symphony, or Spectrum Scale."The post Video: IBM Platform LSF in the Cloud at SC15 appeared first on insideHPC.
|
by Rich Brueckner on (#W73N)
Dan Stanzione from TACC presented this talk at the DDN User Group at SC15. "TACC is an advanced computing research center that provides comprehensive advanced computing resources and support services to researchers in Texas and across the USA. The mission of TACC is to enable discoveries that advance science and society through the application of advanced computing technologies. Specializing in high performance computing, scientific visualization, data analysis & storage systems, software, research & development and portal interfaces, TACC deploys and operates advanced computational infrastructure to enable computational research activities of faculty, staff, and students of UT Austin." The post Towards the Convergence of HPC and Big Data — Data-Centric Architecture at TACC appeared first on insideHPC.
|
by Rich Brueckner on (#W6WE)
The HPC Advisory Council Stanford Conference 2016 has issued its Call for Participation. The event will take place Feb 24-25, 2016 on the Stanford University campus at the new Jen-Hsun Huang Engineering Center. "The HPC Advisory Council Stanford Conference 2016 will focus on High-Performance Computing usage models and benefits, the future of supercomputing, latest technology developments, best practices and advanced HPC topics. In addition, there will be a strong focus on new topics such as Machine Learning and Big Data. The conference is open to the public free of charge and will bring together system managers, researchers, developers, computational scientists and industry affiliates." The post Call for Participation: HPC Advisory Council Stanford Conference appeared first on insideHPC.
|
by Rich Brueckner on (#W6TV)
"We have enabled virtualization for HPC but it's important to bring the benefits of virtualization to end researchers in a way they can use it, right? So what we have done is we have created the solution plus VMware High-Performance Analytics, which allows researchers to author their own workloads, they can collaborate it, they can clone it, then they can share it with other researchers. And they can modify their workload - they can fine tune it."The post Video: VMware HPC Virtualization Enables Research as Service appeared first on insideHPC.
|
by Rich Brueckner on (#W6NF)
Today NEC Corporation announced that SX-ACE vector supercomputers delivered to the University of Kiel, Alfred Wegener Institute, and the High Performance Computing Center Stuttgart have begun operating and contributing to research. The post Three German Institutes Deploy NEC’s SX-ACE Vector Supercomputers appeared first on insideHPC.
|
by MichaelS on (#W6BR)
The computational requirements for weather forecasting are driven by the need for higher resolution models for more accurate and extended forecasts. In addition, more physics and chemistry processes are included in the models so we can observe the very fine features of weather behavior. These models operate on 3D grids that encompass the globe. The closer the points on the grid are to each other, the more accurate the results. The post HPC Helps Drive Weather Forecasting appeared first on insideHPC.
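To get a feel for the scaling, the toy calculation below shows how the point count of a global 3D grid grows as the horizontal spacing shrinks. The vertical level count is an arbitrary illustrative value, and real models also pay for a shorter timestep at finer resolution.

```python
# Rough point-count estimate for a global lat-lon grid.
EARTH_CIRCUMFERENCE_KM = 40_075

def grid_points(spacing_km, levels=100):
    lon_points = EARTH_CIRCUMFERENCE_KM / spacing_km
    lat_points = lon_points / 2
    return int(lon_points * lat_points * levels)

for spacing in (50, 25, 12.5):
    print(f"{spacing:>5} km spacing -> {grid_points(spacing):,} points")
```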
|
by staff on (#W3K7)
Last week at SC15, Rambus announced that it has partnered with Los Alamos National Laboratory (LANL) to evaluate elements of its Smart Data Acceleration (SDA) Research Program. The SDA platform has been deployed at LANL to improve the performance of in-memory databases, graph analytics and other Big Data applications. The post Rambus Advances Smart Data Acceleration Research Program with LANL appeared first on insideHPC.
|
by staff on (#W3J5)
Baidu's Chief Scientist Andrew Ng has started a social media campaign to inspire people to study Machine Learning. "Regardless of where you learned Machine Learning, if it has had an impact on you or your work, please share your story on Facebook or Twitter in a short written or video post. I will invite the people who shared the 5 most inspirational stories to join me in a conversation on Google Hangout about the future of machine learning." The post Share Your Machine Learning Story to Inspire Others appeared first on insideHPC.
|
by Rich Brueckner on (#W17A)
In this video from SC15, Intel's Diane Bryant discusses how next-generation supercomputers are transforming HPC and presenting exciting opportunities to advance scientific research and discovery to deliver far-reaching impacts on society. As a frequent speaker on the future of technology, Bryant draws on her experience running Intel’s Data Center Group, which includes the HPC business segment, and products ranging from high-end co-processors for supercomputers to big data analytics solutions to high-density systems for the cloud. The post Video: SC15 HPC Matters Plenary Session with Intel’s Diane Bryant appeared first on insideHPC.
|
by Rich Brueckner on (#W0KE)
Intel in Oregon is seeking an HPC Software Intern in our Job of the Week. "If you are interested in being on the team that builds the world's fastest supercomputer, read on. Our team is designing how we integrate new HW and SW, validate extreme scale systems, and debug challenges that arise. The team consists of engineers who love to learn, love a good challenge, and aren't afraid of a changing environment. We need someone who can help us with creating and executing codes that will be used to validate and debug our system from first Si bring-up through at-scale deployment. The successful candidate will have experience in the Linux environment creating code: C or Python. If you have the right skills, you will help build systems utilized by the best minds on the planet to solve grand challenge science problems such as climate research, bio-medical research, genome analysis, renewable energy, and other areas that require the world's fastest supercomputers to tackle. Be part of the first to get to Exascale!" The post Job of the Week: HPC Software Intern at Intel appeared first on insideHPC.
|
by Rich Brueckner on (#VYG2)
Last week at SC15, Numascale announced the successful installation of a large shared memory Numascale/Supermicro/AMD system at a customer datacenter facility in North America. The system is the first part of a large cloud computing facility for analytics and simulation of sensor data combined with historical data. "The Numascale system, installed over the last two weeks, consists of 108 Supermicro 1U servers connected in a 3D torus with NumaConnect, using three cabinets with 36 servers apiece in a 6x6x3 topology. Each server has 48 cores in three AMD Opteron 6386 CPUs and 192 GBytes memory, providing a single system image and 20.7 TBytes of memory to all 5184 cores. The system was designed to meet user demand for “very large memory” hardware solutions running a standard single image Linux OS on commodity x86 based servers." The post Numascale Teams with Supermicro & AMD for Large Shared Memory System appeared first on insideHPC.
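The system-level figures follow directly from the per-server specification:

```python
# Checking the headline numbers against the per-server specs.
servers = 108
cores_per_server = 48    # three 16-core AMD Opteron 6386 CPUs
gb_per_server = 192

print(f"cores:  {servers * cores_per_server}")                 # 5184
print(f"memory: {servers * gb_per_server / 1000:.1f} TBytes")  # ~20.7
```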
|
by Rich Brueckner on (#VYCW)
In this podcast, Jorge Salazar from TACC interviews two winners of the 2015 ACM Gordon Bell Prize, Omar Ghattas and Johann Rudi of the Institute for Computational Engineering and Sciences, UT Austin. As part of the discussion, Ghattas describes how parallelism and exascale computing will propel science forward. The post Podcast: Supercomputing the Deep Earth with the Gordon Bell Prize Winners appeared first on insideHPC.
|
by Rich Brueckner on (#VY9B)
In this video from SC15, Rich Brueckner from insideHPC moderates a panel discussion with Hewlett Packard Enterprise HPC customers. "Government labs, as well as public and private universities worldwide, are using HPE Compute solutions to conduct research across scientific disciplines, develop new drugs, discover renewable energy sources and bring supercomputing to nontraditional users and research communities." The post Video: Hewlett Packard Enterprise HPC Customer Panel at SC15 appeared first on insideHPC.
|
by Rich Brueckner on (#VY7W)
PRACEdays16 has extended the deadline for its Call for Participation to Dec. 13, 2015. Taking place May 10-12, 2016 in Prague, the conference will bring together experts from academia and industry who will present their advancements in HPC-supported science and engineering. The post PRACEdays16 Extends Deadline for Submissions to Dec. 13, 2015 appeared first on insideHPC.
|
by Rich Brueckner on (#VY6M)
In this video, Torsten Hoefler from ETH Zurich and John West from TACC preview the upcoming PASC16 and SC16 conferences. With a focus on Exascale computing and user applications, the events will set the stage for the next decade in High Performance Computing. The post Video: A Preview of the PASC16 and SC16 Conferences appeared first on insideHPC.
|
by Rich Brueckner on (#VV7K)
Today Russia's RSC Group announced that Team TUMuch Phun from the Technical University of Munich (TUM) won the Highest Linpack Award in the SC15 Student Cluster Competition. The enthusiastic students achieved 7.1 Teraflops on the Linpack benchmark using an RSC PetaStream cluster with computing nodes based on Intel Xeon Phi. The TUM student team took third place overall among the nine teams that participated in the SCC at SC15, as the only European representative in the challenge. The post TUM Germany Wins Highest Linpack Award on RSC system at SC15 Student Cluster Competition appeared first on insideHPC.
|
by MichaelS on (#VV2E)
Basic optimization techniques that include an understanding of math functions and how to simplify them can go a long way towards better performance. "When optimizing for a parallel SIMD system such as the Intel Xeon Phi coprocessor, it is also important to make sure that the results match the scalar system. Using vector data may cause parts of the computer program to be re-written, so that the compiler can generate vector code." The post Parallel Methods in Financial Services for Intel Xeon Phi appeared first on insideHPC.
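Here is a minimal sketch of that advice in Python, using an illustrative option-payoff kernel (not code from the article): compute the result in both scalar and vector form, then assert that they match.

```python
import numpy as np

def payoff_scalar(spots, strike):
    # Scalar reference: one element at a time.
    return [max(s - strike, 0.0) for s in spots]

def payoff_vector(spots, strike):
    # Vector form: one array operation amenable to SIMD execution.
    return np.maximum(spots - strike, 0.0)

spots = np.linspace(80.0, 120.0, 100_000)
scalar = np.array(payoff_scalar(spots, 100.0))
vector = payoff_vector(spots, 100.0)
assert np.allclose(scalar, vector)  # vectorized results must match the scalar system
```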
|
by Rich Brueckner on (#VQHT)
Software for data analysis, system management, and debugging other software was among the innovations on display at SC15 last week. In addition to the software, novel and improved hardware was also on display, together with an impressive array of initiatives from Europe in research and development leading up to Exascale computing. The post Speeding Up Applications at SC15 appeared first on insideHPC.
|
by Rich Brueckner on (#VQEJ)
In this video from SC15, Sam Mahalingam from Altair discusses the HyperWorks Unlimited Virtual Appliance and the new open source version of PBS Pro. “Our goal is for the open source community to actively participate in shaping the future of PBS Professional driving both innovation and agility. The community’s contributions combined with Altair’s continued research and development, and collaboration with Intel and our HPC technology partners will accelerate the advancement of PBS Pro to aggressively pursue exascale computing initiatives in broad classes and domains.” The post Video: PBS Pro Workload Manager Goes Open Source appeared first on insideHPC.
|
by staff on (#VQEM)
Last week at SC15, Fujifilm announced that its next-generation LTO Ultrium 7 data cartridge has been qualified by the LTO technology provider companies for commercial production and is available immediately. FUJIFILM LTO Ultrium 7 has a compressed storage capacity of 15.0TB with a transfer rate of 750MB/sec, assuming a 2.5:1 compression ratio. This capacity achievement represents a 2.4X increase over the current LTO-6 generation. The post FujiFilm Qualifies Next-Gen LTO-7 Cartridge appeared first on insideHPC.
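The quoted capacities are mutually consistent, taking LTO-6's 6.25 TB compressed capacity at the same 2.5:1 ratio:

```python
# Cross-checking the cartridge capacity claims.
lto7_compressed_tb = 15.0
ratio = 2.5
lto7_native_tb = lto7_compressed_tb / ratio   # 6.0 TB native
lto6_compressed_tb = 6.25                     # LTO-6 at 2.5:1
print(f"native: {lto7_native_tb:.1f} TB")
print(f"gain over LTO-6: {lto7_compressed_tb / lto6_compressed_tb:.1f}x")  # 2.4x
```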
|
by staff on (#VQCK)
NCSA is now accepting applications for the Blue Waters Graduate Program. This unique program lets graduate students from across the country immerse themselves in a year of focused high-performance computing and data-intensive research using the Blue Waters supercomputer to accelerate their research. The post Apply Now for Blue Waters Graduate Fellowships appeared first on insideHPC.
|
by Rich Brueckner on (#VQB1)
In this video, Jason Souloglou and Eric Van Hensbergen from ARM describe how Pathscale EKOPath compilers are enabling a new HPC ecosystem based on low-power processors. "As an enabling technology, EKOPath gives our customers the ability to compile for native ARMv8 CPU or accelerated architectures that return the fastest time to solution." The post Video: Pathscale Compilers Power ARM for HPC at SC15 appeared first on insideHPC.
|
by Rich Brueckner on (#VM5A)
Asetek showcased its full range of RackCDU hot water liquid cooling systems for HPC data centers at SC15 in Austin. On display were early adopting OEMs such as CIARA, Cray, Fujitsu, Format and Penguin. HPC installations from around the world incorporating Asetek RackCDU D2C (Direct-to-Chip) technology were also featured. In addition, liquid cooling solutions for both current and future high wattage CPUs and GPUs from Intel, Nvidia and OpenPower were on display. The post Video: Asetek Showcases Growing Adoption of OEM Solutions at SC15 appeared first on insideHPC.
|
by Rich Brueckner on (#VM1Z)
"General Relativity is celebrating this year a hundred years since its first publication in 1915, when Einstein introduced his theory of General Relativity, which has revolutionized in many ways the way we view our universe. For instance, the idea of a static Euclidean space, which had been assumed for centuries and the concept that gravity was viewed as a force changed. They were replaced with a very dynamical concept of now having a curved space-time in which space and time are related together in an intertwined way described by these very complex, but very beautiful equations."The post Podcast: Supercomputing Black Hole Mergers appeared first on insideHPC.
|
by Rich Brueckner on (#VKY9)
Last week at SC15, NEC Corporation announced that the Flemish Supercomputer Center (VSC) has selected an LX-series supercomputer. With a peak performance of 623 Teraflops, the new system will be the fastest in Belgium, ranking amongst the top 150 biggest and fastest supercomputers in the world. Financed by the Flemish minister for Science and Innovation in Belgium, the infrastructure will cost 5.5 million Euro. The post NEC to Build Fastest Supercomputer in Belgium for VSC appeared first on insideHPC.
|
by staff on (#VKTB)
Does your research generate, analyze, and/or visualize data using advanced digital resources? In its recent Call for Participation, the CADENS project is looking for scientific data to visualize or existing data visualizations to weave into larger documentary narratives in a series of fulldome digital films and TV programs aimed at broad public audiences. Visualizations of your work could reach millions of people, amplifying its greater societal impacts! The post CADENS Project Seeking Data and Visualizations appeared first on insideHPC.
|
by Rich Brueckner on (#VKQ1)
"We've had a great time here in Austin talking about data centric computing-- the ability to use IBM Spectrum Scale and Platform LSF to do Cognitive Computing. Customers, partners, and the world have been talking about how we can really bring together file, object, and even business analytics workloads together in amazing ways. It's been fun."The post Video: Data Centric Computing for File and Object Storage from IBM appeared first on insideHPC.
|
by staff on (#VKBG)
At SC15, Intel talked about some transformational high-performance computing technologies and the architecture—Intel® Scalable System Framework (Intel® SSF). Intel describes Intel SSF as “an advanced architectural approach for simplifying the procurement, deployment, and management of HPC systems, while broadening the accessibility of HPC to more industries and workloads.” Intel SSF is designed to eliminate the traditional bottlenecks: the so-called power, memory, storage, and I/O walls that system builders and operators have run into over the years. The post Setting a Path for the Next-Generation of High-Performance Computing Architecture appeared first on insideHPC.
|
by Rich Brueckner on (#VGHR)
SC15 has announced the winners of the Student Cluster Competition, which took place last week in Austin. Team Diablo, a team of undergraduate students from Tsinghua University in China, was named the overall winner. "The competition is a real-time, non-stop, 48-hour challenge in which teams of six undergraduates assemble a small cluster at SC15 and race to complete a real-world workload across a series of scientific applications, demonstrate knowledge of system architecture and application performance, and impress HPC industry judges." The post China’s Team Diablo Wins SC15 Student Cluster Competition appeared first on insideHPC.
|
by staff on (#VGEY)
In this special guest feature from Scientific Computing World, Robert Roe writes that software scalability and portability may be more important even than energy efficiency to the future of HPC. "As the HPC market searches for the optimal strategy to reach exascale, it is clear that the major roadblock to improving the performance of applications will be the scalability of software, rather than the hardware configuration – or even the energy costs associated with running the system." The post Who Will Write Next-generation Software? appeared first on insideHPC.
|