Feed: High-Performance Computing News Analysis | insideHPC


Link https://insidehpc.com/
Feed http://insidehpc.com/feed/
Updated 2024-11-06 10:45
Dona Crawford Retires from LLNL
Dona Crawford, Associate Director for Computation at NNSA’s Lawrence Livermore National Laboratory (LLNL), announced her retirement last week after 15 years of leading Livermore’s Computation Directorate. "Dona has successfully led a multidisciplinary 1000-person team that develops and deploys world-class supercomputers, computational science, and information technology expertise that enable the Laboratory’s national security programs,” LLNL Director Bill Goldstein said. “Dona’s leadership in high performance computing has been instrumental in bringing a series of world-class machines to the Laboratory.”The post Dona Crawford Retires from LLNL appeared first on insideHPC.
Job of the Week: HPC Director at NYU Langone Medical Center
The NYU Langone Medical Center is seeking an HPC Director in our Job of the Week. "The Director, High Performance Computing (HPC), under the supervision of the Senior Director, Research IT, and in close collaboration with the Associate Dean, Collaborative Science, and the Director, Institute for Computational Biology, provides support for computer-intensive scientific work at NYU Langone Medical Center (NYULMC). The Director will work intensively with scientific end users and liaise with other NYU Medical Center IT (MCIT) information, technical, and engineering staff to satisfy requirements."The post Job of the Week: HPC Director at NYU Langone Medical Center appeared first on insideHPC.
Cray Hits Record $724 Million Revenue for 2015
Today Cray announced financial results for the year and fourth quarter ended December 31, 2015. The company reported total 2015 revenue of $724.7 million, which compares with $561.6 million for 2014.The post Cray Hits Record $724 Million Revenue for 2015 appeared first on insideHPC.
Video: Optimizing Applications for the CORI Supercomputer at NERSC
In this video from SC15, NERSC shares its experience on optimizing applications to run on the new Intel Xeon Phi processors (code name Knights Landing) that will empower the Cori supercomputer by the summer of 2016. "A key goal of the Cori Phase 1 system is to support the increasingly data-intensive computing needs of NERSC users. Toward this end, Phase 1 of Cori will feature more than 1,400 Intel Haswell compute nodes, each with 128 gigabytes of memory per node. The system will provide about the same sustained application performance as NERSC’s Hopper system, which will be retired later this year. The Cori interconnect will have a dragonfly topology based on the Aries interconnect, identical to NERSC’s Edison system."The post Video: Optimizing Applications for the CORI Supercomputer at NERSC appeared first on insideHPC.
Share Your HPC Code in the PRACE CodeVault
Today the European PRACE infrastructure announced the PRACE CodeVault, an open repository containing various high performance computing code samples for the HPC community. The CodeVault is an open platform that supports self-directed learning of HPC programming skills, where HPC users can share example code snippets, proof-of-concept codes and more.The post Share Your HPC Code in the PRACE CodeVault appeared first on insideHPC.
AWS to Acquire Italy’s NICE Software
Today NICE Software of Italy announced that the company is to be acquired by Amazon Web Services, the world’s most comprehensive and broadly adopted cloud platform. With its remote visualization platform, NICE delivers comprehensive Grid & Cloud Solutions that increase user productivity in accessing applications and computing resources.The post AWS to Acquire Italy’s NICE Software appeared first on insideHPC.
Call for Papers: International Workshop on Performance-Portable Programming Models for Accelerators
The first annual International Workshop on Performance Portable Programming Models for Accelerators has issued its Call for Papers. Known as P^3MA, the workshop will provide a forum for bringing together researchers, vendors, users and developers to brainstorm aspects of heterogeneous computing and its various tools and techniques.The post Call for Papers: International Workshop on Performance-Portable Programming Models for Accelerators appeared first on insideHPC.
Creating an Exascale Ecosystem Under the NSCI Banner
“We expect NSCI to run for the next two decades. It’s a bit audacious to start a 20-year project in the last 18 months of an administration, but one of the things that gives us momentum is that we are not starting from a clean sheet of paper. There are many government agencies already involved, and what we’re really doing is increasing their coordination and collaboration. We will also be working very hard over the next 18 months to build momentum and establish new working relationships with academia and industry.”The post Creating an Exascale Ecosystem Under the NSCI Banner appeared first on insideHPC.
UW Projects Awarded 42 Million Core Hours on Yellowstone Supercomputer
"A new supercomputer, dubbed Cheyenne, is expected to be operational at the beginning of 2017. The new high-performance computer will be a 5.34-petaflop system, meaning it can carry out 5.34 quadrillion calculations per second. It will be capable of more than 2.5 times the amount of scientific computing performed by Yellowstone."The post UW Projects Awarded 42 Million Core Hours on Yellowstone Supercomputer appeared first on insideHPC.
Podcast: Molly Rector from DDN on the Changing Face of HPC Storage
In this Graybeards Podcast, Molly Rector from DDN describes how HPC storage technologies are mainstreaming into the enterprise space. "In HPC there are 1000s of compute cores that are crunching on PB of data. For Oil&Gas companies, it’s seismic and wellhead analysis; with bio-informatics it’s genomic/proteomic analysis; and with financial services, it’s economic modeling/backtesting trading strategies. For today’s enterprises such as retailers, it’s customer activity analytics; for manufacturers, it’s machine sensor/log analysis; and for banks/financial institutions, it’s credit/financial viability assessments. Enterprise IT might not have 1000s of cores at their disposal just yet, but it’s not far off. Molly thinks one way to help enterprise IT is to provide a SuperComputer as a service (ScaaS?) offering, where top 10 supercomputers can be rented out by the hour, sort of like a supercomputing compute/data cloud."The post Podcast: Molly Rector from DDN on the Changing Face of HPC Storage appeared first on insideHPC.
MultiLevel Parallelism with Intel Xeon Phi
"The combination of using both MPI and OpenMP is a topic that has been explored by many developers in order to determine the most optimum solution. Whether to use OpenMP for outer loops and MPI within, or by creating separate MPI processes and using OpenMP within can lead to various levels of performance. In most cases of determining which method will yield the best results will involve a deep understanding of the application, and not just rearranging directives."The post MultiLevel Parallelism with Intel Xeon Phi appeared first on insideHPC.
OSC to Deploy New Dell Supercomputer in Ohio
Today the Ohio Supercomputer Center (OSC) announced plans to boost scientific and industrial discovery and innovation with a powerful new supercomputer from Dell. To be deployed later this year, the new system is part of a $9.7 million investment that received approval from the State Controlling Board in January.The post OSC to Deploy New Dell Supercomputer in Ohio appeared first on insideHPC.
SC16 Workshop Proposals Due Feb. 14
SC16 is now accepting full- and half-day workshop proposals. "SC16 will include full- and half-day workshops that complement the overall Technical Program events, with the goal of expanding the knowledge base of practitioners and researchers in a particular subject area. These workshops provide a focused, in-depth venue for presentations, discussion and interaction. Workshop proposals are peer-reviewed academically with a focus on submissions that inspire deep and interactive dialogue in topics of interest to the HPC community."The post SC16 Workshop Proposals Due Feb. 14 appeared first on insideHPC.
Second Intel Parallel Computing Center Opens at SDSC
Intel has opened a second parallel computing center at the San Diego Supercomputer Center (SDSC), at the University of California, San Diego. The focus of this new engagement is on earthquake research, including detailed computer simulations of major seismic activity that can be used to better inform and assist disaster recovery and relief efforts.The post Second Intel Parallel Computing Center Opens at SDSC appeared first on insideHPC.
ExaNeSt European Consortium to Develop Exascale Architecture
In this special guest feature, Robert Roe from Scientific Computing World reports that a new Exascale computing architecture using ARM processors is being developed by a European consortium of hardware and software providers, research centers, and industry partners. Funded by the European Union’s Horizon2020 research program, a full prototype of the new system is expected to be ready by 2018.The post ExaNeSt European Consortium to Develop Exascale Architecture appeared first on insideHPC.
Video: Theta & Aurora – Big Systems for Big Science
"Aurora’s revolutionary architecture features Intel’s HPC scalable system framework and 2nd generation Intel Omni-Path Fabric. The system will have a combined total of over 8 Petabytes of on package high bandwidth memory and persistent memory, connected and communicating via a high-performance system fabric to achieve landmark throughput. The nodes will be linked to a dedicated burst buffer and a high-performance parallel storage solution. A second system, named Theta, will be delivered in 2016. Theta will be based on Intel’s second-generation Xeon Phi processor and will serve as an early production system for the ALCF."The post Video: Theta & Aurora – Big Systems for Big Science appeared first on insideHPC.
2016 OpenPOWER Summit Announces Speaker Agenda
Today, the OpenPOWER Foundation announced the lineup of speakers for the OpenPOWER Summit 2016, taking place April 5-8 at NVIDIA’s GPU Technology Conference (GTC) at the San Jose Convention Center. The Summit will bring together dozens of technology leaders from the OpenPOWER Foundation to showcase the latest advancements in the OpenPOWER ecosystem, including collaborative hardware, software and application developments – all designed to revolutionize the data center.The post 2016 OpenPOWER Summit Announces Speaker Agenda appeared first on insideHPC.
Video: Supercomputing at the University of Buffalo
In this WGRZ video, researchers describe supercomputing at the Center for Computational Research at the University of Buffalo. "The Center’s extensive computing facilities, which are housed in a state-of-the-art 4000 sq ft machine room, include a generally accessible (to all UB researchers) Linux cluster with more than 8000 processor cores and QDR InfiniBand, a subset of which (32 nodes) contain a total of 64 NVIDIA Tesla M2050 “Fermi” graphics processing units (GPUs)."The post Video: Supercomputing at the University of Buffalo appeared first on insideHPC.
New CFD and Geometry Interfaces in Pointwise Meshing Software
Today Pointwise announced the latest release of its meshing software featuring updated native interfaces to computational fluid dynamics (CFD) and geometry codes. Pointwise Version 17.3 R5 also includes geometry import and export to the native file format of Pointwise's geometry kernel and a variety of bug fixes.The post New CFD and Geometry Interfaces in Pointwise Meshing Software appeared first on insideHPC.
SURFsara in the Netherlands Upgrades Bull Supercomputer to 1.8 Petaflops
Today SURFsara in the Netherlands announced it will expand the capacity of their Cartesius national supercomputer in the second half of 2016. With an upgrade to 1.8 Petaflops, the Bull sequana system will enable researchers to work on more complex models for climate research, water management, improving medical treatment, research into clean energy, noise reduction and product and process optimization.The post SURFsara in the Netherlands Upgrades Bull Supercomputer to 1.8 Petaflops appeared first on insideHPC.
HPE to Deliver SGI UV Technology for Mission Critical Solutions
Today, SGI and Hewlett Packard Enterprise announced an agreement in which HPE will OEM the SGI UV technology as the foundation for an 8-socket system – the HPE Integrity MC990 X Server. Extending HPE’s solution portfolio for mission critical environments, including HPE’s flagship mission critical solution Superdome X, the new system leverages the scale-up architecture of the SGI UV technology and provides HPE customers with an advanced follow-on solution to the 8-socket HPE ProLiant DL980 G7 Server. Through this partnership with SGI, HPE will address time-to-market demands while meeting the performance, scalability and availability requirements of enterprise customers.The post HPE to Deliver SGI UV Technology for Mission Critical Solutions appeared first on insideHPC.
Students: Apply for International Summer School on HPC Challenges by Feb. 15
The deadline is just one week away for students to apply for the International Summer School on HPC Challenges in Computational Sciences. "Graduate students and postdoctoral scholars from institutions in Canada, Europe, Japan and the United States are invited to apply for the seventh HPC Summer School, to be held June 26 to July 1, 2016, in Ljubljana, Slovenia. The summer school is sponsored by the Extreme Science and Engineering Discovery Environment (XSEDE) with funds from the U.S. National Science Foundation, Compute/Calcul Canada, the Partnership for Advanced Computing in Europe (PRACE) and the RIKEN Advanced Institute for Computational Science (RIKEN AICS)."The post Students: Apply for International Summer School on HPC Challenges by Feb. 15 appeared first on insideHPC.
Call for Contributions: Hot Chips 2016
The Hot Chips 2016 conference has issued its Call for Proposals. The event takes place August 21-23 in Cupertino, California. "Presentations at HOT CHIPS are in the form of 30 minute talks using PowerPoint or PDF. Presentation slides will be published in the HOT CHIPS Proceedings. Participants are not required to submit written papers, but a select group will be invited to submit a paper for inclusion in a special issue of IEEE Micro."The post Call for Contributions: Hot Chips 2016 appeared first on insideHPC.
Watson for President Foundation Explores AI as Commander-in-Chief
If the current set of Presidential candidates has you down, the Watson for President Foundation may just have an answer for you. As an independent organization not affiliated with Watson's creator, IBM, the foundation contends that the artificial intelligence technology that won Jeopardy! would be well-suited to be the leader of the free world.The post Watson for President Foundation Explores AI as Commander-in-Chief appeared first on insideHPC.
Video: AMD’s next Generation GPU and High Bandwidth Memory Architecture
"HBM is a new type of CPU/GPU memory (“RAM”) that vertically stacks memory chips, like floors in a skyscraper. In doing so, it shortens your information commute. Those towers connect to the CPU or GPU through an ultra-fast interconnect called the “interposer.” Several stacks of HBM are plugged into the interposer alongside a CPU or GPU, and that assembled module connects to a circuit board. Though these HBM stacks are not physically integrated with the CPU or GPU, they are so closely and quickly connected via the interposer that HBM’s characteristics are nearly indistinguishable from on-chip integrated RAM."The post Video: AMD’s next Generation GPU and High Bandwidth Memory Architecture appeared first on insideHPC.
Ellexus Launches Mistral Software for Balancing Shared Storage across HPC Clusters
Today Ellexus in the UK announced the release of Mistral, a "ground breaking" product for balancing shared storage across a high performance computing cluster. Developed in collaboration with ARM’s IT department, Mistral monitors application IO and cluster performance so that jobs exceeding the expected IO thresholds can be automatically identified and slowed down through IO throttling.The post Ellexus Launches Mistral Software for Balancing Shared Storage across HPC Clusters appeared first on insideHPC.
Video: Meet IME – The World’s First Burst Buffer
"DDN’s IME14K revolutionizes how information is saved and accessed by compute. IME software allows data to reside next to compute in a very fast, shared pool of non-volatile memory (NVM). This new data adjacency significantly reduces latency by allowing IME software’s revolutionary, fast data communication layer to pass data without the file locking contention inherent in today’s parallel file systems."The post Video: Meet IME – The World’s First Burst Buffer appeared first on insideHPC.
Video: Altera’s Stratix 10 – 14nm FPGA Targeting 1GHz Performance
In this video from the 2015 Hot Chips Conference, Mike Hutton from Altera presents: Stratix 10 – Altera’s 14nm FPGA Targeting 1GHz Performance. "Stratix 10 FPGAs and SoCs deliver breakthrough advantages in performance, power efficiency, density, and system integration: advantages that are unmatched in the industry. Featuring the revolutionary HyperFlex core fabric architecture and built on the Intel 14 nm Tri-Gate process, Stratix 10 devices deliver 2X core performance gains over previous-generation, high-performance FPGAs with up to 70% lower power."The post Video: Altera’s Stratix 10 – 14nm FPGA Targeting 1GHz Performance appeared first on insideHPC.
Job of the Week: Information Technologist at ICER at Michigan State
ICER at Michigan State is seeking an Information Technologist in our Job of the Week. "As a joint appointment between Michigan State University’s Information Technology Services and the Institute for Cyber-Enabled Research, this position administers computer storage clusters totaling a few nodes, including high speed Ethernet network interconnections. The position will involve Linux systems administration and working in a team environment with systems administrators, programmers, and research specialists to support the university's research computing needs; will deploy and test new systems and services; will monitor, diagnose, support, and upgrade existing services (using the technologies described in the 'Desired Qualifications' section); will work with staff to document internal and external procedures; will develop, expand, and implement tools and scripts to facilitate administration; will work with users on how to use object-oriented Ceph-based systems."The post Job of the Week: Information Technologist at ICER at Michigan State appeared first on insideHPC.
Agenda Posted for HPC User Forum in Tucson, April 11-13
IDC has published the agenda for their next HPC User Forum. The event will take place April 11-13 in Tucson, AZ. "Don't miss the chance to hear top experts on these high-innovation, high-growth areas of the HPC market. At this meeting, you'll also hear about government initiatives to get ready for future-generation supercomputers, machine learning, and High Performance Data Analytics."The post Agenda Posted for HPC User Forum in Tucson, April 11-13 appeared first on insideHPC.
Auburn University Launches Hopper Supercomputer from Lenovo
Today Auburn University unveiled its new $1 million supercomputer that will enhance research across campus, from microscopic gene sequencing to huge engineering tasks. The university is also initiating a plan to purchase a new one every few years as research needs evolve and expand.The post Auburn University Launches Hopper Supercomputer from Lenovo appeared first on insideHPC.
Saving East African Crops with Supercomputing
"Because the silverfly species are identical to look at, the best way to distinguish them is by examining their genetic difference, so we are deploying a mix of genomics, supercomputing, and evolutionary history. This knowledge will help African farmers and scientists distinguish between the harmless and the invasive ones, develop management strategies, and breed new whitefly-resistant strains of cassava. The computational challenge for our team is in processing the genomic data the sequencing machines produce."The post Saving East African Crops with Supercomputing appeared first on insideHPC.
Chalk Talk: What is a Data Lake?
"If you think of a data mart as a store of bottled water – cleansed and packaged and structured for easy consumption – the data lake is a large body of water in a more natural state. The contents of the data lake stream in from a source to fill the lake, and various users of the lake can come to examine, dive in, or take samples.” These “data lake” systems will hold massive amounts of data and be accessible through file and web interfaces. Data protection for data lakes will consist of replicas and will not require backup since the data is not updated. Erasure coding will be used to protect large data sets and enable fast recovery. Open source will be used to reduce licensing costs and compute systems will be optimized for map reduce analytics. Automated tiering will be employed for performance and long-term retention requirements. Cold storage, storage that will not require power for long-term retention, will be introduced in the form of tape or optical media."The post Chalk Talk: What is a Data Lake? appeared first on insideHPC.
Registration Opens for Inaugural Nimbix Developer Summit in Dallas
Registration is now open for the inaugural Nimbix Developer Summit. With an impressive lineup of speakers & sponsors from Mellanox, migenius, Xilinx, and more, the event takes place March 15 in Dallas, Texas. "The summit agenda will feature topics such as hardware acceleration, coprocessing, photorealistic rendering, bioinformatics, and high performance analytics. The sessions will conclude with a panel of developers discussing how to overcome challenges of creating and optimizing cloud-based applications."The post Registration Opens for Inaugural Nimbix Developer Summit in Dallas appeared first on insideHPC.
Anton 2 Supercomputer to Speed Molecular Simulations at PSC
Today the Pittsburgh Supercomputing Center (PSC) announced a $1.8-million National Institutes of Health grant to make the next-generation Anton 2 supercomputer developed by D. E. Shaw Research (DESRES) available to the biomedical research community. A specialized system for modeling the function and dynamics of biomolecules, the Anton 2 machine at PSC will be the only one of its kind publicly available to U.S. scientists. The grant also extends the operation of the Anton 1 supercomputer currently at PSC until the new Anton 2 is deployed, expected in the Fall of 2016.The post Anton 2 Supercomputer to Speed Molecular Simulations at PSC appeared first on insideHPC.
CCRT in France Acquires 1.4 Petaflop “Cobalt” Supercomputer from Bull
Today Atos announced that the French CEA and its industrial partners at the Centre for Computing Research and Technology, CCRT, have invested in a new 1.4 petaflop Bull supercomputer. "Three times more powerful than the current computer at CCRT, the new system will be installed in the CEA’s Very Large Computing Centre in Bruyères-le-Châtel, France, in mid-2016 to cover expanding industrial needs. Named COBALT, the new Intel Xeon-based supercomputer will be powered by over 32,000 compute cores and will provide 2.5 Petabytes of storage with a throughput of 60 GB/s."The post CCRT in France Acquires 1.4 Petaflop “Cobalt” Supercomputer from Bull appeared first on insideHPC.
Video: Bill Dally on Scaling Performance in the Post-Dennard Era
"It was indicated in my keynote this morning there are two really fundamental challenges we're facing in the next two years in all sorts of computing - from supercomputers to cell phones. The first is that of energy efficiency. With the end of Dennard scaling, we're no longer getting a big improvement in performance per watt from each technology generation. The performance improvement has dropped from a factor of 2.8 x back when we used to scale supply voltage with each new generation, now to about 1.3 x in the post-Dennard era. With this comes a real challenge for us to come up with architecture techniques and circuit techniques for better performance per watt."The post Video: Bill Dally on Scaling Performance in the Post-Dennard Era appeared first on insideHPC.
Coarse Grained Parallelism
"MPI is generally a coarse-grained, as the parallelism where MPI is used would be higher up in the algorithm. Using MPI for an application requires a developer to think more about the individual servers and how to distribute parts of the application to these servers. The amount of work done by each of the connected servers would ideally be equal and equal to 1/Nth of the total work, where N is the number of servers."The post Coarse Grained Parallelism appeared first on insideHPC.
ALCF Celebrates 10 Years of Leadership Computing
This week, the Argonne Leadership Computing Facility (ALCF) turns one decade old. ALCF is home to Mira, the world's fifth-fastest supercomputer, along with teams of experts that help researchers from all over the world perform complex simulations and calculations in almost every branch of science. To celebrate its 10th anniversary, Argonne is highlighting 10 accomplishments since the facility opened its doors.The post ALCF Celebrates 10 Years of Leadership Computing appeared first on insideHPC.
Video: Mars–A 64-Core ARMv8 Processor
In this video from the 2015 Hot Chips Conference, Charles Zhang from Phytium presents: Mars – A 64-Core ARMv8 Processor. Formed in China in 2012, Phytium is a unique technology provider of HPC servers, focusing mainly on high-performance general-purpose microprocessors, accelerator chips, reference board designs, and server designs ranging from blade, cluster and standard stack systems to HPC servers. "Optimized for HPC, the Mars chip features eight panels, each with eight “Xiaomi” cores. The panels share an L2 cache of 32 MB, two Directory Control Units and a routing cell for the internal mesh."The post Video: Mars – A 64-Core ARMv8 Processor appeared first on insideHPC.
Mira Supercomputer Shaping Fusion Plasma Research
The IBM Blue Gene/Q supercomputer Mira, housed at the Argonne Leadership Computing Facility (ALCF) at Argonne National Laboratory, is delivering new insights into the physics behind nuclear fusion, helping researchers to develop a new understanding of the electron behavior in edge plasma – a critical step to creating an efficient fusion reaction.The post Mira Supercomputer Shaping Fusion Plasma Research appeared first on insideHPC.
Poznan Launches Eagle Supercomputer with Liquid Cooling from CoolIT Systems
Today CoolIT Systems announced that it has successfully completed the second deployment of its Rack DCLC liquid cooling solution at the Poznan Supercomputing and Networking Center (PSNC) in partnership with Huawei. "We are pleased to have migrated from a liquid cooled pilot project with CoolIT Systems to a full-scale rollout,” said Radoslaw Januszewski, IT Specialist at PSNC. “The pilot project proved to be very reliable, it met our efficiency goals, and provided a bonus performance boost with the processors very happy to be kept at a cool, consistent temperature as a result of liquid cooling’s effectiveness.”The post Poznan Launches Eagle Supercomputer with Liquid Cooling from CoolIT Systems appeared first on insideHPC.
insideHPC Manufacturing Webinar
Manufacturing is enjoying an economic and technological resurgence with the help of high performance computing. In this insideHPC webinar, you’ll learn how the power of CAE and simulation is transforming the industry with faster time to solution, better quality, and reduced costs.The post insideHPC Manufacturing Webinar appeared first on insideHPC.
LLNL Teams with Industry to Advance Energy Technologies using HPC
Today Lawrence Livermore National Laboratory (LLNL) announced the selection of six industry projects for the advancement of energy technologies using high performance computing. Called the "hpc4energy incubator," this pilot program aims to innovate and accelerate the development of energy technology and boost U.S. economic competitiveness in the global marketplace by teaming industry with the scientific and computing resources at national laboratories.The post LLNL Teams with Industry to Advance Energy Technologies using HPC appeared first on insideHPC.
High Performance Computing for Energy Project Kicks off at BSC
The High Performance Computing for Energy (HPC4E) project officially launched this week with a kick-off meeting for partners at the Barcelona Supercomputing Center (BSC). Coordinated by BSC and running through November 2017, the project has been granted €2 million in funding by the EU’s Horizon 2020 research and innovation program.The post High Performance Computing for Energy Project Kicks off at BSC appeared first on insideHPC.
John Zannos to Chair OpenPOWER Foundation
Today the OpenPOWER Foundation announced the election of John Zannos from Canonical as Chair and Calista Redmond from IBM as President of the OpenPOWER Foundation Board of Directors, effective January 1, 2016. Zannos and Redmond bring deep knowledge of the open technology development community and intimate familiarity with the Foundation’s core mission, with both playing key roles within the Foundation since 2014. The new leadership will continue to guide the proliferation of OpenPOWER-based technology solutions built on IBM’s POWER architecture in today’s datacenters.The post John Zannos to Chair OpenPOWER Foundation appeared first on insideHPC.
Video: Storage Architecture for Innovation & Research at the University of Florida
In this video from the DDN booth at SC15, Dr. Erik Deumens of the University of Florida describes why unpredictable and less standard architectures and system configurations are necessary to meet the agility, availability and responsiveness requirements to meet the mission of innovation and exploration. "The University of Florida’s Interdisciplinary Center for Biotechnology Research (ICBR) offers access to cutting-edge technologies designed to enable university faculty, staff and students, as well as research and commercial partners worldwide with the tools and resources needed to advance scientific research."The post Video: Storage Architecture for Innovation & Research at the University of Florida appeared first on insideHPC.
Seismic Processing Places High Demand on Storage
Oil and gas exploration is always a challenging endeavor, and with today's large risks and rewards, optimizing the process is of critical importance. A whole range of High Performance Computing (HPC) technologies needs to be employed for fast and accurate decision making. This Intersect360 Research whitepaper, Seismic Processing Places High Demand on Storage, is an excellent summary of the challenges that are being addressed by storage solutions from Seagate.The post Seismic Processing Places High Demand on Storage appeared first on insideHPC.
DDN WOS Object Storage Wins 2016 Storage Visions Award
Today DDN announced that its WOS 360 v2.0 object storage software was named a Visionary Product in the Professional Class Storage category at the fifteenth Annual Storage Visions Conference. The groundbreaking WOS enables organizations to build highly reliable, infinitely scalable and cost-efficient storage repositories to meet any unstructured data need and the most demanding storage requirements. With massively scalable storage technology that is able to outpace the performance requirements and growth of Enterprise Big Data, DDN continues to lead the market with revolutionary products that solve the end-to-end data lifecycle from cache and SSD to high performance file storage, cloud and archive.The post DDN WOS Object Storage Wins 2016 Storage Visions Award appeared first on insideHPC.
UCLA Researchers Simulate Injured Human Leg
Researchers at UCLA have created the first detailed computer simulation model of an injured human leg--complete with spurting blood. The simulation is designed to make training for combat medics more realistic. "To create the simulator model, researchers combined detailed knowledge of anatomy with real-life CAT scans and MRIs to map out layers of a human leg--the bone, the soft tissue containing muscle and blood vessels and the skin surrounding everything. Then the design team applied physics and mathematical equations, fluid dynamics, and pre-determined rates of blood flow from specific veins and arteries to simulate blood loss for wounds of varying sizes and severity."The post UCLA Researchers Simulate Injured Human Leg appeared first on insideHPC.