High-Performance Computing News Analysis | insideHPC
Link | https://insidehpc.com/ |
Feed | http://insidehpc.com/feed/ |
Updated | 2024-11-05 23:00 |

by staff on (#28MW9)
"The DS8880 All-Flash family is targeted at users who have experienced poor storage performance due to latency, low server utilization, high energy consumption, low system availability and high operating costs. These same users have been listening, learning, and understand the data value proposition of being a cognitive business," said Ed Walsh, general manager, IBM Storage and Software Defined Infrastructure. "In the coming year we expect an awakening by companies to the opportunity that cognitive applications, and hybrid cloud enablement, bring them in a data-driven marketplace." The post IBM Rolls Out All-flash Storage for Cognitive Workloads appeared first on insideHPC.
|
by staff on (#28MTP)
"Just as a software ecosystem helped to create the immense computing industry that exists today, building a quantum computing industry will require software accessible to the developer community,†said Bo Ewald, president, D-Wave International Inc. “D-Wave is building a set of software tools that will allow developers to use their subject-matter expertise to build tools and applications that are relevant to their business or mission. By making our tools open source, we expand the community of people working to solve meaningful problems using quantum computers.â€The post D-Wave Releases Open Quantum Software Environment appeared first on insideHPC.
|
by Richard Friedman on (#28MP1)
While HPC developers worry about squeezing out the ultimate performance while running an application on dedicated cores, Intel TBB tackles a problem that HPC users never worry about: "How can you make parallelism work well when you share the cores that you run upon?" This is more of a concern if you’re running that application on a many-core laptop or workstation than on a dedicated supercomputer, because who knows what else will be running on those shared cores. Intel Threading Building Blocks reduces the delays caused by other applications by utilizing a revolutionary task-stealing scheduler. This is the real magic of TBB. The post A Decade of Multicore Parallelism with Intel TBB appeared first on insideHPC.
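As a rough illustration of that task-stealing model (a minimal sketch assuming the classic TBB headers; this code is not from the article), `tbb::parallel_for` splits an iteration range into chunks, and idle worker threads steal chunks from busy ones so the load balances even on shared cores:

```cpp
#include <tbb/parallel_for.h>
#include <tbb/blocked_range.h>
#include <vector>
#include <cstddef>

int main() {
    std::vector<float> a(1000000, 1.0f), b(1000000, 2.0f), c(1000000);

    // TBB carves the range into tasks; its work-stealing scheduler
    // reassigns tasks at run time when some cores are busy with
    // other applications, instead of pinning work to threads up front.
    tbb::parallel_for(tbb::blocked_range<std::size_t>(0, c.size()),
        [&](const tbb::blocked_range<std::size_t>& r) {
            for (std::size_t i = r.begin(); i != r.end(); ++i)
                c[i] = a[i] + b[i];
        });
    return 0;
}
```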
|
by Rich Brueckner on (#28MF4)
Today Cray announced the appointment of Stathis Papaefstathiou to the position of senior vice president of research and development. Papaefstathiou will be responsible for leading the software and hardware engineering efforts for all of Cray’s research and development projects. "At our core, we are an engineering company, and we’re excited to have Stathis’ impressive and diverse technical expertise in this key leadership position at Cray," said Peter Ungaro, president and CEO of Cray. "Leveraging the growing convergence of supercomputing and big data, Stathis will help us continue to build unique and innovative products for our broadening customer base." The post Cray Appoints Stathis Papaefstathiou as SVP of Research and Development appeared first on insideHPC.
|
by Peter ffoulkes on (#28MBR)
"With three primary network technology options widely available, each with advantages and disadvantages in specific workload scenarios, the choice of solution partner that can deliver the full range of choices together with the expertise and support to match technology solution to business requirement becomes paramount."The post Selecting HPC Network Technology appeared first on insideHPC.
|
by staff on (#28BX3)
In this week’s Sponsored Post, Nicolas Dube of Hewlett Packard Enterprise outlines the future of HPC and the role and challenges of exascale computing in this evolution. The HPE approach to exascale is geared to breaking the dependencies that come with outdated protocols. Exascale computing will allow users to process data, run systems, and solve problems at a totally new scale, which will become increasingly important as the world’s problems grow ever larger and more complex.The post Exascale Computing: A Race to the Future of HPC appeared first on insideHPC.
|
by staff on (#28J2C)
The OpenFabrics Alliance (OFA) hosts an annual workshop devoted to advancing the state of the art in networking. "One secret to the enduring success of the workshop is the OFA’s emphasis on hosting an interactive, community-driven event. To continue that trend, we are once again reaching out to the community to create a rich program that addresses topics important to the networking industry. We’re looking for proposals for workshop sessions." The post OpenFabrics Alliance Workshop 2017 – Call for Sessions Open appeared first on insideHPC.
|
by Rich Brueckner on (#28GXC)
In this video, Jonathan Allen from LLNL describes how Lawrence Livermore’s supercomputers are playing a crucial role in advancing cancer research and treatment. "A historic partnership between the Department of Energy (DOE) and the National Cancer Institute (NCI) is applying the formidable computing resources at Livermore and other DOE national laboratories to advance cancer research and treatment. Announced in late 2015, the effort will help researchers and physicians better understand the complexity of cancer, choose the best treatment options for every patient, and reveal possible patterns hidden in vast patient and experimental data sets."The post Video: Livermore HPC Takes Aim at Cancer appeared first on insideHPC.
|
by Rich Brueckner on (#28GNK)
Oak Ridge National Laboratory reports that its team of experts are playing leading roles in the recently established DOE’s Exascale Computing Project (ECP), a multi-lab initiative responsible for developing the strategy, aligning the resources, and conducting the R&D necessary to achieve the nation’s imperative of delivering exascale computing by 2021. "ECP’s mission is to ensure all the necessary pieces are in place for the first exascale systems – an ecosystem that includes applications, software stack, architecture, advanced system engineering and hardware components – to enable fully functional, capable exascale computing environments critical to scientific discovery, national security, and a strong U.S. economy."The post Oak Ridge Plays key role in Exascale Computing Project appeared first on insideHPC.
|
by Rich Brueckner on (#28GKD)
"The PRACE Summer of HPC is an outreach and training program that offers summer placements at top High Performance Computing centers across Europe to late-stage undergraduates and early-stage postgraduate students. Up to twenty top applicants from across Europe will be selected to participate. Participants will spend two months working on projects related to PRACE technical or industrial work and produce a report and a visualization or video of their results."The post Apply Now for Summer of HPC 2017 in Barcelona appeared first on insideHPC.
|
by Rich Brueckner on (#28GF6)
"For many urban questions, however, new data sources will be required with greater spatial and/or temporal resolution, driving innovation in the use of sensors in mobile devices as well as embedding intelligent sensing infrastructure in the built environment. Collectively, these data sources also hold promise to begin to integrate computational models associated with individual urban sectors such as transportation, building energy use, or climate. Catlett will discuss the work that Argonne National Laboratory and the University of Chicago are doing in partnership with the City of Chicago and other cities through the Urban Center for Computation and Data, focusing in particular on new opportunities related to embedded systems and computational modeling."The post Understanding Cities through Computation, Data Analytics, and Measurement appeared first on insideHPC.
|
by Rich Brueckner on (#28CKS)
In this video, Maurizio Davini from the University of Pisa describes how the University works with Dell EMC and Intel to test new technologies and to integrate and optimize HPC systems with Intel HPC Orchestrator software. "We believe these two companies are at the forefront of innovation in high performance computing," said University CTO Davini. "We also share a common goal of simplifying HPC to support a broader range of users." The post Intel HPC Orchestrator Powers Research at University of Pisa appeared first on insideHPC.
|
by Rich Brueckner on (#28CBS)
"Atos is determined to solve the technical challenges that arise in life sciences projects, to help scientists to focus on making breakthroughs and forget about technicalities. We know that one size doesn’t fit all and that is the reason why we studied carefully The Pirbright Institute’s challenges to design a customized and unique architecture. It is a pleasure for us to work with Pirbright and to contribute in some way to reduce the impact of viral diseases," says Natalia Jiménez, WW Life Sciences lead at Atos. The post Bull Atos Powers New Genomics Supercomputer at Pirbright Institute appeared first on insideHPC.
|
by Rich Brueckner on (#28C7A)
Today Mellanox announced that Spectrum Ethernet switches and ConnectX-4 100Gb/s Ethernet adapters have been selected by Baidu, the leading Chinese language Internet search provider, for Baidu’s Machine Learning platforms. The need for higher data speed and more efficient data movement made Spectrum and RDMA-enabled ConnectX-4 adapters key components to enable world-leading machine learning […] The post Mellanox Ethernet Accelerates Baidu Machine Learning appeared first on insideHPC.
|
by staff on (#28C5F)
"The recent announcement of HDR InfiniBand included the three required network elements to achieve full end-to-end implementation of the new technology: ConnectX-6 host channel adapters, Quantum switches and the LinkX family of 200Gb/s cables. The newest generations of InfiniBand bring the game changing capabilities of In-Network Computing and In-Network Memory to further enhance the new paradigm of Data-Centric data centers – for High-Performance Computing, Machine Learning, Cloud, Web2.0, Big Data, Financial Services and more – dramatically increasing network scalability and introducing new accelerations for storage platforms and data center security."The post HDR InfiniBand Technology Reshapes the World of High-Performance and Machine Learning Platforms appeared first on insideHPC.
|
by staff on (#288CV)
The Penn State Institute for CyberScience (ICS) is hosting a series of free training workshops on high-performance computing techniques. These workshops are sponsored by the Extreme Science and Engineering Discovery Environment (XSEDE). The first workshop will be 11 a.m. to 5 p.m. on Jan. 17 in 118 Wagner Building, University Park. The post Penn State Institute for CyberScience to Host HPC Workshops appeared first on insideHPC.
|
by Rich Brueckner on (#288CW)
Today the Canada Foundation for Innovation announced an award of $69,455,000 through its Major Science Initiative Fund for the Compute Canada project. This award will be used to continue the operation of the national advanced research computing platform that serves more than 10,000 researchers at universities, post-secondary institutions and research institutions across Canada.The post Compute Canada Receives Funding for National Supercomputing Platform appeared first on insideHPC.
|
by Rich Brueckner on (#28892)
The Pacific Northwest National Laboratory (PNNL) is seeking a Research Scientist for High Performance Computing in our Job of the Week. "The HPC group is seeking a Scientist to actively participate in challenging software and hardware research projects that will impact future High Performance Computing systems as well as constituent technologies. In particular, the researcher will be involved in research into data analytics, large-scale computation, programming models, and introspective run-time systems. The successful researcher will join a vibrant research group whose core capabilities are in Modeling and Simulation, System Software and Applications, and Advanced Architectures."The post Job of the Week: Research Scientist for HPC at PNNL appeared first on insideHPC.
|
by Rich Brueckner on (#2883G)
In this podcast, the Radio Free HPC team shares the things we're looking forward to in 2017. "From the iPhone 8 to storage innovations and camera technologies in the fight against crime, it is looking like 2017 is going to be a great year. It will also be back to the future with the return of specialized processing devices for specific application workloads and the continuing technology wars between processors and GPUs and Omni-Path vs. InfiniBand." The post Radio Free HPC Looks Forward to New Technologies in 2017 appeared first on insideHPC.
|
by Douglas Eadline on (#287S5)
The two methods of scaling processors take their names from how the memory architecture is scaled: scale-out or scale-up. Beyond the basic processor/memory architecture, accelerators and parallel file systems are also used to provide scalable performance. "High performance scale-up designs for scaling hardware require that programs have concurrent sections that can be distributed over multiple processors. Unlike the distributed memory systems described below, there is no need to copy data from system to system because all the memory is globally usable by all processors." The post Scaling Hardware for In-Memory Computing appeared first on insideHPC.
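To make the scale-up point concrete, here is a minimal shared-memory sketch (our illustration, not code from the article): each thread reads its own slice of a single globally visible array, and nothing is copied from system to system:

```cpp
#include <thread>
#include <vector>
#include <numeric>
#include <cstdio>

int main() {
    const std::size_t n = 1 << 24;
    std::vector<double> data(n, 1.0);   // one array, visible to all threads
    unsigned nthreads = std::thread::hardware_concurrency();
    if (nthreads == 0) nthreads = 4;    // fallback if the count is unknown

    std::vector<double> partial(nthreads, 0.0);
    std::vector<std::thread> workers;
    for (unsigned t = 0; t < nthreads; ++t)
        workers.emplace_back([&, t] {
            // Each thread sums its slice of the shared address space;
            // unlike distributed memory, no data is copied between nodes.
            std::size_t lo = t * n / nthreads, hi = (t + 1) * n / nthreads;
            partial[t] = std::accumulate(data.begin() + lo, data.begin() + hi, 0.0);
        });
    for (auto& w : workers) w.join();
    std::printf("sum = %.0f\n", std::accumulate(partial.begin(), partial.end(), 0.0));
    return 0;
}
```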
|
by staff on (#284N4)
In this special guest feature from Scientific Computing World, Cray's Barry Bolding gives some predictions for the supercomputing industry in 2017. "2016 saw the introduction or announcement of a number of new and innovative processor technologies from leaders in the field such as Intel, Nvidia, ARM, AMD, and even from China. In 2017 we will continue to see capabilities evolve, but as the demand for performance improvements continues unabated and CMOS struggles to drive performance improvements we'll see processors becoming more and more power hungry."The post Barry Bolding from Cray Shares Four Predictions for HPC in 2017 appeared first on insideHPC.
|
by Rich Brueckner on (#284HY)
In this episode of the This Week in Machine Learning podcast, Xavier Amatriain from Quora discusses the process of engineering practical machine learning systems. Amatriain is a former machine learning researcher who went on to lead the recommender systems team at Netflix, and is now the vice president of engineering at Quora, the Q&A site. "What the heck is a multi-armed bandit, and how can it help us?" The post Podcast: Engineering Practical Machine Learning Systems appeared first on insideHPC.
|
by staff on (#281NR)
A new study led by a research scientist at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) highlights a literally shady practice in plant science that has in some cases underestimated plants’ rate of growth and photosynthesis, among other traits. "More standardized fieldwork, in parallel with new computational tools and theoretical work, will contribute to better global plant models," Keenan said.The post Supercomputing Sheds Light on Leaf Study appeared first on insideHPC.
|
by Rich Brueckner on (#281K8)
Dr. Maria Klawe gave this Invited Talk at SC16. "Like many other computing research areas, women and other minority groups are significantly under-represented in supercomputing. This talk discusses successful strategies for significantly increasing the number of women and students of color majoring in computer science and explores how these strategies might be applied to supercomputing."The post Video: Diversity and Inclusion in Supercomputing appeared first on insideHPC.
|
by Rich Brueckner on (#27XV4)
In this video, a new NASA supercomputer simulation of the planet and debris disk around the nearby star Beta Pictoris reveals that the planet's motion drives spiral waves throughout the disk, a phenomenon that greatly increases collisions among the orbiting debris. Patterns in the collisions and the resulting dust appear to account for many observed features that previous research has been unable to fully explain. The post Video: How an Exoplanet Makes Waves appeared first on insideHPC.
|
by Rich Brueckner on (#27XQM)
A new site developed by Tin H compares the HPC virtualization capabilities of Docker, Singularity, Shifter, and Univa Grid Engine Container Edition. "They bring the benefits of containers to the HPC world, and some provide very similar features. The subtleties are in their implementation approach. MPI may be the place with the biggest difference." The post New Site Compares Docker, Singularity, Shifter, and Univa Grid Engine Container Edition appeared first on insideHPC.
|
by Rich Brueckner on (#27XEJ)
Thomas Schulthess from CSCS gave this Invited Talk at SC16. "Experience with today’s platforms shows that there can be an order of magnitude difference in performance within a given class of numerical methods – depending only on choice of architecture and implementation. This raises the question of what our baseline is, over which the performance improvements of Exascale systems will be measured. Furthermore, how close will these Exascale systems bring us to delivering on application goals, such as kilometer-scale global climate simulations or high-throughput quantum simulations for materials design? We will discuss specific examples from meteorology and materials science." The post Reflecting on the Goal and Baseline for Exascale Computing appeared first on insideHPC.
|
by Rich Brueckner on (#27XAH)
In this presentation from Nvidia, top AI experts from the world's most influential companies weigh in on predicted advances for AI in 2017. "In 2017, intelligence will trump speed. Over the last several decades, nations have competed on speed, intent on building the world's fastest supercomputer," said Ian Buck, VP of Accelerated Computing at Nvidia. "In 2017, the race will shift. Nations of the world will compete on who has the smartest supercomputer, not solely the fastest." The post Experts Weigh in on 2017 Artificial Intelligence Predictions appeared first on insideHPC.
|
by Rich Brueckner on (#27SRV)
Are you planning for ISC 2017? The deadlines for submissions are fast approaching. The conference takes place June 18-22, 2017 in Frankfurt, Germany. "Participation in these sessions and programs will help enrich your experience at the conference, not to mention provide exposure for your work to some of the most discerning HPC practitioners and business people in the industry. We also want to remind you that it’s the active participation of the community that helps make ISC High Performance such a worthwhile event for all involved." The post Submission Deadlines for ISC 2017 are Fast Approaching appeared first on insideHPC.
|
by staff on (#27SND)
Today AMD unveiled preliminary details of its forthcoming GPU architecture, Vega. Conceived and executed over five years, the Vega architecture enables new possibilities in PC gaming, professional design and machine intelligence that traditional GPU architectures have not been able to address effectively. "It is incredible to see GPUs being used to solve gigabyte-scale data problems in gaming to exabyte-scale data problems in machine intelligence. We designed the Vega architecture to build on this ability, with the flexibility to address the extraordinary breadth of problems GPUs will be solving not only today but also five years from now. Our high-bandwidth cache is a pivotal disruption that has the potential to impact the whole GPU market," said Raja Koduri, senior vice president and chief architect, Radeon Technologies Group, AMD. The post AMD Unveils Vega GPU Architecture with HBM Memory appeared first on insideHPC.
|
by Rich Brueckner on (#27SCP)
The University of Connecticut has partnered with Dell EMC and Intel to create a high performance computing cluster that students and faculty can use in their research. With this HPC Cluster, UConn researchers can solve problems that are computationally intensive or involve massive amounts of data in a matter of days or hours, instead of weeks. The HPC cluster operated on the Storrs campus features 6,000 CPU cores, a high-speed fabric interconnect, and a parallel file system. Since 2011, it has been used by over 500 researchers, from each of the university’s schools and colleges, for over 40 million hours of scientific computation.The post Dell EMC Powers HPC at University of Connecticut appeared first on insideHPC.
|
by MichaelS on (#27S6J)
"Managing the work on each node can be referred to as Domain parallelism. During the run of the application, the work assigned to each node can be generally isolated from other nodes. The node can work on its own and needs little communication with other nodes to perform the work. The tools that are needed for this are MPI for the developer, but can take advantage of frameworks such as Hadoop and Spark (for big data analytics). Managing the work for each core or thread will need one level down of control. This type of work will typically invoke a large number of independent tasks that must then share data between the tasks."The post Programming for High Performance Processors appeared first on insideHPC.
|
by staff on (#27S2W)
An international team of scientists has found a way to make memory chips perform computing tasks that are traditionally done by computer processors like those made by Intel and Qualcomm. This means data could now be processed in the same spot where it is stored, leading to much faster and thinner mobile devices and computers. This type of chip is one of the fastest memory modules that will soon be available commercially. The post New ReRAM Memory Can Process Data Where it Lives appeared first on insideHPC.
|
by Rich Brueckner on (#27N47)
Are you shopping for Public Cloud services? A new Public Cloud Services Comparison site gives a service- and feature-level mapping between the three major public clouds: Amazon Web Services, Microsoft Azure & Google Cloud. Published by Ilyas F, a Cloud Solution Architect at Xebia Group, the Public Cloud Services Comparison is a handy reference to help anyone quickly learn the corresponding features & services across clouds. The post New Site Lists all Comparable Features from AWS, Azure, and Google Cloud appeared first on insideHPC.
|
by Rich Brueckner on (#27MY7)
In this a16z Podcast, Murray Shanahan, Azeem Azhar, and Tom Standage discuss the past, present, and future of A.I. as well as how it fits (or doesn’t fit) with machine learning and deep learning. "Where are we now in the A.I. evolution? What players do we think will lead, if not win, the current race? And how should we think about issues such as ethics and automation of jobs without descending into obvious extremes? All this and more, including a surprise easter egg in Ex Machina shared by Shanahan, whose work influenced the movie."The post a16z Podcast Looks at Artificial Intelligence and the Space of Possible Minds appeared first on insideHPC.
|
by Rich Brueckner on (#27MQF)
"Run your Windows and Linux HPC applications using high performance A8 and A9 compute instances on Azure, and take advantage of a backend network with MPI latency under 3 microseconds and non-blocking 32 Gbps throughput. This backend network includes remote direct memory access (RDMA) technology on Windows and Linux that enables parallel applications to scale to thousands of cores. Azure provides you with high memory and HPC-class CPUs to help you get results fast. Scale up and down based upon what you need and pay only for what you use to reduce costs."The post Video: Azure High Performance Computing appeared first on insideHPC.
|
by Rich Brueckner on (#27MJQ)
Tamara Kolda from Sandia gave this Invited Talk at SC16. "Scientists are drowning in data. The scientific data produced by high-fidelity simulations and high-precision experiments are far too massive to store. For instance, a modest simulation on a 3D grid with 500 grid points per dimension, tracking 100 variables for 100 time steps, yields 5TB of data. Working with this massive data is unwieldy and it may not be retained for future analysis or comparison. Data compression is a necessity, but there are surprisingly few options available for scientific data." The post Parallel Multiway Methods for Compression of Massive Data and Other Applications appeared first on insideHPC.
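The 5TB figure works out if one assumes 4-byte single-precision values, an assumption the abstract does not state explicitly:

```cpp
#include <cstdio>

int main() {
    // 500 grid points in each of 3 dimensions, 100 variables, 100 time
    // steps, 4 bytes per value (single precision is our assumption).
    double bytes = 500.0 * 500.0 * 500.0 * 100.0 * 100.0 * 4.0;
    std::printf("%.1f TB\n", bytes / 1e12);  // prints 5.0 TB
    return 0;
}
```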
|
by Peter ffoulkes on (#27MAQ)
The TOP500 list is a very good proxy for how different interconnect technologies are being adopted for the most demanding workloads, which is a useful leading indicator for enterprise adoption. The essential takeaway is that the world’s leading and most esoteric systems are currently dominated by vendor specific technologies. The Open Fabrics Alliance (OFA) will be increasingly important in the coming years as a forum to bring together the leading high performance interconnect vendors and technologies to deliver a unified, cross-platform, transport-independent software stack.The post HPC Networking Trends in the TOP500 appeared first on insideHPC.
|
by staff on (#27GJK)
Singapore-based publisher Asian Scientist has launched Supercomputing Asia, a new print title dedicated to tracking the latest developments in high performance computing across the region and making supercomputing accessible to the layman. "Aside from well-established supercomputing powerhouses like Japan and emerging new players like China, Asian countries like Singapore and South Korea have recognized the transformational power of supercomputers and invested accordingly. We hope that this new publication will provide a unique insight into the exciting developments in this region," said Dr. Rebecca Tan, Managing Editor of Supercomputing Asia. The post Asia’s First Supercomputing Magazine Launches from Singapore appeared first on insideHPC.
|
by staff on (#27GGQ)
"The release of Scyld ClusterWare 7 continues the growth of Penguin’s HPC provisioning software and enables support of large scale clusters ranging to thousands of nodes,†said Victor Gregorio, Senior Vice President of Cloud Services at Penguin Computing. “We are pleased to provide this upgraded version of Scyld ClusterWare to the community for Red Hat Enterprise Linux 7, CentOS 7 and Scientific Linux 7.â€The post Penguin Computing Releases Scyld ClusterWare 7 appeared first on insideHPC.
|
by Rich Brueckner on (#27G55)
In this video from SC16, Janet Morss from Dell EMC and Hugo Saleh from Intel discuss how the two companies collaborated on accelerating CryoEM. "Cryo-EM allows molecular samples to be studied in near-native states and down to nearly atomic resolutions. Studying the 3D structure of these biological specimens can lead to new insights into their functioning and interactions, especially with proteins and nucleic acids, and allows structural biologists to examine how alterations in their structures affect their functions. This information can be used in system biology research to understand the cell signaling network which is part of a complex communication system."The post Dell & Intel Collaborate on CryoEM on Intel Xeon Phi appeared first on insideHPC.
|
by Douglas Eadline on (#27G37)
To achieve high performance, modern computer systems rely on two basic methodologies to scale resources: scale-up or scale-out. The scale-up in-memory system provides a much better total cost of ownership and can provide value in a variety of ways. "If the application program has concurrent sections then it can be executed in a "parallel" fashion. Much like using multiple bricklayers to build a brick wall. It is important to remember that the amount and efficiency of the concurrent portions of a program determine how much faster it can run on multiple processors. Not all applications are good candidates for parallel execution."The post In-Memory Computing for HPC appeared first on insideHPC.
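That last observation is Amdahl's law: the serial fraction of a program caps its parallel speedup. A small sketch of the bound (our illustration, not from the article):

```cpp
#include <cstdio>
#include <initializer_list>

// Amdahl's law: with parallel fraction p on N processors,
// speedup = 1 / ((1 - p) + p / N).
double amdahl(double p, int n) { return 1.0 / ((1.0 - p) + p / n); }

int main() {
    // Even a 90% parallel program saturates near 10x speedup,
    // no matter how many processors are added.
    for (int n : {2, 8, 32, 128, 1024})
        std::printf("p=0.90, N=%4d -> %5.2fx\n", n, amdahl(0.90, n));
    return 0;
}
```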
|
by Rich Brueckner on (#27CYM)
We are pleased to kick off the New Year with the announcement of our new insideHPC Jobs Board. "With listings for High Performance Computing jobs from around the world, the insideHPC Jobs Board is a great resource for employers and job seekers alike. As a special promotion, you can place Job Ads for 50 percent off during January."The post Announcing the New insideHPC Jobs Board appeared first on insideHPC.
|
by Rich Brueckner on (#27CT5)
Sadasivan Shankar gave this Invited Talk at SC16. "This talk will explore six different trends all of which are associated with some form of scaling and how they could enable an exciting world in which we co-design a platform dependent on the applications. I will make the case that this form of "personalization of computation" is achievable and is necessary for applications of today and tomorrow." The post Co-Design 3.0 – Configurable Extreme Computing, Leveraging Moore’s Law for Real Applications appeared first on insideHPC.
|
by Rich Brueckner on (#27CF6)
The 3rd annual International Workshop on High-Performance Big Data Computing (HPBDC) has issued its Call for Papers. Featuring a keynote by Prof. Satoshi Matsuoka from Tokyo Institute of Technology, the event takes place May 29, 2017 in Orlando, FL.The post Call for Papers: International Workshop on High-Performance Big Data Computing (HPBDC) appeared first on insideHPC.
|
by staff on (#27CAE)
Remote visualization tools allow employees to dramatically improve productivity by accessing business-critical data and programs regardless of their location. Remote visualization technologies allow users to launch software applications on the server side and display the results locally, letting them leverage the bandwidth and compute power of the cluster while circumventing the latency and security risks of downloading large amounts of data onto their local client.The post Remote Visualization Accelerating Innovation Across Multiple Industries appeared first on insideHPC.
|
by Rich Brueckner on (#279AQ)
"The AI is going to flow across the grid -- the cloud -- in the same way electricity did. So everything that we had electrified, we're now going to cognify. And I owe it to Jeff, then, that the formula for the next 10,000 start-ups is very, very simple, which is to take x and add AI. That is the formula, that's what we're going to be doing. And that is the way in which we're going to make this second Industrial Revolution. And by the way -- right now, this minute, you can log on to Google and you can purchase AI for six cents, 100 hits. That's available right now."The post Video: How AI can bring on a second Industrial Revolution appeared first on insideHPC.
|
by Rich Brueckner on (#27972)
Oak Ridge National Laboratory is seeking a Computational Scientist in our Job of the Week. The National Center for Computational Sciences in the Computing and Computational Sciences Directorate at the Oak Ridge National Laboratory (ORNL) seeks to hire Computational Scientists. We are looking in the areas of Computational Climate Science, Computational Astrophysics, Computational Materials Science, […] The post Job of the Week: Computational Scientist at ORNL appeared first on insideHPC.
|
by Rich Brueckner on (#276D2)
In this AI Podcast, Bob Bond from Nvidia and Mike Senese from Make magazine discuss the Do It Yourself movement for Artificial Intelligence. "Deep learning isn't just for research scientists anymore. Hobbyists can use consumer grade GPUs and open-source DNN software to tackle common household tasks from ant control to chasing away stray cats."The post Podcast: Do It Yourself Deep Learning appeared first on insideHPC.
|
by Rich Brueckner on (#276AH)
Janice Coen from NCAR gave this Invited Talk at SC16. "The past two decades have seen the infusion of technology that has transformed the understanding, observation, and prediction of wildland fires and their behavior, as well as provided a much greater appreciation of its frequency, occurrence, and attribution in a global context. This talk will highlight current research in integrated weather – wildland fire computational modeling, fire detection and observation, and their application to understanding and prediction."The post Video: Advances and Challenges in Wildland Fire Monitoring and Prediction appeared first on insideHPC.
|