by staff on (#27GGQ)
"The release of Scyld ClusterWare 7 continues the growth of Penguin’s HPC provisioning software and enables support of large scale clusters ranging to thousands of nodes,†said Victor Gregorio, Senior Vice President of Cloud Services at Penguin Computing. “We are pleased to provide this upgraded version of Scyld ClusterWare to the community for Red Hat Enterprise Linux 7, CentOS 7 and Scientific Linux 7.â€The post Penguin Computing Releases Scyld ClusterWare 7 appeared first on insideHPC.
|
High-Performance Computing News Analysis | insideHPC
Link | https://insidehpc.com/ |
Feed | http://insidehpc.com/feed/ |
Updated | 2024-11-25 16:30 |
by Rich Brueckner on (#27G55)
In this video from SC16, Janet Morss from Dell EMC and Hugo Saleh from Intel discuss how the two companies collaborated on accelerating CryoEM. "Cryo-EM allows molecular samples to be studied in near-native states and down to nearly atomic resolutions. Studying the 3D structure of these biological specimens can lead to new insights into their functioning and interactions, especially with proteins and nucleic acids, and allows structural biologists to examine how alterations in their structures affect their functions. This information can be used in systems biology research to understand the cell signaling network which is part of a complex communication system."The post Dell & Intel Collaborate on CryoEM on Intel Xeon Phi appeared first on insideHPC.
|
by Douglas Eadline on (#27G37)
To achieve high performance, modern computer systems rely on two basic methodologies to scale resources: scale-up or scale-out. The scale-up in-memory system provides a much better total cost of ownership and can provide value in a variety of ways. "If the application program has concurrent sections, then it can be executed in a 'parallel' fashion, much like using multiple bricklayers to build a brick wall. It is important to remember that the amount and efficiency of the concurrent portions of a program determine how much faster it can run on multiple processors. Not all applications are good candidates for parallel execution."The post In-Memory Computing for HPC appeared first on insideHPC.
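The limit described above, where the serial portion of a program caps the benefit of adding processors, is Amdahl's law. A minimal sketch; the 90-percent-parallel figure is illustrative, not taken from the article:

```python
def amdahl_speedup(parallel_fraction: float, processors: int) -> float:
    """Upper bound on speedup when only part of a program runs concurrently."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / processors)

# Even a program that is 90% concurrent can never exceed 10x speedup,
# because the serial 10% eventually dominates (1 / 0.1 = 10).
for p in (2, 8, 64, 1024):
    print(p, round(amdahl_speedup(0.9, p), 2))
```

The bricklayer analogy maps directly: adding bricklayers (processors) only speeds up the concurrent wall-building, never the serial steps before and after it.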
|
by Rich Brueckner on (#27CYM)
We are pleased to kick off the New Year with the announcement of our new insideHPC Jobs Board. "With listings for High Performance Computing jobs from around the world, the insideHPC Jobs Board is a great resource for employers and job seekers alike. As a special promotion, you can place Job Ads for 50 percent off during January."The post Announcing the New insideHPC Jobs Board appeared first on insideHPC.
|
by Rich Brueckner on (#27CT5)
Sadasivan Shankar gave this Invited Talk at SC16. "This talk will explore six different trends all of which are associated with some form of scaling and how they could enable an exciting world in which we co-design a platform dependent on the applications. I will make the case that this form of 'personalization of computation' is achievable and is necessary for applications of today and tomorrow."The post Co-Design 3.0 – Configurable Extreme Computing, Leveraging Moore’s Law for Real Applications appeared first on insideHPC.
|
by Rich Brueckner on (#27CF6)
The 3rd annual International Workshop on High-Performance Big Data Computing (HPBDC) has issued its Call for Papers. Featuring a keynote by Prof. Satoshi Matsuoka from Tokyo Institute of Technology, the event takes place May 29, 2017 in Orlando, FL.The post Call for Papers: International Workshop on High-Performance Big Data Computing (HPBDC) appeared first on insideHPC.
|
by staff on (#27CAE)
Remote visualization tools allow employees to dramatically improve productivity by accessing business-critical data and programs regardless of their location. Remote visualization technologies allow users to launch software applications on the server side and display the results locally, letting them leverage the bandwidth and compute power of the cluster while circumventing the latency and security risks of downloading large amounts of data onto their local client.The post Remote Visualization Accelerating Innovation Across Multiple Industries appeared first on insideHPC.
|
by Rich Brueckner on (#279AQ)
"The AI is going to flow across the grid -- the cloud -- in the same way electricity did. So everything that we had electrified, we're now going to cognify. And I owe it to Jeff, then, that the formula for the next 10,000 start-ups is very, very simple, which is to take x and add AI. That is the formula, that's what we're going to be doing. And that is the way in which we're going to make this second Industrial Revolution. And by the way -- right now, this minute, you can log on to Google and you can purchase AI for six cents, 100 hits. That's available right now."The post Video: How AI can bring on a second Industrial Revolution appeared first on insideHPC.
|
by Rich Brueckner on (#27972)
Oak Ridge National Laboratory is seeking a Computational Scientist in our Job of the Week. The National Center for Computational Sciences in the Computing and Computational Sciences Directorate at the Oak Ridge National Laboratory (ORNL) seeks to hire Computational Scientists. We are looking in the areas of Computational Climate Science, Computational Astrophysics, Computational Materials Science, […]The post Job of the Week: Computational Scientist at ORNL appeared first on insideHPC.
|
by Rich Brueckner on (#276D2)
In this AI Podcast, Bob Bond from Nvidia and Mike Senese from Make magazine discuss the Do It Yourself movement for Artificial Intelligence. "Deep learning isn't just for research scientists anymore. Hobbyists can use consumer grade GPUs and open-source DNN software to tackle common household tasks from ant control to chasing away stray cats."The post Podcast: Do It Yourself Deep Learning appeared first on insideHPC.
|
by Rich Brueckner on (#276AH)
Janice Coen from NCAR gave this Invited Talk at SC16. "The past two decades have seen the infusion of technology that has transformed the understanding, observation, and prediction of wildland fires and their behavior, as well as provided a much greater appreciation of its frequency, occurrence, and attribution in a global context. This talk will highlight current research in integrated weather – wildland fire computational modeling, fire detection and observation, and their application to understanding and prediction."The post Video: Advances and Challenges in Wildland Fire Monitoring and Prediction appeared first on insideHPC.
|
by Rich Brueckner on (#273EK)
In this podcast, the Radio Free HPC team honors the Festivus tradition of the annual Airing of Grievances. Our random gripes include: the need for a better HPC benchmark suite, the missed opportunity for ARM servers, the skittish battery in the new Macbook Pro, and a lack of an industry standards body for cloud computing.The post The Festivus Airing of Grievances from Radio Free HPC appeared first on insideHPC.
|
by Rich Brueckner on (#272XJ)
"The SAGE project, which incorporates research and innovation in hardware and enabling software, will significantly improve the performance of data I/O and enable computation and analysis to be performed more locally to data wherever it resides in the architecture, drastically minimizing data movements between compute and data storage infrastructures. With a seamless view of data throughout the platform, incorporating multiple tiers of storage from memory to disk to long-term archive, it will enable API’s and programming models to easily use such a platform to efficiently utilize the most appropriate data analytics techniques suited to the problem space."The post SAGE Project Looks to Percipient Storage for Exascale appeared first on insideHPC.
|
by Rich Brueckner on (#272RQ)
Thomas Sterling presented this Invited Talk at SC16. "Increasing sophistication of application program domains combined with expanding scale and complexity of HPC system structures is driving innovation in computing to address sources of performance degradation. This presentation will provide a comprehensive review of driving challenges, strategies, examples of existing runtime systems, and experiences. One important consideration is the possible future role of advances in computer architecture to accelerate the likely mechanisms embodied within typical runtimes. The talk will conclude with suggestions of future paths and work to advance this possible strategy."The post Thomas Sterling Presents: HPC Runtime System Software for Asynchronous Multi-Tasking appeared first on insideHPC.
|
by Rich Brueckner on (#2703H)
Today Nor-Tech announced that it is now integrating the latest release of the simulation platform Abaqus into its leading-edge HPC clusters. "We have been working with SIMULIA for many years," said Nor-Tech President and CEO David Bollig. "While this relationship has been beneficial for both companies, the real winners are our clients because they are able to fully leverage the power of our clusters with Abaqus."The post Nor-Tech Rolls Out HPC Clusters Integrated with Abaqus 2017 appeared first on insideHPC.
|
by staff on (#26ZKC)
Over at KAUST News, Nicholas G. Demille writes that the Shaheen supercomputer has completed the world's first trillion cell reservoir simulation. A Saudi Aramco research team led by fellow Ali Dogru conducted the reservoir simulation using Shaheen and the company's proprietary software TeraPOWERS. The Aramco researchers were supported by a team of specialists from the KAUST Supercomputing Core Lab, with the work rendering imagery so detailed that it changed the face of natural resource exploration.The post Shaheen Supercomputer at KAUST Completes Trillion Cell Reservoir Simulation appeared first on insideHPC.
|
by Rich Brueckner on (#26ZE9)
In this Intel Chip Chat, Doug Fisher from Intel describes the company's efforts to accelerate innovation in artificial intelligence. "Fisher talks about Intel's upstream investments in academia and open source communities. He also highlights efforts including the launch of the Intel Nervana AI Academy aimed at developers, data scientists, academia, and startups that will broaden participation in AI. Additionally, Fisher reports on Intel's engagements with open source ecosystems to optimize the performance of the most-used AI frameworks on Intel architecture."The post Podcast: Intel Invests Upstream to Accelerate AI Innovation appeared first on insideHPC.
|
by Rich Brueckner on (#26Z7Q)
"The Materials Project is harnessing the power of supercomputing together with state-of-the-art quantum mechanical theory to compute the properties of all known inorganic materials and beyond, design novel materials and offer the data for free to the community together with online analysis and design algorithms. The current release contains data derived from quantum mechanical calculations for over 60,000 materials and millions of properties."The post Video: The Materials Project – A Google of Materials appeared first on insideHPC.
|
by MichaelS on (#26VM2)
"Argonne National Labs has created a process to assist in moving large applications to a new system. Their current HPC system, Mira will give way to the next generation system, Aurora, which is part of the collaboration of Oak Ridge, Argonne, and Livermore (CORAL) joint procurement. Since Aurora contains technology that was not available in Mira, the challenge is to give scientists and developers access to some of the new technology, well before the new system goes online. This allows for a more productive environment once the full scale new system is up."The post Building for the Future Aurora Supercomputer at Argonne appeared first on insideHPC.
|
by staff on (#26VHZ)
Today Pointwise announced that the latest release of their CFD mesh generation software has been extended so that its Tcl-based Glyph scripting language can be called from any scripting language, including Python. This new Glyph Server feature was motivated by a user’s presentation at the Pointwise User Group Meeting 2016. "The Glyph Server idea arose after talking to the customer who presented his work on ‘A Python Binding for the Pointwise Glyph Scripting Language’ at our user group meeting," said John Chawner, Pointwise’s president. "Not only were we able to share new code with the customer to simplify his work but the conversation made us realize how to make Glyph callable from any scripting language."The post Pointwise Adds Script Language Support for CFD Mesh Generation appeared first on insideHPC.
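The announcement does not describe the Glyph Server protocol itself, so the following is a generic, hypothetical sketch of the underlying idea: a client written in any scripting language sends a command string to a long-running command server over a socket and reads back the result. The server logic, port handling, and the placeholder command "run demo.glf" are all invented for illustration.

```python
import socket
import threading

def toy_command_server(conn):
    """Hypothetical stand-in for a scripting command server; NOT the actual
    Glyph Server protocol, which this announcement does not describe."""
    with conn:
        command = conn.recv(1024).decode()
        conn.sendall(("ok: ran " + command).encode())

srv = socket.socket()
srv.bind(("127.0.0.1", 0))   # bind to any free local port
srv.listen(1)
port = srv.getsockname()[1]
threading.Thread(target=lambda: toy_command_server(srv.accept()[0]),
                 daemon=True).start()

# Any language with sockets can drive the server the same way.
with socket.create_connection(("127.0.0.1", port)) as cli:
    cli.sendall(b"run demo.glf")
    reply = cli.recv(1024).decode()
print(reply)  # ok: ran run demo.glf
```

The design point is that the server owns the Tcl interpreter while clients only need a socket, which is why the caller's language stops mattering.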
|
by staff on (#26VDB)
IDC is out with their latest Worldwide High-Performance Technical Server QView report. The QView presents the HPC market from various perspectives, including by competitive segment, vendor, cluster versus non-cluster, geography, and operating system. It also contains detailed revenue and shipment information by HPC models. "The workgroup segment, and especially the departmental segment, substantially ramped up purchases of HPC servers in the period 2012-2015, in tune with the global economic recovery."The post IDC: Worldwide HPC Server Revenue Grows 3.9% in Third Quarter appeared first on insideHPC.
|
by Rich Brueckner on (#26VBP)
"This talk reviews the history of the changing balances between computation, memory latency, and memory bandwidth in deployed HPC systems, then discusses how the underlying technology changes led to these market shifts. Key metrics are the exponentially increasing relative performance cost of memory accesses and the massive increases in concurrency that are required to obtain increased memory throughput. New technologies (such as stacked DRAM) allow more pin bandwidth per package, but do not address the architectural issues that make high memory bandwidth expensive to support."The post Memory Bandwidth and System Balance in HPC Systems appeared first on insideHPC.
|
by MichaelS on (#26TZS)
With the advent of heterogeneous computing systems that combine main CPUs with attached processors able to ingest and process tremendous amounts of data and run complex algorithms, artificial intelligence (AI) technologies are beginning to take hold in a variety of industries. Massive datasets can now drive innovation in areas such as autonomous driving systems, power grid control, and combining data to arrive at profitable decisions. Read how AI can now be used in various industries using the latest hardware and software.The post Artificial Intelligence Becomes More Accessible appeared first on insideHPC.
|
by staff on (#26R2Y)
In this special guest feature, Daniel Gutierrez from insideBIGDATA offers up his 2017 roundup of industry predictions from Big Data thought leaders. "AI, ML, and NLP innovations have really exploded this past year but despite a lot of hype, most of the tangible applications are still based on specialized AI and not general AI. We will continue to see new use-cases of such specialized AI across verticals and key business processes. These use-cases would primarily be focused on the evolutionary process improvement side of the digital transformation."The post insideBigData Industry Predictions for 2017 appeared first on insideHPC.
|
by staff on (#26QVQ)
"Building on HPE IDOL’s history of delivering industry-leading analytics engineered for human data, IDOL Natural Language Question Answering is the industry’s first comprehensive approach to delivering enterprise class answers,†said Sean Blanchflower, vice president of engineering, Big Data Platform, Hewlett Packard Enterprise. “Designed to meet the demanding needs of data-driven enterprises, this new, language-independent capability can enhance applications with machine learning powered natural language exchanThe post HPE IDOL Machine Learning Engine Adds Natural Language Processing appeared first on insideHPC.
|
by Rich Brueckner on (#26QRE)
The High Performance Computing Saudi Arabia conference has issued its Call for Posters. The event takes place March 13-15 at KAUST University.The post Call for Posters: HPC Saudi Arabia Conference at KAUST appeared first on insideHPC.
|
by Rich Brueckner on (#26QN7)
In this video, HPCNY staff describe their mission to bring high performance computing to industries in New York. HPCNY provides businesses and research organizations with access to world-class advanced computing expertise, accelerating the engineering and development of complex, ground-breaking designs into reliable, accurate, innovative products and processes that can provide a competitive advantage.The post HPC^NY Bringing High Performance Computing to Industry appeared first on insideHPC.
|
by staff on (#26MPA)
"As financial institutions acknowledge the importance of data as an asset and as they continue to deploy sophisticated analytics to realize the benefit of that asset, we will begin to see some achieve competitive advantages in this fast-paced marketplace. Of course, these organizations also face continued regulatory scrutiny and disruptive changes to technology that can pose challenges,†said Michael Hay, Vice President and Chief Engineer at Hitachi Data Systems. “Together with our partner Maxeler Technologies, Hitachi Data Systems can help our customers address business demands, stay compliant and transform data into information, insight and opportunities to win.â€The post Maxeler Dataflow Engine Speeds Compliance Capture and Analytics for Financial Institutions appeared first on insideHPC.
|
by staff on (#26MKA)
By using multiple grids and separating the modes in the problem onto the various grids most efficiently, the researchers can get through their long line of calculations quicker and easier. "GPUs provide a lot of memory bandwidth," Clark said. "Solving LQCD problems computationally is almost always memory-bound, so if you can describe your problem in such a way that GPUs can get maximum use of their memory bandwidth, QCD calculations will go a lot quicker." In other words, memory bandwidth is like a roadway: having more lanes helps keep vehicles moving and lessens the potential for traffic backups.The post Supercomputing Subatomic Particle Research on Titan appeared first on insideHPC.
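Clark's "memory-bound" claim can be reasoned about with a simple roofline-style bound: delivered performance is capped either by the compute peak or by how fast memory can feed the kernel. The peak figures below are illustrative, not Titan measurements:

```python
def attainable_gflops(intensity_flops_per_byte: float,
                      peak_gflops: float,
                      peak_gb_per_s: float) -> float:
    """Roofline-style bound: min of compute peak and intensity x bandwidth."""
    return min(peak_gflops, intensity_flops_per_byte * peak_gb_per_s)

# Illustrative: a kernel doing ~1 flop per byte on a 250 GB/s GPU tops out
# at 250 GFLOP/s, far below a 4000 GFLOP/s compute peak, so widening the
# memory "roadway" helps more than adding compute lanes.
print(attainable_gflops(1.0, 4000.0, 250.0))  # 250.0
```

This is why restructuring an LQCD problem to exploit the GPU's full bandwidth speeds it up more than any amount of extra arithmetic capability would.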
|
by staff on (#26MHN)
Today Nimbus Data announced the award of a patent for its non-blocking all-flash architecture. Nimbus Data’s Parallel Memory Architecture scales capacity and performance linearly within each ExaFlash system, offering latency and throughput up to 6x faster than scale-up designs. "Conventional HDD-centric architectures employed by the majority of all-flash array vendors trap flash performance behind legacy shared bus and scale-up designs," stated Thomas Isakovich, CEO and Founder. "Now patented, Nimbus Data’s Parallel Memory Architecture overcomes the limitations of generic off-the-shelf servers, capturing the full performance potential of all-flash technology."The post Nimbus Data Patents Parallel Memory Architecture appeared first on insideHPC.
|
by Rich Brueckner on (#26MG0)
In this AI Podcast, Host Michael Copeland speaks with NVIDIA's Will Ramey about the history behind today's AI boom and the key concepts you need to know to get your head around a technology that's reshaping the world. "AI has been described as 'Thor’s Hammer' and 'the new electricity.' But it’s also a bit of a mystery – even to those who know it best. We’ll connect with some of the world’s leading AI experts to explain how it works, how it’s evolving, and how it intersects with every facet of human endeavor."The post Podcast: Deep Learning 101 appeared first on insideHPC.
|
by Rich Brueckner on (#26HK6)
Australia’s National Computational Infrastructure (NCI) has become the first Australian organization to join the OpenPOWER Foundation, a global open technical community enabling collaborative development and industry growth. NCI has additionally purchased four of IBM’s latest Power System servers for High Performance Computing (HPC) to underpin its research efforts through artificial intelligence, deep learning, high performance data analytics and other compute-heavy workloads.The post NCI Australia Joins OpenPOWER Foundation appeared first on insideHPC.
|
by Rich Brueckner on (#26HEY)
In this video from the Nvidia Booth at SC16, Jonathan Symonds from MapD presents: How GPUs are Remaking Cloud Computing. "This video discusses how price/performance characteristics of GPUs are changing the nature of cloud computing. The talk includes performance benchmarks on Google Cloud, Amazon Web Services and IBM Softlayer as well as a live demonstration."The post Video: How GPUs are Remaking Cloud Computing appeared first on insideHPC.
|
by staff on (#26ESD)
A new supercomputer has been deployed at the Jülich Supercomputing Center (JSC) in Germany. Called QPACE3, the new 447 Teraflop machine is named for "QCD Parallel Computing on the Cell." "QPACE3 is being used by the University of Regensburg for a joint research project with the University of Wuppertal and the Jülich Supercomputing Center for numerical simulations of quantum chromodynamics (QCD), which is one of the fundamental theories of elementary particle physics. Such simulations serve, among other things, to understand the state of the universe shortly after the Big Bang, for which a very high computing power is required."The post Jülich Installs New QPACE3 Supercomputer for Quantum Chromodynamics appeared first on insideHPC.
|
by Rich Brueckner on (#26EP0)
"The multidisciplinary research team and computational facilities –including MareNostrum– make BSC an international centre of excellence in e-Science. Since its establishment in 2005, BSC has developed an active role in fostering HPC in Spain and Europe as an essential tool for international competitiveness in science and engineering. The center manages the Red Española de Supercomputación (RES), and is a hosting member of the Partnership for Advanced Computing in Europe (PRACE) initiative."The post Beauty Meets HPC: An Overview of the Barcelona Supercomputing Center appeared first on insideHPC.
|
by staff on (#26BTW)
Today the HPC Advisory Council announced key dates for its 2017 international conference series in the USA and Switzerland. The conferences are designed to attract community-wide participation, industry leading sponsors and subject matter experts. "HPC is constantly evolving and reflects the driving force behind many medical, industrial and scientific breakthroughs using research that harnesses the power of HPC and yet, we’ve only scratched the surface with respect to exploiting the endless opportunities that HPC, modeling, and simulation present," said Gilad Shainer, chairman of the HPC Advisory Council. "The HPCAC conference series presents a unique opportunity for the global HPC community to come together in an unprecedented fashion to share, collaborate, and innovate our way into the future."The post HPC Advisory Council Announces Global Conference Series for 2017 appeared first on insideHPC.
|
by Rich Brueckner on (#26BDM)
In a step that brings silicon-based quantum computers closer to reality, researchers at Princeton University have built a device in which a single electron can pass its quantum information to a particle of light. The particle of light, or photon, can then act as a messenger to carry the information to other electrons, creating connections that form the circuits of a quantum computer.The post Princeton Research on Electron-photon Small-talk Could have Big Impact on Quantum Computing appeared first on insideHPC.
|
by Rich Brueckner on (#26BBR)
In this Chip Chat podcast, Diane Bryant, EVP/GM for the Data Center Group at Intel, discusses how the company is driving the future of artificial intelligence by delivering breakthrough performance from best-in-class silicon, democratizing access to technology, and fostering beneficial uses of AI. Bryant also outlines her vision for AI's ability to fundamentally transform the way businesses operate and people engage with the world. In a blog, Krzanich said: "Intel is uniquely capable of enabling and accelerating the promise of AI. Intel is committed to AI and is making major investments in technology and developer resources to advance AI for business and society."The post Podcast: Intel Doubles Down on Artificial Intelligence appeared first on insideHPC.
|
by Rich Brueckner on (#26B9W)
Over at Desktop Engineering, Wolfgang Gentzsch and Burak Yenier write that the new 2016 Compendium of Engineering Cloud Case Studies showcases great progress in CAE Cloud Computing. "In the early days of cloud computing, users complained that they got lost in the complexity of accessing and using cloud resources, and once using one cloud they were not able to easily use another cloud when needed. With the advent of software container technology for CAE and other applications this complexity disappeared."The post UberCloud Details Recent Progress in CAE Cloud Computing appeared first on insideHPC.
|
by Rich Brueckner on (#26B80)
In this video, Dr. Kelly Gaither from TACC describes how 20 students identified by XSEDE's community engagement team participated in a four-day long cohort experience themed around social change at SC16. "The objectives of the program are to engage students in a social change challenge using visualization and data analytics to increase awareness, interest, and ultimately inspire students to continue their path in advanced computing careers; to increase the participation of students historically underserved in STEM at SC."The post SC16 Applies Advanced Computing for Social Change appeared first on insideHPC.
|
by Rich Brueckner on (#267NZ)
The New 2016 Compendium of Engineering Cloud Case Studies is an invaluable resource for engineers, scientists, managers and executives who believe in the strategic importance of Technical Computing as a Service, in the Cloud, for their organization. It is a collection of 18 selected real-life case studies written by the participants of the UberCloud HPC Experiment. Among these case studies you will find scenarios that resonate with your own situation. You will benefit from the candid descriptions of problems encountered, problems solved, and lessons learned.The post UberCloud Publishes 2016 Compendium of Engineering Cloud Case Studies appeared first on insideHPC.
|
by MichaelS on (#267BW)
With modern processors that contain a large number of cores, getting maximum performance requires structuring an application to use as many cores as possible. Explicitly developing a program to do this can take a significant amount of effort. It is important to understand the science and algorithms behind the application, and then use whatever programming techniques are available. "Intel Threaded Building Blocks (TBB) can help tremendously in the effort to achieve very high performance for the application."The post Speed Your Application with Threading Building Blocks appeared first on insideHPC.
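TBB itself is a C++ template library, but the structuring idea it embodies, decomposing work into many independent tasks and letting a runtime map them onto cores, can be sketched with Python's standard thread pool as a stand-in illustration (this is not TBB, and the kernel and chunk sizes are invented):

```python
from concurrent.futures import ThreadPoolExecutor

def kernel(chunk):
    """Stand-in compute kernel applied to one independent block of data."""
    return sum(x * x for x in chunk)

# Split the data into independent tasks; the pool (a work-stealing
# scheduler, in TBB's case) distributes them across available cores.
chunks = [range(i, i + 1000) for i in range(0, 8000, 1000)]
with ThreadPoolExecutor() as pool:
    partials = list(pool.map(kernel, chunks))
total = sum(partials)
print(total)
```

The programmer expresses only the decomposition; how many cores run the tasks is the runtime's decision, which is what makes the approach portable across core counts.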
|
by Rich Brueckner on (#267A1)
"The PASC17 Conference offers a unique opportunity for your organization to gain visibility at a national and international level, to showcase your R&D and to network with leaders in the fields of HPC simulation and data science. PASC17 builds on a successful history – with 350 international attendees in 2016 – and continues to expand its program and international profile year on year."The post Call for Exhibitors: PASC17 in Lugano appeared first on insideHPC.
|
by staff on (#26781)
Today SimScale launched the SimScale Academic Program which brings cloud-based CAE software into universities, schools, and classrooms around the world. "SimScale is a new-generation CAE platform that supports Structural Mechanics, Fluid Dynamics, and Thermal Analysis. Students, Researchers, and Educators can now harness the power of the cloud to run engineering simulations on any laptop, anywhere."The post SimScale Brings Cloud-based CAE to Academics and Students Worldwide appeared first on insideHPC.
|
by Peter ffoulkes on (#266ZT)
Today, high performance interconnects can be divided into three categories: Ethernet, InfiniBand, and vendor specific interconnects. Ethernet is established as the dominant low level interconnect standard for mainstream commercial computing requirements. InfiniBand originated in 1999 to specifically address workload requirements that were not adequately addressed by Ethernet, and vendor specific technologies frequently have a time to market (and therefore performance) advantage over standardized offerings.The post High Performance System Interconnect Technology appeared first on insideHPC.
|
by Rich Brueckner on (#263NX)
Today ORNL announced the full schedule of 2017 GPU Hackathons at multiple locations around the world. "The goal of each hackathon is for current or prospective user groups of large hybrid CPU-GPU systems to send teams of at least 3 developers along with either (1) a (potentially) scalable application that could benefit from GPU accelerators, or (2) an application running on accelerators that needs optimization. There will be intensive mentoring during this 5-day hands-on workshop, with the goal that the teams leave with applications running on GPUs, or at least with a clear roadmap of how to get there."The post 2017 GPU Hackathons Coming to U.S. and Europe appeared first on insideHPC.
|
by Rich Brueckner on (#263HW)
In this video from SC16, Garima Kochhar from Dell EMC describes the CryoEM Demo on the Dell PowerEdge C6320 rack server powered by Intel Xeon and Intel Xeon Phi. "This demo presents performance results for the 2D alignment and 2D classification phases of the Cryo-electron microscopy (Cryo-EM) data processing workflow using the new Intel Knights Landing architecture, and compares these results to the performance of the Intel Xeon E5-2600 v4 family."The post CryoEM Demo on Dell PowerEdge C6320 at SC16 appeared first on insideHPC.
|
by staff on (#263E2)
"I think one of the things that resellers like about us is that we never take a reseller deal directly. As a channel-first company, we always drive as much business through channel as our customers allow," said Philip Crocker, senior director of channel marketing and sales enablement at Panasas. "In addition, we have a high-quality yet low-certification entry cost to the program. We also allow 24 x 7 x 365 access to field sales engineers, are highly responsive to partners and have zero sales friction. Specifically, the Accelerate program is compensation-neutral for our sales representatives and distributors."The post Panasas Celebrates 2016 Channel Momentum appeared first on insideHPC.
|
by Rich Brueckner on (#263AF)
In this time-lapse video, a team of volunteers builds SCinet, the world's fastest network, at SC16. "Created each year for the conference, SCinet brings to life a very high-capacity network that supports the revolutionary applications and experiments that are a hallmark of the SC conference. SCinet links the convention center to research and commercial networks around the world. In doing so, SCinet serves as the platform for exhibitors to demonstrate the advanced computing resources of their home institutions and elsewhere by supporting a wide variety of bandwidth-driven applications including supercomputing and cloud computing."The post Building SCinet at SC16: The World’s Fastest Network appeared first on insideHPC.
|
by Rich Brueckner on (#25ZJ9)
The OpenFabrics Alliance (OFA) has issued its Call for Sessions for its 13th annual OFA Workshop. Registration is now open for the event, which takes place March 27-31, 2017 in Austin, TX.The post Call for Sessions and Registration: OpenFabrics Workshop in Austin appeared first on insideHPC.
|