August 24, 2021

High-Performance Computing (HPC): Technology Trends

High-Performance Computing (HPC) utilises supercomputers and parallel processing techniques to quickly complete time-consuming tasks or multiple tasks simultaneously.
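The core idea behind parallel processing — splitting one large task into independent pieces that run simultaneously — can be sketched in a few lines of Python. The chunking scheme and worker count below are illustrative only, not tied to any particular HPC system:

```python
from multiprocessing import Pool

def sum_of_squares(chunk):
    # Each worker computes its piece of the total independently.
    return sum(n * n for n in chunk)

def parallel_sum_of_squares(numbers, workers=4):
    # Split the input into one chunk per worker...
    chunks = [numbers[i::workers] for i in range(workers)]
    # ...process all chunks simultaneously, then combine the results.
    with Pool(workers) as pool:
        return sum(pool.map(sum_of_squares, chunks))

if __name__ == "__main__":
    print(parallel_sum_of_squares(range(1_000_000)))
```

On a supercomputer the same divide-and-combine pattern is applied across thousands of nodes rather than a handful of local processes, but the principle is identical.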

By GlobalData Thematic Research

Technologies such as edge computing and artificial intelligence (AI) can broaden the capabilities of HPC and deliver high-performing processing power to various sectors.

Listed below are the key technology trends impacting the high-performance computing theme, as identified by GlobalData.


The collection of vast amounts of data by the devices used by people and organisations has placed AI at the centre of technological disruption. The vast amounts of data produced every day is of little use without data analytics and AI. Enterprise use of AI is increasing, driving demand for high-performing machines. The renewed interest in HPC to a large extent is due to the need to compute large amounts of data for AI workloads.

The link between AI and HPC is symbiotic as HPC powers AI workloads, but AI can identify improvements in HPC data centres. AI, for example, can optimise heating and cooling systems, reducing electricity costs and improving efficiency. AI systems can also monitor the health of servers, storage, and networking gear, check to see that systems remain correctly configured, and predict when equipment is about to fail.
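A toy illustration of this kind of health monitoring — flagging a server whose temperature readings drift far outside its normal range — might use a simple statistical anomaly check. The readings and threshold below are invented for illustration; production systems use far richer predictive models:

```python
from statistics import mean, stdev

def flag_anomalies(readings, threshold=2.5):
    """Return indices of readings more than `threshold` standard
    deviations from the mean -- a crude stand-in for the predictive
    models an AI-driven data centre would use."""
    mu = mean(readings)
    sigma = stdev(readings)
    if sigma == 0:
        return []
    return [i for i, r in enumerate(readings)
            if abs(r - mu) / sigma > threshold]

# Hypothetical hourly temperature readings (deg C) from one server.
temps = [41.0, 40.5, 41.2, 40.8, 41.1, 40.9, 41.3, 40.7, 41.0, 55.0]
print(flag_anomalies(temps))  # flags the 55 deg C spike (index 9)
```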

Furthermore, AI can be used for security purposes, screening and analysing incoming and outgoing data, detecting malware, and implementing behavioural analytics to protect data.

Graphics processing units (GPUs) versus tensor processing units (TPUs)

Gaming was the original use case for GPUs, with the technology revolutionising high-resolution games. Other use cases have since emerged, including HPC. GPUs perform data-intensive work, with applications ranging from machine learning to self-driving cars, and they have proven to be superior chips for many HPC workloads due to their focus on parallel data computations.
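What makes a workload GPU-friendly is that the same operation is applied independently to every data element. The sketch below expresses that data-parallel structure in plain Python; on a GPU, each element would map to its own hardware thread:

```python
# A data-parallel pattern: one operation applied independently to
# every element. Here it runs sequentially, but nothing about the
# computation requires it to.

def saxpy(a, xs, ys):
    """a*x + y over two vectors -- a classic data-parallel kernel."""
    # No element depends on any other, so all of these multiply-adds
    # could execute simultaneously across thousands of GPU cores.
    return [a * x + y for x, y in zip(xs, ys)]

print(saxpy(2.0, [1.0, 2.0, 3.0], [10.0, 20.0, 30.0]))  # [12.0, 24.0, 36.0]
```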


The rise of GPUs has made Nvidia a key player in HPC, as the company is the leader in GPU manufacturing. Google’s TPUs, however, are starting to threaten the dominance of GPUs. TPUs are application-specific integrated circuits (ASICs) that accelerate AI calculations and algorithms.

Google developed them specifically for neural network machine learning with its TensorFlow software. GPUs' role in HPC will remain central for now, but in-house chip development by influential players such as Google means that Nvidia cannot afford to rest on its laurels if it is to remain relevant in HPC.


Flexibility

The diversity in processing between the older central processing units (CPUs), GPUs, ASICs, and field-programmable gate arrays (FPGAs) continues to grow. Workloads can vary substantially, so flexibility that delivers different computing for different use cases is essential.

HPC players increasingly allow customisation of their offerings, and that goes beyond processing capabilities. Huawei offers three different HPC architectures to its clients, whereas IBM allows for data storage customisation and HPE bills clients in line with flexible consumption models.

Clients can choose between having their HPC data centre on-premise, in the cloud, or deployed at the edge. Some vendors offer a mix of solutions for different workloads in one package.

HPC as a service (HPCaaS)

Many vendors have moved from selling equipment to providing HPCaaS. HPCaaS' rise is linked to the emergence of the cloud as an HPC solution. The trend towards HPCaaS is, therefore, benefitting cloud players such as Amazon Web Services (AWS), Google, and Alibaba, although traditional HPC vendors are also offering HPCaaS.

HPCaaS can be a compelling option for end users, as it puts intense data processing and high-performance workloads within reach of companies that lack the capital to hire skilled staff and invest in hardware, or that cannot afford to develop HPC knowledge and infrastructure in-house. Subscribing to HPCaaS instead of developing HPC in-house, however, brings with it all the limitations of HPC deployed in the cloud.

Hybrid solutions

HPC was born in on-premise data centres but during the second half of the 2010s, cloud computing began to change HPC. The edge has recently emerged as a new deployment platform for HPC. Vendors have started offering hybrid options as the high-performance solutions landscape expands. A hybrid HPC solution typically involves cloud capabilities that complement an existing on-premise data centre.

The combination of on-premise and private cloud hosting overcomes some of the public cloud's weaknesses, including poor performance and optimisation challenges caused by the diversity and complexity of many industry-specific, data-intensive HPC workloads. Hybrid solutions, by contrast, can be customised and tend to be scalable, while still providing the agility of the cloud.

A move towards hybrid will benefit providers like Dell and HPE. Players such as AWS and Microsoft will be better placed if developments in the cloud allow for its shortcomings to be fixed.


Democratisation of HPC

Flexibility, HPCaaS, and the emergence of hybrid solutions all flow into one major trend in HPC: democratisation. This trend refers to more widespread access to HPC, placing the technology within reach of more end users.

Supercomputers used to be the realm of research, academia, or the military. HPC then expanded to stock trading, banking, and oil and gas. The range of businesses using HPC is broad and includes automotive, aerospace, and even food processing. The deployment of HPC at the edge will increase HPC’s reach even further.

Exascale computing

Exascale computing refers to a computing system's ability to perform a quintillion (10^18) calculations per second, with performance measured in exaFLOPS. The first exascale computer is expected in 2022 at the earliest.
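The scale involved can be made concrete with a little arithmetic. The workload size below is an arbitrary example chosen for illustration:

```python
EXAFLOPS = 10 ** 18   # floating-point operations per second at exascale
PETAFLOPS = 10 ** 15  # a typical pre-exascale supercomputer tier

# Hypothetical workload: 10^21 floating-point operations in total.
workload_flop = 10 ** 21

seconds_at_exascale = workload_flop / EXAFLOPS    # 1,000 s (~17 minutes)
seconds_at_petascale = workload_flop / PETAFLOPS  # 1,000,000 s (~11.6 days)

print(seconds_at_exascale, seconds_at_petascale)
```

A thousand-fold jump in sustained throughput turns a simulation that once took weeks into one that finishes over a coffee break, which is what makes the advances in modelling described below plausible.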

Exascale computing is not a new form of computing, like quantum computing; rather, it refers to the next level of processing power achievable with existing technology. Exascale HPC is, however, bound to bring several improvements in advanced simulation and modelling that will address challenges such as predicting natural disasters and advancing scientific discoveries, particularly in the medical field.

Microarchitectural improvements

Exascale computing is an advancement in the overall processing capacity of HPC; however, performance improvements increasingly come from smaller design innovations that may be less headline-grabbing but are nonetheless important.

Progress at a microarchitectural level includes faster interconnections, higher computing densities, scalable storage, greater efficiencies in infrastructure, eco-friendliness, space management, and improved security. Advancements such as these will continue to be a trend in HPC over the next few years.

This is an edited extract from the High-Performance Computing – Thematic Research report produced by GlobalData Thematic Research.
