With winter just a breath away and summer seeming a lifetime off, you may not be thinking about heat any time soon.
Spare a thought, then, for your computer’s processor. Performing millions of calculations per second is a hot business.
The laws of thermodynamics mean that the harder your CPU works, the hotter it gets, and sustained high temperatures are a leading cause of computer hardware failure.
To reduce the impact of CPU overheating, there are two traditional methods for cooling CPUs (and other computer hardware): active cooling and passive cooling.
Active cooling employs the tried and tested technique of moving cooler air across the heat source, typically by blowing air over a heatsink to dissipate the heat built up in and around it. Traditionally this means a set of enclosure or case fans creating an airflow that keeps the internal enclosure temperature close to the outside (hopefully air-conditioned) air temperature, despite the heat generated by the equipment inside. This is then combined with locally placed fans that force air over heatsinks or components where particular heat sources are located (e.g. CPU, GPU).
The problem with this method is that it relies on moving parts, giving the system more points of failure, which presents a particular problem in industrial environments. A broken fan means no cooling, and no cooling means CPU throttling or, worse, system failure and costly downtime. Despite this, it is all too common to have little or no monitoring in place to alert users to these sorts of failure.
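As a minimal sketch of the kind of monitoring the text notes is so often missing, the snippet below polls a CPU temperature sensor and raises an alert only after several consecutive over-limit readings. The sysfs path is a Linux convention and is an assumption here; other platforms expose temperatures differently, and the threshold values are illustrative, not vendor figures.

```python
# Hypothetical Linux sysfs sensor path; value is in millidegrees Celsius.
SENSOR_PATH = "/sys/class/thermal/thermal_zone0/temp"

def read_cpu_temp_c(path=SENSOR_PATH):
    """Read one temperature sample in degrees Celsius."""
    with open(path) as f:
        return int(f.read().strip()) / 1000.0

def should_alert(samples, limit_c=85.0, consecutive=3):
    """Alert only after several consecutive over-limit readings,
    so a single noisy sample does not trigger a false alarm."""
    run = 0
    for temp in samples:
        run = run + 1 if temp > limit_c else 0
        if run >= consecutive:
            return True
    return False
```

With the defaults, `should_alert([70.0, 90.0, 91.0, 92.0])` trips the alert, while a single spike between normal readings does not.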
To prevent system failure, many industrial applications opt for passive (fanless) cooling. This approach employs a heatsink, and sometimes heat pipes, to draw heat away from the processor and dissipate it to the surrounding air via the enclosure itself. But whilst the absence of moving parts improves reliability, the size of the heatsink required is directly related to the amount of heat energy to be dissipated. This means high-performance computers are typically larger, heavier and more expensive than desired in many applications.
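The heatsink-size trade-off can be made concrete with the standard first-order thermal model: steady-state CPU temperature is roughly ambient temperature plus power times thermal resistance (T = T_ambient + P × R_θ), where R_θ (°C/W) falls as the heatsink gets larger. The figures below are assumed for illustration, not taken from any vendor datasheet.

```python
def cpu_temp_c(power_w, r_theta_c_per_w, ambient_c=25.0):
    """Steady-state temperature estimate from the first-order
    thermal model: T = T_ambient + P * R_theta."""
    return ambient_c + power_w * r_theta_c_per_w

# Assumed example: a 35 W CPU in 25 C ambient air.
# A compact passive heatsink (2.0 C/W) runs near throttling territory,
# while a larger one (1.0 C/W) keeps the same CPU comfortably cool.
small_sink = cpu_temp_c(35, 2.0)  # 95.0 C
large_sink = cpu_temp_c(35, 1.0)  # 60.0 C
```

Halving the thermal resistance, which in practice means a substantially bigger and heavier heatsink, is what buys those 35 degrees, which is exactly the size/weight/cost compromise described above.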
The result is a compromise between reliability and performance. In industrial applications where reliability comes first, performance takes second place: processor throttling is commonly employed in small-form-factor, high-performance fanless computers to keep the CPU cool.
This tactic may be perfectly acceptable if your application doesn't require the full performance of the CPU all of the time; however, some industrial applications demand full performance, 24/7, and for those a standard fanless approach may not be adequate. Design approaches exist that use suitable materials, heat pipes and hardware to prevent this type of performance throttling.
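To illustrate what throttling costs in performance terms, here is a sketch of a simple proportional throttling policy: full clock speed up to a threshold temperature, scaling linearly down to a floor at the critical temperature. Real firmware policies are considerably more sophisticated, and all the numbers here are assumptions.

```python
def allowed_freq_ghz(temp_c, base_ghz=3.5, min_ghz=1.2,
                     throttle_start_c=80.0, critical_c=100.0):
    """Permitted clock speed under a simple proportional policy:
    full speed below the throttle threshold, linearly reduced to a
    floor at the critical temperature."""
    if temp_c <= throttle_start_c:
        return base_ghz
    if temp_c >= critical_c:
        return min_ghz
    frac = (critical_c - temp_c) / (critical_c - throttle_start_c)
    return min_ghz + (base_ghz - min_ghz) * frac
```

Under these assumed parameters, a CPU held at 90 °C would be limited to 2.35 GHz, roughly two-thirds of its rated speed, which is the hidden performance cost of relying on throttling in a 24/7 workload.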
Liquid cooling typically uses a closed loop filled with coolant (not so different from the cooling system in a car's internal combustion engine) to transport heat very efficiently from hotspots such as a CPU or GPU to a water block. Unfortunately, such loops can still feature pumps and radiator fans (with all the drawbacks that moving parts bring), so water-cooled systems for industry do need to be carefully designed. The upside is that it is possible to dissipate more heat, offering greater performance and lower component temperatures, which intrinsically improves electronic component reliability.
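The efficiency of a liquid loop comes from water's high specific heat capacity: the heat carried away is Q = ṁ × c_p × ΔT (mass flow rate times heat capacity times temperature rise). A quick calculation with assumed, illustrative figures shows why even a modest flow moves a lot of heat.

```python
def heat_removed_w(flow_kg_per_s, delta_t_c, cp_j_per_kg_c=4186.0):
    """Heat carried by the coolant: Q = m_dot * c_p * delta_T.
    The default specific heat capacity is that of water."""
    return flow_kg_per_s * cp_j_per_kg_c * delta_t_c

# Assumed example: 0.01 kg/s of water (roughly 0.6 L/min) warming
# by just 5 C carries away about 209 W of heat.
q_watts = heat_removed_w(0.01, 5.0)  # 209.3 W
```

Air has a specific heat capacity around a quarter of water's and a density roughly a thousandth, which is why moving the same heat with air takes far larger volumes and, typically, far larger heatsinks.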
Microsoft more recently took a somewhat different approach to using liquid for cooling, designing a datacentre to operate under the sea with access to all the cold water you could ever want! Whilst not every application has access to an ocean to use for cooling, the principles of immersion cooling are a possibility for the future too.
Image credit: Futurism.com
To learn more about industrial-grade computer hardware solutions, visit our products page.