Passive and active direct liquid cooling for modules, along with chassis immersion implementations, are becoming more common for cooling high-wattage devices such as graphics processing units (GPUs) and switch application-specific integrated circuits (ASICs).
Because applications now include hyperscaler and HPC data-center systems, prosumer PCs, and engineering workstations, this appears to be driving an ever-expanding market for fluid connectors and cables.
However, for enterprise data-center systems, liquid-cooling adoption remains moderate, though it is still included on some roadmaps. For example, Intel’s recently announced $700 million lab facility in Hillsboro, Oregon, is focused on developing new cooling systems. It will likely require industry-specified and standardized fluid connectors and cables, which might be handled via the Open Compute Project Foundation (OCP).
Whether internal, inside-the-module liquid connectors and cables will become standardized or remain application-specific with custom designs is still an open question. This is true for both the data-center and prosumer market segments.
The current growth market will likely produce a certain number of externally standardized liquid connectors and cables, although this will depend on how many different form factors are used. NVIDIA’s latest A100 product is a new PCIe form-factor add-in GPU module for data-center applications, with internal liquid lines and port connectors at the rear of the board.
Another new product, FLOW, is an external water-cooling module with a new self-locking, quick-release external connector and cable assembly. It helps lower temperatures on the CPU and GPU chips inside laptops.
FLOW uses a unique two-port fluid connector/hose system that’s quite different from other liquid-cooled products.
Here are descriptions and images:
Typically, internal liquid-cooling cables have larger diameters and larger connectors. This is NVIDIA’s GPU accelerator, shown in the first image with white liquid hoses alongside black copper I/O cables. In both images, they are routed together in free air.
Data-center external quick-disconnect liquid-cooling connectors are typically marked blue for the cool input port and orange or red for the warm output port, much like this SuperMicro example below.
Several new high-end copper and optical connectors and cables will need to be vetted for data-center reliability and performance, especially while fully immersed in inert coolant liquids. This could potentially affect some form factors such as EDSFF, PECL, Ruler, M.2, M.3, and others.
Here are some examples of liquid immersion cooling used for high-power devices.
This is what full immersion can look like.
Observations
Aside from solving many heating and cooling problems, liquid-cooling infrastructures significantly reduce the need for high-cost chillers and related water use (and cost). They also eliminate the necessity of costly and noisy fans.
What’s more: liquid cooling saves at least 30% in electrical costs compared to air-cooled products. Liquid cooling can also save up to 50% of the required slot space per high-wattage module, occupying only one PCIe slot instead of the two needed by older GPU products.
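To put that 30% figure in rough perspective, here is a minimal back-of-the-envelope sketch in Python. The rack load, electricity price, and runtime hours are purely illustrative assumptions; only the roughly 30% savings ratio comes from the claim above.

# Illustrative estimate of the electrical savings claimed above.
# All inputs are assumptions except the ~30% savings ratio cited in the article.
AIR_COOLED_LOAD_KW = 100.0   # assumed combined IT + cooling load for an air-cooled rack row
LIQUID_SAVINGS = 0.30        # "at least 30%" electrical-cost savings, per the article
PRICE_PER_KWH = 0.12         # assumed electricity price, USD per kWh
HOURS_PER_YEAR = 24 * 365

air_cost = AIR_COOLED_LOAD_KW * HOURS_PER_YEAR * PRICE_PER_KWH
liquid_cost = air_cost * (1.0 - LIQUID_SAVINGS)

print(f"Air-cooled annual cost:    ${air_cost:,.0f}")
print(f"Liquid-cooled annual cost: ${liquid_cost:,.0f}")
print(f"Annual savings:            ${air_cost - liquid_cost:,.0f}")

With these assumed inputs, the sketch prints an annual savings of about $31,500 on a roughly $105,000 air-cooled electricity bill; the real numbers depend entirely on the actual load and local power pricing.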
As this market develops over the next two years, will we see many new application-specific liquid connectors? Will there be newer applications? A high level of development seems to be underway here. Older types of liquid-cooling interconnects typically use the terminology of hoses and fittings rather than liquid connectors and cables. Perhaps we’ll soon be learning some new acronyms!
Top connector and cable suppliers are expanding their product offerings to include more sensor, space, and liquid-cooling interconnects. Standardizing these higher-volume developments will likely require several large companies.
Internal custom copper semi-flexible and solid piping is still used in low volumes. But high-performance HPC systems with large, high-wattage CPU chips and newer IPU types are on the rise. These sometimes require copper fittings and welded connections, which become costly when used in volume.
It seems that a distributed-as-needed liquid-cooling topology and infrastructure, one that recycles small, targeted amounts of coolant, provides the best cost and the fastest install and servicing turnaround times.
