GPUs dominate the conversation around AI hardware, but NVIDIA also offers a complementary processor: the DPU, or data processing unit.
A DPU is a new class of programmable processor: a system on a chip (SoC) that combines three key elements:
An industry-standard, high-performance, software-programmable, multi-core CPU, typically based on the widely used Arm architecture, tightly coupled to the other SoC components.
A high-performance network interface capable of parsing, processing and efficiently transferring data at line rate, or the speed of the rest of the network, to GPUs and CPUs.
A rich set of flexible and programmable acceleration engines that offload and improve application performance for AI and machine learning, security, telecommunications and storage, among others.
Together, these elements enable programmable data-path processing at line rates of up to 200 Gb/s.
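To make the second element concrete, here is a minimal software model of the header parsing a DPU's network interface performs in hardware at line rate. This is an illustrative sketch, not NVIDIA code; the frame layout follows standard Ethernet II and IPv4 headers, and the sample frame is hand-built for the example.

```python
import struct

def parse_eth_ipv4(frame: bytes):
    """Parse an Ethernet II frame carrying an IPv4 packet.

    Models, in software, the header parsing a DPU's network
    interface performs in hardware at line rate.
    """
    # Ethernet II header: dst MAC (6), src MAC (6), EtherType (2)
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    if ethertype != 0x0800:               # 0x0800 = IPv4
        return None
    # IPv4 header: version/IHL, DSCP/ECN, total length, then the rest
    ver_ihl, _, total_len = struct.unpack("!BBH", frame[14:18])
    proto = frame[23]                     # protocol field (6 = TCP, 17 = UDP)
    src_ip = ".".join(str(b) for b in frame[26:30])
    dst_ip = ".".join(str(b) for b in frame[30:34])
    return {"src_ip": src_ip, "dst_ip": dst_ip,
            "proto": proto, "total_len": total_len}

# A minimal hand-built frame: zeroed MACs, IPv4 EtherType, a UDP packet
# from 192.168.0.1 to 10.0.0.2 (all other header fields zeroed).
frame = (
    b"\x00" * 12 + b"\x08\x00"                        # Ethernet header
    + b"\x45\x00\x00\x1c" + b"\x00" * 4               # IPv4 words 1-2
    + b"\x40\x11\x00\x00"                             # TTL, proto=17 (UDP)
    + bytes([192, 168, 0, 1]) + bytes([10, 0, 0, 2])  # src/dst IP
)
info = parse_eth_ipv4(frame)  # → {'src_ip': '192.168.0.1', ...}
```

A DPU does this kind of field extraction in dedicated hardware for every packet, which is what frees the host CPU from per-packet work.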
Just as CUDA enables developers to program accelerated computing applications, DOCA enables them to program the acceleration of data processing for moving data into and out of servers, VMs, and containers. DOCA sits alongside CUDA to leverage the entire range of NVIDIA AI applications in a secure, accelerated data center.
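The kind of data-path logic DOCA lets developers push onto the DPU can be sketched as a match/action flow table. The snippet below is a software model of that idea only; the rule format and function names are illustrative and are not the actual DOCA API.

```python
# Illustrative software model (NOT the real DOCA API) of the
# match/action flow processing a DPU offloads from the host CPU.
# Each rule pairs a match predicate with an action, checked in order.
flow_table = [
    ({"dst_ip": "10.0.0.2", "proto": 17}, "forward_to_vm"),  # UDP to a VM
    ({"proto": 6}, "send_to_host"),                          # all TCP
]

def apply_flows(pkt: dict) -> str:
    """Return the action of the first matching rule, else drop."""
    for match, action in flow_table:
        if all(pkt.get(k) == v for k, v in match.items()):
            return action
    return "drop"

apply_flows({"dst_ip": "10.0.0.2", "proto": 17})  # → "forward_to_vm"
```

Once such rules are installed on the DPU, matching packets are steered in hardware without ever interrupting the host CPU, which is the offload DOCA is designed to program.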
Sources:
https://blogs.nvidia.com/blog/2020/05/20/whats-a-dpu-data-processing-unit/
https://analyticsindiamag.com/dpu-nvidia-mellanox-processors/
https://www.forbes.com/sites/janakirammsv/2020/10/11/what-is-a-data-processing-unit-dpu-and-why-is-nvidia-betting-on-it/?sh=51a903fb6a2d