Deep Learning Inference on PowerEdge R7425

This paper discusses deep learning inference using the NVIDIA T4-16GB GPU and TensorRT. The NVIDIA T4-16GB GPU is based on NVIDIA's latest Turing architecture, which significantly boosts graphics performance through a new streaming multiprocessor with improved shader execution efficiency and a new memory system architecture that supports GDDR6 memory technology. Turing's Tensor cores provide higher throughput and lower latency for AI inference applications.

The Dell EMC PowerEdge R7425 is based on AMD's EPYC architecture. Because EPYC supports a higher number of PCIe Gen3 x16 lanes, the server can be used as a scale-up inference server, making it well suited to large production AI workloads where both throughput and latency are important.
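To make the throughput-versus-latency trade-off concrete, the sketch below benchmarks a batched inference call and reports images-per-second throughput alongside tail (p99) latency, the two metrics the paper optimizes for. This is a minimal illustration, not the paper's methodology: `run_inference` is a hypothetical stub standing in for a real TensorRT engine execution, and the timing loop is a generic pattern, not Dell's benchmark harness.

```python
import time

def run_inference(batch):
    # Hypothetical stub standing in for a real TensorRT engine call;
    # cost is modeled as growing with batch size.
    time.sleep(0.001 * len(batch))
    return [x * 2 for x in batch]

def benchmark(batch_size, num_batches=50):
    """Measure per-batch latency and derive throughput and p99 latency."""
    latencies = []
    for _ in range(num_batches):
        batch = list(range(batch_size))
        start = time.perf_counter()
        run_inference(batch)
        latencies.append(time.perf_counter() - start)
    total = sum(latencies)
    p99_index = max(0, int(0.99 * len(latencies)) - 1)
    return {
        "throughput_ips": batch_size * num_batches / total,   # inferences/sec
        "p99_latency_ms": 1000 * sorted(latencies)[p99_index],
    }

stats = benchmark(batch_size=8)
```

Larger batches generally raise throughput but also raise per-request latency, which is why a production inference server must be tuned against both numbers rather than either one alone.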

Download this paper to learn:

  • Results of inference optimization on the Dell EMC PowerEdge R7425 server
  • How higher throughput and lower latency were achieved for neural network models

Register here for the Whitepaper




Dell Technologies, Dell, EMC, Dell EMC and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be trademarks of their respective owners.
