Illustration by Alex Castro / The Verge

Nvidia looks to build a bigger presence beyond GPU sales as it brings its AI-specific software development kit to more applications.

Nvidia announced that it's adding support for its TensorRT-LLM SDK to Windows and to models like Stable Diffusion. The company said in a blog post that it aims to make large language models (LLMs) and related tools run faster.

TensorRT speeds up inference, the process of running a pretrained model to calculate probabilities and produce a result, such as a newly generated Stable Diffusion image. With this software, Nvidia wants to play a bigger part in the inference side of generative AI.

Its TensorRT-LLM breaks down LLMs and lets them run faster on Nvidia's H100 GPUs. It works with LLMs like…
