this post was submitted on 20 May 2025
Free Open-Source Artificial Intelligence


Background: This Nomic blog article from September 2023 promises better performance in GPT4All for AMD graphics card owners.

Run LLMs on Any GPU: GPT4All Universal GPU Support

Likewise on GPT4All's GitHub page.

September 18th, 2023: Nomic Vulkan launches supporting local LLM inference on NVIDIA and AMD GPUs.

Problem: In GPT4All, under Settings > Application Settings > Device, I've selected my AMD graphics card, but I'm seeing no improvement over CPU performance. In both cases (AMD graphics card or CPU), it crawls along at about 4-5 tokens per second. The interaction in the screenshot below took 174 seconds to generate the response.
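For scale, a quick back-of-the-envelope check (plain Python, using only the timings reported above) shows the figures are consistent with each other, and with CPU-bound inference:

```python
# Sanity-check the reported numbers: at 4-5 tokens/s, a 174 s response
# implies roughly 700-870 generated tokens. That throughput is in the
# range one would expect from CPU-only inference of a mid-size
# quantized model, which suggests the GPU is not actually being used.
def tokens_generated(seconds: float, tokens_per_second: float) -> float:
    return seconds * tokens_per_second

print(tokens_generated(174, 4))  # 696.0
print(tokens_generated(174, 5))  # 870.0
```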

Question: Do I have to use a specific model to benefit from this advancement? Do I need to install a different AMD driver? What steps can I take to troubleshoot this?

Sorry if this is an obvious question. Sometimes I feel like the answer is right in front of me, but I'm unsure which keywords in the documentation should jump out at me.

My system info:

  • GPU: Radeon RX 6750 XT
  • CPU: Ryzen 7 5800X3D
  • RAM: 32 GB @ 3200 MHz
  • OS: Linux Bazzite
  • Install method: GPT4All as a Flatpak
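One concrete troubleshooting step is to measure tokens/second directly under each Device setting rather than eyeballing the chat. A minimal, framework-agnostic timing harness (a sketch; the `generate` callable is a stand-in for whatever API or script is driving the model, not a GPT4All function):

```python
import time

def throughput(generate, prompt: str) -> float:
    """Time one generation call and return tokens per second.

    `generate` is any callable that takes a prompt and returns the
    generated tokens (or words, as a rough proxy). Run this once with
    Device set to CPU and once with the AMD card selected; if the two
    rates are identical, the GPU setting is not taking effect.
    """
    start = time.perf_counter()
    tokens = generate(prompt)
    elapsed = time.perf_counter() - start
    return len(tokens) / elapsed

# Example with a dummy backend standing in for a real model:
def dummy_generate(prompt):
    time.sleep(0.1)                # pretend to do inference
    return prompt.split() * 10     # pretend to emit tokens

rate = throughput(dummy_generate, "hello world")
print(f"{rate:.0f} tokens/s")
```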
[email protected] 6 points 4 weeks ago (last edited 4 weeks ago)

I am somewhat new to Linux and hosting local LLMs, but I think I had to install AMD ROCm for LLMs to work with my GPU.

https://rocm.docs.amd.com/en/latest/about/release-notes.html
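If ROCm is the missing piece, one quick sanity check (only for the kernel-side interface, not a full ROCm install) is whether the amdgpu driver exposes the compute device node `/dev/kfd`, which ROCm user space requires:

```python
import os

# /dev/kfd is the interface to AMD's Kernel Fusion Driver; ROCm
# compute cannot work without it, regardless of which user-space
# packages are installed. /dev/dri holds the render nodes that a
# Vulkan backend (what GPT4All's Nomic Vulkan uses) talks to instead.
def rocm_kfd_present() -> bool:
    return os.path.exists("/dev/kfd")

def dri_nodes() -> list:
    path = "/dev/dri"
    return sorted(os.listdir(path)) if os.path.isdir(path) else []

print("ROCm /dev/kfd present:", rocm_kfd_present())
print("DRI render nodes:", dri_nodes())
```

Note that GPT4All's GPU path is Vulkan rather than ROCm, so a working `/dev/dri` render node matters more here; a Flatpak also needs GPU device access inside its sandbox.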

[email protected] 1 point 4 weeks ago (last edited 4 weeks ago)

I don't have a clue; I've only tried LM Studio and Automatic1111.

[email protected] 1 point 4 weeks ago

What card are you running on?

[email protected] 1 point 4 weeks ago (last edited 4 weeks ago)

7900 XTX. Sorry, I forgot that ROCm only supports some cards.
