this post was submitted on 20 May 2025
9 points (90.9% liked)

Free Open-Source Artificial Intelligence


Background: This Nomic blog article from September 2023 promises better performance in GPT4All for AMD graphics card owners.

Run LLMs on Any GPU: GPT4All Universal GPU Support

Likewise on GPT4All's GitHub page.

September 18th, 2023: Nomic Vulkan launches supporting local LLM inference on NVIDIA and AMD GPUs.

Problem: In GPT4All, under Settings > Application Settings > Device, I've selected my AMD graphics card, but I'm seeing no improvement over CPU performance. In both cases (AMD graphics card or CPU), it crawls along at about 4-5 tokens per second. The interaction in the screenshot below took 174 seconds to generate the response.

Question: Do I have to use a specific model to benefit from this advancement? Do I need to install a different AMD driver? What steps can I take to troubleshoot this?

Sorry if this is an obvious question. Sometimes I feel like the answer is right in front of me, but I'm unsure of which key words from the documentation should jump out at me.

My system info:

  • GPU: Radeon RX 6750 XT
  • CPU: Ryzen 7 5800X3D
  • RAM: 32 GB @ 3200 MHz
  • OS: Linux Bazzite
  • I've installed GPT4All as a Flatpak
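Since it's installed as a Flatpak, one thing worth ruling out first is whether the sandbox can see the GPU at all. A rough sketch (the app ID below is my guess, not confirmed — check yours with `flatpak list`):

```shell
# Check whether the sandbox exposes GPU devices. The app ID
# (io.gpt4all.gpt4all) is an assumption -- verify with `flatpak list`.
flatpak info --show-permissions io.gpt4all.gpt4all | grep -i device

# If /dev/dri isn't exposed, grant full device access:
flatpak override --user --device=all io.gpt4all.gpt4all

# Outside the sandbox, confirm the Vulkan driver actually sees the card:
vulkaninfo --summary | grep -i deviceName
```

If `vulkaninfo` only lists a software rasterizer (llvmpipe), the problem is at the driver level rather than in GPT4All itself.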
[–] [email protected] 5 points 4 weeks ago (5 children)

Actually, I run Fedora and have ROCm working. In fact, it's in the default package manager.

You can see 'em all by running: sudo dnf list | grep rocm

To get your GPU working, you can look up HSA_OVERRIDE_GFX_VERSION, and maybe HIP_VISIBLE_DEVICES.
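For what it's worth, the override people usually suggest for RDNA2 cards like the 6750 XT looks like the sketch below. I haven't tested this on that card myself, so treat it as a starting point:

```shell
# The RX 6750 XT identifies as gfx1031, which isn't on ROCm's official
# support list. Overriding to 10.3.0 makes ROCm treat it as gfx1030
# (RX 6800/6900 series), which often works since the ISAs are very close.
export HSA_OVERRIDE_GFX_VERSION=10.3.0

# If more than one GPU is present, restrict ROCm to the discrete card.
# Device indices come from `rocminfo` or `rocm-smi`.
export HIP_VISIBLE_DEVICES=0
```

Set these in the environment of whatever launches the inference backend; they have to be visible to the process that loads the ROCm runtime.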

Though this is the techy route. If you get LM Studio running, or even better llama.cpp, you'll have access to much better quantization formats than Q4_1.

So you'll be at the same speed as Vulkan, or even faster, and with higher-quality outputs.

[–] [email protected] 1 points 4 weeks ago (4 children)

Thanks for the info—maybe I'll give this another whirl when I have some more time.

Which card are you running on?

[–] [email protected] 3 points 4 weeks ago (3 children)

I use the 24GB 7900 XTX.

I wonder why ROCm 6.4 doesn't support you, but ROCm 6.3 does. Maybe there is a way to downgrade. Also, that HSA_OVERRIDE_GFX_VERSION environment variable may be enough to get 6.4 working for you. Not sure, though.

I'd say an easy route (if it works, lol) would be using dnf to install ROCm, and then using LM Studio's installer to get the rest.
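That route might look something like this. The package names are my best guess at Fedora's rocm-* packaging, so verify them with `sudo dnf list | grep rocm` first:

```shell
# Install the ROCm runtime and tools from Fedora's own repos.
# Package names are an assumption -- confirm with:
#   sudo dnf list | grep rocm
sudo dnf install rocm-runtime rocm-smi rocminfo

# Confirm the card is visible to the ROCm stack:
rocminfo | grep -i gfx
```

If `rocminfo` reports the card's gfx version, the HIP backend in LM Studio (or llama.cpp built with HIP) should be able to pick it up.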

[–] [email protected] 2 points 4 weeks ago (1 children)

I wonder why ROCm 6.4 doesn’t support you, but ROCm 6.3 does.

Wait, where are you reading that 6.3 supports the 6950 XT? I dug up the System Requirements (Linux) page for 6.3 and it lists the same cards as the 6.4 page. Is there another document out there that covers this topic?

[–] [email protected] 4 points 4 weeks ago* (last edited 4 weeks ago) (1 children)

Okay, I rechecked, and it looks like 6.4 and 6.3 have similar compatibility/incompatibility with certain cards.

Here are the gfx versions of different AMD cards:

https://rocm.docs.amd.com/en/develop/reference/gpu-arch-specs.html

Here are the supported GPUs for 6.4:

https://rocm.docs.amd.com/en/docs-6.4.0/compatibility/compatibility-matrix.html

So given this extra bit of research, it looks like you may be able to run ROCm on a 6950 XT, but I'm not sure about a 6750 XT.

From my experience, ROCm supports more cards than they say it does. They only list the cards they've tested, but others may still work. I was running ROCm on my 7900 XTX before it was officially supported.

[–] [email protected] 2 points 3 weeks ago

I probably would not have noticed that. I'll have to look into this some more. Thanks for all your help.
