GeForce 3070 not utilized?


Message boards : Problems and bug reports : GeForce 3070 not utilized?

Stray_Trons

Joined: 18 Jun 21
Posts: 4
Credit: 3,876,175
RAC: 0
Message 7231 - Posted: 18 Jun 2021, 17:09:04 UTC
Just getting back into the BOINC community after a hiatus following SETI's shutdown. I have loaded my old machines up, and they are running both CPU and GPU work units fine with a 1050 Ti and a GT 540.
However, my main number cruncher, built as a flight-sim server (an AMD Ryzen 9 5900X 12-core with an Nvidia 3070 card), is not running any work units that use GPU processing. Is this card too new and not yet supported?
ID: 7231
SeanHsu

Joined: 1 Sep 20
Posts: 1
Credit: 7,659,537
RAC: 0
Message 7233 - Posted: 19 Jun 2021, 2:50:44 UTC
With a GPU you can try other projects. Asteroids@home GPU tasks are not significantly faster than CPU tasks. As a result, those top computers are not equipped with a high-end graphics card.
https://asteroidsathome.net/boinc/top_hosts.php
ID: 7233
Keith Myers
Joined: 16 Nov 22
Posts: 98
Credit: 53,374,719
RAC: 367,720
Message 7315 - Posted: 16 Nov 2022, 2:02:58 UTC - in response to Message 7231.  
Yes, that is the case for the Ampere cards. They have a compute capability (CC) of 8.6, and the GPU apps cut off any cards above CC 7.5.
You can see the scheduler reply stating the case, like this:

Unsupported Compute Capability (CC) detected (8.6). Supported Compute Capabilities are between 3.0 and 7.5

So only the Turing and earlier cards work here. You won't get sent work for any Ampere cards.
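
The cutoff described above amounts to a simple range check on the card's compute capability. A minimal sketch in Python, assuming the 3.0-7.5 bounds quoted in the scheduler message (the function and names here are illustrative, not the project's actual scheduler code):

```python
# Bounds quoted in the scheduler reply above.
SUPPORTED_CC = (3.0, 7.5)

def cc_supported(cc, bounds=SUPPORTED_CC):
    """Return True if a GPU with this compute capability would be sent work."""
    lo, hi = bounds
    return lo <= cc <= hi

print(cc_supported(7.0))  # Volta (CC 7.0): True
print(cc_supported(7.5))  # Turing (CC 7.5): True
print(cc_supported(8.6))  # Ampere (CC 8.6): False
```

This matches the behavior reported later in the thread: Volta and Turing cards get work, Ampere cards do not.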
ID: 7315
Keith Myers
Joined: 16 Nov 22
Posts: 98
Credit: 53,374,719
RAC: 367,720
Message 7317 - Posted: 16 Nov 2022, 4:48:10 UTC - in response to Message 7233.  

Last modified: 16 Nov 2022, 4:48:34 UTC
With a GPU you can try other projects. Asteroids@home GPU tasks are not significantly faster than CPU tasks. As a result, those top computers are not equipped with a high-end graphics card.
https://asteroidsathome.net/boinc/top_hosts.php

That statement is patently false. GPU apps are 2x-10x faster than any CPU at this project.

A proud member of the OFA (Old Farts Association)
ID: 7317
WMD

Joined: 22 Jun 13
Posts: 4
Credit: 16,879,932
RAC: 23,015
Message 7337 - Posted: 17 Nov 2022, 15:43:03 UTC - in response to Message 7317.  
With a GPU you can try other projects. Asteroids@home GPU tasks are not significantly faster than CPU tasks. As a result, those top computers are not equipped with a high-end graphics card.
https://asteroidsathome.net/boinc/top_hosts.php

That statement is patently false. GPU apps are 2x-10x faster than any CPU at this project.

Indeed... my GPU can finish a unit in 6-7 minutes, but I see CPU hosts taking an hour or more. I think the top hosts being CPU hosts is because they have crazy CPUs - #1 is a dual-Epyc box with 256 total threads!

So only the Turing and earlier cards work here. You won't get sent work for any Ampere cards.

My Volta card works too. (I can't remember which is newer, Turing or Volta...)

Either way, the code should definitely be updated for Ampere. There would be a nice performance bump there!
ID: 7337
Keith Myers
Joined: 16 Nov 22
Posts: 98
Credit: 53,374,719
RAC: 367,720
Message 7340 - Posted: 17 Nov 2022, 18:07:40 UTC - in response to Message 7337.  

My Volta card works too. (I can't remember which is newer, Turing or Volta...)

Either way, the code should definitely be updated for Ampere. There would be a nice performance bump there!


Volta works because its compute capability is 7.0 and within the accepted range of the applications.

A proud member of the OFA (Old Farts Association)
ID: 7340
Ian&Steve C.
Volunteer developer
Volunteer tester
Joined: 23 Apr 21
Posts: 70
Credit: 50,795,250
RAC: 523,072
Message 7357 - Posted: 18 Nov 2022, 13:50:04 UTC - in response to Message 7317.  
That statement is patently false. GPU apps are 2x-10x faster than any CPU at this project.


But not per watt or per device, and overall production ends up lower than a CPU's in most cases.

That GPU can do a task 10x faster at the same power, but the CPU can run 32 of them in parallel, making the GPU slower overall.

Also, the apps here seem to benefit a lot from FP64 performance. Since the project is Nvidia-only for the GPU apps, it's hard to see that benefit, because most consumer Nvidia GPUs have lackluster FP64 performance. I'll have a Titan V here soon to verify that.
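
The throughput argument in this post is simple arithmetic. A quick illustrative sketch using the numbers stated above (a GPU task 10x faster than a CPU task, a CPU running 32 tasks in parallel; the 1-hour CPU task time is an assumption for the example):

```python
cpu_hours_per_task = 1.0                       # assumed CPU task time
gpu_hours_per_task = cpu_hours_per_task / 10   # GPU is 10x faster per task

cpu_threads = 32                               # tasks running in parallel
cpu_tasks_per_hour = cpu_threads / cpu_hours_per_task  # 32.0
gpu_tasks_per_hour = 1 / gpu_hours_per_task            # 10.0 (one task at a time)

# Despite the 10x per-task speedup, the many-threaded CPU wins on throughput.
print(cpu_tasks_per_hour > gpu_tasks_per_hour)  # True
```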
ID: 7357
WMD

Joined: 22 Jun 13
Posts: 4
Credit: 16,879,932
RAC: 23,015
Message 7366 - Posted: 19 Nov 2022, 4:24:22 UTC - in response to Message 7357.  
That GPU can do a task 10x faster at the same power, but the CPU can run 32 of them in parallel, making the GPU slower overall.

If you have a CPU with that many threads. A lot of people don't. (Or, you may not run this project with all available threads/cores.)

also the apps here seem to benefit a lot from FP64 performance. since the project is Nvidia only for the GPU apps, it's hard to see that benefit since most consumer nvidia GPUs have lackluster FP64 performance. I'll have a TitanV here soon to verify that.

I have a Titan V, it seems to average between 6.5 and 7.5 minutes per work unit. (I only run it as a fill-in project, so I only do a handful of units at a time.) Makes sense that FP64 would be advantageous here, since the numbers involved are, dare I say it, astronomical. On the other hand, that speed comes out to about 8.5 work units per hour, and it seems a lot of CPUs can do 1 unit per hour per thread, so, quite a few CPUs would be able to outperform a GPU.

Personally, I try to run CPU-only projects on the CPU, and GPU on GPU. This project, as I mentioned, is an anomaly for me.
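
The comparison above implies a break-even point. A back-of-the-envelope sketch using the figures from this post (~8.5 work units per hour on the Titan V, ~1 unit per hour per CPU thread; both are rough observations, not benchmarks):

```python
import math

gpu_wu_per_hour = 8.5              # ~6.5-7.5 min per unit on the Titan V
cpu_wu_per_hour_per_thread = 1.0   # ~1 hour per unit per CPU thread

# Number of CPU threads needed to match one such GPU.
break_even_threads = math.ceil(gpu_wu_per_hour / cpu_wu_per_hour_per_thread)
print(break_even_threads)  # 9
```

So any CPU with roughly nine or more such threads would outpace this GPU, which is consistent with big multi-core hosts topping the leaderboard.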
ID: 7366
grempel

Joined: 20 Jul 21
Posts: 4
Credit: 596,510
RAC: 850
Message 7474 - Posted: 25 Nov 2022, 16:36:10 UTC - in response to Message 7231.  
Have you installed the CUDA Toolkit from Nvidia?
ID: 7474
Keith Myers
Joined: 16 Nov 22
Posts: 98
Credit: 53,374,719
RAC: 367,720
Message 7476 - Posted: 25 Nov 2022, 18:02:14 UTC - in response to Message 7474.  
It is not necessary to install the Nvidia CUDA Toolkit to crunch with a GPU.

The runtime components of the standard drivers are entirely sufficient.

A proud member of the OFA (Old Farts Association)
ID: 7476
Georgi Vidinski
Volunteer moderator
Project administrator
Project developer
Project tester
Joined: 22 Nov 17
Posts: 159
Credit: 13,180,466
RAC: 47
Message 7479 - Posted: 25 Nov 2022, 20:29:40 UTC
A new application plan is already prepared that will extend the host/device coverage of the 102.16 applications. We just have to wait for the new batch of WUs.
“The good thing about science is that it's true whether or not you believe in it.” ― Neil deGrasse Tyson
ID: 7479
