New applications for GPU released

Message boards : News : New applications for GPU released

Profile mikey
Avatar
Send message
Joined: 1 Jan 14
Posts: 264
Credit: 20,638,800
RAC: 0
Message 2365 - Posted: 4 Jan 2014, 15:54:06 UTC - in response to Message 2345.
Last modified: 4 Jan 2014, 15:54:35 UTC

I have not tried mine here yet, but I ALWAYS leave a CPU core free when using my GPUs, unless I see very low CPU % usage while the GPU is crunching. For instance, my 7970, on another project, is using 0.84% CPU, and I keep a CPU core free just to keep it fed and running as fast as possible. If I change it to use all CPU cores for crunching, my GPU crunch times go up.


I do this for GPUGRID only. Other projects like DistrRTGen and Asteroids can live without a dedicated CPU core.


DistrRTGen was the one I was referring to; even at 0.84% of a CPU core, I always leave a core free just to keep it fed. I am doing units in the 36-minute range on a 7970 that way. I JUST got my first Asteroids units and they are only using 0.01% of a CPU core with an Nvidia 560 Ti card. On Einstein, also with a 560 Ti, each unit uses 0.2% of a CPU core; I am running 2 units at once though, so 0.4% total.
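For anyone who wants to reserve a core the way mikey describes, BOINC's standard mechanism is an app_config.xml in the project's data directory. A minimal sketch (the app name "period_search" is an assumption here; check the name your client actually reports for this project's tasks):

```xml
<!-- app_config.xml, placed in the project directory.
     Sketch only: the <name> value below is assumed, not confirmed. -->
<app_config>
  <app>
    <name>period_search</name>
    <gpu_versions>
      <!-- one task per GPU -->
      <gpu_usage>1.0</gpu_usage>
      <!-- budget a full CPU core per GPU task, so BOINC
           schedules one fewer CPU task while the GPU runs -->
      <cpu_usage>1.0</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```

After saving the file, select "Read config files" in the BOINC Manager for it to take effect.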

Dagorath
Send message
Joined: 16 Aug 12
Posts: 293
Credit: 1,116,280
RAC: 0
Message 2369 - Posted: 4 Jan 2014, 16:48:02 UTC

I see an improvement on some tasks but not others. Maybe the small sample size for the first version explains it. On my GTX 670 the previous version took anywhere from 5,400 to 5,800 secs, sample size 4. With the new version they range (so far) between 3,700 and 5,500 secs, sample size 14.

____________
BOINC FAQ Service
Official BOINC wiki
Installing BOINC on Linux

Profile HA-SOFT, s.r.o.
Project developer
Project tester
Send message
Joined: 21 Dec 12
Posts: 176
Credit: 105,283,200
RAC: 32,649
Message 2373 - Posted: 5 Jan 2014, 11:44:01 UTC - in response to Message 2365.
Last modified: 5 Jan 2014, 14:15:11 UTC

I have not tried mine here yet, but I ALWAYS leave a CPU core free when using my GPUs, unless I see very low CPU % usage while the GPU is crunching. For instance, my 7970, on another project, is using 0.84% CPU, and I keep a CPU core free just to keep it fed and running as fast as possible. If I change it to use all CPU cores for crunching, my GPU crunch times go up.


I do this for GPUGRID only. Other projects like DistrRTGen and Asteroids can live without a dedicated CPU core.


DistrRTGen was the one I was referring to; even at 0.84% of a CPU core, I always leave a core free just to keep it fed. I am doing units in the 36-minute range on a 7970 that way. I JUST got my first Asteroids units and they are only using 0.01% of a CPU core with an Nvidia 560 Ti card. On Einstein, also with a 560 Ti, each unit uses 0.2% of a CPU core; I am running 2 units at once though, so 0.4% total.


That's right for OpenCL and/or ATI. For a CUDA app with blocking sync it's not necessary (when the CPU is not otherwise needed, of course).
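For readers unfamiliar with "blocking sync": a CUDA host program can ask the driver to put the CPU thread to sleep instead of spin-waiting while kernels run. A minimal sketch of the idea (illustrative only, not the project's actual source; requires the CUDA toolkit to build):

```cuda
// Sketch: opting into blocking synchronization in a CUDA host program.
#include <cuda_runtime.h>

int main(void) {
    // Must be set before the CUDA context is created for the device.
    cudaSetDeviceFlags(cudaDeviceScheduleBlockingSync);
    cudaSetDevice(0);

    // ... launch kernels as usual ...

    // With the flag above, this blocks the host thread in the driver
    // instead of busy-waiting, leaving the CPU core nearly idle and
    // available for other BOINC tasks.
    cudaDeviceSynchronize();
    return 0;
}
```

This is why a CUDA app built this way shows the tiny CPU usage figures quoted earlier in the thread, while OpenCL/ATI apps often still benefit from a dedicated core.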

suriv
Send message
Joined: 13 Oct 12
Posts: 2
Credit: 16,394,760
RAC: 0
Message 2396 - Posted: 9 Jan 2014, 8:49:19 UTC
Last modified: 9 Jan 2014, 8:51:03 UTC

GTX760 ~4950 s


Linux 64bit
331.20
Intel Xeon E3-1240 v3, 4x 3.40GHz (~7200-7900s)
16GB RAM

Ralph McCrum
Avatar
Send message
Joined: 2 Jan 13
Posts: 2
Credit: 1,197,360
RAC: 1,195
Message 2406 - Posted: 12 Jan 2014, 18:37:37 UTC

I have a question: are these new NVIDIA apps the reason the Period Search application has quit working correctly on my computer? I do not have NVIDIA; I have an ATI video card.

Profile HA-SOFT, s.r.o.
Project developer
Project tester
Send message
Joined: 21 Dec 12
Posts: 176
Credit: 105,283,200
RAC: 32,649
Message 2407 - Posted: 12 Jan 2014, 19:08:27 UTC - in response to Message 2406.
Last modified: 12 Jan 2014, 19:08:46 UTC

I have a question: are these new NVIDIA apps the reason the Period Search application has quit working correctly on my computer? I do not have NVIDIA; I have an ATI video card.

I think not. What is in the message log?

Ralph McCrum
Avatar
Send message
Joined: 2 Jan 13
Posts: 2
Credit: 1,197,360
RAC: 1,195
Message 2408 - Posted: 12 Jan 2014, 20:40:57 UTC - in response to Message 2407.

Do you mean the "event log"? I saw nothing there that looked like anything but normal running. But the app sometimes does nothing for a whole day and then starts again. One thing: I recently upgraded to Windows 7 (I should have gone to Linux). Could that have caused the problem? Are there maybe special settings needed in Windows 7?
Thank you for your help.

Dagorath
Send message
Joined: 16 Aug 12
Posts: 293
Credit: 1,116,280
RAC: 0
Message 2410 - Posted: 12 Jan 2014, 21:59:26 UTC - in response to Message 2408.
Last modified: 12 Jan 2014, 22:02:23 UTC

It has nothing to do with your ATI video card. The way BOINC works is that when your host contacts the project server to request work, it reports details of the hardware it is running on and the server decides which application(s) your host can use. Your host would report that it has an ATI video card. The server knows it doesn't have an application for ATI cards so it doesn't send a GPU app to your host.

But the app sometimes does nothing for a whole day and then starts again.


Is it possible your host is crunching tasks from one of the other projects you are running? Under certain circumstances BOINC will ignore one project even if it has tasks for that project in the cache and crunch only tasks from one of your other projects for a while.
____________
BOINC FAQ Service
Official BOINC wiki
Installing BOINC on Linux

Fidel
Send message
Joined: 24 Nov 13
Posts: 1
Credit: 276,240
RAC: 0
Message 2416 - Posted: 15 Jan 2014, 17:55:56 UTC - in response to Message 2227.

Graphics card: ATI FirePro V7800 (FireGL)
____________

Profile Overtonesinger
Avatar
Send message
Joined: 9 Sep 13
Posts: 23
Credit: 28,108,680
RAC: 2,820
Message 2417 - Posted: 16 Jan 2014, 7:19:20 UTC

Good!
I will test this on Win8 x64 as soon as I have some time to plug the new Nvidia card into my desktop computer.
____________
Melwen - child of the Fangorn Forest

Profile mikey
Avatar
Send message
Joined: 1 Jan 14
Posts: 264
Credit: 20,638,800
RAC: 0
Message 2418 - Posted: 16 Jan 2014, 11:30:34 UTC - in response to Message 2396.

GTX760 ~4950 s


Linux 64bit
331.20
Intel Xeon E3-1240 v3, 4x 3.40GHz (~7200-7900s)
16GB RAM


Just a tad slower in Win7 Ultimate:
Win 7 64bit
driver: 327.23
AMD 6-core 3.3 GHz
GTX760 ~5,099.32 s

Have you tried going back to the 327.23 drivers yet? The new drivers are reportedly 10% or so slower for crunching. All six of my CPU cores are crunching MilkyWay units.

suriv
Send message
Joined: 13 Oct 12
Posts: 2
Credit: 16,394,760
RAC: 0
Message 2421 - Posted: 18 Jan 2014, 15:13:54 UTC - in response to Message 2418.

Have you tried going back to the 327.23 drivers yet? The new drivers are reportedly 10% or so slower for crunching. All six of my CPU cores are crunching MilkyWay units.


No, that's not planned. Does this affect all operating systems?

Profile mikey
Avatar
Send message
Joined: 1 Jan 14
Posts: 264
Credit: 20,638,800
RAC: 0
Message 2422 - Posted: 19 Jan 2014, 11:50:50 UTC - in response to Message 2421.

Have you tried going back to the 327.23 drivers yet? The new drivers are reportedly 10% or so slower for crunching. All six of my CPU cores are crunching MilkyWay units.


No, that's not planned. Does this affect all operating systems?


Yes, but not at ALL projects, and not even at all sub-projects within each project. At PrimeGrid, for example, it is slower at some of their sub-projects but not all of them. It depends on how the programmers are utilizing the GPU for crunching and how the driver developers have changed the software for faster gaming. Gaming and crunching are often at odds with each other, and sometimes our crunching slows down. The suggestion has always been: once you find a driver version that works for you, don't upgrade unless you first hear from several others that the new one is in fact better, because often it is not. And projects like MilkyWay complain when you use any beta drivers, as they are set up to handle 'released' drivers only.

Igor - brain specialist
Send message
Joined: 17 Jan 14
Posts: 1
Credit: 55,680
RAC: 0
Message 2425 - Posted: 20 Jan 2014, 23:02:52 UTC
Last modified: 20 Jan 2014, 23:05:09 UTC

I'm not sure what the advantage is of using the Nvidia GPU. My recent Asteroids CUDA run took 8 hours for 480 credits; I receive 480 credits from a normal 2-hour Asteroids CPU run. Just askin.

Profile mikey
Avatar
Send message
Joined: 1 Jan 14
Posts: 264
Credit: 20,638,800
RAC: 0
Message 2427 - Posted: 21 Jan 2014, 12:53:53 UTC - in response to Message 2425.

I'm not sure what the advantage is of using the Nvidia GPU. My recent Asteroids CUDA run took 8 hours for 480 credits; I receive 480 credits from a normal 2-hour Asteroids CPU run. Just askin.


Are you leaving a CPU core free just for the Nvidia card to use? If not, that could be your problem, as could using one of the bad batches of drivers for crunching. If you are a gamer, by all means keep the driver you are currently using, but if you are just a cruncher you might find the older driver version 327.23 faster.

GPUs, the CUDA part in your case, can do work up to 10 times faster than a CPU core can, meaning up to 10 times more credits, but keeping them fed with incoming and outgoing data is the key. Try leaving one CPU core free for every GPU and see if your times don't decrease a lot. Your GT 630 has 96 CUDA cores; that is like having 96 tiny CPU cores all crunching on a single unit, instead of just the one CPU core you get without using the Nvidia card.

Dagorath
Send message
Joined: 16 Aug 12
Posts: 293
Credit: 1,116,280
RAC: 0
Message 2428 - Posted: 21 Jan 2014, 23:42:54 UTC - in response to Message 2425.

I'm not sure what the advantage is of using the Nvidia GPU. My recent Asteroids CUDA run took 8 hours for 480 credits; I receive 480 credits from a normal 2-hour Asteroids CPU run. Just askin.


Remember the Asteroids CPU application is not just a run-of-the-mill application. It's highly optimised, and that makes it difficult for a GPU to beat.

Also, Asteroids tasks use DP (double-precision) calculations. DP takes a lot of time, and a GT 630 is very slow at DP calcs. My GTX 670 does an Asteroids task in about 45 minutes, faster than a 630 because it has more DP power, but still not extremely fast compared to the CPU app. The only Nvidia cards that will be extremely fast compared to the CPU app are those with good DP power, which means the Titan and certain Teslas. 670 and 680 cards that have been hacked to unleash their full DP capability should perform close to a Titan or Tesla, but so far nobody has tried the hack and reported it here, unless I missed it.

A different driver and freeing a CPU core might help a little, but your 630 will never be as fast as the CPU app, not even if you do the hack.
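The DP gap described above can be sketched with rough peak-throughput arithmetic. The core counts, clocks, and FP64 ratios below are assumed from public spec sheets (GK104 runs FP64 at 1/24 of FP32 rate, GK110 in a Titan at 1/3 with full-rate FP64 enabled); treat the numbers as back-of-the-envelope, not benchmarks:

```python
# Back-of-the-envelope peak throughput, in GFLOPS.
def gflops(cores, clock_ghz, ops_per_cycle=2):
    """Peak GFLOPS: cores * clock * 2 (a fused multiply-add counts as 2 ops)."""
    return cores * clock_ghz * ops_per_cycle

# GTX 670 (GK104, assumed 1344 cores @ ~0.98 GHz, FP64 at 1/24 of FP32 rate)
sp_670 = gflops(1344, 0.980)
dp_670 = sp_670 / 24        # roughly 110 GFLOPS double precision

# GTX Titan (GK110, assumed 2688 cores @ 0.837 GHz, FP64 at 1/3 with
# full-rate double precision enabled in the driver)
sp_titan = gflops(2688, 0.837)
dp_titan = sp_titan / 3     # roughly 1500 GFLOPS double precision

print(round(dp_670), round(dp_titan))
```

Under these assumptions the Titan has over ten times the DP throughput of a 670, which is why only the Titan and certain Teslas were expected to pull far ahead of the optimised CPU app.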
____________
BOINC FAQ Service
Official BOINC wiki
Installing BOINC on Linux

[TA]Assimilator1
Avatar
Send message
Joined: 24 Aug 13
Posts: 107
Credit: 29,479,440
RAC: 22,891
Message 2445 - Posted: 26 Jan 2014, 14:38:25 UTC - in response to Message 2297.

Great news for the project, but to be honest I am somewhat disappointed by the performance of mid-range cards. I guess I will be sticking with my CPU somewhat longer...


These are expected results. GPUs are very specialized and it's hard to fit code to every hardware architecture.


I take it that's why the speedup of the GPU app vs. the CPU app is relatively modest?
E.g. the modern high-end 780 Ti is 'only' ~4x faster than my old Core 2-class Pentium E5200 @ 3.6 GHz.

I do appreciate, though, that this is the 1st GPU app, so thanks so far, and I look forward to future improvements :).
And I'm glad that your CPU app is so fast, lol :D. It means I can run A@H on my CPU and my HD 5850 on MW@H for good output on both :) (when I switch my main rig back to A@H).
____________
Team AnandTech - SETI@H, Muon1 DPAD, Folding@H, MilkyWay@H, Asteroids@H, LHC@H, POGS, Rosetta@H, Einstein@H.

Main rig - i7 4930k @4.1 GHz, 16GB DDR3 1866, RX 580 8GB
2nd rig - Q9550 @3.6 GHz, 8GB DDR2 1066, HD 7870 XT 3GB(DS), Win 7

Profile HA-SOFT, s.r.o.
Project developer
Project tester
Send message
Joined: 21 Dec 12
Posts: 176
Credit: 105,283,200
RAC: 32,649
Message 2446 - Posted: 26 Jan 2014, 14:55:01 UTC - in response to Message 2445.

GPU app development is still in progress. We will release a new version after the server maintenance next week. It has a 50% improvement over the 1st version and about 20% over the current official version.

[TA]Assimilator1
Avatar
Send message
Joined: 24 Aug 13
Posts: 107
Credit: 29,479,440
RAC: 22,891
Message 2447 - Posted: 26 Jan 2014, 16:44:36 UTC - in response to Message 2446.

Nice! :)
____________
Team AnandTech - SETI@H, Muon1 DPAD, Folding@H, MilkyWay@H, Asteroids@H, LHC@H, POGS, Rosetta@H, Einstein@H.

Main rig - i7 4930k @4.1 GHz, 16GB DDR3 1866, RX 580 8GB
2nd rig - Q9550 @3.6 GHz, 8GB DDR2 1066, HD 7870 XT 3GB(DS), Win 7

Profile JStateson
Avatar
Send message
Joined: 16 Jan 14
Posts: 12
Credit: 13,625,760
RAC: 4,468
Message 2468 - Posted: 30 Jan 2014, 16:35:52 UTC - in response to Message 2428.
Last modified: 30 Jan 2014, 17:19:29 UTC

(Dagorath)
Also, Asteroids tasks use DP (double-precision) calculations. DP takes a lot of time, and a GT 630 is very slow at DP calcs. My GTX 670 does an Asteroids task in about 45 minutes, faster than a 630 because it has more DP power, but still not extremely fast compared to the CPU app. The only Nvidia cards that will be extremely fast compared to the CPU app are those with good DP power, which means the Titan and certain Teslas. 670 and 680 cards that have been hacked to unleash their full DP capability should perform close to a Titan or Tesla, but so far nobody has tried the hack and reported it here, unless I missed it.


This discussion interests me because I have both a 570 and a 670 and noticed that the 570 performed better, but with higher heat. I was not aware of how badly DP had been crippled until reading about it here. I found a discussion about the mod to the GTX 690 (and other Nvidia cards) that changes them into their professional equivalents. Years ago I modded an Athlon mobile (also "XP") into its multiprocessor equivalent using silver ink and scratching out a trace on the CPU, so this Nvidia mod interested me. I did read that the author burned out his GTX 690, but it was not on account of the mod he was making. Anyway, after reading through most of the 50+ pages, it appears my GTX 670 can be modded into a GRID K2, but there is no performance gain, as shown by a "spec" program that had DP performance as one of its tests. The success seems to be the gain in virtualization for gaming, which does not interest me. I had a bad experience replacing an "0402" surface-mount resistor and do not want to try it again. However, if it is a larger resistor on the back side of the card, I might consider trying it.






Copyright © 2020 Asteroids@home