New applications for GPU released


Message boards : News : New applications for GPU released

Dagorath
Joined: 16 Aug 12
Posts: 293
Credit: 1,116,280
RAC: 0
Message 2369 - Posted: 4 Jan 2014, 16:48:02 UTC
I see an improvement on some tasks but not others. Maybe the small sample size for the first version explains it. On my GTX 670 the previous version took anywhere from 5,400 to 5,800 secs, sample size 4. With the new version they range (so far) between 3,700 and 5,500 secs, sample size 14.

BOINC FAQ Service
Official BOINC wiki
Installing BOINC on Linux
HA-SOFT, s.r.o.
Project developer
Project tester
Joined: 21 Dec 12
Posts: 176
Credit: 135,071,861
RAC: 9,747
Message 2373 - Posted: 5 Jan 2014, 11:44:01 UTC - in response to Message 2365.  

Last modified: 5 Jan 2014, 14:15:11 UTC
I have not tried mine here yet, but I ALWAYS leave a CPU core free when using my GPUs, unless I see very low CPU % usage while the GPU is crunching. For instance my 7970, on another project, is using 0.84% CPU, and I keep a CPU core free just to keep it fed and running as fast as possible. If I change it to use all CPU cores for crunching, my GPU crunch times go up.


I do this for GPUGRID only. Other projects like DistrRTGen and Asteroids can live without a dedicated CPU core.


DistrRTGen was the one I was referring to; using 0.84% of a CPU core means I always leave a core free just to keep it fed. I am doing units in the 36-minute range on a 7970 that way. I just got my first Asteroids units and they are only using 0.01% of a CPU core with an NVIDIA 560 Ti card. On Einstein, also with a 560 Ti, it is using 0.2% of a CPU core per unit; I am running 2 units at once, though, so 0.4% total.


That's right for OpenCL and/or ATI. For a CUDA app with blocking sync it's not necessary (when the CPU is not needed, of course).
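As an aside, the standard way to reserve a core in BOINC (rather than lowering the global "% of CPUs" preference) is an app_config.xml in the project's folder under the BOINC data directory. A minimal sketch; the app name period_search is an assumption here, so check the <name> entries in client_state.xml for the project's real app name:

```xml
<!-- app_config.xml, placed in this project's folder in the BOINC data dir.
     "period_search" is assumed; verify the actual app name in client_state.xml. -->
<app_config>
  <app>
    <name>period_search</name>
    <gpu_versions>
      <gpu_usage>1.0</gpu_usage>  <!-- run one task per GPU -->
      <cpu_usage>1.0</cpu_usage>  <!-- budget a full CPU core to keep it fed -->
    </gpu_versions>
  </app>
</app_config>
```

After saving it, have the client reread config files (or restart BOINC); the scheduler will then leave a core free for each GPU task automatically instead of you juggling core counts by hand.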
suriv
Joined: 13 Oct 12
Posts: 2
Credit: 17,799,028
RAC: 2,416
Message 2396 - Posted: 9 Jan 2014, 8:49:19 UTC

Last modified: 9 Jan 2014, 8:51:03 UTC
GTX 760: ~4,950 s

Linux 64-bit
NVIDIA driver 331.20
Intel Xeon E3-1240 v3, 4x 3.40 GHz (~7,200-7,900 s per task on CPU)
16 GB RAM
Ralph McCrum
Joined: 2 Jan 13
Posts: 2
Credit: 1,907,460
RAC: 5
Message 2406 - Posted: 12 Jan 2014, 18:37:37 UTC
I have a question: are these new NVIDIA apps the reason the Period Search application has quit working correctly on my computer? I don't have an NVIDIA card, I have an ATI video card.
HA-SOFT, s.r.o.
Project developer
Project tester
Joined: 21 Dec 12
Posts: 176
Credit: 135,071,861
RAC: 9,747
Message 2407 - Posted: 12 Jan 2014, 19:08:27 UTC - in response to Message 2406.  

Last modified: 12 Jan 2014, 19:08:46 UTC
I have a question: are these new NVIDIA apps the reason the Period Search application has quit working correctly on my computer? I don't have an NVIDIA card, I have an ATI video card.

I don't think so. What is in the message log?
Ralph McCrum
Joined: 2 Jan 13
Posts: 2
Credit: 1,907,460
RAC: 5
Message 2408 - Posted: 12 Jan 2014, 20:40:57 UTC - in response to Message 2407.  
Do you mean the event log? I saw nothing there that looked like anything but normal running. But the app sometimes does nothing for a whole day and then starts again. One thing: I recently upgraded to Windows 7 (I should have gone to Linux). Could that have caused the problem? Are there maybe special settings needed in Windows 7?
Thank you for your help.
Dagorath
Joined: 16 Aug 12
Posts: 293
Credit: 1,116,280
RAC: 0
Message 2410 - Posted: 12 Jan 2014, 21:59:26 UTC - in response to Message 2408.  

Last modified: 12 Jan 2014, 22:02:23 UTC
It has nothing to do with your ATI video card. The way BOINC works is that when your host contacts the project server to request work, it reports details of the hardware it is running on and the server decides which application(s) your host can use. Your host would report that it has an ATI video card. The server knows it doesn't have an application for ATI cards so it doesn't send a GPU app to your host.

But the app sometimes does nothing for a whole day and then starts again.


Is it possible your host is crunching tasks from one of the other projects you are running? Under certain circumstances BOINC will ignore one project even if it has tasks for that project in the cache and crunch only tasks from one of your other projects for a while.
BOINC FAQ Service
Official BOINC wiki
Installing BOINC on Linux
Fidel
Joined: 24 Nov 13
Posts: 1
Credit: 276,240
RAC: 0
Message 2416 - Posted: 15 Jan 2014, 17:55:56 UTC - in response to Message 2227.  
Graphics card: ATI FirePro V7800 (FireGL)
Overtonesinger
Joined: 9 Sep 13
Posts: 23
Credit: 32,607,879
RAC: 563
Message 2417 - Posted: 16 Jan 2014, 7:19:20 UTC
Good!
I will test this on Win8 x64 as soon as I have some time to plug the new NVIDIA card into my desktop computer.
Melwen - child of the Fangorn Forest
mikey
Joined: 1 Jan 14
Posts: 300
Credit: 32,198,578
RAC: 8,815
Message 2418 - Posted: 16 Jan 2014, 11:30:34 UTC - in response to Message 2396.  
GTX 760: ~4,950 s

Linux 64-bit
NVIDIA driver 331.20
Intel Xeon E3-1240 v3, 4x 3.40 GHz (~7,200-7,900 s per task on CPU)
16 GB RAM


Just a tad slower in Win7 Ultimate:
Win 7 64-bit
driver: 327.23
AMD 6-core @ 3.3 GHz
GTX 760: ~5,099.32 s

Have you tried going back to the 327.23 drivers yet? The new drivers are reportedly 10% or so slower for crunching. All 6 CPU cores are crunching MilkyWay units.
suriv
Joined: 13 Oct 12
Posts: 2
Credit: 17,799,028
RAC: 2,416
Message 2421 - Posted: 18 Jan 2014, 15:13:54 UTC - in response to Message 2418.  
Have you tried going back to the 327.23 drivers yet? The new drivers are reportedly 10% or so slower for crunching. All 6 CPU cores are crunching MilkyWay units.


No, it's not planned. Does this affect all operating systems?
mikey
Joined: 1 Jan 14
Posts: 300
Credit: 32,198,578
RAC: 8,815
Message 2422 - Posted: 19 Jan 2014, 11:50:50 UTC - in response to Message 2421.  
Have you tried going back to the 327.23 drivers yet? The new drivers are reportedly 10% or so slower for crunching. All 6 CPU cores are crunching MilkyWay units.


No, it's not planned. Does this affect all operating systems?


Yes, but not at ALL projects, and not even at all sub-projects within a project; at PrimeGrid, for example, it is slower for some of their sub-projects but not all of them. It comes down to how the programmers are utilizing the GPU for crunching and how the driver developers have changed the software for faster gaming. Gaming and crunching aren't always at odds with each other, but sometimes they are, and then our crunching slows down. The suggestion has always been: once you find a driver version that works for you, don't upgrade unless you first hear from several others that the new one is in fact better, because often it is not. And even projects like MilkyWay complain when you use Beta drivers, as they are set up to handle released drivers only.
Igor - brain specialist
Joined: 17 Jan 14
Posts: 1
Credit: 55,680
RAC: 0
Message 2425 - Posted: 20 Jan 2014, 23:02:52 UTC

Last modified: 20 Jan 2014, 23:05:09 UTC
I'm not sure what the advantage is of using the NVIDIA GPU. My recent Asteroids CUDA run took 8 hours for 480 credits; I receive 480 credits for a normal 2-hour Asteroids CPU run. Just asking.
mikey
Joined: 1 Jan 14
Posts: 300
Credit: 32,198,578
RAC: 8,815
Message 2427 - Posted: 21 Jan 2014, 12:53:53 UTC - in response to Message 2425.  
I'm not sure what the advantage is of using the NVIDIA GPU. My recent Asteroids CUDA run took 8 hours for 480 credits; I receive 480 credits for a normal 2-hour Asteroids CPU run. Just asking.


Are you leaving a CPU core free just for the NVIDIA card to use? If not, that could be your problem, as could using one of the bad batches of drivers for crunching. If you are a gamer, by all means keep using the driver you are currently on, but if you are just a cruncher you might find the older driver version 327.23 faster.

GPUs (the CUDA part, in your case) can do work up to 10 times faster than a CPU core can, meaning up to 10 times more credits, but keeping them fed with incoming and outgoing data is the key. Try leaving one CPU core free for every GPU and see if your times don't decrease a lot. Your GT 630 has 96 CUDA cores; that is like having 96 tiny CPU cores all crunching on just one unit, instead of the single CPU core you get when not using the NVIDIA card.
Dagorath
Joined: 16 Aug 12
Posts: 293
Credit: 1,116,280
RAC: 0
Message 2428 - Posted: 21 Jan 2014, 23:42:54 UTC - in response to Message 2425.  
I'm not sure what the advantage is of using the NVIDIA GPU. My recent Asteroids CUDA run took 8 hours for 480 credits; I receive 480 credits for a normal 2-hour Asteroids CPU run. Just asking.


Remember, the Asteroids CPU application is not just a run-of-the-mill application. It's highly optimised, and that makes it difficult for a GPU to beat.

Also, Asteroids tasks use DP (double precision) calculations. DP takes a lot of time, and a GT 630 is very slow at DP calcs. My GTX 670 does an Asteroids task in about 45 minutes, faster than a 630 because it has more "DP power", but still not extremely fast compared to the CPU app. The only NVIDIA cards that will be extremely fast compared to the CPU app are those with good DP power, which means the Titan and certain Teslas. 670 and 680 cards that have been hacked to unleash their full DP capability should perform close to a Titan or Tesla, but so far nobody has tried the hack and reported it here, unless I missed it.

A different driver and freeing a CPU core might help a little but your 630 will never be as fast as the CPU app, not even if you do the hack.
BOINC FAQ Service
Official BOINC wiki
Installing BOINC on Linux
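The SP/DP gap Dagorath describes is easy to sanity-check on the CPU side. The sketch below (an illustration, not project code) times a float32 vs float64 matrix multiply in NumPy; on a CPU the ratio is typically only around 2x, whereas Kepler-era GeForce cards like the GTX 600 series do DP at 1/24 of their SP rate and the Titan at 1/3, which is why the choice of card matters so much for a DP-heavy workload like this one.

```python
import time
import numpy as np

def time_matmul(dtype, n=512, reps=5):
    """Average seconds for one n x n matrix multiply at the given precision."""
    rng = np.random.default_rng(0)
    a = rng.random((n, n)).astype(dtype)
    b = rng.random((n, n)).astype(dtype)
    a @ b  # warm-up so one-time setup cost isn't timed
    t0 = time.perf_counter()
    for _ in range(reps):
        a @ b
    return (time.perf_counter() - t0) / reps

sp = time_matmul(np.float32)  # single precision
dp = time_matmul(np.float64)  # double precision
print(f"SP: {sp * 1e3:.2f} ms, DP: {dp * 1e3:.2f} ms, DP/SP ratio: {dp / sp:.1f}x")
```

Run the same comparison through a GPU library on a GeForce card and the ratio balloons toward the hardware cap, which is exactly the effect being discussed in this thread.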
[TA]Assimilator1
Joined: 24 Aug 13
Posts: 111
Credit: 31,156,736
RAC: 9,004
Message 2445 - Posted: 26 Jan 2014, 14:38:25 UTC - in response to Message 2297.  
Great news for the project, but to be honest I am somewhat disappointed by the performance of mid-range cards. I guess I will be sticking with my CPU somewhat longer...


These are expected results. GPUs are very specialized, and it's hard to fit code to every hardware architecture.


I take it that's why the speed gain for the GPU app vs the CPU app is relatively modest?
e.g. the modern high-end 780 Ti is 'only' ~4x faster than my old C2D Pentium E5200 @ 3.6 GHz.

I do appreciate, though, that this is the 1st GPU app, so thanks so far & I look forward to future improvements :).
And I'm glad that your CPU app is so fast, lol :D, means I can run A@H on my CPU & my HD 5850 on MW@H for good output on both :) (when I switch my main rig back to A@H).
Team AnandTech - SETI@H, Muon1 DPAD, Folding@H, MilkyWay@H, Asteroids@H, LHC@H, POGS, Rosetta@H, Einstein@H,DHPE & CPDN

Main rig - Ryzen 3600, 32GB DDR4 3200, RX 580 8GB, Win10
2nd rig - i7 4930k @4.1 GHz, 16GB DDR3 1866, HD 7870 XT 3GB(DS), Win7
HA-SOFT, s.r.o.
Project developer
Project tester
Joined: 21 Dec 12
Posts: 176
Credit: 135,071,861
RAC: 9,747
Message 2446 - Posted: 26 Jan 2014, 14:55:01 UTC - in response to Message 2445.  
The GPU app development is still in progress. We will release a new version after the server maintenance next week. There is a 50% improvement over the first version and about 20% over the current official version.
[TA]Assimilator1
Joined: 24 Aug 13
Posts: 111
Credit: 31,156,736
RAC: 9,004
Message 2447 - Posted: 26 Jan 2014, 16:44:36 UTC - in response to Message 2446.  
Nice! :)
Team AnandTech - SETI@H, Muon1 DPAD, Folding@H, MilkyWay@H, Asteroids@H, LHC@H, POGS, Rosetta@H, Einstein@H,DHPE & CPDN

Main rig - Ryzen 3600, 32GB DDR4 3200, RX 580 8GB, Win10
2nd rig - i7 4930k @4.1 GHz, 16GB DDR3 1866, HD 7870 XT 3GB(DS), Win7
JStateson
Joined: 16 Jan 14
Posts: 17
Credit: 27,390,494
RAC: 21,456
Message 2468 - Posted: 30 Jan 2014, 16:35:52 UTC - in response to Message 2428.  

Last modified: 30 Jan 2014, 17:19:29 UTC
(Dagorath)
Also, Asteroids tasks use DP (double precision) calculations. DP takes a lot of time, and a GT 630 is very slow at DP calcs. My GTX 670 does an Asteroids task in about 45 minutes, faster than a 630 because it has more "DP power", but still not extremely fast compared to the CPU app. The only NVIDIA cards that will be extremely fast compared to the CPU app are those with good DP power, which means the Titan and certain Teslas. 670 and 680 cards that have been hacked to unleash their full DP capability should perform close to a Titan or Tesla, but so far nobody has tried the hack and reported it here, unless I missed it.


This discussion interests me because I have both a 570 and a 670, and I noticed that the 570 performed better but ran hotter. I was not aware of how badly DP had been crippled until reading about it here. I found a discussion about the mod to the GTX 690 (and other NVIDIA cards) that changes them into their professional equivalents. Years ago I modded an Athlon mobile (also "XP") into its multiprocessor equivalent using silver ink and scratching out a trace on the CPU, so this NVIDIA mod interested me. I did read that the author burned out his GTX 690, but it was not on account of the mod he was making.

Anyway, after reading through most of the 50+ pages, it appears my GTX 670 can be modded into a GRID K2, but there is no performance gain as shown by some "spec" program that had DP performance as one of its tests. The success seems to be the gain in virtualization for gaming, which does not interest me. I had a bad experience replacing an "0402" surface-mount resistor and do not want to try it again. However, if it is a larger resistor on the back side of the card, I might consider trying it.
Dagorath
Joined: 16 Aug 12
Posts: 293
Credit: 1,116,280
RAC: 0
Message 2474 - Posted: 31 Jan 2014, 4:22:54 UTC - in response to Message 2468.  

Last modified: 31 Jan 2014, 4:36:59 UTC
Anyway, after reading through most of the 50+ pages, it appears my GTX 670 can be modded into a GRID K2, but there is no performance gain as shown by some "spec" program that had DP performance as one of its tests.


I recall reading that too, now that you mention it, and I believe I made a mental note to investigate further because I was somewhat confused. Eventually, as my interest in the hack waned, I forgot to follow up. I still don't know what to make of it.

0402 resistors are hard to work with even with the best of tools, and if you don't have the right tools and steady hands it's nearly impossible. I have the tools and the hands, but the price of the card is a de-motivating factor for me. If it were a $50 card, or if other components on the board weren't so close to the resistor(s) in question, I would have attempted it months ago.

Another thing that discourages me is that HA-SOFT said the best configuration is a CPU with the AVX 2.0 instruction set extension. If I understand him correctly, he is saying AVX 2.0 will complete an A@H task faster than a Titan. Well, I'll be ordering a Haswell with AVX 2.0 fairly soon, and if it turns out faster than a GTX 670 on current A@H tasks then I see no reason to do a risky hack on an expensive video card.

(Please, nobody should get the impression that I'm suggesting AVX 2.0 is better at DP than a fast GPU for all applications. Maybe AVX 2.0 is faster for the algorithm in use at A@H, and even if that is true, it doesn't mean it's true for every algorithm.)

Given all that, the fact that the A@H apps are being continually updated and improved, and the mention of a second project (a sub-project?) here at A@H that might use GPUs, I think the wise thing for me to do is hold off on hacks for now. Or maybe just buy a Titan. Or maybe the second project will need less DP and more SP, which would suit my 670 better.

If you or anybody else wants some tips on soldering surface-mount resistors, I'll be glad to share what I know (or should I say, share what works for me), as long as you understand I don't do it for a living and probably don't use the same techniques a certified board technician would. I'm a certified crazy bored hacker, big difference ;-)
BOINC FAQ Service
Official BOINC wiki
Installing BOINC on Linux