New applications for GPU released
Message boards : News : New applications for GPU released
Joined: 16 Aug 12 Posts: 293 Credit: 1,116,280 RAC: 0 |
I see an improvement on some tasks but not others. Maybe the small sample size for the first version explains it. On my GTX 670 the previous version took anywhere from 5,400 to 5,800 secs (sample size 4). With the new version they range (so far) between 3,700 and 5,500 secs (sample size 14). BOINC FAQ Service Official BOINC wiki Installing BOINC on Linux |
Joined: 21 Dec 12 Posts: 176 Credit: 136,462,135 RAC: 8 |
Last modified: 5 Jan 2014, 14:15:11 UTC I have not tried mine here yet, but I ALWAYS leave a CPU core free when using my GPUs, unless I see very low CPU % usage while the GPU is crunching. For instance my 7970, on another project, is using 0.84% CPU, and I do have a CPU core free just to keep it fed and running as fast as possible. If I change it to use all CPU cores for crunching, my GPU crunch times go up. That's right for OpenCL and/or ATI. For a CUDA app with blocking sync it's not necessary (when the CPU is not needed, of course) |
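The "leave a CPU core free" habit described above can also be enforced per project with BOINC's app_config.xml file. A minimal sketch, assuming the app name is period_search (check the actual app name in your client's event log; it varies by project):

```xml
<!-- app_config.xml, placed in the project's folder under the BOINC data directory.
     Budgets a full CPU core for each running GPU task so the GPU stays fed. -->
<app_config>
  <app>
    <!-- assumed app name; replace with the name your project reports -->
    <name>period_search</name>
    <gpu_versions>
      <gpu_usage>1.0</gpu_usage>   <!-- run one task per GPU -->
      <cpu_usage>1.0</cpu_usage>   <!-- reserve one whole CPU core per GPU task -->
    </gpu_versions>
  </app>
</app_config>
```

After saving the file, use "Options > Read config files" in the BOINC Manager (or restart the client) for it to take effect.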
Joined: 13 Oct 12 Posts: 2 Credit: 18,052,256 RAC: 1,528 |
Last modified: 9 Jan 2014, 8:51:03 UTC |
Joined: 2 Jan 13 Posts: 2 Credit: 1,907,460 RAC: 0 |
|
Joined: 21 Dec 12 Posts: 176 Credit: 136,462,135 RAC: 8 |
Last modified: 12 Jan 2014, 19:08:46 UTC |
Joined: 2 Jan 13 Posts: 2 Credit: 1,907,460 RAC: 0 |
Do you mean the "event log"? I saw nothing there that looked like anything but normal running. But the app sometimes does nothing for a whole day and then starts again. One thing: I recently upgraded to Windows 7 (I should have gone to Linux). Could that have caused the problem? Are there maybe special settings needed in Windows 7? Thank you for your help |
Joined: 16 Aug 12 Posts: 293 Credit: 1,116,280 RAC: 0 |
Last modified: 12 Jan 2014, 22:02:23 UTC It has nothing to do with your ATI video card. The way BOINC works is that when your host contacts the project server to request work, it reports details of the hardware it is running on, and the server decides which application(s) your host can use. Your host would report that it has an ATI video card. The server knows it doesn't have an application for ATI cards, so it doesn't send a GPU app to your host.

But the app sometimes does nothing for a whole day and then starts again.

Is it possible your host is crunching tasks from one of the other projects you are running? Under certain circumstances BOINC will ignore one project, even if it has tasks for that project in the cache, and crunch only tasks from one of your other projects for a while. BOINC FAQ Service Official BOINC wiki Installing BOINC on Linux |
Joined: 24 Nov 13 Posts: 1 Credit: 276,240 RAC: 0 |
|
Joined: 9 Sep 13 Posts: 23 Credit: 32,670,898 RAC: 291 |
|
Joined: 1 Jan 14 Posts: 302 Credit: 32,671,868 RAC: 0 |
GTX760 ~4950 s

Just a tad slower in Win7 Ultimate: Win 7 64-bit, driver 327.23, AMD 6-core 3.3 GHz, GTX760 ~5,099.32 s. Have you tried going back to the 327.23 drivers yet? The new drivers are reportedly 10% or so slower for crunching. All 6 CPUs are crunching MilkyWay units. |
Joined: 13 Oct 12 Posts: 2 Credit: 18,052,256 RAC: 1,528 |
|
Joined: 1 Jan 14 Posts: 302 Credit: 32,671,868 RAC: 0 |
Have you tried going back to the 327.23 drivers yet? The new drivers are reportedly 10% or so slower for crunching. All 6 CPUs are crunching MilkyWay units.

Yes, but not at ALL projects, and not even at all sub-projects within each project. At PrimeGrid, for example, it is slower at some of their sub-projects, but not at all of them. It depends on how the programmers are utilizing the GPU for crunching and how the driver developers have changed the software for faster gaming. Gaming and crunching aren't always at odds with each other, but sometimes they are, and our crunching slows down. The suggestion has always been: once you find a driver version that works for you, don't upgrade unless you first hear from several others that the new one is in fact better, because often it is not. And even projects like MilkyWay complain when you use any beta drivers, as they are set up to handle released drivers only. |
Joined: 17 Jan 14 Posts: 1 Credit: 55,680 RAC: 0 |
Last modified: 20 Jan 2014, 23:05:09 UTC |
Joined: 1 Jan 14 Posts: 302 Credit: 32,671,868 RAC: 0 |
I'm not sure as to what the advantage is using the NVidia GPU? My recent asteroid CUDA run took 8 hours for 480 credits. I receive 480 credits with a normal 2 hour Asteroid run. Just askin.

Are you leaving a CPU core free just for the Nvidia card to use? If not, that could be your problem, as could using one of the bad batches of drivers for crunching. If you are a gamer, by all means keep using the driver you currently have, but if you are just a cruncher you might find the older driver version 327.23 faster. GPUs, the CUDA part in your case, can do work up to 10 times faster than a CPU core can, meaning up to 10 times more credits, but keeping them fed with incoming and outgoing data is the key. Try leaving one CPU core free for every GPU and see if your times don't decrease a lot. Your GT630 has "CUDA Cores: 96"; that is like having 96 tiny CPU cores all crunching on just one unit, instead of the single CPU core you use when not crunching on the Nvidia card. |
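The credit comparison quoted above is just a throughput calculation. A quick sketch with the run times from the post (credits per task assumed equal, as stated):

```python
# Compare credit throughput of the CUDA run vs the normal CPU run
# described above: 480 credits each, 8 hours on the GPU vs 2 hours on the CPU.
def credits_per_hour(credits: float, hours: float) -> float:
    """Average credit earned per wall-clock hour."""
    return credits / hours

gpu_rate = credits_per_hour(480, 8)   # 60 credits/hour
cpu_rate = credits_per_hour(480, 2)   # 240 credits/hour
print(f"GPU: {gpu_rate:.0f} cr/h, CPU: {cpu_rate:.0f} cr/h, "
      f"ratio {cpu_rate / gpu_rate:.0f}x in favour of the CPU here")
```

So with those particular run times the CPU app is 4x more productive, which is the point the next reply explains.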
Joined: 16 Aug 12 Posts: 293 Credit: 1,116,280 RAC: 0 |
I'm not sure as to what the advantage is using the NVidia GPU? My recent asteroid CUDA run took 8 hours for 480 credits. I receive 480 credits with a normal 2 hour Asteroid run. Just askin.

Remember, the Asteroids CPU application is not just a run-of-the-mill application. It's a highly optimised application, and that makes it difficult for a GPU to beat it. Also, Asteroids tasks use DP (double precision) calculations. DP takes a lot of time, and a GT 630 is very slow on DP calcs. My GTX 670 does an Asteroids task in about 45 minutes, faster than a 630 because it has more "DP power", but still not extremely fast compared to the CPU app. The only NVIDIA cards that will be extremely fast compared to the CPU app are the cards that have good DP power, which means the Titan and certain Teslas. 670 and 680 cards that have been hacked to unleash their full DP capability should perform close to Titan and Tesla, but so far nobody has tried the hack and reported it here, unless I missed it. A different driver and freeing a CPU core might help a little, but your 630 will never be as fast as the CPU app, not even if you do the hack. BOINC FAQ Service Official BOINC wiki Installing BOINC on Linux |
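The "DP power" argument can be made concrete with a peak double-precision estimate: cores x clock x 2 (an FMA counts as 2 FLOPs) x the card's FP64 ratio. The clocks and FP64 ratios below are illustrative assumptions, not figures from the post (consumer Fermi GeForce cards run FP64 at roughly 1/12 of FP32, Kepler GeForce at 1/24, and the Titan can run it at 1/3):

```python
# Rough peak-FP64 throughput estimate for a few of the cards discussed.
# All clocks and FP64 ratios are assumed/approximate, for illustration only.
def peak_fp64_gflops(cuda_cores: int, clock_ghz: float, fp64_ratio: float) -> float:
    """Peak FP64 GFLOPS: cores * clock * 2 FLOPs per FMA * FP64 ratio."""
    return cuda_cores * clock_ghz * 2 * fp64_ratio

cards = {
    "GT 630 (96 cores, ~0.81 GHz, 1/12)":     peak_fp64_gflops(96,   0.81, 1 / 12),
    "GTX 670 (1344 cores, ~0.98 GHz, 1/24)":  peak_fp64_gflops(1344, 0.98, 1 / 24),
    "GTX Titan (2688 cores, ~0.88 GHz, 1/3)": peak_fp64_gflops(2688, 0.88, 1 / 3),
}
for name, gflops in cards.items():
    print(f"{name}: ~{gflops:.0f} GFLOPS FP64 peak")
```

Even as a rough sketch this shows why the Titan class is in a different league for DP work, and why a GT 630 cannot catch a well-optimised CPU app.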
Joined: 24 Aug 13 Posts: 111 Credit: 31,766,294 RAC: 3,326 |
Great news for the project, but to be honest I am somewhat disappointed by the performance of mid-range cards. I guess I will be sticking with my CPU somewhat longer... I take it that's why the speed upgrade for the GPU app vs the CPU is relatively modest? E.g. the modern high-end 780 Ti is 'only' ~4x faster than my old C2D Pentium E5200 @ 3.6 GHz. I do appreciate, though, that this is the 1st GPU app, so thanks so far, and I look forward to future improvements :). And I'm glad that your CPU app is so fast, lol :D, means I can run A@H on my CPU and my HD 5850 on MW@H for good output on both :) (when I switch my main rig back to A@H). Team AnandTech - SETI@H, Muon1 DPAD, Folding@H, MilkyWay@H, Asteroids@H, LHC@H, POGS, Rosetta@H, Einstein@H, DHPE & CPDN Main rig - Ryzen 3600, 32GB DDR4 3200, RX 580 8GB, Win10 2nd rig - i7 4930k @4.1 GHz, 16GB DDR3 1866, HD 7870 XT 3GB(DS), Win7 |
Joined: 21 Dec 12 Posts: 176 Credit: 136,462,135 RAC: 8 |
|
Joined: 24 Aug 13 Posts: 111 Credit: 31,766,294 RAC: 3,326 |
|
Joined: 16 Jan 14 Posts: 17 Credit: 30,384,573 RAC: 11,770 |
Last modified: 30 Jan 2014, 17:19:29 UTC (Dagorath) This discussion interests me because I have both a 570 and a 670 and noticed that the 570 performed better, but with higher heat. I was not aware of how badly the DP had been crippled until reading about it here. I found a discussion about the mod to the 690 (and other NVidia cards) to change them into their professional equivalents. Years ago I modded an Athlon mobile (also "XP") to change it into the multiprocessor equivalent, using silver ink and scratching out a trace on the CPU, so this NVidia mod interested me. I did read where the author burned out his GTX 690, but it was not on account of the mod he was making. Anyway, after reading through most of the 50+ pages, it appears my GTX 670 can be modded into a GRID K2, but there is no performance gain, as shown by some "spec" program that had DP performance as one of its tests. The success seems to be the gain in virtualization for gaming, which does not interest me. I had a bad experience replacing an "0402" surface-mount resistor and do not want to try it again. However, if it is a larger resistor and on the back side of the card, then I might consider trying it. |
Joined: 16 Aug 12 Posts: 293 Credit: 1,116,280 RAC: 0 |
Last modified: 31 Jan 2014, 4:36:59 UTC

Anyway, after reading through most of the 50+ pages, it appears my gtx670 can be modded into a k2 grid but there is no performance gain as shown on some "spec" program that had DP performance as one of its tests

I recall reading that too, now that you mention it, and I believe I made a mental note to investigate further because I was somewhat confused. Eventually, as my interest in the hack waned, I forgot to investigate. I still don't know what to make of it. 0402 resistors are hard to work with even with the best of tools, and if you don't have the right tools and steady hands it's nearly impossible. I have the tools and the hands, but the price of the card is a demotivating factor for me. If it was a $50 card, or if other components on the board weren't so close to the resistor(s) in question, I would have attempted it months ago. Another thing that discourages me is that HA-Soft said the best configuration is a CPU with the AVX 2.0 instruction set extension. If I understand him correctly, he is saying AVX 2.0 will complete an A@H task faster than a Titan. Well, I'll be ordering a Haswell with AVX 2.0 fairly soon, and if it turns out faster than a GTX 670 on current A@H tasks then I see no reason to do a risky hack on an expensive video card. (Please, nobody should get the impression that I'm suggesting AVX 2.0 is better at DP than a fast GPU for all applications. Maybe AVX 2.0 is faster for the algorithm in use at A@H, and if that is true it doesn't mean it's true for every algorithm.) Given all that, the fact that A@H apps are being continually updated and improved, and the fact that there has been mention of a second project (a sub-project?) here at A@H that might use GPU, I think the wise thing for me to do is hold off on hacks for now. Or maybe just buy a Titan. Or maybe the second project will require less DP and more SP, which would suit my 670 better.

If you or anybody else wants some tips on soldering surface-mount resistors, I'll be glad to share what I know (or should I say, share what works for me), as long as you understand I don't do it for a living and probably don't use the same techniques a certified board technician would use. I'm a certified crazy bored hacker, big difference ;-) BOINC FAQ Service Official BOINC wiki Installing BOINC on Linux |
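For context on the AVX 2.0 claim: a Haswell core with FMA3 can issue two 256-bit fused multiply-adds per cycle, i.e. 16 double-precision FLOPs per cycle per core. A rough peak estimate (the 4-core, 3.5 GHz figures below are assumptions for illustration, not a specific CPU from the thread):

```python
# Peak FP64 estimate for a Haswell CPU with AVX2/FMA3:
# 2 FMA ports * 4 doubles per 256-bit register * 2 FLOPs per FMA = 16 FLOPs/cycle/core.
def haswell_peak_fp64_gflops(cores: int, clock_ghz: float,
                             flops_per_cycle: int = 16) -> float:
    """Theoretical peak FP64 GFLOPS across all cores."""
    return cores * clock_ghz * flops_per_cycle

# Hypothetical 4-core Haswell at 3.5 GHz:
print(f"~{haswell_peak_fp64_gflops(4, 3.5):.0f} GFLOPS FP64 peak")  # ~224
```

That peak is on the same order as a Kepler GeForce card's crippled FP64, which makes HA-Soft's claim plausible for a well-vectorised DP workload, even if real code reaches only a fraction of peak on either device.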