RTX 2080 Ti vs GTX 1060 3GB


Message boards : Number crunching : RTX 2080 Ti vs GTX 1060 3GB

[AF>Amis des Lapins] Jean-Luc

Joined: 11 Aug 12
Posts: 3
Credit: 4,410,836
RAC: 52,512
Message 6571 - Posted: 18 Apr 2020, 8:54:57 UTC
Hi,

My wife has a computer with an NVidia GTX 1060 3GB GPU, running Windows.
I have a computer with two NVidia RTX 2080 Ti GPUs, running Xubuntu.

With one NVidia GTX 1060 3GB: 35,000 points per day.
With two NVidia RTX 2080 Ti: 120,000 points per day!
With one NVidia GTX 1060 3GB: 1,300 seconds per task.
With one NVidia RTX 2080 Ti: 680 seconds per task!

What is the problem? Linux?

Are the 102.13 (cuda102_linux) tasks exactly the same as the 102.00 (cuda55) tasks?
If so, I don't think it's worth doing GPU computations for Asteroids@home with my two NVidia RTX 2080 Ti cards.
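The arithmetic behind these figures, as a quick check (numbers exactly as quoted above):

```python
# Per-card comparison of the figures quoted above.
gtx1060_ppd = 35_000          # points per day, one GTX 1060 3GB
rtx2080ti_pair_ppd = 120_000  # points per day, two RTX 2080 Ti
gtx1060_task_s = 1300         # seconds per task on the 1060
rtx2080ti_task_s = 680        # seconds per task on one 2080 Ti

ppd_per_2080ti = rtx2080ti_pair_ppd / 2
print(f"per-card PPD ratio: {ppd_per_2080ti / gtx1060_ppd:.2f}x")       # 1.71x
print(f"task-time ratio:    {gtx1060_task_s / rtx2080ti_task_s:.2f}x")  # 1.91x
```

So each 2080 Ti is delivering roughly 1.7-1.9x the output of a 1060 here, which is the gap the replies below discuss.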
ID: 6571
[AF] Hydrosaure

Joined: 23 Mar 16
Posts: 1
Credit: 15,567,351
RAC: 0
Message 6576 - Posted: 20 Apr 2020, 9:15:39 UTC - in response to Message 6571.  
To add to this topic:

I have two Linux hosts, one with a 1060 3GB and one with a 960.

The 1060 is:
    running NVidia driver 440.82
    using the cuda102_linux application
    consuming 100% of one CPU core
    averaging 3,530 points a day

The 960 is:
    running NVidia driver 390.132
    using the cuda55 application
    consuming less than 1% CPU
    averaging 7,340 points a day

There definitely seems to be something odd about this: the lesser card is doing double the points?!
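For what it's worth, the gap in those figures in one line (numbers as quoted above):

```python
# Daily output of the two hosts, per the figures above.
gtx960_ppd = 7_340   # points per day, GTX 960 host (cuda55)
gtx1060_ppd = 3_530  # points per day, GTX 1060 3GB host (cuda102_linux)
print(f"960 vs 1060 PPD: {gtx960_ppd / gtx1060_ppd:.2f}x")  # 2.08x
```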

ID: 6576
JohnMD
Joined: 7 Apr 14
Posts: 18
Credit: 5,380,609
RAC: 0
Message 6584 - Posted: 21 Apr 2020, 21:51:32 UTC - in response to Message 6571.  

Last modified: 21 Apr 2020, 21:53:59 UTC
> With one NVidia GTX 1060 3GB: 35,000 points per day.
> With two NVidia RTX 2080 Ti: 120,000 points per day!
> With one NVidia GTX 1060 3GB: 1,300 seconds per task.
> With one NVidia RTX 2080 Ti: 680 seconds per task!
>
> What is the problem? Linux?

Superficially, it looks like your math doesn't stack up: one 2080 Ti is apparently about two times faster than a 1060, and two of them are about four times faster. What did you expect?
ID: 6584
Presrvd
Joined: 15 Jun 15
Posts: 16
Credit: 123,015,840
RAC: 0
Message 6585 - Posted: 21 Apr 2020, 22:20:53 UTC - in response to Message 6584.  

> Superficially, it looks like your math doesn't stack up: one 2080 Ti is apparently about two times faster than a 1060, and two of them are about four times faster. What did you expect?

I was going to say the same thing. Every time his wife's machine completes a single task, his machine completes (roughly) four. I'm afraid I don't see the issue...
ID: 6585
[AF>Amis des Lapins] Jean-Luc

Joined: 11 Aug 12
Posts: 3
Credit: 4,410,836
RAC: 52,512
Message 6587 - Posted: 22 Apr 2020, 13:41:12 UTC - in response to Message 6584.  

> Superficially, it looks like your math doesn't stack up: one 2080 Ti is apparently about two times faster than a 1060, and two of them are about four times faster. What did you expect?

If I take the Collatz project as an example:

With one NVidia GTX 1060 3GB: 1,600 seconds per task, 2,000,000 points per day.
With one NVidia RTX 2080 Ti: 130 seconds per task, 19,000,000 points per day.

19,000,000 / 2,000,000 = 9.5 and 1600 / 130 ≈ 12.3

For the Collatz project, the NVidia RTX 2080 Ti is about 10 times faster than the NVidia GTX 1060.
So I'm very surprised that for Asteroids@home it's only twice as fast.
But maybe it's the Collatz numbers that are abnormal!

Indeed, the theoretical power of the RTX 2080 Ti is 13.45 TFLOPS and that of the GTX 1060 is 4.4 TFLOPS, and 13.45 / 4.4 ≈ 3.1.
I really don't understand why there are such differences between BOINC projects.

But I'm not a computer professional!
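The three ratios in that post, computed side by side (numbers as quoted; the TFLOPS figures are FP32 peaks):

```python
# Observed Collatz ratios vs. the theoretical FP32 peak ratio.
collatz_ppd_ratio = 19_000_000 / 2_000_000  # points-per-day ratio
collatz_time_ratio = 1600 / 130             # task-time ratio
tflops_ratio = 13.45 / 4.4                  # theoretical FP32 peak ratio

print(f"Collatz PPD ratio:  {collatz_ppd_ratio:.1f}x")   # 9.5x
print(f"Collatz time ratio: {collatz_time_ratio:.1f}x")  # 12.3x
print(f"FP32 peak ratio:    {tflops_ratio:.1f}x")        # 3.1x
```

By FP32 peak alone you would expect roughly 3x, so the Asteroids@home gap of ~2x is closer to the hardware ratio than the Collatz gap of ~10x is.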
ID: 6587
mikey
Joined: 1 Jan 14
Posts: 302
Credit: 32,660,968
RAC: 2,065
Message 6588 - Posted: 22 Apr 2020, 14:24:14 UTC - in response to Message 6587.  

> For the Collatz project, the NVidia RTX 2080 Ti is about 10 times faster than the NVidia GTX 1060.
> So I'm very surprised that for Asteroids@home it's only twice as fast.
> I really don't understand why there are such differences between BOINC projects.

Each project has its own programmer to write the app, and given the many differences between projects there is just no way to make one app as fast as another project's app. One difference is the kind of work being processed: Collatz is looking for the result of a math problem, and GPUs can zip through those in no time, while Asteroids@home, although it is also doing math, is much more computationally intense, so it takes much longer and is harder to optimize.

The reason each project has different programmers comes down to money and confidentiality: project A doesn't want others stealing the way it does things, so it gets an in-house person to do the programming, while project B can afford to pay someone who has done it before, so its app is more optimized. Both work, which is the whole point. Another difference is what programmers charge to write the app: some projects have the money to pay someone good, while others can only pay someone who can get it done. Seti got help from Nvidia writing theirs, so it was highly optimized; Nvidia has not helped other projects to the same degree.
ID: 6588
[AF>Amis des Lapins] Jean-Luc

Joined: 11 Aug 12
Posts: 3
Credit: 4,410,836
RAC: 52,512
Message 6593 - Posted: 24 Apr 2020, 14:31:25 UTC - in response to Message 6588.  
Thank you very much for your honest answer, and for the detailed explanations!
Like many other people, I don't necessarily want to earn absolutely as many points as possible.
There's also a kind of curiosity that pushes me to crunch for different projects, even if they don't bring in many points. And asteroids are very stimulating!
I ask this kind of question about credit mainly to make sure that my equipment is being used to 100% of its potential.
ID: 6593
ProDigit

Joined: 8 Nov 19
Posts: 15
Credit: 3,200,160
RAC: 0
Message 6615 - Posted: 27 Apr 2020, 21:05:56 UTC - in response to Message 6588.  

> Each project has its own programmer to write the app ... Seti got help from Nvidia writing theirs, so it was highly optimized; Nvidia has not helped other projects to the same degree.

Collatz also awards a quick-return bonus.
That being said, a 2080 Ti should be twice as fast as a 2060, not a 1060!
I came here because I've been running a few batches of Asteroids@home tasks on 2080 Tis and feel like I get way too little credit for it.
Granted, about 80 WUs still need to be validated, but I feel we're not getting an equivalent PPD on Asteroids@home versus most other projects.

There might be an issue with the CPU not being fast enough to feed an RTX 2080 Ti.
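One way to test that last hypothesis (my own sketch, nothing Asteroids@home ships): sample GPU utilization with nvidia-smi while a task runs. Consistently low or spiky utilization would suggest the CPU feeder thread is the bottleneck. This assumes `nvidia-smi` (installed with the NVidia driver) is on PATH; `parse_utilization` and `gpu_utilization` are helper names of my own.

```python
import subprocess

def parse_utilization(csv_out: str) -> list[int]:
    # With the flags below, nvidia-smi prints one bare integer per GPU.
    return [int(x) for x in csv_out.split()]

def gpu_utilization() -> list[int]:
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=utilization.gpu",
         "--format=csv,noheader,nounits"],
        text=True)
    return parse_utilization(out)

# Call gpu_utilization() once a second while a WU runs and eyeball the numbers.
```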
ID: 6615
mikey
Joined: 1 Jan 14
Posts: 302
Credit: 32,660,968
RAC: 2,065
Message 6626 - Posted: 29 Apr 2020, 12:41:04 UTC - in response to Message 6615.  

> I came here because I've been running a few batches of Asteroids@home tasks on 2080 Tis and feel like I get way too little credit for it.
> Granted, about 80 WUs still need to be validated, but I feel we're not getting an equivalent PPD on Asteroids@home versus most other projects.

Think 'CreditNew' for the credits here: this project uses Seti's method of granting credit, while other projects like Collatz have their own idea of how many credits you should get for each workunit. If you just care about credits (most people don't, by the way), then crunch Collatz with your GPUs, and http://nci.goofyxgridathome.net/ with your CPUs when they have workunits. I believe those are the two highest-paying projects at the moment.
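For context on that credit scheme: BOINC's credit unit, the cobblestone, is nominally 1/200 of a day of computing at 1 GFLOPS, and CreditNew tries to grant credit on that basis across projects. A rough sketch of the nominal formula (my own illustration, not the actual CreditNew code, which also normalizes against other hosts):

```python
# Nominal BOINC credit: 200 cobblestones per day of sustained 1 GFLOPS.
GFLOPS = 1e9
SECONDS_PER_DAY = 86_400

def nominal_credit(flops_done: float) -> float:
    return 200.0 * flops_done / (GFLOPS * SECONDS_PER_DAY)

# A 13.45 TFLOPS card at 100% of peak for a day (never achieved in practice):
print(f"{nominal_credit(13.45e12 * SECONDS_PER_DAY):,.0f} credits/day")  # 2,690,000
```

Real applications reach only a fraction of peak, and the normalization pulls numbers further from this ideal, which is part of why observed PPD varies so much between projects.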
ID: 6626
ProDigit

Joined: 8 Nov 19
Posts: 15
Credit: 3,200,160
RAC: 0
Message 6632 - Posted: 30 Apr 2020, 10:46:18 UTC - in response to Message 6626.  

> Think 'CreditNew' for the credits here: this project uses Seti's method of granting credit, while other projects like Collatz have their own idea of how many credits you should get for each workunit.

I'm not looking for the maximum amount of points.
But Asteroids@home has by far the lowest PPD per hour of any GPU project!
On average I get about the same PPD as my CPU crunching: on the order of 200k PPD, versus 1M to 2M PPD on other GPU projects. (Collatz gets me 80M PPD; I'm not saying we should follow that, but couldn't we at least be closer to the other projects?)
ID: 6632
mikey
Joined: 1 Jan 14
Posts: 302
Credit: 32,660,968
RAC: 2,065
Message 6633 - Posted: 30 Apr 2020, 17:04:30 UTC - in response to Message 6632.  

> I'm not looking for the maximum amount of points.
> But Asteroids@home has by far the lowest PPD per hour of any GPU project!

They ARE in line with the other projects that use the 'CreditNew' way of doing credits. To change things you'd need to talk to an admin, and admins don't hang out here getting into discussions with us crunchers very often... they have a project to run.
ID: 6633
tito

Joined: 22 Jul 13
Posts: 5
Credit: 24,589,805
RAC: 57,140
Message 6638 - Posted: 1 May 2020, 7:35:08 UTC
Re: Collatz points on different GPUs:
Turing-class cards are way faster than Pascal on Collatz because of their half-precision capabilities:
https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units#GeForce_16_series
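To put rough numbers on that, using the commonly cited FP16 rate multipliers (consumer Pascal runs FP16 at 1/64 of its FP32 rate, Turing at 2x FP32; treat these as approximations, not spec-sheet values):

```python
# Approximate FP16 throughput derived from FP32 peaks and the
# commonly cited per-architecture FP16 rate multipliers.
gtx1060_fp32 = 4.4      # TFLOPS, FP32 peak
rtx2080ti_fp32 = 13.45  # TFLOPS, FP32 peak

gtx1060_fp16 = gtx1060_fp32 / 64     # consumer Pascal: FP16 at 1/64 rate
rtx2080ti_fp16 = rtx2080ti_fp32 * 2  # Turing: FP16 at 2x rate

print(f"FP16 peak ratio: {rtx2080ti_fp16 / gtx1060_fp16:.0f}x")
```

If a workload can lean on FP16, the gap between these two cards is in the hundreds, not the ~3x that FP32 peaks suggest.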
ID: 6638
ProDigit

Joined: 8 Nov 19
Posts: 15
Credit: 3,200,160
RAC: 0
Message 6647 - Posted: 4 May 2020, 0:50:18 UTC - in response to Message 6638.  
> Turing-class cards are way faster than Pascal on Collatz because of their half-precision capabilities.

Yeah, I think they do half-precision math and count the credit as if it were single- or double-precision FLOPS, or something like that...
ID: 6647
