Posts by mikey

21) (Message 6634)
Posted 30 Apr 2020 by Profile mikey
Post:
An update. I went into Nvidia GeForce Experience to reset my video recording before going in-game and it couldn’t even access information about the card now.


My 1660Ti is working here but is not using the standard Nvidia drivers:

296191943 126909715 30 Apr 2020, 1:59:54 UTC 30 Apr 2020, 8:17:09 UTC Completed and validated 1,369.96 3.72 480.00 Period Search Application v102.14 (cuda102_win10)

No idea what happened or why but it does work!!
22) (Message 6633)
Posted 30 Apr 2020 by Profile mikey
Post:

I came here because I've been running a few batches of asteroids on 2080Tis and feel like I get WAY too little credit for it.
Granted, about 80 WUs still need to be validated, but I feel we're not getting an equivalent PPD score on asteroids vs most other projects.


Think 'credit new' for the credits here: they use the Seti idea of granting credits, while other Projects like Collatz have their own idea of how many credits you should get for each workunit. If you just care about credits (most people don't, BTW), then for gpu's crunch for Collatz and for cpu's crunch for http://nci.goofyxgridathome.net/ when they have workunits. I believe those are the two highest-paying Projects at the moment.

I'm not looking for the max amount of points.
But asteroids is by far the lowest PPD per hour of any GPU project!
On average I get the same PPD as my CPU crunching (in the region of 200k PPD, vs 1M to 2M PPD on other projects). (Collatz gets me 80M PPD, and I'm not saying we should follow that, but at least be more equal to the other projects?)


They ARE like other Projects in using the 'credit new' way of doing credits. To change things you need to talk to an Admin, and they don't hang out here getting into discussions with us crunchers very often...they have a Project to run.
23) (Message 6630)
Posted 29 Apr 2020 by Profile mikey
Post:
I'm running asteroids@home on a Raspberry Pi 1B+ and currently have 165 work units that have returned an "error while computing." The Task detail shows:

Stderr output

<core_client_version>7.14.2</core_client_version>
<![CDATA[
<message>
process got signal 4</message>
<stderr_txt>

</stderr_txt>
]]>

This computer was processing work units satisfactorily until today. How do I correct the problem?


The Boinc error codes say this means:
https://boinc.mundayweb.com/wiki/index.php?title=Process_got_signal_4

"This process error can happen when your BOINC version is outdated. Try to update to a more up-to-date version. The problem can also be attributed to a large amount of disk errors."

In your case the Boinc version is up to date; there is a newer one, but maybe not for the Pi.
24) (Message 6627)
Posted 29 Apr 2020 by Profile mikey
Post:
Wish this project would STOP sending me notices to accept CPU workunits. I do GPU workunits ONLY!!!! STOP IT!!!!!!!


It's NOT the Project, it's hard coded into the Boinc software to remind us that other projects and apps want our resources too. Just be glad they only let one Project 'remind us', not the 20+ Projects and apps a lot of people could be attached to.
25) (Message 6626)
Posted 29 Apr 2020 by Profile mikey
Post:

I came here because I've been running a few batches of asteroids on 2080Tis and feel like I get WAY too little credit for it.
Granted, about 80 WUs still need to be validated, but I feel we're not getting an equivalent PPD score on asteroids vs most other projects.


Think 'credit new' for the credits here: they use the Seti idea of granting credits, while other Projects like Collatz have their own idea of how many credits you should get for each workunit. If you just care about credits (most people don't, BTW), then for gpu's crunch for Collatz and for cpu's crunch for http://nci.goofyxgridathome.net/ when they have workunits. I believe those are the two highest-paying Projects at the moment.
26) (Message 6623)
Posted 28 Apr 2020 by Profile mikey
Post:
I noticed something else that looks strange.

I am running the new application on a computer with 2 graphics cards (GTX 1070 Ti and GTX 2070).

In BOINC manager it shows 2 GPU tasks running (one on device0 and one on device1 as expected), but when I look at the GPU load using GPU-Z, I see that the GPU Load is 98% for the GTX 2070, but always 0% for the GTX 1070 Ti.

The workunits are completing successfully, but it doesn't look like the GTX 1070 Ti is actually being used.

I am using Windows 10 Professional with latest updates, BOINC Manager 7.14.2 and NVidia driver v445.75

I have tried resetting the project, but this didn't help.

Edit:
I just tried it on another computer with 2 GPUs (two identical GTX 760 cards) and the same thing is happening.
Even though BOINC manager shows 2 GPU tasks running, I only see load on 1 GPU.


Same problem here with GTX 970 and 1660 Super.
I had the same issue with 2 Radeon at Moo! Wrapper some month ago and never found a solution.


Plug a screen into the 2nd gpu when you boot the machine, or Windows can disable a gpu that's 'not being used'. There is a plug you can make to simulate a monitor if you'd like:
https://www.geeks3d.com/20091230/vga-hack-how-to-make-a-vga-dummy-plug/
27) (Message 6622)
Posted 28 Apr 2020 by Profile mikey
Post:
I noticed something else that looks strange.

I am running the new application on a computer with 2 graphics cards (GTX 1070 Ti and GTX 2070).

In BOINC manager it shows 2 GPU tasks running (one on device0 and one on device1 as expected), but when I look at the GPU load using GPU-Z, I see that the GPU Load is 98% for the GTX 2070, but always 0% for the GTX 1070 Ti.

The workunits are completing successfully, but it doesn't look like the GTX 1070 Ti is actually being used.

I am using Windows 10 Professional with latest updates, BOINC Manager 7.14.2 and NVidia driver v445.75

I have tried resetting the project, but this didn't help.

Edit:
I just tried it on another computer with 2 GPUs (two identical GTX 760 cards) and the same thing is happening.
Even though BOINC manager shows 2 GPU tasks running, I only see load on 1 GPU.


Do you have a cc_config.xml file with <use_all_gpus> in it?
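
For reference, if you don't have one: a minimal cc_config.xml goes in the BOINC data directory and tells the client to use every GPU, not just the most capable one. This is a sketch using the standard BOINC client option (restart the client after creating it):

```xml
<cc_config>
  <options>
    <!-- use every GPU in the machine, not only the best one -->
    <use_all_gpus>1</use_all_gpus>
  </options>
</cc_config>
```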
28) (Message 6606)
Posted 26 Apr 2020 by Profile mikey
Post:
Asteroids@home: Notice from BOINC
Your settings do not allow fetching tasks for CPU. To fix this, you can change Project Preferences on the project's web site.
4/24/2020 12:19:01 PM

This very annoying BOINC message keeps popping up in my notices section on my BOINC client. Since I DO NOT crunch CPU workunits, is there a way to keep the project from sending me this notice every day?


This is hard coded into the Boinc you downloaded and is NOT server side, to remind you that other Projects want your help too. Be thankful it only shows one Project, not the 2 dozen some people are attached to!!
29) (Message 6605)
Posted 26 Apr 2020 by Profile mikey
Post:
Actually, why can't we see pictures on the forum? ?


My guess would be because they are using an older version of the forum software and it doesn't support them. Or it could be because you used a png extension and it doesn't support those; your signature is a gif file and it shows.

Try it this way instead:

https://i.imgur.com/4brOg6t.png
30) (Message 6590)
Posted 22 Apr 2020 by Profile mikey
Post:

BTW, has anybody been working with app_config setting with their GPUs?
I have several, so would like to know what is the setting for:
- GT 1030
- GT 730
- GTX 1050Ti
- GTX 1650OC

Thanks


The only way is to try it on your system and see what happens, i.e. try 2 wu's at once and if it is slower or crashes then go back to one wu at a time.

The Forums are visited by less than 10% of the total users, so the chance of finding someone with your specs is small. Most people just sign up and crunch; if it doesn't work they try another Project, and if that too doesn't work most give up and stop crunching.
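
If you do want to experiment, a starting point might be an app_config.xml like the one below, placed in the project's folder under the BOINC data directory. This is a sketch only: it assumes the app name is period_search (the name this project's GPU app uses) and that your card has enough memory to hold two tasks; adjust the numbers for each of your cards:

```xml
<app_config>
  <app>
    <name>period_search</name>
    <gpu_versions>
      <!-- 0.5 GPUs per task means two tasks share one GPU -->
      <gpu_usage>0.5</gpu_usage>
      <cpu_usage>0.25</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```

After saving it, use Options > Read config files in the BOINC Manager so the client picks it up without a restart.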
31) (Message 6589)
Posted 22 Apr 2020 by Profile mikey
Post:
System configuration: Windows 10, two (2) GTX 1080 FE and two (2) Xeon E5-2697 v2

I'm having some difficulty running both GPU's optimally. Both GPU's have tasks. Only one GPU is running at > 2000 MHz for the full task, the other GPU runs one task at < 139 MHz, practically idle but does appear to be processing the task.

<app_config>
<app>
<name>period_search</name>
<gpu_versions>
<gpu_usage>1.00</gpu_usage>
<cpu_usage>0.25</cpu_usage>
</gpu_versions>
</app>
</app_config>

if I add "<max_concurrent>4</max_concurrent>"

<app_config>
<app>
<name>period_search</name>
<max_concurrent>4</max_concurrent>
<gpu_versions>
<gpu_usage>1.00</gpu_usage>
<cpu_usage>1.00</cpu_usage>
</gpu_versions>
</app>
</app_config>

all CPU's flag "waiting to run", I've never experienced this with other projects.

Would appreciate any insight.


In the 2nd one you are specifying one full cpu core for each gpu workunit you run; with 4 concurrent tasks that's 4 cpu cores, so of course your cpu workunits are 'waiting to run': those cores are tied up supporting the gpu workunits.
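
A sketch of that second file with max_concurrent kept but the original 0.25 cpu reservation restored, which should leave cores free for cpu workunits (values are a starting point to adjust, not a definitive setting):

```xml
<app_config>
  <app>
    <name>period_search</name>
    <max_concurrent>4</max_concurrent>
    <gpu_versions>
      <gpu_usage>1.00</gpu_usage>
      <!-- reserve only a quarter core per gpu task instead of a full one -->
      <cpu_usage>0.25</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```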
32) (Message 6588)
Posted 22 Apr 2020 by Profile mikey
Post:

Superficially it looks like your math doesn't stack up. One 2080 is apparently 2 times faster than a 1060, and two of them are about 4 times faster. What did you expect ?



If I take Project Collatz as an example :

With one GPU NVidia GTX 1060 3GB : 1600 seconds for one task, 2,000,000 points per day.
With one NVidia RTX 2080 Ti : 130 seconds for one task, 19,000,000 points per day.

19000000/2000000=9.5 and 1600/130=12.3

For Collatz project, the NVidia RTX 2080 Ti is about 10 times faster than the NVidia GTX 1060.
So I'm very surprised that for Asteroids@home, it's only twice as fast ?
But maybe it's the numbers for Collatz that are abnormal !

Indeed, the theoretical power of RTX 2080 Ti is 13.45 TFlops and that of GTX 1060 is 4.4 TFlops.
13.45/4.4 ≈ 3.1.
I really don't understand why there are such differences between BOINC projects ?

But I'm not a computer professional !


Each Project has its own programmer to write the app, and due to MANY differences there's just no way to make this app as fast as that app at another Project. One difference is the amount of data being processed: Collatz is looking for the result to a math problem, and gpu's can zip thru those in no time. While Asteroids is also doing math stuff, it's much more computationally intense, so it takes much longer and is harder to optimize because of it.

The reason each Project has different programmers comes down to money and confidentiality: project A doesn't want others stealing the way they do things, so it gets an in-house person to do the programming, while project B can afford to pay someone who has done it before, so its app is more optimized. BOTH work, which is the whole point. Some projects have the money to pay someone good, while other projects only have the money to pay someone who can do it at all. Seti got help from Nvidia writing theirs, so it was highly optimized; Nvidia has not helped other projects to the same degree.
33) (Message 6557)
Posted 13 Apr 2020 by Profile mikey
Post:
Since the Server Status page doesn't show which kinds of units are available, you could be getting whatever they have that works and that has the number of tasks you want, instead of all of one kind and then some of another; there's just no way to know.


To be honest I thought the WUs are the same, just running in different apps - it is not unusual for the same WU to be computed by different apps.
Either way, the W3680 not receiving SSE3 units is a long-term issue of mine going back months. I could understand it if the project decided that SSE2 runs better on the hardware than SSE3, but how does it know if it never tries?


That's something only an Admin can answer.
34) (Message 6555)
Posted 12 Apr 2020 by Profile mikey
Post:
My W3680 only receives SSE2 units, not a single SSE3 unit. My 1245v2 used to receive AVX all the time, but now it has started receiving SSE2 as well. Is there some way to encourage the project to send the correct type of work units?


Seems like the 1245v2 was just related to the release of the new app and the server is giving me AVX now.
The W3680 on the other hand received 8 WUs of the new app, but all SSE2. Could there be an issue detecting SSE3(PNI)?


Since the Server Status page doesn't show which kinds of units are available, you could be getting whatever they have that works and that has the number of tasks you want, instead of all of one kind and then some of another; there's just no way to know.
35) (Message 6485)
Posted 23 Mar 2020 by Profile mikey
Post:
Hi mg13 [HWU],
Thank you for participating in Asteroids@home.

Creating an OpenCL application to support AMD GPUs has been on our roadmap for years, and even now there is ongoing development on it. Despite some turbulence over the years, with postponing and getting back to it, we are doing our best to make it happen. Once we have a solid PoC application we will be glad to provide it to our contributors.

Regards,

Georgi.


Perhaps since Seti is closing you can get a copy of their version to cut the process in half.
36) (Message 6460)
Posted 8 Mar 2020 by Profile mikey
Post:
Hello from the Carolinas.


Hey me too!!

I'm in North Carolina about 10 miles North of the South Carolina border.
37) (Message 6430)
Posted 20 Jan 2020 by Profile mikey
Post:
The GTX 1660 Ti would be in more people's reach. I ask that we get the app updated to support the Turing-based cards, please. Hopefully it would just be a matter of recompiling it with the latest CUDA compiler, but I bet it's not that simple.


My 1080Ti, 1060 and 760 gpu's work just fine here: about 11 minutes per wu for the 1080Ti and about 47 minutes for the 760.
38) (Message 6417)
Posted 28 Dec 2019 by Profile mikey
Post:
I will probably have to abort half of these.


It's alright. Project server will adapt to that and eventually tasks will get crunched somewhere.

It can be difficult to find a good work cache setting if you run cpu tasks from several projects at the same time. I don't know how the Boinc scheduler makes its decisions when it calculates what work it should download at a given moment. It seems that Asteroids tends to fill the cache relatively aggressively. I've had frustration with that a few times when mixing in cpu tasks from other projects. In general it can take quite a long time in that kind of mixed-cpu-work scenario until cpu tasks flow in smoothly between all the projects.


Between projects Boinc tries to balance things at a daily RAC level, based on the resource share percentage you set for each Project, but it takes time as you said.

A lot of the problem comes in when a project provides both cpu and gpu workunits and the user crunches both from the same project, as Boinc has no programming to deal with the differences at this point. Hopefully the Developers, who are all volunteers now, are working on the problem, but it's been a long time coming.
39) (Message 6416)
Posted 28 Dec 2019 by Profile mikey
Post:
What was your setting about the cache size for work units?

If the run time of the WUs increases extremely, then every project will run into timing problems.

A solution would be if the stupid Boinc Manager had some more options (e.g. a number of WUs instead of a time for the cache size, or, when a new sort of batch is detected, waiting to download many WUs until the first one is finished)


But as you stated, if the run time increases dramatically you will still run into cases where you have too many workunits. Boinc itself works out the problem, but you have to actually crunch some workunits for Boinc to learn that you can only do X number of workunits per day and get the cache size right. No, the formula still isn't perfect, but over time it does get better.
40) (Message 6330)
Posted 9 Aug 2019 by Profile mikey
Post:
I haven't thought about adding more new badges but I suppose I can add 1 or 2 more badges. I'll have it on my mind.


I think that's a great idea especially since gpu's can be used here now.

