Posts by Presrvd

1) (Message 6767)
Posted 10 Jun 2020 by Profile Presrvd
I'd expect to run into "server disk space full" error messages again within the next couple of days if this isn't resolved soon. Also, according to the server status page, the "work units awaiting validation" count appears to be WAY off. It currently shows only 3, but I alone have nearly 500 awaiting validation.

Just a heads up, folks, expect some issues in the near future....
2) (Message 6759)
Posted 29 May 2020 by Profile Presrvd
db_purge has been running all four times I checked it today. Having no WUs is not at all uncommon here; there likely won't be any for at least another day or two. Usually, when 'Tasks in Progress' drops below roughly 75,000-80,000, work generation starts back up. That's not any kind of math at all, just observation, so please treat it only as a loose point of reference....
3) (Message 6697)
Posted 19 May 2020 by Profile Presrvd

Hope a day will come when asteroids@home gets things sorted out.
or we get something like lunatics to help with that.

It will. It takes more than a few work units, but you'll notice the BOINC client will start to adjust its runtime estimates and then request work based on those estimates. The estimates for the ones in my queue are still not exact, but I'm definitely no longer getting dozens to hundreds of WUs I can't possibly process.
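For context, the client refines its estimates from completed tasks. A rough sketch of that feedback loop — the function names and smoothing constant here are illustrative assumptions, not BOINC's actual duration-correction code:

```python
# Hypothetical sketch of how a BOINC-style client could refine its runtime
# estimates from completed tasks, via exponential smoothing of the
# observed actual/estimated ratio. The real client differs in detail.

def update_correction(correction: float, estimated: float, actual: float,
                      alpha: float = 0.1) -> float:
    """Blend the observed actual/estimated ratio into the running correction."""
    observed = actual / estimated
    return (1 - alpha) * correction + alpha * observed

def corrected_estimate(correction: float, estimated: float) -> float:
    """Scale the server-supplied estimate by the learned correction."""
    return correction * estimated

# Start out trusting the server's estimates (correction = 1.0).
c = 1.0
# Tasks keep taking twice as long as estimated (1h estimated, 2h actual)...
for _ in range(30):
    c = update_correction(c, estimated=3600.0, actual=7200.0)
# ...so the corrected estimate drifts toward the real runtime,
# and the client requests correspondingly less work per fill.
print(corrected_estimate(c, 3600.0))
```

The point of the smoothing is exactly what's described above: one or two tasks barely move the estimate, but after a few dozen completions the requested queue size tracks reality.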
4) (Message 6696)
Posted 19 May 2020 by Profile Presrvd
I'm not sure which is worse: the project periodically running out of work units and going a few days with nothing, or the project periodically running out of disk space for a few days in which WUs cannot be uploaded.....

Actually, I guess I prefer the latter. At least my machines remain gainfully employed, even if they can't upload. It'll catch up eventually....
5) (Message 6585)
Posted 21 Apr 2020 by Profile Presrvd

Superficially it looks like your math doesn't stack up. One 2080 is apparently 2 times faster than a 1060, and two of them are about 4 times faster. What did you expect ?

I was going to say the same thing. Every time his wife's machine completes a single task, his machine (roughly) completes four. I'm afraid I don't see the issue...
6) (Message 6479)
Posted 20 Mar 2020 by Profile Presrvd
I have over 900 AVX tasks with computation errors in just the last 48 hours or so...
7) (Message 6437)
Posted 29 Jan 2020 by Profile Presrvd
Is it a known problem that trying to crunch with RTX Super cards gives a computation error instantly? If these cards aren't supported, shouldn't BOINC refrain from downloading the GPU work units, or am I wrong?

This is definitely a known issue. Asteroids simply doesn't support the Turing video cards. The project seems to be lacking some serious updates to become current, and it runs out of WUs on pretty much a weekly basis before replenishing a few days later. Maintaining the current status quo might be all the current administration can muster...
8) (Message 6310)
Posted 12 Jun 2019 by Profile Presrvd
"Unfortunately, running a database isn't as simple as adding more drives for space."

Actually, it is.

You add the physical drives to the DB server, then you extend the size of the database.

Over-simplification. There are a few things to consider: (1) Is there room in the server for another drive? (2) A new disk would need to be added to the RAID array. (3) The current logical volume sits on SAS drives; while you technically could add SATA to a SAS array, I don't know a single person who would ever consider it, much less recommend it. (4) You're going to take a performance hit if you use a non-matching drive (10k vs. 7,200 RPM — or Bob forbid, 5,400 — SAS vs. SATA, 64 MB vs. 128 MB or 256 MB cache, etc.). There are too many unknowns about the server being discussed to just say 'add a drive and extend the database,' as per my actual statement, but thanks for cherry-picking the comment.
9) (Message 6304)
Posted 11 Jun 2019 by Profile Presrvd

I know nothing about running this kind of project, and as such I want to be clear that I'm not criticizing anyone about the servers running out of disk space; I'm just curious as to how that actually happens. It would make sense if the servers for this project were also being used for long-term storage of the results, but if that is handled by some other organization, wouldn't freeing up server space for the project be as simple as transferring data to whatever server is storing the results long term?

Also, couldn't the total storage capacity of the project servers be increased simply by adding additional hard drives? I know really big, server-grade ones are probably expensive, but can't you also use regular consumer-grade ones? Sure, it's not ideal, since they weren't really meant to handle the same load, they have lower bandwidth, and if you wanted them cheap you would be getting smaller ones — but how expensive would it be to make an additional RAID array out of cheap, old desktop drives of around 250 GB each? The server status page says that there are two arrays in use right now, both using RAID 5: one made from eight 600 GB drives, the other from three 600 GB drives. If I understand how RAID 5 works, that would give a total storage capacity of 4200 GB, right? Say you get five 250 GB consumer-level drives for cheap; in RAID 5 that would give you an extra 1000 GB of space. And if you wanted greater redundancy in the event of a failure, since they are all old, consumer-grade drives, just tell it to use two drives instead of one for redundancy, at the cost of 250 GB of storage space.

If any of that is even remotely viable (and again, I am by no means an expert on this stuff), would it potentially be better to 'donate' by mailing a couple of old hard drives rather than a $20 donation? Again, I don't want to tell anyone how to do their job, or imply that I could do it better (because I really can't); I'm just curious about why things are done the way they are done.

Unfortunately, running a database isn't as simple as adding more drives for space. Imagine you have a 4 GB mp4 file and two 2 GB thumb drives. Yes, you can split an mp4, and yes, you can split a database, but that doesn't necessarily make it wise to do so. Mixing drives is a terribly bad idea, though, especially used ones. One also has to consider that perhaps there are no more HDD slots on their server, and connecting drives via USB just isn't an option: there's no way the disks could handle the I/O over that channel for an internet-facing database with potentially 250k+ computers attempting access. It's just one of those things where you don't want to use 128 shot glasses when you really need a gallon jug... Hope that helps.
10) (Message 6268)
Posted 5 May 2019 by Profile Presrvd
My machines have been trying since Friday, and not a single work unit has been downloaded. Log says "No Tasks Available".
11) (Message 4771)
Posted 20 Jan 2016 by Profile Presrvd
I take a look at the running tasks daily and find the next task in the queue. I have decided, for example, that the next task in line must have a deadline NOT BEFORE tomorrow evening. And the same procedure always... every day.

If there are tasks with a deadline earlier than tomorrow evening, I'll cancel enough of them until my requirement is met. Then I let the scheduler download new tasks, and those will be placed at the end of the queue.

That way I can leave the computer, relax, and sleep in, knowing there won't be any tasks missing the deadline as long as I manage to check on the host again before the next evening. Sometimes I might even change the rule to something like "next deadline at least two days ahead."

If your tasks are on a knife edge at the moment and some of them are ending up late, just cancel a ton of them in cold blood (the tasks for the next five days, for example). Those cancelled tasks will be reborn somewhere else.
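The routine quoted above boils down to a simple filter over the queue: pick a cutoff, cancel everything due before it. A minimal sketch, with a hypothetical task representation (the real work would go through the BOINC client's task list):

```python
# Sketch of the quoted "cancel anything due before tomorrow evening" rule.
# Tasks are represented only by their deadlines here; the BOINC client's
# actual task records carry far more state.
from datetime import datetime, timedelta

def tasks_to_cancel(deadlines: list, cutoff: datetime) -> list:
    """Return the queued deadlines that fall before the chosen cutoff."""
    return [d for d in deadlines if d < cutoff]

now = datetime(2016, 1, 20, 9, 0)
cutoff = now + timedelta(days=1)  # "deadline NOT BEFORE tomorrow evening"
queue = [now + timedelta(hours=h) for h in (6, 20, 30, 48)]
doomed = tasks_to_cancel(queue, cutoff)
print(len(doomed))  # the 6-hour and 20-hour tasks violate the rule -> 2
```

Widening the rule to "at least two days ahead" is just a larger `timedelta` on the cutoff.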

No offense, but that is entirely too much babysitting for me.

My late WUs now number in the hundreds, and yet there is still no reason or explanation why. From reading through the forum, it looks like this is the norm. Perhaps it is time to retire this project.
12) (Message 4766)
Posted 14 Jan 2016 by Profile Presrvd
My real issue isn't really the credits not changing even though the workload is. I'm more annoyed with the BOINC client not updating its WU processing times. I'm now starting to lose entire WUs (which just means wasted processor time and electricity) because my BOINC client is still pulling down WUs based on the old estimated times. It's almost as if Asteroids@home has times that are propagating in place of my calculated times. In the last two days, I've lost 30 WUs and over 0.5 million processing seconds. That, and the severe lack of communication on this project, are frustrating.
13) (Message 4758)
Posted 6 Jan 2016 by Profile Presrvd
Looks like someone kick-started the validator. My queue of WUs is processing at a decent speed so far. Now, if they could only explain why the WUs are taking so much longer to actually compute....
14) (Message 4749)
Posted 5 Jan 2016 by Profile Presrvd
I have the same thing going on with my machines. Seems pretty recent, though. Looks like my machines started doubling their WU times yesterday...
15) (Message 4589)
Posted 6 Aug 2015 by Profile Presrvd
Personally, I'm of the opinion that if you miss the deadline, the server should tell you "sorry, I'm not accepting that," rather than shafting the 3rd person.

I think I like this idea better as well... Seems to me that once the 3rd WU is generated, the 2nd should just be cancelled...

Thank you for filling in the informational gap between my ears. =)
16) (Message 4580)
Posted 30 Jul 2015 by Profile Presrvd
I'm sure these questions have been answered before, but the search feature doesn't seem to be finding the answers very well, so hopefully someone can enlighten me.

Why are there so many WUs that error out with "Cancelled by Server" status? I've had about a dozen in the last week. Yeah, I know that's not really a lot considering, but I am a curious people...

I'm assuming that the WUs are crunched together with a buddy cruncher, which is possibly why I have over 200 tasks in the Pending Validation queue, but I would like to confirm that, if possible, as I've got some WUs that have been pending validation since 19 July...

Thanks for any assistance, and I apologize for bringing this up for quite possibly the billionth time.