Unfair



Steve Gaber

Joined: 7 Mar 14
Posts: 65
Credit: 6,170,215
RAC: 1,815
Message 6335 - Posted: 24 Aug 2019, 5:26:36 UTC
After a period of only small batches of work coming from Asteroids, the project just downloaded 73 tasks to my computer, all with the same deadline - September 3.

There's no way I can meet this deadline unless I cease everything else the computer is doing, including two other projects and some email, and maybe not even then.
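
A rough back-of-the-envelope check (the task runtime, core count, and duty cycle here are guesses, not measurements from this machine):

# Back-of-the-envelope check: can 73 tasks with one shared deadline fit?
# Every figure below is a guess, not a measurement.

tasks = 73                  # the batch that just arrived
hours_per_task = 6.0        # assumed average Asteroids task runtime
cores = 4                   # cores assumed available to the project
days_to_deadline = 10       # 24 Aug -> 3 Sep
duty_cycle = 0.5            # fraction of each day spent on this project

hours_needed = tasks * hours_per_task / cores
hours_available = days_to_deadline * 24 * duty_cycle
print(f"need {hours_needed:.0f} h, have {hours_available:.0f} h")
# need 110 h, have 120 h: it only fits if almost nothing else runs.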

I will probably have to abort half of these.

It just ain't fair.

Steven Gaber
Oldsmar, FL
ID: 6335
Richie

Joined: 25 Jul 14
Posts: 64
Credit: 100,582,080
RAC: 0
Message 6336 - Posted: 24 Aug 2019, 13:17:15 UTC - in response to Message 6335.  
I will probably have to abort half of these.


It's alright. Project server will adapt to that and eventually tasks will get crunched somewhere.

It can be difficult to find a good work cache setting if you run CPU tasks from several projects at the same time. I don't know exactly how the Boinc scheduler decides how much work to download at any given moment, but Asteroids seems to fill the cache relatively aggressively. I've been frustrated by that a few times when mixing in CPU tasks from other projects. In that kind of mixed-CPU-work scenario it can take quite a long time before tasks from all the projects flow in smoothly.
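
If you want to pin the cache down while experimenting, the work buffer can be set per host in global_prefs_override.xml in the BOINC data directory. The values below are only a conservative example, not a recommendation:

<!-- global_prefs_override.xml, placed in the BOINC data directory.
     Overrides the website preferences on this host only.
     These numbers are just a conservative example. -->
<global_preferences>
    <work_buf_min_days>0.1</work_buf_min_days>             <!-- "store at least" -->
    <work_buf_additional_days>0.25</work_buf_additional_days> <!-- "store up to additional" -->
</global_preferences>

After saving the file, the Manager's "Read local prefs file" menu item (or boinccmd --read_global_prefs_override) should apply it without restarting the client.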
ID: 6336
Steve Gaber

Joined: 7 Mar 14
Posts: 65
Credit: 6,170,215
RAC: 1,815
Message 6337 - Posted: 24 Aug 2019, 16:45:19 UTC - in response to Message 6336.  
"It's alright. Project server will adapt to that and eventually tasks will get crunched somewhere."

Richie:
Thanks for the reply and the commiseration.
Steve Gaber
Oldsmar, FL
ID: 6337
magiceye04

Joined: 14 May 13
Posts: 7
Credit: 13,360,707
RAC: 29,466
Message 6392 - Posted: 19 Oct 2019, 7:08:42 UTC
What was your cache-size setting for work units?

If the run time of the WUs increases dramatically, then every project will run into timing problems.
But it's absolutely no problem to abort some WUs - they will just be sent to other people.

A solution would be for the stupid Boinc Manager to offer some more options (e.g. expressing the cache size as a number of WUs instead of a time, or, when a new sort of batch is detected, waiting until the first WU finishes before downloading many more).
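
Part of this already exists on the client side, though it caps running tasks rather than downloads. An app_config.xml in the project's folder can do it (the folder name and the value 2 below are just examples, check your own setup):

<!-- app_config.xml, placed in this project's folder under the BOINC data
     directory (for Asteroids probably projects/asteroidsathome.net_boinc,
     but check your own folder name). It caps how many tasks of this
     project run at the same time; newer clients are supposed to factor
     the cap into work fetch as well. The value 2 is only an example. -->
<app_config>
    <project_max_concurrent>2</project_max_concurrent>
</app_config>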
ID: 6392
wolfman1360

Joined: 17 Feb 17
Posts: 13
Credit: 44,071,565
RAC: 0
Message 6393 - Posted: 20 Oct 2019, 5:25:05 UTC - in response to Message 6392.  
What was your cache-size setting for work units?

If the run time of the WUs increases dramatically, then every project will run into timing problems.
But it's absolutely no problem to abort some WUs - they will just be sent to other people.

A solution would be for the stupid Boinc Manager to offer some more options (e.g. expressing the cache size as a number of WUs instead of a time, or, when a new sort of batch is detected, waiting until the first WU finishes before downloading many more).

Some projects have this in their preferences, e.g. how many WUs of a specific type are sent to each device.
I wish Asteroids were smarter about picking SSE vs. AVX tasks on supported processors. Maybe not a huge difference, but a big enough one.
ID: 6393
mikey

Joined: 1 Jan 14
Posts: 300
Credit: 32,050,819
RAC: 14,680
Message 6416 - Posted: 28 Dec 2019, 1:30:20 UTC - in response to Message 6392.  
What was your cache-size setting for work units?

If the run time of the WUs increases dramatically, then every project will run into timing problems.

A solution would be for the stupid Boinc Manager to offer some more options (e.g. expressing the cache size as a number of WUs instead of a time, or, when a new sort of batch is detected, waiting until the first WU finishes before downloading many more).


But as you stated, if the run time increases dramatically you will still run into cases where you have too many workunits. Boinc itself works the problem out, but you have to actually crunch some workunits before Boinc knows that you can only do X number of workunits per day and can get the cache size right. No, the formula still isn't perfect, but over time it does get better.
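
To illustrate the kind of adaptation I mean (this is a toy model, not Boinc's actual estimator, which uses a duration correction factor and per-app estimates):

# Toy model: blend each finished task's actual runtime into a running
# estimate. This only shows why a few tasks must be crunched before the
# cache setting translates into a sensible number of workunits.

def update_estimate(estimate_h, actual_h, weight=0.3):
    """Exponential moving average of task runtime, in hours."""
    return (1 - weight) * estimate_h + weight * actual_h

estimate = 2.0                        # initial guess from the server
for actual in (6.1, 5.8, 6.3, 6.0):   # what the host really takes
    estimate = update_estimate(estimate, actual)
    print(f"estimate is now {estimate:.2f} h")
# The estimate climbs toward ~6 h; only then does "store N days of work"
# correspond to the right number of workunits.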
ID: 6416
mikey

Joined: 1 Jan 14
Posts: 300
Credit: 32,050,819
RAC: 14,680
Message 6417 - Posted: 28 Dec 2019, 1:37:07 UTC - in response to Message 6336.  

Last modified: 28 Dec 2019, 1:38:13 UTC
I will probably have to abort half of these.


It's alright. Project server will adapt to that and eventually tasks will get crunched somewhere.

It can be difficult to find a good work cache setting if you run CPU tasks from several projects at the same time. I don't know exactly how the Boinc scheduler decides how much work to download at any given moment, but Asteroids seems to fill the cache relatively aggressively. I've been frustrated by that a few times when mixing in CPU tasks from other projects. In that kind of mixed-CPU-work scenario it can take quite a long time before tasks from all the projects flow in smoothly.


Between projects, Boinc tries to balance things on a daily level based on the resource-share percentage you set for each project, but it takes time, as you said.
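
Here's a toy sketch of that balancing idea (it is NOT the client's real accounting, which tracks recent estimated credit; the equal shares are just an example):

# Each project accrues "debt" in proportion to its share, and the
# most-owed project gets the next hour of CPU.

shares = {"Asteroids": 100, "SETI": 100, "Rosetta": 100}
debt = {p: 0.0 for p in shares}
total = sum(shares.values())

for hour in range(6):
    for p in shares:
        debt[p] += shares[p] / total   # everyone earns debt by share
    runner = max(debt, key=debt.get)   # run the most-owed project
    debt[runner] -= 1.0                # one hour of CPU pays it down
    print(f"hour {hour}: ran {runner}")
# Over many hours each project gets ~1/3 of the time, but any single day
# can look lopsided, which is what you actually see.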

A lot of the problem comes in when a project provides both cpu and gpu workunits and the user crunches both from the same project, as Boinc has no programming to deal with the differences at this point. Hopefully the developers, who are all volunteers now, are working on the problem, but it's been a long time coming.
ID: 6417
JohnMD

Joined: 7 Apr 14
Posts: 18
Credit: 5,380,088
RAC: 308
Message 6419 - Posted: 29 Dec 2019, 0:28:20 UTC - in response to Message 6337.  

Last modified: 29 Dec 2019, 0:52:59 UTC
"It's alright. Project server will adapt to that and eventually tasks will get crunched somewhere."

Richie:
Thanks for the reply and the commiseration.
Steve Gaber
Oldsmar, FL

It's not alright - the project has to wait for an eventual time-out before it can resend. This delays their research.

It is far more relevant to ask whether Steve Gaber runs 24/7.
If not, how can one expect a scheduler to guess time scales?
Otherwise, the scheduler takes all projects' tasks' deadlines into consideration.
Estimates for new (sub-)projects can be far off, but that's true of all new activity.
And who leaves their PCs unattended with a 10-day buffer?
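
A toy version of the deadline side of that decision (the real client runs a much fuller simulation, and these task numbers are invented):

# Toy earliest-deadline-first feasibility check.

tasks = [(6.0, 240.0)] * 10 + [(3.0, 48.0)] * 4   # (hours, deadline in hours)
tasks.sort(key=lambda t: t[1])                    # earliest deadline first

clock = 0.0
for hours, deadline in tasks:
    clock += hours                    # one core, crunching 24/7
    if clock > deadline:
        print(f"a task would miss its {deadline:.0f}-hour deadline")
        break
else:
    print("everything fits, assuming 24/7 crunching")
# If the host is NOT on 24/7, divide its speed by the duty cycle and the
# same queue can become infeasible - which is exactly the point above.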
ID: 6419
Steve Gaber

Joined: 7 Mar 14
Posts: 65
Credit: 6,170,215
RAC: 1,815
Message 6422 - Posted: 4 Jan 2020, 18:43:25 UTC - in response to Message 6419.  
"It's alright. Project server will adapt to that and eventually tasks will get crunched somewhere."

Richie:
Thanks for the reply and the commiseration.
Steve Gaber
Oldsmar, FL

It's not alright - the project has to wait for an eventual time-out before it can resend. This delays their research.

It is far more relevant to ask whether Steve Gaber runs 24/7.

This computer HAS run Asteroids@Home, as well as SETI@Home and Rosetta, 24/7 for the past several years. But Asteroids has been erratic in providing work -- sometimes none for weeks at a time, and occasionally 73 tasks at once. It is less reliable than the others.
ID: 6422
Steve Gaber

Joined: 7 Mar 14
Posts: 65
Credit: 6,170,215
RAC: 1,815
Message 6423 - Posted: 4 Jan 2020, 18:52:05 UTC - in response to Message 6422.  
Right now, the server status link shows ZERO tasks ready to send.
Tasks ready to send: 0
Tasks in progress: 238,974
Workunits waiting for validation: 2
Workunits waiting for assimilation: 1
Workunits waiting for file deletion: 0
Tasks waiting for file deletion: 0
Transitioner backlog (hours): 0.00

Is that because the project is run by one guy who has other jobs? Are the project's servers and other gear not up to the job? Does the project get any administrative or technical support from the university? Do administrators care about whether the project succeeds or not?
ID: 6423
hericks

Joined: 2 Jan 20
Posts: 1
Credit: 3,936,480
RAC: 0
Message 6424 - Posted: 13 Jan 2020, 20:28:59 UTC - in response to Message 6423.  
Hi,

some projects have, and always will have, an infinite amount of work easily available for chasing mathematical unicorns. Sending over a range of numbers and a simple algorithm is easy.

For those projects there is no up-front data reduction of the kind that cannot be farmed out to client computers, because farming it out would mean sending over terabytes of FITS images from telescopes, or constantly querying large databases of stars or asteroids.
Even though the project is so successful at drawing computing power for the final steps, I can imagine that the real work lies beforehand. I have every respect for the team running it. Sending out such small workunits (memory-wise) must have required a great investment in the computing logic.

Cheers
ID: 6424