Unfair
Joined: 7 Mar 14 | Posts: 80 | Credit: 6,922,087 | RAC: 1,509
After a period of only small batches of work coming from Asteroids, the project just downloaded 73 tasks to my computer, all with the same deadline - September 3. There's no way I can meet this deadline unless I cease everything else the computer is doing, including two other projects and some email, and maybe not even then. I will probably have to abort half of these. It just ain't fair.

Steven Gaber
Oldsmar, FL
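A rough back-of-the-envelope check makes the problem concrete. The numbers below are hypothetical (the post gives no task runtimes or core count), but they show how easily 73 tasks with a shared deadline can exceed what a machine split between three projects can deliver:

```python
# Back-of-the-envelope check with assumed numbers: can 73 tasks with a
# shared deadline fit alongside two other CPU projects?
tasks = 73
hours_per_task = 8.0      # assumed average CPU time per Asteroids task
cores = 4                 # assumed number of CPU cores
asteroids_share = 1 / 3   # assumed: CPU time split evenly across three projects
days_to_deadline = 14     # assumed days left until the common deadline

needed = tasks * hours_per_task                              # core-hours required
available = cores * 24 * days_to_deadline * asteroids_share  # core-hours on offer

print(f"needed:    {needed:.0f} core-hours")
print(f"available: {available:.0f} core-hours")
print("deadline will be missed" if needed > available else "should finish in time")
```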
Joined: 25 Jul 14 | Posts: 64 | Credit: 100,582,080 | RAC: 0
"I will probably have to abort half of these."

It's alright. The project server will adapt to that, and eventually the tasks will get crunched somewhere. It can be difficult to find a good work cache setting if you run CPU tasks from several projects at the same time. I don't know exactly how the Boinc scheduler decides what work to download at any given moment, but Asteroids seems to fill the cache fairly aggressively; I've been frustrated by that a few times when mixing in CPU tasks from other projects. In general, in that kind of mixed CPU-work scenario, it can take quite a long time before tasks from all the projects flow in smoothly.
Joined: 14 May 13 | Posts: 7 | Credit: 14,713,170 | RAC: 2,717
What is your cache size setting for work units? If the run time of the WUs increases dramatically, then every project will run into timing problems. But it's absolutely no problem to abort some WUs - they will then be sent to other people. A solution would be for the stupid Boinc Manager to offer a few more options (e.g. a number of WUs instead of a time span for the cache size, or, when a new kind of batch is detected, waiting to download many WUs until the first one has finished).
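To illustrate why a time-based cache setting misbehaves when runtime estimates are off, here is a minimal sketch (not BOINC's actual work-fetch code, and the numbers are made up) of how a "store N days of work" buffer turns into a task count:

```python
def tasks_fetched(buffer_days: float, cores: int, est_hours_per_task: float) -> int:
    """Roughly how many tasks a days-based work buffer requests.

    Simplified model, not BOINC's real work-fetch logic: the client asks
    for enough work to keep every core busy for `buffer_days`.
    """
    return int(buffer_days * 24 * cores / est_hours_per_task)

# With an accurate runtime estimate the buffer stays reasonable:
print(tasks_fetched(buffer_days=3, cores=4, est_hours_per_task=8))  # 36 tasks

# If a new batch runs 4x longer than the server estimated (2 h estimated
# vs. 8 h actual), the same buffer setting pulls in far too many tasks:
print(tasks_fetched(buffer_days=3, cores=4, est_hours_per_task=2))  # 144 tasks
```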
Joined: 17 Feb 17 | Posts: 13 | Credit: 44,071,565 | RAC: 0
"What is your cache size setting for work units?"

Some projects have this listed in the preferences, e.g. how many WUs of a specific type are sent to each device. I wish that Asteroids were smarter about picking SSE and AVX tasks on supported processors. Maybe not a huge difference, but a big enough one.
Joined: 1 Jan 14 | Posts: 302 | Credit: 32,739,514 | RAC: 3,509
"What is your cache size setting for work units?"

But as you stated, if the run time increases dramatically you will still run into cases where you have too many workunits. Boinc itself works out the problem, but you have to actually crunch some workunits for Boinc to learn that you can only do X workunits per day and get the cache size right. No, the formula still isn't perfect, but over time it does get better.
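The point about needing to crunch a few workunits first can be pictured as a running correction factor on the runtime estimate. The sketch below is only an illustration of that idea, not BOINC's actual algorithm, and the runtimes are hypothetical:

```python
def update_correction(correction: float, est_hours: float,
                      actual_hours: float, weight: float = 0.1) -> float:
    """Nudge a runtime correction factor toward observed reality.

    Simplified illustration, not BOINC's actual algorithm: each finished
    task pulls the correction toward actual/estimated runtime, so later
    work requests shrink (or grow) to match real daily throughput.
    """
    return (1 - weight) * correction + weight * (actual_hours / est_hours)

correction = 1.0
for _ in range(20):  # twenty workunits actually crunched
    correction = update_correction(correction, est_hours=2.0, actual_hours=8.0)
print(f"correction after 20 tasks: {correction:.2f}")  # climbs toward 4.0
```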
Joined: 1 Jan 14 | Posts: 302 | Credit: 32,739,514 | RAC: 3,509
Last modified: 28 Dec 2019, 1:38:13 UTC

"I will probably have to abort half of these."

Between projects, Boinc tries to balance work at a daily RAC level based on the resource-share percentage you have set for each project, but as you said, it takes time. A lot of the problem comes in when a project provides both CPU and GPU workunits and the user crunches both from the same project, as Boinc has no programming to deal with the differences at this point. Hopefully the developers, who are all volunteers now, are working on the problem, but it's been a long time coming.
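The resource-share balancing described here can be pictured as dividing the machine's daily core-hours in proportion to each project's share. The sketch below is a simplified illustration with hypothetical shares, not the real Boinc scheduler logic, which only approaches these proportions when averaged over days:

```python
def split_cpu_time(resource_shares: dict[str, float], core_hours: float) -> dict[str, float]:
    """Divide available CPU time between projects in proportion to resource share.

    Simplified illustration of the balancing idea, not the real BOINC
    scheduler; the real client only converges to these proportions over time.
    """
    total = sum(resource_shares.values())
    return {name: core_hours * share / total for name, share in resource_shares.items()}

# Hypothetical equal shares for the three projects mentioned in the thread,
# on a 4-core machine running 24 hours a day (96 core-hours per day):
shares = {"Asteroids@home": 100, "SETI@home": 100, "Rosetta@home": 100}
print(split_cpu_time(shares, core_hours=96))  # 32 core-hours each per day
```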
Joined: 7 Apr 14 | Posts: 18 | Credit: 5,382,829 | RAC: 156
Last modified: 29 Dec 2019, 0:52:59 UTC

"It's alright. The project server will adapt to that, and eventually the tasks will get crunched somewhere."

It's not alright - the project has to wait for an eventual time-out before it can resend, and that delays their research. It is far more relevant to ask whether Steve Gaber runs 24/7. If not, how can one expect a scheduler to guess the time scales? Otherwise, the scheduler takes the deadlines of all projects' tasks into consideration. Estimates for new (sub-)projects can be far off - but that's true of all new activity. And who leaves their PCs unattended with a 10-day buffer?
Joined: 7 Mar 14 | Posts: 80 | Credit: 6,922,087 | RAC: 1,509
"It's not alright - the project has to wait for an eventual time-out before it can resend, and that delays their research. It is far more relevant to ask whether Steve Gaber runs 24/7."

This computer DOES run Asteroids@Home, as well as SETI@Home and Rosetta, 24/7, and has for the past several years. But Asteroids has been erratic in providing work - sometimes none for weeks at a time, and occasionally 73 at once. It is less reliable than the others.
Joined: 7 Mar 14 | Posts: 80 | Credit: 6,922,087 | RAC: 1,509
Right now, the server status link shows ZERO tasks ready to send.

Tasks ready to send: 0
Tasks in progress: 238974
Workunits waiting for validation: 2
Workunits waiting for assimilation: 1
Workunits waiting for file deletion: 0
Tasks waiting for file deletion: 0
Transitioner backlog (hours): 0.00

Is that because the project is run by one guy who has other jobs? Are the project's servers and other gear not up to the job? Does the project get any administrative or technical support from the university? Do administrators care about whether the project succeeds or not?
Joined: 2 Jan 20 | Posts: 1 | Credit: 3,936,480 | RAC: 0
Hi, some projects have, and will always have, an infinite amount of work easily available for chasing mathematical unicorns. Sending over a range of numbers and a simple algorithm is easy. There is no data reduction to be done upfront - the kind you may not be able to hand off to client computers, because it would involve sending terabytes of FITS images from telescopes, or constantly querying large databases of stars or asteroids. Even though the project is so successful at drawing computing power for the final steps, I can imagine that the real work lies beforehand. I have all respect for the team running it. Sending out such small workunits (memory-wise) must have involved a great investment in the computing logic. Cheers