Posts by Dagorath

1) (Message 2843)
Posted 20 Mar 2014 by Dagorath
Post:
I want to see if my 'puter and BOINC will run more efficiently if I set it to 4 with the "on multiprocessor systems use at most 100% of the processors" setting


It won't.

It sounds like maybe you're thinking that if you have tasks from 5 projects then your computer crunches all 5 simultaneously. It doesn't. If you have a 4-core CPU and have "on multiprocessor systems use at most 100% of the processors" set, then it will crunch only 4 tasks simultaneously.
2) (Message 2842)
Posted 20 Mar 2014 by Dagorath
Post:
I dunno, never read that thread, and if it's at the Milkyway forums I likely never will (I have no use for their gong show, their skank admins and mods, and if they have a screensaver it sucks too). Maybe it helps for the mentioned cards. Maybe it applies to Milkyway only, I dunno. I would like a model/formula that applies to every GPU and every project. Why? Well, why not?
3) (Message 2833)
Posted 18 Mar 2014 by Dagorath
Post:
Ahh, there's the work, your gold star is in the mail ;-)

Yes, I see now what you meant. I misinterpreted, probably read too fast or something.

Actually there is another deficiency in the model I presented. In the model the newer card, B, uses less power than the older, cheaper card. That's rarely the case in the real world. What is true is that newer cards based on smaller lithography use less power to produce the same amount of work. Hence the need for a better model. I'm working on it, anybody else close?
4) (Message 2821)
Posted 18 Mar 2014 by Dagorath
Post:
Or perhaps it's caused by 4 to 6 hours of unannounced maintenance that has ballooned into 40 to 60 hours, or will it become 4 to 6 DAYS!


Crap happens and it ain't gonna change, so get used to it and do the smart thing on your end. Or do the dumb thing and miss deadlines and waste electricity, if that's what you want.

In case you still didn't get it.... the smart thing is to keep a small cache. Don't believe me? OK, you'll learn eventually. Or maybe you won't, whatever.
5) (Message 2819)
Posted 18 Mar 2014 by Dagorath
Post:
Probably you will this time but don't count on it every time. If you have tasks that are already beyond deadline then you're pushing your luck more than I would push it. It's up to you of course, but you might want to decrease your cache if that's what's causing it.
6) (Message 2818)
Posted 18 Mar 2014 by Dagorath
Post:
Those numbers look like they might be right but since you didn't show your work you get only half marks ;-)

It's easy to see that you are at the point (or soon will be) where, if you add the purchase price of the cheaper card to the cost of the electricity you've purchased to operate it, you've spent the equivalent of the purchase price of the more expensive card. In other words, you've spent (or soon will have spent) the money but don't own the faster and more efficient card. Furthermore, if you happened to have used a credit card to buy the GPU and paid interest, then you're losing money faster than a drunken sailor on shore leave in Dublin.

It goes something like this....

Define TCO (total cost of ownership) to be purchase price + cost of operating.
Define * to mean multiplication.

You buy an old video card for $40 and a new one for $265. Let's call the old one A and the new one B. A requires 100 watts to operate, B requires 65. You plug them both in and start crunching with both cards. At that point in time TCO_B (the TCO of B) is much higher than TCO_A. However, since A costs more to operate per hour than B, it follows that at some point in the future TCO_A will catch up to TCO_B. In other words, at some point in the future we will have the condition TCO_A = TCO_B.

Obviously, TCO_A = p_A + (oc_A * t), where p_A is A's purchase price, (oc_A * t) is the cost of operating over t amount of time, and oc_A is the operating cost of A per unit of time.

Similarly, TCO_B = p_B + (oc_B * t).

So we can write p_A + (oc_A * t) = p_B + (oc_B * t), then solve for t to get t = (p_B - p_A) / (oc_A - oc_B), the operating time at which TCO_A = TCO_B. If you continue to operate A beyond that amount of time then TCO_A will become larger than TCO_B in spite of its initially lower purchase price.

Of course there is at least one deficiency in the above model, namely that it doesn't tell us which card, A or B, is the better bang for the buck with respect to total cost per task over time. Obviously if one wanted to crunch just 1 task then A is the less expensive option. Same if we wanted to crunch just 2 tasks or maybe even 4 tasks, but what about 100 tasks or 500 tasks? When does the total cost (purchase price + electricity) of A per task become equal to the cost of B per task? Can anyone develop the model further to answer that question?
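
For anyone who wants to poke at it, here's a rough Python sketch of the model above. The prices and wattages are the A/B example, but the electricity rate and the tasks-per-hour figures are made up for illustration, so plug in your own:

# Sketch of the TCO model above. Prices and wattages are the A/B example;
# the electricity rate and tasks-per-hour figures are made-up assumptions.
p_A, p_B = 40.0, 265.0          # purchase price, dollars
w_A, w_B = 100.0, 65.0          # power draw, watts
rate = 0.12                     # electricity, dollars per kWh (assumed)
oc_A = w_A / 1000.0 * rate      # operating cost per hour of crunching
oc_B = w_B / 1000.0 * rate

# Hours of crunching at which TCO_A = TCO_B
t_even = (p_B - p_A) / (oc_A - oc_B)
print("TCO break-even after about %.0f hours (%.1f years)" % (t_even, t_even / 8766))

# Cost per task needs throughput, which the simple model ignores.
tph_A, tph_B = 1.0, 4.0         # tasks per hour, made-up assumptions
for n in (1, 2, 4, 100, 500, 25000):
    cost_A = (p_A + oc_A * n / tph_A) / n
    cost_B = (p_B + oc_B * n / tph_B) / n
    print("%6d tasks: A $%.3f per task, B $%.3f per task" % (n, cost_A, cost_B))

With those made-up numbers B becomes the cheaper card per task somewhere past 20,000 tasks; change the assumptions and the crossover moves, which is rather the point.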
7) (Message 2804)
Posted 17 Mar 2014 by Dagorath
Post:
Thanks, MarkJ! A 1 hour wait is not a problem, it's a good compromise. I'll be updating and removing the "report immediately" option from cc_config.xml soon.
8) (Message 2798)
Posted 17 Mar 2014 by Dagorath
Post:
Maintenance happens, no sense whining about it, deal with it instead.

"...these guys are serious guys, if it wasn't necessary, they wouldn't have done it.."


Agreed.

I keep an extremely small cache and put "report results immediately" in a cc_config.xml in the BOINC data directory.

I like a big "switch between applications every" setting, at least 4 hours. That way I seem to get fewer tasks sitting there with only 30 minutes remaining.


1. Which flag do you use for uploading results ASAP? And will this not mean a lot of traffic for the update server/update servers?


It's not a flag, it's an option. And I'm not talking about uploading results, I'm talking about reporting results. The uploading of the actual result file always happens as soon as the task completes. The reporting of the result is a separate event, and the BOINC client may postpone that for rather long periods of time, too long for my tastes.

If you're looking at this page from the BOINC User Manual then scroll down past the section on logging flags to the section on options.

So, since it's an option, the statement that turns it on goes between the <options> tags, like this:
<cc_config>
   <log_flags>
   </log_flags>
   <options>
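      <!-- 1 turns the option on, 0 (the default) turns it off -->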
      <report_results_immediately>0|1</report_results_immediately>
   </options>
</cc_config>
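
One thing to keep in mind: the client reads cc_config.xml at startup, so after editing it either restart BOINC or tell it to re-read the file (boinccmd --read_cc_config, or the "Read config files" item in the Manager's Advanced menu, if I remember the menu right).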


The notion that reporting results immediately is a problem goes back years, to when project servers ran on single-core CPUs, disks were slow and disk caches were small. The concern was not so much the increased network traffic as the additional database access overhead. Recording 3 results in the database requires only a wee bit more overhead than recording 1 result, so it was preferred that the BOINC client wait until at least 3 results were ready to report.

If the BOINC client were programmed to wait until there are 3 results ready and then report them, I would go along with that, but instead it can wait until there are over 20 ready. I can't accept that, and since it's my system and my electricity I run it the way I like and report them immediately. Several admins have told me it's not a problem for them, and that seems sensible given how much faster disks and CPUs are these days. If it does become a problem then the issue will get reexamined and there will eventually be code changes. That might mean the "report results immediately" option gets replaced by something like "report results when 3 are ready", or it might mean they remove any such option completely; it's hard to say what the BOINC devs might do. Not that what they do is always the final verdict anyway. If they remove the "report results immediately" option I'll just run a script that monitors the ready reports and forces an update when I want.
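
If anyone wants to roll their own, such a script can be pretty small. The sketch below is Python and makes a few assumptions: boinccmd is on the PATH, its --get_tasks output lists a "ready to report" field, and the project URL and threshold are just examples.

# Sketch of a watchdog that forces a report once enough results pile up.
# Assumes boinccmd is installed and on the PATH; the threshold, interval
# and project URL below are examples only.
import subprocess, time

PROJECT_URL = "http://asteroidsathome.net/boinc/"   # example project URL
THRESHOLD = 3                                       # force a report at this many
CHECK_EVERY = 600                                   # seconds between checks

while True:
    # Ask the running client for its task list and count the finished ones.
    out = subprocess.run(["boinccmd", "--get_tasks"],
                         capture_output=True, text=True).stdout
    ready = out.lower().count("ready to report: yes")
    if ready >= THRESHOLD:
        # A project update contacts the scheduler, which reports completed tasks.
        subprocess.run(["boinccmd", "--project", PROJECT_URL, "update"])
    time.sleep(CHECK_EVERY)

Point it at whichever project you want it to nag and adjust the threshold to taste.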

2. Does it really change anything to change the "switch between applications every" setting to a larger value? My CPUs do most jobs in 1.5 hours and the GPUs in 2.5 hours. Maybe changing the setting to 3 hours would be good for me!?


That setting pertains to all projects. I have some that run 6, 10 or even 24 hours, so 4 works for me. If the longest task you crunch from any project is 2.5 hours then 3 might be better for you. I have no proof, rational or empirical, to support my claim that a large switch time helps anything. It just seems to me that I see fewer tasks with a mere 30 minutes remaining; it might not actually be so.
9) (Message 2795)
Posted 17 Mar 2014 by Dagorath
Post:
Maintenance happens, no sense whining about it, deal with it instead. I don't like getting caught with a bunch of results waiting for upload and wondering if they'll make deadline or become wasted effort, but instead of whining at the admin over an issue I can deal with myself, I keep an extremely small cache and put "report results immediately" in a cc_config.xml in the BOINC data directory.

Also, once in a while a project is forced to cancel several thousand tasks, which unfortunately sometimes trashes results waiting to upload. That's another good reason to keep a minuscule cache and report results immediately. The longer tasks and results sit around on my system, the greater the chance something bad is going to happen to them, so I get just a few at a time, crunch 'em and send 'em on their way ASAP.

I like a big "switch between applications every" setting, at least 4 hours. That way I seem to get fewer tasks sitting there with only 30 minutes remaining. It's disappointing to see a task that ran for 5 hours and needed only 30 more minutes get canceled, or lost to a disk crash, power outage, whatever.
10) (Message 2791)
Posted 16 Mar 2014 by Dagorath
Post:
Interesting. The older cards might be the most cost effective at A@H if you consider only purchase price. If you include the cost of the electricity to operate them you might get a different picture. If you operate one long enough you'll reach a point beyond which you end up paying more per task than if you had just saved your money until you could afford a newer, more efficient model. I don't know how long it takes to reach that point, but it's a pretty simple system-of-simultaneous-equations type of problem, basically high school math. Anyone care to have a go at it? The simple scammy-hash way or Gauss-Jordan elimination?
11) (Message 2790)
Posted 16 Mar 2014 by Dagorath
Post:
That's what's referred to as micro-managing and it's not recommended because it usually creates more problems than it solves. Use the "resource share" settings in your website preferences at each of your projects to determine how much of your compute time is allocated to each project. The default is 100, so if you have 2 projects and they're both set at 100 for resource share then each will get roughly 100/200, or 50%, of your resources (see the sketch below). BOINC will take care of scheduling tasks in such a way that your resources are shared according to the shares you specify. That does not mean you will always have a task from each project in your cache; sometimes you will, sometimes you won't. But over the long term your resource shares will be honored. If you don't know where to find the resource share settings then just ask.
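
The arithmetic is just each project's share divided by the sum of all the shares. A toy Python example, with made-up share values:

# Expected long-term split for some made-up resource share settings.
shares = {"Asteroids@home": 100, "SETI@home": 100, "Einstein@Home": 50}
total = sum(shares.values())
for project, share in shares.items():
    print("%-15s %5.1f%%" % (project, 100.0 * share / total))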

If you feel you want to crunch more of one project than another then set that project's resource share higher so that it receives more resources (compute time) on your computer.

To compare BOINC to an automobile: it's not intended to be driven with your foot on the throttle and your hands on the wheel. You set it on auto-pilot and let it steer itself. It isn't perfect but it works very well. It will crunch project A for a while and ignore B, but eventually it will ignore A and crunch B for a while. And sometimes it will crunch both side-by-side. That's normal. Some people enjoy playing with BOINC and steering it manually, but eventually the thrill wears off like it does with any new toy.
12) (Message 2777)
Posted 16 Mar 2014 by Dagorath
Post:
Welcome to A@H, Steve.

Your list of tasks doesn't show any in progress, so you must have processed all the A@H tasks you had. It's not necessary to ask in the forums if there are more tasks available; just go to the Server Status page and see for yourself. I mean you're welcome to ask, but why wait for someone else to check for you when you can get the answer quicker on your own, right? The link to the Server Status page is on the home page, same as for most other projects including SETI.

If there are tasks available and you don't have any then maybe it's because BOINC isn't requesting any. If it's not requesting any then maybe it's because BOINC thinks it should crunch SETI tasks for a while in order to honor the project resource shares you've set. The Event Log will give you info regarding whether it's requesting tasks or not and why it isn't getting tasks if it is requesting them.

P.S.

Yes, that's it. I just checked your list of SETI tasks for your new AMD machine and it shows 8 tasks in progress. After you crunch a few of those you'll get more A@H tasks.
13) (Message 2774)
Posted 16 Mar 2014 by Dagorath
Post:
Not sure why you want an app_info but then I'm app_info challenged.

If it helps... I'm getting the AVX app on my Haswell system without an app_info. All I did was plug it in and turn it on. HA_Soft says it's because Linux detects AVX properly but Windows can't find its own ass with both hands and a mirror. As usual.

You will get a mix of SSE2 and AVX tasks for a while until the server detects that your system does AVX way faster than SSE2 then from that point on you'll receive only AVX.

As for crunching here with your GPUs, unless you have Titans or Teslas or something with decent DP float power, you'll find tasks run faster on Haswell than on GPU, I think. I tried my 670 here a while back and the times were not what I would call impressive, and that's because DP is crippled on the GTX 670. They have improved the GPU app at least once since then, but if I understand correctly it's still slower than AVX on anything other than Titan and Tesla, which have full (uncrippled) DP float capability.

Still, I didn't need an app_info for GPU either, just had to turn GPU on in website prefs.
14) (Message 2772)
Posted 16 Mar 2014 by Dagorath
Post:

If he has permission to install BOINC on all of them then I doubt his account's connectivity is restricted.


I'm guessing it's in their best interest to run asteroids@home given the link I posted above... Guess what that company is interested in ...


They want pizza delivered hot in 30 minutes or it's free??

Instant credit, wouldn't that be cool?!?


If'n I had my druthers there'd be no credits at all, Unca Jed.

So why don't you all chill till you get your credit... ?!?


I'm here to do science and troll the newbies; credits just make me puke. I don't even like the badges. They look like loaves of mouldy bread to me.
15) (Message 2769)
Posted 15 Mar 2014 by Dagorath
Post:
And we don't yet know if the person/persons involved have 24/7 connectivity.


I figure those 130 machines must be servers, therefore I assume 24/7 connectivity. If he has permission to install BOINC on all of them then I doubt his account's connectivity is restricted.

Even a restriction of 20 cached tasks per core would be a big help in solving the problem. If one assumes ~1 hour per task then 20 tasks per core should be enough to last from one connection window to the next. It wouldn't be perfect for everybody but the pros definitely outweigh the cons. Remember back several months ago when Kyong made a mistake on the task duration estimate and everybody got 10 times as many tasks as they should have. The kind of restriction we're talking about would have helped a lot in that situation too.
16) (Message 2764)
Posted 15 Mar 2014 by Dagorath
Post:


BTW, another thought has crept into my mind. Some of the lads get up to all sorts of shenanigans when there is a challenge going on. If you look at the OS recorded for each of those hosts it says they're Linux virtual hosts. I am beginning to wonder if some joker didn't fake 32 cores on a 4 core virtual machine and then clone that original into 130 bogus hosts. It's an entirely doable scenario and I know because I've done similar myself. Now each of those clones has downloaded a few thousand tasks for a total of over 100,000 but they're all running on just a single 4 core i5 CPU. Hmmmmmm?


Brilliant deduction! First of all that team wasn't in the challenge. Secondly I'm impressed that "fake" machines could throw up 61M credits in a week. I need to check into how that's done.


http://stats.free-dc.org/stats.php?page=userbycpid&cpid=68583442208b71aea7c8eed7bc0f4784


Hmmm. That blows a pretty big hole in my theory. It's too late now, but I did notice his huge RAC a few hours prior to posting my quack theory, then forgot all about it. For my penance I will PM Jamie and straighten him out. I hope he doesn't punch me in the eye. With that RAC he's starting to sound like a pretty tough hombre.

@Andrew:

LOL! I did clone a few virtual hosts once but not for the purpose of hoarding tasks. And as Bryan pointed out my theory doesn't hold a lot of water anyway. I hope we can all get along too and I apologize for egging you on.
17) (Message 2748)
Posted 15 Mar 2014 by Dagorath
Post:
It's been said above that a lot of projects would love to have the resources of these (in my opinion) overly large hosts. However, "IF" these hosts end up "timing out" on a large number of WUs then where's the gain to the project? It just means those work units will have to be sent out again and take longer to validate.


You're right, Tom, but you're thinking short term. After the noob gets his caches tuned properly and stops returning bad results, he'll more than make up in a few weeks, if not sooner, for the mayhem he's caused. With that many hosts under his control you can be sure he's no idiot. He'll get it sorted in short order. And our RACs will be back to normal in plenty of time.

BTW, another thought has crept into my mind. Some of the lads get up to all sorts of shenanigans when there is a challenge going on. If you look at the OS recorded for each of those hosts it says they're Linux virtual hosts. I am beginning to wonder if some joker didn't fake 32 cores on a 4 core virtual machine and then clone that original into 130 bogus hosts. It's an entirely doable scenario and I know because I've done similar myself. Now each of those clones has downloaded a few thousand tasks for a total of over 100,000 but they're all running on just a single 4 core i5 CPU. Hmmmmmm?

Why would anybody do that? Well, maybe they like seeing guys like Andy get all bent out of shape over nothing. Maybe they are on one of the teams in the challenge and they intend to win by hoarding all the tasks so that the other team can't have any, or something like that. Actually I doubt that could work, but if the guy thinks it would work then he might try it. Another motive might be that he wanted Kyong to set a limit on cacheable tasks but Kyong refused or forgot or wouldn't listen, so he's on a mission to teach Kyong a lesson.

Yes, the more I think about it the stronger the aroma of fish becomes. ROFLMAO!! Good one, Jamie Kinney, whoever you are. As well, it's a nice change from cheating on credits.
18) (Message 2747)
Posted 15 Mar 2014 by Dagorath
Post:
Aww come on now. We don't hold it against you that you talk like a whore. We know the mind controlling boogeyman computer holding your credits ransom made you talk that way.

And they don't depend on luck here, Andy, they use skill and facts. You should try it sometime.


I fail to understand how the other user's skill, facts and piracy had anything to do with the problems now existing at A@H?


I meant the project admins rely on skill and facts, not that newbie running the monster machines that are holding all the tasks. And he's no pirate. He's a kind-hearted, generous newbie who did a lot of work bringing that many machines online to crunch here. Unfortunately he seems to have relied on the default cache settings, or perhaps boosted them even higher than the defaults, and now he's holding probably over 100,000 tasks in his ~130 caches. I don't see any reason to get pissed about it; it's hilarious, IMHO.

Tell you what, Andy, the rest of us took a vote and we decided that if you ever come back then you are appointed to be the one to PM that noob and tell him to shape up. With that many hosts under his thumb he's probably a BOFH and knows some stuff, but don't you back down. Rip him a new one. We'll be here listening. Do you know who to PM?

Just remember this... we will all get our credits eventually, we just need to be patient. So don't go telling him he's a pirate because then he'll know you're a noob too and he'll blow you out of the water, OK? Just stick to the facts which are: 1) his cache is too big, 2) our results aren't validating, 3) if he doesn't smarten up he's gonna have to deal with ol' Dag. Off you go now.
19) (Message 2734)
Posted 14 Mar 2014 by Dagorath
Post:
The problem with HUGE users is that they will just go elsewhere with their resources if they can't get what they want here. There are ALSO plenty of projects that do NOT throttle the workunit load. I am guessing most projects would LOVE to have some super user come in with this guys resources and start crunching for them. This ONE guy is like adding 100 of us normal folks.


I think Gerard meant 5 cached tasks per core, not 5 per core per day. The BOINC server software has a setting for it.

With a max of 5 in the cache at any given time, any user with 24/7 connectivity would have all the tasks he can crunch. Only hosts that have limited connectivity (e.g. traveling laptops) would be affected.

Another big benefit of restricting the number of tasks in the cache is that if the project server is offline for a few days and lots of hosts are completely dry and want tasks, then fewer hosts will receive the "download errors" some hosts get when many thousands of hosts all want tasks at the same time. Each host gets a few at a time instead of 50 each, which means other thirsty hosts can squeeze in and have a quick drink too.

BTW I think this new kid is more like 1,500 of us normal guys. 50 hosts with 32 cores each = ~1,600 cores. Those must be servers so they'll be busy with other work unlike dedicated crunchers so maybe more like 1,000 normal hosts.

P.S.

Correction: Here's his list of hosts. He has 132 hosts, not 50. Each host has 32 cores. You do the math, because I get dizzy when I try to imagine that many cores in one man's control.

@God

We're going to run out of asteroids to crunch soon. Make more, please, but don't put them so close to Earth this time. Close ones are scary.
20) (Message 2726)
Posted 14 Mar 2014 by Dagorath
Post:
The "problem" of crunchers with a big cache of tasks and therefore long waiting times for validation, could easily be solved.
Just limit the number of task to a maximum of 5 per core, and the time to complete them to 5 days.
There are more projects that do that.!


+1

