Posts by Dagorath

41) (Message 2544)
Posted 18 Feb 2014 by Dagorath
Post:
Read carefully, Andrew, very carefully. Notice Kyong said androboinc is just a manager that you can use to connect to BOINC clients. Many people are not aware that there is a BOINC client as well as a BOINC manager. A BOINC manager only allows you to control a BOINC client; a manager does not run project applications, that's what the client does. Therefore, if you were thinking you might be able to use androboinc to run tasks on a phone/tablet/whatever, then no, that won't work, so stay away from it.

If, however, you want to leave BOINC client running on your computer at home while you go on a trip and you want to use your android phone/tablet/whatever to check on how that client is doing and possibly interact with it then you might try androboinc on your phone/tablet/whatever. But bear in mind neither BOINC client nor project apps would be running on the phone/tablet/whatever, only the manager (androboinc) would be running on the phone.
42) (Message 2541)
Posted 18 Feb 2014 by Dagorath
Post:
Thank you so much, for the very good explanations you gave me, and for the time you spent to write it down here. I don't know if you're right or not, but it makes sense, and all about the Cuda 32/64bits tasks is more clear to me now. ;)


You're welcome. If I'm not correct then someone will correct me, or add to my explanation if it's incomplete.
43) (Message 2535)
Posted 17 Feb 2014 by Dagorath
Post:
Precision refers to the number of bits used to represent numbers. Single precision numbers use 32 bits whereas double precision numbers use 64 bits. More bits allow a wider range of numbers to be represented, with more significant digits, but operations (addition, subtraction, multiplication, division, etc.) on double precision numbers are slower than operations on single precision numbers. In other words, it takes more time to add 2 double precision numbers than it does to add 2 single precision numbers.
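Just to illustrate the difference, here is a tiny sketch in C (my own illustration, nothing to do with the actual Asteroids code):

    /* Illustration only: single vs. double precision in C. */
    #include <stdio.h>

    int main(void)
    {
        printf("float  (single precision): %zu bits\n", sizeof(float) * 8);  /* typically 32 */
        printf("double (double precision): %zu bits\n", sizeof(double) * 8); /* typically 64 */

        /* Doubles carry more significant digits, but each operation costs more
           on hardware whose double-precision units are slow or scarce. */
        float  s = 1.0f / 3.0f;
        double d = 1.0  / 3.0;
        printf("1/3 as float : %.17g\n", s);
        printf("1/3 as double: %.17g\n", d);
        return 0;
    }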

Some projects and games use predominantly single precision calculations and they proceed very quickly. Asteroids@home uses a lot of double precision calculations, therefore its calculations proceed more slowly. Some GPUs have far more CUDA cores with which to perform double precision calculations and they can do more double precision calculations per second than a GPU with fewer CUDA cores. Your GTX 650 has relatively few CUDA cores, so it is relatively slow when a task has many millions of double precision calculations. In my previous post I was theorizing that perhaps your card's double precision power is so small that it cannot keep up with both Asteroids tasks and the demands of the display, and that the WDDM layer of Windows decides the driver has crashed because it responds so slowly. That's just a theory, it might not be correct.
44) (Message 2533)
Posted 16 Feb 2014 by Dagorath
Post:
The lines will prevent you from receiving any GPU workunits from Asteroids@home... no CUDA, no OpenCL, no NVIDIA, no ATI, nothing, beaucoup de rien :-)

Is that what you want? I am not sure what you want because I didn't completely understand your previous post.

For more details on how the cc_config.xml file works, click the Official BOINC Wiki link in my signature below, then click the Client configuration link. Note that you can exclude tasks for only NVIDIA while still allowing tasks for ATI, and vice-versa. If you have multiple cards you can exclude tasks for one of your cards but allow tasks for the other(s). There are numerous possible combinations.

Note also that this project does not have tasks for ATI at this time so in reality it would not make sense to exclude ATI tasks but it would do no harm if you did exclude ATI.

P.S.

OK, now I understand, you don't want any CUDA tasks from this project. Configuring the option in your preferences on the A@H website will prevent any CUDA tasks from being sent to you. The recommended lines in a cc_config.xml file will accomplish the same thing... no CUDA tasks... plus they will exclude (exclure) AMD tasks.
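For the record, the sort of thing I mean looks roughly like this (only a sketch, check the Client configuration wiki page for the exact syntax for your client version, and use the project URL exactly as your BOINC manager shows it):

    <cc_config>
      <options>
        <!-- skip NVIDIA/CUDA work for this one project -->
        <exclude_gpu>
          <url>http://asteroidsathome.net/boinc/</url>
          <type>nvidia</type>
        </exclude_gpu>
        <!-- or, to stop the client from using any GPU for any project: -->
        <!-- <no_gpus>1</no_gpus> -->
      </options>
    </cc_config>

Save it as cc_config.xml in the BOINC data directory, then restart the client or use the manager's option to re-read the config files.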

It is possible that your GTX 650's double-precision power is so limited that it slows your screen updates too. Perhaps the Windows Display Driver Model (WDDM) notices the slowness and produces the black screen to prevent a complete system lock-up? I'm just guessing, I don't know Windows very well.
45) (Message 2520)
Posted 13 Feb 2014 by Dagorath
Post:
Your computers are still hidden. Either you have 2 different accounts and you unhid your computers on the other account or you selected "show computers" on this account but did not click the Update Settings button.
46) (Message 2514)
Posted 12 Feb 2014 by Dagorath
Post:
I'm not sure. I renewed the thermal paste and deburred while I had the heatsink off. I also ran it through the dishwasher because there was a fine layer of shmutz on the fins that refused to come off when I hit it with the compressed air. Temps were lower after I reassembled it but I can't say for sure whether the new thermal grease, the cleaning or the deburring was responsible. I didn't notice any difference in noise level, which is to be expected I suppose since most of the noise is from the fan. And remember I'm half deaf anyway, so I would be the last one to notice a small change in noise level.
47) (Message 2512)
Posted 7 Feb 2014 by Dagorath
Post:
Yah, we get free refrigeration here :-) Well, for part of the year anyway.

That's a very high vcore. I had to boost it that high to keep it stable at that clock speed and I didn't leave it there much more than an hour. I don't normally OC, I do it only to experiment to see what the limits are and what one can do on a tight budget.

I deburred it for the same reason you polish an engine's intake manifold ports... less turbulence, better airflow, a few more horses for nothing more than the cost of a little labor.
48) (Message 2508)
Posted 7 Feb 2014 by Dagorath
Post:
Possibly because CUDA is said to be much easier to work with. OpenCL is said to be cumbersome and generally hasn't been well received by programmers, at least that's the sense I get when reading about CUDA vs. OpenCL from many different sources around the web. CUDA and the NVIDIA architecture were designed and built for each other, therefore you can expect there to be fewer compromises. OpenCL was designed to work with a number of existing architectures (otherwise it would be senseless to make it open) so there are necessarily compromises. Compromises negatively affect user friendliness and therefore adoption rates. There is also a perception that CUDA is the platform that gives the greatest performance on most massively parallel algorithms with the least effort.

I don't program with either (CUDA or OpenCL) so bear that in mind. I'm just repeating what I read from sources I think know what's going on. H.A.-Soft's actual reasons for going with CUDA/NVIDIA first may be different from the ones I've given.

Also, AMD has crippled DP on their latest cards. If they continue to do so in the future their DP advantage will disappear which decreases incentive to learn the more difficult OpenCL.
49) (Message 2507)
Posted 6 Feb 2014 by Dagorath
Post:
Q6700, 1.65V, 3.6 GHz, CPU fan voltage boosted to 14 V for more RPM, cooling fins on heatsink deburred with emery cloth, case temp ~ 10 C, air temp at cool air intake ~15 C.
50) (Message 2502)
Posted 6 Feb 2014 by Dagorath
Post:
I doubt you're the first one to think of that idea. There is probably a reason why it hasn't been implemented already, and I would imagine the reason is that nobody wants recognition for implementing a system that has even the slightest chance of a sensor malfunction reporting a crash when there hasn't been one, and injecting foam into a perfectly good airplane's fuel tanks over the Pacific.
51) (Message 2498)
Posted 4 Feb 2014 by Dagorath
Post:
Not 100% sure, but I think all disks, including SSDs, use a volatile cache/buffer. In addition, the OS itself has (or can have) a volatile cache/buffer.
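That's why server software that really cares about its data usually forces a flush down to stable storage after writing. A minimal POSIX sketch of the idea (my own illustration, not anything A@H actually runs):

    /* Push written data out of the OS's volatile buffer cache to the drive.
       The drive's own volatile cache is a separate problem, which is where
       battery-backed RAID controllers come in. */
    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    int save_record(const char *path, const char *data)
    {
        int fd = open(path, O_WRONLY | O_CREAT | O_APPEND, 0644);
        if (fd < 0)
            return -1;
        if (write(fd, data, strlen(data)) < 0 || fsync(fd) < 0) {
            close(fd);
            return -1;
        }
        return close(fd);
    }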

The better (say more expensive) hardware RAID controllers often have a volatile cache (so it's fast) backed by an on-card battery that can maintain the cache contents for a few hours, maybe more, hopefully long enough to allow the sysadmin to fix the problem, at which point the cache gets flushed to disk. That adds a layer of reliability. Not sure if A@H has such a RAID card, they are expensive.

I think maybe there are also Ethernet NICs that have a battery-maintained cache, to prevent losing data blocks that have been acknowledged but not yet saved to non-volatile storage?

A reliable server also has a UPS and software that does a graceful shutdown if the power goes off and stays off for more than x minutes. I have a UPS on one of my rigs and I am convinced that of all the factors that help make that rig stable and robust, the UPS is the biggest factor. It's a Linux machine which is pretty stable compared to Windows but some power failures mess things up anyway. The worst are the ones where the power stutters (goes on and off or else down and back up very quickly) a few times and the rig doesn't die immediately. Those are the worst! A UPS prevents all of that.
52) (Message 2497)
Posted 4 Feb 2014 by Dagorath
Post:
You're welcome.

If you're still having trouble even after upgrading to 7.2.33 then it might be that, while trying to deal with 7.2.33's .xml file content/format, 7.0.64 got confused and wrote stuff to the .xml files that 7.2.33 doesn't understand. That shouldn't happen because 7.2.33 should be backwards compatible, but ya never know. If you still get errors like "couldn't parse" or "failed to parse" then that's what happened. Again, the fix would be to reset the affected projects, and if that doesn't work, delete and re-add them.
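If you'd rather do that from a command prompt than from the manager, boinccmd can do it; something along these lines (the URL is just an example, use the one your manager shows, and the account key placeholder is obviously yours to fill in):

    boinccmd --project http://asteroidsathome.net/boinc/ reset

    # if a reset doesn't cure it, detach and re-attach:
    boinccmd --project http://asteroidsathome.net/boinc/ detach
    boinccmd --project_attach http://asteroidsathome.net/boinc/ YOUR_ACCOUNT_KEY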
53) (Message 2491)
Posted 3 Feb 2014 by Dagorath
Post:
I notice in the Boinc "Notices" that Seti has released GPU support for Intel GPUs that support OpenCL 1.2. According to Intel,

http://www.intel.com/support/graphics/sb/CS-033757.htm

The 3rd Generation HD Graphics 2500 supports OpenCl 1.2, so that's Ivy Bridge. If Seti can support OpenCl 1.2 apps on Ivy Bridge, why can't Asteroids?


The topic of this thread is AVX. OpenCL has nothing to do with AVX. Try one of the threads related to GPU. And what makes you think Asteroids can't support OpenCL? Hmm??? That's all been explained already in threads related to the topic.
54) (Message 2487)
Posted 1 Feb 2014 by Dagorath
Post:
I don't understand what you're saying about the graphs and stats but don't worry about that for now.

The "Couldn't parse account file...." errors are probably due to you downgrading BOINC to an earlier version. I suspect 7.2.33 writes a different account file format than does 7.0.64. You might fix that incompatibility by reseting the projects or you may have to delete them and re-add them. Unless you have a good reason for dropping back to 7.0.64, I recommend staying with 7.2.33.

If you fix the "Couldn't parse account file...." problem I have a hunch the graphs and stats issues might resolve themselves.
55) (Message 2479)
Posted 31 Jan 2014 by Dagorath
Post:
Thanks for clarifying. I guess now I should look into that report from BeemerBiker that someone benchmarked DP on a hacked 670 and found it was no faster than before the hack.
56) (Message 2475)
Posted 31 Jan 2014 by Dagorath
Post:
What OS are you using?


For reliability, robustness, user friendliness and TCO, probably Linux and I would bet Debian Linux.

@Kyong,

I almost fell out of my chair when you said $500 for a 600 GB hard drive. Then I checked the server status page where it says they are IBM 600 GB 10,000 rpm 6 Gbps SAS. Do you have the 2.5" hybrid drives described in the link?
57) (Message 2474)
Posted 31 Jan 2014 by Dagorath
Post:
Anyway, after reading through most of the 50+ pages, it appears my gtx670 can be modded into a k2 grid but there is no performance gain as shown on some "spec" program that had DP performance as one of its tests


I recall reading that too, now that you mention it, and I believe I made a mental note to investigate it further because I was somewhat confused. Eventually, as my interest in the hack waned, I forgot to investigate further. I still don't know what to make of it.

0402 resistors are hard to work with even with the best of tools and if you don't have the right tools and steady hands it's nearly impossible. I have the tools and the hands but the price of the card is a de-motivating factor for me. If it was a $50 card or if other components on the board weren't so close to the resistor(s) in question I would have attempted it months ago.

Another thing that discourages me is that HA-Soft said the best configuration is a CPU with the AVX 2.0 instruction set extension. If I understand him correctly, he is saying AVX 2.0 will complete an A@H task faster than a Titan. Well, I'll be ordering a Haswell with AVX 2.0 fairly soon, and if it turns out faster than a GTX 670 on current A@H tasks then I see no reason to do a risky hack on an expensive video card.

(Please, nobody should get the impression that I'm suggesting AVX 2.0 is better on DP than a fast GPU for all applications. Maybe AVX 2.0 is faster for the algorithm in use at A@H and if that is true it doesn't mean it's true for every algorithm.)
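To give a sense of what the wide vector units buy you on DP, here's a tiny sketch (my own illustration, not the A@H kernel; strictly speaking the 256-bit double-precision ops below are plain AVX, with FMA being the part that arrives with Haswell). Each instruction works on 4 doubles at once:

    /* out[i] = a[i] * 2.0 + b[i], 4 doubles per fused multiply-add.
       Compile with something like: gcc -O2 -mavx -mfma avx_sketch.c */
    #include <immintrin.h>

    void scale_add(const double *a, const double *b, double *out, int n)
    {
        int i;
        __m256d two = _mm256_set1_pd(2.0);
        for (i = 0; i + 4 <= n; i += 4) {
            __m256d va = _mm256_loadu_pd(a + i);      /* load 4 doubles */
            __m256d vb = _mm256_loadu_pd(b + i);
            _mm256_storeu_pd(out + i, _mm256_fmadd_pd(va, two, vb));
        }
        for (; i < n; i++)                            /* scalar tail */
            out[i] = a[i] * 2.0 + b[i];
    }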

Given all that, the fact that the A@H apps are being continually updated and improved, and the fact that there has been mention of a second project (a sub-project?) here at A@H that might use the GPU, I think the wise thing for me to do is hold off on hacks for now. Or maybe just buy a Titan. Or maybe the second project will require less DP and more SP, which would suit my 670 better.

If you or anybody else wants some tips on soldering surface mount resistors I'll be glad to share what I know (or should I say share what works for me) as long as you understand I don't do it for a living and I probably don't use the same techniques a certified board technician would use. I'm a certified crazy bored hacker, big difference ;-)
58) (Message 2443)
Posted 24 Jan 2014 by Dagorath
Post:
The database might be corrupt or something similar, but another reason for you not seeing any tasks/results is that your computer(s) are attached to a different account than the account you are posting from. That can happen due to a typo in your registered email address when you're adding (attaching) a project to a computer, and I suppose it could happen if you're using an account manager too, in which case several computers could be attached to a different account than the one you think they are attached to. It happens a lot.

One test to determine whether that has happened is to watch the RAC associated with the account you are posting from. If your RAC decays even though BOINC manager shows you are receiving tasks, then that's your problem.
59) (Message 2438)
Posted 24 Jan 2014 by Dagorath
Post:
My cat too! He's still a kitten (25 weeks). He loves that heat vent on the laptop but before he curls up there he likes to read his email and send replies, the little boss. He's not very good at it yet, he just prances around on the keyboard and opens a thousand windows and swats at the things popping up on the screen but he did manage to get the email program open the other day and he was very proud of that accomplishment :-)

The air expelled from my laptop isn't hot enough to burn Raiser (short for Hell Raiser). My concern is that he'll get too close to it and block the airflow. To prevent that I slide the laptop over close to the edge of the desk so there isn't room for him to lie. I also close the lid. Since I have the power options set to not sleep/hibernate when the lid is closed, it continues crunching, which keeps the lid warm enough to appeal to him as a nice place for a nap.

Regarding my balls, it's actually only one ball. It splits into 2 hemispheres along the equator so each then has a flat side to prevent rolling around. The nice thing about the ball is that I can snap the 2 halves back together and toss it into my bag along with the laptop for trips.

Your rack is very nice, I have never seen anything like it before. I'd love to try it. Since the bottom is mesh, you could easily mount an auxiliary fan on the bottom of the mesh directly below the fan intake. The extra fan would increase the airflow considerably and reduce your CPU temperature. You could probably power it from a USB port, though a better way would be to tap some power off the laptop's power supply.

Raiser is my first cat so I don't have much experience with this yet, but they say cat dander can really plug up the cooling fins in a computer's cooling system, especially laptops. Do you blow yours out regularly?
60) (Message 2428)
Posted 21 Jan 2014 by Dagorath
Post:
I'm not sure as to what the advantage is using the NVidia GPU? My recent asteroid CUDA run took 8 hours for 480 credits. I receive 480 credits with a normal 2 hour Asteroid run. Just askin.


Remember the Asteroids CPU application is not just a run-of-the-mill application. It's a highly optimised application, and that makes it difficult for a GPU to beat it.

Also, Asteroids tasks use DP (double precision) calculations. DP takes a lot of time. A GTX 630 is very slow on DP calcs. My GTX 670 does an Asteroids task in about 45 minutes, faster than a 630 because it has more "DP power", but still not extremely fast compared to the CPU app. The only NVIDIA cards that will be extremely fast compared to the CPU app are the cards that have good DP power, which means the Titan and certain Teslas. 670 and 680 cards that have been hacked to unleash their full DP capability should perform close to the Titan and Tesla, but so far nobody has tried the hack and reported it here, unless I missed it.

A different driver and freeing a CPU core might help a little but your 630 will never be as fast as the CPU app, not even if you do the hack.

