Posts by Eugene Stemple

1) (Message 7270)
Posted 20 Oct 2022 by Profile Eugene Stemple
Post:
Just trying to be helpful... The scheduler reply is not right. The scheduler request I tried today returned a reply of 9.8 MB; I think it should be something in the 30 KB range. The content of the file is nothing like an .xml structure; it starts off with an "ELF>0..." string followed by huge amounts of, presumably, binary data. My wild guess is that the PROGRAM for producing the scheduler reply is being sent, instead of the OUTPUT of that program. I see that distinctive "ELF>" string at the beginning of binary application images elsewhere in my Linux system.
2) (Message 4054)
Posted 21 Feb 2015 by Profile Eugene Stemple
Post:
This may wrap up this thread, but I'm posting it for others who stumble into it...
The only way I have found to configure for SSE2 work (and exclude the SSE3 and AVX CPU versions) is via an app_info.xml file, and thus anonymous platform mode, with all the side effects that implies. The file content is as follows:
-----
<app_info>
    <app>
        <name>period_search</name>
        <user_friendly_name>Asteroids</user_friendly_name>
    </app>
    <file_info>
        <name>period_search_10210_x86_64-pc-linux-gnu__sse2</name>
        <executable/>
    </file_info>
    <app_version>
        <app_name>period_search</app_name>
        <version_num>1021</version_num>
        <avg_ncpus>1.00</avg_ncpus>
        <max_ncpus>1.00</max_ncpus>
        <plan_class>sse2</plan_class>
        <file_ref>
            <file_name>period_search_10210_x86_64-pc-linux-gnu__sse2</file_name>
            <main_program/>
        </file_ref>
    </app_version>
</app_info>

------
This is obviously for a 64-bit Linux system. I have abandoned the idea of accepting both CPU and GPU tasks, mostly because the GPU version shows such a small speed improvement, but also because I have more CPU cores to spare and other BOINC projects are heavy users of the Nvidia GPU.

I will go to the BOINC message boards to post, or add to, a discussion of the BOINC "resource share" performance. I wish it were simply based on execution times, but I have read that it is based on estimated credit production. My experience is that it is neither of those (not even close!). Enough of that OT ranting...

Gene;
3) (Message 3942)
Posted 14 Jan 2015 by Profile Eugene Stemple
Post:
O.K., Mikey, I see that I can restrict applications to either CPU or NVIDIA (or both) but... there are three different CPU applications, i.e. AVX, SSE2, and SSE3. The relative performance numbers, from Application Details, are: AVX@112 Gflops; SSE2@171 Gflops; and SSE3@138 Gflops.

So, obviously it would be better to run the SSE2 app. And that is what I was trying to get at. Assuming I have selected "Use CPU = yes" then how could I specify just SSE2 work?

At the moment the task buffer has 47 work units: 19 of them are NVIDIA (cuda55) and all of the remaining 28 are CPU SSE3. Not a single SSE2 among them! Even though they would run 24% faster. Maybe I should just be happy I don't get AVX tasks. When I first joined this project I got some of each type, so it's not a "supported feature" issue.

I really don't mind the mix of CPU and NVIDIA. It's just that there's a better chance of a CPU core (of 4) being available than the single NVIDIA device when sharing resources with multiple projects. Hmmmm, but the more I think about it, the more it makes sense to set "Use NVIDIA GPU = no" because of the very small speedup of the NVIDIA over the CPU.
4) (Message 3935)
Posted 9 Jan 2015 by Profile Eugene Stemple
Post:
It has been a while since my last post to this thread... I have let three projects contend for the system resources: Seti @ 80%, Einstein @ 10%, Asteroids @ 5% (and NFS @ 5%, currently No New Tasks). The resource sharing hasn't yet settled down to a steady state but appears to be moving in the right direction. Seti has had some work shortages lately, so it has not used resources to its full potential.

At the moment A@H has 9 tasks in the buffer and will easily complete them before their deadlines. I haven't calculated the resource share relative to Seti, but it looks reasonable. All 9 tasks, however, are GPU tasks. No CPU tasks in the buffer at all. The GPU (low-end GTX650) can only handle one cuda/opencl task at a time, so it is the bottleneck in work flow. The 4 CPU cores are underutilized. I would be happier if A@H could send me CPU tasks, which have a much better chance of finding an idle core, instead of more GPU tasks.

I have been led to understand that the BOINC (project) servers initially send all usable application variations (i.e. NVIDIA, CPU/SSE2, CPU/AVX, etc.) and subsequently, based on result run times, home in on the most efficient application to the exclusion of the others, except that others are sent randomly just to refresh the statistics. At the risk of jumping to conclusions, that seems to be the case here: I am now only getting GPU work, which is obviously the fastest running.

In the Asteroids@Home Preferences, I don't see a way to choose CPU or GPU applications. There is just one "Period Search" to pick. To get just SSE2 work, for example, do I set up a custom app_info.xml file? Or can it be done in the app_config.xml file? (I have app_info for the Seti project in order to use optimized "anonymous platform" applications.)
5) (Message 3904)
Posted 30 Dec 2014 by Profile Eugene Stemple
Post:
Confirming that jobs cancelled by the server, or aborted because they were not started by the deadline, will clear themselves from the buffer with no user action.

I had a bunch of both types and when the server woke up this morning (Monday) the pending transfers completed and the "undone" tasks were cleared.

I might also have been tempted to intervene manually but I was away from home the past week while my boinc machine churned away. I guess the server move was more difficult than expected, as the upload of completed work got backed up for over a week.
6) (Message 3890)
Posted 16 Dec 2014 by Profile Eugene Stemple
Post:
You have 5 'cores' (including the GPU) so BOINC thinks 150 tasks can be finished in 12.5 / 5 = 2.5 days


Boinc went to "high priority" on this project two days ago, 12/14, but was using just one CPU core + the GPU, i.e. two tasks concurrently. Am I right to infer that boinc does not manage the CPU and GPU work buffers separately? It looks that way to me. If so, it is a reason to keep the "minimum" buffer size small, especially on projects, such as A@H, that have relatively short deadlines.

The buffer content now (12/16 1700 UTC) is 39 GPU tasks and 49 CPU tasks. There are 70 hours left to the earliest deadline. (NNT was set days ago.) The GPU stream looks very "iffy" at approx. 2 hours per work unit. An hour ago I edited the app_config.xml file to set <max_concurrent> to 3, allowing two concurrent CPU tasks plus the GPU, and suspended all other projects. With two CPU cores on the 49 work units, at approx. 3 hours each, I think the deadlines will be met, barely. I can go to 4 concurrent if necessary. All this far exceeds the 10% resource share assigned to the project.
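For later readers, here is a minimal sketch of what that edited app_config.xml might look like. This is a reconstruction, not a copy of the actual file: the app name and the gpu_usage/cpu_usage values are taken from the app_config.xml posted elsewhere in this thread, and only <max_concurrent> is the value I actually changed (comments added just for explanation):

<app_config>
    <app>
        <name>period_search</name>
        <!-- total running period_search tasks, CPU and GPU versions combined -->
        <max_concurrent>3</max_concurrent>
        <gpu_versions>
            <!-- fraction of the GPU each GPU task claims -->
            <gpu_usage>0.9</gpu_usage>
            <!-- CPU budget reserved for each GPU task -->
            <cpu_usage>0.5</cpu_usage>
        </gpu_versions>
    </app>
</app_config>

After editing, the manager's 'Read config file' option picks up the change without restarting the client.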

I still use BOINC 6.10.58 where the wording of this first value is not "Minimal work" but "Connect about every XX days"


I have boinc 7.2.42, which labels it as "minimum work buffer", but I have my doubts as to how boinc uses that parameter. Way back in this thread it was observed that a "3 day" value led to a download of 160 work units. I am trying to let boincmgr sort it out, but I felt like I had to "help" with the temporary adjustments noted above.

- set all projects to [No New Tasks]
- set "minimum buffer" to e.g. 10 days


NNT set for A@H a couple of days ago when deadlines looked impossible;
A@H is already in high-priority mode;
and I am suspending other projects (maybe not necessary if A@H high priority will really use all cores) to leave resources wide open.

I understand your logic in "minimum buffer = 10" in the context of boinc 6.x.x interpreting it as "connect every xx days" but I am inclined to keep the present setting of 3, or even reduce it, on the assumption that boinc 7.2.42 really means keep a "minimum buffer" of x days. I have NNT set in any event to inhibit any further overload of this project's resources.
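For anyone digging into this later: when local preferences are in use (the startup log posted elsewhere in this thread shows "Reading preferences override file"), these buffer values live in global_prefs_override.xml in the BOINC data directory. The sketch below is an assumption about the standard field names, with the "3 + 0.1 days" values discussed in this thread; it is not a dump of my actual file:

<global_preferences>
    <!-- "minimum work buffer" in boinc 7.x; "connect about every X days" in 6.x -->
    <work_buf_min_days>3.0</work_buf_min_days>
    <!-- the second "additional days" value in boincmgr -->
    <work_buf_additional_days>0.1</work_buf_additional_days>
</global_preferences>

Normally boincmgr writes this file itself when you change Computing preferences locally, so hand-editing shouldn't be needed.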
7) (Message 3882)
Posted 12 Dec 2014 by Profile Eugene Stemple
Post:
Status/progress update...

I found that running more than 1 A@H task caused the new CPU to run at over 38C..


I have a CPU digital thermometer on the desktop; with all 4 cores on various boinc projects the CPU tops out at about 120F (48C). I am not overclocking, and that indicated temp is below the 60C max by a margin I consider "safe." If it gets past 55C it's time to clean the fans.

I've therefore restricted usage to 1 CPU core for A@H, and temps dropped to 34C


The FX-4300 specs I've looked at indicate "Idle" temps at 25-35C; "Normal" temps at 35-45C; and "Limit" at 61C.

??Will A@H fail to meet deadlines??

I have set NNT for A@H. There are 151 tasks in the buffer already (a mix of CPU and GPU) and most of them were downloaded on Dec. 8 with a deadline of Dec. 19. With average run times of about 2 hours it looks questionable whether all will finish before the deadline. I will let boinc manager do its thing, but so far there is no boinc panic (run high priority) yet, which surprises me. With a resource share of 10% for A@H and a buffer limit of 3 days (although I realize that parameter is a "minimum"), I don't understand why 160 work units were downloaded in the first day after the various parameter changes, discussed earlier in this thread, were made. It still looks "theoretically" possible to finish all work before deadlines, but it will require pre-empting other projects which have been assigned a higher resource share. I am curious to see how boinc manages that conflict. So I'm letting it chug away.

As per an earlier suggestion, the A@H app_config.xml is NOT active, so presumably the defaults are in control. I may soon feel obliged to provide an app_config with <max_concurrent> set to 2 or 3. But first I want to see if boinc will force that issue as deadlines loom.

Gene;
8) (Message 3879)
Posted 10 Dec 2014 by Profile Eugene Stemple
Post:
Here is the update, 24 hours after the previous post. I restarted boinc with all projects suspended, then resumed Asteroids (with app_config.xml renamed to app_config.bak):

The initial flood of downloads:
55 cuda55
57 sse3
0 sse2 and avx

After a lapse of 22 minutes, with no tasks completed during that time, more downloads:
2 cuda55
33 sse3

By now the buffer is pretty stable at ~160 work units. New work downloads as tasks are completed and uploaded. The cuda55 to sse3 ratio is about 1:3.

I don't know how useful, or relevant, the following numbers might be. Here are the reported Gflops (-> computer -> details -> application details):
cuda55 166 Gflops
avx 112 Gflops
sse3 156 Gflops
sse2 188 Gflops

I resumed the suspended Seti project this morning but, alas, this is their maintenance day so not much work downloaded. Empty Seti buffer now, so resumed the Einstein project.

Current task state:
Asteroids CPU-sse3 running
Asteroids GPU-cuda55 waiting (due to E@H GPU high priority for deadline)
Einstein CPU-sse2 running
Einstein CPU-sse2 running
Einstein CPU-sse2 running
Einstein GPU-cuda32 running, high priority deadline Dec 14

I'll let this configuration run for a couple of days.

**Still curious why Asteroids prefers the sse3 (CPU) over sse2 ??
9) (Message 3875)
Posted 9 Dec 2014 by Profile Eugene Stemple
Post:
Restart BOINC (client) and post a 'fresh' startup log


08-Dec-2014 20:34:16 [---] Starting BOINC client version 7.2.42 for x86_64-pc-linux-gnu
08-Dec-2014 20:34:16 [---] log flags: file_xfer, sched_ops, task, unparsed_xml
08-Dec-2014 20:34:16 [---] Libraries: libcurl/7.26.0 OpenSSL/1.0.1e zlib/1.2.7 libidn/1.25 libssh2/1.4.2 librtmp/2.3
08-Dec-2014 20:34:16 [---] Data directory: /home/gene/BOINC
08-Dec-2014 20:34:16 [---] CUDA: NVIDIA GPU 0: GeForce GTX 650 (driver version unknown, CUDA version 6.5, compute capability 3.0, 1023MB, 969MB available, 813 GFLOPS peak)
08-Dec-2014 20:34:16 [---] OpenCL: NVIDIA GPU 0: GeForce GTX 650 (driver version 343.22, device version OpenCL 1.1 CUDA, 1023MB, 969MB available, 813 GFLOPS peak)
08-Dec-2014 20:34:16 [SETI@home] Found app_info.xml; using anonymous platform
08-Dec-2014 20:34:16 [---] Host name: gene64
08-Dec-2014 20:34:16 [---] Processor: 4 AuthenticAMD AMD FX(tm)-4300 Quad-Core Processor [Family 21 Model 2 Stepping 0]
08-Dec-2014 20:34:16 [---] Processor features: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 popcnt aes xsave avx f16c lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs xop skinit wdt lwp fma4 nodeid_msr tbm topoext perfctr_core arat cpb hw_pstate npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold
08-Dec-2014 20:34:16 [---] OS: Linux: 3.2.46-4
08-Dec-2014 20:34:16 [---] Memory: 7.77 GB physical, 9.54 GB virtual
08-Dec-2014 20:34:16 [---] Disk: 183.34 GB total, 130.04 GB free
08-Dec-2014 20:34:16 [---] Local time is UTC -7 hours
08-Dec-2014 20:34:16 [Einstein@Home] Found app_config.xml
08-Dec-2014 20:34:16 [NFS@Home] Found app_config.xml
08-Dec-2014 20:34:16 [SETI@home] Found app_config.xml
08-Dec-2014 20:34:16 [Asteroids@home] Found app_config.xml
08-Dec-2014 20:34:16 [---] Config: simulate 4 CPUs
08-Dec-2014 20:34:16 [Einstein@Home] URL http://einstein.phys.uwm.edu/; Computer ID 3949388; resource share 50
08-Dec-2014 20:34:16 [NFS@Home] URL http://escatter11.fullerton.edu/nfs/; Computer ID 9923; resource share 50
08-Dec-2014 20:34:16 [SETI@home] URL http://setiathome.berkeley.edu/; Computer ID 4774476; resource share 350
08-Dec-2014 20:34:16 [Asteroids@home] URL http://asteroidsathome.net/boinc/; Computer ID 129569; resource share 50
08-Dec-2014 20:34:16 [SETI@home] General prefs: from SETI@home (last modified 19-Sep-2014 08:11:26)
08-Dec-2014 20:34:16 [SETI@home] Computer location: home
08-Dec-2014 20:34:16 [SETI@home] General prefs: no separate prefs for home; using your defaults
08-Dec-2014 20:34:16 [---] Reading preferences override file
08-Dec-2014 20:34:16 [---] Preferences:
08-Dec-2014 20:34:16 [---] max memory usage when active: 7161.65MB
08-Dec-2014 20:34:16 [---] max memory usage when idle: 7559.52MB
08-Dec-2014 20:34:16 [---] max disk usage: 4.00GB
08-Dec-2014 20:34:16 [---] suspend work if non-BOINC CPU load exceeds 50%
08-Dec-2014 20:34:16 [---] max download rate: 199997 bytes/sec
08-Dec-2014 20:34:16 [---] max upload rate: 100004 bytes/sec
08-Dec-2014 20:34:16 [---] (to change preferences, visit a project web site or select Preferences in the Manager)
08-Dec-2014 20:34:16 [---] File projects/escatter11.fullerton.edu_nfs/lasievef_1.10_i686-pc-linux-gnu not found
08-Dec-2014 20:34:16 [---] Not using a proxy
08-Dec-2014 20:34:16 Initialization completed

With <max_concurrent>2</max_concurrent> you want Asteroids@home to run one GPU and one CPU task?


Yes, within the constraints of resource share: either CPU or GPU, or rarely both.


For now you may try:
1) [Suspend] all projects except Asteroids@home
2) Increase the first value for 'Days of work' locally
      Since you have the line "Reading preferences override file" you use Local preferences
      Setting 'Days of work' on the web will not have any effect (for this computer), use Local preferences to set for e.g. 3 + 0.1 days

3) I'm not sure how <rec_half_life_days>3</rec_half_life_days> will change the decisions made by BOINC (which projects to ask for what kind of work) - try to set it to the default 10 days
4) Move/rename app_config.xml for Asteroids@home (and 'Read config file')

5) now [Update] Asteroids@home and see if BOINC is asking for CPU work


In the works... (1) & (2) done; (3) on hold; (4) done; (5) done; ...
...boincmgr IS downloading a long list of data and work units...

I will post an inventory tomorrow (Dec. 9), at first glance appears to be a mix of CPU and GPU work.

If all looks good - do one change at a time if you are not sure what effect it will have:
- (if you want) reduce the first value for 'Days of work'
- [Resume] one of other projects
- rename again app_config.xml (and 'Read config file' to make it active)
...


O.K.; I'll give this configuration 8 or 12 hours, then resume Seti. I'll wait and see what effect that has before restoring app_config.xml. As for (local) days of work, maybe leave that at 3 until I feel like Asteroids work is flowing "normally".

On Dec. 6 I did get an avx work unit; on Dec. 7, 2 sse2 and 6 sse3; on Dec. 8, one more sse3. (In addition to cuda55 work, which I didn't tally.) But let's see how the (above) diagnostic steps proceed.
With regard to <rec_half_life_days> being reduced to 3 from the default 10: it was an attempt to speed up the servers' convergence to a realistic estimate, based on suggestions in message boards elsewhere.

One more question --- Assuming we get Asteroids to give CPU (and GPU) work, is there a way to allow only sse2 work and exclude the sse3 and avx types which run more slowly on my specific CPU chip?

Thx BilBg for your guidance.
10) (Message 3859)
Posted 5 Dec 2014 by Profile Eugene Stemple
Post:
noderaser & BilBg & other readers...

I am running two other projects (while Seti is off-line): Einstein@home, which has both CPU and GPU applications, and NFS@home, which is CPU only.

As requested, here are the first lines of the Event Log from the most recent restart of boinc manager and boinc client:

06-Nov-2014 11:59:44 [---] Starting BOINC client version 7.2.42 for x86_64-pc-linux-gnu
06-Nov-2014 11:59:44 [---] log flags: file_xfer, sched_ops, task, unparsed_xml
06-Nov-2014 11:59:44 [---] Libraries: libcurl/7.26.0 OpenSSL/1.0.1e zlib/1.2.7 libidn/1.25 libssh2/1.4.2 librtmp/2.3
06-Nov-2014 11:59:44 [---] Data directory: /home/gene/BOINC
06-Nov-2014 11:59:44 [---] CUDA: NVIDIA GPU 0: GeForce GTX 650 (driver version unknown, CUDA version 6.5, compute capability 3.0, 1023MB, 982MB available, 813 GFLOPS peak)
06-Nov-2014 11:59:44 [---] OpenCL: NVIDIA GPU 0: GeForce GTX 650 (driver version 343.22, device version OpenCL 1.1 CUDA, 1023MB, 982MB available, 813 GFLOPS peak)
06-Nov-2014 11:59:44 [SETI@home] Found app_info.xml; using anonymous platform
06-Nov-2014 11:59:44 [---] Host name: gene64
06-Nov-2014 11:59:44 [---] Processor: 4 AuthenticAMD AMD FX(tm)-4300 Quad-Core Processor [Family 21 Model 2 Stepping 0]
06-Nov-2014 11:59:44 [---] Processor features: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 popcnt aes xsave avx f16c lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs xop skinit wdt lwp fma4 nodeid_msr tbm topoext perfctr_core arat cpb hw_pstate npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold
06-Nov-2014 11:59:44 [---] OS: Linux: 3.2.46-4
06-Nov-2014 11:59:44 [---] Memory: 7.77 GB physical, 9.54 GB virtual
06-Nov-2014 11:59:44 [---] Disk: 183.34 GB total, 154.75 GB free
06-Nov-2014 11:59:44 [---] Local time is UTC -7 hours
06-Nov-2014 11:59:44 [Einstein@Home] Found app_config.xml
06-Nov-2014 11:59:44 [NFS@Home] Found app_config.xml
06-Nov-2014 11:59:44 [SETI@home] Found app_config.xml
06-Nov-2014 11:59:44 [Einstein@Home] URL http://einstein.phys.uwm.edu/; Computer ID 3949388; resource share 50
06-Nov-2014 11:59:44 [NFS@Home] URL http://escatter11.fullerton.edu/nfs/; Computer ID 9923; resource share 50
06-Nov-2014 11:59:44 [Cosmology@Home] URL http://www.cosmologyathome.org/; Computer ID 73980; resource share 50
06-Nov-2014 11:59:44 [SETI@home] URL http://setiathome.berkeley.edu/; Computer ID 4774476; resource share 350
06-Nov-2014 11:59:44 [SETI@home] General prefs: from SETI@home (last modified 19-Sep-2014 08:11:26)
06-Nov-2014 11:59:44 [SETI@home] Computer location: home
06-Nov-2014 11:59:44 [SETI@home] General prefs: no separate prefs for home; using your defaults
06-Nov-2014 11:59:44 [---] Reading preferences override file
06-Nov-2014 11:59:44 [---] Preferences:
06-Nov-2014 11:59:44 [---] max memory usage when active: 7161.65MB
06-Nov-2014 11:59:44 [---] max memory usage when idle: 7559.52MB
06-Nov-2014 11:59:44 [---] max disk usage: 4.00GB
06-Nov-2014 11:59:44 [---] suspend work if non-BOINC CPU load exceeds 50%
06-Nov-2014 11:59:44 [---] max download rate: 199997 bytes/sec
06-Nov-2014 11:59:44 [---] max upload rate: 100004 bytes/sec
06-Nov-2014 11:59:44 [---] (to change preferences, visit a project web site or select Preferences in the Manager)
06-Nov-2014 11:59:44 [---] Not using a proxy
06-Nov-2014 11:59:44 Initialization completed

>>>The most recent Asteroids work fetch>>>

04-Dec-2014 20:43:58 [Asteroids@home] Sending scheduler request: To fetch work.
04-Dec-2014 20:43:58 [Asteroids@home] Requesting new tasks for NVIDIA
04-Dec-2014 20:44:00 [Asteroids@home] [unparsed_xml] SCHEDULER_REPLY::parse(): unrecognized external_cpid
04-Dec-2014 20:44:00 [Asteroids@home] [unparsed_xml] SCHEDULER_REPLY::parse(): unrecognized 7bff1c82e46abad47a236b770d0156f8
04-Dec-2014 20:44:00 [Asteroids@home] [unparsed_xml] SCHEDULER_REPLY::parse(): unrecognized /external_cpid
04-Dec-2014 20:44:00 [---] [unparsed_xml] APP::parse(): unrecognized: fraction_done_exact
04-Dec-2014 20:44:00 [---] [unparsed_xml] WORKUNIT::parse(): unrecognized: rsc_mem_bound
04-Dec-2014 20:44:00 [Asteroids@home] Scheduler request completed: got 1 new tasks
04-Dec-2014 20:44:02 [Asteroids@home] Started download of input_47292_8
04-Dec-2014 20:44:04 [Asteroids@home] Finished download of input_47292_8

>>>The most recent Einstein work fetch>>>

04-Dec-2014 17:56:14 [Einstein@Home] Sending scheduler request: To fetch work.
04-Dec-2014 17:56:14 [Einstein@Home] Requesting new tasks for NVIDIA
04-Dec-2014 17:56:17 [Einstein@Home] Scheduler request completed: got 1 new tasks
04-Dec-2014 17:56:19 [Einstein@Home] Started download of p2030.20131223.G181.42-03.34.C.b2s0g0.00000_2224.bin4
04-Dec-2014 17:56:19 [Einstein@Home] Started download of p2030.20131223.G181.42-03.34.C.b2s0g0.00000_2225.bin4
04-Dec-2014 17:56:24 [Einstein@Home] Finished download of p2030.20131223.G181.42-03.34.C.b2s0g0.00000_2224.bin4
04-Dec-2014 17:56:24 [Einstein@Home] Finished download of p2030.20131223.G181.42-03.34.C.b2s0g0.00000_2225.bin4
--clip--

>>>Here is cc_config.xml>>>
<cc_config>
    <log_flags>
        <unparsed_xml>1</unparsed_xml>
    </log_flags>
    <options>
        <rec_half_life_days>3</rec_half_life_days>
        <ncpus>4</ncpus>
    </options>
</cc_config>

>>>And here is app_config.xml (for Asteroids)>>>
<app_config>
    <app>
        <name>period_search</name>
        <max_concurrent>2</max_concurrent>
        <gpu_versions>
            <gpu_usage>0.9</gpu_usage>
            <cpu_usage>0.5</cpu_usage>
        </gpu_versions>
    </app>
</app_config>


You will notice, as I have, that A@H and E@H do not ask for CPU tasks. That explains why I'm not getting any CPU work. BUT, why is boinc not asking for CPU tasks? Neither project has asked for CPU tasks in the last 10 days.

Seti is configured for Anonymous Platform, to take advantage of optimized applications. I don't see how or why this would affect Asteroids but I mention it here just for completeness.

Thanks to all who jumped in with ideas. I'm willing to try almost any experiment in config files if it will shed light on the issue.

Sorry this post is so long, but I'm just providing info that was requested.

Gene;
11) (Message 3846)
Posted 3 Dec 2014 by Profile Eugene Stemple
Post:
Mikey --
"Use CPU = Yes"
It has been set that way from the beginning. As noted in the first post, CPU and GPU work flowed at the beginning, but there has been no more CPU work since 18 November. (The last GPU work is today: 3 December.) In (local) computing preferences I have "on multiprocessor systems, use at most 100% of processors". Would it be right, or at least harmless to try, to set this to "0", which the note says means the setting is ignored?
The Einstein@home project exhibits the same symptoms. Since all(?) projects show the same symptom, is there something in cc_config.xml that governs my system activity? According to the cc_config documentation there is a parameter "ncpus", but I was assuming the default would be to use ALL cpus. Reasonable choices for "ncpus" are either 4, the actual number, or -1 to use "all available cpus." Any thoughts or advice based on your own settings?
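For reference, the two variants I'm weighing would look like this in cc_config.xml. This is just an illustrative fragment, based on my reading of the cc_config documentation that -1 tells the client to use the detected CPU count; my full cc_config.xml appears elsewhere in this thread:

<cc_config>
    <options>
        <!-- explicit: act as if there are exactly 4 CPUs -->
        <ncpus>4</ncpus>
        <!-- or, the documented default: -1 = use the actual number of CPUs -->
        <!-- <ncpus>-1</ncpus> -->
    </options>
</cc_config>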
--Gene--
12) (Message 3843)
Posted 2 Dec 2014 by Profile Eugene Stemple
Post:
Maybe this is a boinc issue. If so I'm sure somebody will point me elsewhere.
I have only a low-performance GPU (GTX650) and it is happily crunching away, but there are 3 idle CPU cores that could be doing something in parallel. How would I set up app_config.xml to allow A@H to use at least one of those idle cores? When I first joined this project the servers downloaded all the CPU apps (SSE2, etc.) as well as the GPU app (cuda55) and then followed with work units that ran in each of the apps. I understood that was a "calibration" period to allow the servers to discover which apps were the most productive, and that process has come to the natural conclusion that the GPU is best. So now only the GPU apps run.

If I set "max_concurrent" = 2, will that try to run 2 GPU tasks? (That would be counter-productive, as the GPU is maxed out on 1 task.) How about 2 concurrent, but using the "gpu_usage" and "cpu_usage" parameters to control the resource assignment? I have those at 0.9 and 0.5, respectively, now.

And one last question... is there a way to configure the SSE2 app to be used, as that seems to be the fastest CPU app in previously completed work units? I could try the brute force approach, simply deleting the SSE3 and AVX apps, but then the servers might notice they were gone and reload them.
Somebody else must have figured this out already. I would appreciate the benefit of your experience. Thanks.
13) (Message 3842)
Posted 2 Dec 2014 by Profile Eugene Stemple
Post:
I also notice a recent (approximate) doubling in run times. But when I look back farther in the logs I see that in early November the "short" runs began to replace the "long" runs that had been typical up to that time. So, in effect, the WU size seems to have reverted to a previous value. Who knows what the "right" size is? I just assumed it is something inherent in the batches of work that are distributed.