Posts by Aurum

1) (Message 8228)
Posted 21 Jan 2024 by Aurum
Post:
Running the CPU apps is burning up my CPUs. The SSE2 and SSE3 apps are the most energy efficient and run cooler. There's no way to control which of these dangerous apps gets downloaded. The best way to mitigate the problem is to eliminate the cause. Run this command after starting BOINC:

sudo rm /var/lib/boinc-client/projects/asteroidsathome.net_boinc/period_search_10213_x86_64-pc-linux-gnu__avx_linux && \
sudo rm /var/lib/boinc-client/projects/asteroidsathome.net_boinc/period_search_10213_x86_64-pc-linux-gnu__avx512_linux && \
sudo rm /var/lib/boinc-client/projects/asteroidsathome.net_boinc/period_search_10213_x86_64-pc-linux-gnu__fma_linux
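An equivalent loop form, in case more instruction-set builds show up later (a sketch assuming the same project directory and version number 10213 as the command above):

#!/bin/bash
# Remove the hotter vector-ISA builds so BOINC falls back to the SSE apps.
DIR=/var/lib/boinc-client/projects/asteroidsathome.net_boinc
for isa in avx avx512 fma; do
    sudo rm -f "$DIR/period_search_10213_x86_64-pc-linux-gnu__${isa}_linux"
done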
2) (Message 7724)
Posted 27 Jan 2023 by Aurum
Post:
Depends on the CPU architecture and feature set. It can be more efficient than AVX2.
Energy efficiency or race speed?
3) (Message 7723)
Posted 27 Jan 2023 by Aurum
Post:
It has been discussed here already: https://asteroidsathome.net/boinc/forum_thread.php?id=793&postid=6541
It says nothing about energy efficiency, just a single-WU race mentality. But then you're not paying for the electricity.
4) (Message 7710)
Posted 16 Jan 2023 by Aurum
Post:
...finding the best performed application for every particular system...
Radim Vančo (FoxKyong)
How do you actually define the best instruction set? I think it should be the most energy-efficient one. See, for example:
Thermal design power and vectorized instructions behavior, Amina Guermouche & Anne-Cécile Orgerie, CONCURRENCY & COMPUTATION: PRACTICE & EXPERIENCE, Feb 2021.
https://hal.archives-ouvertes.fr/hal-03185821/document
5) (Message 7708)
Posted 16 Jan 2023 by Aurum
Post:
The BOINC CreditNew algorithm awards credit based on the number of FLOPs involved in the calculation, NOT on the time taken to perform those calculations.

A task run on a CPU and the same task run on a GPU involve crunching through exactly the same number of calculations, so the credit awarded is the same.

It doesn't matter that the GPU, doing parallel calculations, can run through them much faster than the single thread running on the CPU.

It is up to each project to define which credit algorithm to follow and install on their servers.
https://boinc.berkeley.edu/trac/wiki/CreditOptions
https://boinc.berkeley.edu/trac/wiki/CreditNew
Yes, it's still a very poor system.
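For a concrete illustration (assuming the standard Cobblestone scale, where a 1 GFLOPS machine earns 200 credits per day): a task containing 8.64e13 FLOPs is worth 8.64e13 / (1e9 × 86400) × 200 = 200 credits, whether a CPU grinds on it for a full day or a GPU finishes it in half an hour.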
6) (Message 7707)
Posted 16 Jan 2023 by Aurum
Post:
You are using the incorrect task limitation in your app_info or app_config.
project_max_concurrent applies to all tasks, both CPU and GPU, in a project.

If you want to limit the number of tasks being crunched for each device type, use the max_concurrent statement in each app_version section. Read the configuration options documentation.

https://boinc.berkeley.edu/wiki/Client_configuration#Application_configuration

If you actually read that link you'd see that <max_concurrent> goes inside <app></app> and not <app_version> (see the sketch at the end of this post).
Are you trying to say that's why I can only get CPU WUs and never GPU WUs when I use an app_info?
BTW, this happens even if I have no app_config file and <max_concurrent> does not even appear in my app_info file.
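For reference, the per-app placement looks like this (a minimal app_config.xml sketch; the limit of 4 is just an illustration):

<app_config>
    <app>
        <name>period_search</name>
        <max_concurrent>4</max_concurrent>
    </app>
</app_config>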
7) (Message 7704)
Posted 15 Jan 2023 by Aurum
Post:
I would prefer the credit to be a little higher too for at least the gpu app considering the power these new cards pull!
The current credit structure does not reward running a GPU.
Since the run time for CPU WUs is about 3x that for GPU WUs, I thought CPU WUs should be awarded 3x the credit.
Interesting idea to scale credit per watt expended. But how would the server know the power drawn per WU?
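Client-side, at least, the draw is easy to sample while a task runs; getting that number back to the server is the part BOINC has no mechanism for. For example, this prints the GPU's power draw every five seconds:

nvidia-smi --query-gpu=power.draw --format=csv -l 5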
8) (Message 7703)
Posted 15 Jan 2023 by Aurum
Post:
...use the anonymous platform by adding an 'app_info.xml' file to the project's folder with the appropriate data.
I've been trying to use an app_info.xml file so that I only get the more energy-efficient SSE instruction set. The problem is that both CPU and GPU WUs are named "period_search", which causes trouble.

If I only include parameters limiting CPU WUs to sse3_linux, that works fine: I get only sse3_linux CPU WUs, but then no GPU WUs at all. When I try to include a section for the GPU WUs (which I shouldn't even need, had the apps been named "period_search_CPU" and "period_search_GPU"), I still do not get any GPU WUs. I've tried restarting BOINC and rebooting the computer, but I get the same message each time: "Not requesting tasks: don't need (CPU: ; NVIDIA GPU: )"
Here's the latest version of the app_info.xml I've tried:
<app_info>
<app>
    <name>period_search</name>
</app>
<file_info>
    <name>period_search_10213_x86_64-pc-linux-gnu__sse3_linux</name>
    <executable/>
</file_info>
<app_version>
    <app_name>period_search</app_name>
    <version_num>10213</version_num>
    <avg_ncpus>1.000000</avg_ncpus>
    <flops>216957983664.719849</flops>
    <plan_class>sse3_linux</plan_class>
    <api_version>7.17.0</api_version>
    <file_ref>
        <file_name>period_search_10213_x86_64-pc-linux-gnu__sse3_linux</file_name>
        <main_program/>
    </file_ref>
</app_version>
<file_info>
    <name>period_search_10217_x86_64-pc-linux-gnu__cuda118_linux</name>
    <executable/>
</file_info>
<app_version>
    <app_name>period_search</app_name>
    <version_num>10217</version_num>
    <platform>x86_64-pc-linux-gnu</platform>
    <avg_ncpus>0.010000</avg_ncpus>
    <flops>2846331819585.037598</flops>
    <plan_class>cuda118_linux</plan_class>
    <api_version>7.17.0</api_version>
    <file_ref>
        <file_name>period_search_10217_x86_64-pc-linux-gnu__cuda118_linux</file_name>
        <main_program/>
    </file_ref>
</app_version>
</app_info>
I put a copy of both executables in the folder before restarting. If anyone can tell me how to make both CPU and GPU work, I'd appreciate it.
I suspect that as soon as the executable version is upgraded, the app_info.xml file will stop working and no WUs will be downloaded.
The best solution would be to give us the ability to select which CPU instruction set we want in our Project Preferences. Better still would be to compile only the most energy-efficient SSE instruction set.
Also, another quirk of naming both apps period_search is that the settings
<project_max_concurrent>36</project_max_concurrent>
and
<max_concurrent>36</max_concurrent>
both count the total number of WUs (CPU WUs + GPU WUs), making the behavior different from any other BOINC project.
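For illustration, the two settings side by side in app_config.xml (a sketch; the limit values are arbitrary):

<app_config>
    <project_max_concurrent>36</project_max_concurrent>
    <app>
        <name>period_search</name>
        <max_concurrent>36</max_concurrent>
    </app>
</app_config>

Because the CPU and GPU versions share the app name period_search, the per-app limit ends up counting both device types together, just like the project-wide one.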
9) (Message 7079)
Posted 18 Oct 2020 by Aurum
Post:
Each day some of my computers stop running Linux 102.15 GPU WUs. Yet Tasks by Application, with only a single line, says there are over 255,000 WUs available. With a couple dozen apps, that could mean anything.
How can I tell when A@H has run out of my flavor?
Is there some other rule that cuts me off, e.g. maximum number of WUs to a computer per day?
10) (Message 7078)
Posted 18 Oct 2020 by Aurum
Post:
I saw a post recently with someone saying something to the effect of "my Ryzen can run 32 WUs in the time it takes a GPU to run one WU." (I can't find it now.)
I think it may be a false equivalence to compare WUs per unit time between a CPU and a GPU. As I understand it, a GPU is for parallel processing. I wonder if Light Curves solved per hour wouldn't be a better comparison. I don't know what's inside a WU, but I suspect a CPU WU may have one or a few Light Curves to solve, whereas a GPU WU may have dozens.
A photon for your thoughts.
11) (Message 7014)
Posted 8 Aug 2020 by Aurum
Post:
Is there a way I can throttle it down to run at like 75% power or something to control heat during the day?
For a single GPU, create this script:
#!/bin/bash
# Enable persistence mode so the settings stick between CUDA apps
/usr/bin/nvidia-smi -pm 1
# Allow application-clock changes without admin rights
/usr/bin/nvidia-smi -acp UNRESTRICTED
# Cap GPU 0 at 160 W
/usr/bin/nvidia-smi -i 0 -pl 160
# PowerMizer mode 1 = prefer maximum performance
/usr/bin/nvidia-settings -a "[gpu:0]/GPUPowerMizerMode=1"
# Offset memory transfer rate +400 and core clock +100 at performance level 3
/usr/bin/nvidia-settings -a "[gpu:0]/GPUMemoryTransferRateOffset[3]=400" -a "[gpu:0]/GPUGraphicsClockOffset[3]=100"
# Take manual fan control and pin both fans at 75%
/usr/bin/nvidia-settings -a "[gpu:0]/GPUFanControlState=1" -a "[fan:0]/GPUTargetFanSpeed=75" -a "[fan:1]/GPUTargetFanSpeed=75"

For 2 or more GPUs, expand the script like so:
#!/bin/bash
/usr/bin/nvidia-smi -pm 1
/usr/bin/nvidia-smi -acp UNRESTRICTED
# Power-limit each card by its index
/usr/bin/nvidia-smi -i 0 -pl 160
/usr/bin/nvidia-smi -i 1 -pl 160
/usr/bin/nvidia-settings -a "[gpu:0]/GPUPowerMizerMode=1"
/usr/bin/nvidia-settings -a "[gpu:1]/GPUPowerMizerMode=1"
/usr/bin/nvidia-settings -a "[gpu:0]/GPUMemoryTransferRateOffset[3]=400" -a "[gpu:0]/GPUGraphicsClockOffset[3]=100"
/usr/bin/nvidia-settings -a "[gpu:1]/GPUMemoryTransferRateOffset[3]=400" -a "[gpu:1]/GPUGraphicsClockOffset[3]=100"
# Fan indices are global across cards: GPU 1's fans are fan:2 and fan:3
/usr/bin/nvidia-settings -a "[gpu:0]/GPUFanControlState=1" -a "[fan:0]/GPUTargetFanSpeed=75" -a "[fan:1]/GPUTargetFanSpeed=75"
/usr/bin/nvidia-settings -a "[gpu:1]/GPUFanControlState=1" -a "[fan:2]/GPUTargetFanSpeed=75" -a "[fan:3]/GPUTargetFanSpeed=75"
Use [3] in the clock-offset attribute names for Pascal, or [4] for Turing.
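To confirm the GPU and fan indices on a particular machine, these queries will list them (assuming an X session is running):

/usr/bin/nvidia-settings -q gpus
/usr/bin/nvidia-settings -q fans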

Then run this command once:
sudo nvidia-xconfig --enable-all-gpus --cool-bits=28 --allow-empty-initial-configuration

Use this to check if it took:
sudo xed /etc/X11/xorg.conf
This saves me a lot on electricity and still runs BOINC projects great.
12) (Message 6570)
Posted 17 Apr 2020 by Aurum
Post:
Thanks Kyong, I'm getting a steady stream of work now.
13) (Message 6567)
Posted 17 Apr 2020 by Aurum
Post:
Are Linux WUs available? The Applications page shows they're running.
I tried Reset, then Update, Update.
My Win7 computers are getting a steady supply of WUs.
I checked Preferences: Nvidia GPU selected, no CPU.
Latest Nvidia driver.

What's the trick to get Linux WUs???
14) (Message 6564)
Posted 17 Apr 2020 by Aurum
Post:
What does this message mean???
Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 32 bytes) in /home/boincadm/projects/boinc/html/inc/db_conn.inc on line 119

Does it have anything to do with my not getting any Linux WUs???
15) (Message 6562)
Posted 15 Apr 2020 by Aurum
Post:
This is not a race.

It is now. You've been selected for the next Formula-BOINC Sprint.

http://formula-boinc.org/sprint.py?sprint=4&lang=&year=2020
16) (Message 6391)
Posted 13 Oct 2019 by Aurum
Post:
Me want Dino-Killer asteroid for hitting a billion.
https://www.nhm.ac.uk/content/dam/nhmwww/discover/dinosaur-extinction/dino-extinction-asteroid-impact-full-width.jpg

Lmao, as if the asteroid was THAT BIG. Man that picture is stupid.

The Chicxulub crater is about 110 miles in diameter (the impactor itself was closer to six miles across). The scale looks about right. What they screwed up was making the diameter of the Earth too small.
17) (Message 6390)
Posted 13 Oct 2019 by Aurum
Post:
Hey Asteroid-Man, when are you going to compile your code for CUDA 10 so the Turing GPUs will work???
18) (Message 6334)
Posted 13 Aug 2019 by Aurum
Post:
Me want Dino-Killer asteroid for hitting a billion.
https://www.nhm.ac.uk/content/dam/nhmwww/discover/dinosaur-extinction/dino-extinction-asteroid-impact-full-width.jpg

19) (Message 6218)
Posted 19 Mar 2019 by Aurum
Post:
Feed the Beast!!!
The beast must eat asteroids!!!
Beast be hungry for space rock!!!
20) (Message 6217)
Posted 19 Mar 2019 by Aurum
Post:
Give us WUs!!!
We want WUs!!!
We need WUs!!!

