Computation error Period Search cuda118_win10 K80


Message boards : Problems and bug reports : Computation error Period Search cuda118_win10 K80

Colin

Joined: 12 Feb 17
Posts: 9
Credit: 13,724,292
RAC: 6
Message 7504 - Posted: 28 Nov 2022, 17:07:57 UTC

Last modified: 28 Nov 2022, 17:19:34 UTC
I'm getting a large number of Computation error messages with the Period Search Application 102.16 (cuda118_win10).
My machine has an NVIDIA Tesla K80 and an RTX 3090, with driver version 472.12.

The work units are failing because the compute capability of the K80 is 3.7, yet the work is still being sent over to my machine. The K80 is the top end of the Kepler series, and outperforms the 3090 on several CUDA applications.
To be clear, the K80 only supports CUDA 11.4, not 11.8, so I don't have any expectation that it would be compatible, just that it should not receive this work.
How do we prevent the work units from being sent when they will just fail?
ID: 7504
Keith Myers
Joined: 16 Nov 22
Posts: 98
Credit: 52,691,540
RAC: 358,762
Message 7508 - Posted: 28 Nov 2022, 18:08:39 UTC - in response to Message 7504.  
There were new applications published recently, along with changes to the scheduler code, that should prevent the newer application (which won't run on the Kepler card) from being sent to you. Your host should be sent the older application instead.

I would try resetting the project first to see if the host picks up the older application. If that works, great.

Failing that, you can exclude the Kepler card from the project with exclude_gpu statements in the cc_config.xml file.

https://boinc.berkeley.edu/wiki/Client_configuration#Application_configuration
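For reference, a minimal sketch of what such an exclusion could look like; the project URL is Asteroids@home's, but the device number is an illustrative assumption and depends on how BOINC enumerates the GPUs on your host:

```xml
<cc_config>
  <options>
    <use_all_gpus>1</use_all_gpus>
    <!-- exclude one GPU (hypothetical device number) from this project only -->
    <exclude_gpu>
      <url>https://asteroidsathome.net/boinc/</url>
      <device_num>1</device_num>
    </exclude_gpu>
  </options>
</cc_config>
```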

A proud member of the OFA (Old Farts Association)
ID: 7508
Colin

Joined: 12 Feb 17
Posts: 9
Credit: 13,724,292
RAC: 6
Message 7509 - Posted: 28 Nov 2022, 18:40:13 UTC - in response to Message 7508.  
Keith - thanks for the suggestion. I've let the tasks finish, told it No new tasks, and reset it. Will report back on what happens when it starts getting new tasks again.

I am aware of the new 11.8 tasks, and wasn't expecting to see them for the K80 as it is 11.4 only.

Hopefully Radim and the team will have a look at why I (and anyone else) was getting these, if they haven't done so already.
Cheers
Colin
ID: 7509
Colin

Joined: 12 Feb 17
Posts: 9
Credit: 13,724,292
RAC: 6
Message 7511 - Posted: 28 Nov 2022, 20:50:59 UTC - in response to Message 7509.  
Reset did not help.
I'll wait to hear from the project team.
ID: 7511
Ian&Steve C.
Volunteer developer
Volunteer tester
Joined: 23 Apr 21
Posts: 70
Credit: 49,919,809
RAC: 520,548
Message 7514 - Posted: 29 Nov 2022, 2:36:15 UTC
FYI, CUDA 11 brought minor-version forward compatibility, so CUDA 11.4 drivers actually are compatible with CUDA 11.8 applications. You don't need to update drivers for minor version changes anymore.

The problem is that you have wildly different cards in the system as far as compatibility goes, and the project scheduler has no idea that you have the K80 in that system. BOINC only presents the "best" GPU to projects, so to the project it looks like you have 3x 3090s. No single app fits your whole system, because the CUDA 10.2 app won't work on your 3090 and the CUDA 11.8 app won't work on your K80.

You can fix this by moving the K80 to its own system, so it is properly presented and the project can send the right app.

Or find a way to download both the CUDA 10.2 app and the 11.8 app and keep them separate, then write an app_info.xml that defines both apps with their respective plan classes, and finally define in your cc_config.xml which plan class each card is restricted to.

But that's a lot of work. Splitting the K80 off into its own system is a much simpler solution.
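For the curious, a heavily abridged sketch of what such an app_info.xml might look like; the executable name matches the one seen in the stderr output in this thread, while the app name, version number, and the placement of the second plan class are assumptions:

```xml
<app_info>
  <app>
    <name>period_search</name>
  </app>
  <file_info>
    <name>period_search_10216_windows_x86_64__cuda118_win10.exe</name>
    <executable/>
  </file_info>
  <app_version>
    <app_name>period_search</app_name>
    <version_num>10216</version_num>
    <plan_class>cuda118_win10</plan_class>
    <coproc>
      <type>NVIDIA</type>
      <count>1</count>
    </coproc>
    <file_ref>
      <file_name>period_search_10216_windows_x86_64__cuda118_win10.exe</file_name>
      <main_program/>
    </file_ref>
  </app_version>
  <!-- a second <file_info> plus <app_version> pair for the cuda102
       executable would go here, with its own plan_class -->
</app_info>
```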

ID: 7514
Georgi Vidinski
Volunteer moderator
Project administrator
Project developer
Project tester
Joined: 22 Nov 17
Posts: 159
Credit: 13,180,466
RAC: 58
Message 7515 - Posted: 29 Nov 2022, 5:38:06 UTC
Hi folks,
The truth is that the BOINC client makes a pretty good distinction between every single GPU of the same type within the same host. As in this case with Colin, whose host (724246) is listed as having 3 cards of the same type ([3] NVIDIA NVIDIA GeForce RTX 3090...) in the web portal: each of them has its own ID and is declared with its specs to the server. For example, as you can see from the 'Stderr output' in this task:

core_client_version: 7.20.2
message:
Incorrect function.
(0x1) - exit code 1 (0x1)
stderr_txt:
BOINC client version 7.20.2
BOINC GPU type 'NVIDIA', deviceId=2, slot=17
Application: period_search_10216_windows_x86_64__cuda118_win10.exe
Version: 102.16.0.0
CUDA version: 11080
CUDA Device number: 2
CUDA Device: Tesla K80 11448MB
Compute capability: 3.7
Shared memory per Block | per SM: 49152 | 114688
Multiprocessors: 13
Unsupported Compute Capability (CC) detected (3.7). Supported Compute Capabilities are between 5.3 and 8.9.

Your Tesla K80 GPU has ID=2.
Why the web portal shows the host as having three (3) GPUs of the same type is a big unknown, and I'll have to double-check that.
On the other hand, your host should not be served the CUDA app v102.16 for your GPU ID=2 at all.
We still experience some issues with the server's 'app plan' and the utilisation of some GPUs. For instance, in another case we have two hosts with GPUs supporting exactly the same CC, both with the latest drivers, both declared as CUDA12 (another BOINC quirk in how it declares 11.8), where one host has been served the new app (102.16) while at the same time the other has been served the old one (102.15).

Colin, may I ask you to post just the beginning of your 'stdoutdae.txt' log file from the date when you last started the client, starting with 'Starting BOINC client' down to 'Setting up project and slot directories', please?

Meanwhile I can suggest a simple solution. You just need to place a single line inside your cc_config.xml file, right before the 'proxy_info' tag:
<ignore_cuda_dev>2</ignore_cuda_dev>

This way your client will exclude that Tesla K80 GPU from any tasks for the moment.

Thanks
“The good thing about science is that it's true whether or not you believe in it.” ― Neil deGrasse Tyson
ID: 7515
Ian&Steve C.
Volunteer developer
Volunteer tester
Joined: 23 Apr 21
Posts: 70
Credit: 49,919,809
RAC: 520,548
Message 7517 - Posted: 29 Nov 2022, 15:09:25 UTC - in response to Message 7515.  
As in this case with Colin, whose host (724246) is listed as having 3 cards of the same type ([3] NVIDIA NVIDIA GeForce RTX 3090...) in the web portal: each of them has its own ID and is declared with its specs to the server. For example, as you can see from the 'Stderr output' in this task:

core_client_version: 7.20.2
message:
Incorrect function.
(0x1) - exit code 1 (0x1)
stderr_txt:
BOINC client version 7.20.2
BOINC GPU type 'NVIDIA', deviceId=2, slot=17
Application: period_search_10216_windows_x86_64__cuda118_win10.exe
Version: 102.16.0.0
CUDA version: 11080
CUDA Device number: 2
CUDA Device: Tesla K80 11448MB
Compute capability: 3.7
Shared memory per Block | per SM: 49152 | 114688
Multiprocessors: 13
Unsupported Compute Capability (CC) detected (3.7). Supported Compute Capabilities are between 5.3 and 8.9.

Your Tesla K80 GPU has ID=2.
Why the web portal shows the host as having three (3) GPUs of the same type is a big unknown, and I'll have to double-check that.


It’s because of how the BOINC client works as I described in my last post.

The stderr output comes from the application which is interacting directly with the GPU.

But the project doesn't see this information until after processing has completed (reported tasks), and the scheduler process only interacts between the project and the BOINC client. During scheduling, only the best GPU is reported (with a multiplier for how many GPUs are in the system in total), despite the fact that each device is captured in coproc_info.xml. That information just isn't transmitted to the project unless the GPUs are from different vendors (NVIDIA, AMD, Intel). If you have different types from the same vendor, only the best one is reported. This isn't something that can be changed at the project level; it's how the client operates.

Yes, Colin can exclude the GPU to prevent it from running the wrong app, but I believe he wants to use the card, as it should be compatible with both the cuda55 and cuda102 applications. For that, it would be best to move it to its own system.

For example, he could take the RTX 3060 out of one of his hosts and put it together with the 3090. Both are ampere and can process the same application. Then take the K80 and put it in the host that the 3060 came from. Then it will get only the right application for it. Problem solved.

ID: 7517
Colin

Joined: 12 Feb 17
Posts: 9
Credit: 13,724,292
RAC: 6
Message 7518 - Posted: 29 Nov 2022, 15:41:21 UTC - in response to Message 7515.  

Last modified: 29 Nov 2022, 15:43:26 UTC
As in this case with Colin, whose host (724246) is listed as having 3 cards of the same type ([3] NVIDIA NVIDIA GeForce RTX 3090...) in the web portal: each of them has its own ID and is declared with its specs to the server.
...
Your Tesla K80 GPU has ID=2.
Why the web portal shows the host as having three (3) GPUs of the same type is a big unknown, and I'll have to double-check that.

Not a big unknown, at least to me: the K80 is a dual-GPU card. The 3090 is device 0, the K80's first GPU is device 1, and its second GPU is device 2.


On the other hand, your host should not be served the CUDA app v102.16 for your GPU ID=2 at all.

Agreed, and also ID=1 which is the other GPU on the K80.
ID=0 should be fine on the 3090.

Colin, may I ask you to post just the beginning of your 'stdoutdae.txt' log file from the date when you last started the client, starting with 'Starting BOINC client' down to 'Setting up project and slot directories', please?

Here it is:
29-Nov-2022 10:14:22 [---] Starting BOINC client version 7.20.2 for windows_x86_64
29-Nov-2022 10:14:22 [---] log flags: file_xfer, sched_ops, task
29-Nov-2022 10:14:22 [---] Libraries: libcurl/7.84.0-DEV Schannel zlib/1.2.12
29-Nov-2022 10:14:22 [---] Data directory: C:\ProgramData\BOINC
29-Nov-2022 10:14:22 [---] Running under account Colin
29-Nov-2022 10:14:23 [---] CUDA: NVIDIA GPU 0: NVIDIA GeForce RTX 3090 (driver version 472.12, CUDA version 11.4, compute capability 8.6, 24576MB, 24576MB available, 36526 GFLOPS peak)
29-Nov-2022 10:14:23 [---] CUDA: NVIDIA GPU 1: Tesla K80 (driver version 472.12, CUDA version 11.4, compute capability 3.7, 11448MB, 11448MB available, 2806 GFLOPS peak)
29-Nov-2022 10:14:23 [---] CUDA: NVIDIA GPU 2: Tesla K80 (driver version 472.12, CUDA version 11.4, compute capability 3.7, 11448MB, 11448MB available, 2806 GFLOPS peak)
29-Nov-2022 10:14:23 [---] OpenCL: NVIDIA GPU 0: NVIDIA GeForce RTX 3090 (driver version 472.12, device version OpenCL 3.0 CUDA, 24576MB, 24576MB available, 36526 GFLOPS peak)
29-Nov-2022 10:14:23 [---] OpenCL: NVIDIA GPU 1: Tesla K80 (driver version 472.12, device version OpenCL 3.0 CUDA, 11448MB, 11448MB available, 2806 GFLOPS peak)
29-Nov-2022 10:14:23 [---] OpenCL: NVIDIA GPU 2: Tesla K80 (driver version 472.12, device version OpenCL 3.0 CUDA, 11448MB, 11448MB available, 2806 GFLOPS peak)
29-Nov-2022 10:14:23 [---] OpenCL: Intel GPU 0: Intel(R) UHD Graphics 770 (driver version 31.0.101.3790, device version OpenCL 3.0 NEO, 52325MB, 52325MB available, 422 GFLOPS peak)
29-Nov-2022 10:14:23 [---] Windows processor group 0: 32 processors
29-Nov-2022 10:14:23 [---] Host name: TRAUMA
29-Nov-2022 10:14:23 [---] Processor: 32 GenuineIntel 13th Gen Intel(R) Core(TM) i9-13900K [Family 6 Model 183 Stepping 1]
29-Nov-2022 10:14:23 [---] Processor features: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss htt tm pni ssse3 fma cx16 sse4_1 sse4_2 movebe popcnt aes f16c rdrand syscall nx lm avx avx2 vmx smx tm2 pbe fsgsbase bmi1 smep bmi2
29-Nov-2022 10:14:23 [---] OS: Microsoft Windows 11: Professional x64 Edition, (10.00.22621.00)
29-Nov-2022 10:14:23 [---] Memory: 127.75 GB physical, 255.75 GB virtual
29-Nov-2022 10:14:23 [---] Disk: 930.24 GB total, 666.39 GB free
29-Nov-2022 10:14:23 [---] Local time is UTC -5 hours
29-Nov-2022 10:14:23 [---] No WSL found.
29-Nov-2022 10:14:23 [---] VirtualBox version: 6.1.34
29-Nov-2022 10:14:23 [---] Config: use all coprocessors
29-Nov-2022 10:14:23 [---] General prefs: from http://setiathome.berkeley.edu/ (last modified 25-Aug-2009 20:48:08)
29-Nov-2022 10:14:23 [---] Host location: none
29-Nov-2022 10:14:23 [---] General prefs: using your defaults
29-Nov-2022 10:14:23 [---] Reading preferences override file
29-Nov-2022 10:14:23 [---] Preferences:
29-Nov-2022 10:14:23 [---]    max memory usage when active: 117731.47 MB
29-Nov-2022 10:14:23 [---]    max memory usage when idle: 117731.47 MB
29-Nov-2022 10:14:27 [---]    max disk usage: 100.00 GB
29-Nov-2022 10:14:27 [---]    max CPUs used: 30
29-Nov-2022 10:14:27 [---]    suspend work if non-BOINC CPU load exceeds 75%
29-Nov-2022 10:14:27 [---]    max download rate: 6144000 bytes/sec
29-Nov-2022 10:14:27 [---]    max upload rate: 2048000 bytes/sec
29-Nov-2022 10:14:27 [---]    (to change preferences, visit a project web site or select Preferences in the Manager)
29-Nov-2022 10:14:27 [---] Setting up project and slot directories



Meanwhile I can suggest a simple solution. You just need to place a single line inside your cc_config.xml file right before the 'proxy_info' tag:
<ignore_cuda_dev>2</ignore_cuda_dev>

This way your client will exclude that Tesla K80 GPU from any tasks for the moment.

I'll look at making it project-specific, as other projects like Milkyway or Einstein could use the K80.

Your comments and suggestions are appreciated everyone!
ID: 7518
Colin

Joined: 12 Feb 17
Posts: 9
Credit: 13,724,292
RAC: 6
Message 7519 - Posted: 29 Nov 2022, 16:03:52 UTC - in response to Message 7518.  
As a workaround, I've set the cc_config.xml as follows to exclude both Tesla K80 GPUs, and it seems to happily be using the 3090 only.

<cc_config>
   <log_flags>
   </log_flags>
   <options>
       <use_all_gpus>1</use_all_gpus>
       <exclude_gpu>
            <url>https://asteroidsathome.net/boinc/</url>
            <device_num>1</device_num>
       </exclude_gpu>
       <exclude_gpu>
            <url>https://asteroidsathome.net/boinc/</url>
            <device_num>2</device_num>
       </exclude_gpu>
   </options>
</cc_config>


That way other projects can use the K80's two GPUs.
The PSE 118 app seems to be running fine on the 3090.
Also, for those who follow NVIDIA driver updates: they have released 474.04 for the older GPUs, but I have not tried it yet, as I'd like to make sure everything is stable first.
Let me know if there is anything you'd like me to try.
ID: 7519
Ian&Steve C.
Volunteer developer
Volunteer tester
Joined: 23 Apr 21
Posts: 70
Credit: 49,919,809
RAC: 520,548
Message 7520 - Posted: 29 Nov 2022, 18:26:03 UTC - in response to Message 7519.  
Why not swap the K80 into the 3060 system, then move the 3060 to be paired with the 3090, as I suggested?

What you've done is a workaround to stop BOINC from running Asteroids on the K80 GPUs, which is fine to stop the errors, but it doesn't solve the underlying problem.

But if you just move the card to a different PC where it's visible to BOINC in the scheduling phase, you'll be able to use the K80 here again.

ID: 7520
Colin

Joined: 12 Feb 17
Posts: 9
Credit: 13,724,292
RAC: 6
Message 7521 - Posted: 29 Nov 2022, 19:24:18 UTC - in response to Message 7520.  
Why not swap the K80 into the 3060 system, then move the 3060 to be paired with the 3090, as I suggested?

This machine is used for more than BOINC. It is staying where it is.

What you've done is a workaround to stop BOINC from running Asteroids on the K80 GPUs, which is fine to stop the errors, but it doesn't solve the underlying problem.

You are correct that this is a workaround.
Solving the underlying problem is out of my control.

But if you just move the card to a different PC where it's visible to BOINC in the scheduling phase, you'll be able to use the K80 here again.

I don't follow your logic.
ID: 7521
Ian&Steve C.
Volunteer developer
Volunteer tester
Joined: 23 Apr 21
Posts: 70
Credit: 49,919,809
RAC: 520,548
Message 7522 - Posted: 29 Nov 2022, 19:47:01 UTC - in response to Message 7521.  

Last modified: 29 Nov 2022, 19:48:07 UTC
I don't follow your logic.


It's less "my" logic than "BOINC" logic that you need to follow. When the BOINC client makes a scheduler request to a project, it transmits some data about the host; the project uses this data to determine which applications are appropriate for your hardware.

In the case where you have multiple GPUs installed that are not all the same, it only transmits the data for the "best" GPU, as determined by the priority in the BOINC client code. For NVIDIA, this priority is based on: 1) GPU compute capability, 2) GPU memory amount. Since the 3090 has a CC of 8.6 and the K80 has a CC of 3.7, the 3090's info gets transmitted for scheduling purposes. This all happens in the BOINC client on your system and has nothing to do with the project; nothing the project does will solve this.
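As a toy illustration of that selection rule (this is not the actual BOINC client code; the names and structure are made up for the example), the "best GPU" comparison reduces to a tuple sort on compute capability, then memory:

```python
# Toy sketch of the "best GPU" rule described above: compute capability
# first, memory as tie-breaker. Illustrative only, not BOINC source.
from dataclasses import dataclass

@dataclass
class Gpu:
    name: str
    compute_capability: tuple  # (major, minor)
    mem_mb: int

def best_gpu(gpus):
    # Highest compute capability wins; memory amount breaks ties.
    return max(gpus, key=lambda g: (g.compute_capability, g.mem_mb))

gpus = [
    Gpu("RTX 3090", (8, 6), 24576),
    Gpu("Tesla K80", (3, 7), 11448),
    Gpu("Tesla K80", (3, 7), 11448),
]

# The scheduler request then describes len(gpus) copies of this one card:
print(best_gpu(gpus).name)  # RTX 3090
```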

So as far as the project is concerned, you have 3x RTX 3090 in that system; it can't "see" the K80 at all. The stderr output that Georgi referenced is produced by the running application, which interfaces with the GPU directly, and that is why it knows about the GPU. The project's server scheduler does not use stderr output for scheduling decisions; it only uses the information transmitted during the scheduler RPC (3x 3090).

Leaving the 3090 and K80 in the same system will never work under a single BOINC client. The 3090 will ONLY work with the 11.8 app, which doesn't work on the K80, and conversely the K80 will only work with the cuda55 or cuda102 app, which doesn't support Ampere cards. It's a catch-22, and the project has its hands tied: they can't create one CUDA application that works for both Kepler and Ampere, since NVIDIA removed Kepler support from CUDA 11+, which is needed for Ampere.

That's the "logic" of moving the K80 to a more compatible system: that way the scheduler actually knows it's there and can send the proper application. The alternative is a complicated scheme of config files under Anonymous Platform (using an app_info.xml).


Another workaround could be to run multiple BOINC clients on that host, one client per GPU: one excluding the K80 with the stronger ignore command that Georgi referenced (which makes BOINC ignore its existence altogether), and a second BOINC client set to ignore the 3090, forcing it to present the K80. This effectively makes the single host look like two separate hosts, maintaining separate work queues with different applications. That's really the only way to make both GPUs work on Asteroids without physically moving the K80 to a different system, given the current application constraints.
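A rough sketch of that dual-client layout; the directory names, RPC ports, and device numbers (0 = 3090, 1/2 = the K80 halves) are illustrative assumptions for this host:

```shell
# Two BOINC data directories, each with a cc_config.xml hiding one GPU class.
BASE=$(mktemp -d)
mkdir -p "$BASE/ampere" "$BASE/kepler"

# Client 1 sees only the RTX 3090: ignore the two K80 devices entirely.
cat > "$BASE/ampere/cc_config.xml" <<'EOF'
<cc_config>
  <options>
    <ignore_cuda_dev>1</ignore_cuda_dev>
    <ignore_cuda_dev>2</ignore_cuda_dev>
  </options>
</cc_config>
EOF

# Client 2 sees only the K80: ignore the 3090 so a K80 becomes the "best" GPU.
cat > "$BASE/kepler/cc_config.xml" <<'EOF'
<cc_config>
  <options>
    <use_all_gpus>1</use_all_gpus>
    <ignore_cuda_dev>0</ignore_cuda_dev>
  </options>
</cc_config>
EOF

# Each client would then be started against its own data directory, e.g.:
#   boinc --dir "$BASE/ampere" --allow_multiple_clients --gui_rpc_port 31416
#   boinc --dir "$BASE/kepler" --allow_multiple_clients --gui_rpc_port 31417
echo "configs written under $BASE"
```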

ID: 7522
Georgi Vidinski
Volunteer moderator
Project administrator
Project developer
Project tester
Joined: 22 Nov 17
Posts: 159
Credit: 13,180,466
RAC: 58
Message 7523 - Posted: 29 Nov 2022, 20:29:57 UTC - in response to Message 7522.  
Hi Ian,
There is something I don't get. Why do you insist so much on your solution? Are you familiar with the settings inside the server on which its logic bases its decisions about how to distribute the applications among the host systems?
And again, why insist so much, especially when he has already said that he is OK for the moment with his workaround in place?

I just asked Colin for some details, which I will then compare with our logs, tweak our settings files, debug, and repeat until we find a solution. When that happens, Colin will be able to remove the restriction and get his K80 back in the game with our project.

Please, guys, let's be more creative and less competitive.

Cheers!
“The good thing about science is that it's true whether or not you believe in it.” ― Neil deGrasse Tyson
ID: 7523
Ian&Steve C.
Volunteer developer
Volunteer tester
Joined: 23 Apr 21
Posts: 70
Credit: 49,919,809
RAC: 520,548
Message 7524 - Posted: 29 Nov 2022, 20:36:07 UTC - in response to Message 7523.  

Last modified: 29 Nov 2022, 21:03:49 UTC
It's not about being competitive. I just understand how BOINC operates at a deeper level than most people do, so I'm trying to give a big-picture view of the issue as a whole, to highlight that the root cause of this problem lies with the client and not the project server.

Nothing I posted is incorrect. His symptoms are entirely due to limitations in the BOINC client; these kinds of issues happen at every BOINC project, not just here. The client is only set up to transmit the "best" GPU. This is fact. That means the server scheduler MUST act on this information only: it cannot differentiate between two different NVIDIA GPUs that require different apps, because it only knows about the "best" one. It can only act on different GPUs if they are from different vendors such as AMD or Intel.

I don't see how there's much you can do from the server side, since the client isn't giving you the information you require, unless you plan to diverge your server code from the standard BOINC model to implement custom logic such as a feedback loop feeding stderr output data into your scheduler. If so, more power to you, but your comments in other threads seemed to indicate that you planned to stick with the standard BOINC model. Really, the only thing the project could do here is implement an OpenCL application for NVIDIA to give broader device compatibility.

The things I've proposed really are the best solutions given the BOINC limitations. It's totally fine if Colin doesn't want to move GPUs around; it was just a suggestion. Everyone is free to operate however they like if they are satisfied with a workaround.

ID: 7524
Colin

Joined: 12 Feb 17
Posts: 9
Credit: 13,724,292
RAC: 6
Message 7526 - Posted: 29 Nov 2022, 22:03:09 UTC
Georgi - My main concern was that Asteroids would receive garbage results from the K80. It looks like that's not the case: the project code rejects the K80 at the start of the run. It was a temporary waste of resources (the server sending unusable work), but it was not corrupting the science.
I'm not looking for your team to downgrade the 11.8 code to 11.4 or 11.5 for my situation.

Ian - thanks for clarifying that a key issue is the BOINC client's inability to communicate to the server the specifics of differing cards with different capabilities from the same vendor. Sounds like a challenge for the BOINC team's future feature request list ;-)
One correction for you: 11.4 (and 11.5) is compatible with most of the architectures; see the table here, which is lacking some updates for Ada Lovelace GPUs but is adequate: https://docs.nvidia.com/deploy/cuda-compatibility/index.html
And yes, Kepler is deprecated in CUDA toolkits from 11.8 onward.

The input from all parties is appreciated. The two K80 GPUs, with 12GB of GDDR5 each, are happily running tasks from other projects now.

If there are any future changes that would allow this odd setup to be used, I'm open-minded about trying it, with the exception of yanking the K80 out and stuffing it into a different machine. At 300W power consumption and with forced-air cooling required for the board, it's not easy to move it into another machine.
ID: 7526
Colin

Joined: 12 Feb 17
Posts: 9
Credit: 13,724,292
RAC: 6
Message 7527 - Posted: 29 Nov 2022, 22:03:12 UTC

Last modified: 29 Nov 2022, 22:10:45 UTC
Double post by accident.
ID: 7527
