Download failed

Message boards : Problems and bug reports : Download failed

MarkJ
Joined: 27 Jun 12
Posts: 129
Credit: 61,906,880
RAC: 5,151
Message 1108 - Posted: 15 Apr 2013, 10:00:07 UTC
Last modified: 15 Apr 2013, 10:03:08 UTC

Got the following just now:

15/04/2013 7:55:27 PM | Asteroids@home | Giving up on download of input_5013_165: permanent HTTP error

Link to work unit: here

It looks like everyone else had the same problem. First one I have seen from this project.
____________
BOINC blog

Kyong
Project administrator
Project developer
Project tester
Project scientist
Joined: 9 Jun 12
Posts: 576
Credit: 52,667,664
RAC: 0
Message 1109 - Posted: 15 Apr 2013, 11:17:42 UTC - in response to Message 1108.

Strange, but I can see that it is being computed now.

MarkJ
Joined: 27 Jun 12
Posts: 129
Credit: 61,906,880
RAC: 5,151
Message 1123 - Posted: 16 Apr 2013, 10:31:23 UTC

It seems another host that had it "in progress" has reported it as a download failure. It's now out with two more, which I expect will report the same problem when they come back. Something is wrong with this one.
____________
BOINC blog

MarkJ
Joined: 27 Jun 12
Posts: 129
Credit: 61,906,880
RAC: 5,151
Message 1127 - Posted: 17 Apr 2013, 8:39:47 UTC
Last modified: 17 Apr 2013, 8:42:10 UTC

But wait, there's more...

1. One
2. Two
3. Three
4. Four
5. Five

Kyong, looks like you have a problem on the server side.

Also you might want to set the Error/Total/Success values a bit lower. Allowing 20,20,20 seems like overkill. Maybe set it to something like 5. Seti uses 5,10,5 for theirs.
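
For context, the Error/Total/Success values are per-workunit fields in BOINC's standard input (job) template. A hedged sketch of the relevant fragment, using the SETI-style numbers suggested above (element names follow BOINC's template format; the values are only the ones proposed in this thread, not the project's actual settings):

```xml
<workunit>
    <!-- give up after 5 errored results instead of 20 -->
    <max_error_results>5</max_error_results>
    <!-- hard cap on total results created for this WU -->
    <max_total_results>10</max_total_results>
    <!-- stop sending once 5 successful results exist -->
    <max_success_results>5</max_success_results>
</workunit>
```
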
____________
BOINC blog

Kyong
Project administrator
Project developer
Project tester
Project scientist
Joined: 9 Jun 12
Posts: 576
Credit: 52,667,664
RAC: 0
Message 1128 - Posted: 17 Apr 2013, 9:19:03 UTC - in response to Message 1127.

It seems that the script made some errors; these units aren't in the download folders.

Enric Surroca
Joined: 14 Dec 12
Posts: 43
Credit: 15,942,240
RAC: 267
Message 1129 - Posted: 17 Apr 2013, 10:04:17 UTC

I have also had some errors downloading units, with the following message:

"Stderr output

<core_client_version>7.0.28</core_client_version>
<![CDATA[
<message>
WU download error: couldn't get input files:
<file_xfer_error>
<file_name>input_5012_141</file_name>
<error_code>-224</error_code>
<error_message>permanent HTTP error</error_message>
</file_xfer_error>

</message>
]]>
"

MarkJ
Joined: 27 Jun 12
Posts: 129
Credit: 61,906,880
RAC: 5,151
Message 1131 - Posted: 17 Apr 2013, 11:15:02 UTC - in response to Message 1128.

It seems that the script made some errors; these units aren't in the download folders.


I'd suggest deleting these work units so we don't waste bandwidth trying to get them. I presume there is a whole batch of them in error that needs to be deleted.
____________
BOINC blog

Dataman
Joined: 11 Aug 12
Posts: 7
Credit: 53,668,200
RAC: 6,272
Message 1146 - Posted: 23 Apr 2013, 15:16:26 UTC

What's up with all the download errors? They are starting to get annoying.

brutis
Joined: 9 Feb 13
Posts: 1
Credit: 11,603,400
RAC: 0
Message 1163 - Posted: 27 Apr 2013, 19:02:10 UTC

Any update on fixing this problem?

Kyong
Project administrator
Project developer
Project tester
Project scientist
Joined: 9 Jun 12
Posts: 576
Credit: 52,667,664
RAC: 0
Message 1169 - Posted: 28 Apr 2013, 7:57:36 UTC

Unfortunately not. Some workunits were eventually sent, so I can't abort WUs based on their state. I'll try to process all the workunits, but that takes some time on the server. It may be faster to simply let the current batch finish. The next batch should have no problems; the download errors should only affect the beginning of the current batch, because there were some connection problems while the new workunits were being added.

Alessandro Freda
Joined: 13 Jan 13
Posts: 12
Credit: 146,946,600
RAC: 0
Message 1652 - Posted: 4 Sep 2013, 15:09:41 UTC - in response to Message 1169.

Any news about the problem?
On my PCs it seems that failed downloads outnumber successful ones.

HA-SOFT, s.r.o.
Project developer
Project tester
Joined: 21 Dec 12
Posts: 176
Credit: 120,915,840
RAC: 81,736
Message 1654 - Posted: 4 Sep 2013, 15:37:15 UTC - in response to Message 1652.

Any news about the problem?
On my PCs it seems that failed downloads outnumber successful ones.


See:

http://asteroidsathome.net/boinc/forum_thread.php?id=184&postid=1653#1653

Kyong
Project administrator
Project developer
Project tester
Project scientist
Joined: 9 Jun 12
Posts: 576
Credit: 52,667,664
RAC: 0
Message 1655 - Posted: 4 Sep 2013, 15:44:12 UTC

Yes, I found the problem, but cancelling the workunits is difficult because they are quite chaotic. It started with the recent server overload: on one of the days when I was adding new work, the server had many problems communicating with the database. As a result there are many good workunits mixed with database entries that have no input files. Because the well-added and badly-added WUs are mixed together, I can either cancel ALL the WUs added that day or leave them all. After 20 errors they will be cancelled automatically.

Sonoraguy
Joined: 11 Jun 13
Posts: 8
Credit: 15,481,080
RAC: 0
Message 1658 - Posted: 4 Sep 2013, 16:07:49 UTC

I'm not sure if we're supposed to do something about this, but I am getting literally hundreds of failed downloads; something over 1,000 in the last day. I've noticed that if I go to "Advanced" and "Do Network Communication", I improve (by a little) the number of successful downloads in a batch, but it looks like a mess.

So - Bottom line: Is this a problem that's just going to work itself out or should we do something to help?

HA-SOFT, s.r.o.
Project developer
Project tester
Joined: 21 Dec 12
Posts: 176
Credit: 120,915,840
RAC: 81,736
Message 1659 - Posted: 4 Sep 2013, 16:23:25 UTC - in response to Message 1658.


So - Bottom line: Is this a problem that's just going to work itself out or should we do something to help?


The problem will solve itself, but it may take some time.

MarkJ
Joined: 27 Jun 12
Posts: 129
Credit: 61,906,880
RAC: 5,151
Message 1667 - Posted: 5 Sep 2013, 9:02:47 UTC - in response to Message 1659.
Last modified: 5 Sep 2013, 9:48:25 UTC


So - Bottom line: Is this a problem that's just going to work itself out or should we do something to help?


The problem will solve itself, but it may take some time.


Basically for the bad work units the server has to try and send them to 20 users before it gives up and flags them as failed. Mixed in with that you'll get some that do download.

I think the 20 limit is overkill, as I said before. SETI uses 5,10,5; here it's set to 20,20,20. Do we really need to try it 20 times (with 20 different users) before we can fail it?
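
To put rough numbers on that: since a workunit with missing input files fails with a permanent HTTP error on every host it is sent to, the wasted download attempts scale linearly with the error limit. A quick sketch (the 10,000 bad-WU figure is a made-up illustration, not from this thread):

```python
def wasted_downloads(bad_wus: int, max_error_results: int) -> int:
    """A WU whose input files are missing on the server errors out on
    every host, so the scheduler sends it to exactly max_error_results
    hosts before flagging it as failed."""
    return bad_wus * max_error_results

# Hypothetical batch of 10,000 bad workunits:
print(wasted_downloads(10_000, 20))  # 200000 failed attempts at the current limit
print(wasted_downloads(10_000, 5))   # 50000 with a SETI-style limit of 5
```
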

What I did was let my BOINC client download what it could and error out the rest. It will then back off, thinking it has comms problems. Hit the Update button on the Projects tab to report the failed work units and request more work. Repeat until you have sufficient work.
____________
BOINC blog

Kyong
Project administrator
Project developer
Project tester
Project scientist
Joined: 9 Jun 12
Posts: 576
Credit: 52,667,664
RAC: 0
Message 1668 - Posted: 5 Sep 2013, 12:18:45 UTC - in response to Message 1667.

I will decrease the limit to 10, but unfortunately I have to wait till this queue is over.

MarkJ
Joined: 27 Jun 12
Posts: 129
Credit: 61,906,880
RAC: 5,151
Message 1670 - Posted: 5 Sep 2013, 12:38:12 UTC - in response to Message 1668.

I will decrease the limit to 10, but unfortunately I have to wait till this queue is over.

Even 10 is too many, while 5 is probably too low. How about setting it to 7? That gives it a good chance of working if it's going to.
____________
BOINC blog

Highlander
Joined: 16 Aug 13
Posts: 4
Credit: 1,372,320
RAC: 0
Message 1672 - Posted: 5 Sep 2013, 13:21:48 UTC

How about creating this batch once more on a temporary server and copying the WU files over to the DL server without overwriting existing ones?

But I don't really know if this can work (WU names, hashes, etc.).

It seems a theoretically better idea than a more-or-less DDoS with around 14 million download tries (at the moment I get around 2-5% successful downloads).
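
At a 2-5% success rate, the expected number of attempts per completed download follows directly from the geometric distribution, assuming independent attempts; a quick sanity check of Highlander's figures:

```python
def expected_attempts(success_prob: float) -> float:
    """Mean number of download attempts until the first success,
    assuming independent attempts (geometric distribution)."""
    if not 0 < success_prob <= 1:
        raise ValueError("success_prob must be in (0, 1]")
    return 1.0 / success_prob

# At the observed 2-5% success rate:
print(expected_attempts(0.05))  # ~20 attempts per finished download
print(expected_attempts(0.02))  # ~50 attempts per finished download
```
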
