1) Message boards : News : File upload issues (Message 33859)
Posted 14 Jan 2018 by Lars Vindal
Post:
Now the one I reported as stuck earlier has timed out and shows up as an error in my account, because the server locked it with just over half a percent uploaded... :-(

Please fix this locking issue on the server side before many more tasks error out because of it and have to be sent out again!
2) Message boards : News : File upload issues (Message 33773)
Posted 10 Jan 2018 by Lars Vindal
Post:
The strange thing is that this file lock issue only affects one of my tasks so far. After my previous post, BOINC started another SixTrack task and uploaded it successfully, while the one mentioned above still has problems.

10.01.2018 13:26:58 | LHC@home | Starting task workspace1_hl13_collision_scan_62.3250_60.3125_chrom_15_oct_-300_B1__22__s__62.31_60.32__6_8__5__15_1_sixvf_boinc1883_1
10.01.2018 14:17:34 | LHC@home | Computation for task workspace1_hl13_collision_scan_62.3250_60.3125_chrom_15_oct_-300_B1__22__s__62.31_60.32__6_8__5__15_1_sixvf_boinc1883_1 finished
10.01.2018 14:17:36 | LHC@home | Started upload of workspace1_hl13_collision_scan_62.3250_60.3125_chrom_15_oct_-300_B1__22__s__62.31_60.32__6_8__5__15_1_sixvf_boinc1883_1_r821681728_0
10.01.2018 14:17:56 | LHC@home | Finished upload of workspace1_hl13_collision_scan_62.3250_60.3125_chrom_15_oct_-300_B1__22__s__62.31_60.32__6_8__5__15_1_sixvf_boinc1883_1_r821681728_0
10.01.2018 14:17:57 | LHC@home | Sending scheduler request: To report completed tasks.
10.01.2018 14:17:57 | LHC@home | Reporting 1 completed tasks


10.01.2018 18:02:55 | LHC@home | Started upload of workspace1_hl13_collision_scan_62.3250_60.3125_chrom_15_oct_-300_B1__22__s__62.31_60.32__4_6__5__45_1_sixvf_boinc1876_1_r612308196_0
10.01.2018 18:02:58 | LHC@home | [error] Error reported by file upload server: [workspace1_hl13_collision_scan_62.3250_60.3125_chrom_15_oct_-300_B1__22__s__62.31_60.32__4_6__5__45_1_sixvf_boinc1876_1_r612308196_0] locked by file_upload_handler PID=-1
10.01.2018 18:02:58 | LHC@home | Temporarily failed upload of workspace1_hl13_collision_scan_62.3250_60.3125_chrom_15_oct_-300_B1__22__s__62.31_60.32__4_6__5__45_1_sixvf_boinc1876_1_r612308196_0: transient upload error
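
From what I understand, "locked by file_upload_handler PID=-1" means a stale lock has been left on the result file on the server (maybe after an NFS hiccup or a crashed handler process), and every retry then hits the same lock; my client just sees a transient upload error and backs off. Purely to illustrate the idea, and certainly not the project's actual code, here is a small Python sketch that assumes each upload in progress is marked by a sidecar .lock file holding the writer's PID (that format is my assumption), so an admin-side sweep could clear locks whose owner is no longer running:

import os
import pathlib

# Hypothetical sketch: scan an upload directory for stale ".lock" sidecar
# files whose recorded PID no longer exists, and remove them so retried
# uploads can proceed.  The directory layout and lock-file format are
# assumptions, not the actual BOINC file_upload_handler implementation.

UPLOAD_DIR = pathlib.Path("/data/boinc/upload")   # assumed path

def pid_alive(pid: int) -> bool:
    """Return True if a process with this PID exists (POSIX only)."""
    if pid <= 0:
        return False          # e.g. PID=-1 means no live owner recorded
    try:
        os.kill(pid, 0)       # signal 0 only checks for existence
        return True
    except ProcessLookupError:
        return False
    except PermissionError:
        return True           # process exists but belongs to another user

def clear_stale_locks(upload_dir: pathlib.Path) -> None:
    for lock in upload_dir.rglob("*.lock"):
        try:
            pid = int(lock.read_text().strip() or "-1")
        except ValueError:
            pid = -1
        if not pid_alive(pid):
            print(f"removing stale lock {lock} (PID {pid} not running)")
            lock.unlink()

if __name__ == "__main__":
    clear_stale_locks(UPLOAD_DIR)

Nothing needs to change on the client for this; on a transient upload error BOINC simply retries the upload later with an increasing backoff, which is what my log shows.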
3) Message boards : News : File upload issues (Message 33759)
Posted 10 Jan 2018 by Lars Vindal
Post:
I don't do ATLAS tasks, but my most recently completed SixTrack task is stuck like this:

10.01.2018 02:31:18 | LHC@home | Started upload of workspace1_hl13_collision_scan_62.3250_60.3125_chrom_15_oct_-300_B1__22__s__62.31_60.32__4_6__5__45_1_sixvf_boinc1876_1_r612308196_0
10.01.2018 02:31:21 | LHC@home | [error] Error reported by file upload server: [workspace1_hl13_collision_scan_62.3250_60.3125_chrom_15_oct_-300_B1__22__s__62.31_60.32__4_6__5__45_1_sixvf_boinc1876_1_r612308196_0] locked by file_upload_handler PID=-1
10.01.2018 02:31:21 | LHC@home | Temporarily failed upload of workspace1_hl13_collision_scan_62.3250_60.3125_chrom_15_oct_-300_B1__22__s__62.31_60.32__4_6__5__45_1_sixvf_boinc1876_1_r612308196_0: transient upload error

Is this file locking issue related to the NFS storage issues, or is it something on my end?
4) Message boards : Sixtrack Application : Inconclusive, valid/invalid results (Message 31129)
Posted 27 Jun 2017 by Lars Vindal
Post:
<< EDIT >>
The original post was written while your message about revalidating the inconclusives was being posted. Looking forward to seeing how that will end. In the meantime, here is my original information...

<< END EDIT >>
=====
My current list shows 5 valid results, 12 inconclusive, 9 in progress, a single pending validation, and no invalid or errored tasks. These numbers only cover tasks that have not yet been purged from the database, so a relative comparison over my whole LHC career is not possible.

I hope this information can be useful in narrowing down some of the problems.

The few times I have checked my result list under the current system, I have also not observed any invalid or errored tasks. Nor did I have errored or invalid tasks under the old system, when LHC was split into two separate projects.

If the current system could keep a running total of all tasks for each host (and keep those numbers after work units and tasks are deleted from the database), wouldn't we then have a much better basis for seeing which hosts have problems with their results?
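
To make the idea concrete: such a running total could live outside the tables that get purged and simply be incremented whenever a result reaches a final state. Purely as an illustration of what I mean (the table and column names are invented; this is not part of the BOINC server schema):

import sqlite3

# Hypothetical sketch of a per-host running total of task outcomes that
# survives the purge of the main result/workunit tables.  Table and column
# names are invented for illustration; this is not the BOINC server schema.

conn = sqlite3.connect("host_totals.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS host_totals (
        host_id      INTEGER PRIMARY KEY,
        valid        INTEGER NOT NULL DEFAULT 0,
        invalid      INTEGER NOT NULL DEFAULT 0,
        error        INTEGER NOT NULL DEFAULT 0,
        inconclusive INTEGER NOT NULL DEFAULT 0
    )
""")

def record_outcome(host_id: int, outcome: str) -> None:
    """Increment the counter for this host; called before results are purged."""
    assert outcome in ("valid", "invalid", "error", "inconclusive")
    conn.execute("INSERT OR IGNORE INTO host_totals (host_id) VALUES (?)",
                 (host_id,))
    conn.execute(f"UPDATE host_totals SET {outcome} = {outcome} + 1 "
                 "WHERE host_id = ?", (host_id,))
    conn.commit()

# Example: the host from this post gets one more valid result credited.
record_outcome(10236061, "valid")
print(conn.execute("SELECT * FROM host_totals WHERE host_id = 10236061").fetchone())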


Now, back to my current list of inconclusive and pending results...


Host ID: 10236061 (Intel Core2 Duo E8500 @ 3.16GHz, not overclocked)
4 GB RAM (2x Corsair CM2X2048-6400C4PRO, not overclocked)
Motherboard: ASUSTeK Maximus Formula II
Windows 10 Pro x64, version 1607 (build 14393.1358)


Common to all the work units listed below is that the results from the other participants were short runs, also ending with an inconclusive result.

All of these work units also have a third task listed as ready to send. Given the inconclusive status of all the tasks in those work units, I doubt the third one will succeed, but there's always hope... :-)
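
As I understand BOINC validation, a work unit needs a minimum quorum of results that agree; when the existing results cannot be matched against each other they stay inconclusive and the scheduler generates another replica, which is exactly the "ready to send" third task above. Just to illustrate that logic in a simplified form (the comparison and all names are made up, not the real SixTrack validator):

from collections import Counter

# Simplified sketch of BOINC-style quorum validation.  The real validator
# compares the binary result files; here "result" is just a stand-in value
# and results_equivalent() is a hypothetical comparison.

MIN_QUORUM = 2          # two matching results needed for a canonical result

def results_equivalent(a, b) -> bool:
    return a == b       # placeholder for the project-specific comparison

def validate(results: list):
    """Return ('valid', canonical) if MIN_QUORUM results agree,
    ('inconclusive', None) if there is not enough agreement yet."""
    for i, candidate in enumerate(results):
        matches = sum(
            1 for j, other in enumerate(results)
            if i != j and results_equivalent(candidate, other)
        )
        if matches + 1 >= MIN_QUORUM:
            return "valid", candidate
    # No agreeing pair: existing results stay inconclusive and the
    # scheduler sends out one more replica ("ready to send").
    return "inconclusive", None

print(validate([3.14, 3.14]))      # ('valid', 3.14)
print(validate([3.14, 2.71]))      # ('inconclusive', None) -> third task issued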

I hope this situation with inconclusive and bad results will come to an end. Nothing hurts a project more than hundreds or thousands of bad or potentially bad results that need to be checked manually... I wish you the best of luck finding a solution to this!

-----

My inconclusive results with short run times:
WU 71637755, task 148057184
WU 71637757, task 148057188
WU 71637758, task 148057190
WU 70954146, task 146755417
WU 70746028, task 146334441
WU 70605235, task 146049158
WU 70605236, task 146049159


My inconclusive results with long run times:
WU 70894751, task 146635362
WU 70894759, task 146635378
WU 70746026, task 146334437
WU 70541544, task 145918243
WU 70502221, task 145842412


Pending validation:
WU 70928940, task 146704351
In this case, my partner's task timed out with no response.


