Message boards : Sixtrack Application : SIXTRACKTEST
Joined: 27 Jul 17 · Posts: 3 · Credit: 18,350 · RAC: 0
Hi, same for me: I'm not able to receive any tasks on my Raspberry Pi 3 (ARMv7). In the meantime, Einstein@Home works like a charm.
Joined: 9 Dec 14 · Posts: 202 · Credit: 2,533,875 · RAC: 0
...Same for me, I'm not able to receive any task on my raspberry pi 3 (arm V7)...

That's because the sixtracktest queue is empty at the moment.
Joined: 27 Sep 08 · Posts: 817 · Credit: 683,324,135 · RAC: 119,737
I got some today. My Linux computer is taking its time, though: 3 hrs on one task, which is unusual for SixTrack.
Joined: 6 Mar 12 · Posts: 7 · Credit: 3,130,996 · RAC: 0
For information only:

ls -l stderrdae.txt
-rw-r--r-- 1 boinc boinc 1122581 Oct 16 02:31 stderrdae.txt

The file is full of "No such file or directory" errors; this line is repeated over and over:

mv: rename slots/4/Sixout.zip to projects/lhcathome.cern.ch_lhcathome/Dtwo_42_hlbbo_2222_1.1_0.75__1__s__62.3025_60.3082__10_12__6__54_1_sixvf_boinc2463_3_r1215309536_1: No such file or directory

See "Error while computing": https://lhcathome.cern.ch/lhcathome/results.php?hostid=10482091&offset=0&show_names=0&state=0&appid=10 (this is the sixtracktest application)

Or "Too many total results": https://lhcathome.cern.ch/lhcathome/workunit.php?wuid=76434668 and https://lhcathome.cern.ch/lhcathome/results.php?hostid=10482091&offset=0&show_names=0&state=6&appid=
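As an aside, the repeated mv errors above indicate the source file was never created: the rename is attempted on a Sixout.zip that does not exist in the slot directory. A minimal sketch of that failure mode in Python (the paths are illustrative, and this is not the actual BOINC code):

```python
import shutil

# Moving a file that was never created fails with errno 2,
# "No such file or directory", just like the mv lines quoted above.
# The paths below are hypothetical and assumed not to exist.
try:
    shutil.move("slots/4/Sixout.zip", "projects/lhcathome.cern.ch_lhcathome/")
except FileNotFoundError as err:
    print(err)  # message includes "No such file or directory"
```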
Joined: 22 Sep 13 · Posts: 11 · Credit: 660,161 · RAC: 0
Could someone give a status update, please? I would like to have an application for Raspberry Pi (ARM on Linux), but it has not yet been released for sixtrack. The sixtracktest application seemed to work fine. Any chance of releasing the sixtracktest ARM Linux binaries for sixtrack (there seems to be plenty of work available there at the moment)? Thanks, Tom
Joined: 29 Feb 16 · Posts: 157 · Credit: 2,659,975 · RAC: 0
Hello Tom, thanks for asking. James has already given a brief overview of the current issues with the ARM exes: https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4570&postid=33802

We are working on it and performing tests, and we should be able to push the new exes out when ready (not immediately, but not next year either; just be patient). At the same time, we are going to update the sixtracktest exes with a bug fix on the physics side, and you will most probably see some tasks coming out, ready to be crunched. Thanks a lot for the help and support. Happy crunching! A.
Joined: 22 Sep 13 · Posts: 11 · Credit: 660,161 · RAC: 0
There seems to be a problem downloading the new sixtracktest tasks (same problem on different systems):

23-Jan-2018 12:29:10 [LHC@home] Sending scheduler request: To fetch work.
23-Jan-2018 12:29:10 [LHC@home] Requesting new tasks for CPU
23-Jan-2018 12:29:13 [LHC@home] Scheduler request completed: got 2 new tasks
23-Jan-2018 12:29:13 [LHC@home] Resent lost task wTestNew_hl13B1_20180121__1__s__62.31_60.32__14.4_14.5__6__15_1_sixvf_boinc251_1
23-Jan-2018 12:29:13 [LHC@home] Resent lost task wTestNew_hl13B1_20180121__1__s__62.31_60.32__11.7_11.8__6__45_1_sixvf_boinc118_0
23-Jan-2018 12:29:13 [LHC@home] State file error: missing file wTestNew_hl13B1_20180121__1__s__62.31_60.32__14.4_14.5__6__15_1_sixvf_boinc251_1_r110204253_2
23-Jan-2018 12:29:13 [LHC@home] Can't handle task wTestNew_hl13B1_20180121__1__s__62.31_60.32__14.4_14.5__6__15_1_sixvf_boinc251_1 in scheduler reply
23-Jan-2018 12:29:13 [LHC@home] State file error: missing file wTestNew_hl13B1_20180121__1__s__62.31_60.32__11.7_11.8__6__45_1_sixvf_boinc118_0_r3465899_2
23-Jan-2018 12:29:13 [LHC@home] Can't handle task wTestNew_hl13B1_20180121__1__s__62.31_60.32__11.7_11.8__6__45_1_sixvf_boinc118_0 in scheduler reply

Tom
Joined: 19 Dec 11 · Posts: 1 · Credit: 7,349,201 · RAC: 1,424
Same thing for me. Impossible to get out of this situation after nearly 3 days... My project seems to be locked on this task.
Joined: 29 Feb 16 · Posts: 157 · Credit: 2,659,975 · RAC: 0
Thanks a lot for the valuable feedback; we are investigating. I will come back to you with more info.
Joined: 29 Feb 16 · Posts: 157 · Credit: 2,659,975 · RAC: 0
OK, it seems that the issue is due to a malformed result template file, though it is not clear to me why the problem shows up at download and not at upload... I have deleted the existing sixtracktest WUs (which were all affected) and created 18 new ones, just as a check. Hence, at the next "update project" your client should get the notification of the killed job and should be able to proceed with new sixtracktest WUs, if there are any left. Please report whether the issue has disappeared or not. Thanks! Cheers, A.
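For context, the result template is a small server-side XML fragment that tells BOINC which output files a task should produce, how large they may be, and where to upload them. The sketch below shows what such a template typically looks like, based on the generic field names in the BOINC server documentation; it is not the actual sixtracktest template, and the size limit and open name are illustrative:

```xml
<file_info>
    <name><OUTFILE_0/></name>
    <generated_locally/>
    <upload_when_present/>
    <!-- upper bound on the uploaded file size, in bytes (value assumed) -->
    <max_nbytes>200000000</max_nbytes>
    <url><UPLOAD_URL/></url>
</file_info>
<result>
    <file_ref>
        <file_name><OUTFILE_0/></file_name>
        <!-- logical name the science application writes to (assumed here) -->
        <open_name>Sixout.zip</open_name>
    </file_ref>
</result>
```

A malformed entry here (for instance a broken file reference) would explain why the client cannot reconcile the files listed in the scheduler reply with its state file.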
Joined: 22 Sep 13 · Posts: 11 · Credit: 660,161 · RAC: 0
Received one, but the website shows four tasks for this host: https://lhcathome.cern.ch/lhcathome/results.php?hostid=10497508

It downloaded without problems and is running right now (it may take a couple of days on my Raspberry Pi model 2B). Thanks for fixing this, Tom
Joined: 22 Sep 13 · Posts: 11 · Credit: 660,161 · RAC: 0
There seems to be a problem with some of the new workunits as well. On this host https://lhcathome.cern.ch/lhcathome/results.php?hostid=10497508 I had two tasks that were sent out in the first batch. These workunits are now marked as "WU cancelled" (probably the ones in the message log below that are marked as expired), but they still show as green in the task list on the website. It looks like two new tasks were issued. One of these is running fine, but the second one has a missing-file error during download.

27-Jan-2018 11:26:34 [LHC@home] Requesting new tasks for CPU
27-Jan-2018 11:26:40 [LHC@home] Scheduler request completed: got 1 new tasks
27-Jan-2018 11:26:40 [LHC@home] Didn't resend lost task wTestNew_hl13B1_20180121__1__s__62.31_60.32__14.4_14.5__6__15_1_sixvf_boinc251_1 (expired)
27-Jan-2018 11:26:40 [LHC@home] Didn't resend lost task wTestNew_hl13B1_20180121__1__s__62.31_60.32__11.7_11.8__6__45_1_sixvf_boinc118_0 (expired)
27-Jan-2018 11:26:40 [LHC@home] Resent lost task wTestNew_hl13B1_20180121__1__s__62.31_60.32__2_4__6__9.47368_1_sixvf_boinc282_2
27-Jan-2018 11:26:40 [LHC@home] State file error: missing file wTestNew_hl13B1_20180121__1__s__62.31_60.32__2_4__6__9.47368_1_sixvf_boinc282_2_r1617352886_2
27-Jan-2018 11:26:40 [LHC@home] Can't handle task wTestNew_hl13B1_20180121__1__s__62.31_60.32__2_4__6__9.47368_1_sixvf_boinc282_2 in scheduler reply

Why are the expired tasks still listed in green on the website (and why do I keep receiving the "Didn't resend lost task" messages)? What is wrong with the second new task? Thanks, Tom
Joined: 29 Feb 16 · Posts: 157 · Credit: 2,659,975 · RAC: 0
Hello Tom, thanks for the feedback. It shows that fixing the result template was not the whole story, but to investigate further I need the IT guys (they will be back tomorrow). Thanks! Cheers, A.
Joined: 29 Feb 16 · Posts: 157 · Credit: 2,659,975 · RAC: 0
Dear all, yes, the problem was with the result template: it is not clear why, but re-issuing additional tasks ended up re-using the wrong result template. We deleted all those tasks, and we will retry with the correct template. At the same time, we will test a new version of the exes, with a bug fix in the physics of crab cavities; it affects only a limited set of simulation cases on BOINC, but is still worth the patch. Thanks in advance for the collaboration! Cheers, A.
Joined: 22 Sep 13 · Posts: 11 · Credit: 660,161 · RAC: 0
I just noticed that one of my systems received a number of tasks the day before yesterday. Unfortunately, all of them failed:

</stderr_txt>
<message>
upload failure: <file_xfer_error>
<file_name>wTestNew_hl13B1_20180121__1__s__62.31_60.32__2_4__4__85.2631_1_sixvf_boinc334_1_r628128846_2</file_name>
<error_code>-131 (file size too big)</error_code>
</file_xfer_error>
</message>
]]>

Looks like it is not just me having these problems: https://lhcathome.cern.ch/lhcathome/workunit.php?wuid=85955467 Tom
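Error -131 is BOINC's ERR_FILE_TOO_BIG: the upload is rejected because the output file exceeds the size limit in the result template. A minimal sketch of that size gate, assuming a hypothetical function name and limit value (the real template value is not shown in this thread):

```python
import os

# Assumed upload size limit in bytes; the actual <max_nbytes> value
# from the sixtracktest result template is not known from this thread.
MAX_NBYTES = 200_000_000

def upload_allowed(path: str, max_nbytes: int = MAX_NBYTES) -> bool:
    """Mimic the size check whose failure is reported as -131 (file size too big)."""
    return os.path.getsize(path) <= max_nbytes
```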
Joined: 29 Feb 16 · Posts: 157 · Credit: 2,659,975 · RAC: 0
Hello Tom, thanks for reporting this issue, and sorry for my late reply: I was on a duty trip to the US. Yes, I was setting up the return of the main output file from sixtrack tasks, and I accidentally submitted a case with too much output. I will resume testing soon, fixing the file-size issue. Cheers, A.
Joined: 29 Feb 16 · Posts: 157 · Credit: 2,659,975 · RAC: 0
Dear all, I had to cancel 11 WUs on sixtracktest as they had wrong input parameters; they are named wTestNew_hl10_7TeV_base_oct_-570_B1_HTCondor. Apologies for the inconvenience. Happy crunching! A.
Joined: 14 Jan 10 · Posts: 1369 · Credit: 9,128,790 · RAC: 3,905
I got 8 sixtracktest tasks this afternoon. The first one finished early with these error lines in BOINC's event log:

LHC@home 17 Dec 14:35:18 Aborting task sixtrack5p1p1_hl13B1__1__s__62.31_60.32__1_2__5__45_1_sixvf_boinc26_2: exceeded disk limit: 198.61MB > 190.73MB
LHC@home 17 Dec 14:35:19 Computation for task sixtrack5p1p1_hl13B1__1__s__62.31_60.32__1_2__5__45_1_sixvf_boinc26_2 finished
LHC@home 17 Dec 14:35:19 Output file sixtrack5p1p1_hl13B1__1__s__62.31_60.32__1_2__5__45_1_sixvf_boinc26_2_r2031625971_0 for task sixtrack5p1p1_hl13B1__1__s__62.31_60.32__1_2__5__45_1_sixvf_boinc26_2 absent

The 190.73MB is your rsc_disk_bound setting of 200000000 bytes, which was too small for this task. Other tasks are still running. Just before the 'crash', I found these big files in working slot directory 4:

fort.16 - 4517924 bytes
fort.31 - 1177848 bytes
fort.6 - 668307 bytes
fort.9 - 4611986 bytes
mem_alloc.log - 195516806 bytes
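The two numbers are the same limit in different units: the event log reports sizes in binary megabytes (MiB), so the 200000000-byte rsc_disk_bound shows up as 190.73MB. A quick check:

```python
rsc_disk_bound = 200_000_000              # bytes, from the workunit template
limit_mib = rsc_disk_bound / (1024 ** 2)  # convert to binary megabytes (MiB)
print(f"{limit_mib:.2f} MB")              # prints "190.73 MB"
```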
Joined: 14 Jan 10 · Posts: 1369 · Credit: 9,128,790 · RAC: 3,905
The second one failed with the same error, EXIT_DISK_LIMIT_EXCEEDED:

LHC@home 17 Dec 15:45:25 Aborting task sixtrack5p1p1_hl13B1__1__s__62.31_60.32__2_3__5__40_1_sixvf_boinc42_2: exceeded disk limit: 514.37MB > 190.73MB

Big files found:

fort.16 - 4517924 bytes
fort.31 - 1177848 bytes
fort.6 - 715197 bytes
mem_alloc.log - 503927282 bytes

And 2 other tasks with the same error:
https://lhcathome.cern.ch/lhcathome/result.php?resultid=212498885
https://lhcathome.cern.ch/lhcathome/result.php?resultid=212498886
Joined: 14 Jan 10 · Posts: 1369 · Credit: 9,128,790 · RAC: 3,905
Out of 8 tasks, 3 were successful and 5 failed with "Disk usage limit exceeded" because of the 200000000-byte slot limit. This is the fifth error task: https://lhcathome.cern.ch/lhcathome/result.php?resultid=212498880

Big files before the computation error:

fort.16 - 4517924 bytes
fort.31 - 1177848 bytes
fort.6 - 677757 bytes
fort.9 - 4611986 bytes
mem_alloc.log - 343580018 bytes
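Summing the sizes listed above shows how far this task overran the limit, and that mem_alloc.log alone accounts for nearly all of the slot usage (file sizes taken from the post; the 200000000-byte limit is the rsc_disk_bound quoted earlier in the thread):

```python
# Slot-directory file sizes in bytes, as reported in the post above.
sizes = {
    "fort.16": 4_517_924,
    "fort.31": 1_177_848,
    "fort.6": 677_757,
    "fort.9": 4_611_986,
    "mem_alloc.log": 343_580_018,
}
total = sum(sizes.values())
limit = 200_000_000  # rsc_disk_bound, bytes

print(f"total: {total / 1024**2:.2f} MB, limit: {limit / 1024**2:.2f} MB")
print(f"mem_alloc.log share of usage: {sizes['mem_alloc.log'] / total:.1%}")
```

So the disk-limit errors here trace back to the runaway mem_alloc.log, not to the fort.* output files.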
©2024 CERN