Message boards : CMS Application : no new WUs available
Joined: 29 Aug 05 Posts: 1065 Credit: 7,940,489 RAC: 14,669
> Edit: just to find out, I tried to download a Theory task and an LHCb task; for both, plenty of unsent should be available according to the Project Status Page.
>
> Are you not getting any?
>
> I got tasks during this day, but now again it says "no tasks available" for CMS simulation, although the Project Status Page shows about 190 unsent.

Strange, tho' I note that there has been a drop-off in running jobs from about 17:15. You don't have a time-of-day-controlled firewall by any chance? :-0!
Joined: 18 Dec 15 Posts: 1831 Credit: 119,660,110 RAC: 48,822
The Project Status Page once more shows "0" unsent tasks for CMS, as well as for Theory and LHCb. Any idea what's going on?
Joined: 29 Aug 05 Posts: 1065 Credit: 7,940,489 RAC: 14,669
Joined: 18 Dec 15 Posts: 1831 Credit: 119,660,110 RAC: 48,822
Right now, the Project Status Page shows 200 "unsent" CMS tasks (and about the same number for LHCb and Theory). However, when trying to download any, BOINC says "no tasks available".
Joined: 18 Dec 15 Posts: 1831 Credit: 119,660,110 RAC: 48,822
I guess now it seems clear what the problem is. Nils wrote in the Sixtrack thread a short time ago: "...Sadly yesterday and today the problem is much worse than usual as the response-time of our database is degraded and BOINC has problems pulling out tasks from the DB. At times BOINC clients will not be able to fetch any task from LHC@home, not even for the VM applications..."
Joined: 29 Aug 05 Posts: 1065 Credit: 7,940,489 RAC: 14,669
Joined: 18 Dec 15 Posts: 1831 Credit: 119,660,110 RAC: 48,822
Yes, that seems to be part of the problem. ...It's really too bad that since about mid-December, when the huge ATLAS disaster started (followed by all the Sixtrack problems), the whole LHC situation has become worse and worse. So far, CMS, LHCb and Theory were NOT affected - until the day before yesterday. Since then, ALL sub-projects have been having major problems :-(
Joined: 29 Aug 05 Posts: 1065 Credit: 7,940,489 RAC: 14,669
I got some tasks at 19:09, and the server status page is showing some tasks available. So far the number of running jobs hasn't dropped too far, but it remains to be seen if we can get back to the levels of last weekend. I need to submit a new batch of jobs tomorrow; before they take effect we might get a shot at some smaller test batches that are waiting to run, hopefully with some of them going to a new Tier-3 site we are setting up (it will be Laurence's farm eventually, but currently I believe just one VM is involved). If we get that to work, we will be a lot closer to integrating into the CMS Production team, at which point my need to monitor things continually will drop away (just in time for retirement?).
Joined: 18 Dec 15 Posts: 1831 Credit: 119,660,110 RAC: 48,822
Early yesterday evening, new CMS tasks could be downloaded :-) However, here is what I notice on my tasks list (website): for all CMS tasks that were finished and uploaded from about yesterday noon on, the "credit" column says "pending". With CMS, I never saw this before; normally the credit shows up a short time after upload. What's wrong?
Joined: 15 Jun 08 Posts: 2549 Credit: 255,457,580 RAC: 67,034
> ... What's wrong?

Nothing, as long as your tasks are in the validation queue. See: https://lhcathome.cern.ch/lhcathome/server_status.php

Task data as of 3 Feb 2018, 7:58:10 UTC
Workunits waiting for validation: 298
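For anyone who prefers to watch that number from a script rather than from the web page, here is a minimal sketch. It assumes the standard BOINC XML status export is enabled at server_status.php?xml=1 and that the counters sit under an element like database_file_states; the exact element names vary between BOINC server versions, so treat them as assumptions to check against the actual feed.

```python
# Minimal sketch (assumptions noted above): fetch a BOINC project's XML
# server-status feed and print a few queue counters, e.g. the validation
# backlog discussed in this thread.
import urllib.request
import xml.etree.ElementTree as ET

URL = "https://lhcathome.cern.ch/lhcathome/server_status.php?xml=1"

with urllib.request.urlopen(URL, timeout=30) as resp:
    root = ET.fromstring(resp.read())

# Counter names below are assumptions based on typical BOINC server output;
# adjust them to whatever the feed actually returns.
states = root.find("database_file_states")
if states is None:
    print("no <database_file_states> block found; dump the feed and check its layout")
else:
    for tag in ("results_ready_to_send",
                "workunits_waiting_for_validation",
                "transitioner_backlog_hours"):
        node = states.find(tag)
        print(tag, "=", node.text if node is not None else "n/a")
```

If the feed is unreachable (as it may be during the database problems described above), the request will simply fail; this is only a convenience for monitoring, not a fix.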
Joined: 15 Jun 08 Posts: 2549 Credit: 255,457,580 RAC: 67,034
> ... just in time for retirement ...

:'(
Joined: 18 Dec 15 Posts: 1831 Credit: 119,660,110 RAC: 48,822
> See:

It's hard to imagine that only 298 tasks are waiting for validation. If you consider how many Sixtrack tasks are being uploaded all the time, and then wait for validation for days or even weeks, the figure "298" is definitely wrong.
Joined: 28 Sep 04 Posts: 736 Credit: 49,884,924 RAC: 35,291
I agree with Erich56: the server status page now shows 327 pending, and I have about 650 pending validations across my 3 hosts.
Joined: 15 Jun 08 Posts: 2549 Credit: 255,457,580 RAC: 67,034
> See: It's hard to imagine that only 298 tasks are waiting for validation. If you consider how many Sixtrack tasks are being uploaded all the time, and then wait for validation for days or even weeks, the figure "298" is definitely wrong.

Well, in the end it's just a status flag in the server's database that is not yet set, for whatever reason (my guess: due to the high load you mentioned). Nothing you should be worried about, as the number is not very large, nor can you do anything on your side to speed up the process.
Joined: 29 Sep 04 Posts: 5 Credit: 3,043,759 RAC: 0
At the moment, the Server Status Page shows a Transitioner backlog of about 16 h, so I think that is the reason for the wrong WU status shown on the task list. I also see these false status flags on my task list for Sixtrack and ATLAS.
Joined: 2 May 07 Posts: 2245 Credit: 174,025,522 RAC: 9,726
The good news is: Sixtrack tasks are going down towards ZERO; at the moment: 177k. So, if this is real and no new Sixtrack tasks enter the pipeline, tomorrow we will see better performance for the VM projects :-))
Joined: 29 Aug 05 Posts: 1065 Credit: 7,940,489 RAC: 14,669
> The good news is:

The bad news is: the WMAgent is having problems. My monitor shows only about another 40 minutes before the Condor job queue is empty. Messages sent; I hope someone at CERN is checking their e-mails.
Joined: 2 May 07 Posts: 2245 Credit: 174,025,522 RAC: 9,726
Ivan, thank you, your work seems to be 24/7 ;-).
Joined: 29 Aug 05 Posts: 1065 Credit: 7,940,489 RAC: 14,669
Joined: 15 Jun 08 Posts: 2549 Credit: 255,457,580 RAC: 67,034
> The job queue is dry.

Confirmed. Thanks Ivan.