Message boards : Number crunching : no more work?
Joined: 15 Oct 11 Posts: 3 Credit: 1,292,103 RAC: 0
OK, but that still doesn't explain why I've seen zero tasks for the last 5 days. Before this the longest outage was three days, and after I upped the number of days buffered to two I hadn't seen an outage longer than two days. Now it's 5. Just wondering. http://lhcathomeclassic.cern.ch/sixtrack/results.php?hostid=9939655
Joined: 2 Sep 04 Posts: 209 Credit: 1,482,496 RAC: 0
OK, but that still doesn't explain why I've seen zero tasks for the last 5 days. Before this the longest outage was three days, and after I upped the number of days buffered to two I hadn't seen an outage longer than two days. Now it's 5. Just wondering.

Remember that normal tasks have a 7-day deadline. There are 12,000-plus hosts attached, but that does not say how many CPU cores each host has: 1, 2, 4, 6, 8, 12, or 24. It does not take long, sometimes minutes, for less than half of those hosts to snatch up 5,000 or 12,000 tasks at one task per core. However, it can take up to 7 days for all of those to clear out; then some will be over deadline and reissued, only to be snatched up by the next waiting hosts, and when a new small batch appears it gets snatched up as well. To you it might look like the same 5,000 in progress, but it can be a different 5,000 or a mix of the two. This project issues batches of work, and the results need to be analyzed by a human before more work goes out; Eric has also been on vacation, if you read his status thread. Periods with no new work issued here are normal. In fact, since the restart of the project back at CERN, it has had more work than usual. Small batches have been issued in the last 5 days; it just happens that some of the other 11,999 hosts got the work before yours made another request.
Joined: 25 Jan 11 Posts: 179 Credit: 83,858 RAC: 0
I upped the number of days buffered to two and hadn't seen an outage longer than two days. Now it's 5. Just wondering.

A bigger buffer won't help you get more work at this project; in fact, a large buffer can even decrease the amount of work you receive. Sixtrack allows you to have only 1 task per core at a time. If you set your buffer too high, tasks will sit around in your cache and increase your task turnaround time, and if your turnaround time is too high the server won't issue resends to your computer, because resends go only to machines with short turnaround times. In addition to what Keith said, there are other reasons why you can go for 5 days with no work when it's obvious work has been issued to other computers. Every time your computer asks Sixtrack for work and doesn't receive any, it doubles the delay before the next request, up to a maximum of 24 hours. While your computer is waiting 10 hours (for example) to request more work, a huge batch of 90-second tasks can be gobbled up, crunched and returned by other computers. I've seen my 8-core machine run off 150 short tasks in less than 20 minutes: if it happens to request work within a few minutes of a batch of shorties being loaded into the queue AND it's hungry for Sixtrack tasks because it hasn't had any for a while, it just keeps requesting work and getting it without increasing its delay between requests. If a couple of thousand machines in the pool all line up that way, it's like a shark feeding frenzy. It's a matter of lucky timing. As always, if you have certain skills (know how boinccmd.exe works, can write batch files and schedule them with the Windows Task Scheduler), you can shift the odds in your favor, drastically in your favor, and be one of the sharks who just happens to be in the right place at the right time, all the time ;-)
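For illustration, here is a minimal sketch of the kind of automation alluded to above, written in Python rather than as a batch file. It is not the poster's actual setup: it assumes the standard boinccmd client tool is installed and on the PATH (where `boinccmd --project URL update` asks the local client to contact that project's scheduler right away), and the project URL, polling interval and script itself are assumptions for the example.

```python
# Hypothetical helper: nudge the BOINC client to contact the Sixtrack
# scheduler on a fixed interval instead of waiting out the client's own
# doubling back-off. Assumes boinccmd is installed and on the PATH;
# the URL and interval are illustrative only.
import subprocess
import time

PROJECT_URL = "http://lhcathomeclassic.cern.ch/sixtrack/"
POLL_SECONDS = 15 * 60  # every 15 minutes (assumption)

def request_update() -> None:
    """Ask the local BOINC client to contact the project scheduler now."""
    subprocess.run(
        ["boinccmd", "--project", PROJECT_URL, "update"],
        check=False,  # a failed contact is not fatal; just try again later
    )

if __name__ == "__main__":
    while True:
        request_update()
        time.sleep(POLL_SECONDS)
```

A batch file run by the Windows Task Scheduler, as the poster describes, achieves the same effect; note that most projects would rather volunteers not poll much more aggressively than the client already does.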
Joined: 22 Oct 08 Posts: 26 Credit: 75,214 RAC: 0
I upped the number of days buffered to two and hadn't seen an outage longer than two days. Now it's 5. Just wondering.

I only crunch LHC@Home 1.0 on alternate weekends, and work has been a little dry lately. That said, last night I allowed Sixtrack to fetch more work, and even though the status page said none were available, it got a couple of tasks; good timing, I guess. One second the feeder can be full and the next empty, so it can be very much the luck of the draw.
Joined: 15 Oct 11 Posts: 3 Credit: 1,292,103 RAC: 0
Well, okay then. I've bumped the backlog back down to 1 day and will not worry about what tasks are available. I will, however, look for another BOINC-based project to soak up the three available cores (I allocated 7 of the 8 cores, at 50% usage, to BOINC-based tasks). Any suggestions? I'd be looking for a computationally intensive set of tasks to balance the data-intensive one I'm already committed to.
Joined: 17 Feb 07 Posts: 86 Credit: 968,855 RAC: 0
Einstein@home always has work to crunch. Greetings from TJ
Joined: 25 Jan 11 Posts: 179 Credit: 83,858 RAC: 0
A 1-day cache is good. I use a 0.1-day cache and so do many other volunteers. If you're going to explore other projects, then a small cache is a good idea, because some projects severely underestimate how long their tasks take, so you can end up receiving more work than you can possibly finish before the deadline. A 1-day cache should avoid that problem for you. I see you have an i7-2600K @ 3.4 GHz. So do I. It's up to you, but with that much muscle in the CPU there is probably no need to cut your CPU usage back to 50%. Most BOINC projects run at low priority while your regular apps run at normal priority or higher. That means when your applications (e.g. movie player, web browser, etc.) need the CPU they get it immediately, and the BOINC-related apps get nothing until your apps don't need the CPU. Thus BOINC lives on your spare CPU cycles. You could probably bump that 50% back up to 100% and, if you find your computer is at all sluggish, free up another core so that BOINC gets only 6 of the 8. You should also have the "While processor usage is less than __ %" setting at 0, otherwise BOINC will constantly suspend and resume the science apps. I let BOINC run on all 8 cores at 100% usage and never experience any slowdown, but you might run CPU-intensive games; I don't.
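For illustration, here is roughly how the settings discussed above could be written into BOINC's local preferences override file. This is a sketch under assumptions, not part of the post: the element names are the ones the client's `global_prefs_override.xml` used around that era, the values mirror the advice given, and the data-directory path and Python wrapper are assumptions; verify against your own client before relying on it.

```python
# Sketch: write a global_prefs_override.xml matching the advice above.
# The element names (work buffer, % of processors, CPU throttle,
# suspend-on-CPU-usage) are BOINC's local override settings as commonly
# documented; the data directory is the usual Windows default (assumption).
from pathlib import Path

OVERRIDE = """<global_preferences>
  <work_buf_min_days>1.0</work_buf_min_days>              <!-- 1-day cache -->
  <work_buf_additional_days>0.0</work_buf_additional_days>
  <max_ncpus_pct>87.5</max_ncpus_pct>                      <!-- 7 of 8 cores -->
  <cpu_usage_limit>100.0</cpu_usage_limit>                 <!-- no 50% throttle -->
  <suspend_cpu_usage>0.0</suspend_cpu_usage>               <!-- the "less than __ %" box at 0 -->
</global_preferences>
"""

boinc_data_dir = Path(r"C:\ProgramData\BOINC")  # default BOINC data dir on Windows
(boinc_data_dir / "global_prefs_override.xml").write_text(OVERRIDE)

# Then tell the running client to re-read the file, for example:
#   boinccmd --read_global_prefs_override
```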
Joined: 2 Sep 04 Posts: 4 Credit: 867,126 RAC: 0
The deadline should be made shorter. Most of the work gets returned quickly, but then there are the last ~2,000 tasks that just sit in caches for 7 days before they time out. Change it to 4 or 5 days for the first try and 2 days for resends. That way the scientists don't need to wait so long and we get credit faster.
Joined: 22 Aug 10 Posts: 2 Credit: 2,353,917 RAC: 0
Einstein@home always has work to crunch.

Einstein@home is the best project, with tasks for both CPU and GPU to crunch. You can also join Milkyway@home. They are sometimes down over the weekend, but they offer single-threaded and multi-threaded CPU tasks as well as GPU tasks, which can only be crunched on video cards that have double-precision capability.
Joined: 12 Sep 11 Posts: 38 Credit: 218,154 RAC: 0
I have seen 0 tasks ready to send, and a steadily decreasing number of tasks in progress, over the past several days. My last task came on December 9 and I only got 1. The number in progress has gone from 2,000-3,000 down to 519. Is this possibly related to the switch in electricity that's going on?
Joined: 25 Jan 11 Posts: 179 Credit: 83,858 RAC: 0
Probably nothing to do with the electricity switch. This project is a stop-and-go affair and probably always will be, for good reason. The pattern is: they issue a batch of tasks, wait for all the results to be returned, take time to study them, and later issue another batch whose composition is based on their analysis of the previous batch's results.
Joined: 17 Dec 11 Posts: 5 Credit: 45,682 RAC: 0
Hi all, just joined, but already out of work. Anything due in over the next few days?
Joined: 2 Sep 04 Posts: 209 Credit: 1,482,496 RAC: 0
I don't really know. With Christmas and New Year's approaching, I really would not expect much until after that, as the people there need time off for family too.
Joined: 26 Nov 11 Posts: 3 Credit: 1,980,118 RAC: 0
Hi, I got several tasks yesterday and today, and some are still in progress. It probably all depends on when your BOINC client asks for new tasks, because there are fewer tasks than available CPUs. It is good for the LHC to have plenty of hungry CPUs. Remark: at this time Test4Theory has tasks ready to be sent ;-)
Joined: 13 Jul 05 Posts: 143 Credit: 263,300 RAC: 0
This is where the BOINC manager comes in handy -- it picked up packets of work even though I was sound asleep ... yea! Work coming in! If I've lived this long, I've gotta be that old
Joined: 4 Aug 08 Posts: 14 Credit: 278,575 RAC: 0
Haven't seen work for the last couple of days; the last work was sent back a few days ago. Maybe I should add another host account; I have 2 switched off and 2 running. Knight Who Says Ni N!
Joined: 4 May 07 Posts: 250 Credit: 826,541 RAC: 0
Haven't seen work for the last couple of days; the last work was sent back a few days ago.

Tasks for LHC can be kind of spotty. I got a few yesterday and earlier today, but nothing is available now. It might be that things have quieted down because of the holidays, but don't get anxious if you don't see work for a few days. It can also get snapped up pretty quickly when some becomes available. You might want to consider joining Test4Theory: http://lhcathome2.cern.ch/test4theory/ It is a project associated with CERN, but it runs its work inside an Oracle VirtualBox virtual machine. The project is still sorting some issues/features out, but it does have work units on a regular basis and is stable.
Joined: 13 Jul 05 Posts: 143 Credit: 263,300 RAC: 0
I did find this note posted on the Test4Theory main page:

"News: CERN end-of-year shutdown. Starting tomorrow, December 22nd, CERN closes for the holiday period and reopens on January 5th, 2012. During this time, our project will operate on a "best-effort" basis. No special action is needed by volunteers, but the flow of jobs may be interrupted and the T4T web site itself may not respond in the case of power cuts. We send our holiday and New Year greetings to all our volunteers and thank everyone who contributed to T4T's very successful first year of growth and development. We have a New Year present ready to announce when we come back: a major T4T revision including much more detailed information for volunteers on the CERN jobs they are running. See you next year, The team!"

Since I don't have any of the virtual machine software loaded on my machine, I won't be joining Test4Theory. If I've lived this long, I've gotta be that old
Joined: 8 Jun 07 Posts: 13 Credit: 250,850 RAC: 0
Now it's the New Year; when do we get new work? Greetings.
Joined: 4 May 07 Posts: 250 Credit: 826,541 RAC: 0
CERN is on holiday until the 5th.