Message boards : LHC@home Science : Great article about BOINC at CERN
Toby Broom
Volunteer moderator

Joined: 27 Sep 08
Posts: 798
Credit: 644,666,060
RAC: 236,119
Message 25854 - Posted: 23 Sep 2013, 1:44:22 UTC

ID: 25854
Tom95134

Joined: 4 May 07
Posts: 250
Credit: 826,541
RAC: 0
Message 25855 - Posted: 23 Sep 2013, 2:51:48 UTC - in response to Message 25854.  

Lots of numbers crunched.

BTW, have we accidentally found the Lost Continent of Atlantis with 362 participants stuck someplace off-shore in the Atlantic Ocean? Or is this a group of people working through a link via the Bermuda Triangle?
ID: 25855
Ben Segal
Volunteer moderator
Project administrator

Joined: 1 Sep 04
Posts: 139
Credit: 2,579
RAC: 0
Message 25856 - Posted: 23 Sep 2013, 9:15:42 UTC - in response to Message 25855.  

Lots of numbers crunched.

BTW, have we accidentally found the Lost Continent of Atlantis with 362 participants stuck someplace off-shore in the Atlantic Ocean? Or is this a group of people working through a link via the Bermuda Triangle?


Sorry to disappoint you, but we put everyone in the Bermuda Triangle whose IP address we can't geolocate.

Ben
ID: 25856
alvin
Joined: 12 Mar 12
Posts: 128
Credit: 20,013,377
RAC: 0
Message 25857 - Posted: 24 Sep 2013, 2:15:47 UTC - in response to Message 25856.  

Ben
could you please confirm that no WUs are available at the moment?
Any estimate of when this might be resolved?
ID: 25857
Ray Murray
Volunteer moderator
Joined: 29 Sep 04
Posts: 281
Credit: 11,859,285
RAC: 1
Message 25858 - Posted: 24 Sep 2013, 17:25:06 UTC

Hi Costa,
This project regularly has periods when no work is available. During those times the team is analysing the work we have done, setting up the next batch, or fixing problems like the recent Linux errors. You can check the Server Status from the front page, or just leave your machine(s) attached and they will pick up more work when it becomes available. You could also have a look at Test4Theory, which simulates the collisions within the LHC. It's a bit trickier than simply attaching through BOINC, as you need to download another program first; check the boards over there for help and advice.
ID: 25858
alvin
Joined: 12 Mar 12
Posts: 128
Credit: 20,013,377
RAC: 0
Message 25859 - Posted: 25 Sep 2013, 4:07:32 UTC - in response to Message 25858.  
Last modified: 25 Sep 2013, 4:09:07 UTC

Thanks Ray

Yes, I'm aware of the periods without tasks; that's why I asked for a more detailed schedule. I have a processing farm of 25+ units, and it's not that easy to manage them all the time.
My main point is not to spread computing power across different projects, as might seem easiest, but to concentrate it on one. So it's a bit of a pain to see no tasks, with no clear idea of when there will be any.
As for Test4Theory tasks, yes, I have those too.
ID: 25859
henry

Joined: 15 Sep 13
Posts: 73
Credit: 5,763
RAC: 0
Message 25860 - Posted: 25 Sep 2013, 16:48:39 UTC - in response to Message 25859.  

Costa,

I have a farm of 50+ hosts which I find very easy to manage in spite of the unpredictable periods of no work and lack of a detailed schedule. Forgive me if I've missed the obvious but I don't understand how having a more detailed schedule could make management any easier for you even if you want to concentrate your processing power on 1 project.

Please explain to us (in some detail) how/why unpredictable task availability and lack of detailed schedule make things more difficult. I am curious to learn and know. Maybe we need to petition the BOINC developers for additional features?

IMHO, you will find the Sixtrack project admins don't have a schedule for releasing more tasks. I am quite certain you will find they won't release more tasks until they finish analysing the results from the previous batch of tasks. They have no idea how long that analysis might take nor do they immediately know upon completion of analysis what kinds of new tasks they should release. It's been pretty much like that since the early days of this project and IMHO that's not going to change soon.

If you wish to concentrate your processing power on Sixtrack, the best you can do is implement a script that causes your hosts to stalk and pounce on new Sixtrack tasks minutes after they become available and abort unstarted tasks from other projects. That's what Sixtrack diehards do.
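The "stalk and pounce" approach above can be sketched with the standard `boinccmd` tool that ships with the BOINC client: force a scheduler request to the preferred project, then abort not-yet-started tasks from other projects. This is only an illustrative sketch, not an official script; the project URL is a placeholder, and the field names parsed from `--get_tasks` output may vary between client versions.

```python
"""Hypothetical "stalk and pounce" helper, run on each host (e.g. from cron).

Assumes the standard boinccmd CLI. PROJECT_URL is a placeholder; substitute
the URL your hosts are actually attached with.
"""
import re
import subprocess

PROJECT_URL = "http://sixtrack.example/"  # placeholder, not the real project URL


def parse_tasks(text):
    """Parse `boinccmd --get_tasks` output into a list of dicts.

    Each task block starts with a line like `1) -----------`; fields are
    `   key: value` lines. Only the fields we need are kept.
    """
    tasks = []
    current = None
    for line in text.splitlines():
        if re.match(r"\d+\) -+$", line.strip()):
            current = {}
            tasks.append(current)
        elif current is not None and ": " in line:
            key, _, value = line.strip().partition(": ")
            if key in ("name", "project URL", "active_task_state"):
                current[key] = value
    return tasks


def pounce():
    # Ask the preferred project's server for work right away instead of
    # waiting for the client's normal backoff to expire.
    subprocess.run(["boinccmd", "--project", PROJECT_URL, "update"], check=True)

    # Abort tasks from other projects that have not started executing.
    out = subprocess.run(["boinccmd", "--get_tasks"],
                         capture_output=True, text=True, check=True).stdout
    for task in parse_tasks(out):
        url, name = task.get("project URL"), task.get("name")
        unstarted = task.get("active_task_state", "UNINITIALIZED") != "EXECUTING"
        if url and name and url != PROJECT_URL and unstarted:
            subprocess.run(["boinccmd", "--task", url, name, "abort"],
                           check=True)


if __name__ == "__main__":
    pounce()
```

Run it from a frequent cron job on each host; the cost of a wasted scheduler request is small compared to a host sitting idle after new tasks appear.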
ID: 25860
Magic Quantum Mechanic
Joined: 24 Oct 04
Posts: 1114
Credit: 49,501,728
RAC: 4,157
Message 25861 - Posted: 26 Sep 2013, 19:34:49 UTC


You can always use them to run the Einstein tasks since they never run out of work.


Volunteer Mad Scientist For Life
ID: 25861
alvin
Joined: 12 Mar 12
Posts: 128
Credit: 20,013,377
RAC: 0
Message 25862 - Posted: 27 Sep 2013, 11:33:26 UTC - in response to Message 25860.  

Costa,

I have a farm of 50+ hosts which I find very easy to manage in spite of the unpredictable periods of no work and lack of a detailed schedule. Forgive me if I've missed the obvious but I don't understand how having a more detailed schedule could make management any easier for you even if you want to concentrate your processing power on 1 project.

Please explain to us (in some detail) how/why unpredictable task availability and lack of detailed schedule make things more difficult. I am curious to learn and know. Maybe we need to petition the BOINC developers for additional features?


How are they physically arranged?

ID: 25862
alvin
Joined: 12 Mar 12
Posts: 128
Credit: 20,013,377
RAC: 0
Message 25863 - Posted: 27 Sep 2013, 11:39:12 UTC - in response to Message 25861.  


You can always use them to run the Einstein tasks since they never run out of work.



Yes, for sure.
But my aim is to put the maximum possible resources behind one clear priority.
Right now, LHC seems to me the most important project for science, ahead of the others.
It's great to map the 3D shape of the Milky Way or run some mathematical calculations, but I'm committed to immediate science, which will affect everything around us.
So my point is that maximising resources for LHC means that if I run other projects, I sacrifice the primary one.
Yes, LHC has a project priority of 300 while the others have 100 or less.
I'd rather not spread resources across tasks/projects that I'll only abort or suspend to give LHC its priority back.
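For context, BOINC's resource share is proportional rather than absolute: a project at 300 against two projects at 100 each gets the majority of the processing time, not all of it. A toy illustration (the project names and share values here are examples mirroring the 300-vs-100 figures above):

```python
# BOINC divides processing time roughly in proportion to resource share.
# Shares are illustrative, not a recommendation.
shares = {"LHC@home": 300, "Einstein@Home": 100, "Test4Theory": 100}
total = sum(shares.values())
fractions = {project: share / total for project, share in shares.items()}
print(fractions["LHC@home"])  # 300 / 500 = 0.6 of processing time
```

So a 300 share only concentrates work on LHC when LHC actually has tasks; when it doesn't, the client fills the remaining time from the other attached projects.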
ID: 25863
henry

Joined: 15 Sep 13
Posts: 73
Credit: 5,763
RAC: 0
Message 25865 - Posted: 28 Sep 2013, 2:09:02 UTC - in response to Message 25862.  

Costa,

I have a farm of 50+ hosts which I find very easy to manage in spite of the unpredictable periods of no work and lack of a detailed schedule. Forgive me if I've missed the obvious but I don't understand how having a more detailed schedule could make management any easier for you even if you want to concentrate your processing power on 1 project.

Please explain to us (in some detail) how/why unpredictable task availability and lack of detailed schedule make things more difficult. I am curious to learn and know. Maybe we need to petition the BOINC developers for additional features?


How are they physically arranged?



Hehe... I asked first. Answer my question then I will answer yours :-)

ID: 25865
henry

Joined: 15 Sep 13
Posts: 73
Credit: 5,763
RAC: 0
Message 25866 - Posted: 28 Sep 2013, 3:08:55 UTC - in response to Message 25863.  
Last modified: 28 Sep 2013, 3:11:04 UTC


You can always use them to run the Einstein tasks since they never run out of work.



Yes, for sure.
But my aim is to put the maximum possible resources behind one clear priority.
Right now, LHC seems to me the most important project for science, ahead of the others.
It's great to map the 3D shape of the Milky Way or run some mathematical calculations, but I'm committed to immediate science, which will affect everything around us.
So my point is that maximising resources for LHC means that if I run other projects, I sacrifice the primary one.
Yes, LHC has a project priority of 300 while the others have 100 or less.
I'd rather not spread resources across tasks/projects that I'll only abort or suspend to give LHC its priority back.


First of all, there is absolutely NOTHING wrong with aborting tasks. YOU own your hosts and pay the bills, therefore YOU decide which programs will run, which programs will die, and when. Downloading a task does not imply a promise on your part to complete it. If you abort a task, they just send it to another host, no problem. Ignore all the whining about wingmen having to wait for their credits; the credits are worthless, so it doesn't matter whether they receive them now, 10 weeks from now or 10 years from now. However, if it bothers you to the point that you can't sleep at night, then keep a VERY small cache so you abort/suspend only a few tasks.

Secondly, you are trying to do the impossible and the reason it is impossible is simple.... Sixtrack doesn't have a steady supply of tasks and they never will. They will never have a schedule either. If I am wrong then one of the admins will correct me but I am confident they agree with me.

Finally, the work Sixtrack does is just one small part of the work that goes on at the LHC. If you want to help the LHC as much as possible then join the Test4Theory project which also does work for the LHC and almost never runs out of work. IMHO, the work at Test4Theory is even more important and interesting than the work here at Sixtrack.
ID: 25866

©2024 CERN