Message boards : Number crunching : I think we should restrict work units
Joined: 28 Sep 05 · Posts: 21 · Credit: 11,715 · RAC: 0
I do not appreciate your personal attack! Perhaps I misunderstood your post. My response was not a personal attack but merely a comment on the practice of deliberately manipulating the BOINC client so as to obtain an unusually large cache of WUs. My comment remains the same: I've said what I 'heard' when I read your post, and my opinion has not changed. I am allowed to disagree with what you've said. Persuade me that I'm wrong.
Irrelevant.
Irrelevant.
Irrelevant.
I did not suggest that you did.
This remark is too vague - perhaps you'll tell us what your idea of 'same opportunity' is. I'd tell you what I think it is, but I'm sure you'd assume I was attacking you again. I can argue anything I like without your permission, even if I'm wrong or ill-informed. Who are you - the forum police?
This last remark (bold) is simply insulting, rude, uncalled for and irrelevant.
My remark specifically addresses the 'grab as many as you can while they're available' attitude. Clearly there is no problem when WUs are in abundance.
Sorry, that was just my impression of what you wrote. Perhaps I am wrong.
Joined: 1 May 06 · Posts: 34 · Credit: 64,492 · RAC: 0
*sigh* When are people going to realise that most of these delays are caused by lost WUs that need to be reissued? No matter how careful people are, and no matter what they do, some WUs won't process properly for one reason or another (e.g. a crashed computer). No matter how fast everyone returns the rest of the WUs, we still have to wait for any that got lost to be reissued and crunched. The way to speed this up is shorter deadlines, not smaller caches. Use your brains for a change...
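A toy model makes the deadline argument concrete. The sketch below is an illustration under stated assumptions (a fixed loss rate, losses detected only when the deadline expires), not BOINC's actual scheduler logic, and all the numbers are invented:

```python
import math

def batch_completion_days(deadline_days, loss_rate, crunch_days=1.0):
    """Rough expected days until every WU in a batch is returned.

    Each reissue round costs a full deadline wait; with loss rate p,
    about log(eps)/log(p) rounds are needed before the chance of yet
    another lost result drops below eps.
    """
    eps = 1e-3  # tolerate a 0.1% chance of needing one more round
    rounds = math.ceil(math.log(eps) / math.log(loss_rate))
    return rounds * deadline_days + crunch_days

# Halving the deadline roughly halves the wait; faster crunching of the
# healthy results barely moves it:
print(batch_completion_days(deadline_days=7.0, loss_rate=0.05))   # 22.0
print(batch_completion_days(deadline_days=3.5, loss_rate=0.05))   # 11.5
```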
Joined: 1 Sep 04 · Posts: 14 · Credit: 3,857 · RAC: 0
Looking at the preferences for LHC, I noticed that the cache setting isn't in the LHC@home preferences but rather in the general preferences, and it applies to all attached BOINC projects in the home/school/work category. So using such a high cache isn't an LHC idea but a general setting for the BOINC program; you just access that setting from any project (Einstein, LHC, SETI). Saying that they should change it isn't all that fair - that setting is for the program itself, not LHC. This I copied from that page: "These apply to all BOINC projects in which you participate. On computers attached to multiple projects, the most recently modified preferences will be used." So turning a cache up to 10 days turns all projects up to 10 days.

On projects that almost always have WUs, and don't rely on them being processed before making new ones, caching doesn't hurt anything or anyone. But this project is different: it only releases so many and waits for them to be crunched before more WUs are released. To me, over-caching doesn't make sense on this project. I've had 3 units in 5 weeks (so nice to finally help LHC :)). Why would you not want to utilise all available resources - the whole community of willing crunchers, all crunching data, instead of a small few whose units take a lot more time sitting in their cache not being crunched? It's taken you guys 3 days to do this recent batch of small WUs. Yep, seems you're a lot faster than all those without units who could be helping YOU GUYS do them.

I don't think I deserve any more units than anyone else; it would be nice to get some once in a while - it's been 3 weeks now since my last unit, and that was only 1 that time. And turning up my settings won't help anything: no units now and I'm on the default setting, so what will I download more of by turning it up? Air?
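That "one global knob" point can be checked directly on disk. A minimal sketch, assuming the client keeps its merged preferences in global_prefs.xml with a work_buf_min_days element (the usual BOINC data-directory layout; the path below is a guess for a Linux install - adjust for yours):

```python
import xml.etree.ElementTree as ET

# One cache value for ALL attached projects, not one per project.
PREFS = "/var/lib/boinc-client/global_prefs.xml"  # assumed location

def cache_days(path=PREFS):
    elem = ET.parse(path).getroot().find(".//work_buf_min_days")
    return float(elem.text) if elem is not None else None

print(cache_days())  # e.g. 10.0 if someone turned the cache up to 10 days
```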
Joined: 30 Dec 05 · Posts: 57 · Credit: 835,284 · RAC: 0
Happy to get ONE work unit in 4 weeks. This time I had the opportunity to monitor the computer over a quarter of an hour. The distribution of work units lasted only some minutes - enough time for my DSL line to download a new software version and get one share. It will be ready for upload in about 45 minutes. Could we get an overview of how these work units are distributed over the participants? Are there more participants than WUs to share? I think we should restrict work units. Thanks, Jochen from Old Germany
Joined: 10 Dec 05 · Posts: 1 · Credit: 216 · RAC: 0
I've just got one WU in a month and a half. The Boincwoman. Member of Boinc@denmark
Joined: 4 Sep 05 · Posts: 112 · Credit: 2,068,660 · RAC: 66
I'm not sure if I followed all that, bowlingguy300, so I'll just "respond" to what I think I understand, if you know what I mean.

"Looking at the preferences for LHC, I noticed that the cache setting isn't in the LHC@home preferences but rather in the general preferences and applies to all attached BOINC projects..." Sorry, I thought that was known. I should have added it to one of my posts not so long ago.

"...to me, over-caching doesn't make sense on this project..." Increasing the cache reduces the chances of getting LHC work units: it will be filled with other projects' work units and is unlikely to take on more work. There is even a chance that the BOINC client may wait up to 10 days to check, by which time any LHC work units will be gone. NOTE: bowlingguy300, I'm not highlighting this to insult you, but I would like others to notice that point.

"I don't think I deserve any more units than anyone else; it would be nice to get some once in a while..." I feel that way too. I have 5 PCs running, and just a little while ago there were over 15,000 LHC work units waiting. One of my PCs was looking for work at that time and got about 10 work units; the other four have got nothing so far and probably won't. They seem to be crunching a lot of work units for 3 or 4 other projects just at the moment. I don't know what to do about that. My cache is at 0.25 at the moment, and LHC resources are at 20% to 40% depending on the PC. Seems to me that should have grabbed plenty, but it didn't, and there are still 14,437 work units in progress. It's hard for anyone to get work units here.

Click here to join the #1 Aussie Alliance on LHC.
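A rough sketch of why that combination fetches so little: to a first approximation (a simplification, not BOINC's real work-fetch algorithm), a host asks each project for at most its cache size scaled by that project's resource share:

```python
def max_request_hours(cache_days, resource_share_fraction):
    """Crude upper bound on hours of work requested from one project."""
    return cache_days * 24 * resource_share_fraction

# A 0.25-day cache at a 20% LHC share asks for at most ~1.2 hours of
# work - about one short WU - so four out of five hosts coming away
# empty from a brief burst of 15,000 WUs is not surprising.
print(max_request_hours(0.25, 0.20))  # 1.2
```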
Joined: 4 Sep 05 · Posts: 112 · Credit: 2,068,660 · RAC: 66
Let's all watch this video - Seed Short Film: Lords of the Ring, an exclusive tour of the underground accelerator at CERN led by the scientists who work there - and see if we can find out where the WUs are disappearing to so quickly... Unfortunately QuickTime stuffed me around again. It seems every time I use it, it wants a new bit that it can't get! Not on the server this time :-( Sounded good though. ;-)

Click here to join the #1 Aussie Alliance on LHC.
Joined: 30 May 06 · Posts: 40 · Credit: 220,215 · RAC: 0
Yeah, I saw this video and it's just great - both scientifically interesting and amusing. For SC>Mike Mitchell: try MPlayer instead for playing MOVs.
Joined: 2 Oct 04 · Posts: 9 · Credit: 36,319 · RAC: 0
This guy got 50 new ones today! http://lhcathome.cern.ch/show_host_detail.php?hostid=90486 Let's beat him up and take his work units!
Joined: 18 Sep 04 · Posts: 47 · Credit: 1,886,234 · RAC: 0
I got about 30 (between two machines) today, and they're chugging along. They all seem to be "short ones" and take a bit over an hour each. With my resource share, basically little but LHC will be done until they're gone, and that will likely be by midday tomorrow.
Joined: 2 Sep 04 · Posts: 378 · Credit: 10,765 · RAC: 0
"This guy got 50 new ones today!" At the time of posting, he's completed 14 of those, so he should be finished with them by the end of the week at that rate. Dual CPUs must be nice. Some work units in February, some in June, some in July. With 50,000 or so user ID numbers, it's a random draw whether you get work units or not. I wouldn't accuse this computer of 'hoarding' because it's missed a few work-unit runs since February. In my opinion, it's just luck.

I'm not the LHC Alex. Just a number cruncher like everyone else here.
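The "random draw" intuition can be put into numbers. A hedged sketch: if W work units are handed out one at a time, uniformly at random, across N requesting hosts (both figures below are illustrative, loosely taken from numbers mentioned in this thread; real scheduling is not uniform):

```python
import math

def p_at_least_one(wu_count, host_count):
    """Chance that one given host receives at least one WU."""
    return 1.0 - math.exp(-wu_count / host_count)

# ~15,000 WUs across ~50,000 hosts: about a 1-in-4 chance per batch,
# so going weeks without work is entirely consistent with plain luck.
print(p_at_least_one(15_000, 50_000))  # ~0.26
```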
Joined: 2 Oct 04 · Posts: 9 · Credit: 36,319 · RAC: 0
Of course it's luck. I thought my saying that we should beat him up and take his work units would have been a clue that I was joking -_-
Joined: 24 Nov 05 · Posts: 12 · Credit: 8,333,730 · RAC: 0
Perhaps this is a social or psychological experiment and not one of physics... I posted the response below to the Synergy board, as there were complaints there too, mostly quandaries. Excellent people, but not as personally involved in particle physics as we are. My apologies if this goes "under" your heads :)

Why LHC Starts and Stops

I'll admit, I like to crunch for numbers (as well as to further CERN). I consider LHC my pet project. I just think of biology projects as yucky; I always hated biology. I took LHC off of my three dinosaurs - it didn't take too long to see they couldn't take it. I did add three more P4 HTs to the two I was using, only to not see enough WUs show up since. I am still waiting to see them strut their stuff.

I don't know what this "cache" thing is. I just checked my SETI specifications and didn't see cache mentioned anywhere. Is it the percentage of memory used? I will admit, I do stuff WUs, but I can get them done in 2 days. I didn't know there was a problem until recently, when I saw the low number of WUs out there and couldn't grab any of them. Just now there were about 500, and within minutes it went to 11,000, but all were taken. If I slow down CERN, I will surely back off, but I haven't seen any of my numbers busting the ones members here have set out.

Wasn't this whole thing about the computing power we bring to CERN? If we weren't here, just how much further "behind" would they be on only their own processors? One way or another, we are a valuable asset to them, not a liability.

PS: Right before people went on vacations etc. for the summer, distributed computing put out over 400 teraflops. The fastest supercomputer in the world puts out about 280 teraflops, making us the fastest computer in the world.

I am a Geek. We surf in the dark web pages no others will enter. We stand on the router and no one may pass. We live for the Web. We die for the Web.
Joined: 21 May 06 · Posts: 73 · Credit: 8,710 · RAC: 0
.... What happens when two particles traveling in opposite directions at nearly the speed of light hit each other? A collision at nearly twice the speed of light! Hmmm... sort of like setting your "Connect to network about every..." to 20 (twice the maximum allowed...).

It seems so! Take a look at your general preferences for: "Connect to network about every (determines size of work cache; maximum 10 days)". Bigger numbers allow you to get more work. But you should set your "Connect to network about every" to no more than one-tenth of that maximum. That is the "fair" thing to do...
Joined: 27 Apr 06 · Posts: 26 · Credit: 13,559 · RAC: 0
To Galeon 7: Thanks for posting that excellent "Why LHC Starts and Stops" post. Very informative. You mentioned two tracks in that post. If we are crunching SixTrack, I suspect we will be reconfiguring and recrunching for a very long time? D
Joined: 1 Sep 04 · Posts: 506 · Credit: 118,619 · RAC: 0
Not so. The beams do indeed run parallel within the same tunnel. There is no 'exit point'. Rather, at points around the ring the beams cross, and it is at these points that the real experimental science takes place. As the beams cross there is an occasional collision. I say 'occasional' for a reason: most particles in the beam just sail straight through and continue round the accelerator. 'Most' is very nearly all - maybe all of them, most of the time. But because of the immense speeds involved, any given particle passes the collision points more than ten thousand times per second, so the possibility of a collision within a sensible time frame becomes workable. There is a separate modelling program (Geant4) that models the behaviour of particles and their products at the collision point. This was the subject of a BOINC porting exercise, but the computing environment required will defeat most home computers.

Gaspode the UnDressed http://www.littlevale.co.uk
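A back-of-envelope figure for that crossing rate, with round numbers assumed (a 27 km ring and particles at essentially the speed of light; neither value comes from the post above):

```latex
f_{\mathrm{rev}} \approx \frac{c}{C}
  \approx \frac{3\times 10^{8}\ \mathrm{m/s}}{2.7\times 10^{4}\ \mathrm{m}}
  \approx 1.1\times 10^{4}\ \mathrm{s^{-1}},
\qquad
N_{\mathrm{per\ hour}} \approx f_{\mathrm{rev}} \times 3600 \approx 4\times 10^{7}.
```

So even a vanishingly small interaction probability per crossing adds up to a workable collision rate over a run of hours.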
Joined: 24 Nov 05 · Posts: 12 · Credit: 8,333,730 · RAC: 0
"... It seems so! Take a look at your general preferences for: 'Connect to network about every (determines size of work cache; maximum 10 days)'. Bigger numbers allow you to get more work. But you should set your 'Connect to network about every' to no more than one-tenth. That is the 'fair' thing to do..."

Then I would assume that 0.04 is OK?

I am a Geek. We surf in the dark web pages no others will enter. We stand on the router and no one may pass. We live for the Web. We die for the Web.
Joined: 24 Nov 05 · Posts: 12 · Credit: 8,333,730 · RAC: 0
Mike, you know, teaching Physics 101 to grad students really sucks :) Yeah, what I said does imply an exit point. Crossing the beams (magnetic deflection) in ALICE, ATLAS, LHCb and CMS are the exits I am referring to. With a drawing it is easy to see, but without one it's a little hard to envision that something inline can actually be an exit point. You know they tried doing this in Ghostbusters and almost destroyed the space/time continuum, don't you? And it's all your fault!

Note to self: never try to explain the difference between a quark and a qwerk to a physicist and a psychologist, because then they will get married and bring about the end of the universe. Philosophically speaking, of course.

I am a Geek. We surf in the dark web pages no others will enter. We stand on the router and no one may pass. We live for the Web. We die for the Web.