1) Message boards : Number crunching : Gone in a flash! (Message 16816)
Posted 4 May 2007 by Profile JigPu
Post:
I snagged 52 WUs from this last batch o.O Based on the low numbers everybody else is getting, I double-checked to make sure my settings weren't stupid, but they look fine.

Guess it was a "perfect storm" in my case:
LHC: -430,000 seconds long-term debt (offered up a lot of short WUs)
CPDN: 260,000 seconds long-term debt (and it's been crunching since SETI went offline, so high short-term debt as well)
SETI: Unavailable
2) Message boards : Number crunching : My comps got work >:P (Message 12074)
Posted 16 Jan 2006 by Profile JigPu
Post:
all of my machines have work :)


You're lucky - mine is having problems downloading.

Ditto -- though unlike you, mine is caused by an LTD of -500 seconds still hanging around :D A bit more CPDN and my main box should be back to sending those particles round-and-round again :)
3) Message boards : Number crunching : Boinc farms. (Message 11709)
Posted 21 Dec 2005 by Profile JigPu
Post:
Athlon XP1800+
P4 2.8GHz Prescott (...laptop :eek:)

It's still only an herb garden compared to some of you guys' farms, but it works well enough for me :)
4) Message boards : Number crunching : Host corruption (Message 11004)
Posted 26 Oct 2005 by Profile JigPu
Post:
Corrupted Hosts: 28041, 27461 (others not corrupted, but no longer running LHC)

Last Connections: 25 Oct 2005 4:15:19 UTC, 24 Oct 2005 12:19:31 UTC (respectively)

BOINC Versions: Unknown, probably 4.45?? (XP Pro), 4.68 (XP Home)


UN-corrupted values (everything else is wrong):
* IP Address
* Domain Name
* Local Standard Time
* Name
* Created
* Total Credit
* Recent Average Credit
* Results
* Number of times client has contacted server
* Last Time Contacted Server
* % of time BOINC client is running
* While BOINC running, % of time work is allowed
* Average CPU efficiency

Puffy
5) Message boards : Number crunching : New kind of workunits? (Message 10704)
Posted 12 Oct 2005 by Profile JigPu
Post:
To continue with the dreary prospects for SETI, it's not only a matter of listening to the right portion of the sky at the right frequency at the right time, but also of actually detecting the signal.

Arecibo may be huge, and SETI nice and sensitive, but you're not going to be picking up incidental RF "leakage" at any distance. There's an FAQ out there that happens to mention theoretical detection distances for various transmitters. The only signals that SETI could detect from more than 1 light-year away are ones which are very narrow-band and very powerful. A 1GW EIRP 0.1Hz "wide" signal should be detectable from 5 light-years, and a 1TW EIRP signal from 150 light-years. The chances though that such a powerful signal is going to be directed at us (and furthermore, remain on us when Berkeley gets around to confirming the signal) are, well, abysmal.

I do still run SETI though for the very reason Travis DJ mentioned. The probabilities may be infinitesimal, but it would turn the world on its head. Definitely worth at least some of my CPU time! :)

Puffy
6) Message boards : Number crunching : When will Units = zero (Message 9953)
Posted 6 Sep 2005 by Profile JigPu
Post:
All out of work here, so might as well revise the countdown :D
<blockquote>26654 units in progress @ 09:54 UTC

Units difference = 810
Time diff. = 51 min.

That's ~15.88 units/min, so I estimate the units will be done in 27 hours
=> 7 sep. 12:54 UTC
</blockquote>
16565 units in progress @ 22:09 UTC

Units difference = 10089
Time diff. = 11 hrs. 24 min. (684 min. total)

~14.75 units/min => 1123 minutes projected (18 hrs. 43 min.) => 7 Sep. 16:52 UTC
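The countdown above is just a linear extrapolation of the queue drain rate between two snapshots. A minimal sketch in Python (the function name is my own invention):

```python
def project_finish(units_now, units_prev, elapsed_min):
    """Estimate the queue drain rate and minutes until empty,
    by linear extrapolation between two snapshots."""
    rate = (units_prev - units_now) / elapsed_min  # units per minute
    return rate, units_now / rate  # (rate, minutes remaining)

# The two snapshots from this post: 26654 units, then 16565 units 684 min later.
rate, minutes_left = project_finish(16565, 26654, 684)
# rate = 14.75 units/min; minutes_left ~ 1123 (about 18 hrs 43 min)
```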

Puffy
7) Message boards : Number crunching : LHC@Home is NOT a science project (Message 9802)
Posted 1 Sep 2005 by Profile JigPu
Post:
Amen.

If your computer can't cope with the deadlines, by all means suspend or detach from the project (it's a lot nicer than being forced to micromanage!). However, assuming you're not on dial-up, I'd also try figuring out why BOINC isn't doing its job correctly (and earliest-deadline-first mode IS correct functioning if it returns results before deadline ;)) since it should be quite capable of completing work before it's due (especially with LHC's extraordinarily incorrect predictions, BOINC's built-in 80% fudge factor, and the notoriously low system-uptime/boinc-uptime numbers BOINC generates).

Puffy
8) Message boards : Number crunching : Network connection interval greater than wu deadline (Message 9801)
Posted 1 Sep 2005 by Profile JigPu
Post:
<blockquote>now I work with changed settings of 3 days and still get the same amount of work for one day...
.
.
.
also my WUs always come in with "will be finished in 6 hours",
but most of the time I finish the WUs in 50% of that time; most WUs finish in 3-4 hours...
</blockquote>
That's not a BOINC problem, that's an LHC problem. Because of how variable the work units are, it's difficult to know beforehand just how long one will take. Because of this, LHC provides a worst-case guess, which is usually off by quite a bit. I usually finish an LHC WU in a hair over 5 hours, yet the estimate is over twice that. Just the nature of the beast.

<blockquote>this whole system of WUs and % of worktime is a piece of junk...</blockquote>
It seems to be a bit buggy (as can be seen with BOINC claiming 50% uptime for your 24/7 cruncher), but the concept is definitely nice to have IMO. A computer that is on (or has BOINC open) only 6 hours a day is going to need a lot less work to fill an N-day cache than a 24/7 cruncher.


<blockquote>just one question:
where is the damn button where I can reset this BOINC junk % time counter to ZERO
so it notices I am no longer on vacation and back 24h a day for LHC</blockquote>
It's probably hiding in one of the many XML files residing in your BOINC directory. I'm not sure which, but if you find it, closing BOINC and resetting it back to 100% should do the trick I'd think.

Puffy
9) Message boards : Number crunching : information should not be that difficult (Message 9800)
Posted 1 Sep 2005 by Profile JigPu
Post:
<blockquote><blockquote>we are not getting work. while it is obviously there.

lowerring the network connection helps sometimes. why?

you have your own set of boinc rules?
maybe then you could make this public?

</blockquote>

my question was simple. there is work, i just am not getting any. why?

a lower my cache size or a lower network connection time makes not any difference.

is it so difficult to put on the home page how to get work? i think not.

information is not difficult. just put in the effort for your donators.</blockquote>
I would check to make sure that your LTD (Long Term Debt) for LHC isn't negative, and that there aren't any error messages popping up in the messages pane. Also, have you tried clicking the "Update" button to manually request more work from LHC?

Puffy
10) Message boards : Number crunching : information should not be that difficult (Message 9798)
Posted 1 Sep 2005 by Profile JigPu
Post:
<blockquote>I don't buy into the statement that BOINC is designed so you can run other projects when one is dry. It is merely a crutch for projects who want to play in the big leagues of DC, but can't generate enough work to satisfy demand. I will agree that it was designed as a platform that projects could utilize without creating their own from the ground up, and that it makes it possible (but NOT MANDATORY)to run multiple projects.</blockquote>
You might not buy into that, but it's the truth. In a recent interview David Anderson himself pointed out that for BOINC, "High availability is not a goal." BOINC was designed to be a framework to allow people to participate in multiple projects simultaneously. It was never designed with running only a single project in mind (and so it sucks rather royally when you attempt to do so).

If you're in BOINC just for the stats (I admit that stats are a large portion of why I run BOINC as well, and totally understand why a large cache is good for that :D), then why not run multiple projects? I understand that there are a few teams out there who like to focus on a single project, though you can always create a team for other projects if one doesn't already exist.

Having long deadlines (and thus large queues) is a nice thing to have. But given how BOINC can honor resource share in the face of some pretty outstanding circumstances, I'm OK with a project with short deadlines. It's not exactly fun to see BOINC always in earliest-deadline-first mode, but if I ignore it everything always works out in the end, with each project getting their bit of my defined resource share.

Puffy
11) Message boards : Number crunching : the Problem with Boinc (Message 9632)
Posted 24 Aug 2005 by Profile JigPu
Post:
Should the OP be required to run more than one project for BOINC to work as he wishes -- No.
Should the OP be required to run his PC 24/7, regardless of whether there is any work -- No.
Should the OP be penalized (in amount of work downloaded) for running BOINC only while work is available? -- No.


Where the OP is getting screwed is the last statement. While BOINC doesn't penalize you for running it only while work is available, it does "penalize" you for running it intermittently (and rightly so!). It's a safeguard to prevent machines that are only on for a portion of the day from downloading more work than they can possibly handle. Take my laptop, for instance; it's on for only a few hours a day (let's say 3). In order to honor the "connect every" setting (of 2 days) and to make sure that too much work isn't downloaded (going past deadlines == bad), BOINC actually downloads only 6 hours of work. If it actually downloaded 2 days (48 hours) of work, lots of my WUs would likely miss their deadlines.
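A rough sketch of that proportionality (my own simplification of the behavior described, not BOINC's actual work-fetch code):

```python
def work_to_fetch(connect_interval_days, active_fraction):
    """Hours of work to request, scaled down by the fraction of time
    the machine is observed to be on -- a simplification of the
    safeguard described above, not the real BOINC algorithm."""
    return connect_interval_days * 24 * active_fraction

# The laptop example: "connect every" 2 days, on ~3 hours out of 24.
hours = work_to_fetch(2, 3 / 24)  # -> 6.0 hours of work, not 48
```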

BOINC doesn't care WHY a machine is off; it just notices that it is, and so cuts back the amount of work it should download accordingly. I wholeheartedly agree that it needs modification so that it will work in special cases like the OP's (heck, it isn't quite perfect yet even for the users it was designed to safeguard against).


PS -- Adding a second project won't really help out. I'm pretty sure that the preference is project-independent, so if you're only downloading 6 hours of LHC right now, you'll end up downloading 5 hours of LHC and 1 hour of something else. Also, I don't know for sure if leaving the machine on 24/7 will re-increase the number (testing with my laptop shows that it doesn't come back up after going down :() Berkeley definitely needs to do some tweaking with this...

Puffy
12) Message boards : Number crunching : out of work ... for a long time ? (Message 9593)
Posted 23 Aug 2005 by Profile JigPu
Post:
Guess I was part of the lucky few as well then -- one LHC WU on one machine =D Must be results that passed deadline and are being redistributed.

Puffy
13) Message boards : Number crunching : credits (Message 9433)
Posted 16 Aug 2005 by Profile JigPu
Post:
<blockquote>There are only two possibilities:
Don't grant (that's 0.00 credits) for invalids, or add to the totals all that is written as granted. Everything else is either a major bug or deceit.</blockquote>

Looks like a pretty bad bug to me. Especially when the following is considered...

Automatically Totaled Credit - 23.52
Manually Totaled Credit - 136.9
Valid Result Credit - 15.33
Invalid Result Credit - 121.57

If only valid results are being added in (as you conjectured), he should have 15.33 credits -- not 23.52.

Either that caching LHC has going on is REALLY weird (he was "granted" over 50 credits a week ago), or something is really screwed up. =(



@locke -- Definitely do the stuff outlined in the post above. With that many invalid results, I'd bet that something is awry with your computer.

Puffy
14) Message boards : Number crunching : Bug in the Credit addition (Message 9432)
Posted 16 Aug 2005 by Profile JigPu
Post:
<blockquote>Chrulle hasn't answered my questions. He has claimed, that there was some bug, that was fixed. It was obviously not.</blockquote>

As was said, Chrulle did attempt to answer your question. Based on the time interval between his post and yours, I'd like to believe he DID answer it, though was incorrect with his reply. To quote him (emphasis is my own), "We <B>believe</B> we have found the reason for the missing credit." It's quite possible that he was mistaken and that there IS some bug that they aren't aware of.


<blockquote>And the questions regarding this substandard credit handling, obviously to avoid detection of invalids, is something that really will turn me away from this project.</blockquote>

I'm very sorry if this roadbump turns you away from the project (more power to you for wanting better! =)), but your idea that this mishandling is due to "[Cern trying to] avoid detection of invalids" sounds a bit too much like a conspiracy theory to me. Perhaps I'm overly trusting, but I'd like to believe that it's just some bug they haven't been aware of (and apparently still aren't given the fact that they claimed the validator was fine).


<blockquote>Where has he written whether partial credit for invalids is still the standing policy?

If so, why it isn't added to the totals?

Or if this should be added, why it's still not done?</blockquote>

1) I haven't seen it written by him, though the validator does indeed "grant" (though apparently never adds in) 1/2*credit for invalid results.

2) Apparently this problem goes beyond what Chrulle and Markku initially believed. It remains to be seen why they aren't being added.


Finally, this problem of credit addition appears to be more complex than you think it is. Your assumption has been that invalid results are not being totaled in, though based on the results turned in by the user you referenced in your post (locke), this can't be. His valid results total up to 15.33 credits. His total credit, however, is 23.52. To make things even WEIRDER, I can find no way to total the rest of his (invalid yet "granted") credits to get the missing 8.19. This is obviously some kind of problem beyond invalids not getting totaled in or the devs forgetting to do it manually after a dbase crash.

Puffy
15) Message boards : Number crunching : Bug in the Credit addition (Message 9057)
Posted 31 Jul 2005 by Profile JigPu
Post:
I'm no project dev, but here is what I suspect is going on:

The validator was written by the guys at Berkeley along with the rest of the BOINC package. The validator as written is designed to compare returned results and grant credit to those which form a quorum (a set of several agreeing results). The validator is sufficient for most of the projects, and so any problems with it would be low on the priority list (when compared to bugs that affect all projects).

It has been known for some time that Sixtrack has some issues with generating consistent results among various CPUs. Early in the project, it was decided to enable the validator's "homogeneous redundancy" option, which forces work to be passed out only to similar (the exact same?) CPUs. Obviously this isn't a great solution, since it means that results processed on an Intel Celeron D, for example, can only be validated against other Celeron Ds, greatly reducing the speed of validation.

After quite a while where homogeneous redundancy was the norm for LHC, it was decided to try to fix Sixtrack so that its results were more consistent, allowing homogeneous redundancy to be disabled. I believe they were successful, but only marginally so -- some CPUs still had issues validating correctly against others. They decided to disable homogeneous redundancy anyway since results would validate correctly most of the time, though there was still the odd case where they wouldn't.

As a partial fix for this, LHC modified the validator so that it would generate partial credit for "semi-valid" results. The validator itself would consider such a result invalid (since it does not match with the quorum), but would grant credit anyway (since it's probably actually valid). The problem with this hack as you found though is that credits from invalid results (which would NORMALLY be zero) are not added into the grand total as Chrulle pointed out, so even though they're "granted", it's not reflected.
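If that reading is right, the hack would look something like this in spirit (purely illustrative pseudologic of the behavior described, not LHC's actual validator code):

```python
def grant_credit(claimed, quorum_credit, matches_quorum):
    """Sketch of the described partial-credit hack: semi-valid results
    get half their claimed credit 'granted', but the flag that would
    add it to the user's running total is never set -- the bug
    discussed in this thread."""
    if matches_quorum:
        return {"granted": quorum_credit, "added_to_total": True}
    # Invalid-but-probably-fine result: half credit, never totaled.
    return {"granted": 0.5 * claimed, "added_to_total": False}
```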

Why did LHC opt to use partial credit instead of really fixing the validator? Who knows. Perhaps it would have taken far too long, perhaps the Berkeley guys are working on it and gave LHC this validator as a temporary version, perhaps the real fix is much harder than this hack to implement... Like I said, I'm not a project dev, so I haven't a clue.

I highly doubt that this is a case of "deceit", but just an honest oversight as you mentioned (the oversight being that results flagged invalid aren't totaled up, regardless of the partial granted credit).


Puffy
16) Message boards : Number crunching : new work is here!!!! (Message 8212)
Posted 29 Jun 2005 by Profile JigPu
Post:
Current Status -- "Up, 48801 workunits to crunch"

w00t!! LHC work > *, so it's nice to see my caches full and ready to crunch :)
Puffy
17) Message boards : Number crunching : shorter deadline please! (Message 7982)
Posted 6 Jun 2005 by Profile JigPu
Post:
> i run seti and lhc. i have work for both but only seti runs. there is in no
> way a reason for the panic mode in the client software, the seti wu's do not
> have to be returned in about 9 days.
>
If your cache is set too large (which only takes a setting greater than 2 or 3 days =(), BOINC will panic due to possible deadline issues. I think it's WAY too conservative about its panicking, but the idea (prevent as many WUs as possible from missing deadlines when the cache is too large by running the one with the earliest deadline first) is sound, provided it were implemented better and panicked a lot less.

> i have set the client software with a preference for lhc. still i run much
> more seti then lhc due to the nature of this project. so long term or short
> term dept should make no difference, if i have lhc wu's they should run.
>
They should indeed, and will once it comes to "their turn". Once the work is on your computer, it will eventually be crunched. Because of other projects combined with the panic mode it may be sitting at the very end of the queue, but they'll get their chance. If BOINC weren't in panic mode, it would be using the round-robin scheduling (which would finish your LHC fairly quickly) that 4.2 and earlier versions used.
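The difference between the two modes can be sketched like this (a toy model of the scheduling described, not the actual client code; field names are mine):

```python
def next_task(tasks, panic):
    """Pick the next task to run: earliest-deadline-first when
    panicking, otherwise round-robin, approximated here as
    'least CPU time spent so far'."""
    if panic:
        return min(tasks, key=lambda t: t["deadline"])
    return min(tasks, key=lambda t: t["cpu_time"])

tasks = [
    {"name": "seti", "deadline": 9.0, "cpu_time": 1.0},
    {"name": "lhc", "deadline": 2.0, "cpu_time": 5.0},
]
# Panic mode runs the LHC WU (tightest deadline) first; round-robin
# would pick SETI (least CPU time so far).
```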

> the size of the cache we donate to these project should never make a
> difference to our choices. we donate the space on our hd's!
>
It should indeed not, and in reality it does not. I could set a huge cache, and BOINC would let me. However, when deadlines are taken into account there is a practical limit on how large we should size our cache. Obviously downloading 100 days of work won't work for projects with 2-week deadlines... However, downloading 7 days of work (with a 2-week deadline) on a 24/7 cruncher shouldn't matter at all (even though with the current client it unfortunately does =(... THIS is what I'd like to see fixed)

> switching between projects is on 1 hour, as recommended. but does not happen.
>
I haven't paid enough attention to my BOINC client to know (I'm set to switch every 30 minutes), but if it's not honoring it, that's definitely a problem that should be brought up...

> so we have all these choices, but nobody listens to them.
>
> (and that's what happens if we are seen as users. this does not happen to
> donators.)
>
> boinc should listen to littleBouncer's suggestion. it is to say the least
> "user"friendly.
>
Can't agree more =) Hopefully, with BOINC being open source, this will change as more of the donators modify the client, making it more donator-centric.

Puffy
18) Message boards : Number crunching : New server binaries (Message 7903)
Posted 1 Jun 2005 by Profile JigPu
Post:
> That's my point though... I've had no problem in the past honoring resource
> shares and keeping work for all my projects without blowing past deadlines. I
> should be able to request the work I want, when I want, because I've got my
> settings tuned properly for the speed of my machine. I never download more
> work than I can complete.
>
I've always wondered why there's no "Connect Immediately" button in BOINC. I don't use dial-up, so it's not a personal concern, but it just seems stupid that you can't tell BOINC you'd like to top yourself off...

John mentioned tension between honoring resource share and getting work from all projects all the time. However, why must one get into this situation with a connection button? The scheduler can honor debt and require downloads at the same time.

From what I've read about the latest scheduler, there are two scheduling modes and three work-fetch modes. Of the two schedule modes, panic mode can be set off by either a) short deadlines or b) tight deadlines.
&nbsp;&nbsp;&nbsp;If the scheduler is panicking due to a short deadline, the documentation itself says "Having the CPU scheduler in panic mode for a short deadline will not preclude the downloading of work. If the work unit is due today, but the work otherwise is not in time trouble, there is no reason not to download some more work." -- Force work-fetch to "download required" mode on clicking the button.
&nbsp;&nbsp;&nbsp;If the scheduler is panicking due to a tight deadline, new work may obviously miss the deadline. -- Warn the user that forcing a work-request may result in work missing its deadline. If the user is sure this is OK, force work-fetch into "download required" mode.
&nbsp;&nbsp;&nbsp;If the scheduler is not panicking at all, there's no harm in downloading more work. Instead of waiting a few more days, just connect right now for more work. -- Force work-fetch into "download required" mode.

Again according to the documentation, "The work fetch will always be done in order of highest long term debt." Forcing a work-fetch then would still honor resource share because it would initially attempt to download new work from the project which most needs to be run. A line between resource share and downloading from everybody does not have to be walked with the implementation of a cache-refill button, since it would use the very same debt calculations any other work-fetch would.
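In other words, a forced fetch could simply reuse the same ordering. A sketch of that debt ordering (illustrative only; the field names and figures are made up):

```python
def fetch_order(projects):
    """Order projects for work fetch by descending long-term debt,
    as the quoted documentation describes."""
    return sorted(projects, key=lambda p: p["ltd"], reverse=True)

# A forced "Connect Immediately" would ask these projects in turn:
order = fetch_order([
    {"name": "SETI", "ltd": -120.0},
    {"name": "LHC",  "ltd":  430.0},
    {"name": "CPDN", "ltd":   50.0},
])
# -> LHC first (most owed time), then CPDN, then SETI
```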


Puffy
19) Message boards : Number crunching : '0 cpu' problem solved? (Message 7560)
Posted 9 May 2005 by Profile JigPu
Post:
> Hello!
> Oh no, I dont think so, take a look at this WU:
> http://lhcathome.cern.ch/result.php?resultid=1107178
>
> And it is not only me that got this one!
>
The WU you are talking about is v64D1D2MQonlyinjnoskew1b5offcomp-60s16_18525.9615_1_sixvf_45886_4 (note the s16_)

As littleBouncer said best....
>These are s16_ or s18_ = 'fast WUs', as Chrulle described in another post: They
>have a bigger amplitude at the beginning ('bang') of the simulation and can abort.

:)

Puffy
20) Message boards : Number crunching : If only the numbers were a measure of reality!! (Message 7456)
Posted 5 May 2005 by Profile JigPu
Post:
> anyone know where to get a 30 m copper hat? I'd love to see that!
>
Not me, though you'd need at least a 31m hat anyway ;) :D

Puffy




©2020 CERN