1) Message boards : Number crunching : Newest WU not buffering (Message 26823)
Posted 4 Oct 2014 by Professor Ray
Post:
You could do that, but resuming in a couple of days won't alter any parameters for WUs that were downloaded before now.

So if you abort all the WUs and suspend the project for a couple of days, you should be golden when you resume.

Otherwise you'll have to manually edit client_state.xml the way I explained, for each WU individually; it's not a one-shot fix for the whole project.

The way I understand it, the problem is fixed for WUs being served up now. I'm guessing the ones that keep streaming back as failed and getting re-issued are probably fixed too, but I dunno.

I'll take whatever LHC serves me and crunch it. The work being computed is pretty intense - very cool stuff - I like it.
2) Message boards : Number crunching : Newest WU not buffering (Message 26820)
Posted 4 Oct 2014 by Professor Ray
Post:
Is it one of those infamous w3- WUs that just appeared in the last couple of days?

Check your tasks listing on the website to see when it was sent down to you.

IF it was issued before now, exit the BOINC Manager, then edit your client_state.xml to change the rsc_disk_bound parameter for that WU to 600000000. See how it runs after that. Again, it'll churn for a few minutes at most.
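For reference, the value lives in the <workunit> block for that task in client_state.xml. A minimal sketch of the edit (element names as in the BOINC state file; the workunit name is shortened here and other elements are elided - yours will differ):

<workunit>
    <name>w-b3_30000_job...sixvf_boinc2072</name>
    ...
    <rsc_memory_bound>100000000.000000</rsc_memory_bound>
    <rsc_disk_bound>600000000.000000</rsc_disk_bound>  <!-- was 200000000.000000 -->
</workunit>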

OR you could just abort it. Lots of that going around lately.

Your call: no matter how much time you've got into it, if the disk quota is too low for the WU it'll abend with a disk overflow error (unless you raise the WU's disk quota like I said). If you don't want to edit your client_state.xml and you received any w3- type WUs before now, they will abend on you anyway.

New WUs are being served with a disk quota of 1/2 GB per WU; the old ones had 190MB. I found that at 85% complete mine had reached 377MB.

Right now the only WU awaiting validation is probably mine from earlier this afternoon.
3) Message boards : Number crunching : download failed (Message 26819)
Posted 4 Oct 2014 by Professor Ray
Post:
I bet. People are reporting hundreds of abortions, err, disk overflow failures apiece, multiplied across thousands of clients. Add to that users manually aborting hundreds of WUs each because they keep failing, not to be outdone by the LHC admins trying, so far in vain, to kill the bad jobs on the server.

Why am I not surprised there are download errors.

The entire project was down for about 15 minutes 15 minutes ago.
4) Message boards : Number crunching : Newest WU not buffering (Message 26817)
Posted 4 Oct 2014 by Professor Ray
Post:
I've noticed the latest batch of w3- WUs does a lot of that initially on WU restart. It settles down after a couple of minutes.
5) Message boards : Number crunching : download failed (Message 26814)
Posted 4 Oct 2014 by Professor Ray
Post:
I suspect this is related to the WU disk overflow issue, i.e., the WUs were flagged as no good.

New WUs are supposed to be released with a larger disk quota per WU to avoid the disk overflow.
6) Message boards : Number crunching : Tasks exceeding disk limit (Message 26811)
Posted 4 Oct 2014 by Professor Ray
Post:
Whoo hoo! Cha-ching! Ringin' the cash register!

Completed and awaiting validation:

http://lhcathomeclassic.cern.ch/sixtrack/result.php?resultid=44914175

Get to it wing-dude / dudettes; I want my credit!
7) Message boards : Number crunching : Tasks exceeding disk limit (Message 26810)
Posted 4 Oct 2014 by Professor Ray
Post:
At this time the LHC WU has grown to 377.98MB at 85% complete.

So I RAR'd the 326MB slot 2 (Rosetta) off to another HDD and increased the LHC WU's disk allocation to 572MB, i.e., an rsc_disk_bound value of 600000000 in client_state.xml.
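For the unit math: rsc_disk_bound is given in bytes, while BOINC reports sizes in binary megabytes, so those two figures are the same number:

600000000 / 1048576 ≈ 572.2MB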

Crunching on!
8) Message boards : Number crunching : Tasks exceeding disk limit (Message 26806)
Posted 3 Oct 2014 by Professor Ray
Post:
After 3.25 hrs it's still crunchin', with 30 fort.* files comprising 202MB for a grand total of 220MB and still growin' - but there are 26 fort files of 0KB, so I've no idea how this'll play out.

It's chunkin' and ka-thunkin' and whirrin' and blinkin' and flashin' and beepin' and boppin'....the little hamster wheel be whinin' something fierce - we be needin' to put some beef tallow on the bearing hub.

EDIT: at 2340 UTC it's crunched 255MB, with headroom to 381MB per the aforementioned tweak to client_state.xml, at which point I can still give it another 177MB. That notwithstanding, I'm gonna have to suspend all other projects with 'no new work' - nobody else will have room to run.
9) Message boards : News : Power Supply Ripple (Message 26797)
Posted 3 Oct 2014 by Professor Ray
Post:
Pretty impressive Fancy Dan computation on all that. The power supply is a bit more powerful than your typical PC gaming rig PSU. The caps in that thing are probably as big as a truck, packing a few kilofarads, and the inductor coils are probably bigger than your house.

The energies being harnessed by the collider are respectable; the particles being whirled about carry a combined energy comparable to the kinetic energy of an aircraft carrier at flank speed. All around the collider's whirl-tube are emergency off-ramps - akin to those found on mountain roads - that are packed with several semi-trailer trucks' worth of graphite.

When the particles escape the grasp of the bending magnets and shoot up one of the escape ramps, the whole pile of graphite gets hot as steam.

So we're all crunchin' to good effect.
10) Message boards : News : Three Problems, 22nd May. (Message 26796)
Posted 3 Oct 2014 by Professor Ray
Post:
There's an ongoing thread on the WU disk overflow issue in Number Crunching.
11) Message boards : Number crunching : Tasks exceeding disk limit (Message 26795)
Posted 3 Oct 2014 by Professor Ray
Post:
I changed the disk value to 400000000 and restarted.

You would probably need to make that change in client_state.xml rather than init_data.xml

Very much at your own risk.


I just discovered that init_data.xml gets overwritten on BOINC Manager exit, and that client_state.xml contains the entry for the LHC WU. I've now made the aforementioned change in both files.

That's confusing design: the slot (which contains init_data.xml) doesn't get created until a WU is started, but client_state.xml is the driver, so the client regenerates the slot's init_data.xml from client_state.xml and clobbers any edit made to the slot copy.

We're crunching another one now. We'll see if it finishes. Even if it does, there's the subsequent validation issue: it can't validate if no wingman can complete the WU without abending on disk overflow.
12) Message boards : Number crunching : Tasks exceeding disk limit (Message 26790)
Posted 3 Oct 2014 by Professor Ray
Post:
Project completion: 42% @ 2.6666 hrs
Project size = 6.20MB (6.21 slack)
Slot size = 191MB (192 slack)

slot init_data.xml
<rsc_memory_bound>100000000.000000</rsc_memory_bound>
<rsc_disk_bound>200000000.000000</rsc_disk_bound>

I changed the disk value to 400000000 and restarted the BOINC Manager.

I waited a short time: now 43.5% complete, slot size 195MB (196 slack), project size 6.20MB (6.21 slack). At 2.8 hrs it blew up:

10/3/2014 2:05:45 PM | LHC@home 1.0 | Aborting task w-b3_30000_job.HLLHC_b3_30000.0732__4__s__62.31_60.32__11_13__5__58.2354_1_sixvf_boinc2072_4: exceeded disk limit: 198.24MB > 190.73MB
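That 190.73MB limit is just the original 200000000-byte rsc_disk_bound expressed in binary megabytes (200000000 / 1048576 ≈ 190.73MB), so the edit to 400000000 in the slot's init_data.xml evidently never took effect.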


http://lhcathomeclassic.cern.ch/sixtrack/result.php?resultid=44914176

Dunno. I remember having to edit the init file for the Lattice Project, but I forget what was done there. Maybe you have to restart the computer after editing init_data.xml?
13) Message boards : Number crunching : Tasks exceeding disk limit (Message 26775)
Posted 3 Oct 2014 by Professor Ray
Post:
As I understand it, the max disk limit is a project-specified threshold - it's obviously not a client-specified parameter. Why LHC WUs are overflowing their disk allowance, I dunno.

What's truly weird is that right after I posted, I got the white screen of death on every link from the main page, suggesting the project was down. Within a couple of minutes everything was peachy keen again, as if nothing had happened, and none of the servers on the status page indicated any problem at all.

I'm not saying it's grumpkins and snarks. But it's grumpkins and snarks.

Given that WUs which overflow here seem to get completed by wingmen, maybe there's some sort of 'seed' parameter in the cruncher that varies the computation slightly from host to host. Either that, or it's a precision issue: on many machines 2+2=4 and they eventually overflow, but on other machines 2+2=3.5 and they can complete the WU without overflowing.
14) Message boards : Number crunching : Tasks exceeding disk limit (Message 26771)
Posted 3 Oct 2014 by Professor Ray
Post:
This just started happening; other WUs crunch fine.

What is this?

10/2/2014 8:23:33 PM | LHC@home 1.0 | Aborting task w-b3_-10000_job.HLLHC_b3_-10000.0732__1__s__62.31_60.32__11_13__5__24.7059_1_sixvf_boinc1442_1: exceeded disk limit: 198.54MB > 190.73MB

http://lhcathomeclassic.cern.ch/sixtrack/result.php?resultid=44587584

10/2/2014 11:14:44 PM | LHC@home 1.0 | Aborting task w-b3_-16000_job.HLLHC_b3_-16000.0732__4__s__62.31_60.32__13_15__5__38.8236_1_sixvf_boinc822_4: exceeded disk limit: 200.23MB > 190.73MB

http://lhcathomeclassic.cern.ch/sixtrack/result.php?resultid=44587584

BOINC disk preferences are: use at most 50% of total disk space (12 GB allocated), use no more than 100GB (actual in use: 0.75 GB), leave at least 0.25 GB free (0.8 GB actual free space on the HDD) - yet the event log shows: 10/2/2014 1:37:05 AM | | max disk usage: 1.26GB (????)

Could the output file be exceeding 1/2 GB in size?

I'm gonna deep-six these WUs until condition red returns to a safer yellow alert.

EDIT: my mistake; the event log shows the error tripped a 190.73MB threshold, which is the per-WU limit rather than my client disk preferences. Furthermore, the 1.26GB max disk usage presumably works out as free space, minus the 0.25 GB specified reserve, plus the space BOINC already uses (0.8 - 0.25 + 0.75 ≈ 1.3 GB, allowing for rounding of the displayed figures).
15) Questions and Answers : Wish list : Comodo Internet Security - Trusted Vendor List Sign Up (Message 22568)
Posted 8 Oct 2010 by Professor Ray
Post:
Per this discussion:

http://boinc.berkeley.edu/dev/forum_thread.php?id=6073

After some investigation into this matter, I determined this isn't all that much of an issue. While it would be ideal to have BOINC and all BOINC projects recognized out-of-the-box as trusted applications by the Comodo Internet Security suite, if the BOINC projects and slots folders are designated with the 'installer / updater' HIPS policy, sandboxing of any BOINC project binaries is circumvented.
16) Questions and Answers : Wish list : Comodo Internet Security - Trusted Vendor List Sign Up (Message 22559)
Posted 24 Sep 2010 by Professor Ray
Post:
Given the dynamic nature of BOINC projects, i.e., the multiple client binaries that may be deployed depending on the science being conducted for any given project's WUs, it would help participants who use firewalls and HIPS applications such as Comodo Internet Security if project developers would digitally sign their binaries with a certificate from a trusted CA.

http://internetsecurity.comodo.com/trustedvendor/signup.php

The above URL is a link to a form that allows software vendors to request the addition of their software to the Trusted Vendor List that ships with Comodo Internet Security. This ensures their software will be automatically trusted by the application.

•Your software must be available for download by [Comodo] technicians
•Your software must be code signed with a certificate from a trusted CA (self-signed code signing certs are not acceptable)
•The 'Company Name' you provide below matches the name of the signer on the certificate
•You must provide a valid email address
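
For illustration, signing a Windows binary with a CA-issued certificate typically looks something like this using Microsoft's signtool (the certificate file, password, and executable names here are hypothetical):

signtool sign /f project_cert.pfx /p <password> /t http://timestamp.digicert.com sixtrack_win32.exe

Including a timestamp (/t) is worthwhile so the signature remains valid after the certificate itself expires.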
17) Message boards : LHC@home Science : Large Hadron Collider turned on... (Message 21662)
Posted 22 Nov 2009 by Professor Ray
Post:
...For reasons based in the inscrutable workings of the BOINC software, it is advised that we be detached from projects with no work.

So, we will need to know to re-attach.

Thanks.

>>RSM


And these inscrutable reasons would be?
18) Message boards : LHC@home Science : An impossible machine that could not be built (Message 21633)
Posted 19 Nov 2009 by Professor Ray
Post:
You should check out the apologies thread in the Cafe.

That's all I know about it.
19) Message boards : Cafe LHC : Aiaiaiaiaiaiaieeeeee! Posted threads in WRONG fora! (Message 21624)
Posted 15 Nov 2009 by Professor Ray
Post:
Large Hadron Collider scuttled by birdy baguette-bomber

An impossible machine that could not be built

If the powers that be decide to move 'em to here, this thread can get tossed.
20) Message boards : Cafe LHC : Group Protests Treatment of Hadrons at CERN (Message 21623)
Posted 15 Nov 2009 by Professor Ray
Post:
I bet these guys have a hand in the latest SNAFU.

