1) Message boards : News : Project down due to a server issue (Message 27522)
Posted 11 Jun 2015 by Carolina Calling
Would this explain why BOINC is producing the following when connecting to LHC@Home?

06/11/2015 13:56:28 | LHC@home 1.0 | Scheduler request failed: HTTP internal server error
2) Message boards : News : Heavy I/O on Windows WUs (Message 27001)
Posted 19 Nov 2014 by Carolina Calling
I'm glad that I got the number of "fort" files per task correct. The problem I have is that my system tends to be running more than one SixTrack job at a time and the impact of the I/O is crippling. The "fort" files are not normally soooo chopped up either. I was astounded to find five SixTrack tasks had ~240 files in over 360,000 fragments. It particularly impacts job creation (when there is heavy use of the swap file).

How often does SixTrack checkpoint? Is it possible that it is somehow checkpointing inappropriately? What controls when SixTrack checkpoints?

Thanks for the response!

-- CCW
3) Message boards : News : Heavy I/O on Windows WUs (Message 26989)
Posted 14 Nov 2014 by Carolina Calling
Has there been a change in the I/O for generating "fort" files in SixTrack?

I don't know whether to blame the work units or SixTrack for these I/O problems (although I suspect the latter).

As the I/O was adversely affecting my system, I've now aborted all the "w-" work units and continue to get no new work units.

I, for one, have been very pleased to be a small part of LHC SixTrack's work and look forward to being part of it again. I continue to process ATLAS and VirtualLHC work units.

Currently, I'm not accepting any new work units for any project in order to upgrade to the new version of BOINC (7.4.27). I let all projects' work units finish before upgrading to start with a clean slate. I'll see what's happening with SixTrack so that I may turn it back on along with everything else.

I hope SixTrack will be fixed in the near future.

Please, let us know if the source of the I/O problem is found.

-- CCW
Durham, NC, US
4) Message boards : News : Heavy I/O on Windows WUs (Message 26981)
Posted 13 Nov 2014 by Carolina Calling
I'm not surprised that Windows is seeing a lot of I/O. I was just defraggler'ing my system disk and found 30 LHC fort files that had 73,000+ fragments. (That's 2K+ fragments per file, folks.) I've seen this kind of thing twice. Once, someone had intentionally turned off buffering. The second, someone had decided it might be a "good idea" to do an "fflush()" after every write....

I'm fortunate in two ways: I'm using only six of eight CPUs for BOINC, and LHC@home only downloads a limited number of WUs per host. If I had a lot of these WUs processing at any one time, I'd run out of extents in a heartbeat....

I'm taking no more LHC@home WUs until this is straightened out.

-- CCW (aka Carolina Calling)
Durham, NC, US
5) Message boards : News : More work coming now. (Message 25556)
Posted 9 May 2013 by Carolina Calling
Well, someone is getting work units as the available count has gone from ~52K to ~27K. Would one of those someones who have actually gotten WUs recently tell us what they're running on, please? I've got three Windows machines running XP, 7 and 8 that can't get any of them....
6) Message boards : Number crunching : So, what ARE we doing ... ? (Message 22284)
Posted 27 Apr 2010 by Carolina Calling
Dear Carolina,

Just to let you know that there *is* a project under development here for running "real physics" jobs for the LHC experiments on LHC@home. Live analysis of big data sets with high I/O rates will NOT be possible as BOINC is not suitable for this. But event simulation and some event reconstruction both look very interesting.

The project has been developed with quite a low profile up to now by a few people who are fully aware of its potential interest, both for CERN and for many of you volunteers, and we are just beginning to inform the experiments and the CERN management about it. It is technically quite sophisticated, depending on virtualization and some fairly recent work done here to support that, so we want to be sure that it works well before going seriously public. As you know, a lot of work is needed to turn a working prototype (which we have) into a viable operation.

Thanks a lot for your interest and support,


PS: I was the technical coordinator of the original LHC@home effort starting in 2004 - seems a long time ago!

Thank you, Ben! It seems there *is* something for
which to look forward! Do you think there will be
a time when LHC@home projects might have active support
from your administration?

Thank you for all your efforts!

-- Carolina Calling
7) Message boards : Number crunching : So, what ARE we doing ... ? (Message 22271)
Posted 25 Apr 2010 by Carolina Calling
So it seems we really do not have much support for LHC@home.

That is unfortunate. Maybe someone should calculate the
actual computing power available through LHC@home and
let the department heads or the director general (or the
people in the member nations WHO ARE ACTUALLY PAYING
THE BILLS) know how much they could save or how much more
could be done if LHC@home were actually being used.

We volunteers could contact our representatives and
suggest that CERN could well be using this resource
more efficiently (and save money/get more work done)?
8) Message boards : Number crunching : LHC is conflicting with other projects (Message 22270)
Posted 25 Apr 2010 by Carolina Calling
I have seen the high priority behavior discussed.
I have seen it only on computers where I am running
more than three projects. I have found that

1.) I can set one of the other projects to not
get more work. When only three projects
have work left, the scheduler works correctly.

2.) I can suspend one or more of the projects for
a while, and the scheduler operates correctly
for the remaining projects.

You might consider running only three projects at a
time per machine....
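For reference, both workarounds above can also be driven from the command line with boinccmd, which ships with the standard BOINC client. This is a sketch only; the project URL shown is an example and would need a running BOINC client to take effect.

```shell
# Workaround 1: stop fetching new work for one project, so that only
# three projects still have work queued.
boinccmd --project http://lhcathome.cern.ch/lhcathome/ nomorework

# Workaround 2: suspend a project entirely for a while, then resume it
# once the remaining projects are scheduling normally again.
boinccmd --project http://lhcathome.cern.ch/lhcathome/ suspend
boinccmd --project http://lhcathome.cern.ch/lhcathome/ resume

# Re-enable work fetch when you want the project back.
boinccmd --project http://lhcathome.cern.ch/lhcathome/ allowmorework
```

The same operations are available in the BOINC Manager GUI via each project's "No new tasks" and "Suspend" controls.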
9) Message boards : Number crunching : So, what ARE we doing ... ? (Message 22260)
Posted 22 Apr 2010 by Carolina Calling
I suspect that many of us joined LHC@home with
the hope of helping LHC with the actual data

The LHC (will) generate petabytes of data from
its detectors. However, it is stated that the
sheer size of these data and network bandwidth
limitations prevent running actual data analysis
for any of the detectors with LHC@home.

I have found no information on the LHC@home web
site that tells me what part of the LHC, or what
level of management, has committed itself to
using LHC@home.

So, *is* there a commitment to LHC@home by
CERN as a whole? Do the director general
or any of the departments have a position
on using LHC@home? We represent a potentially
huge computing resource. Is there a commitment
of resources beyond volunteer programming and use?

I cannot imagine that CERN, with its obviously
huge need for computing, could not find a way
to allow us to participate in a larger way. I
know I, for one, would appreciate it.

Please note, I thank CERN for allowing us to
participate even in the current limited way!
10) Message boards : LHC@home Science : Can I Disconnect From LHC@Home? (Message 21450)
Posted 3 Aug 2009 by Carolina Calling
Yes, it has been a long time since LHC@home has
generated work units ... until today. There are,
as I type, 473 work units to be verified. The last
time I checked, there was one work unit unverified
and two work units outstanding. This means we
again have an active project. The only trick
now is getting the work units when they are released.
There are a lot of folks competing for them.

Good Luck!

©2024 CERN