1)
Message boards :
News :
Project down due to a server issue
(Message 27522)
Posted 11 Jun 2015 by Carolina Calling Post: Would this explain why BOINC is producing the following when connecting to LHC@Home? 06/11/2015 13:56:28 | LHC@home 1.0 | Scheduler request failed: HTTP internal server error
2)
Message boards :
News :
Heavy I/O on Windows WUs
(Message 27001)
Posted 19 Nov 2014 by Carolina Calling Post: I'm glad that I got the number of "fort" files per task correct. The problem I have is that my system tends to be running more than one SixTrack job at a time, and the impact of the I/O is crippling. The "fort" files are not normally soooo chopped up either. I was astounded to find that five SixTrack tasks had ~240 files in over 360,000 fragments. It particularly impacts job creation (when there is heavy use of the swap file). How often does SixTrack checkpoint? Is it possible that it is somehow checkpointing inappropriately? What controls when SixTrack checkpoints? Thanks for the response! -- CCW
3)
Message boards :
News :
Heavy I/O on Windows WUs
(Message 26989)
Posted 14 Nov 2014 by Carolina Calling Post: Has there been a change in the I/O for generating "fort" files in SixTrack? I don't know whether to blame the work units or SixTrack for these I/O problems (although I suspect the latter). As the I/O was adversely affecting my system, I've now aborted all the "w-" work units and continue to get no new work units. I, for one, have been very pleased to be a small part of LHC SixTrack's work and look forward to being part of it again. I continue to process ATLAS and VirtualLHC work units. Currently, I'm not accepting any new work units for any project in order to upgrade to the new version of BOINC (7.4.27). I let all projects' work units finish before upgrading to start with a clean slate. I'll see what's happening with SixTrack so that I may turn it back on along with everything else. I hope SixTrack will be fixed in the near future. Please, let us know if the source of the I/O problem is found. -- CCW Durham, NC, US
4)
Message boards :
News :
Heavy I/O on Windows WUs
(Message 26981)
Posted 13 Nov 2014 by Carolina Calling Post: I'm not surprised that Windows is seeing a lot of I/O. I was just defraggler'ing my system disk and found 30 LHC fort files that had 73,000+ fragments. (That's 2K+ fragments per file, folks.) I've seen this kind of thing twice. Once, someone had intentionally turned off buffering. The second, someone had decided it might be a "good idea" to do an "fflush()" after every write.... I'm fortunate in two ways: I'm using only six of eight CPUs for BOINC, and LHC@home only downloads a limited number of WU per host. If I had a lot of these WUs processing at any one time, I'd run out of extents in a heartbeat.... I'm taking no more LHC@home WUs until this is straightened out. -- CCW (aka Carolina Calling) Durham, NC, US
5)
Message boards :
News :
More work coming now.
(Message 25556)
Posted 9 May 2013 by Carolina Calling Post: Well, someone is getting work units, as the available count has gone from ~52K to ~27K. Would one of those someones who have actually gotten WU recently tell us what they're running on, please? I've got three Windows machines running XP, 7 and 8 that can't get any of them....
6)
Message boards :
Number crunching :
So, what ARE we doing ... ?
(Message 22284)
Posted 27 Apr 2010 by Carolina Calling Post: Thank you, Ben! It seems there *is* something for which to look forward! Do you think there will be a time when LHC@home projects might have active support from your administration? Thank you for all your efforts! -- Carolina Calling
7)
Message boards :
Number crunching :
So, what ARE we doing ... ?
(Message 22271)
Posted 25 Apr 2010 by Carolina Calling Post: So it seems we really do not have much support for LHC@home. That is unfortunate. Maybe someone should calculate the actual computing power available through LHC@home and let the department heads or the director general (or the people in the member nations WHO ARE ACTUALLY PAYING THE BILLS) know how much they could save or how much more could be done if LHC@home were actually being used. We volunteers could contact our representatives and suggest that CERN could well be using this resource more efficiently (and save money/get more work done).
8)
Message boards :
Number crunching :
LHC is conflicting with other projects
(Message 22270)
Posted 25 Apr 2010 by Carolina Calling Post: I have seen the high-priority behavior discussed. I have seen it only on computers where I am running more than three projects. I have found that 1.) I can set one of the other projects to not get more work; when only three projects have work left, the scheduler works again. 2.) I can suspend one or more of the projects for a while, and the scheduler operates correctly for the remaining projects. You might consider running only three projects at a time per machine....
9)
Message boards :
Number crunching :
So, what ARE we doing ... ?
(Message 22260)
Posted 22 Apr 2010 by Carolina Calling Post: I suspect that many of us joined LHC@home with the hope of helping LHC with the actual data analysis. The LHC (will) generate petabytes of data from its detectors. However, it is stated that the sheer size of these data and network bandwidth limitations prevent running actual data analysis for any of the detectors with LHC@home. I have found no information on the LHC@home web site that tells me what part of the LHC, or what level of management, has committed itself to using LHC@home. So, *is* there a commitment to LHC@home by CERN as a whole? Do the director general or any of the departments have a position on using LHC@home? We represent a potentially huge computing resource. Is there a commitment of resources beyond volunteer programming and use? I cannot imagine that CERN, with its obviously huge need for computing, could not find a way to allow us to participate in a larger way. I know I, for one, would appreciate it. Please note, I thank CERN for allowing us to participate even in the current limited way!
10)
Message boards :
LHC@home Science :
Can I Disconnect From LHC@Home?
(Message 21450)
Posted 3 Aug 2009 by Carolina Calling Post: Yes, it has been a long time since LHC@home has generated work units ... until today. There are, as I type, 473 work units to be verified. The last time I checked, there was one work unit unverified and two work units outstanding. This means we again have an active project. The only trick now is getting the work units when they are released. There are a lot of folks competing for them. Good Luck!
©2025 CERN