1)
Message boards :
LHC@home Science :
LHC@home - Out of Tasks?
(Message 20967)
Posted 13 Jan 2009 by AdeB Post: I don't get any tasks anymore. Are you out of WUs? Please note: this project rarely has work
2)
Message boards :
Number crunching :
Segmentation violation
(Message 20724)
Posted 30 Oct 2008 by AdeB Post: And it happened again:

30-Oct-2008 09:41:18 [lhcathome] Sending scheduler request: To fetch work. Requesting 714 seconds of work, reporting 0 completed tasks
30-Oct-2008 09:41:23 [lhcathome] Scheduler request succeeded: got 0 new tasks
30-Oct-2008 09:56:37 [lhcathome] Sending scheduler request: To fetch work. Requesting 716 seconds of work, reporting 0 completed tasks
30-Oct-2008 09:56:42 [lhcathome] Scheduler request succeeded: got 0 new tasks
30-Oct-2008 10:11:56 [lhcathome] Sending scheduler request: To fetch work. Requesting 1853 seconds of work, reporting 1 completed tasks
30-Oct-2008 10:12:02 [lhcathome] Scheduler request succeeded: got 0 new tasks

SIGSEGV: segmentation violation
Stack trace (8 frames):
/usr/bin/boinc_client[0x80b7ee0]
[0xffffe420]
/usr/bin/boinc_client[0x8077b7c]
/usr/bin/boinc_client[0x8068f6d]
/usr/bin/boinc_client[0x809b68b]
/usr/bin/boinc_client[0x809ba19]
/lib/libc.so.6(__libc_start_main+0xdc)[0xb7b60fdc]
/usr/bin/boinc_client(__gxx_personality_v0+0x115)[0x804b411]
Exiting...

The last change to this log file was at 10:12, so the segmentation violation occurred immediately after BOINC contacted lhcathome. AdeB
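A trace like the one above only gives raw frame addresses. As a hedged sketch (not part of the original post), the addresses can be pulled out with a small shell pipeline and then, assuming the boinc_client binary is unstripped and unchanged since the crash, resolved to source locations with addr2line:

```shell
# Sketch: extract the raw frame addresses from a BOINC stack trace.
# The three-line trace here is a sample taken from the log above.
trace='/usr/bin/boinc_client[0x80b7ee0]
[0xffffe420]
/usr/bin/boinc_client[0x8077b7c]'

# Pull out every bracketed hex address, one per line.
addrs=$(printf '%s\n' "$trace" | grep -o '0x[0-9a-f]*')
printf '%s\n' "$addrs"

# Each address could then be resolved, e.g.:
#   addr2line -f -C -e /usr/bin/boinc_client 0x8077b7c
# This is only meaningful if the binary still has its symbols
# (on Gentoo, a rebuild with debug info would be needed first).
```

Since BOINC was compiled locally through Portage, rebuilding with debugging enabled before the next crash would make the resolved frames far more useful than bare offsets.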
3)
Message boards :
Number crunching :
Segmentation violation
(Message 20580)
Posted 1 Oct 2008 by AdeB Post: I have seen this as well. I didn't really suspect LHC, but now that you mention it, I think it has only happened while LHC has had work... but I won't swear to that. I've seen it on Gentoo with BOINC 5.10. BOINC is installed through Portage, so it was compiled on the box it is running on. I even tried reinstalling (which means recompiling) in case some underlying library had changed, but that didn't help. But it is pretty infrequent, which of course means hard to troubleshoot... Looks like yesterday it happened to me as well. AdeB
4)
Message boards :
Number crunching :
The new look bugs
(Message 19806)
Posted 16 Jul 2008 by AdeB Post: Something goes wrong when you try to look at a workunit:

Fatal error: Call to undefined function get_int() in /srv/httpd/lhcathome.cern.ch/lhcathome/html/user/workunit.php on line 8
5)
Message boards :
Number crunching :
It's raining LHC WU's - I love it !
(Message 18732)
Posted 18 Dec 2007 by AdeB Post: Once again receiving many WUs; however, with an estimated completion time of 43 minutes or so, the work units are being completed in anywhere from 0.81 seconds to 2.2 seconds, with 0.00 credits. Must have had over forty of them within a two-hour period. This is on all my machines that are receiving them. Same here
6)
Message boards :
Number crunching :
pending credit where the other 4 computers have credit granted
(Message 18567)
Posted 6 Nov 2007 by AdeB Post: Like many of us, I have my share of pending workunits. I noticed the extra text; I don't know where the 4.65 comes from. I'm running BOINC client version 5.8.15 - this is not the most recent version, but recent enough, I think. If the result differs too much, it should be marked invalid. I just don't like it to be pending :-) Anne
7)
Message boards :
Number crunching :
pending credit where the other 4 computers have credit granted
(Message 18561)
Posted 5 Nov 2007 by AdeB Post: Like many of us, I have my share of pending workunits. This workunit is different: the other 4 computers that received it have been granted credit, but my result is still pending. Is there an explanation for this? Anne
8)
Message boards :
Number crunching :
The ghosts in the machine
(Message 16817)
Posted 4 May 2007 by AdeB Post: Well, this round I got 5 ghost WUs, whereas last round I got 20 ghosts and one that was actually sent. Yes, I got 3 ghost WUs.
9)
Message boards :
Number crunching :
New Hosts are Multiplying !
(Message 16683)
Posted 8 Apr 2007 by AdeB Post: There is a fix. 49 copies of host #2 in 24 hours ... merged them all, but there has to be a fix for this somewhere!!
©2024 CERN