61) Message boards : Number crunching : Funny thing with BOINC 4.43 (Message 7812)
Posted 24 May 2005 by Thierry Van Driessche
Post:
> I had v4.43 installed on 7 different computers, Chrulle. I was never able to
> get any work from LHC with that client; as soon as I switched back to v4.25 I
> started getting WU's again ...
>
> I don't know if it was a bug or not, but I for one will not run the v4.43 client
> again until somebody says they can actually get WU's from LHC with it ...

I have been on v4.43 since yesterday morning. BOINC ran dry for the first time on Seti WU's yesterday afternoon, and also on Einstein WU's today. BOINC downloaded some 5 WU's early this morning, but for now there is no way to get it to download any WU for those 2 projects, even after:
24/05/2005 20:29:38||May run out of work in 4.00 days; requesting more

Since that message, no WU's have been downloaded. If this doesn't change after the last 2 Seti WU's are crunched, I think I will go back to 4.35. With that version I never had any problem at all.
62) Message boards : Number crunching : Holy WU's ... BatMan ... !!! (Message 7362)
Posted 30 Apr 2005 by Thierry Van Driessche
Post:
Well, looks like we are running at full speed again.

Well done, LHC@Home.
63) Message boards : Number crunching : Wish List ... !!! (Message 7358)
Posted 30 Apr 2005 by Thierry Van Driessche
Post:
Hello Poorboy,

There is still a "wish" forum. You can find it here.

Need a fresh shower? ;-)
64) Message boards : LHC@home Science : UK Particle Physicists Prepare for Data Torrent (Message 7233)
Posted 26 Apr 2005 by Thierry Van Driessche
Post:
News from PPARC

UK scientists at CCLRC's Rutherford Appleton Laboratory (RAL) in Oxfordshire recently joined computing centres around the world in a networking challenge that saw RAL transfer 60 million megabytes of data over a ten-day period. A home user with a 512 kilobit per second broadband connection would be waiting 30 years to complete a download of the same size. RAL is a member of the GridPP project - the UK effort by particle physicists to prepare for the massive data volumes expected from the next generation of particle physics experiments.

The exercise was designed to test the global computing infrastructure for the Large Hadron Collider (LHC), the world's biggest particle physics experiment currently being built at CERN in Switzerland. To get ready for the LHC's unprecedented data rates, the worldwide collaboration is carrying out a series of "Service Challenges", the most recent of which (Service Challenge 2) has just been successfully completed. The eight labs involved sustained an average continuous data flow of 600 megabytes per second (MB/s) for 10 days from CERN. The total amount of data transmitted during this challenge (500 million megabytes) would take about 250 years to download using a typical 512 kilobit per second household broadband connection.
.....................
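The figures in the article are easy to sanity-check. A back-of-envelope calculation (assuming decimal units, i.e. 1 MB = 10^6 bytes = 8,000 kilobits, which is what makes the article's round numbers work):

```python
# Rough check of the PPARC article's transfer figures (decimal units assumed:
# 1 MB = 10**6 bytes = 8,000 kilobits; broadband line = 512 kilobit/s).
SECONDS_PER_YEAR = 365.25 * 24 * 3600
LINE_KBIT_S = 512

def download_years(megabytes: float) -> float:
    """Years needed to pull `megabytes` over a 512 kbit/s line."""
    return megabytes * 8000 / LINE_KBIT_S / SECONDS_PER_YEAR

print(round(download_years(60e6)))    # RAL's 60 million MB -> 30 years
print(round(download_years(500e6)))   # whole challenge, 500 million MB -> 248 years

# 600 MB/s sustained for 10 days is indeed roughly 500 million MB:
print(600 * 10 * 86400 / 1e6)         # -> 518.4 (million MB)
```

Both of the article's "about 30 years" and "about 250 years" statements come out of the same arithmetic.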
65) Message boards : Number crunching : Hurry up! (Message 7222)
Posted 26 Apr 2005 by Thierry Van Driessche
Post:
> There was a momentary failure in the server-side job submission system. So it
> seems that some workunits are without input files. We'll try to fix up the
> mess. Daily quota will be temporarily increased after that.

Thanks for the bad news, but also for the good news. Good to see you are still working there.

Thanks Markku,

Regards from Belgium,

Thierry
66) Message boards : Number crunching : Hurry up! (Message 7215)
Posted 26 Apr 2005 by Thierry Van Driessche
Post:
> Within my 'Work' tab I am seeing about ten of these units with a status of
> 'Download failed' - do I delete them, or let BOINC clean them up itself?

They will be cleaned up when new WU's are downloaded.
67) Message boards : Number crunching : Hurry up! (Message 7210)
Posted 26 Apr 2005 by Thierry Van Driessche
Post:
Same download failure problems here. What a pity ;-(

and error messages:
26/04/2005 19:54:44|LHC@home|MD5 computation error for v64lhc.D1-D2-MQonly-inj-no-skew-48s6_8515.5769_1_sixvf_16794.zip: -108
26/04/2005 19:54:44|LHC@home|Checksum or signature error for v64lhc.D1-D2-MQonly-inj-no-skew-48s6_8515.5769_1_sixvf_16794.zip
26/04/2005 19:54:44|LHC@home|Unrecoverable error for result v64lhc.D1-D2-MQonly-inj-no-skew-48s6_8515.5769_1_sixvf_16794_0 (WU download error: couldn't get input files: v64lhc.D1-D2-MQonly-inj-no-skew-48s6_8515.5769_1_sixvf_16794.zip -108 MD5 computation error)
26/04/2005 19:54:44|LHC@home|Unrecoverable error for result v64lhc.D1-D2-MQonly-inj-no-skew-48s6_8522.5_1_sixvf_16798_0 (WU download error: couldn't get input files: v64lhc.D1-D2-MQonly-inj-no-skew-48s6_8522.5_1_sixvf_16798.zip -108 MD5 computation error)

26 WU's failed out of 50. Wow, that's more than 50%.
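The messages above show the client failing to verify the downloaded zip against its expected MD5 digest. As an illustration only (not BOINC's actual code; the file name and digest below are placeholders), a download check of that kind looks roughly like:

```python
import hashlib

def md5_matches(path: str, expected_hex: str) -> bool:
    """Recompute a downloaded file's MD5 and compare it to the digest
    the server advertised (illustrative sketch, not BOINC internals)."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        # Read in chunks so large input files don't need to fit in RAM.
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest() == expected_hex.lower()

# e.g. md5_matches("input.zip", digest_from_scheduler_reply)
```

A mismatch (truncated or corrupted download) is what makes the client give up on the workunit with a download error.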
68) Message boards : Number crunching : Average turnaround time (Message 7029)
Posted 15 Apr 2005 by Thierry Van Driessche
Post:
> Well, there is something odd going on with this value. In LHC@Home all but one
> of my computers have 0 days. On other projects almost all have some value
> other than zero.

Same here. 0 at LHC@H but at S@H 1.22 and at E@H 2.99.
Strange.
69) Message boards : Number crunching : Sixtrack 4.67 available (Message 6978)
Posted 12 Apr 2005 by Thierry Van Driessche
Post:
> Why is there a difference in memory use by the same application, 4.67?
>
> See the picture below: sixtrack 4.67 once at 51'572 KB and once at 12'072
> KB.
> I ask this because the second one I am now crunching for the 3rd time on the
> same host: once with 4.66 at alpha with success, once here with 4.64 failed
> with 0 CPU time, and now with 4.67: it hasn't finished yet. But I noticed this
> difference in memory use.

Hello littleBouncer,

This has to do with the use of the graphics or the screensaver, AFAIK. Once one of them has been used, the RAM in use drops down to some 10.9xx KB on my host, but the peak consumption is shown as some 45.9xx KB in the Task Manager.

sixtrack v4.67
70) Message boards : Number crunching : Sixtrack 4.67 available (Message 6973)
Posted 12 Apr 2005 by Thierry Van Driessche
Post:
> Little calculation:
> 1209600 seconds rounded to 1200000!

Don't know if this could be the answer, alariz.

At S@H the deadline is exactly 14 days and at E@H it is exactly 7 days.
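For what it's worth, alariz's numbers do line up with the "14 days minus 2 hours 40 minutes" deadline reported in this thread: 1,209,600 s is exactly 14 days, and cutting that to 1,200,000 s shaves off 9,600 s, i.e. 2 h 40 min. Plain arithmetic, no project internals assumed:

```python
# 14 days in seconds, versus the rounded value quoted above.
full_deadline = 14 * 24 * 3600      # 1,209,600 s
rounded = 1_200_000                 # alariz's rounded figure
diff = full_deadline - rounded      # 9,600 s
print(diff // 3600, "h", (diff % 3600) // 60, "min")   # -> 2 h 40 min
```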
71) Message boards : Number crunching : Sixtrack 4.67 available (Message 6971)
Posted 12 Apr 2005 by Thierry Van Driessche
Post:
First time I have noticed this: the report deadline is 14 days minus 2 hours 40 minutes.
Wasn't this a round 14 days before?

I have 8 WU's that have been downloaded at 10:53:39 UTC, which is the correct time, but the deadline is stated on the website as 8:13:39 UTC for all of them.

The reported deadline in Boinc Manager, v4.30, is stated as the same time as on the website.
72) Message boards : Number crunching : donated time wasted (Message 6956)
Posted 11 Apr 2005 by Thierry Van Driessche
Post:
> Even with the newest versions of the software the problem stays.
> In another part of this board I found a solution:
> I keep every project in memory. This makes a difference.
>
> It is, however, a problem which LHC@home should fix.

Agree with this.

If somebody starts with LHC@Home they will use it with the default preferences. AFAIK, the default is "leave applications in memory" = no.
73) Message boards : Number crunching : Pressing F5 causes error in sixtrack screensaver gui. (Message 6901)
Posted 9 Apr 2005 by Thierry Van Driessche
Post:
Still using Sixtrack 4.64

I did the test with the graphics, not the screensaver. Same problem: show graphics is not working after that.

Win XP Pro SP2.
74) Message boards : Number crunching : LHC news---- Is there any? (Message 6774)
Posted 31 Mar 2005 by Thierry Van Driessche
Post:
> Hey guys, look at his joining date; I don't believe he is a CERN team member.

Hello littleBouncer,

Yes he is. Look here.

Greetings from Belgium,
Thierry
75) Message boards : Number crunching : LHC news---- Is there any? (Message 6757)
Posted 30 Mar 2005 by Thierry Van Driessche
Post:
Server Status

Up, 1574 workunits to crunch
76) Message boards : Number crunching : Daily Quota? (Message 6672)
Posted 22 Mar 2005 by Thierry Van Driessche
Post:
> Anyway, the quota is now at 30 per day per host. But we might run out of work
> tonight... More work coming tomorrow, I hope.

Is there any possibility to link the daily quota to the type of WU?

What I mean is: whether a WU is one of 100,000 turns or one of 1,000,000 turns makes a difference of a factor of 10.

Can the quota be adapted according to the number of turns of the WU's?
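A quota weighted by WU length, as suggested above, could be as simple as charging each WU against the daily quota in proportion to its turns. A hypothetical sketch (all names and numbers are illustrative, not project settings):

```python
# Hypothetical length-weighted daily quota (names and numbers illustrative).
BASE_TURNS = 100_000        # take a 100,000-turn WU as one quota unit
DAILY_QUOTA = 30.0          # the current per-host quota, in base units

def quota_cost(turns: int) -> float:
    """How much of the daily quota one WU of `turns` turns consumes."""
    return turns / BASE_TURNS

print(quota_cost(100_000))    # -> 1.0  (a short WU counts once)
print(quota_cost(1_000_000))  # -> 10.0 (a long WU counts as ten short ones)
```

Under such a scheme a host would still get 30 short WU's a day, but only 3 of the long ones.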
77) Message boards : Number crunching : New units (Message 6654)
Posted 21 Mar 2005 by Thierry Van Driessche
Post:
Server Status

Up, 3542 workunits to crunch
62 concurrent connections
78) Message boards : Number crunching : Now we know 'What happened to the prizes?' (Message 6607)
Posted 17 Mar 2005 by Thierry Van Driessche
Post:
> I think it's just coincidence that all of them are on the
> top 100 producer list.

I'm not one of the top 100 either, being in 274th place.
Total credit 7,871.55
Recent average credit 57.23

I tried as much as possible to help people starting with LHC@Home, having been an LHC@home member since 1 Sep 2004.

I also tried to find out where we had problems and what could be the reason(s) for it.

My sincere thanks to the people at LHC for remembering my participation in this project in this way.
79) Message boards : Number crunching : CERN in the news (Message 6606)
Posted 17 Mar 2005 by Thierry Van Driessche
Post:
Article in English:

World's Largest Computing Grid Surpasses 100 Sites
Wed 16 Mar 2005

Yesterday the LHC Computing Grid (LCG) project announced that its computing Grid now includes more than 100 sites in 31 countries. This makes it the world's largest international scientific Grid. The UK is the biggest single contributor to the LCG, with more than a fifth of the Grid's processing power at its 16 sites.
..................
80) Message boards : Number crunching : Very realistic!!! (Message 6577)
Posted 15 Mar 2005 by Thierry Van Driessche
Post:
Hey, come on people, remember that in the past weeks we had hundreds of WU's with 00:00:00 CPU time.

They were thrown away in a minimum of time.




©2024 CERN