Message boards : Number crunching : Will this test stop next?-No, ... ?

Previous · 1 · 2 · 3 · Next

Profile Logan5@SETI.USA
Joined: 30 Sep 04
Posts: 112
Credit: 104,059
RAC: 0
Message 7667 - Posted: 13 May 2005, 2:49:32 UTC - in response to Message 7666.  

> Looking at the exported user stats, in the last 30 days only 2600 users
> returned results.
>
> In the last 7 days 2400 and yesterday only 2000.
>

And we went through the last 220,000 WUs <B><I>How Fast</I></B>?? ;)

If only 2600, then 2400, then 2000 users returned results, with 5k+ host computers, in the last 14-21 days, doesn't that make the speed at which we ran dry this time even more 'interesting'....?

Imagine if we had 10x the current number of users drawing on the current work product coming out of CERN.

This is why I think that we will not be adding more people soon, <B>unless</B> CERN has stockpiled a fairly large cache of work before throwing open the gates.
ID: 7667
Profile The Gas Giant

Joined: 2 Sep 04
Posts: 309
Credit: 715,258
RAC: 0
Message 7668 - Posted: 13 May 2005, 5:06:35 UTC

I don't think it is an unreasonable question. Remember, we are volunteers, and we are participating in a science project with a deadline. The sooner CERN can work out the magnet control of the ring, the better. I say bring it on... another 1,000 or so users would be good for the project! I haven't run out of work since we started getting work again. There were a couple of scares with connection failures, but I didn't run out of work.

How many bugs have been worked out during the latest test? Let's see: the 0 CPU time bug, the F5 screen saver bug, oh, and the latest, the file deleter issue. I say this project is ready to expand its user base! 1,000 when the million WUs are released, then 2 weeks later another 1,000. Bring it on!

Live long and crunch!

Paul.

ps Logan 5....I am The Gas Giant (get it right - there is no other ;P ).
ID: 7668
marshall

Joined: 28 Sep 04
Posts: 16
Credit: 2,678,563
RAC: 0
Message 7669 - Posted: 13 May 2005, 6:12:44 UTC - in response to Message 7664.  
Last modified: 13 May 2005, 6:13:03 UTC

> Does it really matter if we have a 4 week break and then 4 weeks of work or
> if we have a 7 week break and then one week of work?

It does not, but this of course presumes that the LHC people are able to be that fast, which I do not think is the case.
--
marshall
ID: 7669
ralic

Joined: 2 Sep 04
Posts: 28
Credit: 44,344
RAC: 0
Message 7670 - Posted: 13 May 2005, 8:02:26 UTC - in response to Message 7668.  

> we are participating in a science project with a deadline. The sooner CERN
> can work out the magnet control of the ring then the better. I say bring it

And this raises another interesting question: Is the lifetime of this project finite?

S@H has an effectively infinite sky to scan, and similarly E@H. CPDN can run models until hell freezes over. PP@H & F@H can analyse forever. But what happens to LHC@home once the LHC is built?

Wouldn't it be amusing if the "magnet control" is finalised before we get out of Beta? ;-)
ID: 7670
Jayargh

Joined: 24 Oct 04
Posts: 79
Credit: 257,762
RAC: 0
Message 7672 - Posted: 13 May 2005, 10:01:10 UTC - in response to Message 7670.  

> And this raises another interesting question: Is the lifetime of this project
> finite?

My understanding from past posts is that after we are done with this "prework", we will also be analyzing the results after they are completed. And since our computing power has so caught the eye of the physicists, I believe they are planning other projects for us as well. The intention, once everything is debugged and worked out, is to go mainstream, stay that way, and never be finished.
ID: 7672
Magish

Joined: 29 Sep 04
Posts: 4
Credit: 593
RAC: 0
Message 7673 - Posted: 13 May 2005, 10:54:31 UTC

...And from what I've read, once the LHC goes "live", the workload will be HUGE - which, for us crunchers at least, is a GOOD thing. :P
ID: 7673
Profile sysfried

Joined: 27 Sep 04
Posts: 282
Credit: 1,415,417
RAC: 0
Message 7677 - Posted: 13 May 2005, 13:22:57 UTC - in response to Message 7673.  

> ...And from what I've read, once LHC goes "live", the workload will be HUGE -
> which, for us crunchers, at least, is a GOOD thing. :P
>
That's what I expect as well.... I'm fully aware that the "GRID" computing done at CERN differs quite a bit from the LHC@home setup (mainly in transfer speed, but in other factors as well), but I'm sure they will have some work for us.
ID: 7677
Profile Paul D. Buck

Joined: 2 Sep 04
Posts: 545
Credit: 148,912
RAC: 0
Message 7678 - Posted: 13 May 2005, 14:59:20 UTC

Just to throw something into the pot. We also have to keep in mind the limitations/capabilities of the hardware. Adding more users may seem like a good idea to improve throughput, but can the system handle the loads?

Look at the problems that SETI@Home has trying to handle the current participant base. Einstein@Home is super aggressive in cleaning the on-line database to reduce the load caused by an increased number of records.

We all can see the evidence that the server-side components do not scale as well as might be expected.

Just something to keep in mind. Adding another 5K users doubles the population but may increase server load by the square rather than by the double ...
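Paul's square-law worry can be illustrated with a toy model (every number here is invented for illustration; this is not from any BOINC server code): if part of the per-request cost grows with the size of the shared database state, which itself grows with the user count, the total load picks up a quadratic term, and doubling the users then more than doubles the load.

```python
# Toy load model: load = a*n + b*n^2, where the quadratic term stands in
# for any cost that grows with both the request rate and the amount of
# shared state (rows scanned, lock contention, etc.). The coefficients
# are made up purely for illustration.

def server_load(users: int, linear_cost: float = 1.0,
                quadratic_cost: float = 1e-4) -> float:
    return linear_cost * users + quadratic_cost * users ** 2

print(server_load(5_000))   # 7500.0
print(server_load(10_000))  # 20000.0
```

With these made-up coefficients, going from 5K to 10K users raises the load by a factor of about 2.7 rather than 2.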
ID: 7678
Profile Paul D. Buck

Joined: 2 Sep 04
Posts: 545
Credit: 148,912
RAC: 0
Message 7679 - Posted: 13 May 2005, 18:04:28 UTC

It is a shame we are out of work. But I took a look and found I had a small stack queued up on one machine. Down to only 5 to go now ... should be done this afternoon ...

Then I can un-suspend the other projects. :)
ID: 7679
Profile Markku Degerholm

Joined: 3 Sep 04
Posts: 212
Credit: 4,545
RAC: 0
Message 7680 - Posted: 13 May 2005, 21:10:34 UTC - in response to Message 7678.  


> Just something to keep in mind. Adding another 5K users doubles the population
> but may increase server load by the square rather than by the double ...

True. The major bottleneck on the server side is the database. The other components can be distributed across separate servers, but then all of them need to use the database... and the database doesn't scale that easily, at least not the old version of MySQL we are using. Compared to many other database applications, BOINC does a lot of database updates. Thus database replication doesn't give that much of a performance boost (while complicating matters quite a bit), because all updates must be applied on every database server.
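Markku's point about update-heavy workloads can be put in Amdahl-style numbers (a back-of-envelope model of my own, not anything measured on the LHC@home servers): read-only replicas let reads be spread out, but because every update must still be applied on every replica, the write fraction caps the possible speedup.

```python
# Ideal-case throughput gain from adding read-only replicas, treating the
# write fraction w as the "serial" part every server must repeat. Real
# replication overhead would make these numbers worse, not better.

def replication_speedup(write_fraction: float, replicas: int) -> float:
    return 1.0 / (write_fraction + (1.0 - write_fraction) / replicas)

# A read-heavy site (10% writes) gains a lot from 4 replicas:
print(round(replication_speedup(0.10, 4), 2))  # 3.08
# An update-heavy workload like BOINC's (say 60% writes) barely moves:
print(round(replication_speedup(0.60, 4), 2))  # 1.43
```

The 60% write fraction is a guess of mine to illustrate the shape of the curve, not a measured BOINC figure.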

Anyway, there are plans to increase the number of users. However, I'm not sure when that's going to happen. We'll see.

Markku Degerholm
LHC@home admin
ID: 7680
Profile sysfried

Joined: 27 Sep 04
Posts: 282
Credit: 1,415,417
RAC: 0
Message 7681 - Posted: 13 May 2005, 22:09:40 UTC - in response to Message 7680.  
Last modified: 13 May 2005, 22:12:06 UTC


> True. The major bottleneck on the server side is database. Other components
> can be distributed on separate servers, but then all of them need to use
> database... And database doesn't scale that easily, at least that old version
> of MySQL we are using. When comparing to many other database applications,
> BOINC does lots of database updates. Thus database replication doesn't give
> that much performance boost (but it complicates matters quite a bit) because
> all updates must be done on every database server.
>
> Anyway, there are plans to increase number of users. However, I'm not sure
> when that's going to happen. We'll see.
>
>
Interesting insight into the LHC server hardware... the last thing I remember was a hardware upgrade at the end of last year.... ;-)

I know about the scaling problems a database can have when it grows fast (which I believe is the case at LHC). So I'll keep my fingers crossed.... and wait for the LHC team to do their work.

Happy weekend, y'all!

:-)

Cheers to the LHC Staff!

Sysfried
ID: 7681
Profile FZB

Joined: 17 Sep 04
Posts: 23
Credit: 6,871,909
RAC: 12
Message 7682 - Posted: 13 May 2005, 23:07:43 UTC - in response to Message 7680.  
Last modified: 13 May 2005, 23:09:35 UTC

> True. The major bottleneck on the server side is database. Other components
> can be distributed on separate servers, but then all of them need to use
> database... And database doesn't scale that easily, at least that old version
> of MySQL we are using. When comparing to many other database applications,
> BOINC does lots of database updates. Thus database replication doesn't give
> that much performance boost (but it complicates matters quite a bit) because
> all updates must be done on every database server.

Just out of curiosity, and only if you happen to read this (don't waste time on it if you don't have any ;) ): what version of MySQL are you running? I don't have much MySQL experience, but I set up an MSSQL cluster once. Would BOINC refuse to run with newer MySQL versions?

edit:
Not that I'm suggesting you "just" cluster the DB; I'm well aware of the constraints on money and manpower ;)

ID: 7682
Profile Paul D. Buck

Joined: 2 Sep 04
Posts: 545
Credit: 148,912
RAC: 0
Message 7692 - Posted: 14 May 2005, 14:44:55 UTC - in response to Message 7682.  

> just out of curiosity and if you read that by chance (don't waste time on it
> if you don't have it ;) ). what version of mysql are you running? i have not
> so much mysql experiance but setup a mssql cluster once. would boinc refuse to
> run with newer mysql versions?
>
> edit:
> not that i suggest to "just" to cluster the db, i am well aware of constraints
> on money and manpower ;)

The problem is that now they have the difficulty of upgrading the airplane while it is in flight at 80,000 feet. And it is possible that the upgrade will break things that are "working" right now. I am not saying that it will, only that it could.

I don't know for sure, but this last batch of work units seemed to be about 250,000 tops, so even a million would not take us that long to blow through at the current level of participation. Heck, some of the idle accounts might jump back into the pool.
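The burn-rate arithmetic behind this is simple enough to sketch (the batch sizes are the thread's own figures; the per-host daily rate is a guess of mine purely for illustration):

```python
# Days until a batch runs dry, assuming every host crunches at a steady
# rate. The 250k and 1M batch sizes come from the posts above; the
# 10 WUs/host/day rate is invented, not measured.

def days_to_drain(workunits: int, hosts: int,
                  wus_per_host_per_day: float) -> float:
    return workunits / (hosts * wus_per_host_per_day)

print(days_to_drain(250_000, 5_000, 10))    # 5.0  -> under a week
print(days_to_drain(1_000_000, 5_000, 10))  # 20.0 -> under three weeks
```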

ID: 7692
ric

Joined: 17 Sep 04
Posts: 190
Credit: 649,637
RAC: 0
Message 7693 - Posted: 14 May 2005, 15:00:15 UTC - in response to Message 7692.  
Last modified: 14 May 2005, 15:01:14 UTC

Thank you, Paul,

You got the point...

To satisfy all our hungry clients, we need more, much more than just the one million.

Airplane?
Sorry for my words, but to me it looks more like a submarine ;-)


// off topic
Please remember that here in Europe/Switzerland we have Pfingsten/Whitsun/Pentecost. This means "holiday": nearly nobody is working, and the schools are closed too, until Tuesday.

I guess at LHC they are nearly all at Lake Geneva, having a swim.

ID: 7693
STE\/E

Joined: 2 Sep 04
Posts: 352
Credit: 1,393,150
RAC: 0
Message 7704 - Posted: 15 May 2005, 10:06:29 UTC

Heck, some of the idle accounts might jump back into the pool.
==========

It looks like some fresh people did jump back in, judging from a few new people rising up through the rankings ... :)
ID: 7704
Profile Paul D. Buck

Joined: 2 Sep 04
Posts: 545
Credit: 148,912
RAC: 0
Message 7709 - Posted: 15 May 2005, 19:51:35 UTC - in response to Message 7704.  

> It looks like some fresh people did jump back in from seeing a few new people
> rising up through the rankings ... :)

All *I* have noticed about the stats is that they seem to be wacky on all the projects I am participating in ... I seem to have gotten only half of my daily normal ...

What is so odd is that nothing has changed on my systems.
ID: 7709
Profile Logan5@SETI.USA
Joined: 30 Sep 04
Posts: 112
Credit: 104,059
RAC: 0
Message 7714 - Posted: 15 May 2005, 23:08:49 UTC - in response to Message 7709.  

> All *I* have noticed about the stats is they seem to be whacky on all projects
> I am participating in ... I seem to only have gotten half of my daily normal
> ...
>
> What is so odd, is that nothing has changed on my systems.
>

@ Paul Buck:

You're not the only one who has noticed this....
ID: 7714
Profile Paul D. Buck

Joined: 2 Sep 04
Posts: 545
Credit: 148,912
RAC: 0
Message 7716 - Posted: 16 May 2005, 5:34:36 UTC - in response to Message 7714.  

Logan5,

> You're not the only one who has noticed this as well....

Yeah.

One of my more "fun" observations is that there seems to be a "ripple" effect, where one project has a hiccup and then we see really odd things happening with all the projects.

I guess it is just that when the world is pinging on only one or two projects, it brings out the lurking demons ...

Then again, maybe I am just paranoid.

On the other hand, just because I am paranoid does not mean that "they" are not out to "get" me ... :)
ID: 7716
Profile littleBouncer
Joined: 23 Oct 04
Posts: 358
Credit: 1,439,205
RAC: 0
Message 7718 - Posted: 16 May 2005, 8:16:18 UTC - in response to Message 7716.  
Last modified: 16 May 2005, 8:24:08 UTC

> One of my more "fun" observations is that there is a "ripple" effect it seems
> where one project has a hiccup and then we see really odd things happening
> with all the projects.
>
> I guess it is just that when the world is pinging on only one or two projects
> it brings out the lurking demons ...
>
Is that why I can't line up (or sync) my CPID???
For about the last 3 weeks I have let my boxes work on each subscribed project (with different resource shares), but the CPIDs do not synchronize:

Before I attached to the Alpha, my CPID was: a71d98f6dfbad27f069dc6ef311669a4
then it changed on LHC@home (2 months ago) to: a84fcc4fe9a81ab2c3e7b3c1a104cc25
but the exported CPID (on all projects) is: 96f7bd62c27acec45dc6b82a59f407fe (*)

In all projects my CPID is: a71d98f6dfbad27f069dc6ef311669a4
exceptions are LHC with: a84fcc4fe9a81ab2c3e7b3c1a104cc25
and CPDN (*) with: 96f7bd62c27acec45dc6b82a59f407fe (but it exports the 'correct' a71d98f6dfbad27f069dc6ef311669a4 to the stats)
Shouldn't the correct CPID be a84fcc4fe9a81ab2c3e7b3c1a104cc25 for all projects? From what I have read, it should be the highest one...

How can I solve this 'mismatch'? Do I create a new client and then attach to each project anew, or is that a waste of time...?

(*) I know the CPID on CPDN will never change to the CPID which all the other projects have, but it exports the 'correct' one to the stats. (On my account it shows 96f7bd62c27acec45dc6b82a59f407fe, but to the stats sites (BOINCstats or Synergy-stats) it exports a71d98f6dfbad27f069dc6ef311669a4.)
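The "highest wins" rule littleBouncer has read about would behave like this (a sketch of my reading of that rumoured merge rule, not code from the actual BOINC client): CPIDs are hex strings, and if every project adopts the greatest CPID it has seen, they should all eventually converge on the same one.

```python
# Sketch of a "highest CPID wins" merge: hex strings of equal length
# compare lexicographically, so max() picks the "highest" ID. The three
# values are the ones quoted in the post above.

def reconcile_cpid(cpids: list[str]) -> str:
    return max(cpids)

observed = [
    "a71d98f6dfbad27f069dc6ef311669a4",  # original CPID
    "a84fcc4fe9a81ab2c3e7b3c1a104cc25",  # LHC@home
    "96f7bd62c27acec45dc6b82a59f407fe",  # CPDN account page
]
print(reconcile_cpid(observed))  # a84fcc4fe9a81ab2c3e7b3c1a104cc25
```

Under that rule the LHC@home value would indeed win, which matches the poster's expectation.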

> Then again, maybe I am just paranoid.
>
> On the other hand, just because I am paranoid does not mean that "they" are
> not out to "get" me ... :)
>

No no, "they" want to "get" me, not you.....:)

greetz from Switzerland
littleBouncer
BTW: I also tried to edit (correct) the CPID in each .xml file, but it didn't work.....
ID: 7718
ric

Joined: 17 Sep 04
Posts: 190
Credit: 649,637
RAC: 0
Message 7721 - Posted: 16 May 2005, 10:46:56 UTC - in response to Message 7718.  

> How can I solve this 'mismatch'?

Good morning LB,

Just an idea, I have never tried it out:

What will happen when you get a brand-new email address and change all the older, different ones everywhere to the new, unique one?

That is: get a new email address, work in all the BOINC projects only with this email address, and wait for all the changes to come through...

> I tried also to edit (correct the CPID) on each .xml-file, but it didn't work
Hopeless; the file is overwritten with centrally stored data.

gr
ric
ID: 7721
©2024 CERN