Message boards : Number crunching : Large work units
Joined: 17 Sep 04 Posts: 14 Credit: 12,301 RAC: 0

Seems like the units are a lot bigger than the last time I was crunching.
Joined: 3 Sep 04 Posts: 212 Credit: 4,545 RAC: 0

> Seems like the units are a lot bigger than the last time I was crunching.

Unfortunately, there will be faster work units coming soon. I say unfortunately because short jobs will increase the server load further.

Markku Degerholm
LHC@home admin
Joined: 1 Sep 04 Posts: 157 Credit: 82,604 RAC: 0

> Seems like the units are a lot bigger than the last time I was crunching.

My P4 2.88 GHz gives me an estimated CPU time of 13.5 h. I'm wondering about the real CPU time.

Best greetings from Belgium,
Thierry
Joined: 17 Sep 04 Posts: 22 Credit: 983,416 RAC: 0

It's estimating 12:52:12 on my P4 2.8 @ 3.36, but it hasn't started crunching yet, so we'll just have to wait and see!

~Keith
Joined: 17 Sep 04 Posts: 14 Credit: 12,301 RAC: 0

Hey Markku, maybe small workunits are unfortunate, but if the workunits are smaller, then you could fetch enough of them for 3-5 days at a time instead, so you would only have to connect to the server once or twice a week.
Joined: 2 Sep 04 Posts: 121 Credit: 592,214 RAC: 0

Hmm... I also haven't started working on the new workunits. LHC predicts:
AthlonXP-M 1800+ : 11h56m
AthlonXP 3000+ : 8h23m

Scientific Network : 45000 MHz - 77824 MB - 1970 GB
Joined: 2 Sep 04 Posts: 4 Credit: 45,763 RAC: 0

Hi all. The estimated time is 12h32m01s on a P4 3.06 GHz with HT. But very surprising! The first unit's estimate dropped to less than 2h30m, then it uploaded its result with only 12m35s of CPU time spent on the WU. The second unit's estimate dropped more slowly until the first finished, then grew again because climateprediction started calculating. Is there a problem with hyperthreaded processors?

Honi soit qui mal y pense !
Joined: 3 Sep 04 Posts: 212 Credit: 4,545 RAC: 0

> Is there a problem with hyperthreaded processors?

Most likely not. CPU time predictions are just not very accurate, and some workunits are "unstable", meaning that their parameters are such that the protons collide with the accelerator walls and the simulation stops prematurely.

Markku Degerholm
LHC@home admin
Joined: 17 Sep 04 Posts: 14 Credit: 12,301 RAC: 0

I am already crunching my first workunit on an AMD Athlon 64 3200+. The time to completion says 8 hours and 36 minutes, but it has already crunched 45 percent in 1½ hours, so it is running pretty well.
Joined: 1 Sep 04 Posts: 55 Credit: 21,297 RAC: 21

> Unfortunately, there will be faster work units coming soon. I say unfortunately because short jobs will increase the server load further.

Faster than the 0-to-5-second WUs I am getting at the moment? ;)

My problem is that I set LHC to about a 21% CPU share, but it gets very few cycles, because BOINC switches projects right after those very short WUs. Any suggestions? (My settings are set to "switch every 60 minutes".)
Joined: 17 Sep 04 Posts: 69 Credit: 26,714 RAC: 0

> My problem is that I set LHC to about a 21% CPU share, but it gets very few cycles, because BOINC switches projects right after those very short WUs. Any suggestions? (My settings are set to "switch every 60 minutes".)

I have the same issue, but on my laptop. On Monday the units worked one after the other for the whole hour, but now it switches projects each time a WU completes.

[edit] Come to think of it, I was running Monday's units on 4.19, but now I'm on 4.23... I wonder. [/edit]

[edit again] Also, Monday's units were not "left in memory"; the current ones are. [/edit]

Formerly mmciastro. Name and avatar changed for a change.
The New Online Helpsystem: help is just a call away.
Joined: 2 Sep 04 Posts: 545 Credit: 148,912 RAC: 0

> > My problem is that I set LHC to about a 21% CPU share, but it gets very few cycles, because BOINC switches projects right after those very short WUs. Any suggestions? (My settings are set to "switch every 60 minutes".)
>
> I have the same issue, but on my laptop. On Monday the units worked one after the other for the whole hour, but now it switches projects each time a WU completes.
>
> [edit] Come to think of it, I was running Monday's units on 4.19, but now I'm on 4.23... I wonder. [/edit]

As I understand it, BOINC uses the ending of a work unit as the point where it selects the next WU to process. The difficulty is that, regardless of the actual resource debt that should exist, an inappropriate choice of project is sometimes made. Since it is mostly just annoying, this is not a high-priority problem ...
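The "resource debt" bookkeeping described above can be sketched roughly as follows. This is a hypothetical toy model in Python, not the actual BOINC client code; the project names, numbers, and function names are made up for illustration (only the 21% share is taken from this thread). The point it demonstrates: a WU that finishes in seconds barely moves the debts, so a scheduler that only re-evaluates at WU boundaries can immediately switch away from a low-share project.

```python
def pick_next_project(projects):
    """Run whichever project has the largest accumulated debt."""
    return max(projects, key=lambda p: p["debt"])

def account(projects, ran, seconds):
    """After `ran` crunched for `seconds`, update every project's debt."""
    total_share = sum(p["share"] for p in projects)
    for p in projects:
        # every project accrues debt in proportion to its resource share
        p["debt"] += seconds * p["share"] / total_share
    # the project that actually ran pays for the CPU time it received
    ran["debt"] -= seconds

projects = [
    {"name": "LHC@home", "share": 21.0, "debt": 0.0},
    {"name": "other",    "share": 79.0, "debt": 0.0},
]

# A 5-second LHC workunit ends almost immediately...
lhc = projects[0]
account(projects, lhc, 5)

# ...and now LHC@home sits at debt -3.95 vs. +3.95 for the other
# project, so the next selection switches away from LHC@home.
print(pick_next_project(projects)["name"])  # → other
```

Under this toy model, LHC@home would only get picked again once the other project has run long enough for LHC's 21% accrual to catch back up, which matches the behavior reported in the thread.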
Joined: 17 Sep 04 Posts: 52 Credit: 247,983 RAC: 0

> As I understand it, BOINC uses the ending of a work unit as the point where it selects the next WU to process. The difficulty is that, regardless of the actual resource debt that should exist, an inappropriate choice of project is sometimes made.

Seems to be a matter of the BOINC version or something. At least I've noticed that sometimes BOINC makes the "right" choice while other times it doesn't (OK, I haven't yet made the connection to the versions...).

> Since it is mostly just annoying, this is not a high-priority problem ...

It's pretty darn annoying when a deadline is approaching and BOINC switches to another project ;)
Joined: 2 Sep 04 Posts: 352 Credit: 1,393,150 RAC: 0

> > It's pretty darn annoying when a deadline is approaching and BOINC switches to another project ;)
>
> Agreed! Exactly as expressed!
>
> Greetings from Switzerland,
> littleBouncer

With the newer client versions 4.20 and up, you can suspend a project or projects if you want to, so the GUI will switch to the project you want to run ... :)
©2024 CERN