Message boards : Xtrack/SixTrack : Xtrack (Xboinc)
Joined: 5 Apr 25 · Posts: 51 · Credit: 824,170 · RAC: 22,683
I noticed something similar on a Win10 machine. Batches with the same name but different scan # are alternately using the x86 or x86_64 app. On a Win11 machine I see the two app versions alternate between tasks that at least have different names.
Joined: 7 Aug 11 · Posts: 118 · Credit: 28,742,838 · RAC: 43,399
On one of the machines (Ubuntu Server) I have even completely removed all 32-bit compatibility libs, yet the work units still show (through BOINC Manager, Tasks tab, Properties button) that they're i686. It seems rather odd.
Joined: 29 Sep 04 · Posts: 188 · Credit: 705,487 · RAC: 0
From the late '70s I programmed engineering applications in Fortran 66 and 77. Back then, things like memory were seriously limited! Our SEL 32/77 machine had 1 megabyte of RAM, on a multiuser system with 20-30 users at any one time. An efficient compiler was absolutely essential!

Wave upon wave of demented avengers march cheerfully out of obscurity into the dream.
Joined: 10 Jul 25 · Posts: 5 · Credit: 186,170 · RAC: 7,987
Dear Yeti,

Is this difference specific to the latest DEV work units, or is it caused exclusively by the production workflow?

If it's from the production flow, I'm afraid this level of difference is plausible. Every set of particles we send to the volunteers for tracking may or may not be stable, and we have no way to predict it in advance (if we knew in advance, tracking would not be necessary). So a WU might run up to the expected maximum (10 h on a modern CPU, all particles surviving), or possibly be very short (most particles, if not all, lost before the target 1M turns).

If it's from the DEV workflow I sent yesterday, then I am surprised; those should be jobs of equal length. Perhaps the machines got different loads from prod while working on DEV? Was at least the progress bar in the DEV workflow behaving reasonably?

Thank you for the precious feedback and for helping with the development!
Carlo
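(A toy illustration of the point above, not Xtrack code: if the per-turn tracking cost is roughly constant, a WU's total work is proportional to the number of turns its particles actually survive, so a fully stable particle set approaches the ~10 h maximum while an unstable one finishes much sooner. The particle count and loss turns below are invented.)

```python
# Toy model, not Xtrack: WU runtime scales with the turns particles survive.
import random

MAX_TURNS = 1_000_000     # target number of turns per WU
MAX_RUNTIME_H = 10.0      # expected maximum on a modern CPU (all particles survive)
N_PARTICLES = 100         # hypothetical number of particles per WU

def wu_runtime_hours(survived_turns):
    """Runtime estimate proportional to the total turns actually tracked."""
    return MAX_RUNTIME_H * sum(survived_turns) / (N_PARTICLES * MAX_TURNS)

stable = [MAX_TURNS] * N_PARTICLES                                       # nothing lost -> ~10 h
unstable = [random.randint(1_000, 50_000) for _ in range(N_PARTICLES)]   # early losses -> short WU

print(f"stable WU:   ~{wu_runtime_hours(stable):.1f} h")
print(f"unstable WU: ~{wu_runtime_hours(unstable):.1f} h")
```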
Joined: 2 Sep 04 · Posts: 468 · Credit: 214,499,708 · RAC: 41,902
> Is this difference specific to the latest DEV work units, or is it caused exclusively by the production workflow?

It was the DEV work units.

> If it's from the DEV workflow I sent yesterday, then I am surprised; those should be jobs of equal length. Perhaps the machines got different loads from prod while working on DEV?

Hmm, I don't think that the other tasks (LHC live and CPDN) could make such a huge difference.

> Was at least the progress bar in the DEV workflow behaving reasonably?

As far as I can say, it looked good and credible.

Supporting BOINC, a great concept!
Joined: 1 Nov 05 · Posts: 2 · Credit: 145,125 · RAC: 7,311
Ah, this explains why different WUs have different lengths! I'm still on 0.41 WUs, but so far everything works fine. The System Monitor on Kubuntu reports that they are all i686; at least, that's the name of the process. Are they 32-bit?
Joined: 2 Sep 04 · Posts: 468 · Credit: 214,499,708 · RAC: 41,902
With "unlimited" and one CPU, without the days parameter, and this with 50 tasks running on a 64-core Threadripper (4-6 hours for one task): I don't get how this should work. If I limit the client to one core, the work fetch will ask for work for only 1 core, and 50 WUs may be a project limit per core; so far this is okay. But then the client will run WUs on only 1 core.

Supporting BOINC, a great concept!
Joined: 15 Jun 08 · Posts: 2679 · Credit: 286,639,849 · RAC: 99,609
As long as rsc_fpops_est=500e9 is set for each task, there's nothing useful you can do on the client side. With each valid result being reported to the server, it will step by step adjust the flops value for each computer. The latter affects work fetch as well as estimated runtimes. Whenever the app's version number gets changed, this process starts from scratch.

@camontan It's up to the project to send a more realistic rsc_fpops_est value per task. Even a mean value based on an educated guess would be better than the current setting.
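(A minimal sketch of why the current setting is so far off, with invented numbers: the BOINC client derives its runtime estimate from the task's rsc_fpops_est divided by the speed it assumes for the host/app version, so 500e9 flops on a few-GFLOPS core comes out at a couple of minutes while the real runtime is hours.)

```python
# Illustrative only: BOINC-style runtime estimate derived from rsc_fpops_est.
# The assumed per-core speed below is hypothetical, not taken from a real host.
def estimated_runtime_s(rsc_fpops_est: float, assumed_flops: float) -> float:
    """Estimated runtime = estimated flop count / speed assumed for this host/app."""
    return rsc_fpops_est / assumed_flops

ASSUMED_CORE_FLOPS = 5e9   # ~5 GFLOPS per core (assumption)

# With the current per-task estimate of 500e9 flops:
print(f"{estimated_runtime_s(500e9, ASSUMED_CORE_FLOPS) / 60:.1f} min")   # ~1.7 min
# Real runtimes of several hours mean the estimate is off by two to three
# orders of magnitude until the server has adjusted the per-host flops value.
```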
Joined: 18 May 15 · Posts: 1 · Credit: 2,320,255 · RAC: 3,546
The granted credits differ very much. This WU https://lhcathome.cern.ch/lhcathome/result.php?resultid=427470867 with more than 8 h runtime got 222.64 credits. This one https://lhcathome.cern.ch/lhcathome/result.php?resultid=427470891 with slightly less runtime got 534.44.
Joined: 15 Jun 08 · Posts: 2679 · Credit: 286,639,849 · RAC: 99,609
LHC@home has always used BOINC's built-in credit calculation (and will not change this). Hence, it makes no sense to complain about credits here. If you feel there's something wrong you may open an issue on GitHub: https://github.com/BOINC/boinc
Joined: 20 Jun 14 · Posts: 407 · Credit: 238,712 · RAC: 0
Credit is always a hot topic, and yes, this is a BOINC-wide issue rather than something unique to LHC@home. As far as I understand, we are using CreditNew, which bases granted credit on the actual floating-point operations (FLOPs) performed, with a scaling factor to keep credit levels consistent across projects. Providing more accurate values for rsc_fpops_est should improve estimates and reduce distortions. The credit assignment itself is something we can tune, since the validator has hooks that allow us to adjust the equations if needed. |
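(For scale, a hedged sketch of the unit CreditNew is anchored to, the cobblestone: 200 credits for a full day on a host sustaining 1 GFLOPS, i.e. roughly 432 GFLOP per credit. The task size below is invented, and the real CreditNew code additionally normalises per host and per app version, which is where the scatter between similar tasks comes from.)

```python
# Sketch of the cobblestone scale underlying BOINC credit (illustrative only).
# Definition: 200 credits for one day of work on a host sustaining 1 GFLOPS,
# i.e. 86,400 GFLOP per 200 credits = 432 GFLOP per credit.
GFLOP_PER_CREDIT = 86_400 / 200   # 432 GFLOP

def ideal_credit(gflop_performed: float) -> float:
    """Credit a task would receive if its performed flops were known exactly."""
    return gflop_performed / GFLOP_PER_CREDIT

# A hypothetical task that really performed 200,000 GFLOP:
print(round(ideal_credit(200_000), 2))   # ~462.96 credits
# CreditNew has to *estimate* the flops (from rsc_fpops_est and host benchmarks)
# and then normalise per host/app version, which is why two similar tasks can
# end up with very different granted credit.
```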
Joined: 9 Feb 16 · Posts: 49 · Credit: 540,726 · RAC: 69
Whilst I had no problems with SixTrack, I ended up missing the deadlines with my Xtrack tasks. Could you make the deadline much longer, please?
Joined: 16 Sep 25 · Posts: 3 · Credit: 379,522 · RAC: 14,691
> Credit is always a hot topic, and yes, this is a BOINC-wide issue rather than something unique to LHC@home. As far as I understand, we are using CreditNew, which bases granted credit on the actual floating-point operations (FLOPs) performed, with a scaling factor to keep credit levels consistent across projects. Providing more accurate values for rsc_fpops_est should improve estimates and reduce distortions. The credit assignment itself is something we can tune, since the validator has hooks that allow us to adjust the equations if needed.

BOINC has no way to know how many floating-point operations are performed by the application. It can only estimate that based on the value the project sets for the estimated flops, combined with the estimated performance of the device, which is also a flawed process; the app may use more than one type and size of operation. In my opinion, a flat, static credit ends up being the fairest approach: each WU should have a set value that is independent of the device on which it was run, which is not the case with CreditNew.
Joined: 5 Apr 25 · Posts: 51 · Credit: 824,170 · RAC: 22,683
> please let us know if you see a reasonable progress bar! It should be slow at first (all particles still alive) and then become gradually faster towards 100% (particles are getting lost, and it is faster to do an LHC turn).

I got 8 resends after someone aborted them. Runtimes are very long on the laptop that snatched them. I just finished the first one after 18 hours (15 h CPU time); the speed seemed pretty constant (started at around 5.04%/hour and went to 6.12%/hour later on for most tasks), the runtime estimate is pretty accurate, and once progress hit 100% it was done. The remaining 7 tasks should finish one after another in the next 3-10 hours.
Joined: 18 Dec 15 · Posts: 1906 · Credit: 144,143,639 · RAC: 73,952
> In my opinion, a flat, static credit ends up being the fairest approach: each WU should have a set value that is independent of the device on which it was run, which is not the case with CreditNew.

I agree; many other projects handle the credit this way. LHC is one of the few which do not.
Joined: 24 Oct 04 · Posts: 1234 · Credit: 79,423,285 · RAC: 131,721
> > In my opinion, a flat, static credit ends up being the fairest approach: each WU should have a set value that is independent of the device on which it was run, which is not the case with CreditNew.
>
> I agree; many other projects handle the credit this way. LHC is one of the few which do not.

Yes, I have always liked the way the Einstein project handles credits.
Joined: 22 Mar 17 · Posts: 76 · Credit: 27,456,970 · RAC: 110,135
While I also prefer a static credit per task, projects like E@H have much more consistent runtimes at any given time. On one of my PCs I see runtimes of 400 to 10,000 seconds.
Joined: 28 Sep 04 · Posts: 780 · Credit: 59,551,448 · RAC: 45,514
Some new tasks are being downloaded now. They have changed the estimated task size 1000-fold, to 500,000 GFLOPs. The estimated runtime for these tasks on my 7950X is over 2000 hours, so they all go into high-priority mode and bypass all other tasks (ATLAS vbox).
Joined: 4 Mar 17 · Posts: 31 · Credit: 12,119,585 · RAC: 10,701
With 500,000 GFLOPs the new tasks are shown on my older PC as 20 hours. For me that is still more than double the time I will need for them, but at least it is better to work with than the very short estimates.
Joined: 5 Apr 25 · Posts: 51 · Credit: 824,170 · RAC: 22,683
> With 500,000 GFLOPs the new tasks are shown on my older PC as 20 hours. For me that is still more than double the time I will need for them, but at least it is better to work with than the very short estimates.

20 hours is consistent with the 1-2 minute estimates we've seen before, assuming a 1000-fold increase. One of my laptops got 8 new tasks, but I don't have access to it until tomorrow evening (Eastern Europe).
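(A back-of-the-envelope check of these estimates, with assumed numbers: a core speed of roughly 7 GFLOPS and, for the 2000-hour case, a duration correction factor around 100 that could plausibly be left over from the old, far-too-small 500 GFLOP estimate. None of these figures come from the actual hosts.)

```python
# Illustrative check of the new runtime estimates (all numbers assumed).
def est_runtime_h(rsc_fpops_est_gflop: float, core_gflops: float, dcf: float = 1.0) -> float:
    """BOINC-style estimate: flop count / assumed core speed, scaled by the
    host's duration correction factor (where the project still uses DCF)."""
    return rsc_fpops_est_gflop / core_gflops / 3600 * dcf

# 500,000 GFLOP on an assumed ~7 GFLOPS core -> roughly 20 h, as reported above.
print(round(est_runtime_h(500_000, 7), 1))         # ~19.8 h
# An estimate near 2000 h would need a correction factor around 100, possibly
# inherited from the earlier tasks whose 500 GFLOP estimate was far too small.
print(round(est_runtime_h(500_000, 7, dcf=100)))   # ~1984 h
```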