Message boards : Xtrack/SixTrack : Xtrack (Xboinc)

Previous · 1 . . . 5 · 6 · 7 · 8 · 9 · 10 · 11 . . . 12 · Next

Garrulus glandarius

Joined: 5 Apr 25
Posts: 51
Credit: 824,170
RAC: 22,683
Message 52410 - Posted: 1 Oct 2025, 4:05:30 UTC - in response to Message 52409.  
Last modified: 1 Oct 2025, 4:24:29 UTC

I noticed something similar on a Win10 machine.
Batches with the same name but different scan # are alternately using the x86 or x86_64 app.

On a Win11 machine I see the two app versions alternate between tasks that at least have different names.
ID: 52410
Dark Angel
Joined: 7 Aug 11
Posts: 118
Credit: 28,742,838
RAC: 43,399
Message 52411 - Posted: 1 Oct 2025, 5:08:00 UTC

On one of the machines (Ubuntu Server) I have even completely removed all 32-bit compatibility libs, yet the work units still show (through BOINC Manager, Tasks tab, Properties button) that they're i686.
It seems rather odd.
ID: 52411
Profile adrianxw

Joined: 29 Sep 04
Posts: 188
Credit: 705,487
RAC: 0
Message 52413 - Posted: 1 Oct 2025, 6:56:32 UTC

From the late '70s I programmed engineering applications in Fortran 66 and 77. Back then, things like memory were seriously limited! Our SEL 32/77 machine had 1 megabyte of RAM!!! It was a multiuser system with 20-30 users on it at any one time. An efficient compiler was absolutely essential!!!

Wave upon wave of demented avengers march cheerfully out of obscurity into the dream.
ID: 52413
camontan
Project administrator
Project developer
Project tester
Project scientist

Joined: 10 Jul 25
Posts: 5
Credit: 186,170
RAC: 7,987
Message 52414 - Posted: 1 Oct 2025, 7:14:57 UTC - in response to Message 52407.  


HM, my 7900X has already finished the WUs in around 3 hours and 15 minutes.

My 5900X needs about 7 hours for these test WUs; the load on all boxes should be nearly equal (1 CPDN multi-core WU, 4x XTrack LHC, 4x XTrack DEV-LHC).

Is this dramatic time difference something you would expect?


Dear Yeti,

Is this difference specific to the latest DEV work units, or is it exclusively caused by the production workflow?

If it's from the production flow, I'm afraid this level of difference is plausible. Every set of particles we send to the volunteers for tracking may or may not be stable, and we have no way to predict it in advance (if we knew in advance, tracking would not be necessary). So a WU might run up to the expected maximum (10 h on a modern CPU, all particles surviving), or be very short (most particles, if not all, lost before the target 1M turns).

If it's from the DEV workflow I sent yesterday, then I am surprised; those should be jobs of equal length. Perhaps the machines got different loads from prod while working on DEV?
Was at least the progress bar in the DEV workflow behaving reasonably?

Thank you for the precious feedback and helping with the development!
Carlo
ID: 52414
Profile Yeti
Volunteer moderator
Joined: 2 Sep 04
Posts: 468
Credit: 214,499,708
RAC: 41,902
Message 52415 - Posted: 1 Oct 2025, 8:12:17 UTC - in response to Message 52414.  

Is this difference specific in the latest DEV work units or is it exclusively caused by the production workflow?
It was the DEV work units.

If it's from the DEV workflow I sent yesterday, then I am surprised, it should be jobs of equal length. Perhaps the machines got different loads from prod while working on DEV?
Hm, I don't think the other tasks (LHC live and CPDN) could make such a huge difference.

Was at least the progress bar in the DEV workflow behaving reasonably?
As far as I can tell, it looked good and credible.


Supporting BOINC, a great concept!
ID: 52415
Millenium

Joined: 1 Nov 05
Posts: 2
Credit: 145,125
RAC: 7,311
Message 52419 - Posted: 1 Oct 2025, 11:52:14 UTC

Ah, this explains why different WUs have different lengths!

I'm still on the 0.41 WUs, but so far everything works fine.

The System Monitor on Kubuntu reports that they are all i686; at least, that's the name of the process. Are they 32-bit?
ID: 52419
Profile Yeti
Volunteer moderator
Joined: 2 Sep 04
Posts: 468
Credit: 214,499,708
RAC: 41,902
Message 52420 - Posted: 1 Oct 2025, 12:14:37 UTC - in response to Message 52405.  
Last modified: 1 Oct 2025, 12:58:11 UTC

With unlimited and one CPU, without the days parameter, I
get a max of 50 tasks for XTrack.
and
These 50 tasks run on a 64-core Threadripper (4-6 hours per task).

I don't get how this is supposed to work.

If I limit the client to one core, work fetch will ask for work for only 1 core, and 50 WUs may be a project limit per core; so far this is okay.

But then the client will run WUs on only 1 core.


Supporting BOINC, a great concept!
ID: 52420
computezrmle
Volunteer moderator
Volunteer developer
Volunteer tester
Help desk expert
Joined: 15 Jun 08
Posts: 2679
Credit: 286,639,849
RAC: 99,609
Message 52421 - Posted: 1 Oct 2025, 12:53:36 UTC - in response to Message 52420.  

As long as rsc_fpops_est=500e9 is set for each task, there's nothing useful you can do on the client side.
With each valid result reported, the server will step by step adjust the flops value for each computer.
The latter affects work fetch as well as estimated runtimes.
Whenever the app's version number changes, this process starts from scratch.

@camontan
It's up to the project to send a more realistic rsc_fpops_est value per task.
Even a mean value based on an educated guess would be better than the current setting.
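For readers who want the mechanics: the client's initial estimate is essentially task size divided by host speed, so a far-too-small rsc_fpops_est yields absurdly short estimates until the server-side adjustment kicks in. A minimal sketch (the host speed is an assumed, illustrative figure, not from any real host):

```python
# Rough sketch of a BOINC client's *initial* runtime estimate:
# task size (rsc_fpops_est) divided by the host's benchmarked speed.
# The host speed below is an assumed, illustrative figure.

def estimated_runtime_s(rsc_fpops_est: float, host_flops: float) -> float:
    """Seconds the client initially expects the task to take."""
    return rsc_fpops_est / host_flops

host_flops = 5e9  # assume ~5 GFLOPS per core from the benchmark

# With the current setting of 500e9 FLOPs the estimate is under 2 minutes:
print(estimated_runtime_s(500e9, host_flops))  # 100.0 seconds

# A task that really runs ~7 h on such a host corresponds to roughly:
print(7 * 3600 * host_flops / 1e12)  # 126.0 TFLOPs, ~250x the current estimate
```

This is why even a rough mean value from the project would help: the gap between 500 GFLOPs and the real work is two orders of magnitude.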
ID: 52421
Krümel

Joined: 18 May 15
Posts: 1
Credit: 2,320,255
RAC: 3,546
Message 52433 - Posted: 2 Oct 2025, 6:27:55 UTC

The granted credits differ a lot.
This WU https://lhcathome.cern.ch/lhcathome/result.php?resultid=427470867 with more than 8 h runtime got 222.64 credits.
This one https://lhcathome.cern.ch/lhcathome/result.php?resultid=427470891 with slightly less runtime got 534.44.
ID: 52433
computezrmle
Volunteer moderator
Volunteer developer
Volunteer tester
Help desk expert
Joined: 15 Jun 08
Posts: 2679
Credit: 286,639,849
RAC: 99,609
Message 52434 - Posted: 2 Oct 2025, 6:47:20 UTC - in response to Message 52433.  

LHC@home has always used BOINC's built-in credit calculation (and will not change this).
Hence, it makes no sense to complain about credits here.
If you feel there's something wrong you may open an issue at github:
https://github.com/BOINC/boinc
ID: 52434
Profile Laurence
Project administrator
Project developer

Joined: 20 Jun 14
Posts: 407
Credit: 238,712
RAC: 0
Message 52435 - Posted: 2 Oct 2025, 7:37:02 UTC - in response to Message 52434.  

Credit is always a hot topic, and yes, this is a BOINC-wide issue rather than something unique to LHC@home. As far as I understand, we are using CreditNew, which bases granted credit on the actual floating-point operations (FLOPs) performed, with a scaling factor to keep credit levels consistent across projects. Providing more accurate values for rsc_fpops_est should improve estimates and reduce distortions. The credit assignment itself is something we can tune, since the validator has hooks that allow us to adjust the equations if needed.
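For context, BOINC credit is defined on the cobblestone scale: 200 credits correspond to one day of computing at 1 GFLOPS. A back-of-the-envelope conversion (this deliberately omits CreditNew's per-host and per-app-version scaling, so real grants will differ):

```python
# Back-of-the-envelope credit on the cobblestone scale:
# 200 credits = one day (86400 s) of computing at 1 GFLOPS.
# CreditNew adds per-host and per-app-version scaling on top,
# which this sketch deliberately omits.

def cobblestones(total_flops: float) -> float:
    """Nominal credit for a given number of floating-point operations."""
    return total_flops / (1e9 * 86400) * 200

# e.g. a core sustaining an assumed 5 GFLOPS for 8 hours:
total = 5e9 * 8 * 3600
print(round(cobblestones(total), 1))  # 333.3
```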
ID: 52435
Brummig
Joined: 9 Feb 16
Posts: 49
Credit: 540,726
RAC: 69
Message 52439 - Posted: 3 Oct 2025, 11:09:49 UTC

Whilst I had no problems with SixTrack, I ended up missing the deadlines with my Xtrack tasks. Could you make the deadline much longer, please?
ID: 52439
Ian&Steve C.

Joined: 16 Sep 25
Posts: 3
Credit: 379,522
RAC: 14,691
Message 52441 - Posted: 3 Oct 2025, 12:08:24 UTC - in response to Message 52435.  

Credit is always a hot topic, and yes, this is a BOINC-wide issue rather than something unique to LHC@home. As far as I understand, we are using CreditNew, which bases granted credit on the actual floating-point operations (FLOPs) performed, with a scaling factor to keep credit levels consistent across projects. Providing more accurate values for rsc_fpops_est should improve estimates and reduce distortions. The credit assignment itself is something we can tune, since the validator has hooks that allow us to adjust the equations if needed.


BOINC has no way to know how many floating-point operations are actually performed by the application. It can only estimate that from the value the project sets for the estimated FLOPs, combined with the estimated performance of the device, which is itself a flawed process; the app may use more than one type and size of operation.

In my opinion, a flat, static credit ends up being the fairest approach: each WU should have a set value that is independent of the device on which it was run, which is not the case with CreditNew.
ID: 52441
Garrulus glandarius

Joined: 5 Apr 25
Posts: 51
Credit: 824,170
RAC: 22,683
Message 52442 - Posted: 3 Oct 2025, 12:18:04 UTC - in response to Message 52386.  
Last modified: 3 Oct 2025, 12:18:45 UTC

please let us know if you see a reasonable progress bar! It should be slow at first (all particles still alive) and then become gradually faster towards 100% (particles are getting lost, and it is faster to do an LHC turn).

I got 8 resends after someone aborted them. Runtimes are very long on the laptop that snatched them.

Just finished the first one after 18 hours (15 h CPU time). The speed seemed pretty constant (started at around 5.04%/hour and went to 6.12%/hour later on for most tasks), the runtime estimate is pretty accurate, and once progress hit 100% it was done.

The remaining 7 tasks should finish one after another in the next 3-10 hours.
ID: 52442
Erich56

Joined: 18 Dec 15
Posts: 1906
Credit: 144,143,639
RAC: 73,952
Message 52443 - Posted: 3 Oct 2025, 16:50:32 UTC - in response to Message 52441.  

in my opinion, a flat, static credit ends up being the most fair approach. each WU should have a set value that is independent on the device in which it was run, which is not the case with CreditNew.
I agree; many other projects handle credit this way. LHC is one of the few that do not.
ID: 52443
Profile Magic Quantum Mechanic
Joined: 24 Oct 04
Posts: 1234
Credit: 79,423,285
RAC: 131,721
Message 52444 - Posted: 3 Oct 2025, 22:08:12 UTC - in response to Message 52443.  

in my opinion, a flat, static credit ends up being the most fair approach. each WU should have a set value that is independent on the device in which it was run, which is not the case with CreditNew.
I agree - many other projects handle the credit this way. LHC is one of the few which do not.


YES, I have always liked the way the Einstein project handles credits.
ID: 52444
mmonnin

Joined: 22 Mar 17
Posts: 76
Credit: 27,456,970
RAC: 110,135
Message 52445 - Posted: 3 Oct 2025, 23:09:01 UTC

While I also prefer a static credit per task, projects like E@H have much more consistent runtimes at any given time. On one of my PCs I see runtimes of 400 to 10,000 seconds.
ID: 52445
Harri Liljeroos
Joined: 28 Sep 04
Posts: 780
Credit: 59,551,448
RAC: 45,514
Message 52446 - Posted: 4 Oct 2025, 11:17:21 UTC

Some new tasks are being downloaded now. They have increased the estimated task size 1000-fold, to 500,000 GFLOPs. The estimated runtime for these tasks on my 7950X is over 2000 hours, so they all go into high-priority mode and bypass all other tasks (ATLAS vbox).
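One plausible mechanism for the blown-up estimate, sketched with assumed numbers: a client that learned a large duration correction factor (DCF) while the old 500 GFLOPs estimate was far too low will multiply the new, 1000-fold larger estimate by that same factor until the DCF adapts back down.

```python
# Sketch of how a stale duration correction factor (DCF) can inflate
# the estimate after rsc_fpops_est grows 1000-fold. All host numbers
# here are assumptions chosen for illustration.

benchmark_flops = 7e9          # assumed per-core benchmark (~7 GFLOPS)
old_est_flops = 500e9          # old rsc_fpops_est
actual_runtime_s = 2 * 3600    # suppose tasks really took ~2 h

# DCF the client learned while the old estimate was far too low:
raw_old_estimate_s = old_est_flops / benchmark_flops   # ~71 s
dcf = actual_runtime_s / raw_old_estimate_s            # ~101

# New estimate with 500,000 GFLOPs, before the DCF adapts back down:
new_est_flops = 500_000e9
new_estimate_h = new_est_flops / benchmark_flops * dcf / 3600
print(round(new_estimate_h))  # 2000
```

Note how the benchmark cancels out: the new estimate is simply the old actual runtime times the 1000-fold increase, so it will stay inflated until enough results pull the correction factor back down.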
ID: 52446
Toggleton

Joined: 4 Mar 17
Posts: 31
Credit: 12,119,585
RAC: 10,701
Message 52447 - Posted: 4 Oct 2025, 11:54:02 UTC - in response to Message 52446.  
Last modified: 4 Oct 2025, 11:54:21 UTC

With 500,000 GFLOPs, the new tasks are shown on my older PC as 20 hours. For me that is still more than double the time I will need for them, but at least that's better to work with than the very short times.
ID: 52447
Garrulus glandarius

Joined: 5 Apr 25
Posts: 51
Credit: 824,170
RAC: 22,683
Message 52448 - Posted: 4 Oct 2025, 11:56:39 UTC - in response to Message 52447.  

With 500,000 GFLOPs, the new tasks are shown on my older PC as 20 hours. For me that is still more than double the time I will need for them, but at least that's better to work with than the very short times.


20 hours is consistent with the 1-2 minutes that we've seen before, assuming a 1000-fold increase. One of my laptops got 8 new tasks but I don't have access to it until tomorrow evening (Eastern Europe).
ID: 52448


©2025 CERN