1) Message boards : Xtrack/SixTrack : Xtrack (Xboinc)
Message 52441
Posted 3 Oct 2025 by Ian&Steve C.
Credit is always a hot topic, and yes, this is a BOINC-wide issue rather than something unique to LHC@home. As far as I understand, we are using CreditNew, which bases granted credit on the actual floating-point operations (FLOPs) performed, with a scaling factor to keep credit levels consistent across projects. Providing more accurate values for rsc_fpops_est should improve estimates and reduce distortions. The credit assignment itself is something we can tune, since the validator has hooks that allow us to adjust the equations if needed.


BOINC has no way to know how many floating-point operations the application actually performs. It can only estimate that from the value the project sets for the estimated FLOPs combined with the estimated performance of the device, which is itself a flawed process; the app may use more than one type and size of operation.

In my opinion, a flat, static credit ends up being the fairest approach: each WU should have a set value that is independent of the device on which it was run, which is not the case with CreditNew.
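To illustrate the distortion being described, here is a small sketch contrasting a device-dependent estimate with a flat per-WU value. The 200-credits-per-day-at-1-GFLOPS "cobblestone" scale is BOINC's published definition, but the host numbers and the flat value below are invented for illustration; this is not the actual CreditNew algorithm, which is considerably more involved.

```python
# Illustrative sketch only: device-dependent credit vs. a flat per-WU value.
# The benchmark/runtime figures below are made up.

COBBLESTONE_SCALE = 200 / (86400 * 1e9)  # 200 credits/day at 1 GFLOPS

def estimated_credit(benchmark_flops, elapsed_seconds):
    """Credit from estimated device throughput x runtime (CreditNew-like)."""
    return benchmark_flops * elapsed_seconds * COBBLESTONE_SCALE

# The same workunit on two hosts whose benchmarks misstate real app throughput:
fast_host = estimated_credit(benchmark_flops=50e9, elapsed_seconds=1200)
slow_host = estimated_credit(benchmark_flops=5e9, elapsed_seconds=9000)
print(round(fast_host), round(slow_host))  # different credit for the same work

FLAT_CREDIT = 120.0  # set once per WU type, identical on every device
```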
2) Message boards : ATLAS application : Python using 75% CPU time
Message 52437
Posted 2 Oct 2025 by Ian&Steve C.
Don't worry about the time estimate vs. the progress reporting; it's often wrong. Most of my tasks end up completing at around 35% reported progress.
3) Message boards : Xtrack/SixTrack : Xtrack (Xboinc)
Message 52400
Posted 30 Sep 2025 by Ian&Steve C.
GPU apps were mentioned earlier in the thread (CUDA/OpenCL).

Any ETA on that? I'm highly interested in a CUDA app for Linux. I know you mentioned trying to compile an executable, but you could also just ship the whole Python environment as a compressed archive and execute it with the BOINC wrapper plus some scripts; that's how GPUGRID does it for most of their apps.
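For the wrapper route, the BOINC wrapper is driven by a `job.xml` that names the task to launch; a minimal sketch might look like the following (the script name and argument are hypothetical, assuming a launcher script that unpacks the archive and starts the Python environment):

```xml
<job_desc>
    <task>
        <!-- hypothetical launcher script shipped alongside the archive -->
        <application>run_xtrack.sh</application>
        <command_line>input.json</command_line>
    </task>
</job_desc>
```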

If you do instead focus on compiling an executable, I would highly recommend using something like CUDA 12.8 or 12.9. Compile SASS code for all the supported architectures (CUDA 12.9 supports cc 5.0-12.0, which is Maxwell all the way to Blackwell) and also include PTX code to support future architectures that have not been released yet. You can usually do all of this with an NVCCFLAGS argument like -arch=all if you don't want to define them individually. CUDA 13 is the latest, but it drops support for devices from cc 5.0-7.0 (Maxwell/Pascal/Volta), so using CUDA 12.8 or 12.9 actually gives broader support.
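If you'd rather spell the targets out than rely on -arch=all, the per-architecture -gencode flags can be generated mechanically. A sketch, assuming a compute-capability list spanning the cc 5.0-12.0 range mentioned above (the exact set of shipped sm_XX targets is an assumption; check `nvcc --list-gpu-arch` for your toolkit, and the kernel filename here is hypothetical):

```python
# Sketch: build explicit nvcc -gencode flags covering Maxwell..Blackwell.
# ARCHS is an assumed list of compute capabilities; verify against your CUDA
# toolkit with `nvcc --list-gpu-arch`.
ARCHS = [50, 52, 60, 61, 70, 75, 80, 86, 89, 90, 100, 120]

def gencode_flags(archs):
    # One SASS target per architecture...
    flags = [f"-gencode arch=compute_{a},code=sm_{a}" for a in archs]
    # ...plus embedded PTX for the newest arch, so future GPUs can JIT it.
    flags.append(f"-gencode arch=compute_{archs[-1]},code=compute_{archs[-1]}")
    return " ".join(flags)

# Print the resulting (hypothetical) compile command:
print("nvcc -O3 " + gencode_flags(ARCHS) + " xtrack_kernel.cu -o xtrack")
```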


©2026 CERN