Message boards : Theory Application : Issues Native Theory application
Jim1348

Joined: 15 Nov 14
Posts: 454
Credit: 12,303,433
RAC: 3,332
Message 38327 - Posted: 19 Mar 2019, 22:33:03 UTC - in response to Message 38322.  

No, there is no graceful shutdown and no time limit of 18 hours.

That is a bit inconvenient for the average cruncher, to put it charitably. Is that on the to-do list to be fixed?
mmonnin

Joined: 22 Mar 17
Posts: 44
Credit: 3,801,950
RAC: 0
Message 38329 - Posted: 20 Mar 2019, 2:55:54 UTC - in response to Message 38286.  

The task is set to use 2 CPUs by default, but barely more than 1 is used, and the reported run time exactly equals the CPU time - to the second, on every task. At most I see 1.5 cores when the task is really short: 6 min run time, 8 min CPU time.
Don't trust the values reported in the results, especially when they are equal.
Example, your task: https://lhcathome.cern.ch/lhcathome/result.php?resultid=219459914
It reported 51 min 1 sec, which is exactly the CPU time reported at the end of the result: 06:18:02 (32596): cranky exited; CPU time 3061.446043.
But when you calculate job finish time minus job start time (the job should have run in one go):
06:18:02 (32596): cranky exited; CPU time 3061.446043
05:38:19 (32596): wrapper (7.15.26016): starting

you'll find the elapsed time is 2383 seconds, so either one CPU is used at far over 100% or two CPUs are partially used.


I am looking at the BOINCTasks history for the correct run/CPU times, and my statement is still true. At BEST 1.5 threads are used, but the average is more like 1.1 threads per task. I set it to use 1 CPU per task with app_config so the CPU would be fully used.
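The arithmetic from the quoted log lines can be checked with a small bash sketch (timestamps copied from the wrapper lines above; the average-cores figure is simply CPU time divided by elapsed time):

```shell
# Convert HH:MM:SS to seconds (10# forces base-10 so "08"/"09" don't break).
to_sec() { echo $(( 10#${1:0:2}*3600 + 10#${1:3:2}*60 + 10#${1:6:2} )); }

start="05:38:19"   # wrapper starting
end="06:18:02"     # cranky exited (same day assumed)
elapsed=$(( $(to_sec "$end") - $(to_sec "$start") ))
echo "elapsed: ${elapsed}s"   # 2383 s

# Average cores used = CPU time / elapsed wall-clock time.
awk -v cpu=3061.446043 -v e="$elapsed" 'BEGIN { printf "avg cores: %.2f\n", cpu/e }'
```

So this particular task averaged about 1.28 cores, matching the "barely over 1" observation.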
zombie67 [MM]

Joined: 24 Nov 06
Posts: 76
Credit: 6,720,840
RAC: 0
Message 38330 - Posted: 20 Mar 2019, 5:19:23 UTC
Last modified: 20 Mar 2019, 5:20:07 UTC

I was trying to install cvmfs just now on a couple of machines. But it's failing. Did something break?

I went through all the same steps that made it work earlier. So then I looked again at the messages during the steps, and I think this may be the problem:

$ wget https://ecsft.cern.ch/dist/cvmfs/cvmfs-release/cvmfs-release-latest_all.deb
--2019-03-19 22:12:31-- https://ecsft.cern.ch/dist/cvmfs/cvmfs-release/cvmfs-release-latest_all.deb
Resolving ecsft.cern.ch (ecsft.cern.ch)... 188.184.161.78
Connecting to ecsft.cern.ch (ecsft.cern.ch)|188.184.161.78|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 4724 (4.6K)
cvmfs-release-latest_all.deb: Permission denied

Cannot write to ‘cvmfs-release-latest_all.deb’ (Success).
pianoman

Joined: 29 Jun 18
Posts: 6
Credit: 3,288,185
RAC: 2,182
Message 38331 - Posted: 20 Mar 2019, 5:35:02 UTC
Last modified: 20 Mar 2019, 5:35:54 UTC

It took some fits and starts to get set up on my small set of 5 computers. It looks to be working on at least some of them now, but I've had several failed tasks and it looks like my machines are being throttled to 1 task a day. I'm assuming that as I start completing tasks without error, the throttling will gradually resolve itself?

By the way, BOINC is a 'fire and forget' thing for me, so I'm glad someone thought to reach out over email and let me know; I wouldn't have noticed for a long time. I didn't even know native apps without VirtualBox were an option (edit: and ATLAS too!). I'm gleefully removing VirtualBox from all my systems now...
computezrmle
Volunteer moderator
Volunteer developer
Volunteer tester

Joined: 15 Jun 08
Posts: 1504
Credit: 82,794,358
RAC: 78,356
Message 38332 - Posted: 20 Mar 2019, 5:36:21 UTC - in response to Message 38330.  

cvmfs-release-latest_all.deb: Permission denied

Cannot write to ‘cvmfs-release-latest_all.deb’ (Success).

Missing write access.
Check the local folder you are currently in when you run wget.
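A quick way to verify this before retrying (a sketch; the directory in question is whichever one you ran wget from):

```shell
# Test whether the current directory is writable.
if [ -w . ]; then
    echo "current directory is writable"
else
    echo "no write access here - cd to a writable directory first, e.g. /tmp"
fi

# Alternatively, tell wget to write the file somewhere you own:
# wget -O /tmp/cvmfs-release-latest_all.deb \
#   https://ecsft.cern.ch/dist/cvmfs/cvmfs-release/cvmfs-release-latest_all.deb
```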
computezrmle
Volunteer moderator
Volunteer developer
Volunteer tester

Joined: 15 Jun 08
Posts: 1504
Credit: 82,794,358
RAC: 78,356
Message 38333 - Posted: 20 Mar 2019, 5:42:53 UTC - in response to Message 38331.  

... boinc is a 'fire and forget' thing for me, ...

Clearly not recommended for beta apps like this.
That is one of the reasons to strictly limit the number of tasks sent.
zombie67 [MM]

Joined: 24 Nov 06
Posts: 76
Credit: 6,720,840
RAC: 0
Message 38334 - Posted: 20 Mar 2019, 5:46:45 UTC - in response to Message 38332.  
Last modified: 20 Mar 2019, 5:55:18 UTC

Missing write access.
Check the local folder you are currently in when you run wget.


I tried it again with sudo, and the "Permission denied" was resolved, so thanks for that. Although I am not sure why this is an issue now when it wasn't before; I am doing all the same steps from the same directory.

Then I went through all the same steps. However, I am still getting the cvmfs_config probe failed status.

Edit: So I went through all the same steps again, and now it's working. Not sure why; nothing changed. I know this for sure, because I didn't re-type the commands or even re-copy/paste them - I just re-ran them with "!34", "!35", etc. from the shell history. Frustrating, but at least it's resolved.
bronco

Joined: 13 Apr 18
Posts: 443
Credit: 8,438,885
RAC: 0
Message 38339 - Posted: 20 Mar 2019, 8:45:37 UTC - in response to Message 38329.  
Last modified: 20 Mar 2019, 8:54:17 UTC

The task is set to use 2 CPUs by default, but barely more than 1 is used, and the reported run time exactly equals the CPU time - to the second, on every task. At most I see 1.5 cores when the task is really short: 6 min run time, 8 min CPU time.
Don't trust the values reported in the results, especially when they are equal.
Example, your task: https://lhcathome.cern.ch/lhcathome/result.php?resultid=219459914
It reported 51 min 1 sec, which is exactly the CPU time reported at the end of the result: 06:18:02 (32596): cranky exited; CPU time 3061.446043.
But when you calculate job finish time minus job start time (the job should have run in one go):
06:18:02 (32596): cranky exited; CPU time 3061.446043
05:38:19 (32596): wrapper (7.15.26016): starting

you'll find the elapsed time is 2383 seconds, so either one CPU is used at far over 100% or two CPUs are partially used.


I am looking at the BOINCTasks history for the correct run/CPU times, and my statement is still true. At BEST 1.5 threads are used, but the average is more like 1.1 threads per task. I set it to use 1 CPU per task with app_config so the CPU would be fully used.

I had the same problem. I traced the cause to an error in my app_config.xml. Maybe you made the same error I did?
To avoid adding "noise" to this thread I posted a proper block for native Theory in a separate thread at https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4975
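For reference, the kind of app_config.xml block meant here looks roughly like the following (a sketch only; the app name `TheoryN` is an assumption - check the `<app>` entries in your client_state.xml - and the linked thread has the authoritative version):

```xml
<app_config>
  <app_version>
    <app_name>TheoryN</app_name>
    <!-- tell the scheduler each task occupies one CPU -->
    <avg_ncpus>1</avg_ncpus>
  </app_version>
</app_config>
```

The file goes in the project directory (e.g. `projects/lhcathome.cern.ch_lhcathome/`), and takes effect after "Options → Read config files" or a client restart.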
Crystal Pellet
Volunteer moderator
Volunteer tester

Joined: 14 Jan 10
Posts: 967
Credit: 6,361,668
RAC: 387
Message 38343 - Posted: 20 Mar 2019, 12:57:53 UTC

I see that the maximum runtime of the last 100 tasks is 25.06 hours.

It would be interesting to know which job and result ID that was.
maeax

Joined: 2 May 07
Posts: 980
Credit: 34,577,625
RAC: 18,167
Message 38344 - Posted: 20 Mar 2019, 13:08:24 UTC - in response to Message 38343.  
Last modified: 20 Mar 2019, 13:17:17 UTC

No result so far. A Sherpa 1.4.3 task has now been running for 25 hours, and I'm hoping it will end well.
runRivet.log is growing every minute (255 kByte), nevts=1000.
I'll let you know the result.
bronco

Joined: 13 Apr 18
Posts: 443
Credit: 8,438,885
RAC: 0
Message 38351 - Posted: 20 Mar 2019, 21:02:07 UTC - in response to Message 38344.  

No result so far. A Sherpa 1.4.3 task has now been running for 25 hours, and I'm hoping it will end well.
runRivet.log is growing every minute (255 kByte), nevts=1000.
I'll let you know the result.

Death by graceful shutdown must end!
Sherpa lives matter too!!
Crystal Pellet
Volunteer moderator
Volunteer tester

Joined: 14 Jan 10
Posts: 967
Credit: 6,361,668
RAC: 387
Message 38353 - Posted: 20 Mar 2019, 22:16:26 UTC

First ERROR task: Exit status 195 (0x000000C3) EXIT_CHILD_FAILED

https://lhcathome.cern.ch/lhcathome/result.php?resultid=219654394

===> [runRivet] Wed Mar 20 19:34:01 UTC 2019 [boinc pp bbbar 7000 - - sherpa 1.2.2p default 100000 32]

Setting environment...
grep: /etc/redhat-release: No such file or directory
MCGENERATORS=/cvmfs/sft.cern.ch/lcg/external/MCGenerators_lcgcmt67c
g++ = /cvmfs/sft.cern.ch/lcg/external/gcc/4.8.4/x86_64-slc6/bin/g++
g++ version = 4.8.4
RIVET=/cvmfs/sft.cern.ch/lcg/external/MCGenerators_lcgcmt67c/rivet/2.6.1/x86_64-slc6-gcc48-opt
Rivet version = rivet v2.6.1
RIVET_REF_PATH=/cvmfs/sft.cern.ch/lcg/external/MCGenerators_lcgcmt67c/rivet/2.6.1/x86_64-slc6-gcc48-opt/share/Rivet
RIVET_ANALYSIS_PATH=/shared/analyses
GSL=/cvmfs/sft.cern.ch/lcg/external/GSL/1.10/x86_64-slc6-gcc48-opt
HEPMC=/cvmfs/sft.cern.ch/lcg/external/HepMC/2.06.08/x86_64-slc6-gcc48-opt
FASTJET=/cvmfs/sft.cern.ch/lcg/external/fastjet/3.0.3/x86_64-slc6-gcc48-opt
PYTHON=/cvmfs/sft.cern.ch/lcg/external/Python/2.7.4/x86_64-slc6-gcc48-opt
ROOTSYS=/cvmfs/sft.cern.ch/lcg/app/releases/ROOT/5.34.26/x86_64-slc6-gcc48-opt/root

Input parameters:
mode=boinc
beam=pp
process=bbbar
energy=7000
params=-
specific=-
generator=sherpa
version=1.2.2p
tune=default
nevts=100000
seed=32

Prepare temporary directories and files ...
workd=/shared
tmpd=/shared/tmp/tmp.MXsBzLETgL
tmp_params=/shared/tmp/tmp.MXsBzLETgL/generator.params
tmp_hepmc=/shared/tmp/tmp.MXsBzLETgL/generator.hepmc
tmp_yoda=/shared/tmp/tmp.MXsBzLETgL/generator.yoda
tmp_jobs=/shared/tmp/tmp.MXsBzLETgL/jobs.log
tmpd_flat=/shared/tmp/tmp.MXsBzLETgL/flat
tmpd_dump=/shared/tmp/tmp.MXsBzLETgL/dump
tmpd_html=/shared/tmp/tmp.MXsBzLETgL/html

Prepare Rivet parameters ...
analysesNames=ATLAS_2011_I926145 CMS_2011_S8941262 LHCB_2010_I867355

Unpack data histograms...
dataFiles =
/cvmfs/sft.cern.ch/lcg/external/MCGenerators_lcgcmt67c/rivet/2.6.1/x86_64-slc6-gcc48-opt/share/Rivet/ATLAS_2011_I926145.yoda
/cvmfs/sft.cern.ch/lcg/external/MCGenerators_lcgcmt67c/rivet/2.6.1/x86_64-slc6-gcc48-opt/share/Rivet/CMS_2011_S8941262.yoda
/cvmfs/sft.cern.ch/lcg/external/MCGenerators_lcgcmt67c/rivet/2.6.1/x86_64-slc6-gcc48-opt/share/Rivet/LHCB_2010_I867355.yoda
output = /shared/tmp/tmp.MXsBzLETgL/flat
make: Entering directory `/shared/rivetvm'
g++ yoda2flat-split.cc -o yoda2flat-split.exe -std=c++11 -Wfatal-errors -Wl,-rpath /cvmfs/sft.cern.ch/lcg/external/MCGenerators_lcgcmt67c/yoda/1.7.1/x86_64-slc6-gcc48-opt/lib `/cvmfs/sft.cern.ch/lcg/external/MCGenerators_lcgcmt67c/yoda/1.7.1/x86_64-slc6-gcc48-opt/bin/yoda-config --cppflags --libs`
make: Leaving directory `/shared/rivetvm'

/shared/rivetvm/complete.sh ./REF_CMS_2011_S8941262_d02-x01-y01.dat
/shared/rivetvm/complete.sh ./REF_LHCB_2010_I867355_d02-x01-y02.dat
/shared/rivetvm/complete.sh ./REF_LHCB_2010_I867355_d02-x01-y01.dat
/shared/rivetvm/complete.sh ./REF_CMS_2011_S8941262_d03-x01-y01.dat
/shared/rivetvm/complete.sh ./REF_ATLAS_2011_I926145_d01-x01-y01.dat
/shared/rivetvm/complete.sh ./REF_LHCB_2010_I867355_d01-x01-y01.dat
/shared/rivetvm/complete.sh ./REF_ATLAS_2011_I926145_d03-x01-y01.dat
/shared/rivetvm/complete.sh ./REF_ATLAS_2011_I926145_d02-x01-y01.dat
/shared/rivetvm/complete.sh ./REF_LHCB_2010_I867355_d01-x01-y02.dat
/shared/rivetvm/complete.sh ./REF_CMS_2011_S8941262_d01-x01-y01.dat

Building rivetvm ...
make: Entering directory `/shared/rivetvm'
g++ rivetvm.cc -o rivetvm.exe -std=c++11 -DNDEBUG -DHIMODE=0 -Wfatal-errors -Wl,-rpath /cvmfs/sft.cern.ch/lcg/external/MCGenerators_lcgcmt67c/rivet/2.6.1/x86_64-slc6-gcc48-opt/lib -Wl,-rpath /cvmfs/sft.cern.ch/lcg/external/HepMC/2.06.08/x86_64-slc6-gcc48-opt/lib `/cvmfs/sft.cern.ch/lcg/external/MCGenerators_lcgcmt67c/rivet/2.6.1/x86_64-slc6-gcc48-opt/bin/rivet-config --cppflags --ldflags --libs` -lHepMC
make: Leaving directory `/shared/rivetvm'

Run sherpa 1.2.2p and Rivet ...
generatorExecString = ./rungen.sh boinc pp bbbar 7000 - - sherpa 1.2.2p default 100000 32 /shared/tmp/tmp.MXsBzLETgL/generator.hepmc
rivetExecString = /shared/rivetvm/rivetvm.exe -a ATLAS_2011_I926145 -a CMS_2011_S8941262 -a LHCB_2010_I867355 -i /shared/tmp/tmp.MXsBzLETgL/generator.hepmc -o /shared/tmp/tmp.MXsBzLETgL/flat -H /shared/tmp/tmp.MXsBzLETgL/generator.yoda -d /shared/tmp/tmp.MXsBzLETgL/dump
make: Entering directory `/shared/plotter'
g++ plotter.cc -o plotter.exe -Wall -Wextra -Werror -Wl,--as-needed -Wl,-rpath `/cvmfs/sft.cern.ch/lcg/app/releases/ROOT/5.34.26/x86_64-slc6-gcc48-opt/root/bin/root-config --libdir` `/cvmfs/sft.cern.ch/lcg/app/releases/ROOT/5.34.26/x86_64-slc6-gcc48-opt/root/bin/root-config --cflags --libs`
===> [rungen] Wed Mar 20 19:34:11 UTC 2019 [boinc pp bbbar 7000 - - sherpa 1.2.2p default 100000 32 /shared/tmp/tmp.MXsBzLETgL/generator.hepmc]

Setting environment for sherpa 1.2.2p ...
tree = hepmc2.06.05
grep: /etc/redhat-release: No such file or directory
MCGENERATORS=/cvmfs/sft.cern.ch/lcg/external/MCGenerators_hepmc2.06.05
LCG_PLATFORM=x86_64-slc5-gcc43-opt
gcc = /usr/bin/gcc
gcc version = 4.4.7
AGILE=/cvmfs/sft.cern.ch/lcg/external/MCGenerators_hepmc2.06.05/agile/1.4.0/x86_64-slc5-gcc43-opt
HEPMC=/cvmfs/sft.cern.ch/lcg/external/HepMC/2.06.05/x86_64-slc5-gcc43-opt
AGILE_GEN_PATH=/cvmfs/sft.cern.ch/lcg/external/MCGenerators_hepmc2.06.05
LHAPDF=/cvmfs/sft.cern.ch/lcg/external/MCGenerators_hepmc2.06.05/lhapdf/5.8.9/x86_64-slc5-gcc43-opt

Input parameters:
mode=boinc
beam=pp
process=bbbar
energy=7000
params=-
specific=-
generator=sherpa
version=1.2.2p
tune=default
nevts=100000
seed=32
outfile=/shared/tmp/tmp.MXsBzLETgL/generator.hepmc

Prepare temporary directories and files ...
workd=/shared
tmpd=/shared/tmp/tmp.MXsBzLETgL
tmp_params=/shared/tmp/tmp.MXsBzLETgL/generator.params

Decoding parameters of generator...
pTmin = 0
pTmax = 7000
mHatMin = 0
mHatMax = 7000

processCode=bbbar

beam1=2212
beam2=2212
beam energy = 3500.
INFO: streering file template = configuration/sherpa-bbbar.params
Prepare sherpa 1.2.2p parameters ...
seed numbers = 33 1
=> /shared/tmp/tmp.MXsBzLETgL/generator.params :
# steering file based on example from Sherpa 1.2.3 distribution:
# share/SHERPA-MC/Examples/Tevatron_UE/Run.dat

(run){
# disable colorizing of text printed on screen:
PRETTY_PRINT = Off

# number of events:
EVENTS = 100000

# set random seed:
# (see section "6.1.3 RANDOM_SEED" of Sherpa manual for details)
RANDOM_SEED = 33 1

# Event output file:
HEPMC2_GENEVENT_OUTPUT = sherpa
# full name of output file will be:
# "sherpa.hepmc2g"

# Define a particle container for light flavours and bottom separately
PARTICLE_CONTAINER 901 jLight 1 -1 2 -2 3 -3 4 -4 21
PARTICLE_CONTAINER 905 jBottom 5 -5

# disable splitting of HepMC output file:
FILE_SIZE = 1000000000

# Makes particles with c*tau > 10 mm stable:
MAX_PROPER_LIFETIME = 10.0
}(run)

(beam){
BEAM_1 = 2212; BEAM_ENERGY_1 = 3500.;
BEAM_2 = 2212; BEAM_ENERGY_2 = 3500.;
}(beam)

(processes){
# No need to include matching in current MCPLOTS analyses -> {0}
Process 901 901 -> 5 -5 901{0};
Order_EW 0; Max_N_Quarks 4;
CKKW sqr(20/E_CMS)
Integration_Error 0.02;
End process;

# b x -> b x not included in current MCPLOTS analyses
# Process 901 905 -> 901 905 901{0};
# Order_EW 0; Max_N_Quarks 4;
# CKKW sqr(20/E_CMS)
# Integration_Error 0.02;
# End process;

# b b -> b b considered negligible for current mcplots analyses
# Process 905 905 -> 905 905 901{0};
# Order_EW 0; Max_N_Quarks 4;
# CKKW sqr(20/E_CMS)
# Integration_Error 0.02;
# End process;
}(processes)

(selector){
# Set cuts
# Use this for hard leading jets in a certain pT window
#NJetFinder 2 0 0.0 1.0 # set min pT
## Or this?
# PT 93 0 7000

# Use this for hard leading jets in a certain mass window
#Mass 93 93 0 7000

}(selector)

(me){
ME_SIGNAL_GENERATOR = Comix
}(me)

(mi){
MI_HANDLER = Amisic # None or Amisic
}(mi)

# Parameter specifications for 'default':
# ---------------------------------------------
#%tuneFile%
# ---------------------------------------------
--------------------------------------

SHERPA=/cvmfs/sft.cern.ch/lcg/external/MCGenerators_hepmc2.06.05/sherpa/1.2.2p/x86_64-slc5-gcc43-opt
Run sherpa 1.2.2p ...
generatorExecString = /cvmfs/sft.cern.ch/lcg/external/MCGenerators_hepmc2.06.05/sherpa/1.2.2p/x86_64-slc5-gcc43-opt/bin/Sherpa -f /shared/tmp/tmp.MXsBzLETgL/generator.params
make: Leaving directory `/shared/plotter'
Updating display...
Display update finished (0 histograms, 0 events).
Welcome to Sherpa, <unknown user>. Initialization of framework underway.
Run_Parameter::Init(): Setting memory limit to 5.72297 GB.
-----------------------------------------------------------------------------
----------- Event generation run with SHERPA started ....... -----------
-----------------------------------------------------------------------------
................................................ | +
................................................ || | + +
................................... .... | | / +
................. ................ _,_ | .... || +| + +
............................... __.' ,\| ... || / +| +
.............................. ( \ \ ... | | | + + \ +
............................. ( \ -/ .... || + | +
........ ................... <S /()))))~~~~~~~~## + /\ +
............................ (!H (~~)))))~~~~~~#/ + + | +
................ ........... (!E (~~~))))) /|/ + +
............................ (!R (~~~))))) ||| + + +
..... ...................... (!P (~~~~))) /| + + +
............................ (!A> (~~~~~~~~~## + + +
............................. ~~(! '~~~~~~~ \ + + + +
............................... `~~~QQQQQDb // | + + + +
........................ .......... IDDDDP|| \ + + + + + +
.................................... IDDDI|| \ +
.................................... IHD HD|| \ + + + + + + + +
................................... IHD ##| :-) + +\ +
......... ............... ......... IHI ## / / + + + + +\ +
................................... IHI/ / / + + + + +
................................... ## | | / / + + + + / +
....................... /TT\ ..... ##/ /// / + + + + + + +/ +
......................./TTT/T\ ... /TT\/\\\ / + + + + + + +/ \ +
version 1.2.2 ......../TTT/TTTT\...|TT/T\\\/ + ++ + /
-----------------------------------------------------------------------------

SHERPA version 1.2.2

Authors: Tanju Gleisberg, Stefan Hoeche, Frank Krauss,
Marek Schoenherr, Steffen Schumann, Frank Siegert,
Jan Winter.
Former Authors: Timo Fischer, Ralf Kuhn, Thomas Laubrich,
Andreas Schaelicke

This program uses a lot of genuine and original research work
by other people. Users are encouraged to refer to
the various original publications.

Users are kindly asked to refer to the documentation
published under JHEP 02(2009)007

Please visit also our homepage

http://www.sherpa-mc.de

for news, bugreports, updates and new releases.

-----------------------------------------------------------------------------

List of Particle Data
IDName kfc MASS[<kfc>] WIDTH[<kfc>] STABLE[<kfc>] MASSIVE[<kfc>] ACTIVE[<kfc>]
d 1 0.01 0 1 0 1
u 2 0.005 0 1 0 1
s 3 0.2 0 1 0 1
c 4 1.42 0 1 0 1
b 5 4.8 0 1 0 1
t 6 175 1.5 1 1 1
e- 11 0.000511 0 1 0 1
nu_e 12 0 0 1 0 1
mu- 13 0.105 0 1 0 1
nu_mu 14 0 0 1 0 1
tau- 15 1.777 2.36e-12 0 0 1
nu_tau 16 0 0 1 0 1
G 21 0 0 1 0 1
P 22 0 0 1 0 1
Z 23 91.188 2.49 1 1 1
W+ 24 80.419 2.06 1 1 1
h0 25 120 0.0037 1 1 1

List of Particle Containers
IDName kfc Constituents
lepton 90 {e-,e+,mu-,mu+,tau-,tau+}
neutrino 91 {nu_e,nu_eb,nu_mu,nu_mub,nu_tau,nu_taub}
fermion 92 {d,db,u,ub,s,sb,c,cb,b,bb,e-,e+,mu-,mu+,tau-,tau+,nu_e,nu_eb,nu_mu,nu_mub,nu_tau,nu_taub}
j 93 {d,db,u,ub,s,sb,c,cb,b,bb,G}
Q 94 {d,db,u,ub,s,sb,c,cb,b,bb}
r 99 {d,db,u,ub,s,sb,c,cb,b,bb,d,db,u,ub,s,sb,c,cb,b,bb,G,G}

Initialized the beams Monochromatic*Monochromatic
CTEQ6_Fortran_Interface::CTEQ6_Fortran_Interface(): Init member 400.
PDF set 'cteq6.6m' loaded from 'libCTEQ6Sherpa' for beam 1 (P+).
Initialization_Handler::InitializeThePDFs() {
Setting \alpha_s according to PDF
perturbative order 1
\alpha_s(M_Z) = 0.118
}
CTEQ6_Fortran_Interface::CTEQ6_Fortran_Interface(): Init member 400.
PDF set 'cteq6.6m' loaded from 'libCTEQ6Sherpa' for beam 2 (P+).
Initialization_Handler::InitializeThePDFs() {
Setting \alpha_s according to PDF
perturbative order 1
\alpha_s(M_Z) = 0.118
}
Initialized the ISR: (SF)*(SF)
CTEQ6_Fortran_Interface::CTEQ6_Fortran_Interface(): Init member 400.
CTEQ6_Fortran_Interface::CTEQ6_Fortran_Interface(): Init member 400.
Initialize the Standard Model from / Model.dat
Initialized the Beam_Remnant_Handler.
Initialized the Shower_Handler.
Initialized the Fragmentation_Handler.
+----------------------------------+
| |
| CCC OOO M M I X X |
| C O O MM MM I X X |
| C O O M M M I X |
| C O O M M I X X |
| CCC OOO M M I X X |
| |
+==================================+
| Color dressed Matrix Elements |
| http://comix.freacafe.de |
| please cite JHEP12(2008)039 |
+----------------------------------+
Matrix_Element_Handler::BuildProcesses(): Looking for processes
Exception_Handler::SignalHandler: Signal (11) caught.
Exception_Handler::GenerateStackTrace(..): Generating stack trace
{
0x7f70cbf94372 in 'PDF::ISR_Handler::CheckConsistency(ATOOLS::Flavour*)'
0x7f70d15522e7 in 'SHERPA::Matrix_Element_Handler::BuildSingleProcessList(PHASIC::Process_Info&, SHERPA::Matrix_Element_Handler::Processblock_Info&, std::string const&, std::string const&, std::vector<std::string, std::allocator<std::string> > const&)'
0x7f70d15562e9 in 'SHERPA::Matrix_Element_Handler::BuildProcesses()'
0x7f70d15578fb in 'SHERPA::Matrix_Element_Handler::InitializeProcesses(MODEL::Model_Base*, BEAM::Beam_Spectra_Handler*, PDF::ISR_Handler*)'
0x7f70d19c05de in 'SHERPA::Initialization_Handler::InitializeTheMatrixElements()'
0x7f70d19ce385 in 'SHERPA::Initialization_Handler::InitializeTheFramework(int)'
0x7f70d1bed8b7 in 'SHERPA::Sherpa::InitializeTheRun(int, char**)'
0x402454 in 'main'
} Cannot continue.
Exception_Handler::GenerateStackTrace(..): Generating stack trace
{
0x7f70cbf94372 in 'PDF::ISR_Handler::CheckConsistency(ATOOLS::Flavour*)'
0x7f70d15522e7 in 'SHERPA::Matrix_Element_Handler::BuildSingleProcessList(PHASIC::Process_Info&, SHERPA::Matrix_Element_Handler::Processblock_Info&, std::string const&, std::string const&, std::vector<std::string, std::allocator<std::string> > const&)'
0x7f70d15562e9 in 'SHERPA::Matrix_Element_Handler::BuildProcesses()'
0x7f70d15578fb in 'SHERPA::Matrix_Element_Handler::InitializeProcesses(MODEL::Model_Base*, BEAM::Beam_Spectra_Handler*, PDF::ISR_Handler*)'
0x7f70d19c05de in 'SHERPA::Initialization_Handler::InitializeTheMatrixElements()'
0x7f70d19ce385 in 'SHERPA::Initialization_Handler::InitializeTheFramework(int)'
0x7f70d1bed8b7 in 'SHERPA::Sherpa::InitializeTheRun(int, char**)'
0x402454 in 'main'
}
Exception_Handler::Terminate(): Pre-crash status saved to '/shared/tmp/tmp.MXsBzLETgL/Status__Wed_Mar_20_19-34-19_2019'.
Exception_Handler::Exit: Exiting Sherpa with code (2)
Return_Value::PrintStatistics(): Statistics {
Generated events: 0
}
Time: 1s on Wed Mar 20 19:34:19 2019
(User: 0s, System: 0s, Children User: 0s, Children System: 0s)
------------------------------------------------------------------------
Please cite the publications listed in 'Sherpa_References.tex'.
Extract the bibtex list by running 'get_bibtex Sherpa_References.tex'
or email the file to 'slaclib2@slac.stanford.edu', subject 'generate'.
------------------------------------------------------------------------
ERROR: failed to run sherpa 1.2.2p
[1] 228 Exit 1 env $origEnv $generatorExecString
[2]- 229 Running $rivetExecString & (wd: /shared/tmp/tmp.MXsBzLETgL)
[3]+ 230 Running display_service $tmpd_dump "$beam $process $energy $params $generator $version $tune" &
ERROR: fail to run sherpa 1.2.2p or Rivet (error exit code)
maeax

Joined: 2 May 07
Posts: 980
Credit: 34,577,625
RAC: 18,167
Message 38355 - Posted: 21 Mar 2019, 6:58:32 UTC - in response to Message 38351.  
Last modified: 21 Mar 2019, 7:22:04 UTC

No result so far. A Sherpa 1.4.3 task has now been running for 25 hours, and I'm hoping it will end well.
runRivet.log is growing every minute (255 kByte), nevts=1000.
I'll let you know the result.

Death by graceful shutdown must end!
Sherpa lives matter too!!

2515 minutes runtime, Bronco!
43 hours, so far.
424.9 kByte of runRivet.log output.
Deadline 21 March 2019 20:10:08 UTC.
Edit: This is not a problem of native Theory!
bronco

Joined: 13 Apr 18
Posts: 443
Credit: 8,438,885
RAC: 0
Message 38356 - Posted: 21 Mar 2019, 10:31:53 UTC - in response to Message 38355.  

No result so far. A Sherpa 1.4.3 task has now been running for 25 hours, and I'm hoping it will end well.
runRivet.log is growing every minute (255 kByte), nevts=1000.
I'll let you know the result.

Death by graceful shutdown must end!
Sherpa lives matter too!!

2515 minutes runtime, Bronco!
43 hours, so far.
424.9 kByte of runRivet.log output.
Deadline 21 March 2019 20:10:08 UTC.
Edit: This is not a problem of native Theory!

Yes, I understand. Sherpa has earned a bad reputation, perhaps unfairly. Maybe it just needs more time than it is allowed in the VBox tasks. I will repeat your test the next time I get a Sherpa. Thank you for pointing me in the right direction :)
Go, sherpa, go!!
Crystal Pellet
Volunteer moderator
Volunteer tester

Joined: 14 Jan 10
Posts: 967
Credit: 6,361,668
RAC: 387
Message 38357 - Posted: 21 Mar 2019, 10:51:10 UTC - in response to Message 38356.  
Last modified: 21 Mar 2019, 10:57:37 UTC

No result so far. A Sherpa 1.4.3 task has now been running for 25 hours, and I'm hoping it will end well.
runRivet.log is growing every minute (255 kByte), nevts=1000.
2515 minutes runtime, Bronco!
43 hours, so far.
424.9 kByte of runRivet.log output.
Deadline 21 March 2019 20:10:08 UTC.
Edit: This is not a problem of native Theory!

With only 1000 events it's hard to say whether there is any progress. Did you see any "... events processed" lines, or is it still in the integration phase?

What is the first line of runRivet.log? Maybe it's one of the endless runners.

I have a long Sherpa runner, but it is making progress: 7800 out of 13000 events done. Sherpa.exe has now been running for 755 minutes and it survived an overnight sleep (closing the VM while saving the machine state).
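Both checks can be done from a shell on the running machine (a sketch; the exact path to runRivet.log inside the BOINC slot directory varies per setup):

```shell
# Show which job this is (generator, version, number of events are in the header).
head -n 1 runRivet.log

# Count event-progress lines; if none ever appear, the generator is
# probably still in the integration phase rather than generating events.
grep -c "events processed" runRivet.log
```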
maeax

Joined: 2 May 07
Posts: 980
Credit: 34,577,625
RAC: 18,167
Message 38360 - Posted: 21 Mar 2019, 11:37:02 UTC - in response to Message 38357.  

No result so far. A Sherpa 1.4.3 task has now been running for 25 hours, and I'm hoping it will end well.
runRivet.log is growing every minute (255 kByte), nevts=1000.
2515 minutes runtime, Bronco!
43 hours, so far.
424.9 kByte of runRivet.log output.
Deadline 21 March 2019 20:10:08 UTC.
Edit: This is not a problem of native Theory!

With only 1000 events it's hard to say whether there is any progress. Did you see any "... events processed" lines, or is it still in the integration phase?

What is the first line of runRivet.log? Maybe it's one of the endless runners.

I have a long Sherpa runner, but it is making progress: 7800 out of 13000 events done. Sherpa.exe has now been running for 755 minutes and it survived an overnight sleep (closing the VM while saving the machine state).

===> [runRivet] Tue Mar 19 11:36:56 UTC 2019 [boinc pp winclusive 7000 -,-,10 - sherpa 1.4.3 default 1000 32]
Input parameters:
mode=boinc
beam=pp
process=winclusive
energy=7000
params=-,-,10
specific=-
generator=sherpa
version=1.4.3
tune=default
nevts=1000
seed=32
.
Initialisation without an end. I have no problem letting it run until the end of the year ;) - it uses only ONE CPU.
.
Display update finished (0 histograms, 0 events).
Channel_Elements::GenerateYBackward(6.0246161159342e-07,{-8.98847e+307,0,-8.98847e+307,0,0,},{-10,10,3.89022,}): Y out of bounds !
ymin, ymax vs. y : -7.1611209444046 7.1611209444046 vs. 7.1611209444046
Setting y to upper bound ymax=7.1611209444046
Updating display...
Display update finished (0 histograms, 0 events).
Updating display...
ISR_Handler::MakeISR(..): s' out of bounds.
s'_{min}, s'_{max 1,2} vs. s': 0.0049, 49000000, 49000000 vs. 0.0049
Display update finished (0 histograms, 0 events).
Updating display...
Display update finished (0 histograms, 0 events).
Updating display...
Display update finished (0 histograms, 0 events).
Updating display...
Display update finished (0 histograms, 0 events).
Crystal Pellet
Volunteer moderator
Volunteer tester

Joined: 14 Jan 10
Posts: 967
Credit: 6,361,668
RAC: 387
Message 38363 - Posted: 21 Mar 2019, 12:13:06 UTC - in response to Message 38360.  

===> [runRivet] Tue Mar 19 11:36:56 UTC 2019 [boinc pp winclusive 7000 -,-,10 - sherpa 1.4.3 default 1000 32]
Not very hopeful:

run                                                attempts  success  failure  lost
pp winclusive 7000 -,-,10 - sherpa 1.4.3 default         17        0        0    17

Maybe you are the first one with a success ;)
maeax

Joined: 2 May 07
Posts: 980
Credit: 34,577,625
RAC: 18,167
Message 38364 - Posted: 21 Mar 2019, 12:29:04 UTC - in response to Message 38363.  

===> [runRivet] Tue Mar 19 11:36:56 UTC 2019 [boinc pp winclusive 7000 -,-,10 - sherpa 1.4.3 default 1000 32]
Not very hopeful:

run                                                attempts  success  failure  lost
pp winclusive 7000 -,-,10 - sherpa 1.4.3 default         17        0        0    17

Maybe you are the first one with a success ;)

I see this task becoming No. 18 with a failure, but...:
https://lhcathomedev.cern.ch/lhcathome-dev/forum_thread.php?id=458&postid=6178#6178
bronco

Joined: 13 Apr 18
Posts: 443
Credit: 8,438,885
RAC: 0
Message 38365 - Posted: 21 Mar 2019, 13:11:33 UTC - in response to Message 38363.  

Maybe you are the first one with a success ;)

@maeax
I think CP is sherpa shaming you ;)
Don't give in, keep the faith
Go, sherpa, go!!
Henry Nebrensky

Joined: 13 Jul 05
Posts: 129
Credit: 13,662,630
RAC: 3,855
Message 38366 - Posted: 21 Mar 2019, 20:38:20 UTC - in response to Message 38365.  
Last modified: 21 Mar 2019, 20:41:15 UTC

The machine I set up to run Theory Native keeps failing them with
14:14:26 2019-03-21: cranky-0.0.28: [INFO] Running Container 'runc'.
container_linux.go:336: starting container process caused "process_linux.go:293: applying cgroup configuration for process caused \"failed to write 0-3\\n to cpuset.cpus: open /sys/fs/cgroup/cpuset/boinc/0/cpuset.cpus: permission denied\""
The machine I didn't set up to run Theory Native (apart from updating CVMFS to keep all machines consistent) somehow managed to get hold of some and run them quite happily: so far it's completed a pythia6, a pythia8, a herwig++ ... and a Sherpa! (albeit sherpa 1.4.2, CPU time 22 hours 1 min 57 sec)

Somehow I think they're trying to tell me something. :(
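The error in the log line above can at least be narrowed down by checking the cgroup path the container runtime tries to write (a sketch only; the actual fix - creating the `boinc` cpuset cgroup with ownership the BOINC client can write to, or changing how cgroups are mounted - is distribution-specific):

```shell
# Check whether the cpuset cgroup that runc wants to configure
# exists and is writable by the current (BOINC) user.
path=/sys/fs/cgroup/cpuset/boinc
if [ -d "$path" ] && [ -w "$path" ]; then
    echo "boinc cpuset cgroup exists and is writable"
else
    echo "missing or not writable - consistent with the 'permission denied' above"
fi
```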


©2020 CERN