Ciao,
the script "HtoWWTreeDumper/scripts/run.pl"
in the tag "HtoWWTreeDumper edm-01052007"
can be used to create cfg files and submit jobs to the batch queue, starting from the output of the DBS discovery page.
Short README:
1) go to: http://cmsdbs.cern.ch/discovery/
choose the dataset you want. You will get a list of files on castor.
2) copy the list into a plain text file, e.g. higgsDatasets.txt:
/store/mc/2006/12/21/mc-onsel-120_qqH160_WW/0000/1AF3299E-63A8-DB11-A649-00E08129008B.root
/store/mc/2006/12/21/mc-onsel-120_qqH160_WW/0000/40DB08AB-15A7-DB11-8BE7-0013D3DE2633.root
...
3) run the script on it:
HtoWWTreeDumper/scripts/run.pl -d datasets/qqH160.txt -c HtoWWTreeDumper/test/hToWWAnalysis.cfg -w /afs/cern.ch/user/e/emanuele/work/HtoWWAnalysis/src/ -g 2 -b qqH160 -q 1nh -s ~/scratch0/Production
this will submit jobs with 2 collections/job, writing:
a) cfg files in ~/scratch0/Production/conf
b) script files in ~/scratch0/Production/script
c) log files in ~/scratch0/Production/log
d) root files with tree in ~/scratch0/Production/output
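To see roughly what the grouping option does, here is a hedged sketch of splitting a dataset list into chunks of 2 collections per job, similar in spirit to "-g 2" above. The file names and the use of "split" are illustrative, not the script's actual internals.

```shell
# Illustrative only: mimic run.pl -g 2 by cutting the dataset list
# into one input list per batch job, 2 collections each.
printf '%s\n' file1.root file2.root file3.root file4.root file5.root > higgsDatasets.txt
GROUP=2                              # collections per job (the -g option)
split -l "$GROUP" -d higgsDatasets.txt joblist_
ls joblist_*                         # one list file per batch job
```

With 5 input files and 2 collections/job this yields 3 job lists (the last one holding a single file).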
Try "run.pl -h" for the full list of options.
Enjoy ;)
emanuele
Tuesday, May 1, 2007
1 comment:
*** Added "wait (-z)" option ***
If you use "-z 20", the script submits jobs until 20 jobs are pending, then waits in 10-minute intervals; when the number of pending jobs drops below 20, it submits another bunch.
I will use it as follows:
nohup run.pl [... usual options...] -z 20 >&! runSub.log &
then log out from the shell and go to the 1st May concert.
The script takes care of your production...
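The throttling logic can be sketched as below. This is a hedged, simplified sketch: on lxplus the real script would query the LSF batch system (something like "bjobs"), while here pending_jobs and submit_batch are hypothetical stand-ins so the sketch runs anywhere.

```shell
# Illustrative sketch of the -z throttling idea (not the script's actual code).
MAX_PENDING=20
SLEEP_SECS=600                       # the script waits in 10-minute intervals

pending_jobs() {                     # stand-in for an LSF query, e.g. counting bjobs -p output
  echo "$FAKE_PENDING"
}

submit_batch() {                     # stand-in for the actual batch submission
  echo "submitting next bunch"
}

FAKE_PENDING=5
if [ "$(pending_jobs)" -lt "$MAX_PENDING" ]; then
  submit_batch
else
  echo "would sleep $SLEEP_SECS s and re-check"
fi
```

In the real script this check sits in a loop, so the whole production drains through the queue unattended.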
*** Added -r option ***
if set, it uses funny names for your jobs (usually you don't care about the job name...)