Roughly only 1/3 of the particles were simulated

I have a modified version of the example Hadr01. I’m simulating 40 MeV deuterons onto graphite.
I ran a macro with 6.24*10^9 deuterons, roughly a nanoamp by my estimation.
This macro took a while to run, so I left it alone. When I checked on the simulation, it was running fine.

Then the run finished after only 2.14*10^9 deuterons.
The data was written into a .root file and was readable. There was no indication of errors or any reason why it stopped.

I am just wondering what caused the issue, and whether it’s worth trying this lengthy simulation again.

Is there a limit to the number of particles a Geant4 simulation can run?

Your run finished with exactly 2,147,483,647 deuterons :slight_smile: That’s the maximum value which can be stored in a signed 32-bit integer (G4int), which is how the number of events is stored.

You can do multiple runs in a single job, or you can do a lot of smaller jobs spread out on a batch farm, and use TChain with the output ROOT files afterward.
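For the TChain route, merging the per-job ROOT files can be as simple as this sketch (run inside ROOT; "MyTree" and the file pattern are placeholders for your actual tree and file names):

```cpp
// ROOT macro sketch: chain all per-job outputs into one logical tree
{
  TChain chain("MyTree");            // name of the TTree inside each file
  chain.Add("output_job*.root");     // glob over the job output files
  std::cout << chain.GetEntries() << " total entries" << std::endl;
}
```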


Yes, you’re correct, it was exactly that many deuterons. Interesting, thank you for the info! I figured it could be a max value.

I will look into multiple runs in a single job and TChain.
I’m still fairly new to Geant4 so I’m constantly learning.

The example I built off of (Hadr01) is in sequential, not multithreaded, mode. Will this affect the ability to do multiple runs in a single job?

I am only attempting to simulate so many particles because larger simulations seem to produce disproportionately more high-energy secondary particles in my example.

Would it be better to rebuild my simulation on an example that isn’t in sequential mode?

Ideally I would attempt to simulate 6.24*10^12 deuterons (which is a microamp).

The examples should not be Seq-specific. Whether you get sequential or MT depends on how you installed Geant4.

That’s peculiar. With many events, of course you’ll see instances of very low cross-section processes, but the rates should be consistent with the cross-section.

You’ll need about 3000 jobs (at 2.08e9 events per job) for that. If you have a large batch farm available, I’d suggest submitting a job array. That will also let you use the job array index to set a unique random seed per job.

If you don’t set the seed uniquely, then every job will start in the same state, and you’ll get 3000 copies of each event.
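As a concrete illustration (assuming a SLURM farm; the macro template and executable names here are hypothetical), a job array can pass its array index in for use as the seed:

```shell
#!/bin/bash
#SBATCH --array=0-2999          # 3000 jobs, one unique array index each

# SLURM_ARRAY_TASK_ID is unique per job; substitute it into a macro
# template ("run.mac.in" and "mysim" are placeholder names)
sed "s/@SEED@/${SLURM_ARRAY_TASK_ID}/" run.mac.in > run_${SLURM_ARRAY_TASK_ID}.mac
./mysim run_${SLURM_ARRAY_TASK_ID}.mac
```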

I thought that it depended on how Geant4 is installed. I had no problem running multiple threads in other examples, but when I try any multithreading commands, like /run/numberOfThreads, it says:

“command is issued in sequential mode. Command is ignored.”

This is why I thought my example was in sequential mode.
Are there other commands to control the number of threads an example uses?

I also found it odd that there seem to be disproportionately more high-energy secondary neutrons. I assumed there would be a proportional ratio between primary particles and high-energy secondaries. I think the cross-section at high energy is potentially so low that those secondaries only appear after lots of primary particles.

The tricky part is that I think my simulation is good (using well-selected physics and cross-section data), but the histograms I produce seem different from the experimental data my boss has tasked me with reproducing.

Converting from just the number of neutrons at a certain energy to TTNY (thick-target neutron yields) could also introduce error into my comparison, though I’m fairly sure I set up a correct way to get from neutron counts to the needed TTNY units. All of this has led me to try simulating so many deuterons, in the hope that more primary particles will produce more of the desired higher-energy neutrons.

Urgh. If you’ve got an MT build, that suggests that Hadr01 is set up to only instantiate the sequential run manager (G4RunManager), not the MT version. In basic example B1, you can see how it’s supposed to be done. I just ran grep, and sure enough, both Hadr01 and Hadr02 are sequential-only.

If you want to run MT, you should modify the main() to select between the run managers, as is done in the basic/B* and all of the other Hadr* examples.
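The usual pattern in the MT-aware examples looks roughly like this sketch (Geant4 headers assumed; geometry, physics, and actions elided):

```cpp
#ifdef G4MULTITHREADED
#include "G4MTRunManager.hh"
#else
#include "G4RunManager.hh"
#endif

int main(int argc, char** argv)
{
#ifdef G4MULTITHREADED
  G4MTRunManager* runManager = new G4MTRunManager;
  // Thread count can also be set with /run/numberOfThreads in a macro
  runManager->SetNumberOfThreads(4);
#else
  G4RunManager* runManager = new G4RunManager;
#endif

  // ... register geometry, physics list, and action initialization here ...

  delete runManager;
  return 0;
}
```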

One issue with your cross-sections and yields could possibly be that some of the hadronic examples use very limited physics lists, in order to demonstrate some particular process. I’m not saying this is the case, but it’s worth checking. Which physics list are you using? FTFP_BERT is usually a good one for general hadronic physics.

I will look into modifying Hadr01’s main() to make it run MT! The reason I ended up using Hadr01 is its sensitive detector and Histo setup; it made it easy to get the data I wanted. I’ve tried to get sensitive detectors to work on other examples, but I can’t “get pointer” for the histo, I think.

I’ve asked the forum a few times about what physics I should use for my specific simulation. I have also done a bit of research into the physics, but by no means am I 100% sure my physics are optimal.
The physics I am using are:


I am using INCL physics based on the suggestion of some users on the forum, and the fact that it produces the most higher-energy neutrons. I could try FTFP_BERT again and see if I get different results.

Thank you for all the help so far. I’ve been able to learn something from every one of your responses!

Okay. I think your hand-built physics list should be pretty close to either FTFP_INCLXX_HP or QGSP_INCLXX_HP. If you think you’ll need to pick and choose the different processes, what you’re doing is fine. If you want to use something “out of the box”, either of those should be suitable for low-energy deuterons on target.

Okay. I will try both of those physics lists and compare the results to my hand-built one!

Could you explain or point me to an example of “multiple runs in a single job”?
I do not have access to a batch farm, but I did get my example to be in MT mode.

For multiple runs, I just mean having a series of /run/beamOn ### statements in your macro file. Each one will have a different “run number” assigned, sequentially from zero (see G4Run::GetRunID()).
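For instance, a macro along these lines (the beamOn counts are just illustrative) produces three runs with IDs 0, 1, and 2:

```
# run.mac -- several runs in one job; run numbers are assigned sequentially
/run/initialize
/run/beamOn 1000000
/run/beamOn 1000000
/run/beamOn 1000000
```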

If you’ve set up your code to create output files, you can (and should :slight_smile: ) write them to include the run number as part of the file name. Or, you can have them open the output at the start of the job, and just accumulate all those runs into a single output.

I thought I changed the main() of my example to be MT mode but then when I attempted to run a macro I got this error:

*** G4Exception : Run0123
issued by : G4MTRunManager::SetUserAction()
For multi-threaded version, define G4VUserPrimaryGeneratorAction in G4VUserActionInitialization.
***Fatal Exception *** core dump **
**** Track information not available at this moment
**** Step information not available at this moment

I do not have an ActionInitialization file to define G4VUserPrimaryGeneratorAction in.
I’m unsure how to fix this error.

I will try the multiple runs after I get this fixed. My results are currently output in .root format.

Also, for multiple runs in a single job, do I need to worry about forcing a unique seed for each of them, as I would on a batch farm?

Oh. Sigh. I don’t want you to fall down a rabbit hole. That suggests that Hadr01 (and probably Hadr02) never got migrated to the “new” Geant4 10.0 model, where all of the user-action initializations are done in a separate, new class (G4VUserActionInitialization). This was done to support multithreading, with master and worker threads.

Since your code worked fine as sequential-only, you can back out the MT changes until such time as you’re ready to fully migrate (it took me six years to get around to that in my experiment’s code!).

No. So long as you set a unique seed at the beginning of the job, each new run just continues with the random engine; it doesn’t start over.

Oh okay, thank you for all this insight; I will definitely wait before I fully migrate. Also, thankfully I only made the MT changes to a copy of my code! I still have my sequential-only working code.

I do not know exactly how to set a unique seed at the beginning of the job.
Are there seed commands?
Or does a randomized-seed function need to be added to the code?

I just tried using two /run/beamOn 1000 commands to see what would happen.
I think there already is a unique seed: the first run (Run 0) made 34 neutrons and the second run (Run 1) made 20.
The way data is output from my simulation is through .root file.
I encountered another issue though.
The .root histogram file only keeps the last run (Run 1).
I think this is because it wrote the same histogram name. Can I sum these runs, or set up a way for the output name to change with the run number?

Yes, but there is not one that says, “create a unique seed for this job.”
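For reference, the built-in seeding command, placed at the top of a macro (with arbitrary example values), looks like:

```
# Seed the random engine once, before any /run/beamOn
/random/setSeeds 12345 67890
```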

Yes. In the CDMS experiment, we wrote an “auto seed” function, with an associated UI command to turn it on, which makes use of the UUID library to get an absolutely unique seed at the start of every job. Many people use time() for this, but if you submit to a batch farm, you can have two, or a dozen, jobs all start simultaneously, which means they’d all get the same time()-based seed.

Yes, those two runs are different, with different starting seeds. But if you submit that same job again, you will get the identical output.

You should look at the code where the output file is specified, opened and closed. If that is done in “BeginOfRunAction”, then there’s a new output file for every run. You can modify that code to include the run number in the filename. Or you could modify the code so that the output file is opened at start of job, and closed at end of job.

Is there any way I could use or see the code for your “auto seed” function?
or could you direct me to some examples of similar codes?

Are there any Geant4 examples with setting seed to be related to time?
I do not have access to a batch farm, so setting the seed from time() is possible for me.

But if I could get something like your “auto seed” function that would definitely be important since I think in the near future I might need to set up a batch farm.

I’m unsure where in my code the output file is specified; the actual name seems to be set in macros with /testhadr/HistoName.
I was able to get each run labeled separately by using that command before each subsequent run. I don’t think this is ideal, but it works for now.

Also is there any way inside of the Geant4 simulation to sum the runs together?

#include "Randomize.hh"
#include <uuid/uuid.h>

// Generate unique random seeds from a UUID and write into engine

void CDMSRandomManager::generateSeeds() {
  if (verboseLevel) G4cout << "CDMSRandomManager::generateSeeds" << G4endl;

  // Create array of longs big enough to hold UUID plus zero terminator
  const size_t nlong = sizeof(uuid_t)/sizeof(long) + 1;
  long* seeds = new long[nlong];
  seeds[nlong-1] = 0;

  // Fill the front of the seed array with a freshly generated UUID
  unsigned char* theUUID = (unsigned char*)&seeds[0];
  uuid_generate(theUUID);

  if (verboseLevel>1) {                 // Report generated UUID
    char strUUID[2*sizeof(uuid_t)+5];   // 32 hex digits, 4 hyphens, terminator
    uuid_unparse(theUUID, strUUID);
    G4cout << "Seeds generated from UUID " << strUUID << G4endl;
  }

  // Hand the zero-terminated seed array to the random engine
  G4Random::setTheSeeds(seeds);

  if (verboseLevel>1) G4Random::showEngineStatus();

  delete[] seeds;               // Clean up memory before exit
}

You won’t be able to use this all by itself (there’s all of our framework around it), but you should be able to turn it into a function that your code could call.

That would be up to you. Just as with the histogram files, you would be modifying the RunAction, so that things don’t get closed at end of run, but rather at the end of the job (e.g., in the RunAction destructor).
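If summing inside Geant4 gets awkward, ROOT’s hadd utility will also sum same-named histograms across files after the fact (the file names here are placeholders):

```shell
# Merge matching histograms/trees from per-run files into one output
hadd merged.root output_run0.root output_run1.root
```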

I will attempt to make a function out of that and will look into RunAction to see if I could sum them.

I seriously have learned a lot from this forum post!

Thank you so much for all this help and info!

Hope you have a great rest of your week!

Hello! How can I make the code open the output at the beginning of the job and accumulate all those runs into a single output? Thank you!