How can I simulate 10^12 particles simultaneously?

Good morning!
I want to simulate 10^12 particles in a single run. However, every time I execute
/run/beamOn 1000000000000, it fails and prints the error below.


Could anyone suggest a way to meet this requirement? Thank you very much.

You can’t, not in a single run. The number of events is stored in a signed integer (G4int), with a maximum value of 2,147,483,647. To generate that many events, you should do five runs of maximum size in your job:

/run/beamOn 2000000000
/run/beamOn 2000000000
/run/beamOn 2000000000
/run/beamOn 2000000000
/run/beamOn 2000000000
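Rather than typing the command repeatedly, you can get the same effect with Geant4's built-in `/control/loop` command. A sketch, where `run_chunk.mac` is a hypothetical one-line macro containing only the `/run/beamOn 2000000000` command:

```
# Repeat run_chunk.mac five times, with loop counter i going 1..5
/control/loop run_chunk.mac i 1 5 1
```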

Set up your analysis code so that it either accumulates across all the runs of a job, or includes the run number (which increments with each beamOn) in the output filename.
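As a minimal sketch of the filename approach (the function name and filename pattern here are my own invention, not a Geant4 API): build a distinct output filename from the run ID, which increments with each beamOn, so successive runs in one job don't overwrite each other. In a real application you would call something like this from your run action when opening the output file.

```cpp
// Sketch: one output file per run, tagged with the run number.
// In Geant4 the run number is available as G4Run::GetRunID().
#include <string>

std::string OutputFileName(int runId) {
    // First /run/beamOn of the job has run ID 0, the next 1, and so on.
    return "events_run" + std::to_string(runId) + ".root";
}
```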

If you want to run your jobs in parallel, you will need to write your own code to initialize the random number seed uniquely for each job (in my experiment, we wrote a utility class to generate seeds from UUIDs; you could use the timestamp, job ID, or something similar).
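One simple way to do this (a sketch, not the UUID-based utility class mentioned above) is to hash the wall-clock time together with a job identifier, so that jobs launched at nearly the same moment on a farm still get distinct seeds. The resulting value would then be handed to the engine, e.g. via `G4Random::setTheSeed()`.

```cpp
// Sketch: derive a per-job random seed by hashing timestamp + job ID.
// MakeJobSeed is a hypothetical helper, not part of Geant4.
#include <chrono>
#include <cstdint>
#include <functional>
#include <string>

std::uint64_t MakeJobSeed(int jobId) {
    // Nanosecond-resolution timestamp since the epoch.
    auto now = std::chrono::high_resolution_clock::now()
                   .time_since_epoch().count();
    // Combine timestamp and job ID into one string, then hash it.
    std::string key = std::to_string(now) + "-" + std::to_string(jobId);
    return std::hash<std::string>{}(key);
}
```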


Wouldn’t 1e12 particles be 500 runs with 2e9 particles each?
I would be interested in your solution for automating the merging of primitive scorers, if you use them 🙂

@weller Yup, it would be 500 runs. But if the user is already prepared to wait for 10^12 events to finish, waiting for 500 runs to take the same amount of time isn’t a problem. Alternatively, they can launch the jobs in parallel on a compute farm and do the output merging afterward.

We don’t use primitive scorers in my experiment, unfortunately. We’re using SDs to collect hits, writing those out to ROOT N-tuples, and doing the analysis offline. If the scorers produce simple histogram-like output, then merging them should be as easy as summing the histograms.
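To spell out the "summing the histograms" step with a sketch (assuming every run was produced with identical binning, and ignoring the ROOT/Geant4 file I/O around it), merging is just element-wise addition of bin contents:

```cpp
// Sketch: merge histogram-style scorer output from several runs.
// Each inner vector is one run's bin contents; all runs must share binning.
#include <cstddef>
#include <vector>

std::vector<double> MergeHistograms(const std::vector<std::vector<double>>& runs) {
    std::vector<double> total(runs.front().size(), 0.0);
    for (const auto& h : runs)
        for (std::size_t i = 0; i < h.size(); ++i)
            total[i] += h[i];  // add this run's contents bin by bin
    return total;
}
```

With real ROOT histograms the same idea is what `TH1::Add` (or the `hadd` utility) does for you.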