statistical simulation error and mesh scoring

Hello to all users

I am a beginner with Geant4

I want to calculate the dose distribution in a room containing a radioactive source.
I think it is reasonable to use a scoring mesh instead of creating multiple sensitive detectors, which is somewhat tedious and increases the computation time.

My mesh file is:

########################################
/score/create/boxMesh boxMesh_1

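# NB: boxSize arguments are half-widths (same convention as G4Box),
# so this mesh spans 300 x 300 x 4 cm, divided into 50 x 50 x 1 bins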
/score/mesh/boxSize 150. 150. 2. cm
/score/mesh/nBin 50 50 1
/score/quantity/doseDeposit AbsDose
/score/close

########################################
/score/list

/vis/disable
/run/beamOn 1000000
/vis/enable
########################################

# Draw a projection of the scored dose

/score/drawProjection boxMesh_1 AbsDose

########################################

# Dump the scorer to a file

/score/dumpQuantityToFile boxMesh_1 AbsDose AbsDose.txt

My question relates to the statistical simulation error, which decreases as the number of generated particles increases. In mesh scoring mode there is no indicator for it. How can I choose a number N of generated particles so that the statistical error is below 5% and my results can be trusted?
Is it also possible to have the statistical simulation error written to my AbsDose.txt scoring file alongside the dose?
Your opinions would be appreciated, thank you.

Hello! I have the same problem of choosing a number N of generated particles so that the statistical error is less than 5%. Did you solve it?

I don’t believe there is a way to reliably predict the statistical error in general, as it depends so heavily on the geometry and the situation. Instead, I would perform multiple runs of N particles each and compute the statistics for each mesh cell across those runs.
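
To make this concrete, here is a minimal sketch of that batch analysis, assuming you repeat the macro several times with different random seeds (e.g. via /random/setSeeds) and dump each run to its own file (AbsDose_1.txt, AbsDose_2.txt, ...; the names are just an example). It also assumes the default dump layout of /score/dumpQuantityToFile, i.e. header lines starting with '#' followed by comma-separated iX,iY,iZ,value rows, and that every cell appears in every file. Check your dump file and adjust the parsing if the columns differ.

// batch_stats.cc -- sketch: per-cell mean and standard error of the mean
// computed from several independent runs dumped with /score/dumpQuantityToFile.
// Usage: ./batch_stats AbsDose_1.txt AbsDose_2.txt ...
#include <algorithm>
#include <cmath>
#include <fstream>
#include <iostream>
#include <map>
#include <sstream>
#include <string>
#include <utility>

int main(int argc, char** argv)
{
  const int nBatches = argc - 1;
  if (nBatches < 2) { std::cerr << "need at least two dump files\n"; return 1; }

  // key = "iX,iY,iZ"; value = (sum of doses, sum of squared doses) over batches.
  // Assumes each cell is listed in every file (zero-dose cells included).
  std::map<std::string, std::pair<double, double>> cells;

  for (int f = 1; f < argc; ++f) {
    std::ifstream in(argv[f]);
    std::string line;
    while (std::getline(in, line)) {
      if (line.empty() || line[0] == '#') continue;   // skip header comments
      std::istringstream ss(line);
      std::string ix, iy, iz, val;
      std::getline(ss, ix, ','); std::getline(ss, iy, ',');
      std::getline(ss, iz, ','); std::getline(ss, val, ',');
      const double d = std::stod(val);
      auto& acc = cells[ix + "," + iy + "," + iz];
      acc.first  += d;       // sum over batches
      acc.second += d * d;   // sum of squares over batches
    }
  }

  std::cout << "iX,iY,iZ,mean,stdErr,relErr[%]\n";
  for (const auto& kv : cells) {
    const double n    = nBatches;
    const double mean = kv.second.first / n;
    // variance of the mean estimated from the spread between batches
    const double var  = (kv.second.second / n - mean * mean) / (n - 1.0);
    const double se   = std::sqrt(std::max(var, 0.0));
    const double rel  = (mean > 0.0) ? 100.0 * se / mean : 0.0;
    std::cout << kv.first << "," << mean << "," << se << "," << rel << "\n";
  }
  return 0;
}

Cells whose relative error comes out above your 5% target are the ones that still need more primaries.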

Under the assumption that a subset of cells should “see” a similar or identical dose, you could also calculate statistics over that subset of cells within a single run.
If it is somehow not feasible to make multiple runs with a large N, you could also make multiple runs with a smaller N, plot the error versus N, and then extrapolate.
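
On the extrapolation: the relative statistical error in a given cell should fall off roughly as 1/sqrt(N), so N_required ≈ N_test × (err_test / err_target)². For instance (numbers purely for illustration), if a test run with N_test = 1e5 primaries gives a 12% relative error in the cell you care about, reaching 5% would take roughly 1e5 × (12/5)² ≈ 5.8e5 primaries.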

Thanks very much for your comprehensive answer. I‘ll try it as you suggest. Thank you again!

I believe what @weller is referring to is the batch method. An alternative is the “history by history” approach to quantifying statistical uncertainties. You can find information about it in this paper: https://people.physics.carleton.ca/~drogers/pubs/papers/Wa02a.pdf

For dose, for example, it involves keeping track of the sum of the dose accumulated in each voxel, as well as the sum of the squared dose every time dose is added to the voxel. I prefer this approach because you don’t have to run multiple batches to figure out your statistical uncertainties.
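
To illustrate the bookkeeping, here is a sketch for a single voxel. It is not Geant4 API; the names VoxelTally and addHistoryDose are made up for illustration. You accumulate the sum of the per-history dose and the sum of its square, then evaluate the standard error of the mean as in the linked paper. In a real application you would total the dose per event/history, e.g. in an end-of-event hook, before adding it to these sums.

// Sketch of history-by-history bookkeeping for one voxel (illustrative only).
#include <cmath>
#include <cstdio>

struct VoxelTally
{
  double sumDose    = 0.0;  // sum of per-history dose x_i
  double sumDoseSq  = 0.0;  // sum of x_i^2
  long   nHistories = 0;    // number of histories N

  // Call once per history with the total dose that history left in the voxel.
  void addHistoryDose(double x)
  {
    sumDose   += x;
    sumDoseSq += x * x;
    ++nHistories;
  }

  double mean() const { return nHistories > 0 ? sumDose / nHistories : 0.0; }

  // Standard error of the mean, history-by-history estimator:
  // s_mean = sqrt( 1/(N-1) * ( sum(x^2)/N - (sum(x)/N)^2 ) )
  double stdError() const
  {
    if (nHistories < 2) return 0.0;
    const double n = static_cast<double>(nHistories);
    const double m = sumDose / n;
    const double v = (sumDoseSq / n - m * m) / (n - 1.0);
    return v > 0.0 ? std::sqrt(v) : 0.0;
  }

  double relError() const { return mean() > 0.0 ? stdError() / mean() : 0.0; }
};

int main()
{
  // Toy usage: three histories depositing dose in the voxel (arbitrary units).
  VoxelTally voxel;
  voxel.addHistoryDose(1.0e-9);
  voxel.addHistoryDose(1.2e-9);
  voxel.addHistoryDose(0.8e-9);
  std::printf("mean = %g, rel. error = %.1f %%\n",
              voxel.mean(), 100.0 * voxel.relError());
  return 0;
}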


Thanks for your advice. Both of your suggestions are helpful for my problem. I’ll read the paper carefully. I’m lucky to meet so many kind people and get help on this forum :grinning: :wink:

That is much smarter. Does this also work for a very large number of primaries but a very low number of counts in a cell?

That’s a good question. I’m not sure under which conditions the approach would not be appropriate.

Related to your concern: unlike in the paper, I have most commonly seen the history-by-history approach implemented where N refers to the number of histories that have deposited energy in that cell/voxel, rather than the total number of histories generated. With this definition of N, a very large number of primaries does not artificially drive the uncertainty down in cells that see only a few counts.
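
In terms of the sketch above, that variant would simply skip non-contributing histories so that N counts only histories that actually deposit dose in the voxel (again just an illustration):

// Variant: N counts only histories that deposit dose in this voxel.
void addHistoryDose(double x)
{
  if (x <= 0.0) return;  // non-contributing history: leave N unchanged
  sumDose   += x;
  sumDoseSq += x * x;
  ++nHistories;
}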