I have been looking at the spectrum of a simulated AmBe source with a simulated detector (BaF2 in this case).
What I noticed is that the spectrum, made from the total energy deposited per event in the crystal geometry, shifts based on time gates set up in the SteppingAction. If I only capture data from the first 1 ns, 10 ns, 100 ns, or 1000 ns, or use no time window at all, the known peaks (511 keV, 4.44 MeV) shift accordingly.
From what I can see, as the time gate gets smaller, the peaks approach their correct values.
I’m not sure if this is useful to help solve this question, but I also looked at the spectra of only the optical photon energy deposited and the peaks don’t seem to shift.
Any idea why this is happening?
This is the total energy deposited. As you can see, as the time gate gets smaller, the peaks move left (to where they should be).
How are you recording the energy? Remember that a competing energy loss mechanism is thermal/lattice vibrations which would involve collisions on much longer time scales than, for example, scintillation.
I’ve been investigating more and it looks like it’s G4OpticalPhysics() doing this.
I’m using QGSP_BIC_HP. The x-axis is energy in MeV.
Both simulations are exactly the same, except one has optical physics enabled while the other has it disabled.
The following is optical physics enabled vs disabled. There is a clear difference with the optical physics enabled. A shift to higher energies. Does anyone know why this is the case, and what implications it’d have for simulating a detector response?
I’m recording the energy deposit similarly to one of the basic examples: the eDep from each step is accumulated in the EventAction, then sent to the RunAction to be added to a ROOT file via the analysisManager.
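Roughly, the accumulation pattern looks like this (a simplified sketch in the spirit of the basic examples; the member names and ntuple column index are illustrative, not my exact code):

// EventAction: accumulate per-event energy deposit
void EventAction::AddEdep(G4double edep) { fEdep += edep; }

// At the end of each event, write the accumulated eDep out via the analysis manager
void EventAction::EndOfEventAction(const G4Event*)
{
  auto analysisManager = G4AnalysisManager::Instance();
  analysisManager->FillNtupleDColumn(0, fEdep / MeV);  // column 0 is illustrative
  analysisManager->AddNtupleRow();
  fEdep = 0.;
}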
It’s quite interesting (or I’m missing something basic) that the spectrum shifts to higher energy when G4OpticalPhysics is enabled, and back to where you’d expect it to be when it’s disabled.
The other effect is that larger Time Gates (just an if statement in Step using GlobalTime to select the energy deposits that’d be sent to Event) seem to shift the spectrum more than shorter Time Gates.
Below is how I collect the data. Once I know why OpticalPhysics is behaving this way, I’ll be looking at something more realistic.
Below is broadly how I am collecting the eDep data.
In SteppingAction
G4double edepStep   = step->GetTotalEnergyDeposit();
G4double globalTime = step->GetPostStepPoint()->GetGlobalTime();
auto touchable      = step->GetPreStepPoint()->GetTouchable();
if (touchable->GetVolume()->GetName() == "pv_NaI" && globalTime < Gate){
  fEventAction->AddEdep(edepStep);
}
OpticalPhotons do carry energy, and when they are absorbed, that energy will be recorded in the G4Step as GetTotalEnergyDeposit(). In your SteppingAction, maybe add an if statement that if the particle is G4OpticalPhoton, print the track/step info and the edepStep value you extracted?
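For example, something along these lines (just a sketch; it assumes the G4Step* in your SteppingAction is called step, and that G4OpticalPhoton.hh, G4VProcess.hh and G4SystemOfUnits.hh are included):

// Debug print: dump the process, deposited energy, and time for optical-photon steps
if (step->GetTrack()->GetDefinition() == G4OpticalPhoton::OpticalPhotonDefinition()) {
  const G4VProcess* proc = step->GetPostStepPoint()->GetProcessDefinedStep();
  G4cout << "optical photon: " << (proc ? proc->GetProcessName() : "unknown")
         << "  edep = " << step->GetTotalEnergyDeposit() / eV << " eV"
         << "  t = "    << step->GetPostStepPoint()->GetGlobalTime() / ns << " ns"
         << G4endl;
}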
On a side note, if you only want to collect energy from a specific volume, you might consider replacing your SteppingAction with a SensitiveDetector attached to the volume of interest. That would eliminate part of the if-statement you quoted above.
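A minimal sketch of what that could look like (the class name CrystalSD and the logical-volume name are placeholders, not from your code):

// DetectorConstruction::ConstructSDandField() — attach a sensitive detector to the crystal
void DetectorConstruction::ConstructSDandField()
{
  auto* crystalSD = new CrystalSD("CrystalSD");            // your own G4VSensitiveDetector subclass
  G4SDManager::GetSDMpointer()->AddNewDetector(crystalSD);
  SetSensitiveDetector("lv_NaI", crystalSD);               // logical-volume name is a placeholder
}

The energy collection then moves into CrystalSD::ProcessHits(), which is only called for steps inside that volume, so the volume-name check disappears.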
I implemented some of the changes you suggested. I haven’t added a sensitive detector. This is still in SteppingAction.
Most of the optical-photon processes were transportation, with the occasional absorption. The kinetic energy was always that of a ~220 nm photon (~5 eV). The optical photon eDep is shown in the spectrum below.
I collected the total eDep in the scintillation crystal volume, the global time of each step, the eDep of the optical photons, and the time of each optical-photon step.
2.3 ns time gate. QGSP_BIC_HP + G4OpticalPhysics. eDep collected the same way as explained above.
Total eDep (same as in previous posts: the smaller the time gate, the less the spectrum shifts; x-axis in MeV)
I’m not exactly sure what I am seeing with the optical photon eDep. Could you explain this?
Does this help solve why the spectrum is shifting when G4OpticalPhysics is enabled?
From your bottom plot, you can see clearly that optical photons are contributing to your “total energy collection.” Try making your top plot by excluding optical photons from your energy collection; that is, in this bit of your code,
try putting an “if not-optical-photon” before the AddEdep() call.
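Concretely, something like this sketch (reusing the variables from the snippet you posted):

if (step->GetTrack()->GetDefinition() != G4OpticalPhoton::OpticalPhotonDefinition()
    && touchable->GetVolume()->GetName() == "pv_NaI" && globalTime < Gate) {
  fEventAction->AddEdep(edepStep);   // optical photons excluded from the total
}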
Something I’m not entirely certain of (@dsawkey, do you know?): when G4 generates optical photons, does it treat them like ordinary secondaries, or does the parent track/step report back the same EnergyDeposit it would have reported without optical photons? We do this in the G4CMP package, so that energy deposits are still reported, but also converted to electron-hole pairs or phonons. It means the energy is double counted, but allows users to do the same kind of analysis.
Hi.
I subtracted the optical photons’ eDep and the spectrum looks correct. The 511 keV and 4.44 MeV peaks are where they should be. Thanks for helping me here. Very much appreciated.
My last question isn’t an issue, but it is confusing me. Is the optical photon spectrum more in line with what you’d see from a real detector?
I know there are a lot of uncertainties that aren’t accounted for here, e.g. electronic and signal noise, energy-resolution blurring, etc. I don’t have access to the real thing, so I’ve got to ask.
Is the optical photon spectrum more in line with what you’d see from a real detector?
Ultimately the resolution of your detector is limited by Poisson statistics related to generated/detected information carriers. With a scintillator and a SIPM readout, for example, it would be dominated by the number of optical photons that make it to your detector. PMTs and SIPMs are really just photon counters. That is the primary driver of energy resolution “blurring” and what you can simulate instead of assume by actually tracking the optical photons. And it would be there even with ideal electronics/signal processing.
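As a rough back-of-the-envelope (pure Poisson statistics, ignoring everything else): for a Gaussian response the relative resolution is about

FWHM / E ≈ 2.355 / sqrt(N)

where N is the number of detected optical photons, so e.g. N ≈ 10^4 detected photons would already limit you to roughly 2.4% FWHM at that energy, before any electronics or light-collection effects.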
Thanks for this explanation. That makes a lot of sense.
A question in the same vein: if I were in a lab using a NaI detector (just to name a detector for this example) to detect the radiation from an AmBe source, would the spectrum output be more similar to the TotalEdep spectrum or the optical photon spectrum?
Also, what I find strange is that the TotalEdep and the optical photon spectrum look similar. The peaks are just at different energies. Is this what would be expected?
In the optical photon eDep spectra below, you can see what looks like the TotalEdep spectrum but shifted to lower energy. Is the appropriate thing to do here to scale the x-axis so the peaks are in the correct locations? If not, what is happening?
**TotalEdep corrected (TotalEdep minus OpticalPhotonEdep).**
OpticalPhotonEdep only. As the time window gets larger, the spectrum shifts to higher energy. (This is not the same concern I had earlier, just a new effect I’m confused about.) Ignore the yellow 1000 ns TimeWindow spectrum; it’s not being added to another spectrum, it’s just hidden under the red line.
Optical photon spectrum since you have a light sensor. The “lost” energy comes from thermal excitations that you will not be able to record.
I suspect somewhere there is double counting. You should be recording optical photons with an optical physics list not the total energy deposited. NaI has both a rise time and a decay time that are in the hundreds of nanoseconds range even before accounting for photon transport in larger crystals. There should be a dependence on the “time window” since you would likely integrate more near the 1000 ns time range. It just seems like the x-axis is getting mislabeled. A good check would be the FWHM of some feature in the plot as a fraction of the peak position. That should improve with longer integration times/“windows”.
It makes sense that the optical photon eDep is the more realistic output. Hearing someone else say it gives me confidence.
I suspect somewhere there is double counting
If you are talking about TotalEdep, I was double counting; I had to subtract the OpticalPhotonEdep from TotalEdep. If this isn’t what you mean, could you explain?
You should be recording optical photons with an optical physics list not the total energy deposited.
Do you mean something like this? This is how I am capturing the optical photon data at the moment:
if (particle == G4OpticalPhoton::OpticalPhotonDefinition()) {
Or is there something I’m missing?
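To be concrete, the relevant part of my SteppingAction currently looks roughly like this (simplified sketch; AddOpticalEdep is just my own accumulator name, analogous to AddEdep):

auto* particle    = step->GetTrack()->GetDefinition();
G4double edepStep = step->GetTotalEnergyDeposit();
fEventAction->AddEdep(edepStep);                 // goes into TotalEdep (everything)
if (particle == G4OpticalPhoton::OpticalPhotonDefinition()) {
  fEventAction->AddOpticalEdep(edepStep);        // also recorded separately, subtracted offline
}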
NaI has both a rise time and a decay time that are in the hundreds of nanoseconds range even before accounting for photon transport in larger crystals
This is very helpful. For the spectra I have shown, I used a BaF2 scintillator because it’s good for fast timing, but I do want to test NaI as well.
It just seems like the x-axis is getting mislabeled. A good check would be the FWHM of some feature in the plot as a fraction of the peak position. That should improve with longer integration times/“windows”.
Just a quick calculation (FWHM divided by the mean energy of the peak, both in MeV); I fit a Gaussian to each spectrum to do this. The peak is what I presume to be the 4.44 MeV carbon line, i.e. the rightmost large peak in each spectrum. I tried to fit consistently between peaks.
1000ns: 0.0305
100ns: 0.0362
10ns: 0.0354
2.3ns: 0.0511 ← (This is around the crossover between the slow and fast components)
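For reference, the numbers above came from something like this (ROOT sketch; the histogram name and fit range are illustrative):

// Fit a Gaussian around the 4.44 MeV peak and take FWHM / mean
TF1* g = new TF1("g", "gaus", 4.2, 4.7);   // fit range chosen by eye around the peak
hTotalEdep->Fit(g, "RQ");                  // "R" = restrict to range, "Q" = quiet
double mean  = g->GetParameter(1);
double sigma = g->GetParameter(2);
double ratio = 2.355 * sigma / mean;       // FWHM = 2.355 * sigma for a Gaussian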
That is the double counting I was referring to earlier. You have the proper treatment now.
Yes, it is a sanity check. A scintillator will not emit all the light at once because of various physical processes with time delays that are also stochastic in nature. If you have to have a longer integration time, that also means there is a higher likelihood of pileup from the next pulse. That the FWHM improves with integration time suggests that you are collecting more of the late arriving photons which improves the counting statistics as expected.
Edit: Confusing that 10 and 100 ns seem swapped. That might be a fit/binning issue.
What this also means is that you have to recalibrate the x-axis depending on the integration time, which is also true when doing a real world experiment with scintillators.
Depending on what it is doped with, BaF2 has a secondary decay constant on the order of 600 ns, which is quite long for a scintillator. Count rate versus energy resolution is more of a problem related to scintillator dopants, pulse-processing electronics, and the desired count rate/energy resolution, and not something easy to simulate in Geant.
Thank you for all the assistance. I’ve been investigating what you said here and it makes sense, but I’m lost on how to approach this.
**Does this approach make sense to you?**
Create a scale factor for the optical energy deposited which is one over the fraction of the gated optical energy deposited to the total optical energy deposited, i.e.
**scaleFactor = 1 / (integral of OpticalEdepGated / integral of OpticalEdepTotal)**.
However, the issue I see with this is that the total optical energy deposited (i.e. with no time gates) also needs calibration. Also, the quotient should be normalised to the true spectrum. So I think this is the wrong way.
**Instead, I should do something like this:**
**scaleFactor = 1 / (integral of OpticalEdepGated / integral of eDepTotal)**?
Where eDepTotal is the eDep collected from every particle (not just optical photons) in the GEANT4 scintillation crystal volume. Below are spectra: the left spectrum is before re-scaling of the axis, and the right spectrum is after rescaling using the relation above, which looks correct and makes sense given it is just based on the fraction above.
Here is the Total Optical Energy Deposited with no time gates. (Left) 10 ns time gate; the spectrum squashes left, as in my previous posts. (Right) x-axis re-scaled based on the relation above.
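In code terms the rescaling was roughly this (names are illustrative; the sums are accumulated over the whole run):

// scaleFactor = 1 / (OpticalEdepGated / eDepTotal), per the relation above
double scaleFactor = sumEdepTotal / sumOpticalEdepGated;
// Stretch the x-axis of the gated optical spectrum (ROOT histogram) by that factor
TAxis* ax = hOpticalEdepGated->GetXaxis();
ax->SetLimits(ax->GetXmin() * scaleFactor, ax->GetXmax() * scaleFactor);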
My question now: this works in simulation because we know the total energy deposited in the scintillation crystal. I understand we can get the total optical photon eDep in a lab setting, but how is TotalEdep determined in a real experimental setting?
My intuition is that it isn’t, but rather that identified peaks are used to scale the axis to where it should be, but I’m not sure at all.
Geant will happily tell you the ground-truth energy deposited. If the incident particle and all secondaries are stopped inside the volume, this will always correspond to the incident energy, by energy conservation. The issue is that energy in scintillators has two major competing processes for where that energy “goes”. Scintillation converts part of that energy to light, which you then count with a photosensor; this is what you have. However, part of the energy goes into phonons, i.e. heating the crystal. Your photosensor will NOT see this energy. So you “lose” energy in detection even if you fully stop the particle and all of its secondaries, and that is before effects like light-transport losses. Optical physics simulations are trying to mimic this.
In the real world, you would just measure this with check sources and recalibrate your energy axis based on the integration time. Your “approaches” are trying to do this de facto, but really the energy resolution is what matters, and any detector person would be interested in that plot. The ideal integration time is infinity, but that would confuse two separate incident gamma rays as one, i.e. it competes with the desired count rate. That will be informed by the radiation source you use and what you ultimately want to measure.