HPGe model for efficiency calibration

Hello all,
I’m a postgraduate student and I have been working with Geant4 for only one year. I’m trying to model an HPGe detector in order to obtain an “ideal” efficiency calibration curve. I’m calculating this curve from the gamma spectra of two different radionuclides: 177Lu and 177mLu. Since 177mLu decays into 177Lu, they have two emission energies in common (113 keV and 208 keV), so the efficiency values at these energies must be the same. However, the efficiency points derived from the 177Lu spectrum always seem to be higher than the 177mLu-derived ones.
In both cases I used G4RadioactiveDecayPhysics and G4EmStandardPhysics_option4 to activate the decay and EM processes. For 177mLu I set the excitation energy of the particle source to 970.17 keV, whereas for the 177Lu source I set it to 0 keV.
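For reference, this is roughly how I define the two sources in my PrimaryGeneratorAction (just a sketch; the function name and fParticleGun are illustrative):

```cpp
#include "G4IonTable.hh"
#include "G4ParticleGun.hh"
#include "G4SystemOfUnits.hh"

// Called from GeneratePrimaries() so that the ion table is already built.
void SetLuSource(G4ParticleGun* gun, G4bool metastable)
{
    G4int Z = 71, A = 177;
    G4double excitEnergy = metastable ? 970.17*keV : 0.*keV;  // 177mLu vs 177Lu

    G4ParticleDefinition* ion =
        G4IonTable::GetIonTable()->GetIon(Z, A, excitEnergy);

    gun->SetParticleDefinition(ion);
    gun->SetParticleCharge(0.*eplus);
    gun->SetParticleEnergy(0.*keV);   // ion at rest; the radioactive decay produces the gammas
}
```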
I also tried running a simple gamma source with the corresponding emission probabilities, and the derived efficiency points are comparable with the 177mLu-derived ones. So I’m wondering whether Geant4 has some problem with the 177Lu decay. I checked the emission intensities in the RadioactiveData and LevelGammaData files and they appear correct.

I really do not know how to solve this problem. Any suggestions or ideas are really appreciated. And sorry for my bad English.

I look forward to your answers, and thank you in advance.

Claretta

Just moving this to the Physics category, as it should get a better response there.

Ok, thank you very much!

Ciao Claretta,

I am not sure that I understood the problem correctly. By “efficiency” here, do you mean the fraction of gammas entering your HPGe that deposit their entire energy in it? Indeed, I would expect that this depends only on the energy of the gamma, so it should be the same for 177mLu, 177Lu and “primary” gamma rays. Since the physics models are exactly the same in the three cases, I expect this has nothing to do with the physics settings.

I would check for spurious effects, e.g. coincidence summing in the cascade, secondary decays, or the normalization against the number of incoming gamma rays. Or maybe I missed the point here.

Ciao,
Luciano

Ciao Luciano,
thank you for the quick response.
I think that beamOn is the total number of nuclei that decay during my simulation, so I’m calculating each efficiency point as:

efficiency(E) = Net Area(E) / (beamOn × emission intensity at E)
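In code, the normalization I apply is simply this (a sketch; the names are only illustrative):

```cpp
// netArea   : background-subtracted counts in the full-energy peak at E
// nDecays   : number of primary ions, i.e. the /run/beamOn value
// intensity : gammas emitted per decay at energy E, taken from the decay data
double Efficiency(double netArea, long nDecays, double intensity)
{
    return netArea / (static_cast<double>(nDecays) * intensity);
}
```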

Am I missing something? My source is built as a uniform nuclide distribution inside a Marinelli beaker.
In the case of 177Lu I ran 3 million source particles, but if I assume they were 3.5 million the results are perfect. It seems like there is some rescaling factor related to 177Lu alone that I can’t pinpoint.

Thank you again.

Claretta

Ciao Claretta,

one possibility could be true coincidence summing in gamma cascades. I mean, if your gamma of interest always comes together with another one, there is a certain probability that your detector catches (partially or entirely) both gammas. As a consequence, the efficiency that you see for your gamma of interest in the cascade is smaller than what you would see for an “isolated” gamma of the same energy. I am not familiar with the decay schemes of Lu177 and Lu177m, but you mentioned many gammas in your post…
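As a rough illustration of the effect (assuming a simple two-gamma cascade and ignoring angular correlations, so only a sketch):

```cpp
// peakEff            : full-energy-peak efficiency for the gamma of interest alone
// totalEffCoincident : total (any energy deposit) efficiency for the coincident gamma
// Whenever the coincident gamma also interacts in the crystal, the event is
// removed from the peak, so the apparent peak efficiency is reduced.
double ApparentPeakEfficiency(double peakEff, double totalEffCoincident)
{
    return peakEff * (1.0 - totalEffCoincident);   // summing-out loss
}
```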

Ciao, hope it helps,
Luciano

Thank you very much, I will think about it.

Claretta

Dear Luciano,
I really appreciate your helpfulness and patience in trying to help me out. If you would like to look deeper into the problem, perhaps we could talk personally; I would be more than grateful.

Thank you again

Claretta