GetGlobalTime when /gps/ion is used

In my application I’m using a radioactive source with gps/ion settings.
I see a strange behavior when I request global time like

G4double gtime = aStep->GetPreStepPoint()->GetGlobalTime();

in my sensitive detector class. The time returned is on the order of 1e+8 s. It goes back to expected values, though, if I change /gps/ion to /gps/particle. I confirmed the same behavior with the Hadr07 example: when I run it with Na22.mac I get a global time around 1e+8 s, and once again it returns to normal values when I use proton.mac (or an electron as the source particle). Is there a trick to get the global time when using an ion source?

The version I’m using is geant4.10.05.p01.


When you use a radioisotope as your primary, Geant4 automatically samples ("throws") the decay time of the ion from its half-life. If you run a number of events and make a histogram of the decay time of your Na-22, you should see an exponential distribution with the appropriate lifetime.

Thank you. To demonstrate the issue I've added the output from my UserSteppingAction (just after the decay of Na22) below. I see that the global time of Na22 is 0, while the daughters of the Na22 decay and the subsequent particles (not shown in the table) inside the detector volume all have a global time of ~1e+17 ns. I guess I'm missing your point. Could you please elaborate?

Particle Etot(keV) Ekin(keV) GlobalTime(ns)
Na22 2.04877e+07 0 0
e+ 720.352 209.353 2.92372e+17
nu_e 337.262 337.262 2.92372e+17
Ne22[1274.577] 2.0481e+07 0.0065299 2.92372e+17
gamma 1274.54 1274.54 2.92372e+17
Ne22 2.04797e+07 0.0396598 2.92372e+17
Ne22 2.04797e+07 0 2.92372e+17

The Na22 primary particle was placed (by your PrimaryGeneratorAction) at global time 0. It was placed at rest, so it sat there and did nothing until it decayed. That decay occurred at global time 2.92372e+17 ns (9.26 years), which is not inconsistent with Na-22's half-life of 2.6 years. The daughters of that decay are assigned the global time of the decay.

Hi Calory,

I’m jumping in here because I see the same thing as you - although I assume you have resolved your issue by now! I am looking at the DOI PET scanner example, which has a function in the analysis class to calculate the time between successive decays, but the GlobalTime can be something like 10^15 nanoseconds.

I expect the way to get around this is to record the difference between the global time at the point of emission and at the point of detection.

Hi, Michael
Is there any method to get rid of this decay time, so that the decay occurs at global time 0 ns?

No, there’s nothing general for this, because you could have radioactive decays happen during an event, not just at the beginning. You could do something yourself with a stepping action or a wrapper process to reset the secondaries to have time 0, for the case where the decaying track is a primary (parentID = 0).

You may use the biasing options of the radioactiveDecay process to force a nucleus to decay within a given time window defined by a data file.
But each event will then carry a weight that you must take into account.
Attached is a set of files for example rdecay01, adapted from the timeWindowBiasing macro of the same example.
I have fixed the maximum lifetime at 1 picosecond. You can choose less, but not strictly zero.

wenhe.mac.txt (364 Bytes)
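For readers without the attachment: the time-window biasing in rdecay01 is driven by /grdm/ UI commands. The command names below are from memory and the file names are placeholders, so check them against the rdecay01 macros shipped with your Geant4 version:

```
# Sketch of a biased radioactive-decay macro (verify against rdecay01)
/grdm/analogueMC 0                   # switch off analogue (unbiased) decay
/grdm/sourceTimeProfile source.dat   # time profile of nucleus production
/grdm/decayBiasProfile bias.dat      # time window in which decays are forced
```

With biasing on, each event carries a weight (accessible from the primary vertex/track) that must be applied when filling histograms.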


Thanks, Michael
That’s exactly my case: the decaying track is a primary. I want to simulate a Positron Emission Tomography system, so the primary is a positron-emitting radionuclide like Na-22, F-18, etc.
But when I use Na-22 as the primary, the time of flight (the time of detection minus the time of emission) comes out as zero or some other nonsense number, because the times of detection and emission are both much larger than the time of flight itself.
Maybe I could reset the global time of the secondaries?

Thanks, Michel,

What is the weight, and how do I take it into account?

aTrack->GetLocalTime() is the time since the beginning of that track.
At the end of the track, it is exactly the time of flight that you are looking for.


Oh, that is a good way! Thanks a lot, Michel :grin: