Vertex volume issue

Hello, I have encountered a problem identifying the origin volume of secondary particles. The information is recorded inside the sensitive detector class like this:

G4String VertexVolume=track->GetLogicalVolumeAtVertex()->GetName();
newHit->SetLogicalVolumeAtVertexName(VertexVolume);

Basically, sometimes the vertex volume is reported to be the world volume (which is vacuum, so this is unlikely to be correct). I searched the forum a bit, and from what I understood this is a known problem with particles created on or near the surface of a solid, for which the origin volume is misidentified.

I tried to get around the problem this way (I tested several approaches, this is just the latest):

// attempted fix: if the reported vertex volume is the world volume,
// re-locate the vertex position with the navigator and use that volume instead
G4ThreeVector vertexPos = track->GetVertexPosition();
G4Navigator* navigator =
    G4TransportationManager::GetTransportationManager()->GetNavigatorForTracking();
G4VPhysicalVolume* physVol = navigator->LocateGlobalPointAndSetup(vertexPos);
if (!physVol) return false;

G4LogicalVolume* logicalVol = physVol->GetLogicalVolume();
if (!logicalVol) return false;

G4String VertexVolume = track->GetLogicalVolumeAtVertex()->GetName();

if (VertexVolume != worldVolumeName || ParentParticleID == 0) {
  newHit->SetLogicalVolumeAtVertexName(VertexVolume);
}
else {
  VertexVolume = logicalVol->GetName();
  newHit->SetLogicalVolumeAtVertexName(VertexVolume);
}

Basically the logic is: if the particle is not a primary (for a primary the vertex is expected to be the world volume) and the reported origin volume is still the world volume, take the vertex position and find which logical volume contains it.

Now, the problem: I ran two simulations whose only difference is the two pieces of code above, and in the second case I get an output file 100 times larger, in which every single event has thousands of energy depositions totaling an unrealistic amount (20 MeV average total energy deposition in a 7 µm bismuth layer…). I do not understand how this is possible or how to fix it.

I attached the first 200 lines of the output file in both cases, with the following columns organization:

evt_id	particlename	ParticleID	ParentParticleID	KinEPreStep	Edep	KinE	VertexVolume	VertexKinE	CProcessName	pxlnum	Gtime

newoutput.txt (16.2 KB)
oldoutput.txt (17.2 KB)

As you can see, after the first few tens of lines, which are the same, the second file goes on and on with the same event. Can someone help me with this? I have run out of ideas.

_Geant4 Version:_ 10.4
_Operating System:_ Ubuntu


I’ll be honest, this seems a very odd way to do this, and possibly prone to multiple counting.
Why not use the TrackingAction instead? PreUserTrackingAction is called once and only once for each particle, before the very first step. There you can just check the logical volume after confirming it's the kind of secondary you want.

If you want this information to be accessible in a later sensitive volume, then you can create a custom track user info and pass it to all future descendants, as in the RunAndEvent01 example and as described here. In the sensitive detector you can then check whether this info has been set and handle it accordingly.

This avoids boundary issues and handles the exception cases in a self-contained way.

Edit: from what you have posted it is unclear what generates a hit. If it is this block, then you will generate a “new hit” on every single step inside the volume for any other secondary particle.

if (VertexVolume != worldVolumeName || ParentParticleID == 0) {
newHit->SetLogicalVolumeAtVertexName(VertexVolume);
}