Low energy proton (~15 keV) simulations are heavily dependent on step length

Currently I am running simulations in which I shoot low energy protons (~15 keV) at a silicon target, while recording the outcome (depth in detector, energy deposit) in different histograms. While playing around with different energies, physics lists and step sizes I made a rather strange observation:

The depth histograms are heavily dependent on the step size. When the step size is above 100 nm, the shape of the curve takes a weird double-peak structure. When I change it to 10 nm, the curve takes a "Gaussian" shape. Even lower step sizes result in narrower and narrower peaks until almost only one bin is filled.

I have already looked over my code several times but could not find anything that could cause this behaviour.
Has anyone encountered the same problem at some point, or does anyone have an idea why this might occur? Or is this even normal?

Energy is collected at the end of each step.
To produce a depth-dose distribution, the step size must be coherent with the binning of the distribution.
Here is an example for TestEm11: with 100 bins over 350 nm, the step size must be of the order of 3.5 nm.
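The rule of thumb can be checked with a quick calculation (a sketch; the 350 nm range and 100 bins are the values from this thread):

```python
# Depth histogram: 100 bins spanning 350 nm of silicon.
range_nm = 350.0
n_bins = 100

# Bin width of the depth-dose histogram.
bin_width_nm = range_nm / n_bins  # 3.5 nm per bin

# Energy is scored once per step, at the end of the step, so the
# maximum step length should not exceed one bin width; otherwise
# several bins' worth of dose gets dumped into a single bin,
# producing artifacts like the double-peak structure described above.
max_step_nm = bin_width_nm

print(bin_width_nm)  # 3.5
```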
sebomba.mac.txt (395 Bytes)
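The attached macro is not reproduced here, but a TestEm11-style macro enforcing this coherence might look roughly like the following sketch. The command paths follow the TestEm11 messengers (`/testem/det/setAbsor`, `/testem/stepMax`); the material name and all numeric values are assumptions for illustration, so check them against the example's README:

```
# TestEm11-style macro sketch (assumed values, not the attached file)
/testem/det/setNbOfAbsor 1
/testem/det/setAbsor 1 Silicon 350 nm
/run/initialize

/gun/particle proton
/gun/energy 15 keV

# limit the maximum step to the 3.5 nm histogram bin width
/testem/stepMax 3.5 nm

/run/beamOn 100000
```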