Choice of PSD for whiten

Dear colleagues,

I would like to ask which PSD I should use for the whitening process. Before stating my question, let me explain my whitening procedure:

  1. apply a Tukey window to the signal so that the boundaries meet,
  2. transform the signal into the frequency domain with a fast Fourier transform,
  3. divide the frequency series of the signal by the PSD, and
  4. transform the signal back to the time domain.
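The steps above can be sketched roughly as follows. One caveat worth flagging: whitening is conventionally done by dividing by the *amplitude* spectral density, i.e. sqrt(PSD), rather than the PSD itself, so that is what this sketch does; the normalisation of the output amplitude is a matter of convention and is omitted here. The function names and parameters are my own illustration, not from any particular pipeline:

```python
import numpy as np
from scipy.signal.windows import tukey
from scipy.interpolate import interp1d

def whiten(strain, psd_freqs, psd, fs, alpha=0.25):
    """Whiten a time series against a one-sided PSD.

    Follows the four steps listed above: Tukey window, FFT,
    divide by the spectrum, inverse FFT.  Note the division is
    by sqrt(PSD) (the ASD), which is the usual convention.
    """
    n = len(strain)
    window = tukey(n, alpha)                    # 1. taper the segment edges
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spectrum = np.fft.rfft(strain * window)     # 2. to the frequency domain
    # 3. interpolate the PSD onto the FFT frequency grid, divide by sqrt(PSD);
    #    frequencies outside the PSD's range are suppressed entirely
    asd = np.sqrt(interp1d(psd_freqs, psd,
                           bounds_error=False, fill_value=np.inf)(freqs))
    spectrum /= asd
    return np.fft.irfft(spectrum, n)            # 4. back to the time domain
```

As a sanity check, whitening against a flat PSD of 1 should return the (windowed) input unchanged, since the ASD is then 1 at every frequency.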

This whitening method is simple, but it works. However, I am now hesitating over which PSD to use. Please see the following descriptions of my concerns:

  1. Averaged PSD obtained from nearby noise:
    Since the detector PSD is non-stationary, nearby noise exhibits a similar PSD. Therefore it is better to whiten the segment with a nearby PSD, before the PSD has evolved too much. This is the approach used in the GWOSC workshop, but it looks weird in the low-frequency part.

  2. Averaged PSD obtained from a larger range of data:
    Since the whitening process aims at flattening the colored frequency profile, it is better to find a PSD that represents the overall trend of the noise. Nearby PSDs may still contain local fluctuations and may not be enough to show the global trend.

Currently, I am using the averaged PSD of 40000 2-second segments from LIGO Hanford (the bottom figure). The curve is rather smooth and I think it is sufficient for whitening. Nevertheless, I still wonder which PSD is appropriate.
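For what it's worth, averaging many short segments is essentially what `scipy.signal.welch` does; a minimal sketch, using synthetic white noise as a stand-in for real strain data (the sample rate and duration here are arbitrary assumptions):

```python
import numpy as np
from scipy.signal import welch

fs = 4096                                  # assumed sample rate (Hz)
rng = np.random.default_rng(1)
strain = rng.standard_normal(fs * 64)      # stand-in for 64 s of strain data

# Estimate the PSD by averaging overlapping 2-second segments.
# average="median" is more robust to loud transients (glitches)
# than the default mean, which can matter over long stretches
# of detector data.
freqs, psd = welch(strain, fs=fs, nperseg=2 * fs,
                   window="hann", average="median")
```

For unit-variance white noise the one-sided PSD should come out close to 2/fs at every frequency, which is a quick way to check the estimator's normalisation.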

That’s a great question! There is no one “right” answer for how much data to use to compute a PSD. As you say, there is a trade-off between getting a smoother PSD and using data that is near the source.

I think many transient analyses use a few hundred seconds up to a few thousand seconds of data to construct PSDs. This is an attempt to find a balance between these two concerns.

Good luck!


Thank you very much! I may try different durations for the PSD and see which gives a better result.