Lecture Questions - Day 2 (2022)

Questions about the Day 2 lectures from the GW Open Data Workshop (2022) can be posted here.


Why am I not able to join the YouTube live stream? The livestream seems to be stuck at "waiting to start". Has today's lecture started? Please let me know.

@frowdow The livestream is running now. Please try again.

Have there been any BBH events that “stand out” for their relatively “large” deviations from GR?

I didn't get what the squishiness of a neutron star actually implies, so could someone kindly repeat its meaning? Also, how does the probability density vs. fractional deviation plot verify general relativity?

So far, we have not found any evidence of GR deviations in the BBH events. Instead, we have set tight constraints on several theories of gravity. The most recent paper about all the GR tests that we have done can be found here. Note, however, that there are also other studies: for instance, several authors argue for the presence of GW echoes following the ringdown (a quick search will turn these up), but those results are not widely confirmed.

From the astrophysical point of view, on the other hand, there are several things we don't really understand. For instance, GW190814 is a merger of a roughly 23 solar mass BH with an object of 2.6 solar masses. We don't expect neutron stars to be that massive or BHs to be that light, so we don't really know the nature of the smaller object (maybe we need better models of neutron star or BH formation).


Hi Saksham,

I think by "squishiness" we refer to the neutron star (NS) tidal deformability. When the two neutron stars are very close, they start to feel each other's gravitational field and they get "squished".
Basically, the tidal deformability tells you how easy it is to modify the shape of a neutron star.
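
For reference, the standard defining relation (not specific to this workshop): an external tidal field E_ij induces a quadrupole moment Q_ij = -λ E_ij in the star, and the dimensionless tidal deformability usually quoted in papers is Λ = λ / M^5 (in G = c = 1 units). A very "squishy" star has a large λ.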

Regarding your second question: when we constrain GR deviations, we obtain a probability density function for a given GR-deviation parameter. Usually a GR deviation of 0 means that you are compatible with GR. If your probability density function includes the value 0 within a given confidence interval (usually 90%), then we know that we are compatible with GR.
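
As a toy illustration (the samples below are synthetic, not from any real analysis), the check amounts to asking whether zero lies inside the 90% credible interval of the posterior on the deviation parameter:

import numpy as np

rng = np.random.default_rng(1)
# Hypothetical posterior samples of a fractional GR-deviation parameter
samples = rng.normal(loc=0.02, scale=0.05, size=10_000)

# Symmetric 90% credible interval
lo, hi = np.percentile(samples, [5, 95])
print(f"90% credible interval: [{lo:.3f}, {hi:.3f}]")
print("Consistent with GR" if lo <= 0 <= hi else "In tension with GR")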


What did Rebecca mean by a 'whitened' template and 'whitened' data?

@fazlu “Whitening” is a process where the noise spectrum of a signal is reshaped to be flat across frequency. You can learn more about this in the Signal Processing Tutorial.
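
For a quick hands-on look, here is a minimal sketch using GWpy (the library from the tutorials); the detector and GPS times are illustrative choices around GW150914:

from gwpy.timeseries import TimeSeries

# Fetch 32 s of open H1 data around GW150914
data = TimeSeries.fetch_open_data('H1', 1126259446, 1126259478)

# Whiten: divide by an estimate of the amplitude spectral density,
# so the noise spectrum becomes (approximately) flat in frequency
white = data.whiten()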


Rebecca Ewing’s presentation was amazing! It was so clear and thorough! Great stuff!


GstLAL slices the templates into something like 7-8 time slices and then downsamples the signal in the early stages of the inspiral. But won't having this many template slices increase the computational time?

Is there any way for someone or a group to submit new templates with better parameter estimates to the LIGO collaboration's template banks?

@fazlu That's a good question. I'm not an expert, but my understanding is that all of the template slicing is designed to speed up the computation. The speed-up comes from two things: the lower-frequency part of the templates can be computed at a lower sampling rate (so, less processing), and the SVD provides a more computationally efficient coverage of the parameter space.
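
As a toy sketch of the SVD idea (this is not the GstLAL implementation, just an illustration with a synthetic bank): many similar templates can be approximated by a few orthogonal basis waveforms, so the matched filter only needs to be applied to the basis.

import numpy as np

n_templates, n_samples = 100, 4096
t = np.linspace(0, 1, n_samples)

# Synthetic bank of similar chirp-like templates with a small parameter spread
bank = np.array([np.sin(2 * np.pi * (50 + 0.1 * i) * t**2)
                 for i in range(n_templates)])

# SVD of the bank; count how many basis vectors carry 99.99% of the power
U, s, Vt = np.linalg.svd(bank, full_matrices=False)
n_basis = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.9999) + 1
print(f"{n_templates} templates approximated by {n_basis} basis waveforms")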

@fazlu Yes. Groups around the world are working on improving waveform models. Also, most of the code used in LVK analyses is open source, so it is possible to submit pull requests for changes.

Jonah has already answered the question.
There may be some confusion around the (slightly jargon-heavy) term 'whitened template', which I'll try to clarify in this post. It refers to the template processed with the same filter used to whiten the data: for both data and template, one divides in the frequency domain by the square root of the power spectral density. While the whitened data has a flat spectrum, the whitened template does not!
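
In code, the same operation on a hypothetical frequency-domain template and PSD (both placeholder arrays here, just to show the division) looks like:

import numpy as np

freqs = np.linspace(10, 1024, 4096)            # Hz, illustrative
psd = 1e-46 * (1 + (100 / freqs)**4)           # toy detector-like PSD
template_fd = np.exp(-1j * 2 * np.pi * freqs)  # placeholder template

# The same filter is applied to data and template:
whitened_template = template_fd / np.sqrt(psd)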

A question about Tutorial 1.3: I am puzzled about how we choose the qrange. I could not solve the challenge question because I could not identify the signal in the q_transform() call without specifying the qrange and outseg parameters. The tutorial arbitrarily selects an "intuitive" qrange and then narrows the view down further using the outseg parameter. This approach does not work for the challenge question. After seeing the answer, I figured out that calling q_transform() with just the outseg parameter quickly narrows in on the signal. I feel that the tutorial should lay this out.


Hi @frowdow, the qrange and outseg parameters control different parts of the plotting. You can always see more details about the methods you are using in a notebook by adding a new cell with a ? after the name of the method. For example:

hdata.q_transform?

will give you all the options for the q_transform method. While outseg controls the segment for output (so it's a way to zoom in on the x-axis), qrange controls the range of Q values to scan. Both parameters need to be set appropriately.

For a BBH, the interval of time of interest is much shorter than the one used for the BNS in the example, so the plotted segment of time can be shorter. The range of Q values also has to be chosen differently. As you can see e.g. in slide 8 of this presentation, the Q value determines the shape of the tiles in the spectrogram. A BNS signal is long and many cycles can be recovered, so you need high Q values (tiles broad in time and thin in frequency). A BBH stays in the detectable frequency band only briefly, so it is better to choose low Q values (tiles thin in time and broad in frequency), which capture small intervals of time and highlight variations in the signal happening over a short time range.
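
For instance, a BBH-friendly call might look like this (the GPS times are illustrative choices around GW150914):

from gwpy.timeseries import TimeSeries

hdata = TimeSeries.fetch_open_data('H1', 1126259446, 1126259478)

# Low Q values and a short output segment suit a short BBH signal;
# a BNS would need higher Q values and a longer outseg
qspec = hdata.q_transform(qrange=(4, 16), outseg=(1126259461, 1126259463))
plot = qspec.plot()
plot.show()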
Hope it's clearer now! 🙂


Hi Agata,

Thanks for that really clear explanation. That presentation helped too. Your precise explanation really belongs in the tutorial.
🙂
Thanks,
Sachin
