Problem using the bilby package for parameter estimation of a neutron star merger

Following the tutorial described in the GWOSC tutorial resources, I tried to run parameter estimation (PE) on GW170817. Since GW170817 is a merger of two neutron stars, I changed all the BH-specific functions to their NS counterparts (e.g., convert_to_lal_binary_neutron_star_parameters, generate_all_bns_parameters). The inspiral phase of an NS merger is much longer than that of a BH merger, so I decided to use 16 s of data for my test, and I set post_trigger_duration = 8 s. To generate the PSD, I still use 128 s of data taken before the start of the analysis segment. In my preliminary test I also used the 'IMRPhenomPv2' waveform and initialized the following priors according to values from the literature.
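The segment bookkeeping described above can be sketched as plain arithmetic (the GPS trigger time is the published merger time of GW170817; the variable names are just illustrative):

```python
# Segment bookkeeping for the analysis described above.
time_of_event = 1187008882.4  # GPS merger time of GW170817
duration = 16                 # length of the analysis segment (s)
post_trigger_duration = 8     # data kept after the trigger (s)

# The analysis segment ends 8 s after the trigger and spans 16 s in total.
analysis_start = time_of_event + post_trigger_duration - duration
analysis_end = analysis_start + duration

# 128 s of data immediately before the analysis segment are used for the PSD.
psd_duration = 128
psd_end = analysis_start
psd_start = psd_end - psd_duration
```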

import numpy as np
import bilby
from bilby.core.prior import Uniform

prior = bilby.core.prior.PriorDict()
prior['chirp_mass'] = Uniform(name='chirp_mass', minimum=1.0, maximum=1.5)
prior['mass_ratio'] = Uniform(name='mass_ratio', minimum=0.5, maximum=1)
prior['phase'] = Uniform(name='phase', minimum=0, maximum=2 * np.pi)
prior['geocent_time'] = Uniform(name='geocent_time', minimum=time_of_event - 0.1, maximum=time_of_event + 0.1)
# Fixed (delta-function) priors for the remaining parameters
prior['a_1'] = 0.0
prior['a_2'] = 0.0
prior['tilt_1'] = 0.0
prior['tilt_2'] = 0.0
prior['phi_12'] = 0.0
prior['phi_jl'] = 0.0
prior['dec'] = -0.40808
prior['ra'] = 3.44616
prior['theta_jn'] = 2.5028
prior['psi'] = 0
prior['luminosity_distance'] = 40

When I ran the analysis, I used the sampler settings shown below:

result_short = bilby.run_sampler(
    likelihood, prior, sampler='dynesty', outdir='short', label="GW170817",
    conversion_function=bilby.gw.conversion.generate_all_bns_parameters,
    sample="unif", nlive=500, dlogz=3  # <- Arguments are used to make things fast - not recommended for general use
)

Unfortunately, I obtained the following error message:

....

1695it [08:13,  1.95s/it, bound:0 nc: 39 ncall:1.7e+04 eff:10.2% logz-ratio=1150.87+/-0.17 dlogz:373.146>3]

1696it [08:13,  1.45s/it, bound:0 nc:  3 ncall:1.7e+04 eff:10.2% logz-ratio=1151.17+/-0.17 dlogz:372.889>3]

1697it [08:19,  2.67s/it, bound:0 nc: 81 ncall:1.7e+04 eff:10.1% logz-ratio=1151.46+/-0.17 dlogz:372.589>3]

1698it [08:22,  2.78s/it, bound:0 nc: 44 ncall:1.7e+04 eff:10.1% logz-ratio=1151.72+/-0.17 dlogz:372.290>3]

1699it [08:39,  7.13s/it, bound:0 nc:256 ncall:1.7e+04 eff:10.0% logz-ratio=1151.95+/-0.17 dlogz:372.029>3]

Exception while calling prior_transform function:

  params: [0.17066496 0.86254559 1.14297206]
  args: []
  kwargs: {}
  exception:
 
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/dynesty/dynesty.py", line 860, in __call__
    return self.func(x, *self.args, **self.kwargs)
  File "/usr/local/lib/python3.7/dist-packages/bilby/core/sampler/dynesty.py", line 53, in _prior_transform_wrapper
    return _priors.rescale(_search_parameter_keys, theta)
  File "/usr/local/lib/python3.7/dist-packages/bilby/core/prior/dict.py", line 487, in rescale
    return list(flatten([self[key].rescale(sample) for key, sample in zip(keys, theta)]))
  File "/usr/local/lib/python3.7/dist-packages/bilby/core/prior/dict.py", line 487, in <listcomp>
    return list(flatten([self[key].rescale(sample) for key, sample in zip(keys, theta)]))
  File "/usr/local/lib/python3.7/dist-packages/bilby/core/prior/analytical.py", line 206, in rescale
    self.test_valid_for_rescaling(val)
  File "/usr/local/lib/python3.7/dist-packages/bilby/core/prior/base.py", line 188, in test_valid_for_rescaling
    raise ValueError("Number to be rescaled should be in [0, 1]")
ValueError: Number to be rescaled should be in [0, 1]
 
---------------------------------------------------------------------------
 
ValueError                                Traceback (most recent call last)
 
<ipython-input-24-0a93c6f15491> in <module>
      8     likelihood, prior, sampler='dynesty', outdir='short', label="GW170817",
      9     conversion_function=bilby.gw.conversion.generate_all_bns_parameters,
---> 10     sample="unif", nlive=500, dlogz=3  # <- Arguments are used to make things fast - not recommended for general use
     11 )
 
15 frames
 
/usr/local/lib/python3.7/dist-packages/bilby/core/prior/base.py in test_valid_for_rescaling(val)
    186         tests = (valarray < 0) + (valarray > 1)
    187         if np.any(tests):
--> 188             raise ValueError("Number to be rescaled should be in [0, 1]")
    189
    190     def __repr__(self):
 
ValueError: Number to be rescaled should be in [0, 1]

I followed the GWOSC tutorial for analyzing GW150914 (Google Colab) and changed the data from GW150914 to GW170817. In my previous post I used the waveform model 'IMRPhenomPv2', which may not be appropriate for a neutron star merger. In the attached Jupyter notebook (Tuto_3_2_Parameter_estimation_for_compact_object_mergers.ipynb) I now use the waveform model 'IMRPhenomPv2_NRTidal', but I still obtain a similar error message. For reference, these are the recommended package versions installed in my tests: lalsuite==6.82, bilby==1.1.2, gwpy==2.0.2, matplotlib==3.2.2, dynesty==1.0.0.
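For reference, switching the waveform model only amounts to changing the approximant name in the waveform arguments. This is a sketch; the frequency values are illustrative tutorial-style defaults, not the exact values from my notebook:

```python
# Waveform arguments for a tidal BNS approximant (frequencies illustrative).
waveform_arguments = dict(
    waveform_approximant='IMRPhenomPv2_NRTidal',  # tidal BNS model from lalsuite
    reference_frequency=50.0,  # Hz
    minimum_frequency=20.0,    # Hz
)
# This dict is then passed to bilby.gw.WaveformGenerator together with
# frequency_domain_source_model=bilby.gw.source.lal_binary_neutron_star and
# parameter_conversion=bilby.gw.conversion.convert_to_lal_binary_neutron_star_parameters.
```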

Does anyone have any suggestions for dealing with this problem?


Hello @lupinlin

Thank you for the question. I have taken a look at your code, and I think the following changes will solve your problem:

(1) You need to include “mass_1” and “mass_2” in your prior dictionary.
(2) You might want to increase the maximum frequency in H1 and L1 to 2048 Hz.
(3) You might want to change the sampling method from “unif” to “rwalk” (random walk).
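On point (1): bilby samples in (chirp_mass, mass_ratio) and derives the component masses internally. A standalone sketch of that conversion, using the usual convention mass_ratio = m2/m1 &lt;= 1, looks like this (the function name is just illustrative):

```python
def component_masses_from_mchirp_q(chirp_mass, mass_ratio):
    """Convert (chirp_mass, mass_ratio = m2/m1 <= 1) to component masses.

    Inverts chirp_mass = (m1 * m2)**(3/5) / (m1 + m2)**(1/5) analytically.
    """
    total_mass = chirp_mass * (1 + mass_ratio) ** 1.2 / mass_ratio ** 0.6
    mass_1 = total_mass / (1 + mass_ratio)
    mass_2 = mass_1 * mass_ratio
    return mass_1, mass_2
```

Having “mass_1” and “mass_2” available in the prior dictionary lets the conversion and constraint machinery act on them during sampling.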

Also, you probably want to do this run on your local computer by installing the necessary packages instead of running it on Google Colab. GW170817 is a BNS signal whose waveform spends much longer in the inspiral stage than a BBH's, so you should expect the PE run to take much longer than a PE run for a BBH event.

I hope this will help.

Best regards,
Alvin


Dear Alvin,

Thanks a lot for your comments.
In my code I do define “mass_1” and “mass_2”, as follows:

mass_1 = 1.5
mass_2 = 1.3
prior = bilby.core.prior.PriorDict()
prior['mass_1'] = mass_1
prior['mass_2'] = mass_2

I checked the 2nd and 3rd suggestions, and the 3rd one turned out to be the important one: my code runs without errors if I change the sampling method to “rwalk”.
Can you please tell me the benefit of setting the sampling method to “rwalk”?
What is the major difference between the “rwalk” and “unif” sampling methods?

By the way, you suggested setting the maximum frequency to 2048 Hz.
I suppose that a maximum frequency of 2048 Hz better captures the high-frequency content of the signal. Is my speculation correct?
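A quick back-of-the-envelope check of my own reasoning (the ISCO formula is the standard point-particle estimate; the masses are illustrative):

```python
# 2048 Hz is the Nyquist frequency of 4096 Hz data.
sampling_frequency = 4096.0
nyquist = sampling_frequency / 2.0

# Rough GW frequency at the innermost stable circular orbit (point-particle
# estimate): f_ISCO ~ c**3 / (6**1.5 * pi * G * M) ~ 4400 Hz / (M / Msun).
def f_isco_hz(total_mass_msun):
    return 4400.0 / total_mass_msun

# A ~2.8 Msun BNS reaches ~1.6 kHz, close to Nyquist, while a ~65 Msun
# BBH like GW150914 merges around ~70 Hz, far below it.
```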

I ran the code on Google Colab as a test, so I followed the tutorial code and set “nlive=50” and “dlogz=100”. When I ran the code on my server, I changed these to “nlive=500” and “dlogz=3”.
Are there any constraints on the settings of these two parameters if I want to obtain a better result?

Here is a summary of the output information.

17:05 bilby INFO    : Summary of results:
nsamples: 622
ln_noise_evidence: -475274.920
ln_evidence: -474810.695 +/-    nan
ln_bayes_factor: 464.225 +/-    nan

I noticed that the uncertainties on ln_evidence and ln_bayes_factor are nan.
Does this indicate that a potential problem still exists in my code?