I understand that the certificate is automatically generated once the course is completed. Does this mean we must complete the data challenge in order to complete the course and receive the certificate?
Will we know how many points we got in the challenge?
Hi, why is this equation true:
weights = weights / max(weights)
Why are we normalising the weights by their maximum value? How can max(weights) be equal to the evidence?
Can you please explain how
efficiency = len(freq_samples) / len(freq_gsamples) gives a measure of the efficiency of the rejection sampling?
Hi @AmbicaG, I hope I understand your question correctly. This is a Gaussian likelihood function. By using this equation, we make sure that if our data point (or curve) is the same as the simulated one, we get L = 1, because y_i = s(t_i, f, a) and the numerator in the exponent vanishes. The bigger the difference between the model s and the data y, the more negative the exponent and the smaller the likelihood. We could choose other functions to calculate the likelihood, but this is one of the simplest that fulfils all the requirements.
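A minimal sketch of that behaviour, assuming a hypothetical sine-wave signal model s(t, f, a) and unit noise (the actual model and noise level in the course notebook may differ):

```python
import numpy as np

def signal(t, f, a):
    # Hypothetical signal model s(t, f, a): amplitude a, frequency f.
    return a * np.sin(2 * np.pi * f * t)

def likelihood(y, t, f, a, sigma=1.0):
    # Unnormalised Gaussian likelihood: exactly 1 when the model matches
    # the data, and smaller the larger the residual y - s(t, f, a) becomes.
    residual = y - signal(t, f, a)
    return np.exp(-0.5 * np.sum(residual**2) / sigma**2)

t = np.linspace(0, 1, 100)
y = signal(t, f=2.0, a=1.0)             # noise-free "data"
print(likelihood(y, t, 2.0, 1.0))       # perfect match -> 1.0
print(likelihood(y, t, 2.5, 1.0) < 1)   # mismatched model -> smaller than 1
```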
Hi @AmbicaG, we want to normalise because it is easier to work with numbers between 0 and 1 than with an unknown interval. There is no difference in using the non-normalised weights, since in the next step we perform the rejection. If you want to use unnormalised weights, you just have to change
keep = weights > np.random.uniform(0, 1, weights.shape) to
keep = weights > np.random.uniform(0, max(weights), weights.shape).
We are not assuming max(weights) to be the evidence here; we do not need to include the evidence at all, since it is irrelevant for the sampling, being the same constant factor for all weights.
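You can convince yourself of this equivalence with a quick sketch: using the same uniform draws, rejection with normalised weights against U(0, 1) produces exactly the same keep/reject decisions as rejection with raw weights against U(0, max(weights)), because weights / max(weights) > u is the same inequality as weights > u * max(weights).

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.random(10) * 5.0   # unnormalised weights on an arbitrary scale
u = rng.random(10)               # one shared set of uniform draws in [0, 1)

# Rejection with normalised weights against U(0, 1) ...
keep_norm = (weights / max(weights)) > u
# ... is the same decision as raw weights against U(0, max(weights)).
keep_raw = weights > u * max(weights)

print(np.array_equal(keep_norm, keep_raw))  # True
```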
Hi @AmbicaG, by efficiency we mean how many of the samples we generate are still there after we perform the rejection. For example:
Suppose we have three weights (0.1, 0.2 and 0.8) and we generate three random numbers to compare them with, as in the code, e.g. (0.05, 0.3 and 0.5). Now we compare them:
0.1 > 0.05, so we keep this point; 0.2 < 0.3, so we reject this point; and 0.8 > 0.5, so we keep the last one.
In total we have kept 2 out of 3 initial numbers, so our efficiency in this case is 2/3 ≈ 0.67.
In the code we are doing the same, except that the rejection is done for 100,000 samples in these three lines:
keep = weights > np.random.uniform(0, 1, weights.shape)
alpha_samples = alpha_gsamples[keep]
freq_samples = freq_gsamples[keep]
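Putting those three lines into a self-contained toy example (the prior ranges and the stand-in weights below are made up for illustration; in the notebook the weights come from the likelihood):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical prior draws for the two parameters (names mirror the notebook).
freq_gsamples = rng.uniform(1.0, 3.0, n)
alpha_gsamples = rng.uniform(0.0, 1.0, n)

# Stand-in weights: arbitrary values normalised to (0, 1], just for illustration.
weights = rng.random(n)
weights = weights / max(weights)

# The three rejection lines from the course code:
keep = weights > rng.uniform(0, 1, weights.shape)
alpha_samples = alpha_gsamples[keep]
freq_samples = freq_gsamples[keep]

# Efficiency = fraction of generated samples that survive the rejection.
efficiency = len(freq_samples) / len(freq_gsamples)
print(f"kept {len(freq_samples)} of {n} samples, efficiency = {efficiency:.3f}")
```

With uniform stand-in weights the efficiency comes out near 0.5; with real likelihood weights it is usually much lower, because most prior draws fit the data poorly.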