Module for handling JLA supernova data. Adapted from the SNLS module. The differences between this and the supernova_SNLS.f90 module are:
 1) We use the SALT2 X_1 parameter instead of stretch
 2) Intrinsic dispersion is included in the error reported in data files
 3) Dispersion from lensing is also included
 4) Redshift uncertainty is a global 150 km/s

Model:
The model for SN magnitudes is
   m_obs = 5*log10( D_L ) - alpha*(stretch-1) + beta*colour + scriptm
scriptm is some combination of the absolute luminosity of SNe Ia and the Hubble constant -- marginalizing over it is equivalent to marginalizing over the absolute magnitude. Because the errors on the individual points don't depend on scriptm, we can do this. Since they do depend on alpha and beta, we can't marginalize over those. Because cosmomc returns D_L in Mpc, scriptm is M - 25, where M is the magnitude of a stretch=1, colour=0 SN Ia in whatever band the mag comes in as. Here we use a flat prior for one or two scriptm values. Note that the SN data are independent of H_0 -unless- you use the absdist file.

Covariance matrices:
This code has support for SN-SN covariance matrices. We write the final covariance matrix as:
   V = D + V_mm + alpha^2 * V_ss + beta^2 * V_cc + 2 alpha V_ms - 2 beta V_mc - 2 alpha beta V_sc
where, for example, V_ms is the covariance matrix between the uncorrected magnitude and the stretch of the SN. D contains the diagonal terms, calculated from
   D_ii = sigma_m^2 + alpha^2 sigma_s^2 + beta^2 sigma_c^2 + 2 alpha cov_m_s - 2 beta cov_m_c - 2 alpha beta cov_s_c + intrinsicdisp^2 + (5/log 10)^2 sigma_z^2 / z^2
It may seem a little strange that the diagonal term is split off, but it is convenient in some circumstances, as it allows for Sherman-Woodbury inversion. However, we don't implement that here.

Speed:
One might wonder if it is really necessary to explicitly fit for alpha and beta. Can't we just internally marginalize over them?
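As a quick illustration of how the pieces above fit together, here is a minimal numpy sketch that assembles D and V for two supernovae. All numerical inputs are made-up stand-ins, not real JLA values, and the off-diagonal systematics matrices are set to zero for brevity:

```python
import numpy as np

alpha, beta = 0.14, 3.1          # hypothetical values of the nuisance parameters
intrinsicdisp = 0.1
c_km_s = 299792.458
sigma_z = 150.0 / c_km_s         # global 150 km/s redshift uncertainty

# Per-SN quantities (illustrative numbers only)
sigma_m = np.array([0.08, 0.10])
sigma_s = np.array([0.50, 0.60])
sigma_c = np.array([0.03, 0.04])
cov_m_s = np.array([0.001, 0.002])
cov_m_c = np.array([-0.001, 0.000])
cov_s_c = np.array([0.000, 0.001])
z = np.array([0.3, 0.7])

# Diagonal term D_ii from the formula in the header
D = (sigma_m**2 + alpha**2 * sigma_s**2 + beta**2 * sigma_c**2
     + 2 * alpha * cov_m_s - 2 * beta * cov_m_c
     - 2 * alpha * beta * cov_s_c
     + intrinsicdisp**2
     + (5.0 / np.log(10.0))**2 * sigma_z**2 / z**2)

# Off-diagonal SN-SN covariance matrices (zero here for brevity)
n = len(z)
V_mm = V_ss = V_cc = V_ms = V_mc = V_sc = np.zeros((n, n))

# Final covariance matrix V, following the expression in the header
V = (np.diag(D) + V_mm + alpha**2 * V_ss + beta**2 * V_cc
     + 2 * alpha * V_ms - 2 * beta * V_mc - 2 * alpha * beta * V_sc)
```

Since V depends on alpha and beta, it must be rebuilt (and re-inverted) whenever those parameters change, which is the root of the speed discussion below.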
The short answer is: no, you can't, at least not if you want an unbiased answer. The long answer is: yes, sure, you can internally marginalize over them, but doing so correctly is actually slower than fitting for them, so it isn't a great idea. There are a few things you might consider trying for the internal marginalization:
 1) Fixing alpha and beta. This is -wrong- and will both underestimate and bias your results. This is the way all previous cosmomc packages work, and so all those papers are -wrong-.
 2) Fixing alpha and beta but including some assumed error on them to make the errors better. An improvement, but still wrong, because alpha and beta are correlated with the other parameters. Of course, if other constraints effectively fix the cosmology, then this works -- but that's equivalent to saying that the SN data are irrelevant to your fit, so why are you bothering anyway?
 3) Internally minimizing over alpha and beta, then plugging the minimizing values in to get the chisq. This is at least interesting, because this technique usually works, and it would make things much faster. Sadly, here it doesn't, because the method only applies if the errors are independent of the parameters you are marginalizing over. The errors do depend on alpha and beta, so this will give you a biased answer.
 4) Explicitly making a grid over alpha and beta, computing the likelihood for each, and then marginalizing. This finally actually works. But it turns out to be slower than the alternative. To get a good result, you really need your grid to be 60x60 or larger. That means inverting the systematics covariance matrix (which depends on alpha and beta) more than 60^2 times, and it is about 500x500. Without SNLS, the slowest step in the likelihood computation is usually the 3000x3000 inversion of the WMAP pixel-space TT covariance matrix. Matrix inversion is O(N^3), so this approach to alpha and beta is 60^2 * 500^3 / 3000^3 ~ 17 times slower than the WMAP inversion.
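The ~17x figure from item 4 follows directly from the O(N^3) scaling and can be checked with a one-line calculation:

```python
# Cost ratio of grid marginalization over (alpha, beta) vs. a single
# 3000x3000 WMAP pixel-space TT covariance inversion; inversion is O(N^3).
grid_points = 60 * 60    # 60x60 grid over alpha and beta
n_sn_cov = 500           # systematics covariance matrix is about 500x500
n_wmap = 3000            # WMAP pixel-space TT covariance matrix size

relative_cost = grid_points * n_sn_cov**3 / n_wmap**3
print(round(relative_cost, 1))  # -> 16.7, i.e. roughly 17x slower
```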
For comparison, fitting for alpha and beta explicitly slows the code down by about 20% for a typical fit. So you can use this method if you want, but it would be kinda stupid.
 5) Comment on the speed topic by MB. The slow step here is the recomputation of the Cholesky decomposition of the covariance matrix each time alpha or beta changes (i.e. every step if you are fitting for alpha and beta). This costs about 150 MFLOPs for the JLA sample. A good way to gain a factor of 10 is to make this computation perturbative with respect to reference values of alpha and beta; from my investigation, first order seems accurate enough even for large departures. I did not implement this here because all these costs are negligible compared to the Planck likelihood evaluations, but it may become interesting for larger SN samples.

Modification History:
 Written by Alex Conley, Dec 2006
 aconley, Jan 2007: The OpenMP stuff was causing massive slowdowns on some processors (ones with hyperthreading), so it was removed
 aconley, Jul 2009: Added absolute distance support
 aconley, May 2010: Added twoscriptm support
 aconley, Apr 2011: Fix some non-standard F90 usage. Thanks to Zhiqi Huang for catching this.
 aconley, April 2011: zhel, zcmb read in wrong order. Thanks to Xiao Dong-Li and Shuang Wang for catching this.
 mbetoule, Dec 2013: adaptation to the JLA sample
 AL, Mar 2014: updates for latest CosmoMC structure
 AL, June 2014: updated JLA_marginalize=T handling so it should work (also default JLA.ini)
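The slow step described in item 5 of the Speed section (re-doing the Cholesky factorization whenever alpha or beta changes, then a cheap triangular solve for the chisq) can be sketched as follows. The covariance matrix here is a random positive-definite stand-in, not real JLA data; only the sample size is taken from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 740  # roughly the JLA sample size

# Hypothetical stand-in for the alpha/beta-dependent covariance matrix:
# a random symmetric positive-definite matrix.
A = rng.standard_normal((n, n))
cov = A @ A.T + n * np.eye(n)
residuals = rng.standard_normal(n)  # stand-in for m_obs - m_model

# The slow step: the Cholesky factorization costs ~n^3/3 flops and must be
# redone every time alpha or beta changes (~135 MFLOPs for n = 740, in line
# with the "about 150 MFLOPs" quoted above).
L = np.linalg.cholesky(cov)
print(f"~{n**3 / 3 / 1e6:.0f} MFLOPs per factorization")

# Given the factor, chisq = r^T cov^-1 r reduces to an O(n^2) solve:
# with y = L^-1 r and cov = L L^T, chisq = y^T y.
y = np.linalg.solve(L, residuals)  # triangular solve (generic solve for brevity)
chisq = y @ y
```

The perturbative idea in item 5 would replace the full refactorization with a first-order update around a reference (alpha, beta); that is not shown here, matching the header's note that it was not implemented.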