one bridge further


Jackie Wong, Jon Forster (Warwick) and Peter Smith have just published a paper in Statistics & Computing on bridge sampling bias and improvement by splitting.

“…known to be asymptotically unbiased, the bridge sampling technique produces biased estimates in practical usage for small to moderate sample sizes (…) the estimator yields positive bias that worsens with increasing distance between the two distributions. The second type of bias arises when the approximation density is determined from the posterior samples using the method of moments, resulting in a systematic underestimation of the normalizing constant.”

Recall that bridge sampling relies on a double trick, with two samples x and y from two (unnormalised) densities f and g that get swapped between the two samples in the ratio

$$ m\sum_{i=1}^n g(x_i)\,\omega(x_i) \Big/ n\sum_{i=1}^m f(y_i)\,\omega(y_i) $$

of unbiased estimators of the inverse normalising constants. Hence the ratio is biased, and the more so the less similar the two densities are. Special cases for ω include importance sampling [unbiased] and reciprocal importance sampling. Since the optimal version of the bridge weight ω is the inverse of the mixture of f and g, it makes me wonder about the performance of using both samples, top and bottom, since, as an aggregated sample, they also come from the mixture, as in the Owen & Zhou (2000) multiple importance sampler. However, a quick try with a positive Normal versus an Exponential with rate 2 does not show an improvement from using both samples top and bottom (even when using the perfectly normalised versions), as in the snippet below.
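For concreteness, here is a minimal reconstruction of the setup (my own sketch, not the paper's code: the definitions of f, g, x, y, nx and ny are assumptions chosen to match the snippet that follows, with both densities normalised so that the true ratio of the normalising constants is one):

nx <- 1e4                      # sample size from f
ny <- 1e4                      # sample size from g
x <- abs(rnorm(nx))            # sample from the positive Normal
y <- rexp(ny, 2)               # sample from the Exponential with rate 2
f <- function(t) 2 * dnorm(t)  # normalised positive Normal density on (0, Inf)
g <- function(t) dexp(t, 2)    # normalised Exponential(2) density
# standard bridge estimator, with the optimal weight
# omega(t) proportional to 1 / (nx * f(t) + ny * g(t)):
bridge <- (ny * sum(g(x) / (nx * f(x) + ny * g(x)))) /
          (nx * sum(f(y) / (nx * f(y) + ny * g(y))))

The aggregated version, using both samples top and bottom, is then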

# both samples weighted by the nx * dnorm + ny * dexp(., 2) mixture,
# in the numerator and in the denominator alike:
morc <- (sum(f(y) / (nx * dnorm(y) + ny * dexp(y, 2))) +
         sum(f(x) / (nx * dnorm(x) + ny * dexp(x, 2)))) /
        (sum(g(x) / (nx * dnorm(x) + ny * dexp(x, 2))) +
         sum(g(y) / (nx * dnorm(y) + ny * dexp(y, 2))))

at least in terms of bias… Surprisingly (!), the bias almost vanishes for very different sample sizes, either in favour of f or in favour of g. This is a sort of genuine defensive sampling, who knows?! At the very least, it ensures a finite variance for all weights. (The splitting approach introduced in the paper is a natural solution to create independence between the first sample and the second density. This reminded me of our two parallel chains in AMIS.)
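To gauge that near-vanishing bias numerically, one could replicate the experiment over unbalanced sample sizes; a hedged sketch of mine (the function bias_check is a made-up name, reusing f and g from the reconstruction above and the perfectly normalised mixture weights):

bias_check <- function(nx, ny, reps = 1e3) {
  est <- replicate(reps, {
    x <- abs(rnorm(nx))          # fresh sample from the positive Normal
    y <- rexp(ny, 2)             # fresh sample from the Exponential(2)
    wx <- nx * f(x) + ny * g(x)  # mixture weights at the x sample
    wy <- nx * f(y) + ny * g(y)  # mixture weights at the y sample
    (sum(f(x) / wx) + sum(f(y) / wy)) / (sum(g(x) / wx) + sum(g(y) / wy))
  })
  mean(est) - 1                  # Monte Carlo bias (the true ratio is one)
}
bias_check(1e2, 1e4)             # sizes strongly in favour of g
bias_check(1e4, 1e2)             # sizes strongly in favour of f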


