Inference to the Best Explanation in Uncertain Evidential Situations
It has recently been argued that a non-Bayesian probabilistic version of inference to the best explanation (IBE*) has a number of advantages over Bayesian conditionalization (Douven; Douven and Wenmackers). We investigate how IBE* can be generalized to uncertain evidential situations and formulate a novel updating rule, IBE**. We then compare its performance with that of its Bayesian counterpart, Jeffrey conditionalization (JC), in a series of simulations in which two agents, one updating by IBE** and the other by JC, try to detect the bias of a coin while being only partially certain which side the coin landed on. We show that IBE** assigns high probability to the actual bias more often than JC. We also show that this happens considerably faster, that IBE** passes higher thresholds for high probability, and that it generally leads to more accurate probability distributions than JC.
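To make the simulation setup concrete, the following is a minimal sketch of the coin-bias scenario in Python. The specific choices here are assumptions for illustration only, not the paper's actual parameters: eleven bias hypotheses, a fixed certainty `Q` that the agent has about the reported side of the coin, and a small explanationist bonus `C` added to the best-explaining hypothesis after a Jeffrey-style update (one natural way to generalize Douven-style IBE* to uncertain evidence; the paper's IBE** rule may differ in detail).

```python
import random

# Hypothetical simulation parameters (not from the paper).
BIASES = [i / 10 for i in range(11)]  # hypotheses: P(heads) = 0.0, 0.1, ..., 1.0
Q = 0.8                               # agent's certainty about the observed side
C = 0.1                               # assumed explanationist bonus for IBE**

def jeffrey_update(prior, heads_cert):
    """Jeffrey conditionalization on the uncertain proposition 'the coin landed heads'."""
    ph = sum(p * b for p, b in zip(prior, BIASES))           # prior P(heads)
    post_h = [p * b / ph for p, b in zip(prior, BIASES)]     # P(H_i | heads)
    post_t = [p * (1 - b) / (1 - ph) for p, b in zip(prior, BIASES)]  # P(H_i | tails)
    return [heads_cert * h + (1 - heads_cert) * t for h, t in zip(post_h, post_t)]

def ibe_update(prior, heads_cert, bonus=C):
    """Sketch of an IBE**-style rule: Jeffrey update plus a bonus to the
    hypothesis that best explains the (probably) observed outcome."""
    post = jeffrey_update(prior, heads_cert)
    likely_heads = heads_cert >= 0.5
    # The 'best explanation' of a (probable) heads outcome is taken to be the
    # hypothesis assigning it the highest likelihood (an assumption here).
    best = max(range(len(BIASES)),
               key=lambda i: BIASES[i] if likely_heads else 1 - BIASES[i])
    post[best] += bonus
    total = sum(post)
    return [p / total for p in post]  # renormalize

random.seed(0)
true_bias = 0.7
jc = ibe = [1 / 11] * 11
for _ in range(1000):
    heads = random.random() < true_bias
    cert = Q if heads else 1 - Q  # partial certainty that the coin landed heads
    jc = jeffrey_update(jc, cert)
    ibe = ibe_update(ibe, cert)

# Probability each agent assigns to the true bias hypothesis (0.7).
print(round(jc[7], 3), round(ibe[7], 3))
```

Under this sketch one can vary `Q`, `C`, and `true_bias` to reproduce the kind of comparison the paper runs; the updates are the standard Jeffrey rule plus an assumed bonus step, so both posteriors remain normalized probability distributions throughout.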
2. Generalizing Inference to the Best Explanation to Uncertain Evidential Situations
3. Detecting the Bias of a Coin
4. Overall Performance of IBE** versus Jeffrey Conditionalization
5. Speed of Convergence
6. The Threshold for High Subjective Probability
7. Epistemic Inaccuracy