Comments are for the page: The probability of green cheese
I’ve perpetrated more than enough machine learning systems in my time that spit out “probabilities”; that is to say, there was once a justification for the method that involved some probabilistic reasoning and some very dodgy assumptions (often known to be false), but that justification was far in the background by the time the computer produced its results. Quite apart from the issue of not taking all factors into account, these “probabilities” would often be wildly overconfident, or sometimes underconfident. Often I would use the euphemism “scores”, saying, “this may well correlate with the thing you’re interested in, but beyond that…”
Naive Bayes in particular is known for producing “probabilities” very close to 0 or 1 that are incredibly badly calibrated, but if you ignore the “probability” label and just pick a threshold, the results are… kind of OK-ish by machine learning standards.
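To illustrate why Naive Bayes piles up near 0 and 1 (this is my own toy sketch, not any particular library’s implementation): the independence assumption multiplies a likelihood ratio per feature, so correlated features get counted as if they were fresh evidence.

```python
# Toy sketch of Naive Bayes overconfidence. Function name is my own.
# Two classes with equal prior; each feature contributes a likelihood
# ratio, multiplied as if the features were independent.

def naive_bayes_posterior(likelihood_ratios, prior=0.5):
    """Posterior P(class=1 | features) from per-feature likelihood
    ratios, combined under the naive independence assumption."""
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# One feature with P(x|class=1)=0.8, P(x|class=0)=0.2: likelihood ratio 4.
single = naive_bayes_posterior([4.0])           # 0.8
# Ten perfectly correlated copies of that same feature: the same evidence
# is counted ten times, and the "probability" shoots toward 1.
duplicated = naive_bayes_posterior([4.0] * 10)  # ~0.999999
print(single, duplicated)
```

The ranking of examples (and hence any threshold you pick) can still be fine even while the numbers themselves are absurdly extreme, which is the calibration failure described above.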
I think you misunderstand subjective Bayesianism. You treat priors as a sort of parameter that needs to be set to the right value so that the method gives the right results. But in Bayesian epistemology, your prior is just what you currently believe. But what if you’re wrong? A prior doesn’t need to be correct. The most correct prior would be to already know the answer, in which case why bother updating? The entire point of learning from experience is that priors don’t have to be correct; they can improve with learning.
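A standard way to make “a wrong prior improves with learning” concrete is Beta–Binomial updating; here is a minimal sketch (the numbers and function name are mine, purely for illustration):

```python
# Sketch: a confidently wrong prior gets corrected by data.
# A Beta(a, b) prior on a coin's bias updates to Beta(a+heads, b+tails).

def posterior_mean(prior_a, prior_b, heads, tails):
    """Mean of the Beta posterior after observing heads/tails flips,
    starting from a Beta(prior_a, prior_b) prior."""
    return (prior_a + heads) / (prior_a + prior_b + heads + tails)

# The true bias is 0.5, but the prior confidently says ~0.9: Beta(9, 1).
print(posterior_mean(9, 1, 0, 0))      # 0.9  -- the wrong prior
print(posterior_mean(9, 1, 50, 50))    # ~0.54 -- mostly corrected
print(posterior_mean(9, 1, 500, 500))  # ~0.50 -- nearly right
```

The prior was badly wrong, yet updating on evidence washes it out; no one had to certify the prior as “correct” beforehand.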
The idea of “justified” priors just sort of copies the “What if I’m wrong?” reaction, but unlike “correct”, its content is unclear, which makes it hard to give the sort of one-paragraph explanation of why that intuition is unhelpful.
I think the reason it seems justified to have a prior of 1/6 on each outcome of a die roll, and unjustified to be sure it’s a 5, is a long chain of experience suggesting we can’t predict the result of a die more accurately than 1/6 before it happens. This is bound up with the idea of justification because we often want to be able to rely on others’ reasoning, so we expect them to be able to show how they got to their conclusion from shared priors.
Of course, you can’t always do this; in some (many) cases it’s just too complicated, or you don’t know of any shared priors. But this only means you can’t justify yourself to others, not that Bayesian methods aren’t applicable. The idea of requiring justification for everything turns out, somewhat surprisingly, to be a mistake, just like moving the entire earth east.
In case my explanation was bad, here is someone else’s.
The problem I see with subjective Bayesianism is that no one knows what their priors are in the vast majority of cases. For example, what are your priors for the proposition that I was born on Mars? No fair saying “extremely low” or some other qualitative answer, unless you can point me to a well-developed qualitative version of Bayesianism that maintains something like the quantitative version’s rigor.
But that is a very different problem! The current suggestions for its solution are things like “consider what odds you’d bet at” or “find a mathematically constructible prior that, while not exactly matching your current beliefs, will converge with them quickly”. Neither of these makes sense if you’re worrying about “correct” priors “out there”.
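The “what odds would you bet at” suggestion has a simple mechanical core, sketched below (the function name is mine; this is just the standard odds-to-probability conversion, not a full elicitation procedure):

```python
# Sketch: turning betting odds into an implied probability.
# If you are indifferent to a bet at b-to-1 against a proposition,
# your implied probability for it is 1 / (b + 1).

def implied_probability(odds_against):
    """Probability implied by indifference at odds_against-to-1 against."""
    return 1.0 / (odds_against + 1.0)

print(implied_probability(7))  # 0.125 -- 7-to-1 against, i.e. 1 in 8
print(implied_probability(1))  # 0.5   -- even odds
```

This gives a number even for propositions like “you were born on Mars”, which is the point: the prior is read off from your dispositions, not certified against some external standard.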
And while there isn’t a rigorous qualitative version of Bayesianism, there is a non-rigorous version for improving gut-based reasoning. Knowing whether there is a valid method for this to approximate seems quite important.
I read this post on the same day I saw Nate Silver, in all seriousness, tweeting out an explanation of why Donald Trump has a 1/8 chance of winning: “if the polls move toward Trump in the closing days rather than Biden (50/50 chance) and there’s a polling error in Trump’s favor (50/50 chance) then he’s 50/50 to win. That gets you to his 1 in 8 odds in our current forecast.”
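The quoted figure appears to come from multiplying three halves together, treating the three events as independent (the variable names below are mine, not Silver’s):

```python
# Sketch of the arithmetic behind the quoted "1 in 8": three 50/50
# events, multiplied as if independent, with winning only via that path.
p_polls_move_toward_trump = 0.5
p_polling_error_in_his_favor = 0.5
p_win_given_both = 0.5

p_win = p_polls_move_toward_trump * p_polling_error_in_his_favor * p_win_given_both
print(p_win)  # 0.125, i.e. 1 in 8
```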