PHILOSOPHY - Epistemology: The Preface Paradox [HD]
Video Statistics and Information
Channel: Wireless Philosophy
Views: 83,147
Rating: 4.4567165 out of 5
Keywords: Khan Academy, Philosophy, Wireless Philosophy, Wiphi, video, lecture, course, epistemology, logic, belief, reason, paradox, Preface Paradox, University of Toronto, Jonathan Weisberg, Jennifer Nagel, brain in a vat, contradiction
Id: 7fwQJu9ywT4
Length: 9min 50sec (590 seconds)
Published: Mon May 23 2016
We are given the following premises: 1.) The author believes everything he wrote to be true. 2.) The author believes that he may have written at least one false statement.
However, the first premise could be more accurately worded as "the author believes what he wrote to be generally true". The problem is not a logical error. The problem is that knowledge which comes from outside ourselves has a chance of being false, because we do not know it with certainty to be true. However, in order to operate in the world outside our minds, we must hold our outside experiences to be generally true while accounting for the small possibility that they are not. Nevertheless, we are able to come to certain truths that do not come from experience, such as basic logic and math, and because we can reach these truths without outside experience, we do not need to account for the possibility that they may be false.
Given this, I would change the premises of this problem to the following: 1.) The author believes what he wrote to be generally true. 2.) The author believes that there is a possibility one or more statements he wrote may be false.
I would also propose that the problem could be solved by separating knowledge gained from outside of us from knowledge "from within us" such as logic. Doing this, we would operate on probability-based knowledge in regard to knowledge gained from outside experience and certainty-based knowledge in regard to that knowledge we are able to figure out independent of experience.
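To put rough numbers on this probability-based reading, here is a minimal Python sketch (the per-statement confidence of 0.99 and the 300-statement book length are illustrative assumptions, not figures from the video):

# Probability-based reading of the preface: each empirical claim is held with
# high but non-certain confidence, and the claims are treated as independent
# purely for illustration.
per_claim_confidence = 0.99   # assumed confidence in any single statement
num_claims = 300              # assumed number of statements in the book

p_all_true = per_claim_confidence ** num_claims
p_at_least_one_false = 1 - p_all_true

print(f"P(every statement is true)      = {p_all_true:.3f}")           # ~0.049
print(f"P(at least one statement false) = {p_at_least_one_false:.3f}")  # ~0.951

On these assumptions it is perfectly coherent to be highly confident in each individual statement and, at the same time, highly confident that the book contains at least one error.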
"If you know something is true, you've ruled out all possible alternatives"
Yeah, no. That's an absurd claim. Absolute certainty isn't required for knowledge. The epistemic closure principle is not accepted as a requirement for knowledge in any reasonable definition; it is only invoked by some skeptics to create an idealized version of knowledge that has virtually no relation to how the word is used in any real-world context. The term "knowledge" becomes useless in practical settings if you require absolute (warranted) certainty, since no one could ever be said to know anything under such a definition (brain-in-a-vat scenarios, evil demons, faulty memory, etc. would make ruling out all other possibilities with absolute certainty impossible). Let's not conflate "knowledge" in some pure, unattainable sense posed as an ideal by epistemic skepticism with the word as we use it in every other context.
There's no paradox here. People are not perfectly precise rational beings and do not have the capacity to be perfectly rational in determining the exact degree of confidence that should be associated with each belief. We use loose approximations and we inevitably believe some things which are false and contradictory as our beliefs become composed of many approximations. The probabilistic model is about how we would want to rationally work with our beliefs to maintain self-consistent degrees of confidence for beliefs, not about our actual capacity to do so perfectly.
1) First, he should notice the distinction between specific beliefs about facts and a meta-belief about the possibility that one or more of those beliefs may turn out to be false. It is NOT a situation in which someone believes P and not-P. Rather, one has carefully assembled a large set of beliefs, making up the book, and it is in fact true that one believes all of them; but when one thinks of the book as a whole, one recognizes that it is likely that one is incorrect about something in it. There is something to be reconciled here, but it is not the P-and-not-P paradox he presents.
2) He then mentions the belief that the earth is not flat, AS IF this were a good example for all the beliefs in a book. In fact, in pretty much any book there will be large numbers of assertions that are not at the level of certainty involved in believing the earth is not flat. So while he does bring in the idea of probability (more on this later), his example is poor, precisely because a person who points to the earth-is-not-flat assertion in his book is not expecting that particular belief, or ones like it, to be the problematic one where he may be wrong.
3) He brings in probability, but then presents a counterargument that it is too complicated to work out the probabilities. But who would argue that we do? We have a sound gut sense that even some seemingly obvious truths can turn out not to be true. We do not do the math; we intuitively guesstimate. No contradiction. We have beliefs we are totally sure of; we have others that we think are incredibly likely to be the case, and so on down. Books are generally not meant to be taken like Bibles, and most of us realize this. So while Nature magazine might demand that we include in our article the statistics we used to arrive at the meta-belief that there are likely errors in the book, we do not do this, nor do we need to.
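A small Python sketch of the "guesstimate" picture in point 3, using an assumed acceptance threshold for belief (the threshold, per-claim confidence, and claim count are all made up for illustration): one believes whatever one rates above the threshold, and both the individual claims and the meta-claim that there is an error somewhere can clear it.

# Threshold ("Lockean") model of belief: believe a claim iff its estimated
# probability exceeds an acceptance threshold. All numbers are assumptions.
threshold = 0.9
per_claim_confidence = 0.99
num_claims = 300

claims = {f"claim_{i}": per_claim_confidence for i in range(num_claims)}
# Meta-claim: "at least one of the claims above is false"
claims["at_least_one_claim_is_false"] = 1 - per_claim_confidence ** num_claims

believed = {name for name, prob in claims.items() if prob > threshold}

print(len(believed))                               # 301: every claim plus the meta-claim
print("at_least_one_claim_is_false" in believed)   # True

Every individual claim clears the threshold and so does the meta-claim, yet no claim is ever believed together with its own negation, which matches the point that this is not a P-and-not-P situation.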
There is no contradiction here.
I interpreted the final question to be this: how can we be right about anything we know if there is a chance that we can be wrong? My response is this: because the probability of being right outweighs the probability of being wrong.
I take the probabilistic side of the argument. As far as I'm concerned, the notion that uncertainty negates knowledge suffers from the same logical errors as the notion that beliefs represent knowledge, and implicitly depends on a conflation of validity with truth. A good example is the distinction between a mathematical proof and a well-verified physical theory. The reason mathematicians can offer proofs that physicists can't is that they can select propositions a & b and define a & b as true axiomatically, then say that c provably follows when a & b are true. That is valid knowledge, even if it's not uniquely valid knowledge.
Take Riemannian geometry, for instance. It was created as a purely intellectual exercise that explicitly began by choosing axioms that essentially contradicted those of Euclidean geometry. It wasn't until later that it was used as the foundation for General Relativity. Yet we can't say that the validity of Euclidean geometry is falsified just because Riemannian geometry is valid.
Those who would say that reality is non-Euclidean because Riemannian geometry is valid, implying Euclidean geometry is invalid, are conflating validity with capital-T Truth. Two systems that rest on opposing axiomatic propositions can be equally valid. Understanding that both, and other as-yet-undefined alternatives, can be equally valid is knowledge, irrespective of any errors made in applying that knowledge.
Knowledge is not so absolute that you can automatically reject the validity of any alternative axioms just because they aren't in line with your selected axioms. Validity and capital-T Truth are very distinct things, and something can be valid without being uniquely valid.
Fallibility.... Yes, we must take everything with the proverbial grain of salt. As he said, we are "almost certain that the Bosphorus is in Turkey".
It's all about becoming familiar with unorthodox perspectives so that our own opinion, as individuals, can be a well-rounded one.
I think the problem here is more in the interpretation of the two beliefs. The author believes that the premises A & B & C ... & X are individually true and collectively true. He also believes that one of those is not true; however, that latter belief is better stated logically as a belief that a belief is false, not a belief that a premise is false. That is to say, he believes A, B, C, D, E, ..., X are true - let's call these beliefs B_A, B_B, B_C, .... Note that we haven't assigned truth values to these beliefs (only to the premises, via the beliefs); we have just stated that they exist. Further, he believes that one of these (say B_X) is false. Thus, by B_A ... B_W, the premises A ... W are true, and by the belief that B_X is false, X is false. There is no contradiction.
(My apologies if the explanation is a bit convoluted in this stream-of-consciousness paragraph attempt)
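A compact illustration of this comment's structure, using made-up stand-in propositions (not anything from the video): the belief set contains each statement plus the meta-belief about the beliefs, but never a statement together with its own negation.

# Object-level beliefs: the author believes each individual statement A..X.
statements = ["A", "B", "C", "D", "E", "X"]   # assumed stand-ins for the book's claims
beliefs = set(statements)

# Meta-level belief: a belief ABOUT the belief set, not the negation of any one statement.
beliefs.add("at least one belief among A..X is false")

# A direct contradiction would require believing both S and "not S" for some S.
explicit_contradictions = [s for s in statements if f"not {s}" in beliefs]
print(explicit_contradictions)   # [] - no statement is believed alongside its negation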
The biggest contradiction I saw here was the narrator acknowledging that, when we're being careful, we reason probabilistically, and then proceeding to talk about knowledge requiring absolute certainty.
He just defined himself into irrelevance by writing bad rules to the game, rules that don't allow any progress. Those rules result in a soccer game where the score, in addition to being low by virtue of it being soccer, is made even lower, as if Zeno's Paradox were governing the ball's movement.
If this philosopher applied his skepticism to his own beliefs concerning this entire video, then by his own logic he could be wrong.
But if he admits what he said could be wrong, then he would actually be admitting that certain knowledge, at least about some things, does in fact exist.
In other words, universal skepticism is self-refuting.
Just because we could be wrong about some things, does not require us to doubt all of our convictions. That is asking too much. It is like a self-directed ad hominem tu quoque fallacy.