When a conclusion is correct, there is always more than one valid argument that can be made for it.
Or maybe it's just me....
For a long time I've had the feeling that there's a bias in contemporary philosophy against arguing for 'big' positions--the kinds of positions people hold (or discount) in a fundamental way. The thought is: if you give an argument for one of these positions, the only people you convince will be those already convinced.
What remains to philosophy is--must be--to only solve problems within these fundamental framework views. But that makes it difficult to ever interact with those who reject them, UNLESS your M.O. when interacting with them is to always accept others' presuppositions and to try to work things out within their views.
That's all well and good, I suppose. But it seems to be ignoring a herd of elephants in the room. (That, and I'm absolutely terrible at accepting presuppositions I object to.)
1. The Identity of Indiscernibles: If two things have all their properties in common, they are identical.
2. The Indiscernibility of Identicals: If two things are identical, they have all their properties in common.
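Stated in standard second-order notation (a common formalization, not one given in the original discussion), the two principles are:

```latex
% (1) Identity of Indiscernibles: sharing all properties entails identity
\forall x\,\forall y\,\bigl[\forall F\,(Fx \leftrightarrow Fy) \rightarrow x = y\bigr]

% (2) Indiscernibility of Identicals: identity entails sharing all properties
\forall x\,\forall y\,\bigl[x = y \rightarrow \forall F\,(Fx \leftrightarrow Fy)\bigr]
```

Notice that neither formula mentions a discerner: any perspective-relative, epistemic reading is something added in application, not something present in the principles themselves.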
THESIS: there is no application of these principles that is not implicitly epistemic and self-refuting.
I used to think that both of these principles were acceptable. Or that perhaps (1) was acceptable, but not (2). But upon further thought, both are problematic if "have all their properties in common" is read as "are indiscernible." Having all properties in common with something is not equivalent to being indiscernible from it. Indiscernibility is an epistemic notion. As an epistemic notion, it is incomplete until the perspective of the discerner has been specified. (It makes a difference whether we're talking about things being indiscernible to God, or to me; and it might make a difference what else I know about the entity in question.) Thus Descartes's use of (1), which turns not on shared properties but on indiscernibility from his perspective, is not legitimate.
For uses that do not turn on indiscernibility from a perspective, (1) seems fine. But it also seems like a principle that will never be used, as no two things have all their properties in common--unless what we really mean to say is that what seem, from some perspective, to be two things have all their properties in common. But if that's the case, they do not really have all their properties in common: there are sufficient differences for them to seem to be two things in the first place. So there won't be any such "two things" to apply the principle to.
Non-epistemically motivated uses of (2) are equally impossible to apply. We're never going to encounter TWO THINGS that have all their properties in common. We might encounter what seem to be two things but are actually one, which have all their properties in common except seeming to be two things. This is an epistemic interpretation of (2), and it refutes itself. In seeming to be two things, the "two things" do not have all their properties in common. They have sufficient difference to seem to be two.
The purely metaphysical version of (2) might also be false given quantum physics: if the same thing can be in different places at once, (2) is false for whatever sort of thing has that ability.
To do the work philosophers want to put them to, both (1) and (2) need to include epistemic notions. But they can't. The purely metaphysical versions are not much use.
Despite the apparent democracy of its classes, St. John's is a strikingly non-transparent place. The curriculum is handed down from on high. The educational objectives are left entirely unstated. No explanations of why things are important, or what you should be getting from them, are ever given. This sounds great, in theory. But in my case it led to paranoia: am I finding what I should be finding in these things? And to obsessive attempts at pattern-recognition, at discerning what the guiding intentions were behind the arrangement of the Program.
This contributes all the more to students' paranoia about their own performance. The grading system is--because unspoken--also vague and various. It's harder to know when you're doing well when you have no idea what that would mean.
The College thus takes on characteristics of a religion. It simultaneously inspires doubt about one's own worthiness and doubt about whether what the College requires of one to be worthy--whatever it is--is right. Perhaps this is part of what has made me obsessed with trust issues for much of my adult life. This is also why the College reminds me of The Village from The Prisoner, despite being intended to be the paragon and training ground of democracy.
In past years, I've moved on from the things in my life I believed in that required me to trust them in ways I found difficult. I'm much happier, and don't miss them. Perhaps it's a worthwhile experience to have, but on the whole, if something requires trust to the degree that constant self-doubt is inspired, I would avoid it.
David Kaplan uses the following example in support of the idea that indexicals and demonstratives (including some uses of pronouns) refer directly, i.e. unmediated by Fregean senses:
Suppose you have a friend, Paul, who lives in Princeton. You're at a party, and Charles has shown up disguised as Paul. You say
(1) "He [Delta] now lives in Princeton."
Kaplan says, "I assume that in the possible circumstances described earlier, Paul and Charles having disguised themselves as each other, Delta would have demonstrated Charles. Therefore, under the Fregean theory, the proposition I just expressed, Pat [he named the proposition 'Pat'] would have been false under the circumstances of the switch" (Demonstratives, 516). He construes Fregeanism for some reason as tying the actual object presented, rather than the mode in which it is presented, with the proposition expressed. (Sure, senses are modes of presentation of referents, but senses compose thoughts, so I'm a bit confused.)
He claims that Direct Reference gets the right result because demonstratives are rigid designators; they designate the same thing in all possible worlds (keeping certain things constant), rather than varying with, say, which object is present. (I can't claim to understand this. If you hold the object constant, the person you refer to in all possible worlds would still be Charles, would it not?)
But this seems like the right result for the wrong reason. I agree that in the proposition above, the person you refer to is Paul rather than Charles. But why? Because the information you're drawing on in making the judgment is derived from Paul, and not Charles. It's irrelevant whether Paul or Charles is the person sitting next to you: you're saying something about Paul based on information drawn previously from Paul.
Let's tweak the case slightly. Suppose, at the same party, Charles-in-a-Paul-suit starts dancing. You say,
(2) "I never thought I'd see him dance!"
Which person are you referring to?
This seems harder, to me, because the information you're drawing on concerns the actions of the person in front of you. Are these Charles's actions, or Paul's? I don't pretend this is obvious, but I think it's much more tempting in this case to say you're referring to the person who's actually there (Charles), rather than the person he's disguised as. Then again, it might not be surprising if Charles is someone who dances often, and Paul never does. So perhaps here you're referring to Paul also, even if Charles is doing the dancing.
What if Charles disguised himself as Paul to carry out a murder? You catch him in the act, and call the police. When they arrive, you say,
(3) "What he did was horrible!"
Does 'he' refer to Charles, or Paul? Charles. Why? Charles did the thing, even if you thought he was someone else. These are not easy cases.
A very Evansian way to treat these cases would be the following: you are referring to the person from whom the bulk of the information you draw on in your thought/statement/judgment derives. You fail to meet your target in making the judgment insofar as there is a mismatch between the person from whom the bulk of your information derives and the person who's actually there. So in (1) you have tried and failed to refer to Paul by demonstrating Charles, but you have still said something true about Paul. In (2) you're drawing on information about the frequency of Paul's dancing, and so still trying to refer to Paul, albeit also failing. In (3) you have said something true about Charles without knowing it. You have referred to Charles, but as it were by luck.
That is still not to explain why a more sophisticated Fregeanism might be useful in such cases. But Charles-under-a-Paul-mode of presentation could certainly go a long way in making sense of (1) and (2).
I am a lady who is big into technical stuff. Philosophy of language, taking electronics apart and putting them back together; you want to talk about how to solve a problem? I'm all over it.
I am also surrounded by dudes all the time.
So why in the name of the seven mad gods who rule the world isn't most of my life like Kaylee at the ball on Persephone (Firefly)?
Oh, because the latter is a Joss Whedon fantasy. Carry on.
Sometimes I think much of what we value about people and places and living situations boils down to action potentials--not in the strict sense in which the term is used in neuroscience, but in the sense of what is easy to do in different places, with different people, etc., and what is most enjoyable that way.
My apartment has all the same things I enjoy using and playing with whether or not a roommate is home. But only when no one else is there can I really enjoy them, or use them at all. When one is on one's own, action potentials in a lot of directions are practically infinite.
Quite a few are also curtailed, though--the ones that could only blossom in someone else's presence: shared experiences of all kinds.
On the whole, I value shared experiences more. But I don't feel whole unless I have that total freedom frequently. It's the space for self-determination.