What Philosophy Is Good For
Math-y thought patterns for everyday concepts
17 Nov 2019
One philosophy degree later, the first question people ask when we meet is what it was like to add philosophy to a CS degree at an engineering school where most people are just trying to skirt the humanities requirement. There is a mix of pleasant surprise (“finally, not just another tech person!”) and curiosity in the question — what is there to studying philosophy, really, beyond debating esoteric questions like the trolley problem and the meaning of existence?
I didn’t really know what philosophy was either, for a while. Coming from an engineering background, I found the concepts annoyingly fuzzy and the boundaries of problems poorly defined. You realize quickly that you don’t reach a conclusive answer to any question, only a better understanding of the difficulties in answering it. In some subfields of philosophy, it feels like “anything goes” — it’s unclear what you’re supposed to take as a given; people just construct entirely new frameworks.
Unlike in other academic disciplines, it’s unclear what progress looks like. Philosophers seem to debate the same questions for centuries without reaching consensus, despite having mapped the whole conceptual territory.
After suspending my skepticism through several semesters, I found that my views on what philosophy was good for started to crystallize. Here’s what I realized, and what I tell friends who express the same reservations and uneasiness in their first interactions with philosophy:
Philosophy comes out of talking about fuzzy, everyday concepts and making them more rigorous.
Compared to math or engineering, philosophy doesn’t feel precise. People feel uneasy and conclude that philosophy isn’t worthwhile because it’s not as rigorous or logical as they expected.
In reality, the rigor of math simply isn’t possible when it comes to the everyday questions that we grapple with. When we talk about hard problems like what effective schooling should look like, or how much governance should be imposed on communities, we don’t simply “prove” what the answer is. We can no longer assume a controlled environment and a well-defined scope. Progress on real problems is inherently nonlinear, and the questions rarely admit decisive answers. People throw their ideas and reasons haphazardly into the ring, we end up with ten different schools of thought, and yet the answer still seems more complex and nuanced than anything we can come up with.
Philosophy is a way to address these questions more systematically — it takes our fuzzy concepts and intuitions and makes them rigorous. It isn’t meant to provide answers, but the primitives, frameworks, and abstractions that we can use to structure the world.
In metaphysics, for example, when we talk about ontology, we try to formalize our natural intuitions for objects (“if we were to list all the objects in the world, what determines whether something is on the list?”). Other areas of philosophy formalize questions that are more familiar and concrete — what justice is (political philosophy), what scientific progress looks like (philosophy of science), the role of luck in judgments of morality (ethics), how we justify our beliefs (epistemology).
These “what is X, really” questions that are typical in philosophy aren’t purely theoretical descriptions; their answers sometimes bear directly on judgments and actions that we have to make in practice. Take the recent discussions on the discriminatory impacts of machine learning. We have to ask — what does it mean for something to be fair? If you use the same decision-making policy to make predictions for two different individuals, is that fair? This is surprisingly complex, and much of recent research not only borrows notions from political philosophy, but echoes the same struggles and debates that occurred historically!
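Here’s a toy sketch of why “the same policy for everyone” doesn’t settle the question. The groups, scores, and outcomes below are entirely hypothetical; the point is just that two intuitive measures of fairness can disagree about the very same threshold policy:

```python
# Toy illustration: one threshold policy, two hypothetical applicant groups.
# Each applicant is a (model_score, actually_repaid) pair; all numbers invented.
group_a = [(0.9, True), (0.7, True), (0.6, False), (0.3, False)]
group_b = [(0.8, True), (0.45, True), (0.4, False), (0.2, False)]

THRESHOLD = 0.5  # approve anyone the model scores at or above 0.5

def approval_rate(group):
    """Fraction of the group the policy approves (a demographic-parity lens)."""
    return sum(score >= THRESHOLD for score, _ in group) / len(group)

def false_positive_rate(group):
    """Fraction of non-repayers wrongly approved (an error-rate lens)."""
    negatives = [score for score, repaid in group if not repaid]
    return sum(score >= THRESHOLD for score in negatives) / len(negatives)

for name, group in [("A", group_a), ("B", group_b)]:
    print(f"group {name}: approval rate = {approval_rate(group):.2f}, "
          f"false positive rate = {false_positive_rate(group):.2f}")

# Output:
#   group A: approval rate = 0.75, false positive rate = 0.50
#   group B: approval rate = 0.25, false positive rate = 0.00
# Identical policy, yet the groups differ on both measures. Which quantity
# "fairness" requires equalizing is precisely the philosophical question.
```

Formal results in the fairness literature sharpen this further: when groups have different base rates, several natural fairness criteria are mutually unsatisfiable, so a choice between them is unavoidable.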
The complexity and apparent “fuzziness” of philosophy isn’t a problem with the discipline itself; it’s a feature of the content it deals with. Instead of measuring its worth against math, we should compare it to everyday conversation and see how the mental models we extract from philosophy clarify how we think.
Philosophy might not share the clean rigor of math, but the mental tools it provides are surprisingly similar. Instead of manipulating numbers and symbols, you learn to manipulate ideas.
I started observing new patterns in my own thinking: recognizing what the question really is, identifying when an idea hasn’t fully covered the space of possible considerations, inverting an argument to get a clean counterexample, mapping out the threads of an argument. You learn to find an isomorphism between the concepts in your brain and someone else’s. You realize that even when you’re talking about the same thing, people have vastly different mental maps: different words and explanations for the same concept make sense to different people. Framing your counterarguments in terms of someone else’s map has a far more powerful effect on them.
Philosophy professors are extraordinarily good at all of this: distilling the essence of some meandering argument, and constructing a simple counterexample that makes its weaknesses immediately apparent. This was a familiar refrain in classes:
Student: Wait, but isn't X not quite right because A and B? And P and Q makes it complicated because then this other thing...
Prof: So your objection is Y, right?
S: ...yes.
P: Ok, consider this counterexample C. Y would imply you believe Z about C, which you might choose to accept, but it also seems like an extreme position.
S: Ah, hm.
Outside of philosophy, most discussions don’t go like this — it’s too easy to get absorbed in making your own points and insights known instead of addressing the crux of the disagreement. But when this sort of communication is the bulk of what you do in philosophy, it inevitably translates into speaking and thinking clearly elsewhere.
None of philosophy is necessary in the way that programming is necessary to become a software engineer. Going through the world bottom-up, operating on low-level details, works perfectly well. You don’t need a grasp of what theories are to be a great scientist, nor do you have to know what consequentialism is to have a personal moral system; assumptions and abstractions are implicit in action.
Still, I’ve personally found that there’s a lot to gain — it’s made me a much clearer thinker.
Thanks to Timmy and Basil for feedback! :) Email me if you have thoughts or comments.