The acronym STEM, as everyone knows, stands for Science, Technology, Engineering, and Math. STEM subjects are math-intensive, analytic, and generally require a high intelligence to understand all the rules and intricacies. By this definition, the umbrella of subjects that could be considered ‘STEM’ or STEM-like could be expanded to include finance, economics, and even philosophy…I’ll leave it to the reader to find a catchy acronym.
Finance, which includes both personal finance and accounting, requires math, although the math tends to be simple – mostly arithmetic and compounding. It also involves data visualization and the organization and interpretation of arrays of information such as spreadsheets and financial statements.
Economics, beginning in the 50’s with Solow’s growth model and then in the 70’s with the theory of rational expectations and the debate between efficient and behavioral markets, has become very mathematical. Differential equations are a necessity, along with complicated diagrams and the processing and analysis of an abundance of data. Consider Robert Barro, who used econometric methods to analyze data; John Cochrane, who advanced time-series methods in economics; Paul Samuelson, a famous economist who used a great deal of math to formulate his economic theories, adding significant rigor to the field; Milton Friedman of the Chicago School, possibly the most famous economist of the second half of the 20th century, who used mathematical methods in the modeling of rational agents; James Buchanan and The Calculus of Consent; Ronald Coase and his theorem…all economists who applied mathematics to subjects such as public choice, behavioral economics, rational markets, and decision making. More recently, physicists Lee Smolin and Eric Weinstein have begun applying concepts from gauge theory to macroeconomics.
Quantitative finance is the most difficult and math-intensive of all fields of finance and economics, requiring the study of multivariable partial differential equations, real analysis, measure theory, martingale theory, and probability theory.
Online, especially since 2013 with the rise of ‘STEM culture’, finance, economics, philosophy, and quantitative finance carry the same prestige as the ‘hard’ STEM subjects such as physics, computer science, and math. Offline, no one cares that you’re an econometrician, but online you’re royalty. Even history, literature, comparative literature, and anthropology majors are respected: in contrast to useless ‘fluff’ degrees, these subjects are rigorous and intellectually redeeming even if they don’t pay as well as STEM. Also, finance, economics, and philosophy majors have SAT scores (a decent proxy for IQ) as high as those of math, computer science, and physics majors.
Then there’s philosophy, which I proclaim to be a STEM subject. When most people think of philosophy, the names Aristotle, Socrates, Plato, and maybe Nietzsche, Hume, and Kant come to mind, not mathematicians. But modern math and science offer a way of reconciling, or at least shedding new light on, questions posed by philosophers centuries earlier. The work of Godel, Turing, and Cantor blurs the lines between philosophy and mathematics. Not only has philosophy, like economics, become more STEM-like in recent years, but online especially, over the past few years, I’ve noticed an immense increase in interest in mathematical philosophy.
Kant in his 1781 magnum opus Critique of Pure Reason argued there were limitations to knowledge beyond the empirical: ‘Kant’s arguments are designed to show the limitations of our knowledge. The Rationalists believed that we could possess metaphysical knowledge about God, souls, substance, and so forth; they believed such knowledge was transcendentally real. Kant argues, however, that we cannot have knowledge of the realm beyond the empirical.’
Fast-forward to the 20th century, when Godel, Church, and Turing resolved Hilbert’s ‘Entscheidungsproblem‘ in the negative, as described by Scott Aaronson, who himself often blurs the lines between philosophy and science in his research on computer science and complexity theory:
The Entscheidungsproblem was the dream, enunciated by David Hilbert in the 1920s, of designing a mechanical procedure to determine the truth or falsehood of any well-formed mathematical statement. According to the usual story, Hilbert’s dream was irrevocably destroyed by the work of Godel, Church, and Turing in the 1930s. First, the Incompleteness Theorem showed that no recursively-axiomatizable formal system can encode all and only the true mathematical statements. Second, Church’s and Turing’s results showed that, even if we settle for an incomplete system F, there is still no mechanical procedure to sort mathematical statements into the three categories “provable in F,” “disprovable in F,” and “undecidable in F.”
In layman’s terms, no consistent formal system rich enough to express arithmetic (specifically, the Peano axioms) can prove its own consistency, and any such system leaves some true statements unprovable.
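Gödel’s second incompleteness theorem, the ‘cannot prove its own consistency’ half of that summary, is often written symbolically as follows (standard notation, not from this post):

```latex
% For any consistent, recursively axiomatizable theory T extending Peano arithmetic:
T \nvdash \mathrm{Con}(T)
% i.e., T cannot prove the arithmetical sentence Con(T) asserting T's own consistency.
```

Here $\mathrm{Con}(T)$ is a sentence of arithmetic encoding the claim “no contradiction is provable in $T$”, so the theorem literally says arithmetic cannot certify itself.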
These proofs tenuously vindicate Kant’s ‘synthetic a priori’: that there are limitations to what can be proved, and that abstractions and propositions (like ‘multiplication’ and ‘addition’) have no empirical antecedent (a priori knowledge) and are ‘synthetic’, not ‘analytic’.
Bertrand Russell, Ludwig Wittgenstein, Ernst Zermelo, Alan Turing, Alfred Tarski, and Georg Cantor are other examples of mathematicians and logicians whose results had philosophical implications.
The P versus NP problem is also relevant to understanding the limitations of what can be proved under ‘reasonable’ conditions by a computer, specifically whether every problem whose solution can be quickly verified by a computer can also be quickly solved by a computer.
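The verify-versus-solve gap can be sketched with Subset Sum, a standard NP problem (the example numbers are mine, chosen for illustration): checking a proposed subset takes time linear in its size, while the only obvious general solver tries all 2^n subsets.

```python
from itertools import combinations

# Subset Sum is in NP: a proposed answer (a "witness" subset) is quick to check,
# but brute-force search over all subsets takes exponential time in the worst case.

def verify(numbers, target, subset):
    """Polynomial-time check: is `subset` drawn from `numbers` and does it sum to `target`?"""
    pool = list(numbers)
    for x in subset:
        if x not in pool:
            return False
        pool.remove(x)
    return sum(subset) == target

def solve(numbers, target):
    """Brute-force search over all 2^n subsets -- exponential time."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return list(subset)
    return None

nums = [3, 34, 4, 12, 5, 2]
witness = solve(nums, 9)              # finds [4, 5]
print(witness, verify(nums, 9, witness))
```

P = NP would mean every such verifiable problem also has a fast solver; most complexity theorists suspect it does not.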
A significant area of philosophical inquiry involves the very concept of reality itself, whether reality is ‘real’ or ‘artificial’ (a computer simulation), the latter posited by Oxford philosopher Nick Bostrom in his famous ‘Simulation Argument‘, which argues that at least one of three propositions is very likely true, one of them being that almost everyone with experiences like ours is living in a simulation. In addition to the probabilistic argument, there are also the mathematics and logistics of creating the simulation itself, such as whether enough resources exist for an advanced civilization to build a sufficiently powerful computer that can emulate the complexities of reality, or how such a computer or program would be created. Creationists argue that the structure of the universe is so fine-tuned (such as physical constants) that a ‘creator’ or ‘designer’ of sorts is involved, an argument that merges theology with neurology, biology, and physics.
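As I recall, Bostrom’s paper quantifies the argument with a simple fraction, roughly of the following form (my paraphrase of his notation):

```latex
% f_sim: fraction of all observers with human-type experiences who live in simulations
% f_p:   fraction of civilizations that survive to a posthuman, simulation-capable stage
% N:     average number of ancestor-simulations run by such a civilization
f_{\mathrm{sim}} = \frac{f_p \, \bar{N}}{f_p \, \bar{N} + 1}
% If f_p * N is large, f_sim approaches 1 -- hence the trilemma.
```

The trilemma follows: either almost no civilizations reach the posthuman stage ($f_p \approx 0$), or almost none of them run ancestor-simulations ($\bar{N} \approx 0$), or simulated observers vastly outnumber unsimulated ones.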
Philosophy of mind – a branch of philosophy that studies the nature of the mind and the brain (the mind–body problem) – reaches into computer science and artificial intelligence, specifically on whether a simulated mind is conscious, or whether a sufficiently advanced artificial intelligence is a substitute for consciousness. The computational theory of mind is the view that the human mind or the human brain (or both) is an information-processing system and that thinking is a form of computing, a view endorsed by philosopher Daniel Dennett, who argues that artificial systems can have ‘intentionality’ and ‘understanding’. Philosopher John Searle, invoking a thought experiment he devised called the Chinese room, counters that while a computable mind may seem to an outsider like it has ‘understanding’, it doesn’t. This is an example of how philosophy borrows from STEM subjects such as neurology and computer science.
Free will vs. determinism is an age-old philosophical debate that has attracted the attention of quantum physicists. If the universe is entirely deterministic, it may imply that humans have no free will. Philosopher and cognitive scientist Sam Harris rejects free will; Daniel Dennett endorses compatibilism, which mixes some free will with determinism. This also ties into quantum mechanics, as discussed in a Scientific American article, The Quantum Physics of Free Will:
More recently, quantum-gravity theorist and blogger Sabine Hossenfelder has offered some thoughts. In a 2012 paper, she suggests that there is a third way between determinism and randomness: what she calls “free-will functions,” whose outputs are fully determined but unpredictable. Only those who know the function know what will happen. This is distinct from deterministic chaos, in which the function is universally known but the initial conditions are imperfectly known.
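A toy analogue of a ‘free-will function’ (my illustration, not Hossenfelder’s) is a sequence generated by hashing a hidden seed: the outputs are fully determined, involve no randomness, yet an outside observer who doesn’t know the function cannot predict the next output from the previous ones.

```python
import hashlib

# Fully determined but (to outsiders) unpredictable: each output is a fixed
# function of a hidden seed and the step number. Only someone who knows the
# seed and the function can predict the sequence.

def free_will_function(secret: str, step: int) -> int:
    """Deterministic 0/1 output derived from a hidden seed and the step index."""
    digest = hashlib.sha256(f"{secret}:{step}".encode()).hexdigest()
    return int(digest, 16) % 2

# Re-running this always yields the same sequence -- determinism without predictability.
sequence = [free_will_function("hidden-seed", i) for i in range(10)]
print(sequence)
```

This is distinct from deterministic chaos, where the function is public and unpredictability comes from imperfect knowledge of the initial conditions; here the function itself is the hidden ingredient.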
There is also the ‘many worlds’ interpretation of quantum mechanics, which some invoke to reconcile determinism and free will by postulating that every possible event or outcome resides in a discrete universe that doesn’t interact with the others.
The demarcation problem, in the philosophy of science, is about how to distinguish between science and pseudoscience. Karl Popper argued that science, in contrast to pseudoscience, can be falsified. Russell’s teapot is an analogy, or thought experiment, coined by the philosopher Bertrand Russell (1872–1970) to illustrate that the philosophic burden of proof lies upon the person making scientifically unfalsifiable claims, rather than shifting the burden of disproof to others: it’s impossible to disprove within reason (without checking every square inch of space) the existence of the teapot. This ties into string theory, because the concern is that it cannot be falsified by any existing technology or scientific method. String theory may ‘never be wrong’, since it can always be ‘modified’ when new evidence that challenges the theory is introduced (such as the failure to discover supersymmetry). Concepts such as the ‘multiverse’ are also impossible to falsify. That doesn’t mean these concepts are not fruitful (in the mathematical sense) or possibly correct, but right now there is no way to test them.
That’s enough examples. It’s obvious that philosophy has extended its tentacles to all STEM subjects. In many ways, STEM complements philosophy by filling gaps of knowledge, or by providing new perspectives or angles of inquiry on timeless questions.