Wednesday, April 24, 2024

The Scientific Method, part 4: Eating elephants and The Big News Principle

This is the fourth in a series about the scientific method and how it can be applied to everyday life.  In this installment I'm going to suggest a way to approach all the science-y stuff without getting overwhelmed.

There is an old joke that goes, "How do you eat an elephant?  One bite at a time."  That answer might be good for a laugh, but it wouldn't actually work, either for a real elephant (if you were foolish enough to actually attempt to eat a whole elephant by yourself) or for the metaphorical science elephant.  Modern science has been a thing for over 300 years now, with many millions of people involved in its pursuit as a profession, and many millions more in supporting roles or just doing it as a hobby.  Nowadays, millions of scientific papers are published worldwide every year, thousands every single day.  It is not possible for anyone, not even professional scientists, to keep up with it all.

Fortunately, you don't have to consume even a tiny fraction of the available scientific knowledge to get a lot of mental nutrition out of it.  But there are a few basics that everyone ought to be familiar with.  For the most part this is the stuff that you learned in high school science class if you were paying attention.  I'm going to do a lightning-quick review here, a little science-elephant amuse-bouche.  What I am about to tell you may all be old hat to you, but later when I get to the more interesting philosophical stuff I'll be referring back to some of this so I want to make sure everyone is on the same page.

It may be tempting to skip this, especially if you grew up hating science class.  I sympathize.  Science education can be notoriously bad.  It may also be tempting to just leave the elephant lying in the field and let the hyenas and vultures take care of it.  The problem with that approach is that the hyenas and vultures may come for you next.  In this world it really pays to be armed with at least a little basic knowledge.

I'm going to make a bold claim here: what I am about to tell you, the current-best-explanations provided by science, are enough to account for all observed data for phenomena that happen here on earth.  There are some extant Problems -- observations that can't be explained with current science -- but to find them you have to go far outside our solar system.  In many cases you have to go outside of our galaxy.  How can I be so confident about this after telling you that there is so much scientific knowledge that one person cannot possibly know it all?

The source of my confidence is something I call the Big News principle.  To explain it, I need to clarify what I mean by "all the observed data."  By this I do not mean all of the data collected in science labs, I mean everything that you personally observe.  If you are like most people in today's world, part of what you observe is that science is a thing.  There are people called "scientists".  There is a government agency called NASA and another one called the National Science Foundation.  There are science classes taught in high schools and universities.  There are science journals and books and magazines.

The best explanation for all this is the obvious one: there really are scientists and they really are doing experiments and collecting data and trying to come up with good explanations for that data.  This is not to say that scientists always get it right; obviously scientists are fallible humans who sometimes make mistakes.  But the whole point of science is to find those mistakes and correct them so that over time our best explanations keep getting better and better and explain more and more observations and make better and better predictions.  To see that this works you need look no further (if you are reading this before the apocalypse) than all the technology that surrounds you.  You are probably reading this on some kind of computer.  How did that get made?  You probably have a cell phone with a GPS.  How does that work?

It's not hard to find answers to questions like "how does a computer work" and "how does GPS work" and even "how does a search engine work."  Like everything else, these explanations are themselves data that require an explanation, and the best explanation is again the obvious one: that these explanations are actually the result of a lot of people putting in a lot of effort and collecting a lot of data and reporting the results in good faith.  This is not to say that there aren't exceptions.  Mistakes happen.  Deliberate scientific misconduct happens.  A conspiracy is always a possibility.  But if scientific misconduct were widespread, if falsified data were rampant, why does your GPS work?  If there is a conspiracy, why has no one come forward to blow the whistle?

This is the Big News Principle: if any explanation other than the obvious one were true, then sooner or later someone would present some evidence for this and it would be Big News.  Everyone would know.  The absence of Big News is therefore evidence that no one has found any credible evidence against the obvious explanation, i.e. that there are in fact no major Problems with the current best theories.

The name "Big News Principle" is my invention (as far as I know) but the idea is not new.  The usual way of expressing it is with the slogan "extraordinary claims require extraordinary evidence."  I think this slogan is misleading because it gets the causality backwards.  It is not so much that extraordinary claims require extraordinary evidence; it's that if an extraordinary claim were true, that would necessarily produce extraordinary evidence.  So the absence of extraordinary evidence, the absence of Big News, is evidence that the extraordinary claim, i.e. the claim that goes against current best scientific theories, is false.

The other important thing to know is that not all scientific theories are the same with respect to producing Big News if those theories turn out to be wrong.  Some theories are very tentative, and evidence that they are wrong barely makes the news at all.  Other theories are so well established -- they have been tested so much and have so much supporting evidence behind them -- that showing that they are wrong would be some of the Biggest News that the world has ever seen.  The canonical examples of such theories are the first and second laws of thermodynamics, which basically say that it's impossible to build a perpetual motion machine.  This is so well established that, within the scientific community, anyone who professes to give serious consideration to the possibility that it might be wrong will be immediately dismissed as a crackpot.  And yet, all anyone would have to do to prove the naysayers wrong is exhibit a working perpetual motion machine, which would, of course, be Big News.  It's not impossible, but to say that the odds are against you would be quite the understatement.  By way of very stark contrast, our understanding of human psychology and sociology is still very tentative and incomplete.  Finding false predictions made by some of those theories at the present time would not be surprising at all.

So our current scientific theories range from extremely well-established ones for which finding contrary evidence would be Big News, to more tentative ones for which contrary evidence would barely merit notice.  But there is more to it than just that.  The space of current theories has some extra and very important structure to it.  The less-well-established theories all deal with very complex systems, mainly living things, and particularly human brains, which are the most complicated thing in the universe (as far as we know).  The more well-established theories all deal with simpler things, mainly non-living systems like planets and stars and computers and internal combustion engines.

This structure is itself an observation that requires explanation.  There are at least two possibilities:

1.  The limits on our ability to make accurate predictions for complex phenomena are simply a reflection of the fact that those phenomena are complex.  If we had unlimited resources -- arbitrarily powerful computers, arbitrarily accurate sensors -- we could, based on current knowledge, make arbitrarily accurate predictions for arbitrarily complicated systems.  The limits are purely limits on our ability to apply our current theories, not limits of the theories themselves.

2.  The limits on our ability to make accurate predictions for complex phenomena exist because complex phenomena are fundamentally different from simple phenomena.  There is something fundamentally different about living systems that allows them to somehow transcend the laws that govern non-living ones.  There is something fundamentally different about human minds and consciousness that allows them to transcend the laws that govern other entities.

Which of these is more likely to be correct?  We don't know for sure, and we will not know for sure until we have a complete theory of the brain and consciousness, which we currently don't.  But there are some clues nonetheless.

To wit: there are complex non-living systems for which we cannot make very good predictions.  The canonical example of this is weather.  We can predict the movements of planets with exquisite accuracy many, many years in advance.  We can't predict the weather very accurately beyond a few days, and sometimes not even that.

It was once believed that the weather was capricious for the same reason that people can be: because the weather was controlled by the gods, who were very much like people but with super-powers.  Nowadays we know this isn't true.  The reason the weather is unpredictable is not because it is controlled by the gods, but because of a phenomenon called chaos, which is pretty well understood.  I'll have a lot more to say about chaos theory later in this series, but for now I'll just tell you that we know why we can't predict the weather.  It's not because there are gods operating behind the scenes; it's because certain kinds of systems are inherently impossible to make accurate predictions about, even with unlimited resources.  Nature itself places limits on our ability to predict things.  It is unfortunate, but that's just the Way It Is.
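
You don't need a weather model to see this for yourself.  Here is a tiny sketch (the logistic map, the textbook toy model of chaos, standing in for a real atmospheric model): two starting points that agree to six decimal places end up on trajectories that bear no resemblance to each other after a few dozen steps, even though the rule is simple and perfectly deterministic.

```python
def logistic_orbit(x0, r=4.0, steps=50):
    """Iterate the logistic map x -> r * x * (1 - x), a classic chaotic system
    at r = 4, and return the whole trajectory starting from x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_orbit(0.2)
b = logistic_orbit(0.2000001)   # differs from a's start in the 7th decimal place

# The tiny initial difference roughly doubles at every step, so after a few
# dozen steps the two trajectories have completely decorrelated.
divergence = max(abs(x - y) for x, y in zip(a, b))
```

No amount of extra computing power fixes this; the only cure is more accurate initial measurements, and the required accuracy grows exponentially with how far ahead you want to predict.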

So our inability to make accurate predictions about living systems and human consciousness is not necessarily an indication that these phenomena are somehow fundamentally different from non-living systems.  It might simply be due to their complexity.  We don't have proof of that, of course, but so far no one has found any evidence to the contrary: no one has found anything that happens in a living system or in a human brain that can't be explained by our current best theories of non-living systems.  How can I know that?  Because if anyone found any such evidence it would be Big News, and there hasn't been any such Big News, at least not that I've found, and I've looked pretty diligently.

Because, as far as we can tell, our current-best theories of simple non-living systems can, at least in principle, explain everything that happens in more complex systems, we can arrange our current-best theories in a sort of hierarchy, with theories of non-living systems at the bottom, and theories of living systems built on top of those.  It goes like this: at the bottom of the hierarchy are two theories of fundamental physics: general relativity (GR) and something called the Standard Model, which is built on top of something called Quantum Field Theory (QFT), which is a generalization of Quantum Mechanics (QM) which includes (parts of) relativity.  The details don't really matter.  What matters is that, as far as we can tell, the Standard Model accurately predicts the behavior of all matter, at least in our solar system.  (There is evidence of something called "dark matter" out there in the universe which we don't yet fully understand, but no evidence that it has any effect on any experiment we can conduct here on earth.)

The Standard Model describes, among other things, how atoms are formed.  Atoms, you may have learned in high school, are what all matter is made of, at least here on earth.  To quote Richard Feynman, atoms are "little particles that move around in perpetual motion, attracting each other when they are a little distance apart, but repelling upon being squeezed into one another."  Atoms come in exactly 92 varieties that occur in nature, and a handful of others that can be made in nuclear reactors.

(Exercise for the reader: how can it be that atoms "move around in perpetual motion" when I told you earlier that it is impossible to build a perpetual motion machine?)

The details of how atoms repel and attract each other are the subject of an entire field of study called chemistry.  Then there is a branch of chemistry called organic chemistry, and a sub-branch of organic chemistry called biochemistry which concerns itself exclusively with the chemical reactions that take place inside living systems.

Proceeding from there, biochemistry is a branch of biology, which is the study of living systems in general.  The foundation of biology is the observation that the defining characteristic of living systems is that they make copies of themselves, but that these copies are not always identical to the original.  Because of this variation, some copies will be better at making copies than others, and so you will end up with more of the former and fewer of the latter.  It turns out that there is no one best strategy for making copies.  Different strategies work better in different environments, and so you end up with a huge variety of different self-replicating systems, each specialized for a different environment.  This is Darwin's theory of evolution, and it is the foundation of modern biology.
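
The copy/vary/select loop is simple enough to simulate directly.  The sketch below is deliberately cartoonish (the population cap, mutation rate, and starting conditions are all arbitrary numbers chosen for illustration, not biology), but it shows the core mechanism: variants that happen to copy themselves faster come to dominate the population.

```python
import random

random.seed(0)   # make the run repeatable

def step(population):
    """One generation: each replicator makes `rate` copies, each copy
    occasionally mutating its rate; finite resources cap the population."""
    offspring = []
    for rate in population:
        for _ in range(rate):
            child = rate
            if random.random() < 0.01:                     # rare variation
                child = max(1, rate + random.choice([-1, 1]))
            offspring.append(child)
    random.shuffle(offspring)    # survival is random...
    return offspring[:1000]      # ...but limited resources cap the total

pop = [1] * 100                  # start with the slowest possible replicators
for _ in range(200):
    pop = step(pop)

# Faster-copying variants arise by chance and then outbreed the originals,
# so the average replication rate drifts upward over the generations.
mean_rate = sum(pop) / len(pop)
```

Nothing in the code "wants" anything; differential copying success alone is enough to produce the appearance of optimization.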

Here I need to point out one extant Problem in modern science, something that has not yet been adequately explained.  There is no doubt that once this process of replication and variation gets started, it is adequate to account for all life on earth.  But that leaves a very important unanswered question: how did this process start?  The honest answer at the moment is that we don't yet know.  It's possible that we will never know.  But people are working on it, and making (what seems to me like) pretty good progress towards an answer.  One thing is certain, though: if it turns out that the answer involves something other than chemistry, something beyond the ways in which atoms are already known to interact with each other, that will be Big News.

Beyond biology we have psychology and sociology, which are the study of the behavior of a particular biological system: human brains.  Studying them is very challenging for a whole host of reasons beyond the fact that they are the most complex things known to exist in our universe.  But even here progress is being made at a pretty significant pace.  Just over the last 100 years or so our understanding of how brains work has grown dramatically.  Again, there is no evidence that there is anything going on inside a human brain that cannot be accounted for by the known ways in which atoms interact with each other.

Note that when I say "the known ways in which atoms interact with each other" I am including the predictions of quantum field theory.  It is an open question whether quantum theory is needed to explain what brains do, or if they can be fully understood in purely classical terms.  Personally, I am on Team Classical, but Roger Penrose, who is no intellectual slouch, is the quarterback of Team Quantum and I would not bet my life savings against him.  I will say, however, that if Penrose turns out to be right, it will be (and you can probably anticipate this by now) Big News.  It is also important to note that no non-crackpot believes that there is any evidence of anything going on inside human brains that is contrary to the predictions of the Standard Model.

Speaking of the Standard Model, there is another branch of science called nuclear physics that concerns itself with what happens in atomic nuclei.  For our purposes here we can mostly ignore this, except to note that it's a thing.  There is one and only one fact about nuclear physics that will ever matter to you unless you make a career out of it: some atoms are radioactive.  Some are more radioactive than others.  If you have a collection of radioactive atoms then after a certain period of time the level of radioactivity will drop by half, and this time is determined entirely by the kind of atoms you are dealing with.  This time is called the "half-life" and there is no known way to change it.  In general, the shorter the half-life, the more radioactive that particular flavor of atom is.  Half-lives of different kinds of atoms range from tiny fractions of a second to billions of years.  The most common radioactive atom, uranium-238, has a half-life of just under four and a half billion years, which just happens by sheer coincidence to be almost exactly the same as the age of the earth.
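
That "drop by half" rule is just exponential decay, and it's simple enough to write down directly (the uranium-238 and age-of-earth figures below are standard textbook values):

```python
def remaining_fraction(t, half_life):
    """Fraction of a radioactive sample surviving after time t.
    Every half-life, whatever is left drops by half: (1/2) ** (t / half_life)."""
    return 0.5 ** (t / half_life)

# Uranium-238's half-life is about 4.47 billion years, and the earth is about
# 4.54 billion years old -- so roughly half of the earth's original U-238
# is still here, which is why there is still plenty of it to find.
U238_HALF_LIFE = 4.468e9   # years
AGE_OF_EARTH = 4.54e9      # years
fraction_left = remaining_fraction(AGE_OF_EARTH, U238_HALF_LIFE)
```

Note that the formula needs no information about how much material you started with or where it came from; the half-life alone determines the surviving fraction.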

There is another foundational theory that doesn't quite fit neatly into this hierarchy, and that is classical mechanics.  This is a broad term that covers all of the theories that were considered the current-best-explanations before about 1900.  It includes things like Newton's laws (sometimes referred to as Newtonian Mechanics), thermodynamics, and electromagnetism.

The reason classical mechanics doesn't fit neatly into the hierarchy is that it is known to be wrong: some of the predictions it makes are at odds with observation.  So why don't we just get rid of it?

Three reasons: first, classical mechanics makes correct predictions under a broad range of circumstances that commonly pertain here on earth.  Second, the math is a lot easier.  And third and most important, we know the exact circumstances under which classical mechanics works: it works when you have a large number of atoms, they are moving slowly (relative to the speed of light), and they are not too cold.  If things get too fast or too small or too cold, you start to see the effects of relativity and quantum mechanics.  But as long as you are dealing with most situations in everyday life you can safely ignore those and use the simpler approximations.

This, by the way, is the reason for including Step 2 in the Scientific Method.  As long as you are explicit about the simplifying assumptions you are making, and you are sure that those simplifying assumptions actually hold, then you can confidently use a simplified theory and still get accurate predictions out of it.  This happens all the time.  You will often hear people speak of "first order approximations" or "second-order approximations".  These are technical terms having to do with some mathematical details that I'm not going to get into here.  The point is: it is very common practice to produce predictions that are "good enough" for some purpose and call it a day.
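
To give a flavor of what "first-order" and "second-order" mean without the full math: they are successive truncations of a Taylor series, each keeping one more correction term than the last.  A quick check with cos(x):

```python
import math

# Successive Taylor approximations of cos(x) around x = 0:
#   zeroth order: 1
#   second order: 1 - x**2/2   (the first-order term happens to vanish)
x = 0.1   # a "small" input, about 5.7 degrees
exact = math.cos(x)

zeroth = 1.0
second = 1.0 - x**2 / 2

# Each extra term buys several more digits of accuracy at small x, which is
# why truncated approximations are so often "good enough" in practice.
err_zeroth = abs(exact - zeroth)
err_second = abs(exact - second)
```

The catch, as with any simplifying assumption, is knowing when it holds: push x toward larger values and the truncated series quietly stops being "good enough."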

Classical mechanics -- Newton's laws, electromagnetism, and thermodynamics -- turns out to be "good enough" for about 99% of practical purposes here on earth.  The remaining 1% includes things like explaining exactly how semiconductors and superconductors work, why GPS satellites need relativistic corrections to their clocks, and what goes on inside a nuclear reactor.  Unless you are planning to make a career out of these things, you can safely ignore quantum mechanics and relativity.
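
As a concrete check on that 99% claim, here is roughly how big the relativistic correction actually is at everyday speeds versus GPS orbital speed (the speeds below are ballpark figures; this shows only the velocity effect, and gravitational time dilation adds an even larger effect in the opposite direction, not modeled here):

```python
import math

C = 299_792_458.0   # speed of light in m/s (exact, by definition)

def gamma(v):
    """Relativistic time-dilation factor 1/sqrt(1 - v**2/c**2).
    Classical mechanics is the approximation that this is exactly 1."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# At highway speed (~30 m/s) the correction is parts in 10**15 -- utterly
# unmeasurable with any clock you own, so classical mechanics is effectively exact.
car_error = gamma(30.0) - 1.0

# At GPS orbital speed (~3.9 km/s) it's parts in 10**11, which accumulates
# to microseconds per day: enough to ruin meter-level position fixes
# if left uncorrected.
gps_error = gamma(3_874.0) - 1.0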

And here is more good news: classical mechanics is actually pretty easy to understand, at least conceptually.  It's the stuff that is commonly taught in high school science classes, except that there it is usually taught as a fait accompli, without any mention of the centuries of painstaking effort that went into figuring it all out, nor the ongoing work to fill in the remaining gaps in our knowledge.

The reason this matters is that it leaves people with the false impression that science is gospel handed down from on high.  You hear slogans like "trust the science."  You should not "trust the science."  You should apply the scientific method to everything, including the question of what (and who) is and is not trustworthy.  And the most important question you can ask of anyone making any claim is: is this consistent with what I already know about the world?  Or, if this were true, would it be Big News?  And if so, have you seen any other evidence for it elsewhere?

It is important to note that the converse is not true.  If someone makes a claim that would be Big News if it were true but it doesn't seem to have made a splash, the best explanation for that is usually that the claim is simply not true.  But just because a claim does end up being Big News doesn't necessarily mean that it's true!  Cold fusion was Big News when it was first announced, but it ended up being (almost certainly) false nonetheless.  Big News should not be interpreted as "true" but something more like "possibly worthy of further investigation."

Sunday, April 21, 2024

Three Myths About the Scientific Method

This is the third in a series on the scientific method.  This installment is a little bit of a tangent, but I wanted to publish it now because I've gotten tired of having to correct people about these things all the time.  I figured if I just wrote this out once and for all I could just point people here rather than having to repeat myself all the time.

There are a lot of myths and misconceptions about science out there in the world, but these three keep coming up again and again.  These myths are pernicious because they sound plausible.  Even some scientists believe them, or at least choose their words carelessly enough to reinforce them, which is just as bad.  Even I am guilty of this sometimes.  It is an easy trap to fall into, especially when talking about "scientific facts".  So here for the record are three myths about the scientific method, and the corresponding truth (!) about each of them.

Myth #1:  The scientific method relies on induction

Induction is a form of reasoning that assumes that phenomena follow a pattern.  The classic example is looking at a bunch of crows, observing that every one of them is black, and concluding that therefore all crows are black because you've never seen a non-black crow.

It is easy to see that induction doesn't work reliably: it is simply false that all crows are black.  Non-black crows are rare, but they do exist.  So do non-white swans.  Philosophers make a big deal about this, with a lot of ink being spilled discussing the "problem of induction".  It's all a waste of time because science doesn't rely on induction.  Any criticism that anyone levels at science that includes the word "induction" is a red herring.

It's easy to fall into this trap.  The claims that all crows are black or that all swans are white are wrong, but they're not that wrong.  The vast majority of crows are black, so "all crows are black" is a not-entirely-unreasonable approximation to the truth in this case, and so it's tempting to think that induction is the first step in a process that gets tweaked later to arrive at the truth.

The problem is that most inductive conclusions are catastrophically wrong.  Take for example the observation that, as I write this in April of 2024, Joe Biden is President of the United States.  He was also President yesterday, and the day before that, and the day before that, and so on for over 1000 days now.  The inductive conclusion is that Joe Biden will be President tomorrow, and the day after that, and the day after that... forever.  Which is obviously wrong, barring some radical breakthrough in human longevity and the repeal of the 22nd amendment to the U.S. Constitution.  Neither of these is very likely, so we can be very confident that Joe Biden will no longer be President on January 21, 2029, and possibly sooner than that depending on his health and the outcome of the 2024 election.

How do we know these things?  Because we have a theory of what causes someone to become and remain President which predicts that Presidential terms are finite, and that theory turns out to make reliable predictions.  Induction has absolutely nothing to do with it.

Induction has absolutely nothing to do with any scientific theory.  At best it might be a source of ideas for hypotheses to advance, but the actual test of a hypothesis is how well it explains the known data and how reliable its predictions turn out to be.  That's all.

Myth #2:  The scientific method assumes naturalism/materialism/atheism

This is a myth promulgated mainly by religious apologists who want to imply that the scientific bias against supernaturalism is some kind of prejudice, an unfair bias built into the scientific method by assumption, and that this can blind those who follow the scientific method to deeper truths.

This is false.  The scientific method contains no assumptions whatsoever.  The scientific method is simply that: a method.  It has no more prejudicial assumptions than a recipe for a soufflĂ©.

Even the gold-standard criterion for a scientific theory, namely, its ability to make reliable predictions, is not an assumption.  It is an observation, specifically, it is an observation about the scientific method: it just turns out that if you construct parsimonious explanations that account for all the observed data, those explanations turn out to have more predictive power than anything else humans have ever tried.  That is an observation that, it turns out (!) can also be explained, but that is a very long story, so it will have to wait.

The reason science is naturalistic and atheistic is not because these are prejudices built into the method by fiat, it is because it turns out that the best explanations -- the most parsimonious ones that account for all the known data and have the most predictive power -- are naturalistic.  The supernatural is simply not needed to explain any known phenomena.

Note that this is not at all obvious a priori.  There are a lot of phenomena -- notably the existence of life and human intellect and consciousness -- that don't seem like they would readily yield to naturalistic explanations when you first start to think about them.  But it turns out that they do.  Again, this is a long story whose details will have to wait.  For now I'll just point out that people used to believe that the weather was a phenomenon that could not possibly have a naturalistic explanation.

The reason science is naturalistic is not that it takes naturalism as an assumption, but rather that there is no evidence of anything beyond the natural.  All it would take for science to accept the existence of deities or demons or other supernatural entities is evidence -- some observable phenomenon that could not be parsimoniously explained without them.

Myth #3:  "Science can't prove X" or "scientists got X wrong" is an indication that science is deficient

I often see people say, "Science can't prove X" with the implication that this points out some deficiency in science that only some other thing (usually religion) can fill.  This is a myth for two reasons.  First, science never proves anything; instead it produces explanations of observations.  And second, this failure to prove things is not a bug, it's a feature, because it is not actually possible to prove anything about the real world.  The only things that can actually be proven are mathematical theorems.

Now, you will occasionally hear people speak of "scientific facts" or "the laws of nature" or even "scientific proof".  These people either don't understand how the scientific method actually works, or, more likely, they are just using these phrases as a kind of shorthand for something like "a theory which has been sufficiently well established that the odds of finding experimental evidence to the contrary (within the domain in which the theory is applicable) are practically indistinguishable from zero."  As you can see, being precise about this gets a little wordy.

The scientific method gives us no guidance on how to find good theories, only on how to recognize bad ones: reject any theory that is at odds with observation.  This method has limits.  We are finite beings with finite life spans and so we can only ever gather a finite amount of data.  For any finite amount of data there are an infinite number of theories all consistent with that data, and so we can't reject any of them on the grounds of being inconsistent with observation.  To whittle things down from there we have to rely on heuristics to select the "best explanation" from among the infinite number of possibilities that are consistent with the data.
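
That underdetermination is easy to demonstrate concretely.  In the toy example below (the data and formulas are invented purely for illustration), three observations that fit a straight line also fit infinitely many cubic "theories" exactly, because the extra term vanishes at every observed point; the data alone cannot choose among them, only a heuristic like parsimony can.

```python
# Three observations, all lying on the line y = 2x.
data = [(0, 0), (1, 2), (2, 4)]

def straight_line(x):
    """The parsimonious theory."""
    return 2 * x

def cubic_theory(x, a):
    """A family of rival theories: the extra term a*x*(x-1)*(x-2) is zero
    at every observed x, so every value of a fits the data perfectly."""
    return 2 * x + a * x * (x - 1) * (x - 2)

# Every rival theory agrees with the line on all observed data...
all_agree = all(straight_line(x) == cubic_theory(x, a)
                for x, _ in data for a in (1, 5, -3))

# ...but they make wildly different predictions at the next data point, x = 3.
predictions = [cubic_theory(3, a) for a in (0, 1, 5)]
```

The tie-breaker is not more of the same data but the heuristic of choosing the simplest theory, and then testing its prediction at a new point.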

Again, it just turns out that when we do this, the result of the process generally has a lot of predictive power.  Some of our theories are so good that they have never made a false prediction.  Others do make false predictions, but to find observations that don't fit their predictions you have to go outside of our solar system.  For theories like that we will sometimes say that those theories are "true" or "established scientific facts" or something like that.  But that's just shorthand for, "The best explanation we currently have, one which makes very reliable predictions."  It is always possible that some observation will be made that will falsify a theory no matter how well established it is.

Finding observations that falsify well-established theories does happen on occasion, but it is very, very rare.  The better established a theory is, the rarer it is to find observations that contradict it.  For less-well-established theories, finding contradictory data happens regularly.  This is also often cited, especially by religious apologists, as a deficiency but it's not.  It's how science makes progress.  In fact, the best-established theory in the history of science is the Standard Model of particle physics, which has never (as of this writing) made a false prediction since it was finalized in the 1970s.  And yet we know the Standard Model is deficient, not because it makes false predictions (it doesn't, or at least hasn't yet) but because it doesn't include gravity.  We know gravity is a thing, but no one has been able to figure out how to work it into the Standard Model.  And one of the reasons we haven't been able to do it is that we have no experimental data to give us any hints as to where the Standard Model might be wrong.  This is actually considered a major problem in physics.

That's it, my top three myths about science debunked.  Henceforth anyone who raises any of these in my presence gets a dope slap (or at least a reference to this blog post).

Monday, April 01, 2024

Feynman, bullies, and invisible pink unicorns

This is the second installment in what I hope will turn out to be a long series about the scientific method.  In this segment I want to give three examples of how the scientific method, which I described in the first installment, can be applied to situations that are not usually considered "science-y".  By doing this I hope to show you how the scientific method can be used, without any special training and without any math, to solve real problems.

Example 1

In my inaugural blog post twenty years ago I wrote:

The central tenet of science in which I choose to place my faith is that experiment is the ultimate arbiter of truth. Any idea that is not consistent with experimental evidence must be wrong.

This was an adaptation of Richard Feynman's definition of science, given in the opening paragraphs of the first chapter of his Lectures on Physics.  Note that Feynman did not write the Lectures.  The Feynman Lectures were not written as a book; they are transcripts of lectures that Feynman gave while teaching an introductory physics course at Caltech in the early 1960s.  These lectures were recorded, and it is worth listening to a few of them to get a feel for what the original source material sounds like.

It is worth reading (or listening to) Feynman's introduction in its entirety.  It is only nine paragraphs, or nine minutes.

If you read the transcript you will see this:

The principle of science, the definition, almost, is the following: The test of all knowledge is experiment. Experiment is the sole judge of scientific “truth.”

Note that the word "truth" is in quotes.  Why?  One possibility is that these are "scare quotes", an indication that the word "truth" is being used "in an ironic, referential, or otherwise non-standard sense."  This matters because it materially changes the meaning of what Feynman is saying here.  Without the scare quotes, the passage implies that there exists a transcendent metaphysical Truth with a capital T and that science uncovers this Truth.  If that is what Feynman intended, then this would contradict what I said in the first installment: that science converges towards *something*, but that something may or may not be metaphysical Truth.

You might be tempted to argue that there is no way that I -- or anyone else for that matter -- could possibly know what Feynman actually meant, but that is not true.  We can.  How?  By going back to the original source material: there is a recording of Feynman actually speaking those words.  If you listen to it, you will find that the transcript is actually not a word-for-word transcription of what Feynman said.  Here is what he actually said, word-for-word:

Experiment is the sole judge of truth, with quotation marks...

and he goes on from there to say some other things that are not included in the transcript.  I'm not going to attempt to transcribe them because there are a lot of clues regarding his intent in his cadence and tone of voice which I cannot render as text.  But one thing should be clear: the use of scare quotes in the transcript is justified because Feynman specifically said so.

Does this prove that this is what Feynman meant?  No.  Nothing in science is ever proven.  It's possible that Feynman, because he was speaking off-the-cuff, said something he didn't intend.  It's possible that he was under the influence of alien mind-control technology.  It's possible that Richard Feynman never actually existed, that he was a myth, and all of the evidence of his existence is actually the product of a vast conspiracy.  But if you think that any of these possibilities are likely enough to pursue, well, good luck to you because I predict you're going to be wasting a lot of time.

Discussion

I'm going to break down the previous example in some painstaking detail to show how it is an instance of the process I described before.

1.  Identify a Problem.  Recall that a Problem is a discrepancy between your background knowledge and something you observe.  In this case, the discrepancy was the use of scare quotes in the printed version of the Feynman lectures, and the background knowledge that this is a transcript of something Feynman said rather than something that he wrote.

2.  Make a list of simplifying assumptions.  In this case there weren't any worth mentioning.

3.  Try to come up with a plausible hypothesis.  In this case there were two: one was that this was somehow a faithful rendering of what Feynman intended, and the other was that this was an editorial embellishment inserted by whoever produced the transcript.

4.  Subject your hypotheses to criticism.  I skipped that step because this is just a trivial example and not worth asking other people to spend any time on.

5.  Adjust your hypotheses according to the results of step 4.  Not applicable here.

6.  Do an experiment to try to falsify one or more of your hypotheses.  In this case, we had the original audio recording, and so we could go back to the source to hear what Feynman actually said.  And it turned out in this case that this new data actually falsified *both* of our initial hypotheses.  The transcript is *neither* a verbatim rendering of what Feynman said, *nor* is it an editorial embellishment by the transcriber.  Instead, it is a faithful rendering of Feynman's stated intentions, indeed arguably more faithful than a verbatim transcript would have been because (and note that here I am once again engaging in a tiny little example of applying the scientific method in a very abbreviated way) he had to work around a limitation of the medium he was using, namely, speech, which has no way of explicitly rendering punctuation.

7.  Use your theory to make more predictions.  I skipped that step here too.

Example 2

The second example comes from a real incident from when I was in elementary school.  My family emigrated from Germany to Lexington, Kentucky, in the late 60s.  My parents were secular Jews.  I spoke virtually no English.  As you might imagine in a situation like that, I was not exactly the most popular kid in school.  I got bullied.  A lot.  It went on for five years until we moved to Oak Ridge, Tennessee, at which point I was looking forward to making a fresh start.  I was no longer obviously a foreigner.  I spoke fluent English.  I was familiar with the culture (or so I thought).  I would not have my reputation as a punching bag following me around.  So I was rather dismayed when, within a few months in my new home, I was once again being bullied.

Here was a Problem.  I had a theory: I was being bullied in Lexington because I was a foreigner, and the culture wasn't welcoming to foreigners, especially not German Jews, who were just half a notch above blacks in the social pecking order.  But in Oak Ridge it was not obvious I was a foreigner.  I spoke unaccented English, I was white, I never went to synagogue or did anything else to identify myself as a Jew.  So why was I still being picked on?

To make a very, very long story short, I began to consider the possibility that my original hypothesis was fundamentally wrong, and that the reason I was being picked on had nothing to do with what I was but rather with something I was doing, and that I was engaging in the same provocative behavior (whatever that might be) in Oak Ridge as I had in Lexington.  In retrospect this was, of course, the right answer, but it took me a very long time to figure it out.  It's hard enough to think straight when you are being bullied all the time, and it's even harder when you are in the emotional throes of adolescence and puberty.  But I eventually did manage to figure out that the reason I was being bullied was quite simply that I was behaving like a jerk.  When I stopped acting like a jerk, the bullying stopped.  Not right away, of course.  Like I said, it took a very, very long time, and I'm leaving out a lot of painful details.  But I eventually did manage to figure it out and become one of the cool kids (or at least one of the cool nerds).

The point of this story is that I solved a real-world social problem using the scientific method without even realizing that I was doing it.  This happened in junior high school.  I didn't have the foggiest clue about the scientific method, hadn't even encountered it in science classes, and even if I had, the idea that it would be applicable to something besides chemistry experiments would have been laughable.  It is only in retrospect that I realized that this is what I had done.  And by coming to that realization, I have since been able to do the same thing deliberately in my day-to-day life to great effect.  I think anyone can do this, especially with a little coaching, which is one of my motivations for putting the effort into writing all this stuff.

Example 3

My third example comes from philosophy, and I'm putting it in here because it's kind of fun, but also because it actually turns out to be a generally useful guide for spotting certain kinds of invalid arguments.  The Problem we are going to address is: how did the universe come into existence?  (This qualifies as a Problem because the universe obviously does exist, and so it must have somehow come into existence, but we don't know how.)

The standard scientific answer is that we don't know.  Something happened about 13 billion years ago that caused the Big Bang (which is more appropriately called the Everywhere Stretch, but that's another story for another time) but we have no idea what that something is.  Religious apologists are quick to seize on this gap in scientific knowledge as an argument for God, but that is not what I want to talk about here.  (I promise I'll come back to it in a future installment.)  Instead, I want to explore a different hypothesis, one which is obviously ridiculous, and talk about how we can reject this argument in a more principled way than to point to its obvious ridiculousness.

The hypothesis goes by the name of Last Thursday-ism.  The hypothesis states that the universe was created last Thursday in the exact state it was then in.  Before that, nothing existed.  The reason you might think otherwise is that you were created with all your memories intact to give you the illusion that something existed before last Thursday when in fact it did not.

Like I said, obviously -- indeed, intentionally -- ridiculous.  But just because something is obviously ridiculous doesn't necessarily mean it's wrong.  Quantum mechanics seems obviously ridiculous too when you first encounter it, and it actually turns out to be right.  So being obviously ridiculous is not a sound reason for rejecting a hypothesis.

Can you think of a more principled argument for rejecting last-Thursday-ism?  Seriously, stop and try before you read on.  Remember that last-Thursday-ism is, by design, consistent with all currently observed data.

You might be tempted to say that last-Thursday-ism can be rejected on the grounds that it is unfalsifiable, but all it takes to fix that is a minor tweak: last-Thursday-ism predicts that if you build just the right kind of apparatus it will produce as output the date of the creation of the universe, and so the output of this apparatus will, of course, be last Thursday (assuming you get it built before next Thursday).  The cost of this apparatus is $100M (which is a bargain if you compare it to what the particle physicists are asking for nowadays).

Here's a hint: consider an alternative hypothesis which I will call the last-Tuesday hypothesis.  The last-Tuesday hypothesis states (as you might guess) that the universe was created last Tuesday.  Before that, nothing existed.  The reason you think it did is that you were created with all your memories intact to give you the illusion that something existed before last Tuesday when in fact it did not.

You could, of course, substitute any date.  Last Monday.  November 11, 1955.  Whatever.  Last-Thursday-ism is not one hypothesis, it is one of a vast family of hypotheses, one for each instance in time in the past.  And at most one of that vast family can possibly be right.  All the others must be wrong.  So unless there is some way to tell a priori which one is right, the odds of any particular one of them, including last-Thursday, being the right one are vanishingly small.  And that is why we are justified in rejecting the last-X hypothesis for any particular value of X.

Note that this is true even if the prediction made by the tweaked version of last-Thursday-ism turns out to be true!  It might very well be that if we build the apparatus described above, it will indeed output "last Thursday".  But that will almost certainly not be because last-Thursday-ism is true (it almost certainly isn't), but for some other reason -- for example, because the apparatus just happens to be a design for a printer that prints out "last Thursday", which has absolutely nothing to do with when the universe was created.
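The counting argument behind this rejection can be sketched in a few lines of Python.  This is just an illustration of mine, not part of the argument above: it assumes, for the sake of the sketch, a uniform prior over a family of N mutually exclusive creation-date hypotheses, in which case the probability that any particular member is the right one is 1/N, which shrinks toward zero as the family grows.

```python
# A toy illustration of the "vast family of hypotheses" argument.
# Assumption (mine, for the sketch): with no evidence favoring any
# particular creation date, all N candidate dates get equal weight.

def prior_for_one_hypothesis(n_candidates: int) -> float:
    """Probability that one specific member of a family of
    n_candidates mutually exclusive, equally likely hypotheses
    is the correct one."""
    return 1.0 / n_candidates

# The larger the family, the smaller the odds of any given member:
for n in (7, 365, 10**6, 10**12):
    print(f"{n:>13} candidate dates -> prior {prior_for_one_hypothesis(n):.2e}")
```

With one candidate creation date per second of a 13-billion-year past, N is on the order of 10^17, and the prior for "last Thursday" specifically is indistinguishable from zero, which is the point.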

Invisible Pink Unicorns

That last example may have seemed like a silly detour, but you will be amazed at how often hypotheses that are essentially equivalent to last-Thursday-ism get advanced.  I call these "invisible pink unicorn" hypotheses, or IPUs, because the canonical example is that there is an invisible pink unicorn in the room with you right now.  The only reason you can't see it is that -- duh! -- it's invisible.  This hypothesis can be rejected on the same grounds as last-Thursday-ism. Why pink?  Why not green?  Or brown?  Or mauve?  Why a unicorn?  Why not an elephant?  Or a gryphon?  Or a centaur?  Unless you have some evidence to make one of these variations more likely than the others, they can all be rejected on the grounds that even if one of them were correct, the odds that we will choose it from among all the alternatives are indistinguishable from zero.

IPUs are everywhere, especially among religious apologists.  The cosmological argument, the fine-tuning argument, the ontological argument, etc. etc. etc. -- pretty much any argument of the form, "We cannot imagine how our present state of existence could possibly have arisen by natural processes (that is the Problem), therefore God must exist."  But "the universe was created by God" is just one of a vast family of indistinguishable hypotheses: we cannot imagine how our present state of existence could possibly have arisen by natural processes, therefore Brahma must exist.  We cannot imagine how our present state of existence could possibly have arisen by natural processes, therefore Mkuru must exist.  And, as long as I'm at it: we cannot imagine how our present state of existence could possibly have arisen by natural processes, therefore an invisible pink unicorn with magical powers to create universes must exist.

Note that this in no way proves that God -- or Brahma or Mkuru or the Invisible Pink Unicorn -- does not exist.  It is only meant to show why certain kinds of arguments that are often invoked in favor of their existence are not valid, at least not from a scientific point of view.

This sin is by no means unique to religious apologists.  Even professional scientists will advance IPU hypotheses.  This happens more often than one would like.  String theory is the most notable example.  It is an almost textbook example of an IPU.  String theory is not a single theory; it is literally a whole family of theories, all of which are indistinguishable based on currently available data.  Some string theorists will argue (indeed have argued) that string theory can be tested by building yet another particle accelerator for the low, low price of a few billion dollars, and maybe it can.  I don't pretend to understand string theory.  But the overt similarity with last-Thursday-ism should make anyone cast a very jaundiced eye on the claims being made, despite the fact that the people making them aren't crackpots.  Having scientific credentials doesn't necessarily mean that you actually understand or practice the scientific method.

Saturday, March 16, 2024

A Clean-Sheet Introduction to the Scientific Method

 About twenty years ago I inaugurated this blog by writing the following:

"I guess I'll start with the basics: I am a scientist. That is intended to be not so much a description of my profession (though it is that too) as it is a statement about my religious beliefs."

I want to re-visit that inaugural statement in light of what I've learned in the twenty years since I first wrote it.  In particular, I want to clarify what I mean when I say that being a scientist is a statement about my religious beliefs.  I thought that there would be enough consensus about the meaning of "science" and "religious belief" that this would not be necessary, but that turns out to be one of the many, many things I was wrong about back then.  In this post I'm going to try to fix that, or at least start to.

Let me start with the easy part: By "religious beliefs" I do not mean to imply that science is a religion in the usual sense.  It isn't.  Religions generally involve things like the worship of deities, respect for the authority of revealed wisdom, and the carrying out of prayer and rituals.  Science has none of that, not because science rejects these things *a priori*, but because when you pursue science you are invariably (but not inevitably!) led to the conclusion that there are no deities active in our universe, and therefore no good reason to accept the authority of revealed wisdom, and hence not much point spending valuable time on prayer and ritual (except insofar as one might find satisfaction in pursuing prayer and ritual for their own sake).

What I *do* mean by "religious beliefs" is that being a scientist -- pursuing science, engaging in the scientific method -- need not be a profession.  It can also be a *way of life*.  I believe that science provides a *complete worldview* applicable to all aspects of life, not just ones that are commonly regarded as "science-y".  Furthermore, I believe that this worldview can be practiced by anyone, not just professional scientists.  You don't even have to be good at math (though it doesn't hurt).  And I also think that if more people did this, the world would be a better place.

In particular, I believe that science can be applied to answer questions about *morality*, and I claim that if you do this properly the results are *better* than those produced by traditional religions.  I also believe that science can provide satisfactory answers to deep existential questions, like what is the meaning of life.  But that will be a very long row to hoe.  For now I want to start simply by describing what science actually *is* because it turns out that there are a lot of misconceptions about that, particularly among the religious.

But let me start at the beginning.

What is science?


Science is a process, a method, for solving a particular kind of problem.  The most succinct description I have found of the scientific method is:

Find the best explanation that accounts for all the observed data, and act as if that explanation is correct until you encounter contradictory data or a better explanation.

That is obviously an extreme oversimplification.  It is roughly akin to explaining how to play golf by saying, "Swing the club in such a way that it makes the ball go into the hole."  That's not wrong, but by itself it's not very useful either.
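That one-sentence description can be rendered as a toy loop in code.  This is a sketch of my own, not anything canonical; the names `fits` and `simplicity` are stand-ins I made up for whatever "accounts for the data" and "better" turn out to mean in a given domain.

```python
# A toy sketch of "find the best explanation that accounts for all
# the observed data".  The criteria for "better" (simplicity,
# predictive power, ...) are far harder to pin down in real science.

def best_explanation(explanations, data, score):
    """Keep only explanations consistent with every observation,
    then pick the one that scores highest (or None if none fit)."""
    consistent = [e for e in explanations if all(e["fits"](d) for d in data)]
    return max(consistent, key=score, default=None)

observations = [2, 4, 6, 8]
explanations = [
    {"name": "all observations are even",  "fits": lambda d: d % 2 == 0, "simplicity": 2},
    {"name": "all observations are small", "fits": lambda d: d < 5,      "simplicity": 3},
]

# "all small" is falsified by the observation 8, so "all even" wins:
best = best_explanation(explanations, observations, score=lambda e: e["simplicity"])
print(best["name"])
```

The second half of the method -- act as if that explanation is correct until you encounter contradictory data -- would correspond to re-running the selection whenever a new observation arrives; a new observation like 9 would falsify both candidates and send you looking for a new explanation.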

Golf actually turns out to be a pretty good analogy.  The scientific method is a skill, just like golf, and like golf, it is something anyone can do at a beginner level, but achieving mastery takes time and effort.  Unlike golf, the scientific method is good for a lot more than just getting balls into holes.  Golf is a uni-tasker.  Science is the ultimate multi-tasker.  You can even use it to improve your golf game!

Also like golf, you can do it both for fun and for profit.  You don't have to be a professional golfer to enjoy golf and to get something out of it.  Doing science can be rewarding for its own sake, but there is also a significant side-benefit because, as I said, science is a method for solving problems.  So not only can it be fun and challenging and engaging, it can also give you solutions to problems.  And the kinds of problems that the scientific method can be applied to are much broader than most people realize, and using the scientific method is easier than a lot of people realize.  In fact, you are almost certainly already doing it, possibly without even realizing it.  Let me show you.

An example

Look around you (or, if you're blind, feel around you).  You will see (or at least think you see) things -- people, tables, cars, buildings, trees.  These things (seem to) exist in three-dimensional space, and occupy specific parts of that space, that is, the world is such that it makes sense to say things like "this tree is over here" and "that car is over there (and moving in that direction)".

Moreover, you can interact with some of the things around you in very complex and interesting ways.  There are things called "humans" that you can talk to and they will talk back and the things they say to you and that you say to them seem to convey some kind of meaning that corresponds to the properties of other things.  You can say, for example, "Do you see that tree over there?" and a human might respond, "Yes.  I think it's a maple."  And this will resonate with you in a way that saying the same thing to a dog and hearing it bark will not.

How can you account for all this?  How do you explain it?  Well, the obvious way to explain it is that the things you see are real, that is, that there really are trees and cars and other humans "out there" in point of actual physical (and maybe even metaphysical) fact.  This explanation is so obvious that it is hard to even conceive of an alternative.  Some of you might even be thinking to yourselves, "Well, duh, of course trees are real.  This guy must be some kind of moron if he thinks that is a profound observation."

The explanation that the things you perceive are real is obvious and compelling, but it is not the only possible explanation.  Another possible explanation is that you are living in the Matrix, a very high quality simulation created by some advanced alien race with technology vastly superior to our own.  That might seem unlikely, but it's possible, and it's not immediately obvious how you could definitively rule it out (or even that it is actually false!)

It turns out that neither one of these explanations is correct.  Both of them can actually be ruled out by experiment.  But for the most part this doesn't matter.  Remember, the scientific method is not "find the correct explanation", it is "find the best explanation that accounts for all the data", and "objects appear to be real because they actually are real" is a very good explanation that is consistent with, if not all of the data, at least the data that most people have access to.

Notice also that the second part of the scientific method is not, "accept that this explanation is correct", it is, "act as if this explanation is correct", and then there is the final caveat, "until you encounter contradictory data or a better explanation".

So the scientific method leads you naturally, without even being aware of it, to act as if the things you perceive really are real, despite the fact (and here I have to ask you to temporarily suspend your disbelief and just take my word for it) that this isn't actually true.  However, despite the fact that it isn't actually true, acting as if objects are real will not steer you far wrong in day-to-day life.

---

Here is another example.  This one is taken from history.  Imagine that you are living some time before the invention of the telescope.  You look up in the night sky and you can see the sun, moon, and stars.  Most of the stars stay in the same location (relative to each other) except that they all rotate around one star -- if you happen to be living in the northern hemisphere, otherwise they will appear to turn around an imaginary point that lies below the horizon.  (Explain that, flat-earthers!)

All of this is already strange enough, but to compound the mystery there are five -- and only five -- things that look like stars but don't move in the same way as all the others.  These are called "wanderers" or "planetae" in ancient Greek.

Two of these planetae, Venus and Mercury, are only ever seen near the horizon around sunset and sunrise.  The other three -- Mars, Jupiter and Saturn -- can be seen throughout the night.  These three all move generally in the same direction (relative to the other stars), but one of them, Mars, occasionally stops and moves backwards.

How do you account for all this?

That was a question that occupied the finest human minds for thousands of years and they grappled with it to varying degrees of success.  The explanation that ultimately prevailed for well over 1000 years was produced by Claudius Ptolemy, a Greek astronomer living in Alexandria in the second century CE.  The details of Ptolemy's explanation don't matter much.  The thing that matters here is that it was based on the "fact" that what goes on in the heavens is radically different from what happens here on earth.  I put "fact" in scare quotes here because with the benefit of modern knowledge we know that this is not in fact a fact.  But from the perspective of someone living before telescopes, not only is it a fact, it is an obvious one.  The earth is dirty, the heavens are clean.  Any source of light on earth eventually extinguishes itself, but the lights in the heavens burn forever.  Anything moving on earth eventually stops, but the heavenly bodies move forever without ever coming to a halt.  And finally and most obviously, the behavior of things on earth is governed by the law that "what goes up must come down."  Some things like birds and cannonballs can rise above the surface of the earth, but they can only go so far, and they can only stay aloft temporarily.  Eventually the cannonball will fall and the bird will roost (or die).  But the objects in the heavens stay there forever.  With one exception.  See if you can figure out what it is before I tell you.

[Spoiler alert]

Meteorites.  Every now and then a stone would fall from the heavens.  Where they came from was a deep mystery because on the one hand they looked like ordinary rocks, but on the other hand they came from the heavens which, as everybody knew because it was just obvious, were made of very different stuff governed by very different laws than those which pertained here on earth.

It was not until Isaac Newton in 1687 that this mystery was solved.  It turned out that the "obvious fact" that what happens in the heavens is radically different from what happens on earth was actually wrong.  The heavenly bodies are in fact made of the same stuff that things on earth are made of, and are governed by the same laws.  Today we take this for granted.  In 1687 it was a radical breakthrough, the dawn of modern science.  And one of the reasons it was accepted is that it explained the previously-mysterious observation of rocks falling from the sky.

---

At this point I want to go on a small tangent to put this event in its proper historical perspective.  As I write this, in March of 2024, it has been 336 years since Newton's Principia was published.  That might sound like a long time, but it's actually not.  I am almost 60 years old, so I have been alive for almost 20% of the total history of modern science.  Some of the most fundamental scientific theories are surprisingly recent.  The existence of atoms, for example, was controversial as recently as the early 20th century.  Albert Einstein died in 1955, slightly before I was born, but well within current living memory.  Many of the pioneers of quantum mechanics were alive when I was born.  I have personally met and spoken with Freeman Dyson, who died a mere four years ago.  Many of the experiments that provided the foundation for quantum computation were done while I was in high school.  The frontiers of quantum computation and artificial intelligence are being explored even as I write this.  We are very much still in the midst of the scientific revolution.  Quantum computing and AI are today what digital computers were in 1955, what relativity was in 1905, and what thermodynamics and steam power were in 1855.

One of the things that has happened in the 336 years since Principia was first published is that science has become an industry (much like golf has).  Isaac Newton was the first modern scientist, but he was not a professional scientist.  There was no such thing back then.  What we call "science" today was called "natural philosophy" then, and it included all kinds of things that would not be considered science today, like alchemy and astrology.  If you had asked Newton to describe the "scientific method" he would have had no idea what you were talking about.

With that in mind, I invite you to consider the following question: why is science a thing?  Why are there arguments over the definition of "science" but not "astrology" or "alchemy"?  Why is there so much more prestige (and money!) surrounding science than alchemy or astrology?  Sure, there are a few people making money as astrologers, but try getting an NSF grant to find a better way of casting horoscopes and you will get laughed out of the room.

The answer is: science is more effective at producing useful results than alchemy or astrology.  If you are reading this before the coming climate apocalypse, then you are steeped in technology.  (And if you are reading it after, take this as testimony that there was a time before the climate apocalypse when technology was ubiquitous.)  Computers, internal combustion engines, air conditioning, the Internet -- all of these things grew out of science and not alchemy or astrology.  Science is a thing because it works.

Which raises the obvious question: why does it work?  Why is science so much more effective at producing useful results than alchemy or astrology, or, for that matter, any other form of human endeavor?

To answer that, I will need to describe the scientific method in a little more detail.  But before I do that I need to first explain why describing the scientific method is not as straightforward as it might seem.

Why describing the scientific method is hard

If you seek out descriptions of the scientific method on the web you will find that they do not all agree with each other.  For example, Wikipedia says:

The scientific method involves careful observation coupled with rigorous skepticism, because cognitive assumptions can distort the interpretation of the observation. Scientific inquiry includes creating a hypothesis through inductive reasoning, testing it through experiments and statistical analysis, and adjusting or discarding the hypothesis based on the results.

However, if you dig deeper, you will find that not everyone agrees with this definition.  For example, Karl Popper, a highly regarded philosopher of science, argues that induction is not part of the scientific method, that it is a myth.  As an even more extreme example, if you go to Answers in Genesis, a creationist web site, you will find a very different description:

Science means “knowledge” and refers to a process by which we learn about the natural world. There are two different kinds of science; observational and historical. Historical science deals with the past and is not directly testable or observable so it must be interpreted according to your worldview.

The Bible is the foundation for science. Non-Christians must borrow biblical ideas—such as an orderly universe that obeys laws—in order to do science. If naturalism were true—if nature is “all there is”—then why should the universe have such order? Without the supernatural, there is no basis for logical, orderly laws of nature.

How can you tell who to believe?  Specifically, why should you believe what I am about to tell you?

One possible answer is that I was once a professional scientist.  I was an AI researcher at JPL for 12 years, from 1988 to 2000.  I made my living publishing peer-reviewed papers.  I was fairly successful.  I was the most referenced CS researcher in all of NASA (according to citeseer), and I held that title for many years even after I left.  I advanced to the rank of Principal, which is "awarded to recognize sustained outstanding individual contributions in advancing scientific or technical knowledge", which came with the most coveted perk at JPL: on-lab parking.

But that is not a very good answer, for two reasons.  First, just because I was able to make my living as a scientist doesn't necessarily mean I understood how the scientific method works.  Being a successful professional scientist has as much (maybe even more) to do with politics as it does with science.  In fact, when my career advancement began to turn more on politics than science, that is what made me decide to quit.

Another possible answer is that what I am about to tell you aligns with things said by even more illustrious names like Karl Popper and Richard Feynman.  Their authority is much better than mine, but it's still an argument from authority, and the bedrock principle of science is that experiment and not authority should be the final arbiter of truth.  At least that's what Feynman said, so it must be true, right?

Ironically, the right answer can be found in the Bible, in the Gospel according to Matthew, chapter 7: by their fruits ye shall know them.  Remember, the reason we care about science at all is because it is effective at producing useful results.  The reason you should believe what I am about to tell you is that it will explain this effectiveness.  It will not be a complete explanation; a much more detailed one is possible, but that would take much longer than one blog post, so the one I am about to give you will be oversimplified.  But it will nonetheless explain the effectiveness of the scientific method, at least to some extent.  In other words, the scientific method can be applied to itself to explain its own effectiveness.  And that is the reason you should believe it.

By the way, an important thing to keep in mind as you read the next section: the scientific method is a *natural process*.  It is a discovery, not an invention.  It is something that happens, something that people (and even animals!) do, at least to some extent, without even being aware of it.  You can, in fact you almost certainly do, engage in the scientific method instinctively, just as you can probably hit a golf ball without any training.  But you'll be a lot better at science (and golf!) with training and practice.  So let's start.

The scientific method

The scientific method consists of seven steps.  It is important to follow these steps carefully and deliberately, otherwise you'll just end up with the scientific equivalent of a wild swing.

Step 1:  Identify a Problem.  I'm capitalizing Problem because it's a term of art which has a more specific meaning than it does in common usage.  A capital-P Problem is a discrepancy between your background knowledge, everything you believe to be true at the present moment, and something you observe.  Examples of currently open scientific Problems include things like, "Galaxies appear to rotate faster than they should based on the amount of observable matter they contain", and "There is life on earth, but we don't know how it started."  But Problems don't have to be Big Scientific Questions.  They can be as prosaic as, "I'm doing a good job at work but I'm not getting promoted" or "My wife seems to be mad at me even though she doesn't have any reason to be."

Note that the existence of Problems is not a shortcoming of the scientific method.  To the contrary, identifying a Problem is the crucial first step of the process.  I mention this because a common criticism of science among creationists is to point to Problems, things that science does not yet understand, and cite them as a reason for not trusting science at all.  This argument is not just wrong, it actually betrays a profound ignorance of how science actually works.  The only way to not have Problems is to already understand everything, to be omniscient.  The existence of Problems is a feature, not a bug.

(A creationist would no doubt respond: but we have an omniscient source of knowledge: God!  To which I respond: OK, but your access to this omniscient source of knowledge doesn't seem to give you much leverage towards producing useful results.  That is a Problem!)

Step 2:  Make a list of all simplifying assumptions that you are going to make.  For example, in the vast majority of situations here on earth it is safe to ignore relativity and quantum mechanics, but it's important to keep in the back of your mind that you are ignoring them.

Step 3:  Try to come up with a plausible *hypothesis*, a *guess* at an explanation that is consistent *both* with all the data that produced your background knowledge, *and* the discrepancy that constitutes the Problem you are addressing.  At the frontiers of science you will often get stuck at this point because coming up with *any* plausible hypothesis is considered a major achievement.  Sometimes this will happen when using the scientific method in day-to-day life.  That's OK.  Getting stuck temporarily is a normal part of the process.

Note that the term "background knowledge" is a little misleading, because very often a plausible hypothesis will be that some part of your background "knowledge" is wrong.  The use of the term "scientific knowledge" is fairly common, and it implies that this knowledge is immutable and not open to question, but that is not true.  All "knowledge" in science is tentative and subject to being overturned by new data or better hypotheses.  But this doesn't mean that we don't know anything.  Some scientific results are so well established, and backed up with so much evidence, that the odds of their being wrong, while not zero, are extremely low, and the evidence needed to show that they are wrong would have to be truly extraordinary.  We will sometimes abbreviate that by calling such a result "knowledge" or "established scientific fact" even though what we really mean is "current-best explanation, one which is so well established that the odds of overturning it, while not quite zero, are so close to zero as to make no practical difference."

Step 4:  Subject your hypotheses/guesses to criticism.  In other words, try as hard as you can to show why each of your hypotheses is *wrong*.  Anything is fair game here, including asking other people to poke holes in your ideas.  In fact, that is encouraged.  You can also participate in the scientific method by helping to poke holes in other people's ideas (this is called "peer review").

Note that -- and this is very important -- you are not trying to show that your hypothesis is valid or correct!  The object here is to do the exact opposite: to try to show that your hypothesis is wrong.  Of course, you are hoping that you will fail in this endeavor, but you must nonetheless try as hard as you can and in good faith to debunk yourself.

There are a few rules about scientific criticism:

Rule 1: you have to separate your criticism of the hypothesis from criticism of its presentation.  The former is vastly more valuable than the latter.  Presentation matters too, but much less, and criticizing it at the expense of the substance gets really annoying.

Rule 2: you have to criticize within the bounds of the assumptions laid out in step 2.  So in this case, if you want to criticize the hypothesis I am laying out here, it's out of bounds to say, "But you've ignored quantum mechanics."  Yes, I *know* I have ignored quantum mechanics.  I *said* I was going to ignore quantum mechanics.  Your pointing that out again is not helpful.

Rule 3: you can't change the problem statement.  So, for example, you can't criticize the hypothesis I am laying out here on the grounds that science has not (yet) produced answers to various political and social problems.  The problem I'm addressing here is: why does science appear to be so effective at producing *any useful results at all* (and in producing technology in particular)?  Any criticism not having to do with that is out of bounds.  (This is the reason it is important to have a clear, explicit, and unambiguous problem statement.)

Beyond that pretty much anything is fair game.  Here are three particularly valid forms of criticism:

Valid criticism 1: The hypothesis is inconsistent with observation.  It doesn't matter how plausible or mathematically elegant your hypothesis is, if it doesn't agree with experiment (subject to the assumptions laid out in step 2) it goes in the hopper.

Valid criticism 2: The hypothesis is unfalsifiable.  It must be possible, at least in principle, to do an experiment whose outcome would show that the hypothesis is *wrong*.  If there is no possible experiment that could be done whose outcome could be at odds with the hypothesis, then it is not a valid scientific hypothesis.  (I call this the "invisible pink unicorn" or IPU rule.)

Valid criticism 3: The hypothesis contains unnecessary detail.  You can always make any hypothesis consistent with all observations by adding additional details, but a high quality theory will be parsimonious: it will account for a lot of data with as few details as possible.

(In fact, since you live in the information age, you can actually think of the whole scientific method as a data compression process: it takes a vast amount of raw data and boils it down to the minimum amount of information needed to reproduce that data.  This turns out to be more than just a casual observation, but rather a Very Deep Insight that sheds light on why the scientific method works.  But before I can get into those details I will have to talk about the theory of computation and information, and we're nowhere near ready for that.)
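A quick way to build intuition for the compression framing, purely as an illustration of my own and not part of the original argument: a general-purpose compressor can shrink data that has a pattern (i.e., that admits a short description), but can do almost nothing with patternless noise.

```python
import random
import zlib

# Data with an obvious pattern admits a short description, so a
# general-purpose compressor shrinks it dramatically.  Patternless noise
# cannot be described more briefly than itself.
regular = bytes(range(256)) * 40  # 10,240 bytes with a simple repeating pattern

random.seed(0)
noise = bytes(random.randrange(256) for _ in range(10240))  # 10,240 patternless bytes

print(len(zlib.compress(regular)))  # a tiny fraction of the input size
print(len(zlib.compress(noise)))    # roughly the size of the input
```

A good theory stands in the same relation to raw observations as the compressed form does to the raw bytes: a short description from which the data can be reproduced.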

Step 5: Consider whether the criticism you have produced or received is valid.  If it is, go back to step 3 and try again.

Step 6:  Sometimes you will come up with more than one hypothesis that withstands all of the criticism that anyone can think to throw at it.  In that case, examine the predictions that these hypotheses make and find a prediction on which they differ.  Then do an experiment to see which hypothesis makes the correct prediction, and discard the others.  Note that it is entirely possible that the results will eliminate all of the surviving candidates, in which case you will need to go back to step 3.  But if this doesn't happen, if one hypothesis survives, then, congratulations, your one remaining hypothesis has now been promoted to the status of a Theory!  A Theory is a hypothesis that has withstood all attempts to invalidate it.  In science, "theory" is a synonym for "knowledge" or "fact", subject to the caveats described above.

Finally, the last step is:

Step 7:  Use your Theory to make more predictions and test those against experiment too.

It turns out that if you follow this process, by the time you get to step 7, it is extremely rare for the results of those subsequent experiments to contradict the predictions made by the theory.  And that is the magic.  That is the reason that science is effective at producing useful results.  It is because it produces theories with predictive power.  It literally gives you the gift of prophecy, and if you have that, you can choose your actions to more reliably produce results that you want.
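If it helps to see the shape of the process all at once, the hypothesize-criticize-test core (steps 3 through 7) can be sketched as a loop.  This is purely my own illustrative sketch; the callables passed in are hypothetical stand-ins for the hard creative work, not any real API:

```python
# An illustrative sketch of the core loop of the scientific method.
# `propose`, `criticize`, and `experiment` are hypothetical stand-ins.

def scientific_method(problem, propose, criticize, experiment, max_rounds=100):
    """Guess, criticize, and test until one hypothesis survives."""
    for _ in range(max_rounds):
        candidates = propose(problem)                            # step 3: guess
        survivors = [h for h in candidates if not criticize(h)]  # steps 4-5
        if not survivors:
            continue                                             # back to step 3
        survivors = experiment(survivors)                        # step 6: discriminate
        if len(survivors) == 1:
            return survivors[0]  # a Theory!  (step 7: keep testing it anyway)
    return None  # still stuck -- a normal part of the process

# A toy run: three guesses at the galaxy-rotation Problem.
theory = scientific_method(
    "galaxies rotate too fast",
    propose=lambda p: ["dark matter", "modified gravity", "invisible pink unicorn"],
    criticize=lambda h: ["unfalsifiable"] if "unicorn" in h else [],
    experiment=lambda hs: [h for h in hs if h == "dark matter"],
)
print(theory)  # -> dark matter
```

The toy `experiment` is rigged, of course; in real science that function is the expensive part, and it frequently sends you back to step 3.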

This of course raises the obvious question of why this procedure works, and specifically why this particular procedure works so much better than anything else anyone has been able to come up with.  That question also turns out to have an answer, but it is a much, much longer and more complicated answer.  It involves quantum mechanics, information theory, and the theory of computation.  I'm planning future installments about all of those, but if you're impatient this story is told reasonably well in David Deutsch's book, "The Fabric of Reality" (though what he says about parallel universes needs to be taken with a grain of salt).

Does science lead to Truth?

That science produces Theories with predictive power is simply an observed empirical fact.  As time goes by, it gets harder and harder to find Problems, harder and harder to find observations that cannot be explained and predicted by existing Theories, and harder and harder to come up with new Theories that tie up the fewer and fewer remaining loose ends.  It is possible that some day we might even come up with a Theory of Everything that will tie up the last remaining loose end, and the whole project will be complete.

There is another empirical observation we can make about the scientific method: it converges.  Not only does it produce Theories with more and more predictive power, on those rare occasions when a new Theory completely overturns an old one, the old theory (uncapitalized now because it has been shown to be wrong) always turns out to be a good approximation to the new one under certain circumstances, and in particular, under the circumstances that pertain here in our solar system.  To find phenomena that cannot be explained with current scientific Theories you have to go far outside our solar system and look at neutron stars, black holes, and even other galaxies.

One possible explanation for these empirical observations is that there is an actual metaphysical Truth out there, and that the thing that the scientific method is converging towards is this metaphysical Truth.  That's a hypothesis, a possible explanation for the empirical observation that science converges, at least so far.  This hypothesis makes a prediction: that science will continue to converge, and may some day even reach the point where there are no more Problems, where it can explain all observations.  This hypothesis is falsifiable, so it's a valid scientific hypothesis.  And for the last 336 years no data has contradicted it.

Does that prove that science finds metaphysical Truth?  No.  Nothing is ever proven in science.  All knowledge is tentative and subject to being overturned by new data or a better explanation.  But it is, at the moment, the current-best explanation.

This is not to say that there are no extant Problems with the hypothesis that science converges towards metaphysical truth.  There are at least four that I can think of.  The first is the so-called "hard problem of consciousness", i.e. explaining qualia and subjective experience.  The second is "deriving ought from is", i.e. using the scientific method to obtain a theory of moral behavior.  The third is the problem of origins and teleology.  Why is there something rather than nothing?  How did life on earth begin?  What is the point of all this?  And the fourth is the problem of religion: why do so many people believe things that are at odds with science?

I actually believe that all of these Problems have had some pretty significant dents put into them by the scientific method, much more than is generally appreciated or understood, even among scientists.  I've written about all of these things at one time or another, but usually in the context of developing my own ideas about them, and never as a coherent summary that presents the final results in a unified and organized way.  I'm going to try to remedy that in the future.  But it has already taken me a week just to get this far so I thought I'd go ahead and put this out there and subject it to criticism.

Saturday, February 03, 2024

Why I want to repeal the Second Amendment

About three years ago I wrote a blog post calling for the repeal of the Second Amendment to the Constitution of the United States.  I hope it goes without saying that I did not (and do not) harbor any illusions about this actually happening any time soon.  I wrote it in a fit of frustration at having to watch over and over again the futile ritual that America goes through every time there is a mass shooting: condolences are offered, prayers are said, there are calls to Do Something, and then, of course, nothing gets done.  Because there's this pesky Second Amendment.

By calling for its repeal I was hoping to point out that the Second Amendment is not actually a necessary consequence of the laws of physics, nor is it gospel handed down from on high by God.  We actually can change it, and the only reason we don't is because not enough people want to, or even think it's possible.  I was mostly just standing up to say that it is in fact possible if enough of us decide we want to make it happen.

But I want to go one step further here and advance an explicit argument for why we should.  The argument is quite simple: the Second Amendment is both bad law and bad policy.  Note that these are two different things.  I think that either one on its own makes a compelling case for repeal, and the two together actually make it a slam-dunk, but it is nonetheless important not to conflate the two things.  They are completely unrelated to each other.

What makes the Second Amendment bad law is that it is unclear what the amendment actually means.  It has this really weird structure, with its infamous introductory clause: "A well regulated Militia, being necessary to the security of a free State..."  Is that intended to be an actual operative part of the law, or is it merely a rhetorical flourish?  And if it's the former, what does it actually mean?  No one knows.  There are well-informed arguments advanced by scholars on both sides, and resolving the dispute to everyone's satisfaction is about as likely as reconciling Catholicism and Protestantism.  An unclear law is a bad law.

That is the entirety of the first part of my argument.  At the very least, the Second Amendment should be repealed and replaced with something that actually says what it is intended to mean instead of leaving everything up to politics and the Supreme Court.

But, as I said, I have a second argument, which is that the Second Amendment is bad policy.  How can I possibly argue that in light of the fact that I've just taken the position that the main problem with the Second Amendment is that it is unclear what kind of policy it actually advances?  It turns out not to matter.  Whatever the Second Amendment actually means (if indeed it actually means anything) one thing is beyond dispute: it explicitly enshrines the Right to Bear Arms and so raises it above a whole host of other unenumerated rights.  Like, for example, the right to vote, or to travel, or to not be discriminated against based on the color of your skin or your gender or your sexual preferences.  None of those things are written into the Constitution, but the "right to bear arms" is.  In my humble opinion, that is, at best, a failure to get our priorities straight.

This is not to say that I want to take away everybody's guns.  That is a common canard on the right, but I have never heard anyone seriously call for complete prohibition.  I'm sure there are some peaceniks who would like to see every last gun beaten into a plowshare, but I'm not one of them.  Prohibition is a terrible policy simply because it doesn't work.  But in between prohibition and a Constitutionally protected right lies a very broad range of much saner policy choices, like licensing and background checks, which are now being tossed out the window because you can't burden a Constitutionally protected right, even if it means that innocent people, including children, have to pay with their lives.  I don't want to repeal the Second Amendment because I want to get rid of guns; I want to repeal the Second Amendment because that is a pre-requisite to even having a sane policy discussion about guns.

What would a sane policy discussion about guns look like?  I think it should start with the observation that guns are not just dangerous, but inherently dangerous.  Unlike every other consumer product, the *purpose* of a gun is to end life.  This is not to say that guns cannot be used in other ways.  I get that some people simply enjoy target shooting or making loud noises on holidays.  But the fact that ancillary uses exist does not negate that the primary purpose of guns, the reason they were invented and the task they are optimized for, is to kill.

Now, I am emphatically not saying that killing is an unalloyed evil.  There are situations in which killing has value; sometimes it can even rise to the level of a necessity.  My neighborhood is overrun with deer, and shooting a few of them on occasion would definitely be a good thing.  I don't see much wrong with using a gun to hunt for food.  And I am keenly aware that one of the reasons I can safely walk around without a gun is that there are police and soldiers trained to use guns and other weaponry.  I do wish it were otherwise, but it isn't, and that's how it's going to stay.  The existence of violence *is* a consequence of the laws of physics.

The above notwithstanding, I do think that all else being equal, less killing is better than more, and society should particularly strive to minimize the killing of *innocent* people, that is, people who have not voluntarily undertaken the inherent risks of being around guns.  And the only way I see to achieve that goal is to restrict who has access to guns and under what circumstances.  I have to stress again that this does not mean getting rid of all guns.  It means licensing, background checks, and some pretty severe restrictions on carrying guns in public places.  None of that is possible under the current legal interpretation of the Second Amendment, which is one of the reasons I think it has to go.

That is the end of my argument.  But before I close I would like to address three anticipated counter-arguments.

The first is that cars kill about as many people as guns do.  This is true, but this doesn't make guns and cars comparable.  For starters, there is no Constitutionally enshrined right to drive, and that makes it possible to impose reasonable restrictions on driving and cars to make them safer.  Furthermore, the societal benefit of driving is vastly greater than the societal benefit of widespread gun ownership.  If we got rid of all privately held cars, the economy would grind to a halt.  If we got rid of all privately held guns, all that would happen is that some gun enthusiasts would grumble and complain, a few people who work for gun manufacturers would lose their jobs, and a lot fewer people would die.  (Personally, I would call that a win, but please note that I'm not actually advocating for this because I also believe in personal freedom, and there are less draconian ways of keeping people safe.)

The second is that people ought to have the right to defend themselves.  To which my response is: defend themselves against what?  There doesn't seem to be any consensus among gun-rights activists as to what the correct answer to that question ought to be.  And it matters.  A lot.  A lot of the rhetoric around gun-rights advocacy seems to turn on the phrase "personal protection" but that is just as fuzzily defined as "assault weapon".  Does "personal protection" extend to defending my personal property?  Against whom?  Do the adversaries against which I have the right to defend myself and/or my property include the government?  Because the founders were explicitly concerned about allowing states the right to defend themselves against military overreach by the federal government.  If the states retain the right to resist the federal government by force (and Texas certainly seems to think that they do) then why not individuals?  And if I have a constitutional right to defend myself against the government, why can't I have a bazooka?  Or a tank?  Or a nuke?  On the other hand, if I don't have a constitutional right to defend against the government, then how do you justify claiming the right to own an AK-47?

Which brings me to the third anticipated counter-argument, which is that I am just a stupid twit, profoundly ignorant of the history of the Second Amendment and why it is important.  That may be true.  I'm not a historian, just a citizen.  I actually think my knowledge of history is better than this counter-argument gives me credit for, but I don't think that matters.  The history of the Amendment is mainly invoked as part of the argument over what it means, and as I've already said, I don't think that argument can be resolved, which is one of the reasons I think the right answer here is a rewrite, not a rehash.  But more than that, I don't believe in originalism.  I think it's a completely arbitrary standard invented out of whole cloth to advance a political agenda.  We are not living in 1788 and I see no reason we should allow ourselves to be governed by the standards of that time.  When the Second Amendment was written, the United States did not have a standing army.  Today it does (and marines, and a navy, and an air force, and a space force).  A well-regulated militia (whatever that might actually mean) may have been necessary to the security of a free state in 1788, but it is certainly not necessary any more.

Today all sane people agree that somewhere between a BB gun and a nuke there is a line that ought to be drawn beyond which any individual right to bear arms should not extend.  But the Amendment provides absolutely no guidance as to where that line is because the people who wrote it had no idea that something like a nuke -- or even an AK-47 -- was even possible.  The state-of-the-art technology of the day was a muzzle-loader.  If you were extremely skilled you could get off one shot every 15-20 seconds or so.  It is simply not possible for an individual to use such a weapon to carry out a mass shooting.  You might be able to get off one shot, but you would surely be tackled by an angry mob well before you managed a second.  (It is likewise practically impossible for an individual to use a muzzle-loader for personal defense.)

This is the technical environment in which the Second Amendment was written.  It was simply not designed for today's world, and it is not up to the task.  It is not possible to use the Second Amendment to draw the line between reasonable and unreasonable weaponry for personal use because the people who wrote it lived in a world where AK-47's and nukes not only did not exist, they were inconceivable.

Relying on the Second Amendment to inform us about gun rights today makes about as much sense as looking at horse-drawn carriages to figure out what the speed limit should be on the interstate.  And yet, that is what we are currently doing, and that is what we will continue to do as long as the Second Amendment is on the books.  And as long as that is the case, the decisions that get made will be made on the basis of politics and unprincipled arguments, because there simply are no other possibilities.  There are hard limits to the extent to which 18th-century knowledge and wisdom can inform 21st-century decisions.

Postscript:

In the interest of full disclosure I wish to reveal that despite the fact that I generally side very strongly with individual freedom, I do find gun-rights advocacy to be quite annoying.  Many gun-rights advocates seem to have this fantasy about living out a real-life version of a spaghetti Western melodrama where they play the part of the hero.  I can tell you from both recent events and personal experience that this is indeed a fantasy.  Even if someone breaks into your house, the chances that having a gun on hand will change the outcome for the better are very small.  In 1991 I came home to find a robber in my house, who took a pot-shot at me with a .38 on his way out the window.  I escaped serious injury by less than a meter.  The entire sequence of events played out over less than five seconds, and even if I had a gun of my own there is no way it would have made a difference, except that it would have introduced yet another possible way I could have gotten injured or killed.  Under no circumstances would I have been able to use it to deter the thief.

Gun-rights advocates like to brandish slogans like "the only thing that stops a bad guy with a gun is a good guy with a gun."  It's simply not true.  A much more effective way of stopping bad guys with guns is to stop the bad guys from ever getting a gun in the first place.  (Another annoying slogan is, "Guns don't kill people, people kill people."  That one isn't true either.  A lot of people die in accidents involving guns, in which case it really is the gun doing the killing and not another person.  But even in cases where it is "people" doing the killing, all too often they are doing the killing with guns.)

And this is actually what annoys me the most about gun-rights advocates: they don't actually have a principled argument to support their position, and so they rely on sketchy history and sketchy slogans and sketchy politics in order to cling (yes, I'll go there) to their guns.  I suppose they do it because they are afraid, afraid that someone will "come for their guns", upon which someone will come for them.  And yeah, that's possible, though if someone does "come for their guns" it won't be me.  I'll buy a gun myself before I let that happen.  But I'll say this: if some day a generation rises up that gets sick and tired enough of the bloodshed to come for their guns, it will be no one's fault but their own.  I almost certainly won't live to see it, but if that day ever does come, don't say I didn't warn you.