Sunday, May 05, 2024

Languages are theories: debunking the new riddle of induction and flat-eartherism

This is the sixth installment in a series about the scientific method.  My central thesis is that science is not just for scientists; it can be used by anyone in just about any situation.

In part 2 of this series I gave a few examples of how the scientific method can be applied in everyday situations.  In this chapter I want to show how it can be used to tackle what is considered to be a philosophical problem, something called the New Riddle of Induction.  I already covered the "old" riddle of induction in an earlier chapter but I'm going to go back over it here in a bit more detail.

The "old" problem of induction is this: we are finite beings.  There are only a finite number of us humans.  Each of us only lives for a finite amount of time, during which we can only have a finite number of experiences and collect a finite amount of data.  How can we be sure that the theories we construct to explain that data don't have a counter-example that we just haven't come across yet?

The reason this is called the "problem of induction" is that the example most commonly used to motivate it is the (alleged) "fact" that all crows are black.  It turns out that this isn't true.  There are non-black crows, but they are rare.  If all the crows you have ever seen are black, then it seems not entirely unreasonable for you to draw the conclusion that all crows are black because you have never seen a counter-example.  But of course you would be wrong.

The "new" riddle of induction (NRI) was invented by Nelson Goodman in 1955.  It adds a layer to this puzzle by defining a new word, "grue', as follows:

An object is grue if and only if it is observed before midnight, December 31, 2199 and is green, or else is not so observed and is blue.

Goodman then goes on to observe that every time we see a green emerald before December 31, 2199, that is support for the hypothesis that all emeralds are green, but it is equally good support for the hypothesis that all emeralds are grue, and so we are just as justified in concluding that at the stroke of midnight on New Year's Eve 2199 all of the world's emeralds will suddenly turn blue as we are in predicting that they will remain green.

Now, of course this is silly.  But why is it silly?  You can't just say that the definition of "grue" is obviously silly, because we can give an equally silly definition of the word "green".  First we define "bleen" as a parallel to "blue":

An object is bleen if and only if it is observed before midnight, December 31, 2199 and is blue, or else is not so observed and is green.

And now it is "green" and "blue" that end up with the silly-seeming definitions:

An object is green if and only if it is observed before midnight, December 31, 2199 and is grue, or else is not so observed and is bleen.

An object is blue if and only if it is observed before midnight, December 31, 2199 and is bleen, or else is not so observed and is grue.

The situations appear to be completely symmetric.  So on what principled grounds can we say that "grue" and "bleen" are silly, but "blue" and "green" are not?

You might want to take a moment to see if you can solve this riddle.  Despite the fact that philosophers have been arguing about it for decades, it's actually not that hard.

It is tempting to say that we can reject the grue hypothesis because it has this arbitrary time, midnight, December 31, 2199, baked into the definition of the words "grue" and "bleen", so we can reject it for the same reason we rejected last-thursdayism.  The grue hypothesis (one might argue) is not one hypothesis, it is just one of a vast family of hypotheses, one for every instant of time in the future.  In fact, if you look up the NRI you will find the definition of grue given not in terms of any particular time, but explicitly in terms of some arbitrary time called T.

This explanation is on the right track, but it's not quite right because, as I pointed out earlier, the green hypothesis can also be stated in terms of some arbitrary time T.  What is it about "green" that makes it more defensible as a non-silly descriptor than "grue"?

Again, see if you can answer this yourself before reading on.

The answer is that while it is possible to give a silly definition of "green" in terms of grue and bleen, it isn't necessary.  It is possible to give a non-silly definition of "green"; it is not possible to give a non-silly definition of grue.  It is possible to define "green" without referring to an arbitrary time; it is not possible to define grue without referring to an arbitrary time.

How can we know this?  Because the grue hypothesis makes a specific prediction that the green hypothesis does not, namely, that all of the emeralds discovered after time T will be blue, which is to say, they will be a different color than all of the emeralds discovered before T.

Goodman would probably reply: no, that's not true, all of the emeralds discovered before and after time T will be the same color, namely, grue.  But this is just word-play. If you take two emeralds, one discovered before T and one after, they will look different.  If you point a spectrometer at a before-T emerald and an after-T emerald, the readings will be different.  In other words, on the grue hypothesis you will be able to distinguish experimentally between emeralds discovered before T and after T.  The grue hypothesis is falsifiable, and it will almost certainly be falsified the first time someone discovers an emerald after time T.
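
To make that asymmetry concrete, here is a toy sketch in Python.  The wavelength bands are the standard rough figures for visible light, but the cutoff date, the spectrometer interface, and the function names are all invented for illustration:

```python
from datetime import datetime

T = datetime(2199, 12, 31)  # the arbitrary time baked into "grue"

def is_green(wavelength_nm):
    # "Green" needs only a physical measurement: reflected light of
    # roughly 495-570 nm.  No clock required.
    return 495 <= wavelength_nm <= 570

def is_blue(wavelength_nm):
    return 450 <= wavelength_nm < 495

def is_grue(wavelength_nm, first_observed):
    # "Grue" cannot even be written down without consulting a clock:
    # the same spectrometer reading counts as grue or not depending
    # on when the object was first observed.
    if first_observed < T:
        return is_green(wavelength_nm)
    return is_blue(wavelength_nm)

reading = 520  # nm, squarely in the green band
print(is_grue(reading, datetime(2150, 1, 1)))  # True: observed before T
print(is_grue(reading, datetime(2250, 1, 1)))  # False: same reading, after T
```

Note that is_green takes one argument, a measurement, while is_grue cannot be expressed without a second argument and a reference to T.  That extra parameter is the arbitrary time smuggled into the vocabulary itself.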

The crucial thing here is that your choice of terminology is not neutral; it is a crucial component of the expression of your hypothesis.  To quote David Deutsch, in an aphorism that arguably sets the record for packing the greatest amount of wisdom into the fewest words: languages are theories.  An argument based on hiding questionable assumptions under a terminological rug can be rejected on that basis alone.

Here is another example: consider, "The sun rises in the east."  Most people would consider that to be true.  But if you think about it critically, this sentence is laden with hidden assumptions, not least of which is (at least apparently) that the sun rises.  It doesn't.  The sun just sits there, and the earth orbits around it while rotating on an axis.  That makes it appear, to an observer attached to the surface of the earth, that the sun rises and sets even though it actually doesn't.  But that doesn't make "the sun rises in the east" false; to insist otherwise is just a deliberate misinterpretation of what those words actually mean in practice.  "The sun rises in the east" does not mean that the sun literally rises, it means that the sun appears to rise (obviously), and it does so in the same general direction every morning.  There is also an implicit assumption that we are making these observations from non-extreme latitudes.  At extreme latitudes, the sun does not even appear to rise in the east.  In fact, at the poles, the concepts of "east" and "west" don't even make sense -- at the poles, east and west literally do not exist!  (By the way, this is not just a trivial detail.  This exact same thing will come up again when we start talking about space-time, cosmology, and the origins of the universe.)

Note that "the sun rises in the east" is not an inductive conclusion, nor is it a hypothesis.  It is a prediction, one of many, made by the theory that the earth is a sphere rotating about an axis.  Furthermore, the fact that the sun rises and sets, together with the fact that this happens at different times in different places, definitively debunks the competing hypothesis that the earth is flat.  On a flat earth, if the sun is above the horizon, it must be above the horizon for all observers.  If the sun is below the horizon, it must appear below the horizon for all observers, and likewise if the sun is at the horizon.  This is in conflict with the observation that sunrise and sunset happen at different times in different locations.

Similarly, "all crows are black" is neither an inductive conclusion nor a hypothesis, but a prediction made by a very complex set of theories having to do with how DNA is transcribed into proteins, some of which absorb light of all visible wavelengths and so appear to be black.  "All emeralds are green" works the same way, but with one important distinction worth noting: in the case of crows, the hypothesis admits the possibility of occasional genetic mutations that result in non-black crows, which is in fact exactly what we sometimes observe.  (It also predicts that these will be rare, which is also what we observe.)

Emeralds are different.  Emeralds are green not because they contain proteins produced by DNA, but because they consist of particular kinds of atoms arranged in a particular crystalline structure with some particular impurities that make them look green.  It is possible to have other impurities that produce other colors, but in that case the result is not called an emerald but aquamarine or morganite.  All emeralds are green, without exception, because that is a consequence of the definition of the word "emerald" plus the known laws of physics.  If a non-green emerald were ever discovered, that is, a mineral with the same chemical composition and crystal structure as an emerald but which was not green, that would be Big News.

Notice how easy all this was.  We didn't have to do any math.  We didn't have to get deep into the weeds of either scientific or philosophical terminology.  The hairiest technical terms I had to use to explain all this were "chemical composition" and "crystalline structure".

Notice too that we didn't have to debunk any of the specific arguments advanced by flat-earthers.  All we had to do was think about what "the sun rises in the east" actually means, and combine that with the fact that time zones are a thing, to generate an observation that the flat-earth hypothesis cannot explain.  Unless and until flat-earthers refute that (and they won't because they can't) we can confidently reject all of their arguments even if we have not examined them in detail, just as we can confidently reject claims of perpetual motion even if we have not examined those claims in detail.

In fact, we can reject flat-eartherism even more confidently than we can reject perpetual motion, and that is really saying something.  There are possible worlds where the second law of thermodynamics doesn't apply; the world we live in just happens not to be one of them.  It is a logical impossibility for the sun to rise and set at different times on a flat earth.  Simultaneous sunset and sunrise for all observers is a mathematical consequence of what it means to be flat.

The take-away message here is that the choice of terminology, the concepts you choose to bundle up as the definitions of words, is an integral part of the statement of a hypothesis.  Often the entire substance of a hypothesis is contained not in the statement of the hypothesis, but in the definitions of the words used to make the statement.

There are all kinds of problems that philosophers have argued about for decades that are easily resolved (and also bad science pedagogy that is easily recognized) once one comes to this realization.  It is a hugely empowering insight.  If someone tries to explain something science-y to you and it doesn't make any sense, it very well might just be that they haven't explained what they mean by the words they are using.  Science is chock-full of specialized terminology, and a lot of it sounds intimidating because, for historical reasons, scientists have adopted words with Greek and Latin roots (and sometimes German too).  These can sound weird, but the important thing to remember is that even weird-sounding words are just words, and they mean things just like more familiar words, and the things that they mean are often not nearly as intimidating as the words themselves.  Don't let weird words scare you.

The same can be said for math.  A lot of people are put off from science because it tends to involve a lot of math, which they find intimidating.  But here is the empowering secret about math: math is just another language!  It is a very weird and specialized language, but a language nonetheless.  It uses a lot of unfamiliar symbols and notational conventions (the relative placement of symbols on a page matters a lot more than in other languages) but at the end of the day it's just marks on a page that mean something, and it's the meaning that matters, not the marks.  Keep that in mind any time things start to feel like they're getting too complicated.

Monday, April 29, 2024

The Scientific Method part 5: Illusions, Delusions, and Dreams

(This is the fifth in a series on the scientific method.)

Daniel Dennett died last week.  He was a shining light of rationality and clarity in a world that is often a dark and murky place.  He was also the author of, among many other works, Consciousness Explained, which I think is one of the most important books ever written because it gives a plausible answer to what seems like an intractable question: what is consciousness?  And the answer is, to the extent that it is possible to condense a 450-page-long scholarly work down to a single sentence: consciousness is an illusion.

I don't know if Dennett would have agreed with my prĂ©cis, and I don't expect you to believe it just because I've proclaimed it, or even because Dennett very ably (IMHO) defends it.  I might be wrong.  Dennett might be wrong.  You can read the book and judge for yourself, or you can wait for me to get around to it (and I plan to).  But for now I just want to talk about what illusions actually are and why they matter for someone trying to apply the scientific method.  In so doing I hope to persuade you only that the hypothesis that consciousness is an illusion might not be as absurd as it seems when you first encounter it.  I am not hoping to convince you here that it's true -- that is much too big a task for one blog post -- only that the hypothesis is worthy of consideration and further study.

You are almost certainly reading this on a computer with a screen.  I probably don't have to convince you that that screen is a real thing.  But why do I not have to convince you of that?  Why can I take it for granted that you believe that your computer screen is a solid tangible object that actually exists?

The answer can't be merely that you can see it.  In fact, if you are reading this, you probably actually can't see most of your computer screen!  What you are seeing instead is an image on your computer screen, and that image is not a real thing.  Here, for example, is something that looks like a leopard:


but it is not a leopard, it is a picture of a leopard, and a picture of a leopard is not a leopard.  The latter is dangerous, the former not so much.  But the point is that, when a computer screen is in use, most of it does not look like a computer screen, it looks like something else.  The whole point of a computer screen is to look like other things.  Computer screens are the ultimate chameleons.

As I write this, it is still pretty easy to distinguish between real and image-inary leopards (and even imaginary leopards), but that may not be the case much longer.  Virtual reality headsets are becoming quite good.  I recently had an opportunity to try an Apple Vision Pro and it was a transformative experience.  While I was using it, I genuinely thought I was looking through a transparent pane of glass out onto the real world.  It was not until later that I realized that this was impossible, and what I was seeing was an image projected into two very small screens.  God only knows where this technology will be in another few decades.

Now take a look at this image, which is called "Rotating Snakes":

 


If you are like most people, you will see the circles moving.  (If you don't see it, try looking at the full-size image.)  Since you are almost certainly viewing this on a computer screen, a plausible explanation is that the image actually is moving, i.e. that it is not a static image like the leopard photo but a video or an animated gif, like this:

 

But that turns out not to be the case.  The Rotating Snakes image is static.  There are a couple of ways to convince yourself of this.  One is to focus on very small parts of the image rather than the whole thing at once, maybe even use a sheet of paper with a hole cut in it to block out most of the image.  Another is to print the image on a sheet of paper and look at it there.

The motion you see (again, if you are typical) in Rotating Snakes is an example of an illusion.  An illusion is a perception that does not correspond to actual reality.  Somehow this image tricks your brain into seeing motion where there is none.  The feeling of looking out at the world through a pane of transparent glass in a Vision Pro is also an illusion.  And in fact the motion you see in the animated gif above is also an illusion.  That image is changing, but it's not actually moving.  And even if we put that technicality aside, you probably see a rotating circle, but the white dots that make up that circle are actually all moving in straight lines.

The Rotating Snakes image is far from unique.  Designing illusions is an entire field of endeavor in its own right.  Illusions exist for all sensory modalities.  There are auditory illusions, tactile illusions, even olfactory illusions.  The first impression of your senses is not a reliable guide to what is really out there.

There are two other kinds of perceptions besides illusions that don't correspond to reality: dreams and delusions.  You are surely already familiar with dreams.  They are a universal human experience, and they happen only while you are asleep.  But one of the characteristics of dreams is that you are generally unaware that you are asleep while you are dreaming.  Dreams can feel real at the time.  It is possible to become aware that you are dreaming while you are dreaming.  These are called "lucid dreams".  They are rare, but not unheard of, and some people claim that you can improve your odds of experiencing one with practice.  I've had a few of them in my life, and they can be a lot of fun.  For a little while I feel like I am living in a world where real magic exists, and I can do things like fly simply by thinking about it.

But then, of course, I always wake up.

This is the thing that distinguishes dreams from illusions and delusions: dreams only happen when you are asleep.  Illusions and delusions happen when you are awake.  The difference between illusions and delusions is that delusions, like dreams, are private.  They are only experienced by one person at a time, and they are not dependent on any external sensory stimulus.

The word "delusion" is sometimes understood to be pejorative, but it need not be.  Delusions are a common part of the human experience.  Tinnitus and psychosomatic pain are delusions but the people who suffer from them are not mentally ill or "deluded".  Even schizophrenics are not necessarily "deluded" -- many schizophrenics know that (for example) the voices they hear are not real, just as people with tinnitus (I am one of them) know that the high-pitched squeal they experience is not real.  What drives them (us?) crazy is that they (we?) can't turn these sounds off.

Delusions don't even have to be unpleasant.  They can be induced by psychoactive chemicals like LSD, and (I am told -- I have not tried LSD) those experiences can be quite pleasant, sometimes even euphoric.

Illusions, on the other hand, are only experienced in response to real sensory stimulus, and for the most part in predictable ways that are the same across nearly all humans and even some animal species.  Illusions can be shared experiences.  Two people looking at the Rotating Snakes illusion will experience the same illusory motion.

So how do we know that illusions are not actually faithful reflections of an underlying reality?  After all, the main reason to believe in reality at all is that it's a shared experience.  Everyone agrees that they can see cars and trees and other people, and the best explanation for that is that there really are cars and trees and people in point of actual physical (maybe even metaphysical) fact.  So why do we not draw the same conclusion when everyone sees movement when looking at Rotating Snakes?

I have already pointed out that you can print the Rotating Snakes image on a sheet of paper and it will still appear to move when you look at it.  That is powerful evidence that the motion is an illusion, but it's not proof.  How can we be sure that there aren't certain patterns that, when printed on paper, actually move by some unknown mechanism?  Maybe the Rotating Snakes image actually causes ink to move around on a sheet of paper somehow.  It's not a completely outrageous suggestion.  After all, we know that printing very specific patterns on a silicon chip can make it behave in very complicated ways.  How can you be sure that paper doesn't have the same ability?

The full argument that Rotating Snakes is an illusion is actually quite complicated when you expand it out in full detail.  You have to get deep into the weeds of why silicon can be made to do things that ink and wood pulp can't.  But the bottom line is that we have a pretty good idea of how silicon works, and we have a pretty good idea of how paper and ink work, and if it turned out that paper and ink could be made to do anything even remotely like what silicon can do it would be Big News, and since there hasn't been any Big News about this, the best explanation of the perceived motion is that it's an illusion.

Things are very different when it comes to consciousness.  Consciousness is also a universal human perception, just like the motion in Rotating Snakes, but the suggestion that it might be an illusion and not an actual reflection of reality is obviously far less of a slam-dunk than the idea that Rotating Snakes is an illusion.  In fact, most people when first presented with the idea dismiss it as absurd and unworthy of further consideration.  For starters, we have a good understanding of silicon and paper, but we don't have a good understanding (yet) of human brains (Dennett's book notwithstanding).  We are nowhere near being able to rule out definitively the possibility that our perception of consciousness is a faithful reflection of some underlying reality that we just don't understand yet.

Another argument against consciousness being an illusion is that it is private.  All humans, and possibly some animals, experience it, but each of us only has direct experience of our own consciousness.  We cannot directly experience anyone else's.

There is also an argument from first principles that consciousness cannot be an illusion in the same way that Rotating Snakes is: optical illusions are false perceptions, but they are still perceptions.  In order to have a perception at all, whether that perception is a faithful reflection of an underlying reality or not, there has to be something real out there to do the perceiving.  Consciousness in some sense is that "thing that does the perceiving", or at least it is a perception of the thing-that-does-the-perceiving (whatever that might be) but in any case the idea that consciousness is an illusion is self-defeating: if our perception of consciousness were not a faithful reflection of some underlying reality, we could not perceive it because there would not be any real thing capable of perceiving (the illusion of) consciousness.  To quote Joe Provenzano: if consciousness is an illusion, who (or what) is being illused?

I will eventually get around to answering these questions, which will consist mainly of my summarizing Dennett's book, so if you want a sneak preview you can just go read it.  Fair warning though: it is a scholarly work, and not particularly easy to follow, so you might just want to wait.  But if you're feeling ambitious, or merely curious, by all means go to the source.

In the meantime, rest in peace, Daniel Dennett.  Your words had a profound impact on me.  I hope that mine may some day do the same for a younger generation.

Wednesday, April 24, 2024

The Scientific Method, part 4: Eating elephants and The Big News Principle

This is the fourth in a series about the scientific method and how it can be applied to everyday life.  In this installment I'm going to suggest a way to approach all the science-y stuff without getting overwhelmed.

There is an old joke that goes, "How do you eat an elephant?  One bite at a time."  That answer might be good for a laugh, but it wouldn't actually work, either for a real elephant (if you were foolish enough to actually attempt to eat a whole elephant by yourself) or for the metaphorical science elephant.  Modern science has been a thing for over 300 years now, with many millions of people involved in its pursuit as a profession, and many millions more in supporting roles or just doing it as a hobby.  Nowadays, thousands of scientific papers are published world-wide every single day.  It is not possible for anyone, not even professional scientists, to keep up with it all.

Fortunately, you don't have to consume even a tiny fraction of the available scientific knowledge to get a lot of mental nutrition out of it.  But there are a few basics that everyone ought to be familiar with.  For the most part this is the stuff that you learned in high school science class if you were paying attention.  I'm going to do a lightning-quick review here, a little science-elephant amuse bouche.  What I am about to tell you may all be old hat to you, but later when I get to the more interesting philosophical stuff I'll be referring back to some of this so I want to make sure everyone is on the same page.

It may be tempting to skip this, especially if you grew up hating science class.  I sympathize.  Science education can be notoriously bad.  It may also be tempting to just leave the elephant lying in the field and let the hyenas and vultures take care of it.  The problem with that approach is that the hyenas and vultures may come for you next.  In this world it really pays to be armed with at least a little basic knowledge.

I'm going to make a bold claim here: what I am about to tell you, the current-best-explanations provided by science, are enough to account for all observed data for phenomena that happen here on earth.  There are some extant Problems -- observations that can't be explained with current science -- but to find them you have to go far outside our solar system.  In many cases you have to go outside of our galaxy.  How can I be so confident about this after telling you that there is so much scientific knowledge that one person cannot possibly know it all?

The source of my confidence is something I call the Big News principle.  To explain it, I need to clarify what I mean by "all the observed data."  By this I do not mean all of the data collected in science labs, I mean everything that you personally observe.  If you are like most people in today's world, part of what you observe is that science is a thing.  There are people called "scientists".  There is a government agency called NASA and another one called the National Science Foundation.  There are science classes taught in high schools and universities.  There are science journals and books and magazines.

The best explanation for all this is the obvious one: there really are scientists and they really are doing experiments and collecting data and trying to come up with good explanations for that data.  This is not to say that scientists always get it right; obviously scientists are fallible humans who sometimes make mistakes.  But the whole point of science is to find those mistakes and correct them so that over time our best explanations keep getting better and better and explain more and more observations and make better and better predictions.  To see that this works you need look no further (if you are reading this before the apocalypse) than all the technology that surrounds you.  You are probably reading this on some kind of computer.  How did that get made?  You probably have a cell phone with a GPS.  How does that work?

It's not hard to find answers to questions like "how does a computer work" and "how does GPS work" and even "how does a search engine work."  Like everything else, these explanations are themselves data which require an explanation, and the best explanation is again the obvious one: that these explanations are actually the result of a lot of people putting in a lot of effort and collecting a lot of data and reporting the results in good faith.  This is not to say that there aren't exceptions.  Mistakes happen.  Deliberate scientific misconduct happens.  A conspiracy is always a possibility.  But if scientific misconduct were widespread, if falsified data were rampant, why does your GPS work?  If there is a conspiracy, why has no one come forward to blow the whistle?

This is the Big News Principle: if any explanation other than the obvious one were true, then sooner or later someone would present some evidence for this and it would be Big News.  Everyone would know.  The absence of Big News is therefore evidence that no one has found any credible evidence against the obvious explanation, i.e. that there are in fact no major Problems with the current best theories.

The name "Big News Principle" is my invention (as far as I know) but the idea is not new.  The usual way of expressing it is with the slogan "extraordinary claims require extraordinary evidence."  I think this slogan is misleading because it gets the causality backwards.  It is not so much that extraordinary claims require extraordinary evidence, it's that if an extraordinary claim were true, that would necessarily produce extraordinary evidence, and so the absence of extraordinary evidence, the absence of Big News, is evidence that the extraordinary claim, i.e. the claim that goes against current best scientific theories, is false.

The other important thing to know is that not all scientific theories are the same with respect to producing Big News if those theories turn out to be wrong.  Some theories are very tentative, and evidence that they are wrong barely makes the news at all.  Other theories are so well established -- they have been tested so much and have so much supporting evidence behind them -- that showing that they are wrong would be some of the Biggest News that the world has ever seen.  The canonical examples of such theories are the first and second laws of thermodynamics, which together say that it's impossible to build a perpetual motion machine.  These are so well established that, within the scientific community, anyone who professes to give serious consideration to the possibility that they might be wrong will be immediately dismissed as a crackpot.  And yet, all anyone would have to do to prove the naysayers wrong is exhibit a working perpetual motion machine, which would, of course, be Big News.  It's not impossible, but to say that the odds are against you would be quite the understatement.  By way of very stark contrast, our understanding of human psychology and sociology is still very tentative and incomplete.  Finding false predictions made by some of those theories at the present time would not be surprising at all.

So our current scientific theories range from extremely well-established ones for which finding contrary evidence would be Big News, to more tentative ones for which contrary evidence would barely merit notice.  But there is more to it than just that.  The space of current theories has some extra and very important structure to it.  The less-well-established theories all deal with very complex systems, mainly living things, and particularly human brains, which are the most complicated thing in the universe (as far as we know).  The more well-established theories all deal with simpler things, mainly non-living systems like planets and stars and computers and internal combustion engines.

This structure is itself an observation that requires explanation.  There are at least two possibilities:

1.  The limits on our ability to make accurate predictions for complex phenomena are simply a reflection of the fact that they are complex.  If we had unlimited resources -- arbitrarily powerful computers, arbitrarily accurate sensors -- we could, based on current knowledge, make arbitrarily accurate predictions for arbitrarily complicated systems.  The limits on our ability are purely a reflection of the limits of our ability to apply our current theories, not a limit of the theories themselves.

2.  The limits on our ability to make accurate predictions for complex phenomena exist because complex phenomena are fundamentally different from simple phenomena.  There is something fundamentally different about living systems that allows them to somehow transcend the laws that govern non-living ones.  There is something fundamentally different about human minds and consciousness that allows them to transcend the laws that govern other entities.

Which of these is more likely to be correct?  We don't know for sure, and we will not know for sure until we have a complete theory of the brain and consciousness, which we currently don't.  But there are some clues nonetheless.

To wit: there are complex non-living systems for which we cannot make very good predictions.  The canonical example of this is weather.  We can predict the movements of planets with exquisite accuracy many, many years in advance.  We can't predict the weather very accurately beyond a few days, and sometimes not even that.

It was once believed that the weather was capricious for the same reason that people can be: because the weather was controlled by the gods, who were very much like people but with super-powers.  Nowadays we know this isn't true.  The reason the weather is unpredictable is not because it is controlled by the gods, but because of a phenomenon called chaos, which is pretty well understood.  I'll have a lot more to say about chaos theory later in this series, but for now I'll just tell you that we know why we can't predict the weather.  It's not because there are gods operating behind the scenes, it is that there are certain kinds of systems that are just inherently impossible to make accurate predictions about even with unlimited resources.  Nature itself places limits on our ability to predict things.  It is unfortunate, but that's just the Way It Is.
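
You don't have to take my word for that; chaos is easy to demonstrate for yourself.  Here is a sketch using the logistic map, a standard toy example (the parameter value and starting points are arbitrary):

```python
# The logistic map x -> r*x*(1-x) with r = 4 is a textbook chaotic
# system.  Start two trajectories that differ by one part in a billion
# and watch them part ways.
r = 4.0
x, y = 0.400000000, 0.400000001

for step in range(1, 51):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step}: x = {x:.6f}, y = {y:.6f}, gap = {abs(x - y):.6f}")
# Within about 40 steps the two trajectories become completely
# unrelated, even though the rule is perfectly deterministic.  Small
# measurement errors blow up the same way in weather models.
```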

So our inability to make accurate predictions about living systems and human consciousness is not necessarily an indication that these phenomena are somehow fundamentally different from non-living systems.  It might simply be due to their complexity.  We don't have proof of that, of course, but so far no one has found any evidence to the contrary: no one has found anything that happens in a living system or in a human brain that can't be explained by our current best theories of non-living systems.  How can I know that?  Because if anyone found any such evidence it would be Big News, and there hasn't been any such Big News, at least not that I've found, and I've looked pretty diligently.

Because, as far as we can tell, our current-best theories of simple non-living systems can, at least in principle, explain everything that happens in more complex systems, we can arrange our current-best theories in a sort of hierarchy, with theories of non-living systems at the bottom, and theories of living systems built on top of those.  It goes like this: at the bottom of the hierarchy are two theories of fundamental physics: general relativity (GR) and something called the Standard Model, which is built on top of something called Quantum Field Theory (QFT), which is a generalization of Quantum Mechanics (QM) which includes (parts of) relativity.  The details don't really matter.  What matters is that, as far as we can tell, the Standard Model accurately predicts the behavior of all matter, at least in our solar system.  (There is evidence of something called "dark matter" out there in the universe which we don't yet fully understand, but no evidence that it has any effect on any experiment we can conduct here on earth.)

The Standard Model describes, among other things, how atoms are formed.  Atoms, you may have learned in high school, are what all matter is made of, at least here on earth.  To quote Richard Feynman, atoms are "little particles that move around in perpetual motion, attracting each other when they are a little distance apart, but repelling upon being squeezed into one another."  Atoms come in exactly 92 varieties that occur in nature, and a handful of others that can be made in nuclear reactors.

(Exercise for the reader: how can it be that atoms "move around in perpetual motion" when I told you earlier that it is impossible to build a perpetual motion machine?)

The details of how atoms repel and attract each other are the subject of an entire field of study called chemistry.  Then there is a branch of chemistry called organic chemistry, and a sub-branch of organic chemistry called biochemistry which concerns itself exclusively with the chemical reactions that take place inside living systems.

Proceeding from there, biochemistry is a branch of biology, which is the study of living systems in general.  The foundation of biology is the observation that the defining characteristic of living systems is that they make copies of themselves, but that these copies are not always identical to the original.  Because of this variation, some copies will be better at making copies than others, and so you will end up with more of the former and less of the latter.  It turns out that there is no one best strategy for making copies.  Different strategies work better in different environments, and so you end up with a huge variety of different self-replicating systems, each specialized for a different environment.  This is Darwin's theory of evolution, and it is the foundation of modern biology.
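
Here is that loop rendered as a cartoon in a few lines of Python.  It is emphatically not a model of real biology: the numeric "organisms", the fitness function, and the mutation size are all invented for illustration:

```python
import random

random.seed(1)
IDEAL = 10.0  # whatever the environment happens to favor

def fitness(x):
    # Copies closer to the environment's ideal are better at making
    # more copies.
    return 1 / (1 + abs(x - IDEAL))

population = [random.uniform(0, 20) for _ in range(100)]
for generation in range(50):
    # Replication in proportion to fitness...
    weights = [fitness(x) for x in population]
    population = random.choices(population, weights=weights, k=100)
    # ...but the copies are not always identical to the original.
    population = [x + random.gauss(0, 0.1) for x in population]

print(sum(population) / len(population))  # ends up close to IDEAL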

Here I need to point out one extant Problem in modern science, something that has not yet been adequately explained.  There is no doubt that, once this process of replication and variation gets started, it is adequate to account for all life on earth.  But that leaves a very important unanswered question: how did this process start?  The honest answer at the moment is that we don't yet know.  It's possible that we will never know.  But people are working on it, and making (what seems to me like) pretty good progress towards an answer.  One thing is certain, though: if it turns out that the answer involves something other than chemistry, something beyond the ways in which atoms are already known to interact with each other, that will be Big News.

Beyond biology we have psychology and sociology, which are the study of the behavior of a particular biological system: human brains.  Studying them is very challenging for a whole host of reasons beyond the fact that they are the most complex things known to exist in our universe.  But even here progress is being made at a pretty significant pace.  Just over the last 100 years or so our understanding of how brains work has grown dramatically.  Again, there is no evidence that there is anything going on inside a human brain that cannot be accounted for by the known ways in which atoms interact with each other.

Note that when I say "the known ways in which atoms interact with each other" I am including the predictions of quantum field theory.  It is an open question whether quantum theory is needed to explain what brains do, or if they can be fully understood in purely classical terms.  Personally, I am on Team Classical, but Roger Penrose, who is no intellectual slouch, is the quarterback of Team Quantum and I would not bet my life savings against him.  I will say, however, that if Penrose turns out to be right, it will be (and you can probably anticipate this by now) Big News.  It is also important to note that no non-crackpot believes that there is any evidence of anything going on inside human brains that is contrary to the predictions of the Standard Model.

Speaking of the Standard Model, there is another branch of science called nuclear physics that concerns itself with what happens in atomic nuclei.  For our purposes here we can mostly ignore this, except to note that it's a thing.  There is one and only one fact about nuclear physics that will ever matter to you unless you make a career out of it: some atoms are radioactive.  Some are more radioactive than others.  If you have a collection of radioactive atoms then after a certain period of time the level of radioactivity will drop by half, and this time is determined entirely by the kind of atoms you are dealing with.  This time is called the "half-life" and there is no known way to change it.  In general, the shorter the half-life, the more radioactive that particular flavor of atom is.  Half-lives of different kinds of atoms range from tiny fractions of a second to billions of years.  The most common radioactive atom, Uranium 238, has a half-life of just under four and a half billion years, which just happens by sheer coincidence to be almost exactly the same as the age of the earth.
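
For the record, the half-life rule fits in one line of math: the fraction of a radioactive sample remaining after time t is (1/2)^(t / half-life).  A quick sketch, using the roughly 4.5-billion-year U-238 figure from above:

```python
def remaining_fraction(elapsed_years, half_life_years):
    # After every half-life, half of what remains is still radioactive,
    # so the remaining fraction is (1/2) ** (elapsed / half-life).
    return 0.5 ** (elapsed_years / half_life_years)

U238_HALF_LIFE = 4.468e9  # years, "just under four and a half billion"

# The earth is about one U-238 half-life old, so roughly half of the
# U-238 the earth started with is still around.
print(remaining_fraction(4.5e9, U238_HALF_LIFE))  # ~0.4975
```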

There is another foundational theory that doesn't quite fit neatly into this hierarchy, and that is classical mechanics.  This is a broad term that covers all of the theories that were considered the current-best-explanations before about 1900.  It includes things like Newton's laws (sometimes referred to as Newtonian Mechanics), thermodynamics, and electromagnetism.

The reason classical mechanics doesn't fit neatly into the hierarchy is that it is known to be wrong: some of the predictions it makes are at odds with observation.  So why don't we just get rid of it?

Three reasons: first, classical mechanics makes correct predictions under a broad range of circumstances that commonly pertain here on earth.  Second, the math is a lot easier.  And third and most important, we know the exact circumstances under which classical mechanics works: it works when you have a large number of atoms, they are moving slowly (relative to the speed of light), and they are not too cold.  If things get too fast or too small or too cold, you start to see the effects of relativity and quantum mechanics.  But as long as you are dealing with most situations in everyday life you can safely ignore those and use the simpler approximations.
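
To get a feel for just how good the approximation is in everyday life, you can estimate the relativistic correction factor gamma = 1/sqrt(1 - v^2/c^2) for a few speeds (the example speeds are rough figures):

```python
C = 299_792_458  # speed of light in m/s

def gamma_minus_one(v):
    # For v << c, gamma differs from 1 by almost exactly v^2 / (2c^2).
    # (Computing gamma directly would underflow double precision at
    # walking speeds.)
    return 0.5 * (v / C) ** 2

for label, v in [("brisk walk, 1.5 m/s", 1.5),
                 ("airliner, 250 m/s", 250.0),
                 ("GPS satellite, ~3,900 m/s", 3900.0)]:
    print(f"{label}: gamma - 1 is about {gamma_minus_one(v):.1e}")
```

Even for a GPS satellite the correction is parts in ten billion.  The catch is that clocks accumulate it, second after second, which is exactly why GPS shows up in the list of exceptions below.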

This, by the way, is the reason for including Step 2 in the Scientific Method.  As long as you are explicit about the simplifying assumptions you are making, and you are sure that those simplifying assumptions actually hold, then you can confidently use a simplified theory and still get accurate predictions out of it.  This happens all the time.  You will often hear people speak of "first-order approximations" or "second-order approximations".  These are technical terms having to do with some mathematical details that I'm not going to get into here.  The point is: it is very common practice to produce predictions that are "good enough" for some purpose and call it a day.

Classical mechanics -- Newton's laws, electromagnetism, and thermodynamics -- turns out to be "good enough" for about 99% of practical purposes here on earth.  The remaining 1% includes things like explaining exactly how semiconductors and superconductors work, why GPS satellites need relativistic corrections to their clocks, and what goes on inside a nuclear reactor.  Unless you are planning to make a career out of these things, you can safely ignore quantum mechanics and relativity.

And here is more good news: classical mechanics is actually pretty easy to understand, at least conceptually.  It's the stuff that is commonly taught in high school science classes, except that there it is usually taught as a fait accompli, without any mention of the centuries of painstaking effort that went into figuring it all out, nor the ongoing work to fill in the remaining gaps in our knowledge.

The reason this matters is that it leaves people with the false impression that science is gospel handed down from on high.  You hear slogans like "trust the science."  You should not "trust the science."  You should apply the scientific method to everything, including the question of what (and who) is and is not trustworthy.  And the most important question you can ask of anyone making any claim is: is this consistent with what I already know about the world?  Or, if this were true, would it be Big News?  And if so, have you seen any other evidence for it elsewhere?

It is important to note that the converse is not true.  If someone makes a claim that would be Big News if it were true but it doesn't seem to have made a splash, the best explanation for that is usually that the claim is simply not true.  But just because a claim does end up being Big News doesn't necessarily mean that it's true!  Cold fusion was Big News when it was first announced, but it ended up being (almost certainly) false nonetheless.  Big News should not be interpreted as "true" but something more like "possibly worthy of further investigation."

Sunday, April 21, 2024

Three Myths About the Scientific Method

This is the third in a series on the scientific method.  This installment is a little bit of a tangent, but I wanted to publish it now because I've gotten tired of having to correct people about these things all the time.  I figured if I just wrote this out once and for all I could just point people here rather than having to repeat myself.

There are a lot of myths and misconceptions about science out there in the world, but these three keep coming up again and again.  These myths are pernicious because they sound plausible.  Even some scientists believe them, or at least choose their words carelessly enough to reinforce them, which is just as bad.  Even I am guilty of this sometimes.  It is an easy trap to fall into, especially when talking about "scientific facts".  So here for the record are three myths about the scientific method, and the corresponding truth (!) about each of them.

Myth #1:  The scientific method relies on induction

Induction is a form of reasoning that assumes that phenomena follow a pattern.  The classic example is looking at a bunch of crows, observing that every one of them is black, and concluding that therefore all crows are black because you've never seen a non-black crow.

It is easy to see that induction doesn't work reliably: it is simply false that all crows are black.  Non-black crows are rare, but they do exist.  So do non-white swans.  Philosophers make a big deal about this, with a lot of ink being spilled discussing the "problem of induction".  It's all a waste of time because science doesn't rely on induction.  Any criticism that anyone levels at science that includes the word "induction" is a red herring.

It's easy to fall into this trap.  The claims that all crows are black, or that all swans are white, are wrong, but they're not that wrong.  The vast majority of crows are black, so "all crows are black" is a not-entirely-unreasonable approximation to the truth in this case, so it's tempting to think that induction is the first step in a process that gets tweaked later to arrive at the truth.

The problem is that most inductive conclusions are catastrophically wrong.  Take for example the observation that, as I write this in April of 2024, Joe Biden is President of the United States.  He was also President yesterday, and the day before that, and the day before that, and so on for over 1000 days now.  The inductive conclusion is that Joe Biden will be President tomorrow, and the day after that, and the day after that... forever.  Which is obviously wrong, barring some radical breakthrough in human longevity and the repeal of the 22nd amendment to the U.S. Constitution.  Neither of these is very likely, so we can be very confident that Joe Biden will no longer be President on January 20, 2029, and possibly sooner than that depending on his health and the outcome of the 2024 election.

How do we know these things?  Because we have a theory of what causes someone to become and remain President which predicts that Presidential terms are finite, and that theory turns out to make reliable predictions.  Induction has absolutely nothing to do with it.

Induction has absolutely nothing to do with any scientific theory.  At best it might be a source of ideas for hypotheses to advance, but the actual test of a hypothesis is how well it explains the known data and how reliable its predictions turn out to be.  That's all.

Myth #2:  The scientific method assumes naturalism/materialism/atheism

This is a myth promulgated mainly by religious apologists who want to imply that the scientific bias against supernaturalism is some kind of prejudice, an unfair bias built in to the scientific method by assumption, and that this can blind those who follow the scientific method to deeper truths.

This is false.  The scientific method contains no assumptions whatsoever.  The scientific method is simply that: a method.  It has no more prejudicial assumptions than a recipe for a soufflĂ©.

Even the gold-standard criterion for a scientific theory, namely, its ability to make reliable predictions, is not an assumption.  It is an observation, specifically, an observation about the scientific method: it just turns out that if you construct parsimonious explanations that account for all the observed data, those explanations turn out to have more predictive power than anything else humans have ever tried.  That is an observation that, it turns out (!), can also be explained, but that is a very long story, so it will have to wait.

The reason science is naturalistic and atheistic is not because these are prejudices built into the method by fiat, it is because it turns out that the best explanations -- the most parsimonious ones that account for all the known data and have the most predictive power -- are naturalistic.  The supernatural is simply not needed to explain any known phenomena.

Note that this is not at all obvious a priori.  There are a lot of phenomena -- notably the existence of life and human intellect and consciousness -- that don't seem like they would readily yield to naturalistic explanations when you first start to think about them.  But it turns out that they do.  Again, this is a long story whose details will have to wait.  For now I'll just point out that people used to believe that the weather was a phenomenon that could not possibly have a naturalistic explanation.

The reason science is naturalistic is not that it takes naturalism as an assumption, but rather that there is no evidence of anything beyond the natural.  All it would take for science to accept the existence of deities or demons or other supernatural entities is evidence -- some observable phenomenon that could not be parsimoniously explained without them.

Myth #3:  "Science can't prove X" or "scientists got X wrong" is an indication that science is deficient

I often see people say, "Science can't prove X" with the implication that this points out some deficiency in science that only some other thing (usually religion) can fill.  This is a myth for two reasons.  First, science never proves anything; instead it produces explanations of observations.  And second, this failure to prove things is not a bug, it's a feature, because it is not actually possible to prove anything about the real world.  The only things that can actually be proven are mathematical theorems.

Now, you will occasionally hear people speak of "scientific facts" or "the laws of nature" or even "scientific proof".  These people either don't understand how the scientific method actually works, or, more likely, they are just using these phrases as a kind of shorthand for something like "a theory which has been sufficiently well established that the odds of finding experimental evidence to the contrary (within the domain in which the theory is applicable) are practically indistinguishable from zero."  As you can see, being precise about this gets a little wordy.

The scientific method gives us no guidance on how to find good theories, only on how to recognize bad ones: reject any theory that is at odds with observation.  This method has limits.  We are finite beings with finite life spans and so we can only ever gather a finite amount of data.  For any finite amount of data there are an infinite number of theories all consistent with that data, and so we can't reject any of them on the grounds of being inconsistent with observation.  To whittle things down from there we have to rely on heuristics to select the "best explanation" from among the infinite number of possibilities that are consistent with the data.
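
That infinity is easy to exhibit concretely.  Here is a sketch with three invented data points and two rival "theories" that both fit them perfectly:

```python
# Three observations of some quantity y measured at x = 1, 2, 3:
data = [(1, 2), (2, 4), (3, 6)]

def theory_a(x):
    # The parsimonious explanation: y = 2x.
    return 2 * x

def theory_b(x):
    # A rival that agrees with every observation made so far, because
    # the extra term vanishes at x = 1, 2, and 3, but that predicts
    # something wildly different everywhere else.
    return 2 * x + (x - 1) * (x - 2) * (x - 3)

print(all(theory_a(x) == y for x, y in data))  # True
print(all(theory_b(x) == y for x, y in data))  # True
print(theory_a(4), theory_b(4))  # 8 vs. 14: they disagree about the future
```

Multiply the extra term in theory_b by any constant you like and you get yet another theory that fits all the data; that is where the infinity comes from.  Heuristics like parsimony are what let us prefer theory_a.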

Again, it just turns out that when we do this, the result of the process generally has a lot of predictive power.  Some of our theories are so good that they have never made a false prediction.  Others do make false predictions, but to find observations that don't fit their predictions you have to go outside of our solar system.  For theories like that we will sometimes say that those theories are "true" or "established scientific facts" or something like that.  But that's just shorthand for, "The best explanation we currently have, one which makes very reliable predictions."  It is always possible that some observation will be made that will falsify a theory no matter how well established it is.

Finding observations that falsify well-established theories does happen on occasion, but it is very, very rare.  The better established a theory is, the rarer it is to find observations that contradict it.  For less-well-established theories, finding contradictory data happens regularly.  This is also often cited, especially by religious apologists, as a deficiency but it's not.  It's how science makes progress.  In fact, the best established theory in the history of science is the Standard Model of particle physics.  We know that the Standard Model is deficient, but not because it makes predictions that are at odds with experiment -- quite the opposite in fact.  The Standard Model has never (as of this writing) made a false prediction since it was finalized in the 1970s.  The reason we know it's deficient is not because it makes false predictions (it doesn't, or at least hasn't yet) but rather because it doesn't include gravity.  We know gravity is a thing, but no one has been able to figure out how to work it into the Standard Model.  And one of the reasons we haven't been able to do it is that we have no experimental data to give us any hints as to where the Standard Model might be wrong.  This is actually considered a major problem in physics.

That's it, my top three myths about science debunked.  Henceforth anyone who raises any of these in my presence gets a dope slap (or at least a reference to this blog post).

Monday, April 01, 2024

Feynman, bullies, and invisible pink unicorns

This is the second installment in what I hope will turn out to be a long series about the scientific method.  In this segment I want to give three examples of how the scientific method, which I described in the first installment, can be applied to situations that are not usually considered "science-y".  By doing this I hope to show you how the scientific method can be used, without any special training and without any math, to nonetheless solve real problems.

Example 1

In my inaugural blog post twenty years ago I wrote:

The central tenet of science in which I choose to place my faith is that experiment is the ultimate arbiter of truth. Any idea that is not consistent with experimental evidence must be wrong.

This was an adaptation of Richard Feynman's definition of science, given in the opening paragraphs of the first chapter of his Lectures on Physics.  Note that Feynman did not write the Lectures.  The Feynman Lectures were not written as a book; they are transcripts of lectures that Feynman gave while teaching an introductory physics course at Caltech in the early 1960s.  These lectures were recorded, and it is worth listening to a few of them to get a feel for what the original source material sounds like.

It is worth reading (or listening to) Feynman's introduction in its entirety.  It is only nine paragraphs, or nine minutes.

If you read the transcript you will see this:

The principle of science, the definition, almost, is the following: The test of all knowledge is experiment. Experiment is the sole judge of scientific “truth.”

Note that the word "truth" is in quotes.  Why?  One possibility is that these are "scare quotes", an indication that the word "truth" is being used "in an ironic, referential, or otherwise non-standard sense."  This matters because it materially changes the meaning of what Feynman is saying here.  Without the scare quotes, the passage implies that there exists a transcendent metaphysical Truth with a capital T and that science uncovers this Truth.  If that is what Feynman intended, then this would contradict what I said in the first installment: that science converges towards *something*, but that something may or may not be metaphysical Truth.

You might be tempted to argue that there is no way that I -- or anyone else for that matter -- could possibly know what Feynman actually meant, but that is not true.  We can.  How?  By going back to the original source material: there is a recording of Feynman actually speaking those words.  If you listen to it, you will find that the transcript is actually not a word-for-word transcription of what Feynman said.  Here is what he actually said, word-for-word:

Experiment is the sole judge of truth, with quotation marks...

and he goes on from there to say some other things that are not included in the transcript.  I'm not going to attempt to transcribe them because there are a lot of clues regarding his intent in his cadence and tone of voice which I cannot render as text.  But one thing should be clear: the use of scare quotes in the transcript is justified because Feynman specifically said so.

Does this prove that this is what Feynman meant?  No.  Nothing in science is ever proven.  It's possible that Feynman, because he was speaking off-the-cuff, said something he didn't intend.  It's possible that he was under the influence of alien mind-control technology.  It's possible that Richard Feynman never actually existed, that he was a myth, and all of the evidence of his existence is actually the product of a vast conspiracy.  But if you think that any of these possibilities are likely enough to pursue, well, good luck to you because I predict you're going to be wasting a lot of time.

Discussion

I'm going to break down the previous example in some painstaking detail to show how it is an instance of the process I described before.

1.  Identify a Problem.  Recall that a Problem is a discrepancy between your background knowledge and something you observe.  In this case, the discrepancy was between the use of scare quotes in the printed version of the Feynman lectures and the background knowledge that this is a transcript of something Feynman said rather than something that he wrote.

2.  Make a list of simplifying assumptions.  In this case there weren't any worth mentioning.

3.  Try to come up with a plausible hypothesis.  In this case there were two: one was that this was somehow a faithful rendering of what Feynman intended, and the other was that this was an editorial embellishment inserted by whoever produced the transcript.

4.  Subject your hypotheses to criticism.  I skipped that step because this is just a trivial example and not worth asking other people to spend any time on.

5.  Adjust your hypotheses according to the results of step 4.  Not applicable here.

6.  Do an experiment to try to falsify one or more of your hypotheses.  In this case, we had the original audio recording, and so we could go back to the source to hear what Feynman actually said.  And it turned out in this case that this new data actually falsified *both* of our initial hypotheses.  The transcript is *neither* a verbatim rendering of what Feynman said, *nor* is it an editorial embellishment by the transcriber.  Instead, it is a faithful rendering of Feynman's stated intentions, indeed arguably more faithful than a verbatim transcript would have been because (and note that here I am once again engaging in a tiny little example of applying the scientific method in a very abbreviated way) he had to work around a limitation of the medium he was using, namely, speech, which has no way of explicitly rendering punctuation.

7.  Use your theory to make more predictions.  I skipped that step here too.

Example 2

The second example comes from a real incident from when I was in elementary school.  My family emigrated from Germany to Lexington, Kentucky, in the late 60s.  My parents were secular Jews.  I spoke virtually no English.  As you might imagine in a situation like that, I was not exactly the most popular kid in school.  I got bullied.  A lot.  It went on for five years until we moved to Oak Ridge, Tennessee, at which point I was looking forward to making a fresh start.  I was no longer obviously a foreigner.  I spoke fluent English.  I was familiar with the culture (or so I thought).  I would not have my reputation as a punching bag following me around.  So I was rather dismayed when, within a few months in my new home, I was once again being bullied.

Here was a Problem.  I had a theory: I was being bullied in Lexington because I was a foreigner, and the culture wasn't welcoming to foreigners, especially not German Jews, who were just half a notch above blacks in the social pecking order.  But in Oak Ridge it was not obvious I was a foreigner.  I spoke unaccented English, I was white, I never went to synagogue or did anything else to identify myself as a Jew.  So why was I still being picked on?

To make a very, very long story short, I began to consider the possibility that my original hypothesis was fundamentally wrong, and that the reason I was being picked on had nothing to do with what I was but rather with something I was doing, and that I was engaging in the same provocative behavior (whatever that might be) in Oak Ridge as I had in Lexington.  In retrospect this was, of course, the right answer, but it took me a very long time to figure it out.  It's hard enough to think straight when you are being bullied all the time, and it's even harder when you are in the emotional throes of adolescence and puberty.  But I eventually did manage to figure out that the reason I was being bullied was quite simply that I was behaving like a jerk.  When I stopped acting like a jerk, the bullying stopped.  Not right away, of course.  Like I said, it took a very, very long time, and I'm leaving out a lot of painful details.  But I eventually did manage to figure it out and become one of the cool kids (or at least one of the cool nerds).

The point of this story is that I solved a real-world social problem using the scientific method without even realizing that I was doing it.  This happened in junior high school.  I didn't have the foggiest clue about the scientific method, hadn't even encountered it in science classes, and even if I had, the idea that it would be applicable to something besides chemistry experiments would have been laughable.  It is only in retrospect that I realized that this is what I had done.  And by coming to that realization, I have since been able to do the same thing deliberately in my day-to-day life to great effect.  I think anyone can do this, especially with a little coaching, which is one of my motivations for putting the effort into writing all this stuff.

Example 3

My third example comes from philosophy, and I'm putting it in here because it's kind of fun, but also because it actually turns out to be a generally useful guide for spotting certain kinds of invalid arguments.  The Problem we are going to address is: how did the universe come into existence?  (This qualifies as a Problem because the universe obviously does exist, and so it must have somehow come into existence, but we don't know how.)

The standard scientific answer is that we don't know.  Something happened about 13 billion years ago that caused the Big Bang (which is more appropriately called the Everywhere Stretch, but that's another story for another time) but we have no idea what that something is.  Religious apologists are quick to seize on this gap in scientific knowledge as an argument for God, but that is not what I want to talk about here.  (I promise I'll come back to it in a future installment.)  Instead, I want to explore a different hypothesis, one which is obviously ridiculous, and talk about how we can reject this argument in a more principled way than to point to its obvious ridiculousness.

The hypothesis goes by the name of last-Thursday-ism.  The hypothesis states that the universe was created last Thursday in the exact state it was then in.  Before that, nothing existed.  The reason you might think otherwise is that you were created with all your memories intact to give you the illusion that something existed before last Thursday when in fact it did not.

Like I said, obviously -- indeed, intentionally -- ridiculous.  But just because something is obviously ridiculous doesn't necessarily mean it's wrong.  Quantum mechanics seems obviously ridiculous too when you first encounter it, and it actually turns out to be right.  So being obviously ridiculous is not a sound reason for rejecting a hypothesis.

Can you think of a more principled argument for rejecting last-Thursday-ism?  Seriously, stop and try before you read on.  Remember that last-Thursday-ism is, by design, consistent with all currently observed data.

You might be tempted to say that last-Thursday-ism can be rejected on the grounds that it is unfalsifiable, but all it takes to fix that is a minor tweak: last-Thursday-ism predicts that if you build just the right kind of apparatus it will produce as output the date of the creation of the universe, and so the output of this apparatus will, of course, be last Thursday (assuming you get it built before next Thursday).  The cost of this apparatus is $100M (which is a bargain if you compare it to what the particle physicists are asking for nowadays).

Here's a hint: consider an alternative hypothesis which I will call the last-Tuesday hypothesis.  The last-Tuesday hypothesis states (as you might guess) that the universe was created last Tuesday.  Before that, nothing existed.  The reason you think it did is that you were created with all your memories intact to give you the illusion that something existed before last Tuesday when in fact it did not.

You could, of course, substitute any date.  Last Monday.  November 11, 1955.  Whatever.  Last-Thursday-ism is not one hypothesis, it is one of a vast family of hypotheses, one for each instant in time in the past.  And at most one of that vast family can possibly be right.  All the others must be wrong.  So unless there is some way to tell a priori which one is right, the odds of any particular one of them, including last-Thursday, being the right one are vanishingly small.  And that is why we are justified in rejecting the last-X hypothesis for any particular value of X.
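
Just to put a number on "vanishingly small" (this back-of-the-envelope sketch is mine, not part of the standard presentation of the argument): suppose we consider only one candidate creation instant per second, looking back a mere 10,000 years, and spread our belief evenly over all of them.

# A toy calculation in Python: a uniform prior over one
# "the universe was created at instant X" hypothesis per second,
# over a (very conservative) window of the past 10,000 years.

SECONDS_PER_YEAR = 365.25 * 24 * 60 * 60
n_hypotheses = int(10_000 * SECONDS_PER_YEAR)

print(f"candidate creation instants: {n_hypotheses:.2e}")
print(f"prior for last Thursday in particular: {1 / n_hypotheses:.2e}")
# => about 3e-12.  Widen the window to the age of the universe, or
# allow finer time resolution, and the odds only get (much) worse.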

Note that this is true even if the prediction made by the tweaked version of last-Thursday-ism turns out to be true!  If we build the apparatus described above, it might very well output "last Thursday".  But this will almost certainly not be because last-Thursday-ism is true (because it almost certainly isn't), but for some other reason, like that the apparatus just happens to be a design for a printer that prints out "last Thursday", which has absolutely nothing to do with when the universe was created.

Invisible Pink Unicorns

That last example may have seemed like a silly detour, but you will be amazed at how often hypotheses that are essentially equivalent to last-Thursday-ism get advanced.  I call these "invisible pink unicorn" hypotheses, or IPUs, because the canonical example is that there is an invisible pink unicorn in the room with you right now.  The only reason you can't see it is that -- duh! -- it's invisible.  This hypothesis can be rejected on the same grounds as last-Thursday-ism.  Why pink?  Why not green?  Or brown?  Or mauve?  Why a unicorn?  Why not an elephant?  Or a gryphon?  Or a centaur?  Unless you have some evidence to make one of these variations more likely than the others, they can all be rejected on the grounds that even if one of them were correct, the odds that we would choose it from among all the alternatives are indistinguishable from zero.

IPUs are everywhere, especially among religious apologists.  The cosmological argument, the fine-tuning argument, the ontological argument, etc. etc. etc. -- pretty much any argument of the form, "We cannot imagine how our present state of existence could possibly have arisen by natural processes (that is the Problem), therefore God must exist."  But "the universe was created by God" is just one of a vast family of indistinguishable hypotheses: we cannot imagine how our present state of existence could possibly have arisen by natural processes, therefore Brahma must exist.  We cannot imagine how our present state of existence could possibly have arisen by natural processes, therefore Mkuru must exist.  And, as long as I'm at it: we cannot imagine how our present state of existence could possibly have arisen by natural processes, therefore an invisible pink unicorn with magical powers to create universes must exist.

Note that this in no way proves that God -- or Brahma or Mkuru or the Invisible Pink Unicorn -- does not exist.  It is only meant to show why certain kinds of arguments that are often invoked in favor of their existence are not valid, at least not from a scientific point of view.

This sin is by no means unique to religious apologists.  Even professional scientists will advance IPU hypotheses.  This happens more often than one would like.  String theory is the most notable example.  It is an almost textbook example of an IPU.  String theory is not a single theory, it is literally a whole family of theories, all of which are indistinguishable based on currently available data.  Some string theorists will argue (indeed have argued) that string theory can be tested by building yet another particle accelerator for the low, low price of a few billion dollars, and maybe it can.  I don't pretend to understand string theory.  But the overt similarity to last-Thursday-ism should make anyone cast a very jaundiced eye on the claims being made, despite the fact that the people making them aren't crackpots.  Having scientific credentials doesn't necessarily mean that you actually understand or practice the scientific method.


Saturday, March 16, 2024

A Clean-Sheet Introduction to the Scientific Method

About twenty years ago I inaugurated this blog by writing the following:

"I guess I'll start with the basics: I am a scientist. That is intended to be not so much a description of my profession (though it is that too) as it is a statement about my religious beliefs."

I want to revisit that inaugural statement in light of what I've learned in the twenty years since I first wrote it.  In particular, I want to clarify what I mean when I say that being a scientist is a statement about my religious beliefs.  I thought that there would be enough consensus about the meaning of "science" and "religious belief" that this would not be necessary, but that turns out to be one of the many, many things I was wrong about back then.  In this post I'm going to try to fix that, or at least start to.

Let me start with the easy part: By "religious beliefs" I do not mean to imply that science is a religion in the usual sense.  It isn't.  Religions generally involve things like the worship of deities, respect for the authority of revealed wisdom, and the carrying out of prayer and rituals.  Science has none of that, not because science rejects these things *a priori*, but because when you pursue science you are invariably (but not inevitably!) led to the conclusion that there are no deities active in our universe, and therefore no good reason to accept the authority of revealed wisdom, and hence not much point spending valuable time on prayer and ritual (except insofar as one might find satisfaction in pursuing prayer and ritual for their own sake).

What I *do* mean by "religious beliefs" is that being a scientist -- pursuing science, engaging in the scientific method -- need not be a profession.  It can also be a *way of life*.  I believe that science provides a *complete worldview* applicable to all aspects of life, not just ones that are commonly regarded as "science-y".  Furthermore, I believe that this worldview can be practiced by anyone, not just professional scientists.  You don't even have to be good at math (though it doesn't hurt).  And I also think that if more people did this, the world would be a better place.

In particular, I believe that science can be applied to answer questions about *morality*, and I claim that if you do this properly the results are *better* than those produced by traditional religions.  I also believe that science can provide satisfactory answers to deep existential questions, like what is the meaning of life.  But that will be a very long row to hoe.  For now I want to start simply by describing what science actually *is* because it turns out that there are a lot of misconceptions about that, particularly among the religious.

But let me start at the beginning.

What is science?


Science is a process, a method, for solving a particular kind of problem.  The most succinct description I have found of the scientific method is:

Find the best explanation that accounts for all the observed data, and act as if that explanation is correct until you encounter contradictory data or a better explanation.

That is obviously an extreme oversimplification.  It is roughly akin to explaining how to play golf by saying, "Swing the club in such a way that it makes the ball go into the hole."  That's not wrong, but by itself it's not very useful either.

Golf actually turns out to be a pretty good analogy.  The scientific method is a skill, just like golf, and like golf, it is something anyone can do at a beginner level, but achieving mastery takes time and effort.  Unlike golf, the scientific method is good for a lot more than just getting balls into holes.  Golf is a uni-tasker.  Science is the ultimate multi-tasker.  You can even use it to improve your golf game!

Also like golf, you can do it both for fun and for profit.  You don't have to be a professional golfer to enjoy golf and to get something out of it.  Doing science can be rewarding for its own sake, but there is also a significant side-benefit because, as I said, science is a method for solving problems.  So not only can it be fun and challenging and engaging, it can also give you solutions to problems.  And the range of problems that the scientific method can be applied to is much broader than most people realize, and using the scientific method is generally easier than a lot of people realize.  In fact, you are almost certainly already doing it, possibly without even realizing it.  Let me show you.

An example

Look around you (or, if you're blind, feel around you).  You will see (or at least think you see) things -- people, tables, cars, buildings, trees.  These things (seem to) exist in three-dimensional space, and occupy specific parts of that space, that is, the world is such that it makes sense to say things like "this tree is over here" and "that car is over there (and moving in that direction)".

Moreover, you can interact with some of the things around you in very complex and interesting ways.  There are things called "humans" that you can talk to and they will talk back and the things they say to you and that you say to them seem to convey some kind of meaning that corresponds to the properties of other things.  You can say, for example, "Do you see that tree over there?" and a human might respond, "Yes.  I think it's a maple."  And this will resonate with you in a way that saying the same thing to a dog and hearing it bark will not.

How can you account for all this?  How do you explain it?  Well, the obvious way to explain it is that the things you see are real, that is, that there really are trees and cars and other humans "out there" in point of actual physical (and maybe even metaphysical) fact.  This explanation is so obvious that it is hard to even conceive of an alternative.  Some of you might even be thinking to yourselves, "Well, duh, of course trees are real.  This guy must be some kind of moron if he thinks that is a profound observation."

The explanation that the things you perceive are real is obvious and compelling, but it is not the only possible explanation.  Another possible explanation is that you are living in the Matrix, a very high quality simulation created by some advanced alien race with technology vastly superior to our own.  That might seem unlikely, but it's possible, and it's not immediately obvious how you could definitively rule it out (or even that it is actually false!).

It turns out that neither one of these explanations is actually correct.  Both of them can actually be ruled out by experiment.  But it turns out that for the most part this doesn't actually matter.  Remember, the scientific method is not "find the correct explanation", it is "find the best explanation that accounts for all the data", and "objects appear to be real because they actually are real" is a very good explanation that is consistent with, if not all of the data, then at least the data that most people have access to.

Notice also that the second part of the scientific method is not, "accept that this explanation is correct", it is, "act as if this explanation is correct", and then there is the final caveat, "until you encounter contradictory data or a better explanation".

So the scientific method leads you naturally, without even being aware of it, to act as if the things you perceive are actually real, despite the fact (and here I have to ask you to temporarily suspend your disbelief and just take my word for it) that this isn't actually true.  However, despite the fact that it isn't actually true, acting as if objects are real will not steer you far wrong in day-to-day life.

---

Here is another example.  This one is taken from history.  Imagine that you are living some time before the invention of the telescope.  You look up in the night sky and you can see the sun, moon, and stars.  Most of the stars stay in the same location (relative to each other) except that they all appear to rotate around one star -- if you happen to be living in the northern hemisphere.  In the southern hemisphere they appear to turn around a point in the sky that is not marked by any star.  (Explain that, flat-earthers!)

All of this is already strange enough, but to compound the mystery there are five -- and only five -- things that look like stars but don't move in the same way as all the others.  These were called "wanderers", or "planetai" in ancient Greek.

Two of these planetai, Venus and Mercury, are only ever seen near the horizon around sunset and sunrise.  The other three -- Mars, Jupiter and Saturn -- can be seen throughout the night.  These three all move generally in the same direction (relative to the other stars) but occasionally they stop and move backwards for a while, most dramatically Mars.

How do you account for all this?

That was a question that occupied the finest human minds for thousands of years, and they grappled with it to varying degrees of success.  The explanation that ultimately prevailed for well over 1000 years was produced by Claudius Ptolemy, a Greek astronomer living in Alexandria in the second century CE.  The details of Ptolemy's explanation don't matter much.  The thing that matters here is that it was based on the "fact" that what goes on in the heavens is radically different from what happens here on earth.  I put "fact" in scare quotes here because with the benefit of modern knowledge we know that this is not in fact a fact.  But from the perspective of someone living before telescopes, not only is it a fact, it is an obvious one.  The earth is dirty, the heavens are clean.  Any source of light on earth eventually extinguishes itself, but the lights in the heavens burn forever.  Anything moving on earth eventually stops, but the heavenly bodies move forever without ever coming to a halt.  And finally and most obviously, the behavior of things on earth is governed by the law that "what goes up must come down."  Some things like birds and cannon balls can rise above the surface of the earth, but they can only go so far, and they can only stay aloft temporarily.  Eventually the cannon ball will fall and the bird will roost (or die).  But the objects in the heavens stay there forever.  With one exception.  See if you can figure out what it is before I tell you.

[Spoiler alert]

Meteorites.  Every now and then a stone would fall from the heavens.  Where they came from was a deep mystery because on the one hand they looked like ordinary rocks, but on the other hand they came from the heavens which, as everybody knew because it was just obvious, were made of very different stuff governed by very different laws than those which pertained here on earth.

It was not until Isaac Newton in 1687 that this mystery was solved.  It turned out that the "obvious fact" that what happened in the heavens was radically different from what happens on earth was actually wrong.  The heavenly bodies are in fact made of the same stuff that things on earth are made of, and are governed by the same laws.  Today we take this for granted.  In 1687 it was a radical breakthrough, the dawn of modern science.  And one of the reasons it was accepted is that it explained the previously-mysterious observation of rocks falling from the sky.

---

At this point I want to go on a small tangent to put this event in its proper historical perspective.  As I write this, in March of 2024, it has been 336 years since Newton's Principia was published.  That might sound like a long time, but it's actually not.  I am almost 60 years old, so I have been alive for almost 20% of the total history of modern science.  Some of the most fundamental scientific theories are surprisingly recent.  The existence of atoms, for example, was controversial as recently as the early 20th century.  Albert Einstein died in 1955, slightly before I was born, but well within current living memory.  Many of the pioneers of quantum mechanics were alive when I was born.  I have personally met and spoken with Freeman Dyson, who died a mere four years ago.  Many of the experiments that provided the foundation for quantum computation were done while I was in high school.  The frontiers of quantum computation and artificial intelligence are being explored even as I write this.  We are very much still in the midst of the scientific revolution.  Quantum computing and AI are today what digital computers were in 1955, what relativity was in 1905, and what thermodynamics and steam power were in 1855.

One of the things that has happened in the 336 years since Principia was first published is that science has become an industry (much like golf has).  Isaac Newton was the first modern scientist, but he was not a professional scientist.  There was no such thing back then.  What we call "science" today was called "natural philosophy" then, and it included all kinds of things that would not be considered science today, like alchemy and astrology.  If you had asked Newton to describe the "scientific method" he would have had no idea what you were talking about.

With that in mind, I invite you to consider the following question: why is science a thing?  Why are there arguments over the definition of "science" but not "astrology" or "alchemy"?  Why is there so much more prestige (and money!) surrounding science than alchemy or astrology?  Sure, there are a few people making money as astrologers, but try getting an NSF grant to find a better way of casting horoscopes and you will get laughed out of the room.

The answer is: science is more effective at producing useful results than alchemy or astrology.  If you are reading this before the coming climate apocalypse, then you are steeped in technology.  (And if you are reading it after, take this as testimony that there was a time before the climate apocalypse when technology was ubiquitous.)  Computers, internal combustion engines, air conditioning, the Internet -- all of these things grew out of science and not alchemy or astrology.  Science is a thing because it works.

Which raises the obvious question: why does it work?  Why is science so much more effective at producing useful results than alchemy or astrology, or, for that matter, any other form of human endeavor?

To answer that, I will need to describe the scientific method in a little more detail.  But before I do that I need to first explain why describing the scientific method is not as straightforward as it might seem.

Why describing the scientific method is hard

If you seek out descriptions of the scientific method on the web you will find that they do not all agree with each other.  For example, Wikipedia says:

The scientific method involves careful observation coupled with rigorous skepticism, because cognitive assumptions can distort the interpretation of the observation. Scientific inquiry includes creating a hypothesis through inductive reasoning, testing it through experiments and statistical analysis, and adjusting or discarding the hypothesis based on the results.

However, if you dig deeper, you will find that not everyone agrees with this definition.  For example, Karl Popper, a highly regarded philosopher of science, argues that induction is not part of the scientific method, that it is a myth.  As an even more extreme example, if you go to Answers in Genesis, a creationist web site, you will find a very different description:

Science means “knowledge” and refers to a process by which we learn about the natural world. There are two different kinds of science; observational and historical. Historical science deals with the past and is not directly testable or observable so it must be interpreted according to your worldview.

The Bible is the foundation for science. Non-Christians must borrow biblical ideas—such as an orderly universe that obeys laws—in order to do science. If naturalism were true—if nature is “all there is”—then why should the universe have such order? Without the supernatural, there is no basis for logical, orderly laws of nature.

How can you tell who to believe?  Specifically, why should you believe what I am about to tell you?

One possible answer is that I was once a professional scientist.  I was an AI researcher at JPL for 12 years, from 1988 to 2000.  I made my living publishing peer-reviewed papers.  I was fairly successful.  I was the most referenced CS researcher in all of NASA (according to citeseer), and I held that title for many years even after I left.  I advanced to the rank of Principal, which is "awarded to recognize sustained outstanding individual contributions in advancing scientific or technical knowledge", which came with the most coveted perk at JPL: on-lab parking.

But that is not a very good answer.  Just because I was able to make my living as a scientist doesn't necessarily mean I understood how the scientific method works.  Being a successful professional scientist has as much (maybe even more) to do with politics as it does with science.  In fact, when my career advancement began to turn more on politics than science, that is what made me decide to quit.

Another possible answer is that what I am about to tell you aligns with things said by even more illustrious names like Karl Popper and Richard Feynman.  Their authority is much better than mine, but it's still an argument from authority, and the bedrock principle of science is that experiment and not authority should be the final arbiter of truth.  At least that's what Feynman said, so it must be true, right?

Ironically, the right answer can be found in the Bible, in the Gospel according to Matthew, chapter 7: by their fruits ye shall know them.  Remember, the reason we care about science at all is because it is effective at producing useful results.  The reason you should believe what I am about to tell you is that it will explain this effectiveness.  The explanation will not be complete, because that would take much more than one blog post, and it will be oversimplified.  But it will nonetheless explain the effectiveness of the scientific method, at least to some extent.  In other words, the scientific method can be applied to itself to explain its own effectiveness.  And that is the reason you should believe it.

By the way, an important thing to keep in mind as you read the next section: the scientific method is a *natural process*.  It is a discovery, not an invention.  It is something that happens, something that people (and even animals!) do, at least to some extent, without even being aware of it.  You can, in fact you almost certainly do, engage in the scientific method instinctively, just as you can probably hit a golf ball without any training.  But you'll be a lot better at science (and golf!) with training and practice.  So let's start.

The scientific method

The scientific method consists of seven steps.  It is important to follow these steps carefully and deliberately, otherwise you'll just end up with the scientific equivalent of a wild swing.

Step 1:  Identify a Problem.  I'm capitalizing Problem because it's a term of art which has a more specific meaning than it does in common usage.  A capital-P Problem is a discrepancy between your background knowledge, everything you believe to be true at the present moment, and something you observe.  Examples of currently open scientific Problems include things like, "Galaxies appear to rotate faster than they should based on the amount of observable matter they contain", and "There is life on earth, but we don't know how it started."  But Problems don't have to be Big Scientific Questions.  They can be as prosaic as, "I'm doing a good job at work but I'm not getting promoted" or "My wife seems to be mad at me even though she doesn't have any reason to be."

Note that the existence of Problems is not a shortcoming of the scientific method.  To the contrary, identifying a Problem is the crucial first step of the process.  I mention this because a common criticism of science among creationists is to point to Problems, things that science does not yet understand, and cite them as a reason for not trusting science at all.  This argument is not just wrong, it actually betrays a profound ignorance of how science actually works.  The only way to not have Problems is to already understand everything, to be omniscient.  The existence of Problems is a feature, not a bug.

(A creationist would no doubt respond: but we have an omniscient source of knowledge: God!  To which I respond: OK, but your access to this omniscient source of knowledge doesn't seem to give you much leverage towards producing useful results.  That is a Problem!)

Step 2:  Make a list of all simplifying assumptions that you are going to make.  For example, in the vast majority of situations here on earth it is safe to ignore relativity and quantum mechanics, but it's important to keep in the back of your mind that you are ignoring them.

Step 3:  Try to come up with a plausible *hypothesis*, a *guess* at an explanation that is consistent *both* with all the data that produced your background knowledge, *and* the discrepancy that constitutes the Problem you are addressing.  At the frontiers of science you will often get stuck at this point because coming up with *any* plausible hypothesis is considered a major achievement.  Sometimes this will happen when using the scientific method in day-to-day life.  That's OK.  Getting stuck temporarily is a normal part of the process.

Note that the term "background knowledge" is a little misleading, because very often a plausible hypothesis will be that some part of your background "knowledge" is wrong.  The use of the term "scientific knowledge" is fairly common, and it implies that this knowledge is immutable and not open to question, but that is not true.  All "knowledge" in science is tentative and subject to being overturned by new data or better hypotheses.  But this doesn't mean that we don't know anything.  Some scientific results are so well established, and backed up with so much evidence, that the odds of their being wrong, while not zero, are extremely low, and the evidence needed to show that they are wrong would have to be truly extraordinary.  We will sometimes abbreviate that by calling such a result "knowledge" or "established scientific fact" even though what we really mean is "current-best explanation, one which is so well established that the odds of overturning it, while not quite zero, are so close to zero as to make no practical difference."

Step 4:  Subject your hypotheses/guesses to criticism.  In other words, try as hard as you can to show why each of your hypotheses is *wrong*.  Anything is fair game here, including asking other people to poke holes in your ideas.  In fact, that is encouraged.  You can also participate in the scientific method by helping to poke holes in other people's ideas (this is called "peer review").

Note that -- and this is very important -- you are not trying to show that your hypothesis is valid or correct!  The object here is to do the exact opposite: to try to show that your hypothesis is wrong.  Of course, you are hoping that you will fail in this endeavor, but you must nonetheless try as hard as you can and in good faith to debunk yourself.

There are a few rules about scientific criticism:

Rule 1: you have to separate your criticism of the hypothesis from criticism of its presentation.  The former is vastly more valuable than the latter.  Presentation matters too, but much less, and criticizing it at the expense of the substance gets really annoying.

Rule 2: you have to criticize within the bounds of the assumptions laid out in step 2.  So in this case, if you want to criticize the hypothesis I am laying out here, it's out of bounds to say, "But you've ignored quantum mechanics."  Yes, I *know* I have ignored quantum mechanics.  I *said* I was going to ignore quantum mechanics.  Your pointing that out again is not helpful.

Rule 3: you can't change the problem statement.  So, for example, you can't criticize the hypothesis I am laying out here on the grounds that science has not (yet) produced answers to various political and social problems.  The problem I'm addressing here is: why does science appear to be so effective at producing *any useful results at all* (and in producing technology in particular)?  Any criticism not having to do with that is out of bounds.  (This is the reason it is important to have a clear, explicit, and unambiguous problem statement.)

Beyond that, pretty much anything is fair game.  Here are three particularly common forms of valid criticism:

Valid criticism 1: The hypothesis is inconsistent with observation.  It doesn't matter how plausible or mathematically elegant your hypothesis is, if it doesn't agree with experiment (subject to the assumptions laid out in step 2) it goes in the hopper.

Valid criticism 2: The hypothesis is unfalsifiable.  It must be possible, at least in principle, to do an experiment whose outcome would show that the hypothesis is *wrong*.  If there is no possible experiment that could be done whose outcome could be at odds with the hypothesis, then it is not a valid scientific hypothesis.  (I call this the "invisible pink unicorn" or IPU rule.)

Valid criticism 3: The hypothesis contains unnecessary detail.  You can always make any hypothesis consistent with all observations by adding additional details, but a high quality theory will be parsimonious: it will account for a lot of data with as few details as possible.

(In fact, since you live in the information age, you can actually think of the whole scientific method as a data compression process: it takes a vast amount of raw data and boils it down to the minimum amount of information needed to reproduce that data.  This turns out to be more than just a casual observation, but rather a Very Deep Insight that sheds light on why the scientific method works.  But before I can get into those details I will have to talk about the theory of computation and information, and we're nowhere near ready for that.)
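
Here is a toy rendering of that compression view (my illustration; the bit counts are invented for the sake of the example): score each candidate theory by the bits needed to state it plus the bits needed to patch up whatever it gets wrong, and prefer the lowest total.

# A toy "two-part" score in the spirit of minimum description length:
#   total cost = bits to state the theory
#              + bits to describe the observations it gets wrong.
# The numbers are made up purely for illustration.

def total_cost(theory_bits: int, residual_bits: int) -> int:
    return theory_bits + residual_bits

# A parsimonious theory: a few compact laws, tiny residuals.
compact_theory = total_cost(theory_bits=500, residual_bits=50)

# A theory padded with unnecessary detail (epicycles upon epicycles):
# it also fits the data, but only by encoding the data into itself.
padded_theory = total_cost(theory_bits=50_000, residual_bits=50)

print(f"compact theory: {compact_theory} bits")
print(f"padded theory:  {padded_theory} bits")
# The lower total wins: it compresses the data instead of restating it,
# which is exactly what valid criticism 3 is getting at.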

Step 5: Consider whether the criticism you have produced or received is valid.  If it is, go back to step 3 and try again.

Step 6:  Sometimes you will come up with more than one hypothesis that withstands all of the criticism that anyone can think to throw at it.  In that case, examine the predictions that these hypotheses make and find one on which they differ.  Then do an experiment to see which hypothesis makes the correct prediction, and discard the others.  Note that it is entirely possible that the results will eliminate all of the surviving candidates, in which case you will need to go back to step 3.  But if this doesn't happen, if one hypothesis survives, then, congratulations, your one remaining hypothesis has now been promoted to the status of a Theory!  A Theory is a hypothesis that has withstood all attempts to invalidate it.  In science, "theory" is a synonym for "knowledge" or "fact", subject to the caveats described above.

Finally, the last step is:

Step 7:  Use your Theory to make more predictions and test those against experiment too.
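
If it helps to see the shape of the whole process at once, here is a sketch of steps 3 through 7 as code (my own condensation; the functions are stand-ins, not a real library), run on a cartoon version of the wandering-stars Problem from earlier:

# The hypothesize-criticize-test loop as runnable Python.  Everything
# here is a placeholder: hypotheses are just labels and the
# "experiment" has a canned outcome.  The point is the control flow.

def run_method(problem, assumptions, propose, criticize, experiment):
    while True:
        candidates = propose(problem, assumptions)       # step 3
        candidates = criticize(candidates, assumptions)  # steps 4 and 5
        survivors = experiment(candidates)               # step 6
        if len(survivors) == 1:
            return survivors[0]  # a Theory: go make predictions (step 7)
        # zero survivors (or a tie we can't break yet): guess again

problem = "five star-like lights wander against the fixed stars"
assumptions = ["naked-eye observations only"]

def propose(problem, assumptions):
    return ["the heavens obey separate laws", "same laws everywhere"]

def criticize(candidates, assumptions):
    return candidates  # both survive armchair criticism

def experiment(candidates):
    # Meteorites: heavenly stuff behaving exactly like earthly stuff.
    return [h for h in candidates if h == "same laws everywhere"]

print(run_method(problem, assumptions, propose, criticize, experiment))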

It turns out that if you follow this process, by the time you get to step 7, it is extremely rare for the results of those subsequent experiments to contradict the predictions made by the theory.  And that is the magic.  That is the reason that science is effective at producing useful results.  It is because it produces theories with predictive power.  It literally gives you the gift of prophecy, and if you have that, you can choose your actions to more reliably produce results that you want.

This of course raises the obvious question of why this procedure works, and specifically why this particular procedure works so much better than anything else anyone has been able to come up with.  That question also turns out to have an answer, but it is a much, much longer and more complicated answer.  It involves quantum mechanics, information theory, and the theory of computation.  I'm planning future installments about all of those, but if you're impatient this story is told reasonably well in David Deutsch's book, "The Fabric of Reality" (though what he says about parallel universes needs to be taken with a grain of salt).

Does science lead to Truth?

That science produces Theories with predictive power is simply an observed empirical fact.  As time goes by, it gets harder and harder to find Problems, harder and harder to find observations that cannot be explained and predicted by existing Theories, and harder and harder to come up with new Theories that tie up the fewer and fewer remaining loose ends.  It is possible that some day we might even come up with a Theory of Everything that will tie up the last remaining loose end, and the whole project will be complete.

There is another empirical observation we can make about the scientific method: it converges.  Not only does it produce Theories with more and more predictive power, but on those rare occasions when a new Theory completely overturns an old one, the old theory (uncapitalized now because it has been shown to be wrong) always turns out to be a good approximation to the new one under certain circumstances, and in particular, under the circumstances that pertain here in our solar system.  To find phenomena that cannot be explained with current scientific Theories you have to go far outside our solar system and look at neutron stars, black holes, and even other galaxies.
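
To put a number on "good approximation" (a quick illustration of my own): special relativity says a moving clock runs slow by the Lorentz factor gamma, while Newtonian mechanics in effect says gamma is exactly 1.  Here is how far apart the two theories are at everyday speeds.

# How much do Newton and Einstein disagree at everyday speeds?
import math

C = 299_792_458.0  # speed of light in m/s

def lorentz_gamma(v: float) -> float:
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

for label, v in [("highway car (30 m/s)", 30.0),
                 ("airliner (250 m/s)", 250.0),
                 ("Earth's orbital speed (30 km/s)", 30_000.0)]:
    print(f"{label}: gamma - 1 = {lorentz_gamma(v) - 1:.2e}")

# Even at Earth's 30 km/s orbital speed, gamma differs from 1 by only
# about five parts in a billion; for a car it's roughly five parts in
# 10^15.  That is why the "wrong" old theory is still perfectly good
# for nearly everything inside our solar system.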

One possible explanation for these empirical observations is that there is an actual metaphysical Truth out there, and that the thing that the scientific method is converging towards is this metaphysical Truth.  That's a hypothesis, a possible explanation for the empirical observation that science converges, at least so far.  This hypothesis makes a prediction: that science will continue to converge, and may some day even reach the point where there are no more Problems, where it can explain all observations.  This hypothesis is falsifiable, so it's a valid scientific hypothesis.  And for the last 336 years no data has contradicted it.

Does that prove that science finds metaphysical Truth?  No.  Nothing is ever proven in science.  All knowledge is tentative and subject to being overturned by new data or a better explanation.  But it is, at the moment, the current-best explanation.

This is not to say that there are no extant Problems with the hypothesis that science converges towards metaphysical truth.  There are at least four that I can think of.  The first is the so-called "hard problem of consciousness", i.e. explaining qualia and subjective experience.  The second is "deriving ought from is", i.e. using the scientific method to obtain a theory of moral behavior.  The third is the problem of origins and teleology: why is there something rather than nothing?  How did life on earth begin?  What is the point of all this?  And the fourth is the problem of religion: why do so many people believe things that are at odds with science?

I actually believe that all of these Problems have had some pretty significant dents put into them by the scientific method, much more than is generally appreciated or understood, even among scientists.  I've written about all of these things at one time or another, but usually in the context of developing my own ideas about them, and never as a coherent summary that presents the final results in a unified and organized way.  I'm going to try to remedy that in the future.  But it has already taken me a week just to get this far so I thought I'd go ahead and put this out there and subject it to criticism.