Item 2: All observations are probabilistic.

This is true whether we invoke quantum mechanics or not. What quantum mechanics brings to the table is the specific sort of probabilities being used. What must be true is that there is a minimum interval of time measurable by any object. What is this interval? In principle, it is the time it takes light to pass from one end of the object to the other. For time intervals shorter than this, there is no coherent notion of “before” and “after” that could be used to distinguish between events. There is therefore an inherent uncertainty in the measurement of time, and this creates uncertainties in the measurements of all the properties of distant objects. For example, if you wanted to know how fast a semi-truck was bearing down on you, you would have to measure the length of the time interval between successive photons hitting the back of your eye. Ultimately, all types of measurement reduce to measuring the time interval between local events.

Most of the time, it’s much worse than this: more detailed observations require more information, which means more photons hitting your detector, which means the errors in the individual measurements compound. Steps can be taken to reduce this compounding, but it cannot be reduced to zero. Every honest observation would be a statement like “I am 99.997 percent sure that the truck is between 25.5233357 and 25.5233359 feet away from me, and I am 99.9982 percent sure that its speed is between 45.213 and 45.214 MPH.”
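To make the “honest observation” above concrete, here is a small sketch (my own illustration, not from the text) that simulates repeated noisy distance measurements and reports the result as a confidence interval. The “true” distance and the per-measurement jitter are made-up numbers.

```python
import math
import random

# Illustrative sketch: estimate a truck's distance from repeated noisy
# measurements and report it as an interval, the way an honest
# observation would be phrased.  All numbers here are hypothetical.

random.seed(0)
true_distance_ft = 25.5233358   # hypothetical "true" value (unknowable in practice)
noise_ft = 1e-7                 # hypothetical per-measurement jitter

samples = [random.gauss(true_distance_ft, noise_ft) for _ in range(10_000)]

mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / (len(samples) - 1)
sigma = math.sqrt(var)

# A 3-sigma interval covers roughly 99.7% of a normal distribution.
low, high = mean - 3 * sigma, mean + 3 * sigma
print(f"I am ~99.7% sure the truck is between {low:.7f} and {high:.7f} feet away")
```

Averaging many samples narrows the interval, but (as the paragraph above says) the width never reaches zero.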

The mathematically inclined would make a graph of “position of the truck” vs. “probability it is currently THIS distance from me.” This graph is called a probability density function (or sometimes simply a distribution). We would expect it to look like a bell curve (which is called the ‘normal’ distribution): if we are very sure of the location of the truck, it will be a skinny curve, whereas if we are not terribly sure of the location of the truck, it will be a wide curve. The width of the curve (more specifically, the standard deviation, the square root of the variance) is what is referred to as the “uncertainty” in the measurement.
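The skinny-vs-wide picture can be sketched directly from the formula for the normal probability density function. The distance used below is a made-up example value; the point is only how the width parameter controls the curve.

```python
import math

# The bell curve described above: the normal probability density function.
# sigma (the standard deviation) controls the width -- small sigma means a
# skinny, tall curve (high certainty); large sigma a wide, short one.

def normal_pdf(x, mu, sigma):
    """Probability density of a normal distribution at x."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

mu = 25.52  # hypothetical best estimate of the truck's distance (feet)

skinny_peak = normal_pdf(mu, mu, 0.01)  # very sure: tall, narrow curve
wide_peak = normal_pdf(mu, mu, 1.0)     # not very sure: short, wide curve

print(skinny_peak / wide_peak)  # the skinny curve's peak is ~100x taller
```

The total area under either curve is 1 (the truck is certainly *somewhere*), which is exactly why a narrower curve must be taller.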

I should note that this type of uncertainty is completely unrelated to Heisenberg’s Uncertainty Principle, which sets a separate bound on how accurately certain pairs of properties (such as position and momentum) can be known simultaneously, due to the specific types of probabilities (more accurately: “amplitudes”) used in quantum mechanics.

In the quantum mechanical picture, the amplitudes of different ways an event might be observed to occur can interfere with one another, potentially creating distributions (which are computed from the squared magnitudes of the amplitudes) that are *very* different from what one sees with just regular old probabilities. For example, it is entirely possible (if exceedingly unlikely) that the semi-truck bearing down on you could have a 50% chance of being 100 feet from you and a 50% chance of being 2 feet from you, based on your observations of the photons hitting your eyes. A situation like this is called a “discrete” distribution, since there are only two discrete options for the location of the truck. This is in stark contrast to a continuous distribution, which is what we see in the normal, “classical,” picture.
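The amplitude-vs-probability distinction can be sketched with complex numbers (this example is mine, not the author’s). Each outcome gets a complex amplitude; the probability of an outcome is the squared magnitude of its amplitude, normalized so the probabilities sum to 1. When two ways of reaching the *same* outcome combine, the amplitudes add before squaring, which is where interference comes from.

```python
import cmath

# Outcome probabilities from complex amplitudes: squared magnitudes,
# normalized to sum to 1.
def probabilities(amplitudes):
    weights = [abs(a) ** 2 for a in amplitudes]
    total = sum(weights)
    return [w / total for w in weights]

# Equal-magnitude amplitudes for "100 feet away" and "2 feet away" give
# the 50/50 discrete distribution from the example above -- the phases
# do not matter when the outcomes are distinct.
amps = [cmath.rect(1.0, 0.0), cmath.rect(1.0, cmath.pi / 3)]
print(probabilities(amps))  # approximately [0.5, 0.5]

# But two ways of reaching the SAME outcome interfere: the amplitudes add
# before squaring.  Classical probabilities would add to 2 units of weight;
# opposite-phase amplitudes cancel almost completely.
path_a = cmath.rect(1.0, 0.0)
path_b = cmath.rect(1.0, cmath.pi)   # opposite phase
print(abs(path_a + path_b) ** 2)     # ~0: destructive interference
```

With ordinary probabilities there is nothing to cancel; that cancellation is what makes quantum distributions able to look so strange.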

I should point out that in this scenario, the quantum mechanical picture can give rise to either discrete distributions or continuous distributions, or sometimes a combination of the two, whereas the classical picture would only give rise to a continuous distribution. Also, these probabilities do not tell us the “true” position of the truck, just the probability that the truck will be in one position or the other (in the discrete case), or within a certain interval of positions (in the continuous case). Information from future photons will help us distinguish between the two cases.
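How future photons sharpen the picture can be sketched as a Bayesian update (my own illustration, with made-up likelihood numbers): start from the 50/50 discrete distribution and fold in how probable the newly observed photon is under each hypothesis.

```python
# Hedged sketch: updating the 50/50 discrete distribution over the truck's
# distance with one new piece of photon evidence, via Bayes' rule.
# The likelihood numbers are invented for illustration.

prior = {2.0: 0.5, 100.0: 0.5}          # distance in feet -> probability

# Hypothetical likelihoods: how probable the observed photon timing is
# under each hypothesis about where the truck is.
likelihood = {2.0: 0.9, 100.0: 0.1}

unnormalized = {d: prior[d] * likelihood[d] for d in prior}
total = sum(unnormalized.values())
posterior = {d: w / total for d, w in unnormalized.items()}

print(posterior)  # the 2-foot hypothesis now carries ~90% of the probability
```

Each additional photon repeats this update, so the distribution keeps tightening around whichever possibility the evidence favors.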

I am going to make an attempt in the next section to use discrete probabilities to create a good-enough-for-science-fiction resolution to the famous “Grandfather Paradox.”

Stay tuned.