Views of Rolling Clouds

I bring fresh showers for the thirsting flowers,
From the seas and the streams;
I bear light shade for the leaves when laid
In their noonday dreams.
From my wings are shaken the dews that waken
The sweet buds every one,
When rocked to rest on their mother’s breast,
As she dances about the sun.
I wield the flail of the lashing hail,
And whiten the green plains under,
And then again I dissolve it in rain,
And laugh as I pass in thunder.

The Cloud, Percy Bysshe Shelley

As a child I spent what seemed like hours at a time watching clouds move across the sky, shifting shapes as they went. Seeing dragons, devils, ships, and castles moving and morphing across a blue canvas. I can’t be the only one. Rain clouds rolled in this morning and I found myself watching as a few low-lying, dark gray ones trundled along beneath the overcast sky.

Lower clouds appear to be moving faster than higher ones, but this is an illusion. In reality, wind speed increases with altitude. But when a low cloud bears down in all its darkness and shadow and immensity, it’s nearly impossible not to tremble at one’s own insignificance.

So, what creates the illusion of faster movement? The answer lies in the changing angle of an observer’s eye as it tracks a cloud. The observation angle changes faster when a cloud moves faster or when it’s closer to the observer. Closer can mean altitude–the cloud is lower in the sky, or it can mean distance over the ground–the cloud is closer to being directly above the observer.

So how much difference does it make?

Start with the sky, and a cloud, and it’s a sunny day, and there’s a guy standing on the ground looking at the cloud. The cloud’s altitude is a, and the distance over ground is d. Take a line straight into the sky and another that goes from the guy’s eyes to the cloud. Those two lines make an angle, θ. A breeze blows on the cloud, pushing it horizontally with velocity H, and vertically with velocity V.

So now for the nerdy stuff. When the cloud’s to the right of the dude, d and θ are positive, and to the left they’re negative. a is always positive. H is positive when going right and negative when going left. V is positive when the cloud moves up and negative when it moves down. The tangent of θ is d divided by a, and can be calculated if their lengths are known.

The total change of the angle θ with time is found by adding the change in angle due to horizontal movement to the change due to vertical movement:

dθ/dt = (aH - dV) / (a^2 + d^2)

Gad, that’s ugly to work with. It basically says when the cloud flies left, the angle changes in the negative direction. When the cloud is to the right of the observer, the angle changes in the positive direction when the cloud moves down, and in the negative direction when it moves up. And when the cloud is to the left, vertical movement causes changes in the opposite direction. How does it look when calculated?
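That sign behavior can be checked numerically. Here’s a minimal Python sketch, assuming the total derivative works out to dθ/dt = (aH - dV)/(a^2 + d^2) in radians per second (my reconstruction from the sign conventions described above, since only the prose description survives here):

```python
import math

def dtheta_dt(a, d, H, V):
    """Rate of change of the viewing angle theta (radians/s) for a cloud
    at altitude a, horizontal offset d, with horizontal speed H and
    vertical speed V, using the sign conventions in the text."""
    return (a * H - d * V) / (a**2 + d**2)

# Cloud to the right (d > 0) flying left (H < 0): angle change is negative.
print(dtheta_dt(200, 200, -10, 0))   # negative
# Cloud to the right drifting down (V < 0): angle change is positive.
print(dtheta_dt(200, 200, 0, -5))    # positive
# Cloud to the left (d < 0) drifting down: opposite direction, negative.
print(dtheta_dt(200, -200, 0, -5))   # negative
```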

I started with altitude and horizontal distances of 200 feet, and since they’re equal, the angle is 45 degrees. The cloud flies by at 10 feet per second, and the observer’s eyes track it across the sky. Here’s what the angle θ looks like over time. It starts out at positive 45 degrees, reaches zero when the cloud is directly overhead, and goes negative as it flies to the right of our guy on the ground.

So what happens if the cloud is now 20 feet off the ground instead of 200, and still whizzing by at 10 feet per second? Well, at first it’s just a cloud on the horizon, getting bigger and bigger, and our guy’s head doesn’t even have to move. It takes 15 seconds for the angle of observation to go from 85 to 70 degrees. Then the cloud tears overhead, swinging to an angle of -70 degrees in only 13 seconds, before shrinking into the horizon.

This reminds me of something:
“How did you go bankrupt?” Bill asked.
“Two ways,” Mike said. “Gradually and then suddenly.”
Ernest Hemingway, The Sun Also Rises

A shot of the Excel sheet and the formulas are below. Happy cloud watching.

B3 = A2 + E2 and copy down
C3 = C2 + D2 and copy down
F2 = DEGREES(ATAN(C2/B2)) and copy down
G2 = F2 - F3
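The same calculation can be sketched outside the spreadsheet. A small Python stand-in for the 200-foot scenario (the sign convention, positive angle before the cloud passes overhead, is my assumption, chosen to match the plot described):

```python
import math

a = 200.0     # cloud altitude, feet
d = 200.0     # horizontal distance, feet (positive before passing overhead)
H = 10.0      # horizontal speed, ft/s
dt = 1.0      # timestep, seconds

angles = []
for t in range(41):
    angles.append(math.degrees(math.atan2(d, a)))
    d -= H * dt          # cloud approaches, passes overhead, recedes

# Starts at +45 degrees, crosses 0 directly overhead (t = 20 s), ends negative.
```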

An Attempt to Build on Dr Grimes’ Viability of Conspiratorial Beliefs

Got a secret
Can you keep it?
Swear this one you’ll save
Better lock it in your pocket
Taking this one to the grave
If I show you then I know you
Won’t tell what I said
‘Cause two can keep a secret
If one of them is dead.

The Pierces

Like most folks I enjoy a good conspiracy theory, and sometimes even a bad one. They allow the listener to suspend disbelief and consider for a moment that the social world in which he or she resides is an illusion. This is somewhat similar to the ability to slip into the fictional world of a story or novel, and bears similarity to the Gnostic assertion that the material realm is an entrapping illusion. In fact many very good films are based on the idea that conspiracies are afoot–the Bourne, Mission Impossible, Star Wars, and of course the Matrix series.

In modern societies, however, there are many conspiracy theories peddled as truth that cause real harm to their believers. One recent example from my country is PizzaGate–a ridiculous story based on the notion that a major Presidential candidate was operating a pederasty ring through a Washington DC pizza place–that led to a believer carrying out an armed raid. Another example is the belief that childhood vaccination is a cause of autism, an assertion founded on falsified data. Failure to vaccinate has led to numerous outbreaks of dangerous epidemic diseases in just the last few years. I should note that the longest-running conspiracy theories I know of aren’t theories at all, because in the minds of their believers they aren’t disprovable. In that sense they are fictitious certainties.

In his 2016 paper “On the Viability of Conspiratorial Beliefs,” Dr David Grimes of Oxford University applied probability models to four popular conspiracy theories–the NASA moon landing hoax, climate change fakery, the vaccination-autism link, and the cancer cure conspiracy. His model simulates the probability of a conspiracy being leaked using a Poisson distribution, with conspiracy population parameters based on exponential decay, the Gompertz function, or no change. In order to understand Dr Grimes’ work a little better, I coded his equations in Octave and simulated his parameters, replicating the generalized results displayed in Figure 1 of the paper.
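For readers who prefer code to prose, here is a rough Python sketch of the three population models (the parameter values mirror the ones in my Octave script later in this post; the function name and defaults are mine, not the paper’s):

```python
import math

def population(model, t, N0=5000, alpha=1e-4, beta=0.085, te=40, le=78):
    """Conspirator population after t years under the three models:
    constant, exponential decay (half-life tied to remaining life
    expectancy le - te), and Gompertzian mortality."""
    if model == "constant":
        return N0
    if model == "exponential":
        lam = math.log(2) / (le - te)
        return N0 * math.exp(-lam * t)
    if model == "gompertz":
        return N0 * math.exp((alpha / beta) * (1 - math.exp(beta * (te + t))))
    raise ValueError("unknown model: " + model)

# With le - te = 38 years, the exponential model halves in 38 years.
print(population("exponential", 38))   # ~2500
```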

Comparison to Grimes’ general calculations

Three graphs showing my calculations, with Figure 1 from Dr Grimes’ paper in the lower right quadrant

In examining the work I found one point where I wanted to test a variation. The population parameters and probability distribution used in the paper are continuous–the growth models allow for fractions of a conspirator to leak information. I added an integer rounding function to the population parameters, and switched to a binomial discrete probability function, where the probability of a leak is treated like the probability of a defective part rolling off an assembly line. Finally, I added a parameter that allows a leak, presumably to the press, to require verification from additional defectors.

To test this formula I chose the fake job numbers conspiracy. The idea is that the US Bureau of Labor Statistics was cooking up job numbers in 2012 to assist the reelection campaign of President Obama. A few problems with this are:

  • The BLS employs more than 2500 individuals with a broad range of political preferences
  • Job figures are also measured by private firms, such as Gallup and ADP, which show different raw numbers but similar trends
  • There really isn’t any evidence that the average American voter pays close attention to economic releases

The downside to the discrete formula is that it uses factorials to calculate the binomial coefficient, and numbers greater than 170! are treated as infinite by Octave. To mitigate this, I used a conspiracy population of 100, with an individual chance of leaking at 0.1% per year, and a press requirement of 2 defectors to publish the leak. While it looks like it would be possible to carry out such a conspiracy one time, the chance of the conspiracy being broken in 80 years is 91.49%. If one assumes a single leaker is sufficient, the chance of discovery rises to more than 99.99%. The chance of carrying it out with the involvement of 2500 must be slim indeed.
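As an aside, the 170! ceiling can be sidestepped by computing the binomial terms in log space. A Python sketch (the Octave analogue would use gammaln; the function below is my own, not from the paper, and assumes 0 < p < 1):

```python
import math

def prob_at_least(a, N, p):
    """P(at least a of N conspirators leak in a timestep), with the
    binomial terms computed via log-gamma to avoid factorial overflow."""
    total = 0.0
    for s in range(a):   # sum P(exactly s leak) for s < a
        log_term = (math.lgamma(N + 1) - math.lgamma(s + 1)
                    - math.lgamma(N - s + 1)
                    + s * math.log(p) + (N - s) * math.log1p(-p))
        total += math.exp(log_term)
    return 1.0 - total

# Works even for N = 2500, where factorial(2500) overflows a double.
print(prob_at_least(2, 2500, 0.001))
```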

Results from discrete probability analysis of the fake job numbers conspiracy forwarded during the 2012 US Presidential election. On the left, requirement of two leakers, on the right requirement of one leaker.

The m file code for the discrete distribution is below:

function conspViaDisc(decayType, p, N0, numIt)

  x = 1:numIt;          % time steps (years) for the x-axis
  alpha = 10^-4;
  beta = 0.085;         % alpha & beta for the Gompertzian function
  te = 40;              % mean age of conspirators
  lifeExp = 78;         % mean life expectancy of conspirators
  lambda = log(2)/(lifeExp - te);
  N(1) = N0;
  a = 1;                % number of leakers needed to break the conspiracy

  if decayType == "G"
    for n = 2:numIt
      N(n) = round(N0*exp((alpha/beta)*(1 - exp(beta*(te + n)))));  % Gompertzian decay
    end
  elseif decayType == "E"
    for n = 2:numIt
      N(n) = round(N0*exp(0 - lambda*n));   % exponential decay
    end
  else
    for n = 2:numIt
      N(n) = N0;                            % no decay
    end
  endif

  L(1) = 0.001;         % estimated probability of discovery in timestep 1
  cumL(1) = L(1);
  for n = 2:numIt
    if N(n) >= a
      summ = 0;
      for m = 1:a
        s = m - 1;
        binCoef = factorial(N(n))/(factorial(s)*factorial(N(n) - s));
        summ += binCoef*(p^s)*((1 - p)^(N(n) - s));   % P(exactly s leakers)
      end
      L(n) = (1 - summ)*(1 - cumL(n-1));    % P(first break occurs this timestep)
      cumL(n) = cumL(n-1) + L(n);
    else
      L(n) = 0;
      cumL(n) = cumL(n-1);
    end
  end

  N
  L
  cumL
  plot(x, L, x, cumL)
endfunction

Changes in US Oil and Natural Gas Production, Employment, and Productivity 1980 to 2016

“Come and listen to a story ’bout a man named Jed
Poor mountaineer barely kept his family fed
Then one day he was shooting for some food,
And up through the ground come a bubbling crude
(Oil that is, black gold, Texas tea)

Well the first thing you know old Jed’s a millionaire
Kin folk said Jed move away from there
Said California is the place you oughta be…” 
The Ballad of Jed Clampett

Oil and natural gas have been important in the US and globally for nearly 180 years, with that importance spiking following invention and widespread adoption of the internal combustion engine in the 19th and 20th centuries.  Beginning in 1973, the US began consuming more crude oil than could be domestically produced.  Dependence on global markets made the US economy vulnerable to oil-price shocks in the 1970s, and acted as one of several proximate triggers for the 1990-1991 recession.

In order to better understand production and labor dynamics in US oil and gas production over time, I downloaded and examined production data from the Energy Information Administration (US DoE) and oil and gas employment data from the Bureau of Labor Statistics (US DoL).  Graphs of the raw data are below:

Natural Gas Withdrawals and Production in the United States, Jan 1980 through Sept 2016.

US Field Production of Crude Oil, Jan 1920 through Sept 2016.

Oil and Gas Employment in the US, Jan 1972 through Nov 2016.

A few points stand out when examining the data:

  • Natural gas production reached a bottom in the mid-1980s, rose modestly until the mid-1990s, and remained stable until the mid-2000s.  Production is currently just off the 2014-2015 peak.
  • Crude oil production in the US peaked in the early 1970s, though oil and gas employment peaked around 1980.  A second production peak occurred in 2015, though production has fallen off slightly since then.
  • In spite of recent peaks in production, employment in oil and gas extraction is about 35% off the 1982 peak of 266 thousand workers.

To examine oil and gas combined production and productivity, data was downloaded and opened in MS Excel.  Annual barrels of oil and millions of cubic feet of natural gas were converted to British Thermal Units (BTUs).  September employment levels were used for each year.  The total production series was divided by the employment series to calculate productivity in BTUs per worker.
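The unit arithmetic looks roughly like this Python sketch (the conversion factors are standard approximations I am supplying, about 5.8 million BTU per barrel of crude and about 1,037 BTU per cubic foot of gas, not values taken from the spreadsheet):

```python
BTU_PER_BARREL = 5.8e6    # ~5.8 million BTU per barrel of crude (approximate)
BTU_PER_CF_GAS = 1037.0   # ~1,037 BTU per cubic foot of natural gas (approximate)

def productivity_btu_per_worker(barrels_oil, cf_gas, workers):
    """Combined oil and gas output, in BTUs per worker."""
    total_btu = barrels_oil * BTU_PER_BARREL + cf_gas * BTU_PER_CF_GAS
    return total_btu / workers

# Illustrative round numbers only: 1 billion barrels of oil, 10 trillion
# cubic feet of gas, 150,000 workers.
print(productivity_btu_per_worker(1e9, 10e12, 150e3))
```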

Oil and Gas Extraction Employment in 1000s (top), Total Crude Oil and Natural Gas Production in 10^14 BTUs, and Productivity in 10^9 BTUs per worker, 1980 through 2016. Author’s Calculations.

The productivity data shows a nearly unbroken 20-year increase from 1983 to 2003, a modest decline, and some leveling until 2014, and then a return to productivity growth.  As one would expect, productivity increases tend to correspond to falling or stable crude oil prices.  Productivity by this measure tends to decline when prices are rising.  This makes sense because rising prices can be expected to follow periods of falling production (affecting the numerator), and lead to increases in employment (affecting the denominator).

The following is my opinion only, freely given, worth what you’ve paid for it, isn’t meant to be career or investment advice, and isn’t necessarily the view of my employer or anybody else:  The shale revolution has gone global, and demand growth is expected to be slow due to demographics and consumer preferences in the developed world. Because global supply is much less constrained than demand, I expect crude oil prices to stabilize in the 60 to 80 dollar per barrel range for the next several years, assuming no severe shocks to global supply or demand.  In order for profitability of US resources to be maintained, productivity must continue to improve, implying modest but steady increases in production, and net employment changes occurring very slowly.
