# Author Archives: Cameron

## Changes in US Oil and Natural Gas Production, Employment, and Productivity 1980 to 2016

*“Come and listen to a story ’bout a man named Jed*

*Poor mountaineer barely kept his family fed*

*Then one day he was shooting for some food,*

*And up through the ground come a bubbling crude*

*(Oil that is, black gold, Texas tea)*

*Well the first thing you know old Jed’s a millionaire*

*Kin folk said Jed move away from there*

*Said California is the place you oughta be…”*

*—The Ballad of Jed Clampett*

Oil and natural gas have been important in the US and globally for nearly 180 years, with that importance spiking following the invention and widespread adoption of the internal combustion engine in the 19th and 20th centuries. In 1973, the US began consuming more crude oil than could be domestically produced. Dependence on global markets made the US economy vulnerable to the oil-price shocks of the 1970s, and acted as one of several proximate triggers for the 1990-1991 recession.

To better understand production and labor dynamics in US oil and gas production over time, I downloaded and examined production data from the Energy Information Administration (US DoE) and oil and gas employment data from the Bureau of Labor Statistics (US DoL). Graphs of the raw data are below:

A few points stand out when examining the data:

- Natural gas production bottomed out in the mid-1980s, rose modestly until the mid-1990s, and remained stable until the mid-2000s. Production is currently just off the 2014-2015 peak.
- Crude oil production in the US peaked in the early 1970s, though oil and gas employment peaked around 1980. A second production peak occurred in 2015, though production has fallen off slightly since then.
- In spite of recent peaks in production, employment in oil and gas extraction is about 35% off its 1982 peak of 266 thousand workers.

To examine combined oil and gas production and productivity, I downloaded the data and opened it in MS Excel. Annual barrels of oil and millions of cubic feet of natural gas were converted to British thermal units (BTUs). September employment levels were used for each year. The total production series was divided by the employment series to calculate productivity in BTUs per worker.
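For readers who prefer a script to a spreadsheet, the conversion can be sketched in a few lines of Python. The conversion factors are standard approximations (roughly 5.8 million BTU per barrel of crude and about 1,037 BTU per cubic foot of natural gas), and the sample figures are purely illustrative, not the actual EIA/BLS series:

```python
# Approximate energy-content conversions (assumed round figures, not EIA data):
BTU_PER_BARREL = 5.8e6       # ~5.8 million BTU per barrel of crude oil
BTU_PER_CUBIC_FOOT = 1037    # ~1,037 BTU per cubic foot of natural gas

def productivity_btu_per_worker(barrels_oil, mmcf_gas, workers):
    """Combined oil and gas output in BTUs divided by September employment."""
    total_btu = barrels_oil * BTU_PER_BARREL + mmcf_gas * 1e6 * BTU_PER_CUBIC_FOOT
    return total_btu / workers

# Illustrative year only: 3.4 billion barrels, 27 million MMcf, 180,000 workers
print(productivity_btu_per_worker(3.4e9, 27e6, 180_000))
```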

The productivity data show a nearly unbroken 20-year increase from 1983 to 2003, followed by a modest decline and some leveling until 2014, and then a return to productivity growth. As one would expect, productivity increases tend to correspond to falling or stable crude oil prices, while productivity by this measure tends to decline when prices are rising. This makes sense because rising prices can be expected to follow periods of falling production (affecting the numerator) and to lead to increases in employment (affecting the denominator).

The following is my opinion only, freely given, worth what you’ve paid for it, isn’t meant to be career or investment advice, and isn’t necessarily the view of my employer or anybody else: The shale revolution has gone global, and demand growth is expected to be slow due to demographics and consumer preferences in the developed world. Because global supply is much less constrained than demand, I expect crude oil prices to stabilize in the 60 to 80 dollar per barrel range for the next several years, assuming no severe shocks to global supply or demand. For the profitability of US resources to be maintained, productivity must continue to improve, implying modest but steady increases in production and very slow net employment changes.

## The Distance and Power of Mr. Rodgers’ Pass

The Green Bay Packers and Detroit Lions played their second game of the 2015 season on December 3rd. It ended with a highly unlikely Hail Mary catch by Richard Rodgers to score the winning touchdown. In the days following the game I heard many, many different people expressing excitement and admiration for the play, with one colleague even declaring Aaron Rodgers “The best quarterback in the NFL”.

The league posted a video clip asking “How far exactly did Rodgers’ Hail Mary travel?”. The official marking of 61 yards is shown at the bottom of the screen; this is the distance from the line of scrimmage at the start of the play to the front line of the end zone. Ben Rohrbach expressed his belief that the total travel distance of the ball, including the vertical and horizontal elements, was close to 100 yards.

When a ball is thrown, kicked, or otherwise sent with force into the air, its vertical profile is shaped like an upside-down parabola, because gravity acts on the object with constant downward acceleration. The total distance along this segment of a parabola is known as the arc length. It can be calculated by integrating over the curve’s changing height across the horizontal distance. Wolfram’s discussion is here, and Had to Know’s is here.
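The closed-form arc length of a parabolic arc (from the references above) can be cross-checked against a direct numerical integration; a minimal Python sketch, taking the horizontal span and peak height as the only inputs:

```python
import math

def arc_length_closed(a, h):
    """Closed-form arc length of a parabolic arc with horizontal span a and peak height h."""
    return math.sqrt(a**2 / 4 + 4 * h**2) + (a**2 / (8 * h)) * math.asinh(4 * h / a)

def arc_length_numeric(a, h, n=100_000):
    """Midpoint-rule integration of sqrt(1 + (dy/dx)^2) for y = h*(1 - (2x/a)^2)."""
    k = 8 * h / a**2   # slope is dy/dx = -k*x on this parabola
    dx = a / n
    return sum(math.sqrt(1 + (k * (-a / 2 + (i + 0.5) * dx))**2) * dx
               for i in range(n))
```

For a span of 2 and a height of 1 both give about 2.9579, and they agree to several decimal places for throw-sized arcs as well.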

With that out of the way, the objective is to calculate the actual arc length of Aaron Rodgers’ December 3rd pass. The same method can be used for golf drives, log throws, rock catapultations, etc. Note: air friction is neglected in the following discussion.

The horizontal distance of the pass is calculated via the Pythagorean Theorem, using length and width coordinate estimates measured from the back end zone line and the visual top of the field. The image below shows how this was done for Aaron Rodgers’ throw. The throw starts an estimated 39 yards from the back end zone and ends 103 yards from the same end zone. At the start it’s an estimated 13 yards from the top sideline, and it ends about 18 yards from the same sideline.

The second step is to estimate the total height attained by the ball, minus the height of the passer. Here, I assume that the ball was caught at the same height from which it was thrown. The vertical displacement in yards can be calculated by estimating the hang time in seconds, squaring it, and multiplying by 1.34058.
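That constant isn’t magic: the ball climbs for half the hang time under gravity, so the peak height is h = g·t²/8, and 1.34058 is just g converted to yards per second squared (32.174 / 3 ≈ 10.725) divided by 8. A quick check:

```python
# Peak height of a projectile: it rises for t/2, so h = (1/2)*g*(t/2)^2 = g*t^2/8
g_ft = 32.174049          # gravitational acceleration, ft/s^2
g_yd = g_ft / 3           # ~10.72468 yd/s^2
coeff = g_yd / 8          # ~1.34058 -- the constant used above

hang_time = 5.6           # estimated hang time in seconds
height = coeff * hang_time**2   # peak height in yards
```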

Once the vertical and horizontal displacements are measured, the rest can be calculated in a straightforward manner. Using my estimates of pass start at (13,39), catch at (18,103), and hang time of 5.6 seconds I got the following results:

- Horizontal Displacement: 64.195 yards
- Vertical Displacement: 42.041 yards
- Total Travel Distance: 110.716 yards
- Velocity of Pass: 12.706 yards per second
- Energy Imparted to Football: 21.405 foot pounds
- Power Output of Aaron Rodgers during pass: 0.216 horsepower

This admittedly armchair analysis vindicates Rohrbach, and finds Aaron Rodgers’ throwing technique to be as strong as just over a fifth of a horse. The image of my Excel output is below, with the following cell formulas:

- A6 =SQRT(C2^2 + C4^2)
- A11 =SQRT((A6^2/4) + (4*B9^2)) + (A6^2/(8*B9))*ASINH(4*B9/A6)
- A13 =(3*B11)^2*(0.94799/(2*32.174049))
- B9 =1.34058*A9^2
- B11 =SQRT((A6/A9)^2 + 10.72467*(A9/2))
- B13 =(A13/B6)/550
- C2 =ABS(B2-A2)
- C4 =ABS(B4-A4)
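The displacement and arc-length cells can be reproduced outside Excel; here is a Python sketch of C2, C4, A6, B9, and A11 using the same coordinate and hang-time estimates (the velocity, energy, and power cells also depend on inputs such as B6 that aren’t listed above, so they’re omitted here):

```python
import math

# Field-coordinate estimates from the post:
# (yards from top sideline, yards from the back end zone)
start = (13, 39)
catch = (18, 103)
hang_time = 5.6   # seconds

dx = abs(catch[0] - start[0])          # cell C2
dy = abs(catch[1] - start[1])          # cell C4
horiz = math.sqrt(dx**2 + dy**2)       # cell A6: horizontal displacement
vert = 1.34058 * hang_time**2          # cell B9: vertical displacement
arc = (math.sqrt(horiz**2 / 4 + 4 * vert**2)
       + (horiz**2 / (8 * vert)) * math.asinh(4 * vert / horiz))  # cell A11

print(round(horiz, 3), round(vert, 3), round(arc, 3))
```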

## Bifurcating FitzHugh-Nagumo

“I am a speck on a speck of a planet in the speck of a solar system in one galaxy of millions in the known Universe. My Universe is like a ring in the desert compared to the Footstool of God. And the Footstool like a ring in the desert compared to the Throne of God.”

–American Muslimah

The amount of research done on brains, neurons, neuro-chemicals, etc. is astounding. No matter who your favorite neuroscientist is, you can rest assured that his or her knowledge barely scratches the surface of all that is known about human brains, or even bumblebee brains for that matter. But even all that’s known is but a “ring in the desert” compared to all that can be known–to all that will one day be known.

Hodgkin and Huxley developed their model of axonal behavior in the giant axon of the European squid in the late 1940s and published it in 1952. It was a game changer for two main reasons: first because it accounted for the action of sodium and potassium channels, and second because it stood up to experiment. It is a shining example of mathematical biological modeling from first principles.

FitzHugh and Nagumo based their model partly on the Hodgkin-Huxley equations, partly on the van der Pol equation, and partly on experimental data. While it’s not as accurate relative to experiment as the Hodgkin-Huxley model, it’s much easier to analyze mathematically and captures most of the important attributes. It’s also been successfully applied to the behavior of neural ganglia and cardiac muscle. There are many different interpretations of the FitzHugh-Nagumo system of equations. The readiest on-line tool for examining them was created by Mike Martin.

In this post I’m working with a slightly modified version of the forms described by Ermentrout and Terman:
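The system, as transcribed from the XPP and MATLAB code later in the post (parameters a, ε, and γ, with applied current I):

```
dv/dt = -v(v - 1)(v - a) - w + I
dw/dt = ε(v - γw)
```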

As you can see, this is a system of ordinary differential equations. The second equation has the quality of stabilizing the first; in a sense, it acts like an independent damper on changes in voltage when current is applied to an axon. Like the van der Pol equation, the system is non-linear and deterministic, and doesn’t easily lend itself to analytical solutions.

One thing you can’t see with Dr Martin’s tool is an interesting phenomenon that frequently occurs with these sorts of equations: the Hopf bifurcation. The only equation parameter that can easily be changed during an experiment is the applied current. By programming the equations and calculating across a range of currents, it can be determined when an applied current produces unstable voltage oscillations. The point at which the system becomes unstable is known as a critical point or Hopf point, and as long as the region of instability doesn’t extend to infinity, there will be one on each side. According to Steven Baer, the critical points for Ermentrout and Terman’s system are found at:
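Those critical-point formulas are implemented at the bottom of the MATLAB program below; evaluating them in Python for the parameter values used in this post (a = 0.8, ε = 0.5, γ = 0.2) brackets the unstable region:

```python
import math

# Hopf (critical) points, using the same formulas as the MATLAB code:
#   Vcr = (a + 1 ± sqrt(a^2 - a + 1 - 3*eps*gamma)) / 3
#   Icr = Vcr/gamma + Vcr*(Vcr - 1)*(Vcr - a)
a, eps, gamma = 0.8, 0.5, 0.2

disc = math.sqrt(a**2 - a + 1 - 3 * eps * gamma)
Vcr = [(a + 1 - disc) / 3, (a + 1 + disc) / 3]
Icr = [v / gamma + v * (v - 1) * (v - a) for v in Vcr]

print(Icr)  # the region of unstable oscillations lies between these two currents
```

With these parameters the critical currents come out near 1.88 and 4.22, consistent with the 1.5-to-5 window scanned in the detailed run below.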

Enough about all that. Using parameter values of a = 0.8, epsilon = 0.5, and gamma = 0.2, I punched it out using my MATLAB program FHNBifurc, and also ran it with XPP. XPP is no-frills and a little bit glitchy, but it’s awesome for solving systems of ODEs and it’s a champ of a program for bifurcation analysis. The installation is a little tricky, but it’s totally free–no spyware or anything else attached. If you haven’t at least toyed around with it, you should.

The MATLAB program works by drawing the diagram in two directions–from the right in red and from the left in blue. The calculated critical points are shown as white stars. Ideally, the bifurcations should begin at those points, but as you can see, they don’t show up perfectly. That’s the image on the left. The best use of the program I’ve found is searching across a wide range of currents to determine if and where instability occurs. I ran the same stuff with XPP/Auto, and it gave me the figure on the right. Auto’s kind of a bear to get to the first time, but do it once and you’re set for life.

I also did a final run with MATLAB, this time over a smaller range with a lot more iterations. It took my computer about 10 minutes to complete, so make sure you’ve got enough memory and processing speed before you try it. I edited in the critical point solutions, FYI:

You can see why the FitzHugh-Nagumo equations are called elliptical. The MATLAB program is great for scanning a wide range of values and locating the range of the bifurcation. Auto is way primo for drawing their shapes. Here’s the code for the XPP file:

```
# FitzHugh-Nagumo equations
# fhnbifurc.ode
dv/dt=-v*(v-1)*(v-a)-w+i
dw/dt=e*(v-g*w)
parameter a=0.8, e=0.5, g=0.2, i=0
@ total=50000, dt=1, xhi=50000., MAXSTOR=100000, meth=gear, tol=.01
done
```

The MATLAB call looks like this:

**FHNBifurc(a,eps,gamma,I1,IF,tspan,tstep)**

- a = value of a
- eps = value of epsilon
- gamma = value of gamma
- I1 = initial current value
- IF = final current value
- tspan = time span over which current is scanned
- tstep = size of the time steps used in the calculation

The specific call for the 0 to 10 scan was:

**FHNBifurc(0.8,0.5,0.2,0,10,100000,0.01)**

and for the more detailed diagram:

**FHNBifurc(0.8,0.5,0.2,1.5,5,100000,0.001)**

As always, the complete code is below. Cheers!

PS–Did you like The Sexy Universe on Facebook yet?

```
function FHNBifurc(a,eps,gamma,I1,IF,tspan,tstep)

steps = tspan/tstep;
Iramp = (IF-I1)/steps;

% Forward sweep: ramp the current up from I1 toward IF
V(1) = 0; W(1) = 0; I(1) = I1;
for n = 1:(steps-1)
    In = I(n); Vn = V(n); Wn = W(n);
    fV = Vn*(Vn-1)*(Vn-a);
    dVdt = -fV - Wn + In;
    dWdt = eps*(Vn - gamma*Wn);
    V(n+1) = Vn + dVdt*tstep;
    W(n+1) = Wn + dWdt*tstep;
    I(n+1) = In + Iramp;
end
IFwd = I; VFwd = V; WFwd = W;

% Reverse sweep: ramp the current back down from IF toward I1
V(1) = 0; W(1) = 0; I(1) = IF;
Iramp = -Iramp;
for n = 1:(steps-1)
    In = I(n); Vn = V(n); Wn = W(n);
    fV = Vn*(Vn-1)*(Vn-a);
    dVdt = -fV - Wn + In;
    dWdt = eps*(Vn - gamma*Wn);
    V(n+1) = Vn + dVdt*tstep;
    W(n+1) = Wn + dWdt*tstep;
    I(n+1) = In + Iramp;
end
IRev = I; VRev = V; WRev = W;

% Critical (Hopf) points
Vcr(1) = (a+1-sqrt(a^2 - a + (1-3*eps*gamma)))/3;
Vcr(2) = (a+1+sqrt(a^2 - a + (1-3*eps*gamma)))/3;
for n=1:2
    Icr(n) = (Vcr(n)/gamma) + Vcr(n)*(Vcr(n)-1)*(Vcr(n)-a);
end
Vcr
Icr

% Forward sweep in red, reverse sweep in blue, critical points as white stars
plot(IFwd,VFwd,'-r',IRev,VRev,'-b',Icr,Vcr,'*w')
xlabel('Current (I)')
ylabel('Voltage (V)')
```