Shoplifting From CR

“There was a man in the land of Uz, whose name was Job; and that man was blameless and upright, and one who feared God and shunned evil.  And seven sons and three daughters were born to him. Also, his possessions were seven thousand sheep, three thousand camels, five hundred yoke of oxen, five hundred female donkeys, and a very large household, so that this man was the greatest of all the people of the East.

And his sons would go and feast in their houses, each on his appointed day, and would send and invite their three sisters to eat and drink with them. So it was, when the days of feasting had run their course, that Job would send and sanctify them, and he would rise early in the morning and offer burnt offerings according to the number of them all. For Job said, ‘It may be that my sons have sinned and cursed God in their hearts.’  Thus Job did regularly.”                                                                                                                                                  —Job 1:1-5

Job is probably the oldest book in the Bible and waaay fascinating to boot.  If you haven’t read it you’re missing out…and if you haven’t read it critically, you might as well have not read it at all.  The best part comes near the end, but I’m not giving it away!  The reason for quoting Job is to show that wealth in the ancient world was measured in much more material, meaningful ways than it is today–smart, hard-working kids who had themselves survived to reproductive age, herds of livestock with reliable production, lackeys and indentured servants.  The first few verses demonstrate the story hails from the early days of agriculture and animal husbandry.

Our post-industrial global society is “richer”, but the definition of wealth has gotten pretty ambiguous.  A “mass-affluent” individual or family may have a gang of financial assets and high potential earnings.  But whether or not those assets behave as capital, let alone as strategic capital, is dependent on a host of factors.  Job’s capital, on the other hand, was measured in animals, workers, land, and family alliances.  Anybody who wouldn’t trade a few pieces of paper for that hoard is truly without hope.

Pre-1840 (approximately), relative wealth and income for 90% of the world’s population mostly depended on weather conditions and the rate of nitrogen replacement into the soil.  These could not be measured or predicted at the time, and rightly belonged to the realm of the gods.  With the use of saltpeter from Chile, and later through the Haber-Bosch process, the nitrogen replenishment limit on agricultural productivity was broken.  Thanks to steam- and internal-combustion engines, dependence on weather was overcome through global trade.  Fractional reserve banking lifted the precious-metal limit on currency production.

Today our conventional notions of wealth and income are measured in currency, which wasn’t the norm even 120 years ago.  The value, production, and distribution of currency are entirely social constructs.  However, financial and economic systems are damned complex, subject to non-linearity and probability.  Because of this, there is a tendency to ascribe human personalities and divine attributes to dumb money.  This vestigial tendency shows up in everything from Adam Smith’s “Invisible Hand” to the ultra-hokey modern-day “Law of Attraction”.

Bah!–Enough of the garbage!  Bill McBride at Calculated Risk has put together a neat-o graphic showing job losses and recoveries in US recessions.  It’s shoplifted below:

Percent Job Losses in Post-WWII Recessions (chart: Calculated Risk)

Bill’s right to focus on employment when discussing recessions.  As Dr Hall notes, modern recessions tend to be mild in terms of changes in absolute output, but are as bad as ever–worse, even–in terms of job recovery.  If this is the case, what can be measured to better understand and “rate” modern recessions and recoveries?  Are there trends from past recessions that can help?  If interest rates (determined by the Federal Reserve) and fiscal deficits (from the US Treasury) can be thought of as control mechanisms, to what extent are they effective?

Recession Score, a.k.a. Job Recovery Score (phi) = Length (months) / Depth (%)

The current US recovery has been described by some as “the weakest since WWII”.  It’s a fair assessment, and I won’t mince words–the human cost has been terrible.  But from both Dr Hall’s work and McBride’s graph, it appears that job losses and recoveries have been trending longer since the 1980 recession.  This may just be a visual anomaly.  To try to score post-WWII US recessions and recoveries in terms of their length and depth, I measured the time from the previous employment peak to recovery in months and divided it by the depth of the job-loss trough in percent for each recession on Bill’s graph.  I call this the Job Recovery Score.
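Just to pin down the arithmetic, here’s a minimal MATLAB sketch of the score for a single recession; the numbers in the comment are made-up examples, not values read off Bill’s chart:

% Job Recovery Score for one recession: months from the previous employment
% peak back to that peak level, divided by the depth of the job-loss trough
% in percent (a negative number on the Calculated Risk chart).
function score = jobRecoveryScore(monthsToRecovery, troughDepthPct)
    score = monthsToRecovery ./ troughDepthPct;   % e.g. 48 / -5 gives -9.6
end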

I put the scores and the years of the recession starts into Excel and graphed them.  To score the 2007 recession, I used Bill’s job recovery projection.  I don’t normally include Excel’s goofy rolling trendlines in real work, but this time they highlight a pattern in the data that’s hard to see from the data points alone:

Recession Score vs Year

When analyzing recessions, there are always a few problems.  For starters, the data set is small.  Furthermore, recessions are often caused and mitigated by extraneous events.  For example, the 1969 recession began at the same time as the Tet Offensive in Vietnam.  The 1974 recession started with the loss of Middle Eastern petroleum exports to North America and Europe.  Martin Feldstein’s data shows that it ended with growth in agricultural exports from the United States to the USSR, America’s first experiment in the “Oil for Food” trade.  Job recovery after the 2001 recession was delayed by the terrorist attacks of 9/11 and the march to war in Iraq.  Nevertheless, there’s a roller-coaster-like pattern to the job recovery scores over time, and the scores of the last ~20 years are much more negative than those before them.  I isolated the “peak points” (labeled in red) and the “trough points” (labeled in green) and did a linear fit of each set:

Job Recovery Score vs Year with Linear Fits

I was surprised to see strong correlations–0.964 for the peaks and 1.0 for the troughs.  While we can hope that the next recession will have a less-negative job recovery score, based on this analysis we can expect that it won’t.  If the pattern holds, the next employment recession will be shallower, and the relative recovery much slower, than the current one.  Why is this happening?  Have monetary and fiscal responses become less robust over time?  I obtained the historical Federal Funds Rate data and compared it with the recession score graph:
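For the curious, the fits themselves are nothing fancy.  Here’s a rough MATLAB sketch of what that step looks like for each subset; years and scores are whatever peak (or trough) points you pull off the chart above, not data reproduced here:

% Linear trend and correlation for one subset (peaks or troughs) of the
% job recovery scores.  years and scores are the points read off the chart.
function [coeffs, r] = fitScoreTrend(years, scores)
    coeffs = polyfit(years, scores, 1);   % slope and intercept of the trend line
    R = corrcoef(years, scores);          % 2x2 correlation matrix
    r = R(1,2);                           % the correlation coefficient quoted above
end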

Federal Funds Rate and Recession Scores

The Federal Reserve tends to raise interest rates during expansions to reduce inflation, and drops them during recessions to promote recovery.  Based on the Federal Funds Rate, it’s pretty obvious that this policy response has been relatively muted since about 1990, with rates falling more during each recession than they rise during the following expansion.  On the whole, interest rates have been much lower over the last 20 years than they were from the late 1960s to the early 1980s.  The next image shows the fiscal response in the form of Federal deficit spending as a percentage of nominal GDP:

Federal Deficit Spending Since 1946 as % of GDP

Based on this chart, we can see that the fiscal response to recessions has, in fact, been proportionately stronger since 1970 than it was from 1946-1969.  Based on policy interest rate data and the Federal deficit data, I think the policy responses have not weakened since the 1948 recession; they have just become less effective.  This may be the least desirable conclusion of all.  Tax cuts and spending increases at the Federal level, even coupled with aggressive monetary policy from the Federal Reserve, have not been sufficient to solve the problem of long employment recessions in the United States.

In light of these problems, it’s tempting to point to trade deficits and uncontrolled immigration as causes of the seeming impotence of public policy to promote post-recession employment recovery.  Unfortunately, the data on these effects are mixed at best.  While there are many social problems caused by these issues, worker displacement for starters, it’s doubtful that they are the driving force of policy impotence.  It’s clear also that privately owned and managed institutions have not solved the problem either.

To end this post, I will barf out a few of my own opinions.  Feel free to stop reading now if you don’t want to hear it.  At the tail end of a May 20, 2013 discussion of economic inequality and growth at the City University of New York, Paul Krugman noted that the two periods with the highest economic growth in the US coincided with the Gilded Age (1870-1900) and the Post-WWII boom (1946-1973).  During the Gilded Age inequality grew fast, and during the Post-WWII Boom inequality declined.  He expected a closer relationship between inequality and growth, whether it was positive or negative.  I think Dr Krugman’s statement has to be examined from the perspective of real capital formation.  During the Gilded Age, the policy of the US government was to distribute land to settlers who were willing, able, and (unfortunately) racially favored to work it.  We should not kid ourselves–the Homestead Acts were coupled with one of the most horrific campaigns of genocide against a native population ever seen in human history.  But from an economic perspective they were a transfer of capital in the form of land into the hands of relatively cash-poor individuals.  The financial inequality of the Gilded Age was mitigated by the largely agrarian nature of society and the availability of free or cheap land.  Even if a homesteader could not profitably work a claim, he or she could reasonably expect to make money by selling it.

My conjecture is that the employment problem we face is symptomatic of a rising difficulty in the accumulation of strategic capital.  Sure, there’s plenty of liquidity and corporate money in the US and around the globe.  But the flow is mainly controlled by a relative few individuals, and since they are a small group, their knowledge and interests are inherently too narrow to fill all potential markets.  For people of average means or less, there are few, if any, routes to acquiring the skill, equipment, land, and cash one needs to start a viable, profitable business.  Furthermore, there are virtually no simple ways for a person of modest means, education, and average health to build his or her real income.  A few policy proposals that may be worth considering are:

  • Sponsor increased mentoring and small business grants for potential entrepreneurs in low- to moderate-income areas
  • Increase public or public-private employment opportunities, such as programs where government covers half the cost of employing low-income individuals.  My first job was actually through a program like this.
  • Develop programs and partnerships to assist low- and moderate-income households with the purchase of solar panels or wind turbines, and grant consumers the right to sell power back to the grid
  • Encourage profit-sharing programs in mid-sized businesses, and dividend-paying stock compensation for workers in publicly traded companies
  • Fund modest lifetime income payments which could be earned by low-income and long-term unemployed workers through paid or volunteer work.  Payments could be managed through insurance annuities.

Anyway, this is an interesting topic that I’d thought about for a while and finally had time to look at.  Disclaimer: economics isn’t my specialty, and I really didn’t derive anything.  What are your thoughts?

PS–you can like The Sexy Universe on Facebook right now!

Cheers!


Bifurcating FitzHugh-Nagumo

“I am a speck on a speck of a planet in the speck of a solar system in one galaxy of millions in the known Universe.  My Universe is like a ring in the desert compared to the Footstool of God.  And the Footstool like a ring in the desert compared to the Throne of God.”
–American Muslimah

Artist’s rendition of a neuron.  Attribution: HD Wallpapers

The amount of research done on brains, neurons, neurochemicals, and so on is astounding.  No matter who your favorite neuroscientist is, you can rest assured that his or her knowledge barely scratches the surface of all that is known about human brains, or even bumblebee brains for that matter.  But even all that’s known is but a “ring in the desert” compared to all that can be known–to all that will one day be known.

Hodgkin and Huxley developed their model of axonal behavior in the squid giant axon in the late 1940s and published it in 1952.  It was a game changer for two main reasons–first because it accounted for the action of sodium and potassium channels, and second because it stood up to experiment.  It is a shining example of mathematical biological modeling from first principles.

FitzHugh and Nagumo based their model partly on the Hodgkin-Huxley equations, partly on the van der Pol equation, and partly on experimental data.  While it’s not as accurate as the H-H model relative to experiment, it’s much easier to analyze mathematically and captures most of the important behavior.  It’s also been successfully applied to the behavior of neural ganglia and cardiac muscle.  There are many different interpretations of the FitzHugh-Nagumo system of equations.  The readiest on-line tool for examining them was created by Mike Martin.

In this post I’m working with a slightly modified version of the forms described by Ermentrout and Terman:

The FitzHugh-Nagumo system of equations (the form used throughout this post):

dv/dt = -v(v - 1)(v - a) - w + I
dw/dt = ε(v - γw)

As you can see, this is a system of ordinary differential equations.  The second equation has the quality of stabilizing the first; in a sense, w acts like an independent damper on changes in voltage when current is applied to an axon.  Like the van der Pol equation, the system is non-linear and deterministic, and doesn’t easily lend itself to analytical solutions.
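If you want a feel for the system before worrying about bifurcations, a few lines of MATLAB will integrate it for a single fixed current.  This is just a side sketch, not part of FHNBifurc; the parameter values match the runs below, and i0 = 3 is simply an example current chosen from the middle of the range scanned later:

% Integrate the FitzHugh-Nagumo system at one fixed applied current.
a = 0.8;  e = 0.5;  g = 0.2;  i0 = 3;
fhn = @(t,y) [-y(1)*(y(1)-1)*(y(1)-a) - y(2) + i0;  e*(y(1) - g*y(2))];   % [dv/dt; dw/dt]
[t,y] = ode45(fhn, [0 200], [0; 0]);
plot(t, y(:,1)), xlabel('t'), ylabel('v')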

One thing you can’t see with Dr Martin’s tool is an interesting phenomenon that frequently occurs with this sort of equation: the Hopf bifurcation.  The only equation parameter that can be easily changed during an experiment is the applied current.  By programming the equations and calculating across a range of currents, it can be determined when an applied current produces unstable voltage oscillations.  The point at which the system becomes unstable is known as a critical point or Hopf point, and as long as the region of instability doesn’t extend to infinity, there will be one on each side.  According to Steven Baer, the critical points for Ermentrout and Terman’s system are found at:

Analytical solutions for the critical (Hopf) points:

V± = [(a + 1) ± sqrt(a² - a + 1 - 3εγ)]/3
I± = V±/γ + V±(V± - 1)(V± - a)
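With the parameter values used in the next section (a = 0.8, epsilon = 0.5, gamma = 0.2), those expressions are quick to evaluate.  If I’ve done the arithmetic right, the Hopf points land at roughly I ≈ 1.9 and I ≈ 4.2, consistent with the detailed scan further down running from 1.5 to 5:

% Evaluate the analytical critical points for a = 0.8, eps = 0.5, gamma = 0.2.
a = 0.8;  e = 0.5;  g = 0.2;
Vcr = (a + 1 + [-1 1]*sqrt(a^2 - a + 1 - 3*e*g))/3;   % the two critical voltages
Icr = Vcr./g + Vcr.*(Vcr - 1).*(Vcr - a)              % approximately 1.88 and 4.22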

Enough about all that.  Using parameter values of a = 0.8, epsilon = 0.5, and gamma = 0.2, I punched it out using my MatLab program FHNBifurc, and also ran it with XPP.  XPP is no-frills and a little bit glitchy, but it’s awesome for solving systems of ODEs and it’s a champ of a program for bifurcation analysis.  The installation is a little tricky, but it’s totally free–no spyware or anything else attached.  If you haven’t at least toyed around with it, you should.

The MATLAB program works by drawing the diagram in two directions–from the right in red and from the left in blue.  The calculated critical points are shown as white stars.  Ideally, the bifurcations should begin at those points, but as you can see, they don’t match up perfectly.  That’s the image on the left.  The best use of the program I’ve found is searching across a wide range of currents to determine if and where instability occurs.  I ran the same stuff with XPP/Auto, and it gave me the figure on the right.  Auto’s kind of a bear to get to the first time, but do it once and you’re set for life.

MATLAB’s 0-to-10 scan (left) and Auto’s bifurcation analysis (right)

I also did a final run with MATLAB, this time over a smaller range with a lot more iterations.  It took my computer about 10 minutes to complete, so make sure you’ve got enough memory and processing speed before you try it.  I edited in the critical point solutions, FYI:

Final MATLAB bifurcation diagram

You can see why the FitzHugh-Nagumo equations are called elliptical.  The MATLAB program is great for scanning a wide range of values and locating the range of the bifurcation.  Auto is way primo for drawing their shapes.  Here’s the code for the XPP file:

# FitzHugh-Nagumo equations
# fhnbifurc.ode
dv/dt=-v*(v-1)*(v-a)-w+i
dw/dt=e*(v-g*w)
parameter a=0.8, e=0.5, g=0.2, i=0
@ total=50000, dt=1, xhi=50000., MAXSTOR=100000,meth=gear, tol=.01
done

The MatLab call looks like this:

FHNBifurc(a,eps,gamma,I1,IF,tspan,tstep)

  • a = value of a
  • eps = value of epsilon
  • gamma = value of gamma
  • I1 = initial current value
  • IF = final current value
  • tspan = time span over which current is scanned
  • tstep = how short the time steps should be for the calculation

The specific call for the 0 to 10 scan was:

FHNBifurc(0.8,0.5,0.2,0,10,100000,0.01)

and for the more detailed diagram:

FHNBifurc(0.8,0.5,0.2,1.5,5,100000,0.001)

As always, the complete code is below.  Cheers!

PS–Did you like The Sexy Universe on Facebook yet?

function FHNBifurc(a,eps,gamma,I1,IF,tspan,tstep)
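% FHNBifurc sweeps the applied current from I1 up to IF with a simple
% forward-Euler integration of the FitzHugh-Nagumo equations, then repeats
% the sweep in reverse from IF back down to I1.  Plotting V against I for
% both sweeps traces out the bifurcation diagram; the analytically
% calculated critical (Hopf) points are overlaid as stars.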

steps = tspan/tstep;
Iramp = (IF-I1)/steps;

% Preallocate so the long sweeps don't grow these arrays one element at a time
V = zeros(1,steps);
W = zeros(1,steps);
I = zeros(1,steps);
I(1) = I1;
for n = 1:(steps-1)
    In = I(n);
    Vn = V(n);
    Wn = W(n);
    fV = Vn*(Vn-1)*(Vn-a);
    dVdt = -fV - Wn + In;
    dWdt = eps*(Vn - gamma*Wn);
    V(n+1) = Vn + dVdt*tstep;
    W(n+1) = Wn + dWdt*tstep;
    I(n+1) = In + Iramp;
end

IFwd = I;
VFwd = V;
WFwd = W;

V(1) = 0;
W(1) = 0;
I(1) = IF;

Iramp = -Iramp;

for n = 1:(steps-1)
    In = I(n);
    Vn = V(n);
    Wn = W(n);
    fV = Vn*(Vn-1)*(Vn-a);
    dVdt = -fV - Wn + In;
    dWdt = eps*(Vn - gamma*Wn);
    V(n+1) = Vn + dVdt*tstep;
    W(n+1) = Wn + dWdt*tstep;
    I(n+1) = In + Iramp;
end

IRev = I;
VRev = V;
WRev = W;

Vcr(1) = (a+1-sqrt(a^2 - a + (1-3*eps*gamma)))/3;
Vcr(2) = (a+1+sqrt(a^2 - a + (1-3*eps*gamma)))/3;

for n=1:2
    Icr(n) = (Vcr(n)/gamma) + Vcr(n)*(Vcr(n)-1)*(Vcr(n)-a);
end

Vcr
Icr

plot(IFwd,VFwd,'-r',IRev,VRev,'-b',Icr,Vcr,'*w')
xlabel('Current (I)')
ylabel('Voltage (V)')

Baumgartner’s Jump

This was originally posted at The Cameron Hoppe Project.  Its popularity inspired this site.  It’s been updated since.

          “And He will raise you up on eagle’s wings,

           bear you on the breath of dawn,

           make you to shine like the sun,

           and hold you in the palm of His hand.”

                        — Josh Groban, “On Eagle’s Wings”

At 128+ thousand feet, Baumgartner looks down on the blue sky.

Felix Baumgartner completed his much pre-hyped jump from 128,100 feet, achieving a top speed of 1.24 times the speed of sound.  It must have been an amazing ride, with the bill footed by Red Bull and everything.  I am neon green with envy.  Sure, it could be death to try, but it would be worth it just to see the blue-glowing world one time.  Besides, the life insurance is paid.

Gravity creates constant downward acceleration.  Friction with the air produces drag that acts opposite to the direction of motion and is proportional to velocity squared.  So the sum of all the forces on Felix, as with any skydiver, during his fall was equal to his mass times his actual acceleration.  Since drag is proportional to the square of velocity, it grows as he speeds up until the drag force and the gravitational pull become equal and acceleration reaches zero.  This is known as terminal velocity.  Not nearly as exciting as the name sounds.  I know; I was disappointed, too.  In equation form it looks like this:

m·dv/dt = V·(ρ_object - ρ_air)·g + ½·C_D·A·ρ_air·v²

This is a pretty standard force balance on an object moving through a liquid or a gas: the first term is gravity less buoyancy (V is the object’s volume), the second is drag (C_D is the drag coefficient and A the surface area).

The drag coefficient includes half the surface area of the guy in the suit and a proportionality constant.  Based on his reported time in free fall and an area of 4.3354 square meters, I found this constant to be 1.15, which is actually quite reasonable.

The next issue to deal with is the air density.  Anybody who’s ever been up a tall mountain or taken a plane ride knows air gets thinner with elevation.  As a result, drag forces are higher near the ground than at the elevation Baumgartner jumped from.  Without that included, we’re left with only the beautiful sky and no interesting math.  So we can expect that Felix’s speed went way up, then actually decreased until he pulled his parachute at 270 seconds, and then it really dropped.  The air density equation is described below:

ρ(h) = ρ₀·exp(-M·g·h/(R·T))

Air density as a function of the sea-level density ρ₀, height h off the ground, and temperature T, where M is (roughly) the molar mass of air and R is the gas constant.
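Before the full program, here’s a minimal sketch of the two pieces side by side, pulled straight from the Baumgart function at the end of the post, with the same constants and cylinder geometry as the function call below (SI units).  Nothing new here, it’s just the force balance and the density formula written as one-liners:

% Acceleration and air density as coded in Baumgart (SI units).  0.0284877 is
% roughly the molar mass of air in kg/mol and 8.3144621 the gas constant.
r = 0.65;  L = 1.9;  CD = 1.15;  rhoR = 1062;  rho0 = 1.48;  g = -9.8;
Vol = pi*L*r^2;         % volume of the object, treated as a cylinder
A = 2*pi*r*(r + L);     % surface area used in the drag term
k = 0.5*CD*A;
rho = @(h,T) rho0*exp(-0.0284877*9.8*h./(8.3144621*T));                   % density at height h, temp T (K)
acc = @(v,h,T) (Vol*(rhoR - rho(h,T))*g + k*rho(h,T).*v.^2)/(rhoR*Vol);   % dv/dt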

There’s a ton more I just didn’t have time to do.  For example, the gravitational force also decreases as elevation increases, the temperature gradient is probably not constant from the ground to the stratosphere, etc., etc.  But time is short.  Well, we’ve got two coupled differential equations.  There’s only one thing to do–code it and find out how it looks!  I punched this one out in MATLAB:

Baumgartner’s position with time into the jump.  The green circle marks the point at which his descent slows and the red line is the time where the chute was deployed.

The scale of this is in tens of thousands of feet, so you can see where he pulls the cord at 270 seconds and about 8400 feet.  At that point, his descent slows way down.  Now, the velocity of the fall:

Baumgartner’s velocity over the time period of the jump.  Maximum velocity predicted by the model is too low by about 4.1%.

Two important spots here.  On the right, you can see where his chute deploys.  On the left, you can see where he reached terminal velocity in the upper atmosphere about 50 seconds in.  This was his maximum velocity.  After that, the rising density of the atmosphere continually slowed him down.  Kinda like being married.

You can see the model is a little off; the maximum velocity should be 1223.75 feet per second, while I’ve got him maxing out around 1175.  It’s probably due to my crude modeling of the gravitational and temperature gradients at high elevation.  What can I say?  There are only so many hours in a Sunday afternoon.  The model is named for Baumgartner, but it can be used generically for any object falling through a gas.  It works everything out in SI units, then converts to feet at the end before plotting.  Changing the plot command to the lowercase variables (p and u) keeps everything in SI.  The rest is pretty straightforward.  The function call is:

Baumgart(r,L,CD,p0,u0,a0,rhoR,rho0,T0,Tf)

  • r = radius of the object
  • L = length of the object
  • CD = drag coefficient
  • p0 = initial position
  • u0 = initial velocity
  • a0 = initial acceleration
  • rhoR = density of the object
  • rho0 = density of air at sea level
  • T0 = temperature at the start of the fall, in Celsius
  • Tf = temperature at the end of the fall, in Celsius (the code ramps linearly from T0 to Tf over the descent)

The call I used was:

Baumgart(0.65,1.9,1.15,39045,0,0,1062,1.48,-25,25)

Cheers!

PS–Are you following @CameronHoppe on Twitter yet?  If you follow me, I’ll follow you.  Plus you should leave a comment.  Just sayin’.

Code:

function Baumgart(r,L,CD,p0,u0,a0,rhoR,rho0,T0,Tf)
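% Baumgart integrates the force balance above with a forward-Euler scheme at a
% time step of 1e-5 s.  Indices 1 through 27000001 cover the 270 s of free fall;
% after that the chute is assumed open and the descent velocity is crudely fixed
% at 25% of its pre-deployment value out to 300 s.  Everything is computed in SI
% units and converted to feet (P, U) just before plotting.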
tstep = 0.00001;
Tgrad = (Tf - T0)/30000000;

for n = 1:30000001
    t(n) = (n-1)*tstep;
    u(n) = 0;
    p(n) = 0;
end

for n = 1:27000001
    a(n) = 0;
    rho(n) = 0;
    T(n) = 273.15 + T0 + (n-1)*Tgrad;
end

a(1) = a0;
u(1) = u0;
p(1) = p0;
rho(1) = rho0*exp(-0.0284877*9.8*p0/(T(1)*8.3144621));

V = pi*L*r^2;
A = 2*pi*r*(r+L);
g = -9.8;
k = 0.5*CD*A;
massR = rhoR*V;

for n = 2:27000001
    u(n) = u(n-1) + tstep*a(n-1);
    p(n) = p(n-1) + tstep*u(n);
    rho(n) = rho0*exp(-0.0284877*9.8*p(n)/(T(n)*8.3144621));
    a(n) = (massR^-1)*(V*(rhoR-rho(n))*g + k*rho(n)*(u(n)^2));
end

uS = u(27000001)*.25;
for n = 27000002:30000001
    u(n) = uS;
    p(n) = p(n-1) + tstep*u(n);
end

for n = 1:30000001
    P(n) = p(n)*3.28084;
    U(n) = u(n)*3.28084;
end

A
k

m = plot(t,P);
xlabel('Time (seconds)')
ylabel('Position (feet)')
set(m,'LineWidth',2)