The meaning of freedom

In the debate over national health reform in 2009-2010, the law’s conservative Tea Party opponents regularly claimed the mantle of freedom.  Where reform supporters relied on moral and technocratic arguments to make the case that health care must be affordable for all, the Don’t-Tread-On-Me backlash to reform was largely allowed to monopolize the powerful American virtue of freedom.

It was a curious sort of freedom that conservatives endorsed.  At its extreme, opposition to the Affordable Care Act stood for the freedom to succumb to the consequences of un-insurance.  This conception of freedom defended the “choice” to go without health insurance as a calculated, rational personal decision that ought to be respected.  Compelling individuals to carry insurance amounted to a tyrannical intrusion on this autonomous decision.

Coming shortly after the maligned bank bailouts of the 2008 financial crisis, the fury over moral hazard spilled into the health reform debate.  In economics, “moral hazard” is the idea that individuals and firms must be allowed to feel the consequences of their choices, because shielding them from risk perpetuates irresponsible behavior.  Just as bailing out the banks was thought to reward reckless financial conduct, bailing out those who opted to go without insurance let reckless decision-making off the hook, too.  Call it a “You Reap What You Sow” brand of freedom.

Though they largely stayed quiet on this front, pro-reform policymakers could stake a claim to enhancing freedom as well.  The entire point of health reform was to expand freedom from risk.  It would insure people who had the misfortune of falling ill so that they could access health services without bankrupting their futures.  And it moved us closer to the day when health insurance is wholly separate from our jobs, freeing us from dependency on our employers for our healthcare.  This is an important kind of freedom, too.

In his 1941 State of the Union address, President Franklin Roosevelt articulated four fundamental freedoms thought to be inherent to all people.  Among these was “freedom from want.”  To Roosevelt, basic protections from scarcity, risk, and poverty were necessary to truly effectuate individual freedom.  Without basic necessities, freedom was wholly illusory.  As he put it three years later, “We have come to a clear realization of the fact that true individual freedom cannot exist without economic security and independence.  Necessitous men are not free men.”

Roosevelt helped solidify the modern liberal conception of freedom—a freedom grounded in economic security.  This freedom puts affirmative obligations on government to provide a degree of protection from the risks and hazards of markets and modern life.

On the other side, the conservative (or perhaps more aptly, libertarian) conception of freedom emphasizes freedom from government.  This kind of freedom aims to protect the unbounded autonomy of the individual from government interference.  Markets are thought to be sacrosanct aggregations of autonomous individual choices, preferences, and desires.  Government intercedes on this laissez-faire freedom only by imposing its will and disrupting individual choice.

Because of the American origin story—casting off the yoke of tyrannical British authority—many seem to assume that the conservative brand of freedom has a stronger claim to our history.  The liberal alternative, it’s thought, is just a socialistic perversion concocted by pro-centralization New Dealers.  But that’s just not the case.

In his magnificent book The Story of American Freedom, historian Eric Foner chronicles the different ways that the American ideal of freedom has been deployed in political rhetoric throughout our history.  As political and social contexts have shifted, so too has the rhetoric around freedom, liberty, and independence.  As Foner shows, the dueling claims of what it means to be truly free have been with us for centuries.

The earliest seeds of the modern debate began to appear during the Jacksonian era.  Whig leaders like John Quincy Adams and Henry Clay argued that government action could enhance freedom.  In their view, the capacity to wield one’s freedom depended on one’s power, and freedom itself depended on prosperity.

Jacksonian Democrats, on the other hand, began railing against the faraway federal government as the preeminent threat to American liberty.  “Building upon laissez-faire economics,” Foner explains, “Democrats identified government-granted privilege as the root cause of social injustice.”

In the antebellum period, freedom was often invoked in relation to its looming antithesis: slavery.  Latching on to the abolitionist cause, populists and reformers condemned the industrial economy for crafting a system of “wage slavery” that restricted individual freedom at the hands of business.  The idea underlying the wage slavery critique was that the market itself posed a threat to freedom.  But this idea fell out of mainstream circulation for a time, as abolitionists resisted the characterization and held up free labor as the goal of the antislavery movement.

After the Civil War, the Gilded Age ushered in an era of laissez-faire dominance stretching from the end of the nineteenth century into the early twentieth.  Freedom was defined as liberty of contract: the ability of individuals to enter freely into economic and financial arrangements, unimpeded by government.  It was a period that grounded its sense of freedom in meritocracy and Social Darwinism.

But some resisted.  The American Economic Association was established in 1885 to combat “laissez-faire orthodoxy,” declaring, “We regard the state . . . as an educational and ethical agency whose positive assistance is one of the indispensable conditions of human progress.”  Similarly, the sociologist Lester Ward argued that “individual freedom can only come through social regulation.”

Ultimately, the association of “freedom” and Gilded Age Social Darwinism temporarily made freedom a dirty word in American politics.  The Progressive movement situated its policy goals in the language of democracy rather than freedom.

Still, the central concern of progressivism, according to New Republic editor Herbert Croly, was how Americans could be free in a modern industrial economy.  Croly explained that “Hamiltonian means” of government intervention into the economy were necessary to achieve the “Jeffersonian ends” of democratic self-determination and individual freedom.  The Progressives thought that robust, energetic government was necessary to create the social conditions for meaningful freedom.

In 1912, former president Theodore Roosevelt mounted a third-party run under the Progressive Party banner.  The party’s platform, Foner writes, “laid out a blueprint for a modern, democratic welfare state,” replete with plans for health and labor regulation, an eight-hour workday, a living wage, union protections, and a national system of social insurance for unemployment, healthcare, and old age.  Roosevelt’s freedom meant liberty from corporations, effectuated through government power and regulation.

Theodore Roosevelt’s progressive version of freedom gained wider acceptance and circulation two decades later under FDR.  On the heels of the Great Depression, the nation saw how economic devastation can render theoretical freedoms meaningless.  Accordingly, FDR sought to guarantee freedom from want, establishing welfare state programs to protect Americans from the vicissitudes of modern economic life.

Left-wing pressure in the United States helped contribute to Roosevelt’s bold social democratic platform.  But after World War II, hostility with the Soviet Union led Americans to define freedom in contrast to Soviet communism, veering once more toward laissez-faire freedom.  Moreover, the economic abundance of this era produced great faith in capitalist institutions.  “Cold War affluence,” Foner writes, “greatly expanded the constituency that identified freedom with free enterprise.”

In the 1960s, President Johnson launched a War on Poverty, but implicitly deviated from the New Deal’s diagnosis of economic struggle.  “In a departure from the New Deal, when poverty had been seen as arising from an imbalance of economic power and flawed economic institutions,” Foner writes, “in the 1960s it was attributed to an absence of skills and opportunity and a lack of proper attitudes and habits.”  Therefore, many of Johnson’s antipoverty initiatives eschewed direct interventions—like a guaranteed minimum income for the non-elderly or government-created jobs—in favor of skills training and education.  Johnson’s programming aimed to enable individual self-liberation from the “enslaving forces of his environment.”

Nonetheless, Foner marks the 1960s as the era when “freedom” began to be co-opted by conservatism and relinquished by the left.  “As the social movements spawned by the sixties adopted first ‘power’ and then ‘rights’ as their favored idiom,” he writes, “they ceded the vocabulary of ‘freedom’ to a resurgent conservatism.”  This left conservatism with free rein to equate freedom with unfettered capitalism, as Milton Friedman (and later, Ronald Reagan) did, or to proclaim resistance to government economic and anti-discrimination regulation under the guise of freedom, as Barry Goldwater did.

This inexorably led to a resurgence of turn-of-the-century Social Darwinism.  This brand of conservatism, ostensibly grounded in principles of freedom, warned against government intervention into the “natural” workings of the economy; held that the distribution of wealth reflects individual merit; and deemed the plight of the unfortunate, too, a product of their own failings.

Left unchecked, this conception of “freedom” grew to dominate political discourse in the United States.  Liberals argued for their policies in technocratic terms, promising to provide economic help to a struggling middle class.  But conservatives relentlessly assailed any intervention as Big Government stepping on the throat of individual freedom.

Liberals seemingly forgot that they too have a claim to the virtues of freedom—a claim that their intellectual predecessors invoked countless times from the nation’s founding onward.  The free market has no mind for any individual’s particular well-being, autonomy, or bodily security.  In a time of ever expanding economic volatility, “freedom from want” still resonates as an audacious ideal.  So does the social insurance platform that flows out of it.

Foner shows that in the political debates that have raged throughout our history, the side that stakes a claim to the rhetoric of freedom tends to seize the upper hand.  Freedom goes to the core of the nation’s identity, self-conception, and perceived founding purpose.  Reformers and policy advocates would be wise to listen to former House majority leader Dick Armey, who said, “No matter what cause you advocate, you must sell it in the language of freedom.”

The raw end of the free trade deal

Any economic change creates winners and losers.  “Creative destruction” is the economic concept that innovative, efficiency-promoting advancements also tend to displace segments of the preexisting status quo.  Uber generates benefits for consumers, but disrupts the taxi industry.  Automation makes consumer goods cheaper, but imperils jobs for workers.

Globalization has been one of these economic changes.  The rise of globalization promised vast new global wealth from lifting barriers on the movement of goods and people.  And on the whole, American consumers have immensely benefited from cheaper consumer goods and the bounties of global trade.  But globalization also triggered tectonic shifts in American workplaces.  Industries that, in a pre-globalized world, provided a good living to millions of working-class Americans suddenly faced international pressure and increasingly offshored their workforces to faraway countries.  Spurred by globalization, these companies picked up and left countless American communities in the dust.

In a fair political economy, the deal is supposed to be that we take a slice of the gains from broad economic innovation to compensate those on the losing end.  In theory, we could take some of the surplus wealth generated by free trade and direct it to those Americans who have been hit hardest by this creative destruction—those whose jobs have vanished and whose towns have dried up.

But that hasn’t happened.  Despite the diffuse gains of globalization, we haven’t provided much in the way of targeted help to those who have been net losers.  And those who perceive themselves to be net losers have noticed.

The missing compensation from globalization is becoming the defining political issue on both sides of the Atlantic and is scrambling political divisions.  At the New York Times, Nate Cohn writes that the Brexit vote signals “the emerging split between the beneficiaries of multicultural globalism and the working-class ethno-nationalists who feel left behind.”  Pro-Brexit votes flowed in from traditional Labour Party strongholds in working-class neighborhoods, with the dagger for “Remain” coming when 62 percent of Sunderland, a once reliable pro-Labour region, voted to “Leave.”  Similarly, at the Washington Post, Matt O’Brien writes that Brexit marks the beginning of the revolt by globalization’s losers—disproportionately concentrated in the working- and middle-classes of rich-world countries.

And let’s not forget Donald Trump, who has made walling off borders and tearing up trade deals—in effect, reversing globalization—the calling card of his nationalist campaign for president.  And who formed the core of Trump’s base?  A “certain kind of Democrat,” according to Cohn; specifically, less educated white registered Democrats who nonetheless identify as Republicans in the South, Appalachia, and the deindustrialized North.  Just like the “Leave” vote sweeping through working-class Sunderland, Trump’s ethno-populism has resonated with white working-class voters and the economic devastation they face in 2016.

So what to do?  Must globalization either march forward or else reverse itself to stem the political unrest fueling its working-class resisters?  Not necessarily.  There is a third option between globalization and no globalization: global capitalism paired with robust social insurance regimes.  As Marshall Steinbaum of the Washington Center for Equitable Growth points out, “we once solved the problem of the conflict between capitalism and ethno-nationalist backlash with social democracy.”

We’ve fallen far short of that solution.  Whether a Bernie Sanders-style social democratic overhaul or a more targeted approach to aid those displaced by free trade, we have done little to cushion Americans against economic upheaval.  The rise of globalization has dovetailed with decades of stagnant income growth, mounting inequality, and ever-growing financial strain on American families.  Yet the United States hasn’t adopted the kinds of social insurance protections needed to match the increasing volatility and insecurity of twenty-first century capitalism.  And while we provide a small program to retrain and compensate certain workers who have lost out due to free trade, we do relatively little to otherwise target help to the communities that are hit the hardest.

Which means we’ve failed to live up to our end of the bargain.  Creative destruction is immensely valuable and can do wonders to improve overall well-being.  But it inherently causes destruction, and that destruction doesn’t just dissipate with time.  We’ve reaped the diffuse benefits of globalization, but have done little to level with those bearing the targeted costs.  This failure is a big part of the discontent we’re seeing rock both sides of the Atlantic now.

The House GOP’s you’re-on-your-own replacement for Obamacare

For six years, congressional Republicans have been screaming to “Repeal and Replace” Obamacare.  They proved quite adept at making symbolic efforts toward the “Repeal” half of this talking point, voting more than 60 times to tear up the national nightmare that has driven our uninsured rate to record lows, with the most recent vote fittingly falling on Groundhog Day.

Coalescing around a single serious and workable replacement for the law, however, proved more elusive.  But this week, House Republicans finally put pen to paper, inching closer toward the conservative legislative solution to the nation’s healthcare crisis that they have promised for six years.  And it isn’t pretty.

To fill the Trump-sized conservative policy vacuum in 2016, House Republicans have been rolling out an affirmative conservative policy agenda called “A Better Way,” led by Speaker Paul Ryan.  And on Wednesday, Ryan and company released a policy paper finally detailing how GOP lawmakers would tackle the project of health reform.

The report begins with the standard right-wing airing of grievances about Obamacare.  It has caused premiums to increase; if you like your plan, Obama took it away from you; it cuts reimbursements to hospitals and providers—all the classics make a cameo.  Most notably, the plan accuses Obamacare of hampering the economy and employment—even though we’ve now had seventy-five months of continuous private sector job growth since the law was passed.  (But why abandon old disproven talking points now?)

With that aside, the plan gets to the heart of the matter: the conservative vision for health reform.  It turns out to be a warmed-over rehash of typical conservative healthcare ideas.  But assembled all together once more, it draws out what the conservative vision for health insurance really looks like: simply providing less of it.

Less insurance from private insurers for workers without employer-based coverage

For individuals who receive insurance through their employers, it’s largely business as usual under the conservative plan.  Like Obamacare, the House plan has little impact on employer-provided coverage.  But it does seek to cap the tax exclusion for employer-based insurance.  Though the plan professes otherwise, this is essentially indistinguishable from Obamacare’s much-maligned (and much-delayed) Cadillac tax on lavish health insurance plans.
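To see the mechanics, here is a minimal sketch of how a capped exclusion works.  All of the dollar figures and the marginal tax rate below are hypothetical illustrations; the plan itself specifies no numbers:

```python
# Minimal sketch of capping the tax exclusion for employer-sponsored
# insurance.  Today the full premium is excluded from a worker's
# taxable income; under a cap, plan value above the cap is taxed like
# ordinary wages.  All figures here are hypothetical.

def extra_tax_from_cap(plan_value: float, cap: float, marginal_rate: float) -> float:
    """Tax owed on the portion of an employer plan's value above the cap."""
    taxable_excess = max(0.0, plan_value - cap)
    return taxable_excess * marginal_rate

# A worker with a $14,000 plan, a $12,000 cap, and a 25% marginal rate
# owes tax on the $2,000 excess: $500.
print(extra_tax_from_cap(plan_value=14_000, cap=12_000, marginal_rate=0.25))
```

The arithmetic is the same shape as the Cadillac tax: both penalize plan value above a threshold, one through the worker’s income tax and the other through an excise tax on insurers.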

For those without employer-provided insurance, the House plan scraps Obamacare’s health exchanges and income-based subsidies.  It replaces them with a refundable tax credit for individuals to purchase “a plan of their choice, rather than the current offering of expensive, one-size-fits-all, Washington-approved products.”  Newly unbound health shoppers, freed from the shackles of Obamacare’s quality-regulated marketplaces, could take their tax credit and buy anything, anywhere called “insurance.”

The refundable tax credit would be adjusted by age, rather than by income.  The conservatives also provide no guarantee that it will cover the entire cost of insurance, as Obamacare does now for many working families.  It only promises to “help offset the cost” of insurance.  And while the tax credit will supposedly be sufficient to purchase a “typical pre-Obamacare health insurance plan,” this banks entirely on cost savings from tossing out Obamacare’s regulations around the benefits plans must offer—meaning the savings come from watering down the quality of insurance.

To that end, conservatives would promote flimsier insurance with greater out-of-pocket costs.  A conservative health reform staple, the plan encourages high-deductible insurance plans that kick in only for the most devastating healthcare costs, leaving patients on the hook for everything else.  It then couples this insurance with tax-advantaged health savings accounts, in which individuals can save to cover out-of-pocket costs.

Rather amazingly, the plan boasts that it would come to the rescue of those trapped in the Medicaid coverage gap.  “[A]s a result of Obamacare’s poor design and incentives, many Americans—who do not have an offer of health insurance through their employer—have fallen into a coverage gap between their state’s Medicaid eligibility and the eligibility criteria for the Obamacare subsidies.”  The reason this gap exists at all, of course, is that conservatives in 19 states have refused to adopt Obamacare’s Medicaid expansion, which would provide coverage to these nearly 3 million people today.

And on the Medicaid expansion, the plan complains that Obamacare, which covers more than 90 percent of the states’ expansion costs, is too generous.  It argues that this leaves the federal government covering a bigger share of the cost for near-poor adults than it does for the disabled, elderly, or children in poverty, so the federal match for this less-deserving population should be cut.  Which is pretty remarkable, given that the chief justification for conservative opposition to the Medicaid expansion in the states has been that the federal government wouldn’t follow through on its funding commitment and would leave the states holding the bag—that is, that the federal government wouldn’t be generous enough.

The plan retains two of Obamacare’s most popular features.  It would continue to let children stay on their parents’ coverage up until age 26.  And it keeps the law’s monumental guarantee that no one can be denied coverage on account of a preexisting condition.

But it unravels Obamacare in countless other significant ways.  The plan would weaken Obamacare’s age-rating rules, which currently require insurers to charge older people no more than three times the premium rate charged to the young.  Conservatives would up this to five times, increasing the cost of insurance for older Americans and chipping away at universal health care’s communal ethic.  And even this limit is a mere default suggestion, because the GOP would give states the ability to “narrow or expand” this ratio.  (“After all, states understand what their residents want and need better than Washington.”)
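As a concrete illustration of what loosening the band means, here is a minimal sketch of an age-rating cap.  Only the 3:1 and 5:1 ratios come from the plans themselves; the premium figures are hypothetical:

```python
# Minimal sketch of an age-rating cap: an insurer may charge an older
# enrollee no more than `ratio` times the youngest enrollee's premium.
# The dollar figures below are hypothetical.

def capped_premium(youngest_premium: float, unconstrained_premium: float,
                   ratio: float) -> float:
    """Premium the insurer may actually charge under the age-rating cap."""
    return min(unconstrained_premium, youngest_premium * ratio)

youngest = 200.0    # hypothetical monthly premium for a 21-year-old
actuarial = 950.0   # hypothetical unconstrained premium for a 64-year-old

for ratio in (3.0, 5.0):
    allowed = capped_premium(youngest, actuarial, ratio)
    print(f"{ratio:.0f}:1 cap -> older enrollee pays ${allowed:,.2f}/month")
# 3:1 cap -> older enrollee pays $600.00/month
# 5:1 cap -> older enrollee pays $950.00/month
```

Under the 3:1 band the cap binds and shields the older buyer; at 5:1 it stops binding for this hypothetical enrollee, and the full actuarial cost lands on older Americans.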

The House plan would allow for state experimentation in a number of ways.  And given that the plan allows for buying health insurance across state lines (removing Trump’s “lines around the states”), conservatives seem downright eager to create a race to the bottom beholden to whichever state offers the loosest regulations, weakening the quality of insurance.

And of course, the plan would repeal Obamacare’s mandate to purchase insurance.  In lieu of an individual mandate, the conservatives would penalize those who go without insurance by (1) forfeiting their continuous coverage protections (which would provide HIPAA-style insurance portability after certain life events to those in the individual insurance market, in addition to those in the employer market), and (2) imposing higher coverage costs in the future.

Less insurance from Medicaid for the poor

For the poor, the conservatives would block grant Medicaid to the states in order to cut federal funding.  This is the same tack conservatives and President Clinton took to shrink the federal obligation for welfare benefits, devolving that program to the states, where the protection the program provides has been allowed to shrivel away.

The House conservatives would pay the states a fixed amount to manage their own Medicaid programs.  By putting a cap on federal Medicaid spending and turning the program over to the states, conservatives claim to be looking out for the “freedom and flexibility” of the states.  “For too long, states have been treated like junior partners in the oversight and management of the Medicaid program[,]” the plan mourns.

But ultimately, the plan admits, the real purpose of block granting Medicaid is to “[r]educe federal funding over the long term.”  Conservatives would kick the healthcare cost conundrum down to the states, who face their own fiscal pressures and are constitutionally blocked from running budget deficits.  At the end of the day, the conservative block-grant scheme will shore up the federal budget by providing less health insurance coverage to the poor.

No more guaranteed single-payer Medicare for retirees

For retirees, the GOP would transform Medicare from a single-payer, guaranteed benefit system into a competitive marketplace with only a guaranteed contribution toward premiums from the federal government.  Under the new system, seniors would receive a subsidy from the government, which they would then take to a marketplace where they could pick from a variety of competing private health insurance plans.  Sicker seniors would receive greater benefits, and low-income seniors would receive additional cost-sharing subsidies.  Premium support would be means-tested, paying less to high-income seniors.

Stop me if this sounds familiar.  This structure is nearly identical to Obamacare’s health exchanges for those who lack employer-provided insurance.  Which is ironic, given the litany of horrors that the House GOP rattled off at the beginning of their report.  If Obamacare is such a nightmare, why would Republicans want to enact the same reform for retirees?

Importantly, in the mix of insurance options on the conservatives’ new retiree health marketplace, one stands out: traditional Medicare.  Under this plan, Medicare becomes a public option competing with private plans for enrollees.  But what happened to the conservative fear that private insurers could never compete with the pricing of a public option?  Given the plan’s objections to Obamacare, maybe this is a tacit admission that competition from a public option would help constrain premium costs.

Notably, the GOP provides no specifics on how exactly it would calculate the level of premium support to Medicare recipients.  Low-balling the scheduled increase in the premium support subsidy has been the key to the estimated cost savings in Ryan’s previous Medicare privatization plans.

Ultimately, the House GOP wants to dismantle traditional, wildly popular, single-payer Medicare and submerge it as one option among competing private health insurance plans.  Maybe all seniors will just choose the Medicare option anyway, but it’s a first stab at shifting seniors away from a government-provided guaranteed-benefit program and toward private sector plans.

* * *

In sum, the conservative health plan would shift Obamacare’s exchange-based structure from workers to retirees.  If you get insurance through your job, your coverage continues largely untouched.  Everyone else just gets less insurance.

And ultimately, the conservative vision of health reform just repeats the same tune.  The solution is always to block grant: to block grant Medicaid to the states, to block grant a tax credit to individuals, and to block grant premium support to seniors.  Block granting gets the federal government out of the business—and away from the risk—of actually insuring people.

But who picks up that risk?  Individual Americans do.  With high-deductible plans and HSAs, individuals bear the burden of funding their own healthcare for all but the most catastrophic injuries.  With block-granted Medicaid, the poor wind up on the receiving end of federal and state budget slashing, with little institutional voice to stick up for them.

In truth, hacking away at the security provided by insurance has long been a conservative goal.  The whole point of insurance is to spread risk from the individual to the larger community.  But conservatives fear that this creates moral hazard, weakening the individual incentive to spend less and engage in responsible behavior.  To control costs and impose individual responsibility, people need to have “skin in the game”—to have their own dollars on the line.

The GOP dresses this up in language about promoting choice, flexibility, and consumer-driven care.  “One way to immediately empower Americans and put them in the driver’s seat of their health care decisions is to expand consumer-driven health care,” the report claims.  But what being in the driver’s seat really means is that you’re on the hook if you get sick.  By shifting health care risk back on to individuals, conservatives erode the very point of insurance.  So after six years, “Repeal and Replace” still just means “You’re on Your Own.”

The case for a children’s basic income

A guaranteed basic income is becoming the pipe dream du jour on the left, making it all the way to a favorable review in the pages of the New Yorker.  Experiments are underway in towns in Finland and the Netherlands to give residents a government-provided minimum income.  Venture capital firm Y Combinator is planning a basic income pilot program in Oakland, and the non-profit GiveDirectly is trying to alleviate extreme poverty in East Africa with a pilot of its own.  In Switzerland, a basic income ballot referendum went down to defeat, but more than half a million Swiss voters supported creating such an entitlement.

It’s an intriguing idea, and one that serves a broad range of policy and ideological interests.  To Silicon Valley types, a basic income can prepare for technology-driven labor displacement.  To some liberals, it combats poverty and rising inequality.  Others harbor a utopian curiosity about what happens when, unbound by scarcity and the grind of eking out a living, individuals are freed to pursue their passions, ideas, and humanitarian instincts and flourish into their best selves.  And to conservatives, a basic income can elegantly replace most of the welfare state altogether.

As interesting as a basic income may be, it remains a politically farfetched scheme for the United States in the near future.  But can we seize the principles of a basic income to take incremental steps to help those who would gain from it the most?

I’ve argued that those who support a basic income should make providing a child allowance to families with children one of their top priorities.  Children are entirely morally blameless for their poverty, and poverty holds back their academic achievement, making a mockery of the American ideal of equality of opportunity.  They also stand to gain the most from living in households with more money.

And among children, the youngest stand to gain the most from more income support.  In Congress, Rep. Rosa DeLauro has introduced a bill to create a new Young Child Tax Credit to provide relief to families with children under three years old, recognizing both that families need support during this special (and costly) time in their lives, and that young children would benefit immensely from extra resources.

The Empirical Case for a Young Child Allowance

The strongest empirical case for boosting the household incomes of poor kids comes from a 2014 paper by Greg Duncan, Katherine Magnuson, and Elizabeth Votruba-Drzal.  Reviewing the evidence, Duncan et al. conclude that “children from poor families that see a boost in income do better in school and complete more years of schooling.”

Duncan et al. begin by reminding us of the long-lasting, destructive consequences of child poverty.  Some 16 million American children live in poverty—more than one in five.  And the disadvantages of growing up in poverty reverberate for a lifetime, suppressing a child’s years in school, halving her earnings in her 30s, slashing the hours she’ll work as an adult, increasing the odds of her landing on food stamps in adulthood, and raising her likelihood of ill health.  Growing up in poverty doubles the probability that young boys will be arrested during their lives, and it quintuples the odds of teenage pregnancy among girls.

Poor children enter kindergarten already behind their peers.  On every basic metric, from recognizing letters to counting, a yawning gap separates low-income children from their more privileged classmates.  Poverty thus strongly appears to impede child development early in life.

Duncan et al. explore three different explanations for why poverty hinders development: family and environmental stress; resources and investment; and culture.  Under the family and environmental stress theory, poor households face a mountain of constraints and limits that produce harmful stresses.  Parents contend with economic pressure and are forced to cut back on basic essentials.  This pressure causes psychological stress, producing depressive and hostile feelings.  It can also distort decision-making and render parents less able to pursue long-term goals.  Financial scarcity creates marital tension and tends to lead to developmentally harmful parenting techniques.  And of course, economic scarcity leads to a whole host of bads, like dilapidated housing, dangerous neighborhoods, struggling schools, and exposure to pollution.  Studies show that when children are chronically exposed to elevated stress levels, the region of the brain responsible for self-regulation suffers.

Under the resources and investment theory, poor parents are too crunched for time and money to fully invest in their children.  Because of their parents’ financial constraints and work obligations, poor children “lag behind their wealthier counterparts in part because parents have fewer resources to invest in them.”  Poor parents are more often at the mercy of inflexible and irregular work hours, making it harder to make time for their children.  And poor children are exposed to far fewer enrichments like books, computers, and camps than wealthy children—an inequality that has grown substantially over the last forty years.

Under the culture theory, the structural impediments from living in poverty produce maladaptive norms and behaviors in individuals, which are then transmitted to children and cause another generation of poverty.  In this view, poverty and the welfare state inadvertently promote single motherhood, male joblessness, and increased crime.  A “culture of poverty” also influences parents to focus on keeping their children safe, regulating their behavior, and enforcing discipline, whereas better-off parents focus on letting their children grow and flourish.

Scholars like William Julius Wilson have pushed back against the cultural explanations of poverty, showing, for example, that poor women strive for marriage and motherhood, but run up against high rates of male incarceration and unemployment that make marriage unattainable or less desirable in practice.  Others acknowledge the role of structural social and economic factors, but aim to impart middle-class norms and behaviors to low-income children in order to compensate for the apparent political and cultural immovability of entrenched poverty.

While poverty is abhorrent at all ages of childhood, Duncan et al. show that it is most destructive at the earliest ages.  “[D]uring early childhood,” they explain, “the brain develops critically important neural functions and structures that will shape future cognitive, social, emotional, and health outcomes.”  Poverty gravely interferes with this development.

Duncan et al. point to the famous high-quality childcare studies demonstrating the importance of the earliest years of life.  The long-term benefits to at-risk children placed in high-quality care in the Abecedarian and Perry Preschool programs show that infancy and toddlerhood are fruitful points for positive interventions in a child’s development.

Next, Duncan et al. evaluate the empirical evidence for boosting family income to help child development, focusing on experimental and quasi-experimental studies of policy changes and pilot programs.  Between 1968 and 1982, six towns across the United States experimented with a negative income tax—essentially a basic income-style precursor to the modern Earned Income Tax Credit.  Studies measuring outcomes for children receiving these benefits found significant achievement gains for children in elementary school, but no corresponding impact for older children.  The studies did not measure effects in early childhood.
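For readers unfamiliar with the mechanics, here is a minimal sketch of how a negative income tax computes its payment.  The guarantee level and phase-out rate below are hypothetical, not the experiments’ actual parameters:

```python
# Minimal sketch of negative income tax mechanics: the government
# guarantees an income floor that phases out as earnings rise.
# The guarantee and phase-out rate are hypothetical, not the
# parameters used in the 1968-1982 experiments.

def nit_payment(earnings: float, guarantee: float = 10_000.0,
                phaseout_rate: float = 0.5) -> float:
    """Annual government payment under a simple negative income tax."""
    return max(0.0, guarantee - phaseout_rate * earnings)

for earnings in (0, 8_000, 20_000, 30_000):
    payment = nit_payment(earnings)
    print(f"earnings ${earnings:>6,} -> NIT payment ${payment:>9,.2f}, "
          f"total income ${earnings + payment:>9,.2f}")

# A family with no earnings receives the full $10,000 guarantee; the
# payment shrinks by 50 cents per dollar earned and hits zero at the
# $20,000 break-even point (guarantee / phase-out rate).
```

The EITC inverts this shape, phasing in with earnings rather than guaranteeing a floor, which is why the NIT reads as the more basic income-like precursor.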

Welfare reform in the 1990s, which encouraged parents to work and thereby increase their incomes, also provided an opportunity to study the effect of income gains on poor children.  Studies found that when welfare reform’s wage supplement programs took effect, children in early elementary school scored significantly higher on achievement tests.  In fact, a $3,000 increase in annual income was associated with an achievement gain of one-fifth of a standard deviation for these children.  Again, however, no gains were seen among older children.

Between 1993 and 1996, Congress greatly expanded the generosity of the Earned Income Tax Credit, which rewards work among low-income families.  Researchers found that the expansion of the EITC coincided with improved academic achievement among low-income children between the ages of 8 and 14.

In Canada, researchers studied the impact of variations between provinces in the country’s national child benefit on test scores.  Among children between 6 and 10 years old, more generous benefits were associated with both higher math scores and a lower likelihood of receiving a learning disability diagnosis.  There were also signs of gains among younger children, particularly boys.

Last, in North Carolina, a tribal government opened a casino and began paying $6,000 to each member of the tribe every year.  A study found that children in families receiving these casino payments had increased school attendance rates and were more likely to graduate high school.

Duncan et al. conclude that these experimental and quasi-experimental studies show the largest academic gains among elementary school-aged children.  Gains among adolescents were more muted, though added income did boost educational attainment, increasing years of schooling and rates of high school graduation.  The authors note that few of these studies estimated the impact of higher family income during early childhood.

Duncan et al. also point out that in the non-experimental Panel Study of Income Dynamics, researchers found that among families earning below $25,000, a boost to annual household income before a child turned 5 was associated with that child working more hours, earning more, and relying less on food stamps as an adult.  Older children saw no statistically significant impact.

The authors then examine the policy implications of these findings.  “If the evidence ultimately shows that poverty early in childhood is most detrimental to development during childhood and adolescence,” they posit, “then it may make sense to consider income-transfer policies that provide more income to families with young children.”  They specifically suggest creating more generous supplements to the EITC and/or the Child Tax Credit for families with young children—essentially the proposal introduced by Rep. DeLauro.  This mirrors the strategy adopted by several European countries like Germany and France that offer age-dependent income subsidies for families with young children.

It would also draw on conditional cash transfer programs, which give cash grants to individuals who engage in beneficial behavior, like working; the EITC works this way, as do various programs in the developing world.  In New York City, the Bloomberg administration tested a Family Rewards program from 2007 to 2009, which gave cash incentives to promote a host of work, education, and health goals.  While the program lifted significant numbers of New Yorkers out of poverty, it failed to boost the academic achievement of elementary or middle school students.  However, the program was weighed down by a confusing multitude of incentives and irregular payments.  Family Rewards 2.0 is now underway in the Bronx and Memphis, Tennessee.

Finally, Duncan et al. warn that the policy implications cut both ways: just as young children would gain from policies that boost household incomes, they would also suffer from policy choices that slash incomes and in-kind benefits like food stamps—the kinds of cuts pushed by Speaker Paul Ryan and other congressional conservatives.

Social Security for the Young

A young child allowance, or some permutation of it, would be a confluence of several of the principles animating the push for a basic income.  For progressives, a young child allowance would combat inequality and poverty among the most vulnerable Americans.  For techies, it would invest in the faculties of young children who may someday dream up the next frontier of innovation.  If Social Security for the elderly is a reward for a life’s work, “Social Security for the young” is an investment in productive years to come—a pair of policies nicely bookending the life cycle.

While it would be wonderful if Congress saw the light and quickly passed Rep. DeLauro’s young child tax credit, Washington gridlock makes that a virtual impossibility in the near term.  And sure, if you squint hard enough and suspend some disbelief, you can see how a President Hillary Clinton’s childcare reform plan could morph into a quasi-child allowance through negotiations with congressional Republicans.  But I’m not holding my breath.

In the meantime, then, policy experimentation around this issue will need to take place at smaller, more local units of government.  As I’ve written, this could take the shape of a child allowance with or without conditions: “One could imagine an enterprising city, school district, or even a well-funded and ambitious charter school trying out an initiative that (1) provides each family with a monthly ‘scholar success stipend’ for each of their children, and (2) conditions receipt of a full payment on children meeting certain basic expectations in school.”

This would seem like a particularly apt project for adventurous charter schools, whose appeal stems in part from their ability to innovate around providing added resources for low-income children, like highly paid teachers and wraparound social services.  Why not try providing income support, too?

So the socially impactful venture capital firms and non-profits of the world should step up and partner with a town or a network of schools to see what happens when families with poor children suddenly get more money.  As the empirical evidence shows, these experiments should foremost target the youngest children, who will reap the greatest developmental gains.  Just as a childhood in poverty can reverberate for decades, so too can a childhood of ample means—one that allows children to be children, to develop and thrive.  That’s an investment that would truly transform our society, and it would be a big legitimizing down payment toward a basic income for all.

The American tradition of big government

The myth that the American economy’s traditional and natural state is laissez-faire and government-free dominates the conventional understanding of American history.  To some, a free market unencumbered by government meddling has forever been sacrosanct to the American project.  A lightly regulated economy is part and parcel of American freedom, it’s thought.

It turns out that this view deeply misunderstands our history, both distant and recent.  Academics are challenging the conventional wisdom, showing that government action has been an integral part of the American economy throughout our history.  Jacob Hacker and Paul Pierson recently demonstrated in their book American Amnesia how government made the crucial public investments necessary to lay the foundation for broad-based, rapid economic growth in the twentieth century.  The economy works best when government and market work in tandem.

But the history of federal intervention into the market economy stretches back far earlier, dating from the earliest days of the republic.  In The Case for Big Government, Jeff Madrick lays out exactly that: a case for robust programmatic regulation and government action in the twenty-first century.  Like Hacker and Pierson, Madrick sees government action as an essential ingredient to a healthy and fair modern economy.

One particularly valuable section of Madrick’s case traces the history of federal intervention in the economy from the nation’s founding to the 1950s.  As Washington’s secretary of the treasury, Alexander Hamilton favored a strong and active federal government that imposed excise taxes and tariffs on imports.  He endorsed public investments in infrastructure, fought for the establishment of a central bank, and promoted subsidies to get new industry off the ground.  He also injected the federal government into state economies by assuming the wartime debts of the states.

Thomas Jefferson too came to promote an active government in the economy.  As a Virginia legislator, he proposed giving land grants to all citizens without property.  As president, he set aside federal land for schools and embraced federal financing of roads.  And of course, he greatly expanded the geographic sphere of the United States by stretching the bounds of his perceived constitutional authority to sign off on the Louisiana Purchase.

James Madison adjusted government’s role as the United States began to shift from an agricultural economy.  He believed wage labor would displace land ownership as the core of the economy, so he enacted a new tariff to protect domestic manufacturing.  He also supported a second national bank.

Notably, Madison would not support federally funded internal improvements and transportation; both he and Jefferson thought these required a constitutional amendment.  John Quincy Adams abandoned this hesitance and made massive investments in roads and canals, setting the precedent for a federal role in developing the nation’s physical infrastructure.

Following this long early run of federal intervention to lay the foundations of a growing economy, the presidency of Andrew Jackson momentarily halted the pro-intervention consensus.  Under Jackson’s rugged individualist ethos, the federal government pivoted back toward laissez faire, devolving economic intervention to state and local governments.  In the meantime, states took on important public transportation projects, like the Erie Canal in New York.  States financed more than two-thirds of the cost of new canals, and also provided generous land grants and subsidies to railroads.  These public investments were essential in developing the nation’s transportation network.

During the Reconstruction era after the Civil War, the federal government provided generous land grants to subsidize the development of transcontinental railroads.  These were expenditures akin to today’s tax exemptions or tax credits: revenue uncollected or resources un-monetized by the government to encourage certain private activity.  Even earlier, the government had made land grants to colleges under the Morrill Act and expanded the postal system.

Beginning in the late 1890s, the Progressive era saw government intervention into the economy accelerate.  At the federal level, government sought to break up industrial consolidation through anti-trust actions.  State and local governments increasingly invested in health programs, city services, education, and public goods like parks.

These investments produced huge gains, including a five-fold increase in the number of Americans completing high school between 1910 and 1930.  Public health investments to fight disease and build sewage systems also ushered in a shining new “age of sanitation.”

During this time, states also began regulating the workplace to protect employees, enacting minimum wages, maximum-hours laws for women, child labor restrictions, and widows’ pensions.  They undertook important regulations to protect retirees and consumers.  And they helped spread the reach of energy by establishing and regulating electric and gas utilities.

And of course, activist government came to a crescendo under Franklin Roosevelt’s New Deal.  On the heels of the Great Depression, Roosevelt created a flurry of new government programs to spark the economy and protect Americans’ livelihoods.  The FDIC came into being to insure bank deposits.  The SEC was created to patrol Wall Street.  Glass-Steagall was enacted to separate investment banks from plain vanilla commercial banks.  A national minimum wage guaranteed basic pay for all working Americans for the first time.  Robust public works and infrastructure investment put people back to work during the Depression while improving the nation’s physical stock.  The G.I. Bill made it easier for a generation of returning soldiers to pay for school and housing, and facilitated the modern middle-class life.  Social Security eliminated the elder poverty produced by laissez-faire capitalism and promised retirees a decent living.  And income taxes were cranked up to pay for the war effort and welfare state expansion.  Top marginal rates crept above 90 percent, creating a de facto maximum wage.

After the Roosevelt and Truman generation of welfare state dominance, President Eisenhower too took up moderate efforts to keep government involved in the economy.  He expanded Social Security to reach an additional ten million workers.  And he created the national highway system—yet another mass transportation project to facilitate economic activity and travel.

And so on.  The story of American economic triumph is one featuring a large and active role for government throughout.  As Madrick explains, government interventions in the economy have several benefits.  First, government can step in to provide public goods that would be under-provided by the market’s profit motive.  Second, government can be the focal point for necessary and useful coordination to create economies of scale, such as railroads, water systems, and highways.  Third, government can stimulate the economy by boosting the economic standing of workers, whether through a minimum wage, labor protections, or union rights.  And fourth, government intervention can provide macroeconomic stability through Keynesian demand management when the private sector turns sluggish.

All of which adds up to an economy that’s stronger, fairer, and more resilient.  As Madrick and others have shown, whether judged through the lens of historical experience or economic empirics, there has long been a compelling case for big government in the United States.