Resurrecting the public option

Don’t call it a comeback, but the public health insurance option is having a boomlet of sorts.  After being unceremoniously axed from the Affordable Care Act by the centrist Democrats who provided the clinching Senate votes in 2010, the idea for a widely available government-run health insurance plan spent years in the political wilderness.

But murmurs on the left about resurrecting the public option have been percolating lately.  And the public option’s biggest boost came from President Obama this week.  Writing a reflection on the effects of the Affordable Care Act for the Journal of the American Medical Association, Obama called on Congress to “revisit a public plan to compete alongside private insurers in areas of the country where competition is limited.”

Obama’s re-endorsement of a plan he shelved six years ago is significant.  The public option was a favorite among liberals, who saw it as a compromise on single-payer that gave Americans the freedom to choose insurance from outside the private sector.  And if a public option could price like Medicare, it could impose significant cost pressure on competing private insurance plans by benefiting from government purchasing power and holding down administrative costs.

Granted, Obama knows a public option has no chance of getting through Congress, so he won’t be converting his JAMA piece into legislative language anytime soon.  And he also seems to envision a much more limited public option than what was originally debated during national health reform.  His refocus on the public option comes principally from a desire to expand consumer choice particularly in those markets that lack a robust marketplace of competing insurers.  As the president notes, some 12 percent of Obamacare enrollees live in counties with only one or two insurance options.  These tend to be lightly populated rural areas that private insurers aren’t eager to do business in.  Obama wants a public option in these specific areas as a means of injecting competition into stagnant marketplaces.

It’s worth remembering that during the health reform negotiations, there was briefly a bipartisan proposal to include a public option “trigger,” under which the public option would go into effect only in states that fell short of sufficient insurer competition and cost control.  This proposal was endorsed by both Obama’s then-Chief of Staff Rahm Emanuel and Republican Senator Olympia Snowe.  The trigger was modeled on a feature of the Republican-led 2003 Medicare prescription drug benefit, which included a similar trigger if competition lagged in that market.  Had the Emanuel-Snowe proposal made it into the final bill, Congress would have no occasion to “revisit” the public option today—such an insurance option in non-competitive areas would be automatic.

But Obama isn’t the only one rediscovering the public option.  Hillary Clinton too recently endorsed building on Obamacare to provide a public option.  She also seems to envision the public option existing on a state-by-state basis, and wants to work within the law’s existing infrastructure to do so without involving Congress.  Specifically, she promises to “work with interested governors, using current flexibility under the Affordable Care Act, to empower states to establish a public option choice.”  What Clinton presumably has in mind is working through Obamacare’s innovation waivers to let states build and run their own public options.

This proposal, coupled with her plan to let people above a certain age (but below retirement age) buy into Medicare, was seen as a meaningful effort to appropriate some of Bernie Sanders’s agenda.  And indeed, Sanders, who has called for a Medicare-for-all single payer system, applauded Clinton’s new healthcare plans, saying that it was an “extremely important initiative” and “an important step forward.”

It’s worth remembering, however, that the details of the public option matter immensely, particularly details regarding its reimbursement structure, federalism, and eligibility criteria.  The strongest version of the public option would offer reimbursement rates tied to Medicare’s, benefiting from Medicare’s purchasing power and ability to offer providers low rates.  It would also be a single national plan run by the federal government in all fifty states.  This would maximize purchasing power and minimize administrative overhead.  And the plan would be offered to a broad base of customers, such as all non-elderly adults without access to employer- or government-provided insurance.

Eroding these characteristics leads to a weaker public option.  Based on her description, Clinton’s plan sounds like it will be run by individual states, meaning it likely won’t be tied to Medicare reimbursement rates.  Tying it to Medicare rates would almost certainly require an act of Congress, and it’s hard to see how Clinton or individual states could do so on their own.  And any move to tie the public option to Medicare rates would draw cries of unfairness from insurers afraid they couldn’t compete, and howls of socialism from conservatives fearing creeping single-payer.  Clinton’s plan therefore appears to be a relatively weak version of the public option.  (It’s not clear what eligibility requirements she would attach to it.)

But Clinton does seem to see the state-based public option as only an intermediate stopgap to something stronger.  According to her campaign website: “As she did in her 2008 campaign health plan, and consistently since then, Hillary supports a ‘public option’ to reduce costs and broaden the choices of insurance coverage for every American. To make immediate progress toward that goal, Hillary will work with interested governors, using current flexibility under the Affordable Care Act, to empower states to establish a public option choice.”  For Clinton, then, a weak public option may be the best she can do through existing executive authority, but the long-term end-game may be a more robust government plan.

Indeed, as healthcare experts Helen Halpin and Peter Harbage note, the public option got its start as a state-based idea in California in the early 2000s.  Only later did politicians like John Edwards and policy experts like Jacob Hacker build on the state-based idea to propose a stronger national public option.  Perhaps we need to return to the idea’s state-level roots to truly resurrect the public option from health reform’s scrap heap.

The meaning of freedom

In the debate over national health reform in 2009-2010, the law’s conservative Tea Party opponents regularly claimed the mantle of freedom.  Where reform supporters relied on moral and technocratic arguments to make the case that health care must be affordable for all, the Don’t-Tread-On-Me backlash to reform was largely allowed to monopolize the powerful American virtue of freedom.

It was a curious sort of freedom that conservatives endorsed.  At its extreme, opposition to the Affordable Care Act stood for the freedom to succumb to the consequences of going uninsured.  This conception of freedom defended the “choice” to go without health insurance as a calculated, rational personal decision that ought to be respected.  Compelling individuals to carry insurance amounted to a tyrannical invasion of this autonomous decision.

The fury over moral hazard, coming shortly after the maligned bank bailout during the 2008 financial crisis, spilled into the health reform debate.  The economic concept of “moral hazard” holds that individuals and firms must be allowed to feel the consequences of their choices; shielding them from risk perpetuates irresponsible behavior.  Just as bailing out the banks was thought to reward reckless financial conduct, bailing out those who opted to go without insurance would let reckless decision-making off the hook, too.  Call it a “You Reap What You Sow” brand of freedom.

Though their case was more muted, pro-reform policymakers could stake a claim to enhancing freedom as well.  The entire point of health reform was to expand freedom from risk.  It would insure people who had the misfortune of falling ill so that they could access health services without bankrupting their future.  And it moved us closer to the day when health insurance is wholly separate from our jobs, freeing us from dependency on our employers for our healthcare.  This is an important kind of freedom, too.

In his 1941 State of the Union address, President Franklin Roosevelt articulated four fundamental freedoms thought to be inherent to all people.  Among these was “freedom from want.”  To Roosevelt, basic protections from scarcity, risk, and poverty were necessary to truly effectuate individual freedom.  Without basic necessities, freedom was wholly illusory.  As he put it three years later, “We have come to a clear realization of the fact that true individual freedom cannot exist without economic security and independence.  Necessitous men are not free men.”

Roosevelt helped solidify the modern liberal conception of freedom—a freedom rooted in economic security.  This freedom puts affirmative obligations on government to provide a degree of protection from the risks and hazards of markets and modern life.

On the other side, the conservative (or perhaps more aptly, libertarian) conception of freedom emphasizes freedom from government.  This kind of freedom aims to protect the unbounded autonomy of the individual from government interference.  Markets are thought to be sacrosanct aggregations of autonomous individual choices, preferences, and desires.  Government intercedes on this laissez-faire freedom only by imposing its will and disrupting individual choice.

Because of the American origin story—casting off the yoke of tyrannical British authority—many seem to assume that the conservative brand of freedom has a stronger claim to our history.  The liberal alternative, it’s thought, is just a socialistic perversion concocted by pro-centralization New Dealers.  But that’s just not the case.

In his magnificent book The Story of American Freedom, historian Eric Foner chronicles the different ways that the American ideal of freedom has been deployed in political rhetoric throughout our history.  As political and social contexts have shifted, so too has the rhetoric around freedom, liberty, and independence.  As Foner shows, the dueling claims of what it means to be truly free have been with us for centuries.

The earliest seeds of the modern debate begin to appear during the Jacksonian era.  Whig leaders like John Quincy Adams and Henry Clay argued that government action could enhance freedom.  To them, the capacity to wield one’s freedom depended on one’s power, and freedom itself depended on prosperity.

Jacksonian Democrats, on the other hand, began railing against the faraway federal government as the preeminent threat to American liberty.  “Building upon laissez-faire economics,” Foner explains, “Democrats identified government-granted privilege as the root cause of social injustice.”

In the antebellum period, freedom was often employed in relation to its looming antithesis: slavery.  Latching on to the abolitionist cause, populists and reformers condemned the industrial economy for crafting a system of wage slavery that restricted individual freedom at the hands of business.  The idea underlying wage slavery was that the market posed a threat to freedom.  But this idea fell out of mainstream circulation for a time, as abolitionists resisted the characterization and held up free labor as the goal of the antislavery movement.

After the Civil War, the Gilded Age ushered in the dominance of laissez-faire freedom, stretching from the end of the nineteenth century into the early twentieth.  Freedom was defined as the liberty of contract: the ability of individuals to freely enter into economic and financial arrangements ought to be unimpeded.  It was a period that grounded its sense of freedom in meritocracy and Social Darwinism.

But some resisted.  The American Economic Association was established in 1885 to combat “laissez-faire orthodoxy,” declaring, “We regard the state . . . as an educational and ethical agency whose positive assistance is one of the indispensable conditions of human progress.”  Similarly, the sociologist Lester Ward argued that “individual freedom can only come through social regulation.”

Ultimately, the association of “freedom” and Gilded Age Social Darwinism temporarily made freedom a dirty word in American politics.  The Progressive movement situated its policy goals in the language of democracy rather than freedom.

Still, the central concern of progressivism, according to New Republic editor Herbert Croly, was how Americans could be free in a modern industrial economy.  Croly explained that “Hamiltonian means” of government intervention into the economy were necessary to achieve the “Jeffersonian ends” of democratic self-determination and individual freedom.  The Progressives thought that robust, energetic government was necessary to create the social conditions for meaningful freedom.

In 1912, former president Theodore Roosevelt campaigned for president under the Progressive Party mantle.  The party’s platform, Foner writes, “laid out a blueprint for a modern, democratic welfare state,” replete with plans for health and labor regulation, an eight-hour work day, a living wage, union protections, and a national system of social insurance for unemployment, healthcare, and old age.  Roosevelt’s freedom meant liberty from corporations effectuated through government power and regulation.

Theodore Roosevelt’s progressive version of freedom gained wider acceptance and circulation two decades later under FDR.  On the heels of the Great Depression, the nation saw how economic devastation can render theoretical freedoms meaningless.  Accordingly, FDR sought to guarantee freedom from want, establishing welfare state programs to protect Americans from the vicissitudes of modern economic life.

Left-wing pressure in the United States helped contribute to Roosevelt’s bold social democratic platform.  But after World War II, hostility between the Soviet Union and the United States made Americans define freedom in contrast to the Soviet Union, veering once more back toward laissez faire freedom.  Moreover, the economic abundance during this time produced great faith in capitalist institutions.  “Cold War affluence,” Foner writes, “greatly expanded the constituency that identified freedom with free enterprise.”

In the 1960s, President Johnson launched a War on Poverty, but implicitly deviated from the New Deal’s diagnosis of economic struggle.  “In a departure from the New Deal, when poverty had been seen as arising from an imbalance of economic power and flawed economic institutions,” Foner writes, “in the 1960s it was attributed to an absence of skills and opportunity and a lack of proper attitudes and habits.”  Therefore, many of Johnson’s antipoverty initiatives eschewed direct interventions—like a guaranteed minimum income for the non-elderly or government-created jobs—in favor of skills training and education.  Johnson’s programming aimed to enable individual self-liberation from the “enslaving forces of his environment.”

Nonetheless, Foner marks the 1960s as the era when “freedom” began to be co-opted by conservatism and relinquished by the left.  “As the social movements spawned by the sixties adopted first ‘power’ and then ‘rights’ as their favored idiom,” he writes, “they ceded the vocabulary of ‘freedom’ to a resurgent conservatism.”  This left conservatism with free rein to equate freedom with unfettered capitalism, as Milton Friedman (and later, Ronald Reagan) did, or to proclaim resistance to government economic and anti-discrimination regulation under the guise of freedom, as Barry Goldwater did.

This inexorably led to a resurgence of Gilded Age-style Social Darwinism.  This brand of conservatism, ostensibly grounded in principles of freedom, warned against government intervention into the “natural” workings of the economy; held that the distribution of wealth reflects individual merit; and deemed the plight of the unfortunate, too, a product of their own failings.

Left unchecked, this conception of “freedom” grew to dominate political discourse in the United States.  Liberals argued for their policies in technocratic terms, promising to provide economic help to a struggling middle class.  But conservatives relentlessly assailed any intervention as Big Government stepping on the throat of individual freedom.

Liberals seemingly forgot that they too have a claim to the virtues of freedom—a claim that their intellectual predecessors invoked countless times from the nation’s founding onward.  The free market has no mind for any individual’s particular well-being, autonomy, or bodily security.  In a time of ever expanding economic volatility, “freedom from want” still resonates as an audacious ideal.  So does the social insurance platform that flows out of it.

Foner shows that in the political debates that have raged throughout our history, the side that stakes a claim to the rhetoric of freedom tends to seize the upper hand.  Freedom goes to the core of the nation’s identity, self-conception, and perceived purpose of its founding.  Reformers and policy advocates would be wise to listen to Richard Armey, former House Republican leader, who said, “No matter what cause you advocate, you must sell it in the language of freedom.”

The raw end of the free trade deal

Any economic change creates winners and losers.  “Creative destruction” is the economic concept that innovative, efficiency-promoting advancements also tend to displace segments of the preexisting status quo.  Uber generates benefits for consumers, but disrupts the taxi industry.  Automation makes consumer goods cheaper, but imperils jobs for workers.

Globalization has been one of these economic changes.  The rise of globalization promised vast new global wealth from lifting barriers on the movement of goods and people.  And on the whole, American consumers have immensely benefited from cheaper consumer goods and the bounties of global trade.  But globalization also triggered tectonic shifts in American workplaces.  Industries that, in a pre-globalized world, provided a good living to millions of working-class Americans suddenly faced international pressure and increasingly offshored their workforces to faraway countries.  Spurred by globalization, these companies picked up and left countless American communities in the dust.

In a fair political economy, the deal is supposed to be that we take a slice of the gains from broad economic innovation to compensate those on the losing end.  In theory, we could take some of the surplus wealth generated by free trade and direct it to those Americans who have been hit hardest by this creative destruction—those whose jobs have vanished and whose towns have dried up.

But that hasn’t happened.  Despite the diffuse gains of globalization, we haven’t provided much in the way of targeted help to those who have been net losers.  And those who perceive themselves to be net losers have noticed.

The missing compensation from globalization is becoming the defining political issue on both sides of the Atlantic and is scrambling political divisions.  At the New York Times, Nate Cohn writes that the Brexit vote signals “the emerging split between the beneficiaries of multicultural globalism and the working-class ethno-nationalists who feel left behind.”  Pro-Brexit votes flowed in from traditional Labour Party strongholds in working-class neighborhoods, with the dagger for “Remain” coming when 62 percent of Sunderland, a once reliable pro-Labour region, voted to “Leave.”  Similarly, at the Washington Post, Matt O’Brien writes that Brexit marks the beginning of the revolt by globalization’s losers—disproportionately concentrated in the working- and middle-classes of rich-world countries.

And let’s not forget Donald Trump, who has made walling off borders and tearing up trade deals—in effect, reversing globalization—the calling card of his nationalist campaign for president.  And who formed the core of Trump’s base?  A “certain kind of Democrat,” according to Cohn; specifically, less educated white registered Democrats who nonetheless identify as Republicans in the South, Appalachia, and the deindustrialized North.  Just like the “Leave” vote sweeping through working-class Sunderland, Trump’s ethno-populism has resonated with white working-class voters and the economic devastation they face in 2016.

So what to do?  Must globalization either march forward unchecked or reverse itself to stem the political unrest among its working-class resisters?  Not necessarily.  There is a third option between globalization and no globalization: global capitalism paired with robust social insurance regimes.  As Marshall Steinbaum of the Washington Center for Equitable Growth points out, “we once solved the problem of the conflict between capitalism and ethno-nationalist backlash with social democracy.”

We’ve fallen far short of that solution.  Whether a Bernie Sanders-style social democratic overhaul or a more targeted approach to aid those displaced by free trade, we have done little to cushion Americans against economic upheaval.  The rise of globalization has dovetailed with decades of stagnant income growth, mounting inequality, and ever-growing financial strain on American families.  Yet the United States hasn’t adopted the kinds of social insurance protections needed to match the increasing volatility and insecurity of twenty-first century capitalism.  And while we provide a small program to retrain and compensate certain workers who have lost out due to free trade, we do relatively little to otherwise target help to the communities that are hit the hardest.

Which means we’ve failed to live up to our end of the bargain.  Creative destruction is immensely valuable and can do wonders to improve overall well-being.  But it inherently causes destruction, and that destruction doesn’t just dissipate with time.  We’ve reaped the diffuse benefits of globalization, but have done little to compensate those bearing the concentrated costs.  This failure is a big part of the discontent we’re seeing rock both sides of the Atlantic now.

The House GOP’s you’re-on-your-own replacement for Obamacare

For six years, congressional Republicans have been screaming to “Repeal and Replace” Obamacare.  They proved quite adept at making symbolic efforts toward the “Repeal” half of this talking point, voting more than 60 times to tear up the national nightmare that has driven our uninsured rate to record lows, with the most recent vote fittingly falling on Groundhog Day.

Coalescing around a single serious and workable replacement for the law, however, proved more elusive.  But this week, House Republicans finally put pen to paper, inching closer toward the conservative legislative solution to the nation’s healthcare crisis that they have promised for six years.  And it isn’t pretty.

To fill the Trump-sized conservative policy vacuum in 2016, House Republicans have been rolling out an affirmative conservative policy agenda called “A Better Way,” led by Speaker Paul Ryan.  And on Wednesday, Ryan and company released a policy paper finally detailing how GOP lawmakers would tackle the project of health reform.

The report begins with the standard right-wing airing of grievances about Obamacare.  It has caused premiums to increase; if you like your plan, Obama took it away from you; it cuts reimbursements to hospitals and providers—all the classics make a cameo.  Most notably, the plan accuses Obamacare of hampering the economy and employment—even though we’ve now had seventy-five months of continuous private sector job growth since the law was passed.  (But why abandon old disproven talking points now?)

With that aside, the plan gets to the heart of the matter: the conservative vision for health reform.  It turns out to be a barely warmed-over rehash of typical conservative healthcare ideas.  But assembled together once more, it draws out what the conservative vision for health insurance really looks like: simply providing less of it.

Less insurance from private insurers for workers without employer-based coverage

For individuals who receive insurance through their employers, it’s largely business as usual under the conservative plan.  Like Obamacare, the House plan has little impact on employer-provided coverage.  But it does seek to cap the tax exclusion for employer-based insurance.  Though the plan professes otherwise, this is essentially indistinguishable from Obamacare’s much-maligned (and much-delayed) Cadillac tax on lavish health insurance plans.

For those without employer-provided insurance, the House plan scraps Obamacare’s health exchanges and income-based subsidies.  It replaces them with a refundable tax credit for individuals to purchase “a plan of their choice, rather than the current offering of expensive, one-size-fits-all, Washington-approved products.”  Newly unbound health shoppers, freed from the shackles of Obamacare’s quality-regulated marketplaces, could take their tax credit and buy anything, anywhere called “insurance.”

The refundable tax credit would be adjusted by age, rather than by income.  The conservatives also provide no guarantee that it will cover the entire cost of insurance, as Obamacare does now for many working families.  It only promises to “help offset the cost” of insurance.  While the tax credit will supposedly be sufficient to purchase a “typical pre-Obamacare health insurance plan,” this entirely banks on cost savings from tossing out Obamacare’s regulations around the type of benefits plans must offer—meaning the savings come from watering down the quality of insurance.

To that end, conservatives would promote flimsier insurance with greater out-of-pocket costs.  A conservative health reform staple, the plan encourages high-deductible insurance plans that kick in only for the most devastating healthcare costs, leaving patients on the hook for everything else.  It then couples this insurance with tax-advantaged health savings accounts, in which individuals can save to cover out-of-pocket costs.

Rather amazingly, the plan boasts that it would come to the rescue of those trapped in the Medicaid coverage gap.  “[A]s a result of Obamacare’s poor design and incentives, many Americans—who do not have an offer of health insurance through their employer—have fallen into a coverage gap between their state’s Medicaid eligibility and the eligibility criteria for the Obamacare subsidies.”  The reason this gap exists at all, of course, is that conservatives in 19 states have refused to adopt Obamacare’s Medicaid expansion, which would provide coverage to these nearly 3 million people today.

And on the Medicaid expansion, the plan complains that Obamacare, which covers more than 90 percent of the states’ expansion costs, is too generous.  It argues that this leaves the federal government covering a bigger share of the cost for near-poor adults than it does for the disabled, elderly, or children in poverty, so the federal match for this less-deserving population should be cut.  Which is pretty remarkable, given that the chief justification for conservative opposition to the Medicaid expansion in the states has been that the federal government wouldn’t follow through on its funding commitment and would leave the states holding the bag–that is, that the federal government wouldn’t be generous enough.

The plan retains two of Obamacare’s most popular features.  It would continue to let children stay on their parents’ coverage up until age 26.  And it keeps the law’s monumental guarantee that no one can be denied coverage on account of a preexisting condition.

But it unravels Obamacare in countless other significant ways.  The plan would weaken Obamacare’s age-rating rules, which currently require insurers to charge older people no more than three times the premium rate charged to the young.  Conservatives would up this to five times, increasing the cost of insurance for older Americans and chipping away at universal health care’s communal ethic.  And even this limit is a mere default suggestion, because the GOP would give states the ability to “narrow or expand” this ratio.  (“After all, states understand what their residents want and need better than Washington.”)

The House plan would allow for state experimentation in a number of ways.  And given that the plan allows for buying health insurance across state lines (removing Trump’s “lines around the states”), conservatives seem downright eager to create a race to the bottom, with insurers flocking to whichever state offers the loosest regulations and weakening the quality of insurance.

And of course, the plan would repeal Obamacare’s mandate to purchase insurance.  In lieu of an individual mandate, the conservatives would penalize those who go without insurance by (1) stripping them of continuous coverage protections (which would provide HIPAA-style insurance portability after certain life events to those in the individual insurance market, in addition to those in the employer market), and (2) charging them higher coverage costs in the future.

Less insurance from Medicaid for the poor

For the poor, the conservatives would block grant Medicaid to the states in order to cut federal funding.  This is the same tack conservatives and President Clinton took to shrink the federal obligation for welfare benefits, devolving that program to the states, where the protection the program provides has been allowed to shrivel away.

The House conservatives would pay the states a fixed amount to manage their own Medicaid programs.  By putting a cap on federal Medicaid spending and turning the program over to the states, conservatives claim to be looking out for the “freedom and flexibility” of the states.  “For too long, states have been treated like junior partners in the oversight and management of the Medicaid program[,]” the plan mourns.

But ultimately, the plan admits, the real purpose of block granting Medicaid is to “[r]educe federal funding over the long term.”  Conservatives would kick the healthcare cost conundrum down to the states, which face their own fiscal pressures and are constitutionally blocked from running budget deficits.  At the end of the day, the conservative block-grant scheme will shore up the federal budget by providing less health insurance coverage to the poor.

No more guaranteed single-payer Medicare for retirees

For retirees, the GOP would transform Medicare from a single-payer, guaranteed benefit system into a competitive marketplace with only a guaranteed contribution toward premiums from the federal government.  Under the new system, seniors would receive a subsidy from the government, which they would then take to a marketplace where they could pick from a variety of competing private health insurance plans.  Sicker seniors would receive greater benefits, and low-income seniors would receive additional cost-sharing subsidies.  Premium support would be means-tested, paying less to high-income seniors.

Stop me if this sounds familiar.  This structure is nearly identical to Obamacare’s health exchanges for those who lack employer-provided insurance.  Which is ironic, given the litany of horrors that the House GOP rattled off at the beginning of their report.  If Obamacare is such a nightmare, why would Republicans want to enact the same reform for retirees?

Importantly, in the mix of insurance options on the conservatives’ new retiree health marketplace, one stands out: traditional Medicare.  Under this plan, Medicare becomes a public option competing with private plans for enrollees.  But what happened to the conservative fear that private insurers could never compete with the pricing of a public option?  Given the plan’s objections to Obamacare, maybe this is a tacit admission that competition from a public option would help constrain premium costs.

Notably, the GOP provides no specifics on how exactly it would calculate the level of premium support to Medicare recipients.  Low-balling the scheduled increase in the premium support subsidy has been the key to the estimated cost savings in Ryan’s previous Medicare privatization plans.

Ultimately, the House GOP wants to dismantle traditional, wildly popular, single-payer Medicare and submerge it as one option among competing private health insurance plans.  Maybe all seniors will just choose the Medicare option anyway, but it’s a first stab at shifting seniors away from a government-provided guaranteed-benefit program and toward private sector plans.

* * *

In sum, the conservative health plan would shift Obamacare’s exchange-based structure from workers to retirees.  If you get insurance through your job, your coverage continues largely untouched.  And if you’re anyone else, you just get less insurance.

And ultimately, the conservative vision of health reform just repeats the same tune.  The solution is always to block grant: to block grant Medicaid to the states, to block grant a tax credit to individuals, and to block grant premium support to seniors.  Block granting gets the federal government out of the business—and away from the risk—of actually insuring people.

But who picks up that risk?  Individual Americans do.  With high-deductible plans and HSAs, individuals bear the burden of funding their own healthcare for all but the most catastrophic injuries.  With block-granted Medicaid, the poor wind up on the receiving end of federal and state budget slashing, with little institutional voice to stick up for them.

In truth, hacking away at the security provided by insurance has long been a conservative goal.  The whole point of insurance is to spread risk from the individual to the larger community.  But conservatives fear that this creates moral hazard, weakening the individual incentive to spend less and engage in responsible behavior.  To control costs and impose individual responsibility, people need to have “skin in the game”—to have their own dollars on the line.

The GOP dresses this up in language about promoting choice, flexibility, and consumer-driven care.  “One way to immediately empower Americans and put them in the driver’s seat of their health care decisions is to expand consumer-driven health care,” the report claims.  But what being in the driver’s seat really means is that you’re on the hook if you get sick.  By shifting health care risk back on to individuals, conservatives erode the very point of insurance.  So after six years, “Repeal and Replace” still just means “You’re on Your Own.”

The case for a children’s basic income

A guaranteed basic income is becoming the pipe dream du jour on the left, making it all the way to a favorable review in the pages of the New Yorker.  Experiments are underway in towns in Finland and the Netherlands to give all citizens a government-provided minimum income.  Venture capital firm Y Combinator is planning a basic income pilot program in Oakland, and non-profit GiveDirectly is trying to alleviate extreme poverty in East Africa with a pilot of its own.  In Switzerland, a basic income ballot referendum went down to defeat, but more than half a million Swiss voters supported creating such an entitlement.

It’s an intriguing idea, and one that serves a broad range of policy and ideological interests.  To Silicon Valley types, basic income can prepare for technology-driven labor displacement.  To some liberals, basic income combats poverty and rising inequality.  To others, it’s an exercise in utopian curiosity: what happens when individuals, unbound by scarcity and the grind of eking out a living, are freed to pursue their passions, ideas, and humanitarian instincts and flourish into their best selves?  And to conservatives, a basic income can elegantly replace most of the welfare state altogether.

As interesting as a basic income may be, it’s a politically farfetched scheme for the United States in the near future, to say the least.  But can we seize the principles of a basic income to take incremental steps to help those who would gain from it the most?

I’ve argued that those who support a basic income should make providing a child allowance to families with children one of their top priorities.  Children are entirely morally blameless for their poverty, and poverty holds back their academic achievement, making a mockery of the American ideal of equality of opportunity.  They also stand to gain the most from living in households with more money.

And among children, the youngest stand to gain the most from additional income support.  In Congress, Rep. Rosa DeLauro has introduced a bill to create a new Young Child Tax Credit to provide relief to families with children under three years old, recognizing both that families need support during this special (and costly) time in their lives, and that young children would benefit immensely from extra resources.

The Empirical Case for a Young Child Allowance

The strongest empirical case for boosting the household incomes of poor kids comes from a 2014 paper by Greg Duncan, Katherine Magnuson, and Elizabeth Votruba-Drzal.  Reviewing the evidence, Duncan et al. conclude that “children from poor families that see a boost in income do better in school and complete more years of schooling.”

Duncan et al begin by reminding us of the long-lasting destructive consequences of child poverty.  Some 16 million American children live in poverty—more than one in five.  And the disadvantages to a child growing up in poverty reverberate for a lifetime, suppressing her years in school, halving her earnings in her 30s, slashing the hours she’ll work as an adult, increasing the odds of her landing on food stamps in adulthood, and raising her likelihood of ill health.  Growing up in poverty doubles the probability of young boys being arrested during their lives, and it quintuples the odds of teenage pregnancy among girls.

Poor children are already behind their peers by the time they enter kindergarten.  On every basic metric, from recognizing letters to counting, a yawning gap separates low-income children from their more privileged peers.  Poverty thus strongly appears to impede child development early in life.

Duncan et al explore three different explanations for why poverty hinders development: family and environmental stress; resources and investment; and culture.  Under the family and environmental stress theory, poor households face a mountain of constraints and limits that produce harmful stresses.  Parents contend with economic pressure and are forced to cut back on basic essentials.  This pressure causes psychological stress, producing depressive and hostile feelings.  This psychological stress can also distort decision-making and render parents less able to pursue long-term goals.  Financial scarcity creates marital tension and tends to lead to developmentally harmful parenting techniques.  And of course, economic scarcity leads to a whole host of bads, like dilapidated housing, dangerous neighborhoods, struggling schools, and exposure to pollution.  Studies show that when children are chronically exposed to elevated stress levels, the region of their brains responsible for self-regulation suffers.

Under the resources and investment theory, poor parents are too crunched for time and money to fully invest in their children.  Because of their parents’ financial constraints and work obligations, poor children “lag behind their wealthier counterparts in part because parents have fewer resources to invest in them.”  Poor parents are more often at the mercy of inflexible and irregular work hours, making it harder to make time for their children.  And poor children are exposed to far fewer enrichments like books, computers, and camps than wealthy children—an inequality that has grown substantially over the last forty years.

Under the culture theory, the structural impediments from living in poverty produce maladaptive norms and behaviors in individuals, which are then transmitted to children and cause another generation of poverty.  In this view, poverty and the welfare state inadvertently promote single motherhood, male joblessness, and increased crime.  A “culture of poverty” also influences parents to focus on keeping their children safe, regulating their behavior, and enforcing discipline, whereas better-off parents focus on letting their children grow and flourish.

Scholars like William Julius Wilson have pushed back against the cultural explanations of poverty, showing, for example, that poor women strive for marriage and motherhood, but run up against high rates of male incarceration and unemployment that make marriage unattainable or less desirable in practice.  Others acknowledge the role of structural social and economic factors, but aim to impart middle-class norms and behaviors to low-income children in order to compensate for the apparent political and cultural immovability of entrenched poverty.

While poverty is abhorrent at all ages of childhood, Duncan et al show that it’s most destructive at the earliest ages.  “[D]uring early childhood,” they explain, “the brain develops critically important neural functions and structures that will shape future cognitive, social, emotional, and health outcomes.”  Poverty gravely interferes with this development.

Duncan et al point to the famous high-quality childcare studies demonstrating the importance of the earliest years of life.  The long-term benefits to at-risk children placed in high-quality care in the Abecedarian and Perry Preschool programs show that infancy and toddlerhood are fruitful points to make positive interventions in a child’s development.

Next, Duncan et al evaluate the empirical evidence for boosting family income to help child development, focusing on experimental and quasi-experimental randomized studies from policy changes and pilot programs.  Between 1968 and 1982, six towns across the United States experimented with a negative income tax, essentially a basic income-style precursor to the modern Earned Income Tax Credit.  Studies measuring outcomes for children receiving these benefits found significant achievement gains for children in elementary school, but no corresponding impact for older children.  The studies did not measure the effects in early childhood.

Welfare reform in the 1990s, which encouraged parents to work and thereby increase their incomes, also provided an opportunity to study the effect of income gains on poor children.  Studies found that when welfare reform’s wage supplement programs took effect, children in early elementary school scored significantly higher on achievement tests.  In fact, a $3,000 increase in annual income was associated with an achievement gain of one-fifth of a standard deviation for these children.  Again, however, no gains were seen among older children.

Between 1993 and 1996, Congress greatly expanded the generosity of the Earned Income Tax Credit, which rewards work among low-income families.  Researchers found that the expansion of the EITC coincided with improved academic achievement among low-income children between the ages of 8 and 14.

In Canada, researchers studied the impact of variations between provinces in the country’s national child benefit on test scores.  Among children between 6 and 10 years old, more generous benefits were associated with both higher math scores and a lower likelihood of receiving a learning disability diagnosis.  There were also signs of gains among younger children, particularly boys.

Last, in North Carolina, a tribal government opened a casino and began paying $6,000 to each member of the tribe every year.  A study found that children in families receiving these casino payments had increased school attendance rates and were more likely to graduate high school.

Duncan et al conclude that these experimental and quasi-experimental studies point to the largest academic gains among elementary school-aged children.  Gains among adolescents were more muted, though added income did boost measures of educational attainment such as years of schooling and high school graduation.  The authors note that few of these studies estimated the impacts of higher family income during the early childhood period.

Duncan et al also point out that in the non-experimental Panel Study of Income Dynamics, researchers found that among families earning below $25,000, a boost to annual household income before a child turned 5 was associated with that child working more hours and earning more as an adult, and with lower rates of food stamp receipt.  Income boosts later in childhood showed no statistically significant impact.

The authors then examine the policy implications of these findings.  “If the evidence ultimately shows that poverty early in childhood is most detrimental to development during childhood and adolescence,” they posit, “then it may make sense to consider income-transfer policies that provide more income to families with young children.”  They specifically suggest creating more generous supplements to the EITC and/or the Child Tax Credit for families with young children—essentially the proposal introduced by Rep. DeLauro.  This mirrors the strategy adopted by several European countries like Germany and France that offer age-dependent income subsidies for families with young children.

It would also draw on conditional cash transfer programs, like the EITC at home and similar programs in the developing world, that give cash grants to individuals who engage in beneficial behavior, like working.  In New York City, the Bloomberg administration tested a Family Rewards program from 2007 to 2009, which gave cash incentives to promote a host of work, education, and health goals.  While the program lifted significant numbers of New Yorkers out of poverty, it failed to boost the academic achievement of elementary or middle school students.  However, the program was weighed down by a confusing multitude of incentives and irregular payments.  Family Rewards 2.0 is now underway in the Bronx and Memphis, Tennessee.

Finally, Duncan et al conclude by warning that the policy implications cut both ways: just as young children would gain from policies that boost household incomes, they would also suffer from policy choices that slash incomes and in-kind benefits like food stamps—the kinds of cuts pushed by Speaker Paul Ryan and other congressional conservatives.

Social Security for the Young

A young child allowance, or some permutation of it, would be a confluence of several of the principles animating the push for basic income.  For progressives, a young child allowance would combat inequality and poverty among the most vulnerable Americans.  For techies, it would invest in the faculties of young children who might someday dream up the next frontier of innovation.  If Social Security for the elderly is a reward for a life’s work, “Social Security for the young” is an investment for future productive years to come—a pair of policies nicely bookending the life cycle.

While it would be wonderful if Congress saw the light and quickly passed Rep. DeLauro’s young child tax credit, Washington gridlock makes that a virtual impossibility in the near term.  And sure, if you squint hard enough and suspend some disbelief, you can see how a President Hillary Clinton’s childcare reform plan could morph into a quasi-child allowance through negotiations with congressional Republicans.  But I’m not holding my breath.

In the meantime, then, policy experimentation around this issue will need to take place in smaller, more local units of government.  As I’ve written, this could take the shape of a child allowance with or without conditions: “One could imagine an enterprising city, school district, or even a well-funded and ambitious charter school trying out an initiative that (1) provides each family with a monthly ‘scholar success stipend’ for each of their children, and (2) conditions receipt of a full payment on children meeting certain basic expectations in school.”

This would seem like a particularly apt place for adventurous charter schools, whose appeal stems in part from their ability to innovate around providing added resources for low-income children, like highly-paid teachers and wraparound social services.  Why not try providing income support, too?

So the socially impactful venture capital firms and non-profits of the world should step up and partner with a town or a network of schools to see what happens when families with poor children suddenly get more money.  As the empirical evidence shows, these experiments should foremost target the youngest children who will reap the greatest developmental gains.  Just as a childhood in poverty can reverberate for decades, so too can a childhood of ample means — one that allows children to be children, to develop and thrive.  That’s an investment that would truly transform our society, and would be a big legitimizing down payment toward a basic income for all.


The American tradition of big government

The myth that the American economy’s traditional and natural state is laissez-faire and government-free dominates the conventional understanding of American history.  To some, a free market unencumbered by government meddling has forever been sacrosanct to the American project.  A lightly regulated economy, it’s thought, is part and parcel of American freedom.

It turns out that this view deeply misunderstands our history, both far and recent.  Academics are challenging the conventional wisdom, showing that government action has been an integral part of the American economy throughout our history.  Jacob Hacker and Paul Pierson recently demonstrated in their book American Amnesia how government made the crucial public investments necessary to lay the foundation for broad-based rapid economic growth in the twentieth century.  The economy works best when the government works in tandem.

But the history of federal intervention into the market economy stretches back far earlier, dating from the earliest days of the republic.  In The Case for Big Government, Jeff Madrick lays out exactly that: a case for robust programmatic regulation and government action in the twenty-first century.  Like Hacker and Pierson, Madrick sees government action as an essential ingredient to a healthy and fair modern economy.

One particularly valuable section of Madrick’s case traces the history of federal intervention into the economy from the nation’s founding to the 1950s.  As Washington’s secretary of the treasury, Alexander Hamilton favored a strong and active federal government that imposed excise taxes and tariffs on imports.  He endorsed public investments in infrastructure; fought for the establishment of a central bank; and promoted subsidies to get new industry off the ground.  He also injected the federal government into state economies by assuming the wartime debts of the states.

Thomas Jefferson too came to promote an active government in the economy.  As a Virginia legislator, he proposed giving land grants to all citizens without property.  As president, he set aside federal land for schools and embraced federal financing of roads.  And of course, he greatly expanded the geographic sphere of the United States by stretching the bounds of his perceived constitutional authority to sign off on the Louisiana Purchase.

James Madison adjusted government’s role as the United States began to shift away from an agricultural economy.  He believed wage labor would displace land ownership as the core of the economy, so he signed a new tariff into law to protect domestic manufacturing.  He also supported a second national bank.

Notably, Madison would not support federally-funded internal improvements and transportation.  Both he and Jefferson thought that this required a constitutional amendment.  John Quincy Adams abandoned this reticence and made massive investments in roads and canals, setting the precedent for a federal role in developing the nation’s physical infrastructure.

Following this long early period of consistent federal intervention to provide the foundation and investment to develop a growing economy, the presidency of Andrew Jackson momentarily halted the pro-federal intervention consensus.  Under Jackson’s rugged individualist ethos, the federal government pivoted back toward laissez faire, devolving economic intervention to state and local government.  In the meantime, states took on important public transportation projects, like the Erie Canal in New York.  States financed more than two-thirds of the cost of new canals, and also provided generous land grants and subsidies to railroads.  These public investments were essential in developing the nation’s transportation network.

During the Reconstruction era after the Civil War, the federal government provided generous federal land grants to subsidize the development of transnational railroads.  These were expenditures akin to tax exemptions or tax credits today: revenue uncollected or resources un-monetized by the government to encourage certain private activity.  Even earlier, the government made generous land grants to colleges under the Morrill Act, and expanded the postal system.

Beginning in the late 1890s, the Progressive era saw government intervention into the economy accelerate.  At the federal level, government sought to break up industrial consolidation through anti-trust actions.  State and local governments increasingly invested in health programs, city services, education, and public goods like parks.

These investments yielded huge gains, including a five-fold increase in the number of Americans completing high school between 1910 and 1930.  They also ushered in a shining new “age of sanitation,” as public health spending fought disease and built out sewage systems.

During this time, states also began regulating the workplace to protect employees, enacting minimum wages, maximum-hours protections for women, child labor laws, and widows’ pensions.  They undertook important regulations to protect retirees and consumers.  And they helped spread the reach of energy by establishing and regulating electric and gas utilities.

And of course, activist government came to a crescendo under Franklin Roosevelt’s New Deal.  On the heels of the Great Depression, Roosevelt created a flurry of new government programs to spark the economy and protect Americans’ livelihoods.  The FDIC came into being to insure bank deposits.  The SEC was created to patrol Wall Street.  Glass-Steagall was enacted to separate investment banks from plain vanilla commercial banks.  A national minimum wage guaranteed basic pay for all working Americans for the first time.  Robust public works and infrastructure investment put people back to work during the Depression while improving the nation’s physical stock.  The G.I. Bill made it easier for a generation of returning soldiers to pay for school and housing, and facilitated the modern middle-class life.  Social Security took aim at the elder poverty produced by laissez-faire capitalism and promised retirees a decent living.  And income taxes were cranked up to pay for the war effort and welfare state expansion.  Top marginal rates crept above 90 percent, creating a de facto maximum wage.

After the Roosevelt and Truman generation of welfare state dominance, President Eisenhower too took up moderate efforts to keep government involved in the economy.  He expanded Social Security to reach an additional ten million workers.  And he created the national highway system—yet another mass transportation project to facilitate economic activity and travel.

And so on.  The story of American economic triumph is one featuring a large and active role for government throughout.  As Madrick explains, government interventions in the economy have several benefits.  First, government can step in to provide public goods that would be under-provided by the market’s profit motive.  Second, government can be the focal point for necessary and useful coordination to create economies of scale, such as railroads, water systems, and highways.  Third, government can stimulate the economy by boosting the economic standing of workers, whether through a minimum wage, labor protections, or union rights.  And fourth, government intervention can provide macroeconomic stability through Keynesian demand management when the private sector turns sluggish.

All of which adds up to an economy that’s stronger, fairer, and more resilient.  As Madrick and others have shown, whether judged through the lens of historical experience or economic empirics, there has long been a compelling case for big government in the United States.

Dealing prosperity

At Jacobin, Doug Henwood accuses Bryce Covert of New Deal-bashing in a piece she wrote in the New York Times connecting Donald Trump’s ethno-nationalist nostalgia movement to the racial exclusions carved into 1930s social programs.  “Large national programs that radically changed the country kept America great specifically for white men,” Covert points out, noting that Social Security, unemployment insurance, minimum wage, and union protections “transformed the country and created a booming middle class. But they all purposefully left out most women and minorities.”  Henwood objects to Covert’s “emphasiz[ing] only the exclusions [of the New Deal], and identif[ying] them as the source of the nostalgias that Donald Trump, not previously known as a friend of social programs, has been basing his campaign on.”

It’s true that the New Deal submitted to the bigotry of its time.  It particularly left African-Americans out of its post-Depression and post-war mass economic uplift.  But it’s also true that these very programs dramatically improved economic security for millions, creating a booming and predominantly white middle class.  The lesson of the New Deal is that government has awe-inspiring power to define and create the middle class.

Covert relies on Ira Katznelson’s history of the New Deal’s race exclusions, When Affirmative Action Was White.  Katznelson explains that in order to get Social Security passed through Congress, President Roosevelt and congressional liberals acceded to the demands of powerful Southern Democrats, who wanted government benefits for whites while retaining Jim Crow’s racial hierarchy.  As such, Southern Democratic committee chairs insisted that Social Security be structured to exclude predominantly black occupations like agricultural work and domestic service.  For the first generation of Social Security, most black workers were unable to participate in the nation’s groundbreaking retirement security program.

African-Americans drew little benefit from New Deal efforts to expand home ownership, as well.  The Roosevelt administration created the Federal Housing Administration to guarantee home loans and expand credit for Americans to buy property.  But in the 1930s and ’40s, black neighborhoods were routinely redlined out of the zones eligible for FHA-backed loans.  Black families also faced overt discrimination, real estate steering, violence, and intimidation if they so much as looked into purchasing a home in a white neighborhood, with or without government-sponsored credit.

Even the G.I. Bill—seemingly universal to all who served—had race discrimination baked into its very structure.  Long hailed as a triumph in building the modern middle class, the G.I. Bill was passed by Congress in 1944 to provide benefits for returning soldiers to buy a home or attend college.  But while it was facially race neutral, Katznelson argues that the G.I. Bill was nonetheless implemented in a predictably discriminatory fashion because its federalist structure disadvantaged blacks.  While early versions of the bill envisioned a single national benefits office, Southern Democrats in Congress insisted that G.I. Bill benefits be administered by decentralized state and local offices.  Because their votes and committee approval were necessary for the bill to pass, they prevailed, and implementation was left to state and local authorities.

The consequence was that black servicemen in the South had to seek benefits from segregationist local officials.  African-Americans returning from the war were thus routinely denied home loans from community banks, even though these loans were guaranteed by the Veterans Administration.  They were denied admission into the still-segregated flagship universities in the South.  This led to an oversupply of applicants to the South’s all-black colleges, widely regarded as substandard schools with minimal resources in the era of supposed “separate but equal” education.

With black colleges at capacity, some African-Americans used their G.I. Bill benefits to attend vocational and training programs.  But many programs that sprouted up were of dubious quality, and amounted to little more than schemes of private profiteering off of a new government benefit while providing little in the way of real education.  (This legacy persists today, with for-profit universities targeting poor and minority non-traditional students while providing a subpar education at exorbitant cost and debt.)

The New Deal defined the vision of a broad middle-class American dream, founded on a college degree, home ownership, and secure retirement.  But its limitations and exclusions populated that dream only with white Americans.  It would be years before blacks could fully enjoy any of these benefits.  And the reverberations of both the New Deal’s discrimination and its mass economic uplift for whites remain with us.  Today, African-Americans possess only a fraction of the wealth that whites have, in part because of the lost returns from this still-recent history of economic advancement denied.

But it’s worth recognizing the other implication of the moral ambiguities of the New Deal: that government has the power to generate a new middle class.  On the heels of the Great Depression and World War II, Roosevelt’s muscular liberalism set out the terms of a middle-class life and put the government to work to provide Americans with access to these key elements.  Because government steered benefits toward whites, they became the middle class.  Because these same benefits were denied to African-Americans, they were largely left out of the middle class.  The contours of a middle-class life were legislated by government.

The discrimination baked into the New Deal is a regrettable vestige of the political realities of its time.  But the New Deal also demonstrated a tremendous power: government’s ability to expand prosperity.  In an age when the middle-class dream is slipping further out of view because of rising inequality, stagnating wage growth, and the mounting cost of living, we can wield government power again to shore up the middle class.  In fact, Katznelson’s prescription for restorative justice to compensate for the discriminatory New Deal and other social ills looks a lot like Roosevelt’s agenda itself, just without the color lines: subsidized mortgages, generous education and training grants, small business loans, subsidized childcare, guaranteed health insurance, and more.

Katznelson wants to redo the post-war programs to bring African-Americans into the middle class.  And indeed we should.  To renew the New Deal in the twenty-first century would expand access to American prosperity for all.  For the ultimate lesson of Katznelson’s New Deal history is that, by and large, the modern American middle class was created by government programs.

Bargaining up to a child allowance

Jeff Spross has a piece at The Week arguing that Hillary Clinton’s childcare reform proposal, while laudable, is still inferior to a straight universal child allowance.  Spross agrees that her plan to “make sure no American family spends more than 10 percent of their budget on child care [. . .] would be a big deal,” but identifies a pair of problems with the proposal. First, subsidies to families will be offered through kludgy tax credits, making for a tedious application process and an inefficient delivery system. Second, these tax credits will be paid directly to childcare providers, effectively steering children toward center-based care and away from other options like family-provided care. (Spross presumes that the fleshed-out version of Clinton’s plan will look a lot like the Center for American Progress’s proposal. I do too.)

These are both valid criticisms of the Clinton/CAP childcare plan. Doling out subsidies through the tax code adds needless complexity to our social policy, and leaves out those without the awareness or resources to access these submerged benefits. Direct payment is undoubtedly a simpler option, both for the families eligible for benefits and the government agencies administering them.

Interestingly, Spross’s second critique—that childcare subsidies push families toward commercial care—is one more typically levied by conservative critics. “[I]f you want to get the tax credit,” Spross writes, “you have to want child care in the first place. The plan involves a certain failure of imagination that assumes all families want to have both spouses in the workforce.”

The National Review struck a similar note when President Obama proposed an expanded tax credit for childcare services. “Most mothers, especially of small children, prefer to work part-time or drop out of the labor force for a time,” it asserted. “Commercial child care is the least favored option for most parents. The president’s plan encourages families to do what they do not wish to do and penalizes them for refusing.” Instead, the National Review argued for an expanded Child Tax Credit so parents could do as they wish with the money.

This isn’t a new position for conservatives. In 2005, Ross Douthat and Reihan Salam wrote in the Weekly Standard to propose a similar solution to the childcare problem. “To address the concerns of women,” they wrote, “Democrats tend to focus on child care subsidies, parental leave, and other measures that are better understood as ‘market-friendly’ than as ‘family-friendly,’ in that the goal is to make it as easy as possible for parents to maximize their time in the paid labor force.” Under their preferred approach, “the government could offer subsidies to those who provide child care in the home, and pension credits that reflect the economic value of years spent in household labor.”

Whether raised by the right or the left, these strike me as valid concerns about the structure of a childcare subsidy. In its proposal, CAP makes an unapologetic case for nudging parents toward center-based care, which it sees as an “educational necessity” for the development of young children, whereas custodial care generally “does not prepare children for school.” Still, there are undoubtedly countless parents who would prefer to raise their children from home during their formative earliest years. Too many families are coerced into the dual-earner labor market by sheer economic necessity.

Where Douthat, Salam, and the National Review propose a bigger Child Tax Credit in place of childcare subsidies, Spross prefers a universal child allowance. As a general matter, a Child Tax Credit is essentially a child allowance with kludgy application and delivery hurdles added in. And if not made refundable, a bigger CTC cuts out the neediest families—a major problem for most progressives.

But suppose liberals and conservatives bridged their differences to (a) make the new child benefit refundable, and (b) pay it directly to families instead of care providers. Liberals get protection for low-income families, while conservatives secure benefits for stay-at-home parents. This would be something like an advance tax credit for families with children: a flat benefit for all who qualify. Essentially, it would approximate a universal child allowance.

This would still have some kludge baked into it, since it would ostensibly be a tax credit. But as I’ve argued, the second-best solution given our system’s exhausting preference for tax expenditures is to simply provide a periodic payment option for certain tax credits. Families could thus receive child subsidies in a series of regular payments like a child allowance, rather than in a springtime lump-sum tax refund.
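As a rough illustration of the periodic-payment idea, here is a minimal sketch of how an annual refundable credit could be paid out as regular advances that approximate a child allowance; the $3,000-per-child credit amount and the monthly schedule are hypothetical assumptions, not features of Clinton’s plan, the CAP proposal, or current law.

```python
# A minimal sketch of paying a refundable child credit in periodic advances
# rather than as a springtime lump-sum refund.  The credit amount and the
# monthly schedule are hypothetical assumptions, not any actual proposal.

ANNUAL_CREDIT_PER_CHILD = 3_000   # assumed refundable credit per child, per year
PAYMENTS_PER_YEAR = 12            # assumed monthly advance schedule

def monthly_advance(num_children: int) -> float:
    """Advance payment a qualifying family would receive each month."""
    return num_children * ANNUAL_CREDIT_PER_CHILD / PAYMENTS_PER_YEAR

# Under these assumptions, a family with two children would receive $500 a
# month throughout the year instead of a $6,000 refund at tax time.
print(monthly_advance(2))  # 500.0
```

The mechanics are trivially simple, which is rather the point: the kludge lives in the eligibility and filing rules, not in the payment math.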

Conservatives may recoil at this idea as thinly disguised welfare, particularly for families who would gain from the tax system while paying little into it. But perhaps they’d be willing to play ball in order to turn a subsidy for commercial childcare into one that rewards home-based care too.

Of course, this all assumes a functioning and good faith legislative process, something that is neither assured nor even likely at this point. But even if productive compromise legislation is farfetched, it’s interesting that the answer to valid conservative critiques of liberal childcare reform winds up being a more progressive solution.

A child allowance by another name would be an even bolder and more ambitious program than a simple childcare subsidy.  If Clinton’s significant childcare proposal wound up being bargained into an allowance-plus-kludge type of policy in Congress, we’d all be better off for it.

Obama’s middle-class pay raise

President Obama unilaterally raised the pay for millions of Americans this week.  With a proposed minimum wage increase stymied by Republicans in Congress, Obama once again looked to his executive toolkit for ways he could act singlehandedly without legislative action.  And act he did, guaranteeing more workers extra pay for overtime work.

The federal overtime threshold was part of the first minimum wage legislation in 1938.  Before this week, only salaried workers making up to $23,660 a year were owed overtime pay by their employers when they worked more than forty hours a week.  The threshold has sat at that same level since 2004, slowly eroded by rising costs of living and inflation over the past decade.

Obama more than doubled it.  Beginning in December, salaried workers earning up to $47,476 must be paid time-and-a-half when they work more than 40 hours per week.  The Labor Department estimates that some 4.2 million additional workers will now be eligible for overtime pay, while other researchers predict it could help as many as 13.5 million workers.  The threshold will also now automatically increase every three years, meaning this worker protection will no longer be weakened by long periods of regulatory inaction.
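For a sense of the arithmetic, consider a minimal sketch of the rule’s effect on a single hypothetical paycheck; the $41,600 salary, the 50-hour week, and the simple salary-to-hourly conversion below are illustrative assumptions, not the Labor Department’s official methodology.

```python
# Illustrative sketch of weekly pay before and after the overtime threshold
# change.  The worker's salary and hours, and the simple salary-to-hourly
# conversion, are assumptions for illustration only.

OLD_THRESHOLD = 23_660   # annual salary cap for overtime eligibility before the rule
NEW_THRESHOLD = 47_476   # new annual threshold taking effect in December

def weekly_pay(annual_salary: float, hours: float, threshold: float) -> float:
    """Weekly pay for a salaried worker, adding time-and-a-half past 40 hours
    when the salary falls at or below the overtime threshold."""
    base = annual_salary / 52
    hourly = base / 40                     # assumed straight-time hourly rate
    if annual_salary <= threshold and hours > 40:
        return base + 1.5 * hourly * (hours - 40)
    return base

salary, hours = 41_600, 50                 # hypothetical worker: $800/week, 50-hour week
print(weekly_pay(salary, hours, OLD_THRESHOLD))  # 800.0  (no overtime owed under the old rule)
print(weekly_pay(salary, hours, NEW_THRESHOLD))  # 1100.0 (ten overtime hours at time-and-a-half)
```

Under the old threshold, this hypothetical worker’s long weeks went uncompensated; under the new one, the same ten extra hours add $300 to the paycheck.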

Aside from extending overtime pay to more workers, the rule may also mean outright higher salaries for those earning close to the $47,476 cutoff.  Employers may give these workers raises to lift them above the threshold and avoid the administrative and fiscal costs of tracking hours and paying overtime.  In short, it’s a regulatory change that will boost salaries for some workers while increasing the benefit of overtime work for many others.

It’s also a rule change that could spark economic growth.  One advocate called the change “a minimum wage increase for the middle class.”  And like the minimum wage, raising the overtime threshold could boost consumer demand by giving more workers more disposable income to spend.  As the Center for Equitable Growth explains:

“[I]n an economy that is not operating at full capacity, this policy is likely to put more money into workers’ pockets. A bigger paycheck boosts their ability to buy goods and services—a key economic engine for domestic growth. That is because workers that will benefit from these policies are more likely to spend the extra money they earn.”

This important rule change bolsters Obama’s legacy of turning back the tide of rising inequality.  It also further shifts the country away from the failed trickle-down economics that conservatives have pushed for generations.  In Obama’s view, the economy doesn’t gain from raising the incomes of the wealthy, but rather by making steady progress for the middle class.  When working Americans have more money to spend, everyone prospers.

This was true throughout the three decades of strong, sustained and equitable economic expansion during the mid-twentieth century.  From World War II through the early 1970s, the economy grew at an unprecedented clip and created a broad middle class where families enjoyed rapid and regular gains in their standards of living.

Since that time, broad economic growth in the United States has largely stalled.  Real median household income has stayed virtually flat for decades, as families are working harder for the same pay.  Inequality is growing and costs are mounting, leaving working families hard pressed to keep up.

But the broad prosperity that the economy produced during much of the twentieth century wasn’t just some historical accident.  It was the product of deliberate policy choices grounded in smart economic philosophy.  The Center for Equitable Growth notes that in 1933, President Franklin Roosevelt said:

“I ask that managements give first consideration to the improvement of operating figures by greatly increased sales to be expected from the rising purchasing power of the public. That is good economics and good business. The aim of this whole effort is to restore our rich domestic market by raising its vast consuming capacity.”

FDR understood that the economy thrives on the purchasing power of working Americans.  Eighty-three years later, President Obama is trying to put that proven method for success back into action.

Parasites Lost

At the American Prospect, venture capitalist and inequality foe Nick Hanauer argues that government inaction on the minimum wage has helped create a low-paying “parasite economy” that is holding back national growth. Hanauer’s parasite economy includes large multinational corporations like Wal-Mart and McDonald’s that pay their workers less than a living wage, counting on government safety net benefits to keep their employees afloat.

These firms are “parasites” in that they extract wealth from the consumer economy without giving their workers the means to truly participate in it. “[L]ow-wage workers at parasite companies,” Hanauer writes, “cannot afford to robustly consume our products, or most anybody else’s, in return. The parasite economy is simply bad for business.”

By paying employees meager wages, parasite companies depress overall growth by denying their workers the kind of disposable income they need to stimulate the economy. But Hanauer doesn’t condemn the parasite companies; indeed, he owns up to being a parasite company owner himself. Hanauer and business owners like him fear that taking on added costs like higher employee wages will undercut their companies’ ability to compete in the cutthroat marketplace and maintain market share.

The parasite economy—firms caught in a race to the bottom of the pay scale—is one big collective action problem. We’d all be better off if the parasite companies paid higher wages, including those companies themselves: their employees could take their higher wages to purchase more consumer goods and grow the economy. But it’s a dangerous business move for any one firm to raise wages unilaterally.

Hanauer’s solution to this collective action problem is centralized action by Congress to gradually raise the minimum wage. “[W]hen we lift wages through reasonable increases in the minimum wage,” Hanauer argues, “everyone prospers[.]” All firms take the same financial hit, so no one firm is disadvantaged. And importantly, more money in employees’ pockets produces greater economic growth, lifting the revenue of these very firms.

Hanauer’s argument hinges on this last point: that raising the wage provides a stimulus for economic growth. It’s an argument he has made before — that a higher minimum wage is good for business because “[r]aising the earnings of all American workers would provide all businesses with more customers with more to spend.”  It’s also the argument that some smaller, lower-cost cities and towns in California have been banking on in anticipation of a statewide $15 minimum wage: that the higher cost of employees would be offset by a boost in economic growth from new disposable income and, in turn, new business.

But is it true? This kind of idea has a long history as a theory, going back to Adam Smith’s Wealth of Nations. As Jeff Madrick has noted, Smith “asserted that a large market for goods and services was critical for growth,” meaning that growth depends on maintaining sufficient consumer demand. And Keynesian economists have long held that higher rates of growth can be achieved by stimulating demand.

But what about a minimum wage increase in particular? A 2011 study by economists Daniel Aaronson and Eric French at the Federal Reserve Bank of Chicago found that a rise in the minimum wage does have notable effects on consumer spending. “[A] $1 minimum wage hike,” the economists found, “increases household income by roughly $250 and spending by approximately $700 per quarter in the year following a minimum wage hike.” When workers have more money in their pockets, they become more comfortable investing in durable goods like automobiles.

True, minimum wage hikes can have other negative effects, like eliminating jobs. But economists are generally split on whether raising the pay of workers really causes companies to cut jobs and hiring. Even though there is little evidence that raising the minimum wage has a definitively negative effect on employment, the prospect of job loss still led Aaronson and French to temper their prognosis for a minimum wage-fueled growth boost. “[W]e should be somewhat suspicious of claims that the minimum wage will significantly boost the economy,” they conclude, while nonetheless finding “compelling evidence that putting money into the hands of consumers, especially low-income consumers, leads to predictable increases in spending.”

It makes intuitive economic sense that a rise in income for minimum wage workers will boost aggregate demand and, thus, increase growth. This is especially plausible given that low-wage workers have the highest propensity to spend any additional pay they take home. This fits comfortably with the liberal emphasis on middle-out economics: that growth is generated by consumer spending from a strong, broad middle class.

A livable minimum wage has other clear advantages, too.  Hanauer’s parasite firms are essentially public-private partnerships, counting on public benefits like SNAP and the EITC to top off employees’ sub-livable paychecks.  (Indeed, Hanauer notes that McDonald’s even maintained an employee hotline called “McResources” to guide workers through social assistance options.)  Research shows that raising the minimum wage produces safety net savings by shifting the cost of guaranteeing a living wage from the government to employers.  Researchers at the Economic Policy Institute found that with a $10.10 minimum wage, 1.7 million fewer Americans would rely on public assistance programs, saving the government (conservatively) nearly $8 billion each year.  A higher minimum wage would thus mandate corporate responsibility from employers while promoting fiscal responsibility from the government.

Even though Hanauer’s minimum-wage-as-stimulus theory may not be definitively established, it’s just as plausible as the ubiquitous minimum-wage-as-job-killer theory. One heralds the macroeconomic benefits of the additional income to workers; the other homes in on the macroeconomic harm of new costs to firms. In practice, the answer may depend on which force is greater.

But a theory like Hanauer’s also has value by leveling the playing field in our debate over what is a just and effective minimum wage. The disemployment critique of the minimum wage is a fuzzy and debatable economic theory too, but conservatives continue to trot it out because it’s a simple and elegant argument. So too is the stimulus argument. More income allows workers to spend more money. When workers spend more money, the economy grows. It’s a simple case to make, and it might just be the case that needs to be heard in order to break out of the parasitic low-wage trap.