Press "Enter" to skip to content

Posts published in “Political Words”

Overton window


What shall we discuss? Or, what shall we discuss and be taken seriously? A person can throw out almost any idea, but many of those ideas may be batted aside as nonsense. At least, they may be batted aside as nonsense today; tomorrow, the idea might be more acceptable, or even a majority opinion.

That’s the concern of the “Overton Window of Political Possibilities.”
Joseph Overton, an official of the Mackinac Center for Public Policy, a Michigan think tank, developed the concept in the mid-1990s. The center described it this way:

“Imagine, if you will, a yardstick standing on end. On either end are the extreme policy actions for any political issue. Between the ends lie all gradations of policy from one extreme to the other. The yardstick represents the full political spectrum for a particular issue. The essence of the Overton window is that only a portion of this policy spectrum is within the realm of the politically possible at any time.”

This doesn’t amount to a value judgment, but it does suggest what’s realistic, as a matter of public policy, at a specific moment.
Same-sex marriage would be a useful case study of how a subject once considered out of bounds – an abomination or a joke if considered at all – could move over time into the window of political realism. Marijuana legalization may be a similar example.

Ideas move in and out of the window with some regularity, over the span of time. Judgment comes into play when we decide which ideas should or shouldn’t move, and in which direction.

Why ideas move is a question for political scientists, and many have weighed in (whether or not specifically citing Overton).

And there are other uses. Conservative talk show host Glenn Beck released a novel called The Overton Window (2010), a political conspiracy potboiler about a powerful elite seeking to take over the United States by moving an unacceptable concept – “one world, ruled by the wise and the fittest and the strong, with no naive illusions of equality or the squandered promises of freedom for all” – into the Overton window.

Whatever the virtues of the novel (few, reviewers seemed to agree), it got the point of the “window” backward: It is not something that can be manipulated by a “wag the dog” strategy, but rather serves as a measure of how the public changes its mind.

Writer Maggie Astor described it this way: “The key is that shifts begin with the public. Mr. Overton argued that the role of organizations like his own was not to lobby politicians to support policies outside the window, but to convince voters that policies outside the window should be in it. If they are successful, an idea derided as unthinkable can become so inevitable that it’s hard to believe it was ever otherwise.”
 

Integralism


The pieces of the word suggest an integration of several things, and integration is indeed the point, but we’re not talking here about racial integration in the civil rights sense. This is something far different.

The Wikipedia description says that it suggests “a fully integrated social and political order, based on converging patrimonial (inherited) political, cultural, religious and national traditions of a particular state, or some other political entity. Some forms of integralism are focused on achieving political and social integration, and also national or ethnic unity, while others were more focused on achieving religious and cultural uniformity.”

To get more specific, a lot of the discussion grows out of a long-running – centuries-long – argument within the Roman Catholic Church that the state should serve the church. (Presumably the Vatican, in which church and state always have been integrated, is exempted from the discussion.) The term seems to come from the 1905 decision by the Third French Republic formally to separate itself from the Catholic Church; French Catholics who opposed that decision called themselves Catholiques intégraux (integral Catholics). The idea has been adapted since, and the argument has ebbed and flowed since the Second Vatican Council of 1962-65, which among other things seemed to loosen the church-state relationship.

Integralism in anything like a doctrinaire sense is not likely a majority view, or even more than a sliver-sized minority view, in the Catholic community.
R.R. Reno of the Institute on Religion and Public Life noted that there’s plenty of deep church doctrine against integralism: “Our supernatural destiny is other than our natural end. As St. Augustine put it, though they are intermixed in this age, the City of God is ordered to a different end than the City of Man. As a consequence, a Catholic politics never seeks to be a sacred politics, never proposes a full and complete integration of statecraft with soulcraft.”

It was that point John F. Kennedy made when he addressed Baptist ministers in Houston in the 1960 campaign, speaking to concerns about whether his Catholicism would lead to subservience to the Pope. He said, “I believe in an America where the separation of church and state is absolute – where no Catholic prelate would tell the President (should he be Catholic) how to act, and no Protestant minister would tell his parishioners for whom to vote – where no church or church school is granted any public funds or political preference – and where no man is denied public office merely because his religion differs from the President who might appoint him or the people who might elect him…. I believe in a President whose views on religion are his own private affair, neither imposed upon him by the nation or imposed by the nation upon him as a condition to holding that office.”
The speech won a great deal of praise, but some – not all – Catholics disapproved. Catholic writer John Courtney Murray (who had seen the speech before it was delivered) said that “to make religion merely a private matter was idiocy.” (Be it noted that religion being a private matter was a central point in the ideology of Thomas Jefferson.)

There are many perspectives pro and con, but one of the most concise arguments in favor may have been expressed by writer Thomas Pink:

“Integralism – the need for a confessional Catholic state – is part of Catholic teaching about grace. Grace is required to repair and perfect all of human nature. Human nature involves the political not as a mere expression and instrument of the private, but as a distinctive sphere of existence and understanding in its own right. Unless we commit ourselves to Christ as a political community, a vital part of human reason will remain untransformed by grace. The result will be spiritual conflict and degradation. … [The Catholic tradition] takes the state, and coercive authority in general, to have a teaching function. One central mode of teaching is through legal coercion. The supposition that it is the proper business of the state to teach, and teach coercively, extends back to Aristotle’s Nicomachean Ethics.”

Another analysis by Timothy Troutner (in describing integralism advocates, not his own view) takes this a step further, to the point where the argument becomes fully joined: “The maintenance of a neutral public square forbids the dominance of any thick conception of the good not shared by all. This quickly leads to the privatization of religion and the secularization of society, manifesting liberalism’s hostility to any religion which sees itself as more than a private concern.”

And Troutner adds, “Integralists argue that if Catholics do not dictate terms, others will dictate to them.” That fully gives the game away, making clear the real point: Integralists simply want to be the ones who dictate the terms. To everyone. Including everyone who believes something other than what they do. Under such a system, freedom of religion belongs only to one group, not to others; it’s a redefinition of what “freedom of religion” means (freedom only for certain believers).

The term integralism primarily is Catholic in usage, and then mainly among a core of scholars and pundits, but the concept is open for adoption elsewhere. Many fundamentalist Protestants could adopt it structurally – and a good many already have, even if they haven’t used the specific word. In that quarter, many extend the concept further: The expressed fear is not only that they may be dictated to, but that if their views are not the diktat for all of society, all of (unsaved) humanity may be condemned to hell … or something like that.
Shorter version of integralism: My way or the highway; believe as I do, follow all the rules I deem righteous, or get out of town, Jack.
 

Populist


When libertarian sentiments take a populist form, it looks like this: a mix of anger, fear, anti-intellectualism, and fierce government hostility. Welcome to the Tea Party movement.
► David Niose, Fighting Back the Right

Populism can include a bunch of things, some good, some not.
This could be a subject purely for political scientists to hash over … except that populism has been moving into ever-broader discussion. Soon, someone is likely to appropriate the term and run under it, so we should have a sense of what it means. Or, historically has meant.
And as with so many others, it gets complicated. It has meant a number of different things to different people.

George Will (in an interview) offered, “Populism is the belief in the direct translation of public impulses, public passions. Passion was the great problem for the American Founders.” And it certainly was, though not all public desires necessarily fall under the populist umbrella, and the founders’ concerns largely were met in their efforts to mediate and slow decision-making – to inject more intellect and less emotion into the process. It was a process concern, not one of subject matter. The populism of the last century is mostly definable by what it addresses.

The Oxford Dictionary offers these definitions: “Support for the concerns of ordinary people. The quality of appealing to or being aimed at ordinary people.”

Just what you should want in a democracy/republic, right?

Except that there’s a subtlety in the definitions from Oxford: Think carefully and you’ll sense that what populism really is about is less a platform or a movement than a style.

Writer George Packer approached that current sense of the word with, “Populism is a stance and a rhetoric more than an ideology or a set of positions. It speaks of a battle of good against evil, demanding simple answers to difficult problems. ([Donald] Trump: ‘Trade? We’re gonna fix it. Health care? We’re gonna fix it.’) It’s suspicious of the normal bargaining and compromise that constitute democratic governance. (On the stump, [Bernie] Sanders seldom touts his bipartisan successes as chairman of the Senate Veterans’ Affairs Committee.) Populism can have a conspiratorial and apocalyptic bent …”

It is not polished; it is raw, emotional more than thoughtful.
It can start with specifics and even specific ideas and proposals. But the internal machinery usually processes them into blood-churning vagueness.

A few specifics, then:

There was a Populist Party in the United States in the late 19th and early 20th century, and for some years it pulled a significant number of votes even in presidential contests; it proclaimed that “the fruits of the toil of millions are boldly stolen to build up colossal fortunes for a few.” In his Political Dictionary, William Safire referred to it as “a liberalism deeply rooted in U.S. history.” Many of the political progressives of that era and just beyond shaded over into populism, and under other names (such as the Nonpartisan League) it remained a felt presence through the twenties. Franklin Roosevelt’s New Deal Democrats absorbed many, though not all, of them.

Much of this was based around economic issues, but not all. Packer wrote in his populism article about Georgia politician Thomas Watson, who railed that “the scum of creation has been dumped on us. Some of our principal cities are more foreign than American.” That anti-outsider thread has persisted too.

The populist style, while persistent, is not always equally popular; it rises and falls. In the middle of the 20th century it gained relatively little traction. In 2016, and in the elections leading up to it, it proved popular.

Political analyst John Judis has argued that “populist campaigns and parties often function as warning signs of a political crisis. In both Europe and the U.S., populist movements have been most successful at times when people see the prevailing political norms – which are preserved and defended by the existing establishment – as being at odds with their own hopes, fears, and concerns. The populists express these neglected concerns and frame them in a politics that pits the people against an intransigent elite. By doing so, they become catalysts for political change. Populist campaigns and parties, by nature, point to problems through demands that are unlikely to be realized in the present political circumstances.”

And therein lie two problems, which have nothing to do with the goals – real or purported – that populist movements espouse.

They make demands, in the most emotional and emphatic terms, that cannot be fulfilled. (This seems to be a core component of modern populism that distinguishes it from the populism of a century back.)
And they attack the very people and institutions which most plausibly might make their demands happen, and often would be inclined to … if they weren’t under attack from populists.

Populism doesn’t have to be self-destructive, and it can have good intentions, but those who fall under its spell seem to have a weakness for folding rapidly and dangerously into darkness, despair and fury. It has a hard time fitting into a system of practical self-government of, you guessed it, the people.
 

Personal responsibility


Personal responsibility or something like it has been a mainstay in American politics, going back to a speech by Nathaniel Gorham of Massachusetts, delivered at the Constitutional Convention on July 18, 1787 (although his reference was to the behavior of “public bodies” rather than individual action or obligation).

More recently it has been associated with the limitation of governmental social programs, as in the Personal Responsibility and Work Opportunity Reconciliation Act of 1996.

It has been used most often by conservatives but with some frequency across the political spectrum. Essayist Alan Wolfe wrote in 1998, “To liberals and leftists, the message would be equally blunt. In particular, your insistent, almost pathological, fear of understanding the importance of personal responsibility astonishes us.”

The concept is often enough taken up on the left too, however. In his inaugural address, for example, Barack Obama spoke of “a new era of responsibility,” especially directing his point to younger people.
The concept as such is broadly popular. In 2014, for example, a Pew Research study showed that “being responsible” was considered the single most popular value in the rearing of children, over many options, across the whole of the ideological spectrum.

But what does “personal responsibility” actually mean?

Wikiquote’s definition is that “Personal responsibility or Individual Responsibility is the idea that human beings choose, instigate, or otherwise cause their own actions. A corollary idea is that because we cause our actions, we can be held morally accountable or legally liable.”

Ron Haskins expands on that, bringing the definition into the realm many conservatives ordinarily would use, with this: “Personal responsibility is the willingness to both accept the importance of standards that society establishes for individual behavior and to make strenuous personal efforts to live by those standards. But personal responsibility also means that when individuals fail to meet expected standards, they do not look around for some factor outside themselves to blame. The demise of personal responsibility occurs when individuals blame their family, their peers, their economic circumstances, or their society for their own failure to meet standards.”

This definition matches a great deal of public policy, especially arguments that people who receive public help may not be doing enough to help themselves, and may become too reliant on assistance from others.

In the endless variety of human beings, some – there’s no way to get a clear number – fall short of exactly that test, failing to take personal responsibility for what they can control.

We all have different levels of ability to control what happens around us, of course. For economic, health, educational or other reasons, different people are able to handle different levels of demand and stress. To require personal action to take care of every problem may be entirely justifiable in one case, but unrealistic – and heartless – in another.

Drawing the line between the two is never easy. But that’s part of the implicit responsibility in treating people with decency and respect.
There is another aspect to calls for “personal responsibility” and declining to seek help from others: It can be turned into an argument against organizing to accomplish a task that might be beyond the capability of a single person acting alone.

Imagine a city whose leaders are proposing an expensive, tax-heavy project opposed by many residents. Calling on those residents to act on their “personal responsibility” – and offer individualized opposition – instead of organizing, would be a simple prescription for the city leaders to prevail easily over a deeply and hopelessly divided opposition. An organized, coordinated opposition (which appears to cut against the grain of “personal responsibility”) might instead prevail. Obviously the same concept applies in many places, including union organizing (bearing in mind that many political critics of unions have eased “personal responsibility” into their rhetoric).

When does this translate to: Don’t you dare organize against your (wealthy) betters?

The journey may not be lengthy.
But what does “being responsible” mean?

Depends on who’s responsible for what.
 

Neo-con


Neo-cons (the word is an abbreviation of neoconservative, and does not relate to a conference called NeoCon) have been around a while, long enough that the Rolling Stones cut a late-career sarcastic song about them around the start of the millennium. But this currency still circulates.

Columnist Max Boot cited – early in 2019 – headlines including “How the neocons captured Donald Trump,” “The continuing lunacy of the neocons,” and “Return of the neocons!” He went on to suggest, fairly enough, that “‘Neoconservatism’ once had a real meaning – back in the 1970s. But the label has now become meaningless. With many of those who are described as neocons, including me, fleeing the Trumpified right, the term’s sell-by date has passed. There are more ex-cons than neocons by this point.”

The Conservapedia took a crack at it, though: “in American politics [it] is someone presented as a ‘conservative’ but who actually favors big government, globalism, interventionism, and a hostility to religion in politics and government. The word means ‘newly conservative,’ and thus formerly liberal. A neocon is a RINO Backer, and like RINOs does not accept most of the important principles in the Republican Party platform. Neocons do not participate in the March for Life, nor stand up for traditional marriage, advocate other conservative social values, or emphasize putting America first. Neocons support attacking and even overthrowing foreign governments, despite how that often results in more persecution of Christians. Some neocons (like Dick Cheney) have profited immensely from the military-industrial complex. Many neocons are globalists and support the War on Sovereignty.”

Any number of actual neo-cons may quibble with much of that; as people who mostly came to their views more or less one by one rather than in a mass movement, many individual differences may be apparent.

The term started picking up significant usage in the 1970s (neo-liberal would follow soon after) to describe people whose background was in Franklin Roosevelt-style liberalism but who felt that many of their allies had become “soft” in opposing communism, meaning that a hawkish perspective on defense was a key part of their world view.

What did pre-existing conservatives think of them? The Conservapedia offers, “Paleoconservatives, who dislike Neoconservatism intensely, have argued that it emerged from Trotskyite theories, especially the notion of permanent revolution. There are four fundamental flaws in the paleoconservatives’ attack: most of the neoconservatives were never Trotskyites; none of them ever subscribed to the right-wing Socialism of Max Shachtman; the assertion that neoconservatives subscribe to ‘inverted Trotskyism’ is misleading; and neoconservatives advocate democratic globalism, not permanent revolution.”

Neo-cons accounted for many – by no means all – of the early and leading advocates of the 2003 Iraq war, and as that wore on, and as the Obama Administration sidelined them, the term fell into less frequent usage.
In the Trump Administration, neo-cons have seen some resurgence. Its current (at this writing) national security advisor, John Bolton, has been described as “one of the key figures of neoconservatism.”

Writer Caitlin Johnstone argued that “You can trace a straight line from the endless US military expansionism we’ve been seeing since 9/11 back to the rise of neoconservatism, so paying attention to this dynamic is important for diagnosing and curing the disease.”
 

Entitlements


Why can’t you tell me why you feel entitled to take someone else’s money, their property, their accumulated assets? Why does another working American taxpayer have to pay for your existence?
► meme in The Federalist Papers

As a concept, entitlements cut in two directions, one of them much less acknowledged than the other in recent American politics; and those sit on top of other definitions as well.

In the process, the word has spread and mutated beyond all earlier recognition.

In psychology, “Entitlement is an enduring personality trait, characterized by the belief that one deserves preferences and resources that others do not. … When people feel entitled, they want to be different from others. But just as frequently they come across as indifferent to others.”

Entitlement used in a different way, however, is a thing conferred, whether asked for or not: “the right to a particular privilege or benefit, granted by law or custom. You have a legal entitlement to speak to a lawyer if you’re ever arrested …”

So the question becomes, under what conditions should we be entitled to something – such as a benefit? And when does “entitlement” become something that people simply come to expect of others, whether or not there’s any merit to the case?

That’s a demarcation line in American politics.

“Entitlement” as a reference to a government benefit goes back to the GI Bill in the 40s, and was used in a strict sense: If you served in the military, you were entitled to receive this benefit. Entitlements carry the element of restriction (you have to qualify) and open-endedness (if you do qualify, we won’t deny it).

That makes it fundamentally different from a charity, which is a purely discretionary gift: It can be dispensed or not, under any – and possibly changing or arbitrary – conditions. You might have to beg for charity; in the case of an entitlement, you either qualify or you don’t.

Language writer Merrill Perlman argued the word has “been weaponized by partisan politics, of the legislative and identity kind. Government offers all sorts of ‘entitlements,’ like Social Security, unemployment compensation, supplementary food purchase programs, etc. ... But now, people opposing ‘entitlements’ often equate them with government giveaways to freeloaders or undeserving people. In racial politics, too, ‘entitlement’ has taken on a negative cast. In 2013, Justice Antonin Scalia called the extension of parts of the Voting Rights Act ‘perpetuation of racial entitlement.’”

As a writer in The New Yorker said at the time, “Scalia is saying, in effect, that the Voting Rights Act gave a gift – a ‘racial entitlement’ – to black people, and the result has been that ‘the normal political processes’ don’t work. More often, it is white people who are said to have the ‘entitlement’ if they act in ways seen as oppressing people of color.”
Entitlement is odd usage in this context, though, since fair voting is something that nearly all citizens – not just a narrowly-defined category – are supposed to be able to rely on.
 

Family values


For years the debate on family values has focused more on ideology than on what actually keeps families together.
► Hara Estroff Marano, in Psychology Today

“Family values” has become an aging term, fallen into disrepair and in some quarters, disrepute, but it continues to pop up in both ironic and non-ironic contexts.

Political people have long promoted the value of family structures – as for that matter an overwhelming majority of people, not just in the United States but globally, do. But not until the 1970s did the political conversation start to factor in the idea that a family model of two-parent single-paycheck heterosexuals with several children was not the only structure at large in American society (and, from the context of the rhetoric, this was a distinction from other countries; you wonder how all those other people reproduced). Other forms of families have developed in the couple of generations since, and they have become a central focus of American politics – in various ways.

The political phrase “family values” got its first substantial appearance in modern times at the 1976 Republican convention, when the party’s platform said: “Economic uncertainty, unemployment, housing difficulties, women’s and men’s concerns with their changing and often conflicting roles, high divorce rates, threatened neighborhoods and schools, and public scandal all create a hostile atmosphere that erodes family structures and family values. Thus it is imperative that our government’s programs, actions, officials and social welfare institutions never be allowed to jeopardize the family. We fear the government may be powerful enough to destroy our families; we know that it is not powerful enough to replace them. Because of our concern for family values, we affirm our beliefs, stated elsewhere in this Platform, in many elements that will make our country a more hospitable environment for family life …”

The phrase, or the ideas underlying it, has not left American politics since, and picked up steam in arguments over the Equal Rights Amendment, women in the workplace, abortion, sex education and more.

Writer Neil J. Young outlined part of the battle this way: “As the ‘family values’ movement grew, it increasingly encouraged paranoid fantasies of how secular liberals in the federal government and global organizations like the United Nations worked to separate children from their parents’ ideas and values.”

For example, in 1978 the Pro Family Forum issued a paper portraying the 1979 International Year of the Child as a scurrilous undercover plot to ‘liberate’ children from their families: “Do you want the minds (and consequently, the souls) of YOUR CHILDREN turned over to international humanist/socialist planners?”

The 1996 book by then-First Lady Hillary Clinton, It Takes a Village, was similarly described as part of an effort to disrupt parental authority and family structures – an attempt to destroy the family and put children under control of the state. (Anyone bothering to read the book would come away with a very different impression.)

Most of such arguments stay a little submerged in the larger national audience, but for many years “family values” was heavily used as code to evoke them. Language consultant Frank Luntz reported that “family values” has “tested” better than “traditional values,” or “American values” or “community values” – being preferred by more than twice as many people as any of those.

The phrase easily can be put to contrary uses, however. The splitting of immigrant families at the Mexican border in 2018, for example, drew loud complaints that families were being dismissed and devalued, and the phrase “family values” was being invoked in new ways.

Columnist Frank Bruni wrote in late 2017, “Try this on for size: Democrats are the party of family values because they promote the creation of more families. They did precisely that with their advocacy of marriage equality, which didn’t tug the country away from convention but toward it, by encouraging gay and lesbian Americans to live in the sorts of arrangements that conservatives in fact extol. Democrats also want to give families the flexibility and security that help keep them afloat and maybe intact. That’s what making the work force more hospitable to women and increasing the number of Americans with health insurance do. And Republicans lag behind Democrats on both fronts.”
 

Welfare/welfare state


It is best to define the welfare state not through its noble goals but through its instruments.
► Cato Institute, 2018

“Welfare” is enshrined as a core purpose of the U.S. Constitution: to “promote the general Welfare.” Its top dictionary definition (per Merriam-Webster) very much reflects that: “the state of doing well especially in respect to good fortune, happiness, well-being, or prosperity.”
In recent political discussion, though, “welfare” is one of the great, classic terms of art.

The word gets two basic kinds of artful usage, referring to “welfare” as benefits for the needy – “a government program which provides financial aid to individuals or groups who cannot support themselves. … The goals of welfare vary, as it looks to promote the pursuance of work, education or, in some instances, a better standard of living” – and as the “welfare state,” which is in part a metaphorical extension of that.

There has never been a “welfare program” in the sense of a single specific program going by that name (which may contribute to the feeling that it’s hard to get a handle on it). But support for low-income people is an old concept, going back at least to the first Roman Emperor, Augustus, who offered a “grain dole” for the poor. Societies through the Middle Ages and beyond provided variants, more generous or less. Commonly this was regarded as charity, but the term “welfare” came into use early in the 20th century as an alternative to the denigrating associations of receiving “charity.”

Welfare in the sense of “well-being” had been around for centuries; Shakespeare used it long before it became part of the American Constitution.

In Great Britain “welfare work” appeared in 1903, “welfare policy” in 1905, “welfare centre” in 1917, and “welfare state” in 1941. An essay on this noted, “One result of this new usage was that the word moved from being a term for a condition to one for a process or activity.”

Negative connotations soon returned, however, and much political discussion about “people on welfare” implicitly involved lazy and shiftless people (often with racial overtones as well). It was sufficiently part of the social conversation in the 60s and 70s to get a quick reference in a comedy music record (“When You’re Hot You’re Hot”), when singer Jerry Reed, singing the part of a street gambler tossed into jail, complained, “Who gonna collect my welfare?”

There may be no single “welfare program,” but there are – under its umbrella – a number of programs which do distribute benefits, sometimes direct income, often to lower-income people. The Supplemental Nutrition Assistance Program (long known as “food stamps”) usually would be included, as would Temporary Assistance for Needy Families, and several others, but the exact list easily could be debated. Some people might include Social Security, Medicaid and Medicare on the welfare list, though probably most Americans wouldn’t. Polling has shown those as both highly popular and justifiable, at least on grounds that the recipients have paid into them.

“Welfare” has gotten less attention in the new millennium, probably in large part because of the Personal Responsibility and Work Opportunity Reconciliation Act of 1996, signed by President Bill Clinton, which he said would “end welfare as we know it.” It did make some major changes, though the results have been mixed.

One historical description said “The new law built on decades of anti-welfare sentiment, which Ronald Reagan popularized in 1976 with the racially-loaded myth of the ‘welfare queen.’ In the two decades that followed, progressives and conservatives alike put forward reform proposals aimed at boosting work and reducing welfare receipt.

Progressive proposals included expanded childcare assistance, paid leave, and tax credits for working families. Conservatives, on the other hand, tended to favor work requirements – without any of the corresponding investments to address barriers to employment. In 1996, after vetoing two Republican proposals that drastically cut the program’s funding, President Bill Clinton signed the Personal Responsibility and Work Opportunity Act into law. The new legislation converted AFDC into a flat-funded block grant – TANF – and sent it to the states to administer. The law’s stated purpose was to move families from ‘welfare to work’.”

How well that’s worked out is a subject of debate, though as a matter of politics the subject has moved from front to back burner. (That’s not to say it might not shift again.)

Writer Elisabeth Park said that “When conservatives talk about ‘welfare,’ they make it sound like this pit that lazy, undeserving people wallow in forever, rather than a source of help that’s there when we need it – and that we all pay for through our taxes. … Instead, we should say Social Safety Net: This resonates better, because it conjures an image of something that catches us when we fall, but that we can easily bounce out of.”

But the word “welfare” already does double duty; its second use, in “welfare state,” is just as active in the new century as it was in the last.

Back in the 70s William Safire wrote of the “welfare state”: a “government that provides economic protection for all its citizens; this is done, say its critics, at the price of individual liberty and removes incentives needed for economic growth.” Welfare state as a term has been around for a while, about a century, referring to various nations in Europe (Sweden may have been the first to get the description).

A welfare state (says the Cambridge Dictionary) is “a system that allows the government of a country to provide social services such as healthcare, unemployment benefit, etc. to people who need them, paid for by taxes.” The exact list of services varies from place to place – by location and by what is meant by “welfare state.”

But what is meant, exactly, by “welfare state”? What benefits or services does it provide, what restrictions does it impose? All that seems to be in a beholder’s eye.

The Cato Institute, which to put it mildly is no supporter of a welfare state, offered in one essay: “It is useful to regard the welfare state as a special kind of welfare system, which we define as arrangements to deal with various risks facing individuals – such as acute poverty, sickness, and accidents. A brief look at history reveals the existence of various welfare arrangements – for example, family (kin) based, religion based, civil based, corporate based, and market based (insurance through jobs, private savings, and commercial insurance). Countries have always had some type of welfare system combining all or some of the above arrangements.” It points out that a “welfare system” – in which some needs are met through non-governmental means – is not the same thing as a “welfare state.” The essay doesn’t clarify, though, how to make the jump from public provision of the services and benefits, to private provision. (Charity never has been nearly sufficient to meet the needs addressed by public systems.)

Most studies of the “welfare state” face an obstacle: No two are exactly alike; the provisions enacted by each are different and change over time. Most governments in recent centuries – and many from much older times – incorporate at least a few elements of the “welfare state,” while absolute (even if extensive) “cradle to grave” provisioning is still highly unusual.

“Welfare state” can be used only as a general concept (often as a term of opprobrium) rather than as something specific; the closer you try to get to specificity, the more the definition slips away. It has that in common with many political words.

One of the growing concepts in health care (astonishingly, not a subject of serious political controversy) is coordinated care, which has been defined as “the deliberate organization of patient care activities between two or more participants involved in a patient’s care to facilitate the appropriate delivery of health care services.” Less bureaucratically, it means medical and other professionals work both together and directly with the patient so that a complex problem – such as a chronic illness which may have several causes and need several plans of attack – can be addressed comprehensively; piecemeal approaches wind up in many cases as poor band-aids on symptoms rather than real solutions, resulting in regular relapses.

Welfare (aside from the full-state element) could be reconsidered with something like that in mind.

A book called Radical Help by Hilary Cottam addressed some of this in Great Britain. One report on the book cited the case of Ella, “a British woman who grew up in a broken home and was abused by her stepdad. Her eldest son got thrown out of school and ended up sitting around the house drinking. By the time her daughter was 16, she was pregnant and had an eating disorder. Ella, though in her mid-30s, had never had a real job. Life was a series of endless crises – temper tantrums, broken washing machines, her son banging his head against the walls. Every time the family came into contact with the authorities, another caseworker was brought in to provide a sliver of help. An astonishing 73 professionals spread across 20 different agencies and departments got involved with this family. Nobody had ever sat down with them to devise a comprehensive way forward.”

Ella’s case is more the norm than the exception, in the United States too. Her situation inside that structure never really improves, even after (in Cottam’s estimate) the British system spends a quarter-million pounds yearly, mostly on administrative cost and time, on her and her family.

When Americans turn cynical about welfare, situations like that – and its American counterparts – are part of what they’re bearing in mind.
Cottam described a new approach being tried in Britain comparable to coordinated care. Ella and others like her instead meet with something called a “life team,” an interdisciplinary group of professionals; the focus now is to figure out where Ella would like to take her life – in a productive way – and then find answers to get there. Instead of Ella plugging into a one-size-fits-all system, the “system” reshapes to get her on her feet. And, as in the case of coordinated care, the results often have been surprisingly positive.

If “welfare” is redefined as flexibly helping people lift themselves up, rather than as a system for dispensing benefits, “welfare” could take on a more positive meaning.
 

Income share agreement


Conservatives should have sympathy for millennial borrowers, who did everything their parents and culture told them to do to be successful, only to become the most debt-laden generation in history. Countering a culture of credentialism mania with apprenticeships and trade alternatives is a positive step, but the first rule of finding yourself in a hole should be to stop digging. There’s no reason for the average American to subsidize the elite sorting mechanism universities have become.
► Inez Feltscher Stepman

No, it’s not another term for “marriage,” though that does offer an indication of just how involved this can get.

The use of “share” and “agreement” gives the phrase an uplifting, almost cheery, sound, but the underlying consideration here is debt – the mountains of nearly unpayable debt that many (though not all) college students face. After mortgages (which most of the time are a manageable and ordinary part of middle-class living), the largest mass of debt in the United States is higher education debt, more than $1.5 trillion. In many cases the debt is so large that final payoff seems unseeably far into the future.

This is a new development. During my college days in the 1970s, I took out a couple of student loans, but they were small, and I paid them off without difficulty in four or five years. The loans were small because the costs were too. In those days, finances were not a reason a person could not go to college (at least, some decent college) if they chose to.

Conditions have changed. The situation is not good for anyone involved, but especially for those buried under all this debt. One theoretical advantage in a search for solutions is that, increasingly, student debt is not scattered among endless numbers of private lenders but sits under the umbrella of the federal government; the advantage is not that the federal government is any better as a lender but that it is just one unit to deal with, and susceptible to congressional action.

One approach for dealing with it, a method that seems to be gaining in popularity, is the “income share agreement,” which is a variation on how a loan will be repaid. Instead of imposing a set amount due every month (depending presumably in part on the size of the loan), the ISA is more flexible: The payment would vary in size depending on the income the former student receives once employed. A law student who goes to work for a top white-shoe firm might kick in more, while one who works as a public defender might pay less. An in-demand physician would pay more dollars per month than, say, an elementary school teacher.
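The arithmetic difference is simple enough to sketch. Here is a minimal illustration in Python; the 5 percent income share, $30,000 income floor, loan size, interest rate, and repayment term are all assumptions chosen for the example, not the terms of any actual ISA or federal loan:

```python
# Hypothetical comparison: fixed amortized loan payment vs. ISA payment.
# Every number below is an illustrative assumption, not a real program's terms.

def fixed_loan_payment(principal: float, annual_rate: float, years: int) -> float:
    """Standard amortized monthly payment: the same amount every month."""
    r = annual_rate / 12            # monthly interest rate
    n = years * 12                  # total number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

def isa_monthly_payment(annual_income: float,
                        income_share: float = 0.05,
                        income_floor: float = 30_000) -> float:
    """ISA-style payment: a fixed share of income, nothing owed below a floor."""
    if annual_income < income_floor:
        return 0.0
    return annual_income * income_share / 12

# A $100,000 loan at 6% over 20 years costs the same in every career:
print(f"fixed loan: ${fixed_loan_payment(100_000, 0.06, 20):,.2f}/month")

# The ISA payment scales with what the former student actually earns:
for career, income in [("white-shoe associate", 190_000),
                       ("public defender", 60_000),
                       ("unemployed", 0)]:
    print(f"{career}: ${isa_monthly_payment(income):,.2f}/month")
```

The sketch shows the trade at a glance: the fixed loan payment (roughly $715 a month under these assumptions) is owed regardless of earnings, while the ISA payment rises, falls, or vanishes with income.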

The idea, which is about 40 years old, has some appeal as a way of matching ability to pay with liability. But the story could get more complicated. The debt in many cases is so enormous that it might not plausibly be repaid in a working lifetime – and what then? (The law is very hard on discharging student loans, albeit not impossible under some conditions.)

That’s only one of the questions.

There’s a financial-structural question, which is beginning to arise as private lenders gradually move back into the business. As writer Malcolm Harris put it, “If you can convince investors you’re going to be rich for the rest of your life, why spend your college years poor?

I.S.A.s bridge the gap. It’s hard to think up a better advertisement for free-market capitalism. But I.S.A.s are premised on the idea of discriminating among individuals. Once the high-achieving poor and working-class students have been nabbed by I.S.A.s, the default rate for federal loans starts to rise, which means the interest rates for these loans have to go up to compensate. A two-tiered borrowing system emerges, and the public half degrades.”

This leads to developments that could even “reshape childhood,” encouraging K-12 students to redraw their learning and activities to suit not only college admissions offices but also lenders – to persuade them that they’d be a good lending risk.

Where this might lead – not least in discouraging students from even thinking about entering much-needed but less profitable careers – could be a dark path.

University of Chicago economist Gary Becker said in one study that “Economists have long emphasized that it is difficult to borrow funds to invest in human capital because such capital cannot be offered as collateral and courts have frowned on contracts which even indirectly suggest involuntary servitude.” But under enough financial pressure – we’re talking about really big money here, past the trillion-dollar mark – how long will courts continue to look at it that way?

Which takes us back to “share” and “agreement,” and the question of how such a fine-sounding concept can turn into something so dark.