Press "Enter" to skip to content

Posts published in “Political Words”

Neo-con

Neo-cons (the word is an abbreviation of neoconservatism, and does not relate to a conference called NeoCon) have been around a while, long enough that the Rolling Stones cut a late-career sarcastic song about them around the start of the millennium. But this currency still circulates.

Columnist Max Boot cited – early in 2019 – headlines including “How the neocons captured Donald Trump,” “The continuing lunacy of the neocons,” and “Return of the neocons!” He goes on to suggest, fairly enough, that “‘Neoconservatism’ once had a real meaning – back in the 1970s. But the label has now become meaningless. With many of those who are described as neocons, including me, fleeing the Trumpified right, the term’s sell-by date has passed. There are more ex-cons than neocons by this point.”

The Conservapedia took a crack at it, though: “in American politics [it] is someone presented as a ‘conservative’ but who actually favors big government, globalism, interventionism, and a hostility to religion in politics and government. The word means ‘newly conservative,’ and thus formerly liberal. A neocon is a RINO Backer, and like RINOs does not accept most of the important principles in the Republican Party platform. Neocons do not participate in the March for Life, nor stand up for traditional marriage, advocate other conservative social values, or emphasize putting America first. Neocons support attacking and even overthrowing foreign governments, despite how that often results in more persecution of Christians. Some neocons (like Dick Cheney) have profited immensely from the military-industrial complex. Many neocons are globalists and support the War on Sovereignty.”

Any number of actual neo-cons may quibble with much of that; as people who mostly came to their views more or less one by one rather than through a mass movement, they differ from one another on many points.

The term started picking up significant usage in the 1970s (neo-liberal would follow soon after) to describe people whose background was in Franklin Roosevelt-style liberalism but who felt that many of their allies had become “soft” in opposing communism, meaning that a hawkish perspective on defense was a key part of their world view.

What did pre-existing conservatives think of them? The Conservapedia offers, “Paleoconservatives, who dislike Neoconservatism intensely, have argued that it emerged from Trotskyite theories, especially the notion of permanent revolution. There are four fundamental flaws in the paleoconservatives’ attack: most of the neoconservatives were never Trotskyites; none of them ever subscribed to the right-wing Socialism of Max Shachtman; the assertion that neoconservatives subscribe to “inverted Trotskyism” is misleading; and neoconservatives advocate democratic globalism, not permanent revolution.”

Neo-cons accounted for many – by no means all – of the early and leading advocates of the 2003 Iraq war, and as that war wore on, and as the Obama Administration sidelined them, the term fell into less frequent usage.

In the Trump Administration, neo-cons have seen some resurgence. Its current (at this writing) national security advisor, John Bolton, has been described as “one of the key figures of neoconservatism.”

Writer Caitlin Johnstone argued that “You can trace a straight line from the endless US military expansionism we’ve been seeing since 9/11 back to the rise of neoconservatism, so paying attention to this dynamic is important for diagnosing and curing the disease.”
 

Entitlements

Why can’t you tell me why you feel entitled to take someone else’s money, their property, their accumulated assets? Why does another working American taxpayer have to pay for your existence?
► meme in The Federalist Papers

As a concept, entitlements cut in two directions, one of them much less acknowledged than the other in recent American politics; and those sit on top of other definitions as well.

In the process, the word has spread and mutated beyond all earlier recognition.

In psychology, “Entitlement is an enduring personality trait, characterized by the belief that one deserves preferences and resources that others do not. … When people feel entitled, they want to be different from others. But just as frequently they come across as indifferent to others.”

Entitlement used in a different way, however, is a thing conferred, whether asked for or not: “the right to a particular privilege or benefit, granted by law or custom. You have a legal entitlement to speak to a lawyer if you’re ever arrested …”

So the question becomes, under what conditions should we be entitled to something – such as a benefit? And when does “entitlement” become something that people simply come to expect of others, whether or not there’s any merit to the case?

That’s a demarcation line in American politics.

“Entitlement” as a reference to a government benefit goes back to the GI Bill in the 40s, and was used in a strict sense: If you served in the military, you were entitled to receive this benefit. Entitlements carry elements of both restriction (you have to qualify) and open-endedness (if you do qualify, we won’t deny it).

That makes it fundamentally different from a charity, which is a purely discretionary gift: It can be dispensed or not, under any – and possibly changing or arbitrary – conditions. You might have to beg for charity; in the case of an entitlement, you either qualify or you don’t.

Language writer Merrill Perlman argued the word has “been weaponized by partisan politics, of the legislative and identity kind. Government offers all sorts of ‘entitlements,’ like Social Security, unemployment compensation, supplementary food purchase programs, etc. ... But now, people opposing ‘entitlements’ often equate them with government giveaways to freeloaders or undeserving people. In racial politics, too, ‘entitlement’ has taken on a negative cast. In 2013, Justice Antonin Scalia called the extension of parts of the Voting Rights Act ‘perpetuation of racial entitlement.’”

As a writer in The New Yorker said at the time, “Scalia is saying, in effect, that the Voting Rights Act gave a gift – a ‘racial entitlement’ – to black people, and the result has been that ‘the normal political processes’ don’t work. More often, it is white people who are said to have the ‘entitlement’ if they act in ways seen as oppressing people of color.”

Entitlement is odd usage in this context, though, since fair voting is something that nearly all citizens – not just a narrowly-defined category – are supposed to be able to rely on.
 

Family values

For years the debate on family values has focused more on ideology than on what actually keeps families together.
► Hara Estroff Marano, in Psychology Today

“Family values” has become an aging term, fallen into disrepair and in some quarters, disrepute, but it continues to pop up in both ironic and non-ironic contexts.

Political people have long promoted the value of family structures – as for that matter an overwhelming majority of people, not just in the United States but globally, do. But not until the 1970s did the political conversation start to factor in the idea that a family model of two-parent single-paycheck heterosexuals with several children was not the only structure at large in American society (and, from the context of the rhetoric, this was a distinction from other countries; you wonder how all those other people reproduced). Other forms of families have developed in the couple of generations since, and they have become a central focus of American politics – in various ways.

The political phrase “family values” got its first substantial appearance in modern time at the 1976 Republican convention, when the party’s platform said “Economic uncertainty, unemployment, housing difficulties, women’s and men’s concerns with their changing and often conflicting roles, high divorce rates, threatened neighborhoods and schools, and public scandal all create a hostile atmosphere that erodes family structures and family values. Thus it is imperative that our government’s programs, actions, officials and social welfare institutions never be allowed to jeopardize the family. We fear the government may be powerful enough to destroy our families; we know that it is not powerful enough to replace them. Because of our concern for family values, we affirm our beliefs, stated elsewhere in this Platform, in many elements that will make our country a more hospitable environment for family life …”

The phrase, or the ideas underlying it, has not left American politics since, and picked up steam in arguments over the Equal Rights Amendment, women in the workplace, abortion, sex education and more.

Writer Neil J. Young outlined part of the battle this way: “As the ‘family values’ movement grew, it increasingly encouraged paranoid fantasies of how secular liberals in the federal government and global organizations like the United Nations worked to separate children from their parents’ ideas and values.”

For example, in 1978 the Pro Family Forum issued a paper portraying the 1979 International Year of the Child as a scurrilous undercover plot to ‘liberate’ children from their families: “Do you want the minds (and consequently, the souls) of YOUR CHILDREN turned over to international humanist/socialist planners?”

The 1996 book by then-First Lady Hillary Clinton, It Takes a Village, was similarly described as part of an effort to disrupt parental authority and family structures – an attempt to destroy the family and put children under control of the state. (Anyone bothering to read the book would come away with a very different impression.)

Most of such arguments stay a little submerged in the larger national audience, but for many years “family values” was heavily used as code to evoke them. Language consultant Frank Luntz reported that “family values” has “tested” better than “traditional values,” or “American values” or “community values” – being preferred by more than twice as many people as any of those.

The phrase easily can be put to contrary uses, however. The splitting of immigrant families at the Mexican border in 2018, for example, drew loud complaints that families were being dismissed and devalued, and the phrase “family values” was being invoked in new ways.

Columnist Frank Bruni wrote in late 2017, “Try this on for size: Democrats are the party of family values because they promote the creation of more families. They did precisely that with their advocacy of marriage equality, which didn’t tug the country away from convention but toward it, by encouraging gay and lesbian Americans to live in the sorts of arrangements that conservatives in fact extol. Democrats also want to give families the flexibility and security that help keep them afloat and maybe intact. That’s what making the work force more hospitable to women and increasing the number of Americans with health insurance do. And Republicans lag behind Democrats on both fronts.”
 

Welfare/welfare state

It is best to define the welfare state not through its noble goals but through its instruments.
► Cato Institute, 2018

“Welfare” is enshrined as a core purpose of the U.S. Constitution: to “promote the general Welfare.” Its top dictionary definition (per Merriam-Webster) very much reflects that: “the state of doing well especially in respect to good fortune, happiness, well-being, or prosperity.”

That means, in recent political discussion, “welfare” is one of the great, classic terms of art.

The word gets two basic kinds of artful usage, referring to “welfare” as benefits for the needy – “a government program which provides financial aid to individuals or groups who cannot support themselves. … The goals of welfare vary, as it looks to promote the pursuance of work, education or, in some instances, a better standard of living” – and as the “welfare state,” which is in part a metaphorical extension of that.

There has never been a “welfare program” in the sense of a single specific program going by that name (which may contribute to the feeling that it’s hard to get a handle on it). But support for low-income people is an old concept, going back at least to the first Roman Emperor Augustus, who offered a “grain dole” for the poor. Societies through the Middle Ages and beyond provided variants, more generous or less. Commonly this was regarded as charity, but the term “welfare” came into use early in the 20th century as an alternative to denigrating associations of receiving “charity.”

Welfare in the sense of “well-being” had been around for centuries; Shakespeare used it long before it became part of the American Constitution.

In Great Britain “welfare work” appeared in 1903, “welfare policy” in 1905, “welfare centre” in 1917, and “welfare state” in 1941. An essay on this noted, “One result of this new usage was that the word moved from being a term for a condition to one for a process or activity.”

Negative connotations soon returned, however, and much political discussion about “people on welfare” implicitly involved lazy and shiftless people (often with racial overtones as well). It was sufficiently part of the social conversation in the 60s and 70s to get a quick reference in a comedy music record (“When You’re Hot You’re Hot”), when singer Jerry Reed, singing the part of a street gambler tossed into jail, complained, “Who gonna collect my welfare?”

There may be no single “welfare program,” but there are – under its umbrella – a number of programs which do distribute benefits, sometimes direct income, often to lower-income people. The Supplemental Nutrition Assistance Program (long known as “food stamps”) usually would be included, as would Temporary Assistance for Needy Families, and several others, but the exact list easily could be debated. Some people might include Social Security, Medicaid and Medicare on the welfare list, though probably most Americans wouldn’t. Polling has shown those as both highly popular and justifiable, at least on grounds that the recipients have paid into them.

“Welfare” has gotten less attention in the new millennium, probably in large part because of the Personal Responsibility and Work Opportunity Reconciliation Act of 1996, signed by President Bill Clinton, which he said would “end welfare as we know it.” It did make some major changes, though the results have been mixed.

One historical description said “The new law built on decades of anti-welfare sentiment, which Ronald Reagan popularized in 1976 with the racially-loaded myth of the ‘welfare queen.’ In the two decades that followed, progressives and conservatives alike put forward reform proposals aimed at boosting work and reducing welfare receipt.

Progressive proposals included expanded childcare assistance, paid leave, and tax credits for working families. Conservatives, on the other hand, tended to favor work requirements – without any of the corresponding investments to address barriers to employment. In 1996, after vetoing two Republican proposals that drastically cut the program’s funding, President Bill Clinton signed the Personal Responsibility and Work Opportunity Act into law. The new legislation converted AFDC into a flat-funded block grant – TANF – and sent it to the states to administer. The law’s stated purpose was to move families from ‘welfare to work’.”

How well that’s worked out is a subject of debate, though as a matter of politics the subject has moved from front to back burner. (That’s not to say it might not shift again.)

Writer Elisabeth Park said that “When conservatives talk about ‘welfare,’ they make it sound like this pit that lazy, undeserving people wallow in forever, rather than a source of help that’s there when we need it – and that we all pay for through our taxes. … Instead, we should say Social Safety Net: This resonates better, because it conjures an image of something that catches us when we fall, but that we can easily bounce out of.”

But the word “welfare” already does double duty: its second use, “welfare state,” is just as active in the new century as it was in the last.

Back in the 70s William Safire wrote of the “welfare state”: a “government that provides economic protection for all its citizens; this is done, say its critics, at the price of individual liberty and removes incentives needed for economic growth.” Welfare state as a term has been around for a while, about a century, referring to various nations in Europe (Sweden may have been the first to get the description).

A welfare state (says the Cambridge Dictionary) is “a system that allows the government of a country to provide social services such as healthcare, unemployment benefit, etc. to people who need them, paid for by taxes.” The exact list of services varies from place to place – by location and by what is meant by “welfare state.”

But what is meant, exactly, by “welfare state”? What benefits or services does it provide, what restrictions does it impose? All that seems to be in a beholder’s eye.

The Cato Institute, which to put it mildly is no supporter of a welfare state, offered in one essay: “It is useful to regard the welfare state as a special kind of welfare system, which we define as arrangements to deal with various risks facing individuals – such as acute poverty, sickness, and accidents. A brief look at history reveals the existence of various welfare arrangements – for example, family (kin) based, religion based, civil based, corporate based, and market based (insurance through jobs, private savings, and commercial insurance). Countries have always had some type of welfare system combining all or some of the above arrangements.” It points out that a “welfare system” – in which some needs are met through non-governmental means – is not the same thing as a “welfare state.” The essay doesn’t clarify, though, how to make the jump from public provision of the services and benefits, to private provision. (Charity never has been nearly sufficient to meet the needs addressed by public systems.)

Most studies of the “welfare state” face an obstacle: No two are exactly alike, and the provisions enacted by each are different, and change over time. Most governments in recent centuries – and many from much older – incorporate at least a few elements of the “welfare state,” while absolute (even if extensive) “cradle to grave” provisioning is still highly unusual.

“Welfare state” can be used only as a general concept – often as a term of opprobrium – rather than as something specific; the closer you try to get to specificity, the more the term slips away. It has that in common with many political words.

One of the growing concepts in health care (astonishingly, not a subject of serious political controversy) is coordinated care, which has been defined as “the deliberate organization of patient care activities between two or more participants involved in a patient’s care to facilitate the appropriate delivery of health care services.” Less bureaucratically, it means medical and other professionals work both together and directly with the patient so that a complex problem – such as a chronic illness which may have several causes and need several plans of attack – can be addressed comprehensively; piecemeal approaches wind up in many cases as poor band-aids on symptoms rather than real solutions, resulting in regular relapses.

Welfare (aside from the full-state element) could be reconsidered with something like that in mind.

A book called Radical Help by Hilary Cottam addressed some of this in Great Britain. One report on the book cited the case of Ella, “a British woman who grew up in a broken home and was abused by her stepdad. Her eldest son got thrown out of school and ended up sitting around the house drinking. By the time her daughter was 16, she was pregnant and had an eating disorder. Ella, though in her mid-30s, had never had a real job. Life was a series of endless crises – temper tantrums, broken washing machines, her son banging his head against the walls. Every time the family came into contact with the authorities, another caseworker was brought in to provide a sliver of help. An astonishing 73 professionals spread across 20 different agencies and departments got involved with this family. Nobody had ever sat down with them to devise a comprehensive way forward.”

Ella’s case is more the norm than the exception, in the United States too. Her situation inside that structure never really improves, even after (in Cottam’s estimate) the British system spends a quarter million pounds yearly, mostly on administrative cost and time, on her and her family.

When Americans turn cynical about welfare, situations like that – and its American counterparts – are part of what they’re bearing in mind.

Cottam described a new approach being tried in Britain, comparable to coordinated care. Ella and others like her instead meet with something called a “life team,” an interdisciplinary group of professionals; the focus now is to figure out where Ella would like to take her life – in a productive way – and then find answers to get there. Instead of Ella plugging into a one-size-fits-all system, the “system” reshapes to get her on her feet. And, as in the case of coordinated care, the results often have been surprisingly positive.

If “welfare” is redefined as flexibly helping people lift themselves up, rather than as a system for dispensing benefits, “welfare” could take on a more positive meaning.
 

Income share agreement

Conservatives should have sympathy for millennial borrowers, who did everything their parents and culture told them to do to be successful, only to become the most debt-laden generation in history. Countering a culture of credentialism mania with apprenticeships and trade alternatives is a positive step, but the first rule of finding yourself in a hole should be to stop digging. There’s no reason for the average American to subsidize the elite sorting mechanism universities have become.
► Inez Feltscher Stepman

No, it’s not another term for “marriage,” though that does offer an indication of just how involved this can get.

The use of “share” and “agreement” gives the phrase an uplifting, almost cheery, sound, but the underlying consideration here is debt – the mountains of nearly unpayable debt that many (though not all) college students face. After mortgages (which most of the time are a manageable and ordinary part of middle-class living), the largest mass of debt in the United States is higher education debt, more than $1.5 trillion. In many cases the amounts are so large that final payoff seems impossibly far in the future.

This is a new development. During my college days in the 1970s, I took out a couple of student loans, but they were small, and I paid them off without difficulty in four or five years. The loans were small because the costs were too. In those days, finances were rarely the reason a person could not go to college (at least, some decent college) if they chose to.

Conditions have changed. The situation is not good for anyone involved, but especially for those buried under all this debt. One theoretical advantage in a search for solutions is that, increasingly, student debt is not scattered among endless numbers of private lenders but under the umbrella of the federal government; the advantage is not that the federal government is any better as a lender but that it is just one unit to deal with, and susceptible to congressional action.

One approach for dealing with it, a method that seems to be gaining in popularity, is the “income share agreement,” which is a variation on how a loan will be repaid. Instead of imposing a set amount due every month (depending presumably in part on the size of the loan), the ISA is more flexible: It would vary in size depending on the income the former student receives once employed. A law student who goes to work for a top white-shoe firm might kick in more, while one who works as a public defender might pay less. An in-demand physician would pay more dollars per month than, say, an elementary school teacher.
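
To make the mechanism concrete, here is a minimal sketch in Python, using entirely hypothetical terms (a 5 percent income share above a $25,000 income floor, set against a 10-year fixed-rate loan at 6 percent); none of these numbers come from any actual ISA contract. It simply shows how an ISA payment scales with a graduate’s income while a conventional loan payment does not.

```python
# Hypothetical illustration only: terms, names and numbers are invented.

def monthly_isa_payment(annual_income: float,
                        income_share: float = 0.05,
                        income_floor: float = 25_000) -> float:
    """Pay a fixed share of income, but nothing while income sits below a floor."""
    if annual_income <= income_floor:
        return 0.0
    return (annual_income * income_share) / 12


def monthly_fixed_payment(principal: float,
                          annual_rate: float = 0.06,
                          years: int = 10) -> float:
    """Standard amortized loan payment, independent of the borrower's income."""
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)


if __name__ == "__main__":
    # A public defender and a white-shoe associate with the same $150,000 debt:
    # under the ISA their monthly obligations diverge with their incomes,
    # while the conventional loan demands the same payment from both.
    for income in (55_000, 190_000):
        print(f"income ${income:,}: ISA payment ${monthly_isa_payment(income):,.2f}/month")
    print(f"fixed loan payment: ${monthly_fixed_payment(150_000):,.2f}/month")
```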

The idea, which is about 40 years old, has some appeal as a way of matching ability to pay with liability. But the story could get more complicated. The debt in many cases is so enormous that it might not plausibly be repaid in a working lifetime – and what then? (The law makes discharging student loans very hard, albeit not impossible under some conditions.)

That’s only one of the questions.

There’s a financial-structural question, which is beginning to arise as private lenders gradually move back into the business. As writer Malcolm Harris put it, “If you can convince investors you’re going to be rich for the rest of your life, why spend your college years poor?

I.S.A.s bridge the gap. It’s hard to think up a better advertisement for free-market capitalism. But I.S.A.s are premised on the idea of discriminating among individuals. Once the high-achieving poor and working-class students have been nabbed by I.S.A.s, the default rate for federal loans starts to rise, which means the interest rates for these loans have to go up to compensate. A two-tiered borrowing system emerges, and the public half degrades.”

This leads to developments that could even “reshape childhood,” encouraging K-12 students to redraw their learning and activities to suit not only college admissions offices but also lenders – to persuade them that they’d be a good lending risk.

Where this might lead – not least in discouraging students from even thinking about entering much-needed but less profitable careers – could take a dark path.

University of Chicago economist Gary Becker said in one study that “Economists have long emphasized that it is difficult to borrow funds to invest in human capital because such capital cannot be offered as collateral and courts have frowned on contracts which even indirectly suggest involuntary servitude.” But under enough financial pressure - we’re talking about really big money here, past the trillion-dollar mark - how long will courts continue to look at it that way?

Which takes us back to “share” and “agreement,” and the question of how such a fine-sounding concept can turn into something so dark.
 

Morality

Our first president said that virtue of morality was a necessary spring of popular government. He said who that is a sincere friend to it can look with indifference on attempts to shake the foundation of the fabric [of society].

► Senate candidate Roy Moore in a September 2017 debate with Republican primary opponent Sen. Luther Strange

Well, yes. But what morality are we talking about here?

Are we talking about some of it or all of it?

After all, young George Washington was “a man on the make. He wanted to get rich. He bought, sold and traded slaves, raffling off some in a lottery and permanently dividing families. After arranging to marry the richest widow in Virginia, Martha Dandridge Custis, he wrote a series of passionate love letters to the wife of one of his best friends. And then there was his insatiable craving for land, which led him to cheat some of the men he had commanded in the French and Indian War out of acreage they had been offered as an incentive to join the fight. As biographer Ron Chernow put it, Washington ‘exhibited a naked, sometimes clumsy ambition.’”

Of course, he matured with time, but who’s perfect?

What’s really called for here is more specificity; in fact, that seems inherent. As with a number of words in this list, the problem isn’t that the word isn’t significant; the problem is that its real, broad scope has been cast aside, redefined to include only a tiny piece of the original.

The Oxford English Dictionary defines morality as “Principles concerning the distinction between right and wrong or good and bad behaviour” – which may be a reasonable enough definition, although it offers little guidance: What exactly is “right” and “wrong”?

The author C.S. Lewis took a stab at this, suggesting three components to morality: “(1) to ensure fair play and harmony between individuals; (2) to help make us good people in order to have a good society; and (3) to keep us in a good relationship with the power that created us.” That suggests what the purpose of morality might be, but still doesn’t help answer the question of what it is – the practical nature of the moral.

Because our code of ethics (ethical philosophy covers roughly the same territory as “morality”) eventually covers everything we do, including many or most of the choices we make in our lives, that becomes an awful lot of territory for us to cope with as a matter of public life. Inevitably, nearly all of us wind up paying more attention to some parts of this vast territory than to others, and those choices we make say as much (probably more) about us than about those who we would judge.

The Wikipedia entry on morality includes this useful paragraph:
“If morality is the answer to the question ‘how ought we to live’ at the individual level, politics can be seen as addressing the same question at the social level, though the political sphere raises additional problems and challenges. ... Moral foundations theory (authored by Jonathan Haidt and colleagues) has been used to study the differences between liberals and conservatives, in this regard. Haidt found that Americans who identified as liberals tended to value care and fairness higher than loyalty, respect and purity. Self-identified conservative Americans valued care and fairness less and the remaining three values more. Both groups gave care the highest over-all weighting, but conservatives valued fairness the lowest, whereas liberals valued purity the lowest. Haidt also hypothesizes that the origin of this division in the United States can be traced to geo-historical factors, with conservatism strongest in closely knit, ethnically homogenous communities, in contrast to port-cities, where the cultural mix is greater, thus requiring more liberalism.”

In the book The Hidden Agenda of the Political Mind, researchers Jason Weeden and Robert Kurzban argued that morality is often based in selfishness: “we often perceive our own beliefs as fair and socially beneficial, while seeing opposing views as merely self-serving. But in fact most political views are governed by self-interest, even if we usually don’t realize it … we engage in unconscious rationalization to justify our political positions, portraying our own views as wise, benevolent, and principled while casting our opponents’ views as thoughtless and greedy.”

Or, when socially broader, morality can be used as a lever, an ideological tool to stop other people from doing what (we think) is harmful to them.

So what can we say of morality that people across our society can accept and understand in a common way? Not much, apparently. “Morality” has become a code word, with provisions that would be commonly understood only in split-off – and often conflicting – elements of society. It’s a brickbat, not a standard of conduct. It will not mean more until people in America reach beyond it and come to some common agreement – which they seem not to do at present – about what actually is good and bad.

Our conceptions of morality, evidently, are flying apart, and some seemingly logical center is failing to hold.
 

Proper role of government

The legitimate object of government is to do for a community of people whatever they need to have done, but cannot do at all, or cannot do as well for themselves, in their separate and individual capacities.
► Abraham Lincoln

... an attack phrase on centralized federal authority and massive taxation and expenditure.
► William Safire, Safire’s Political Dictionary

Safire’s assessment was largely right: a statement including the phrase “the proper role of government” is apt to be an attack on government, carrying the idea that whatever the main subject at hand is, it is not within the “proper role of government.” (Ironically, the supporters of government activism tend not to talk much about the proper role of government as such.)

The attack tends to carry three interlocking problems.

First, all of government is smeared as needing limitation in a way no other aspect of society is; did all those government officials, high and low, drink some special Kool-Aid?

Second, it forgets that government is interactive with the rest of us. (You don’t think businesses affect what governments do? Churches? Even, from time to time, individuals? Activist groups?)

Third, in connection with that, obsession with government’s proper “role” diverts attention from other sources of power in our society – which can gain more power in turn, as long as we’re warned not to look in their direction.

One of the foundations of “proper role” rhetoric in recent years is an article written by Ezra Taft Benson, secretary of agriculture in the Eisenhower Administration, called “The Proper Role of Government.” It cleanly isolated a central kernel of the limited-government argument:

“… the proper function of government is limited only to those spheres of activity within which the individual citizen has the right to act. By deriving its just powers from the governed, government becomes primarily a mechanism for defense against bodily harm, theft and involuntary servitude. It cannot claim the power to redistribute the wealth or force reluctant citizens to perform acts of charity against their will. Government is created by man. No man possesses such power to delegate. The creature cannot exceed the creator. In general terms, therefore, the proper role of government includes such defensive activities, as maintaining national military and local police forces for protection against loss of life, loss of property, and loss of liberty at the hands of either foreign despots or domestic criminals.”

It’s a thoughtful contention – if government is created out of the authority of the people, how can it do anything you or I individually cannot do? – but it doesn’t hold up to scrutiny.

You and I as individuals lack the authority to do even the basic things Benson argues governments must do, such as protect people from harm and provide for the common defense. Accomplishing those things means commanding people to do certain things; we, as atomized individuals, have no such authority. Nor do we have authority to raise money – other than by begging for it, which as a practical matter wouldn’t work – to accomplish those things.

Authority to do anything that government does, including those things the staunchest libertarians would endorse, means governments have authority beyond that of individuals.

If we want to get into theorizing about how this might be justifiable, simple enough answers are available. These grow mostly out of the concept of the social contract: By living in and benefiting from our society, we give up some of our absolute liberty in the interest of gaining other compensating advantages. It’s transactional; a tradeoff. We can (and many of us do) disagree about the precise terms of that deal, but such a deal is what people who live in any society – whether one like the United States or one drastically different – officially or unofficially agree to. If you don’t, you leave, or you might be punished by the other people who stay.

And there are people who try to withdraw to some extent, sometimes in minor and subtle ways and sometimes in ways more obvious. But most of us take the tradeoff, knowingly or unknowingly, sometimes grumbling as we do. But nonetheless we do.

Benson said in his article that he draws much of his philosophical inspiration from the United States Constitution. But the Constitution itself disagrees with his reductionist view. Its first sentence says this: “We the People of the United States, in Order to form a more perfect Union, establish Justice, insure domestic Tranquility, provide for the common defence, promote the general Welfare, and secure the Blessings of Liberty to ourselves and our Posterity, do ordain and establish this Constitution for the United States of America.”

What it takes for government in America to accomplish these things has varied with time, often has been debated, and sometimes has moved into unpredictable areas.

Few members of the founding generation were as fierce in their calls for limited government as Thomas Jefferson; but in 1803 he doubled the size of the United States with a purchase of land from France, a purchase for which neither he nor Congress had any clear constitutional authority. (Analysts of that day twisted themselves into pretzels in attempts to find it.) Jefferson did it because, like many other people, he had become convinced that it was in the interest of the larger aims of the United States. Despite its uncertain constitutionality, it hasn’t been significantly questioned since.
 

Credentialism

How much is a college education worth?

How much is it worth to have attended, or obtained a degree from, the “right” college?

Rounding out the trio of questions: How much should it be worth?

We can get at such questions through the doorway of credentialism, a term almost begging for widespread use on either the political left or right, or maybe both.

Let’s put this into context first. As human society has developed, more information, and more specialized skills and understanding, has been needed to cope and prosper. A person in the 1600s had to understand far more than a counterpart in the 1100s. Someone living in 1800 simply did not need to know as much, to function effectively, as someone living in 2000. Education has helped create the progress, and it also makes itself more necessary as progress continues. More education helps; higher quality education helps more. Of course, let us not forget this, either: Education can come from many sources (a person educated at a fine college who never picks up another book after graduation likely will be far less well educated than a high school grad who continues to learn). And let us not forget that reputation does not necessarily equal actual quality or performance.

Which is to say, most of the academics I’ve met over the years have struck me as highly intelligent people, but I’ve met a few Ph.D.s I wouldn’t trust to park my car.

Next stop, credentialism: “a concept coined by social scientists in the 1970s, is the reduction of qualifications to status conferring pieces of paper. It’s an ideology which puts formal educational credentials above other ways of understanding human potential and ability.”

In the 1960s, amid the rethinking of many social institutions, an approach (fostered by critics such as Ivan Illich) “proceeded from the assumption that most if not all of the skills needed to competently perform the work tasks carried out by many professionals could be acquired through practical experience and with much less in the way of formal schooling than is usually needed to obtain the ‘required’ credentials. From this perspective, the disguised purpose of much formal schooling (its ‘hidden curriculum’) is to impart a particular disciplinary paradigm, ideological orientation, or set of values to those seeking formal credentials to work in prestigious or ‘high-status’ fields such as medicine, law, and education. Furthermore, the credential systems developed in a number of occupational areas are part of the ‘collective mobility projects’ of practitioners to achieve a ‘professional status’ that brings with it greater material and symbolic rewards. Thus credentialism is closely associated with strategies of ‘social closure’ (to use Max Weber’s expression) that permit social groups to maximize rewards ‘by restricting access to resources and opportunities to a limited circle of eligibles.’”

The concept has been pushed much further since then – as well as the pushback against it.

So, for example, we get employers who hire only from Ivy League schools (including many parts of the federal government in Washington), no matter the demonstrated knowledge, background, skills and other assets that other applicants might bring. The dynamic reaches out on the other end to parents frantic to get their kids into top-rank schools (leading to corruption like the 2019 college admissions scandal), and to the exploding cost of higher education.

As a character on the TV show The Sopranos said, back around 2000 (on the subject of college admissions), “It’s not about grades any more. It’s all, who you know and how many buildings you give.”

Young adults will find they need more education than did their predecessors, but turning the process into a game of extreme musical chairs will lead to social disaster – not least because many students do not go on to college, do not finish or do not attend prestige schools; and there wouldn’t be nearly enough spaces for everyone if they all chose to attend. Better answers are needed.

Joseph Fuller, a Harvard University academic, is among those giving the matter some thought, and serving as a critic (ironically maybe, given his professional perch) of over-credentialism in the job market. In a study called “Dismissed by Degrees,” he and co-author Manjari Raman reported that “Degree inflation – the rising demand for a four-year college degree for jobs that previously did not require one – is a substantive and widespread phenomenon that is making the U.S. labor market more inefficient. Postings for many jobs traditionally viewed as middle-skills jobs (those that require employees with more than a high school diploma but less than a college degree) in the United States now stipulate a college degree as a minimum education requirement, while only a third of the adult population possesses this credential.”

As a matter of politics, this tendency leads to anger at the educated elites (mainly on the right) and a socially-restrictive movement toward income inequality (on the left), among other results.

Credentialism is a term and an issue with land-mine status.
 

Coup

One of the older print books in my household collection - old enough that I bought the paperback new for 75 cents - is called Coup d’Etat: A Practical Handbook, a Brilliant Guide for Taking Over a Nation (Fawcett Premier, 1969). It delivers on the title, being a manual for a do-it-yourselfer: Here’s how to forcibly take over a country, preferably a less-technically developed one.

The author, Edward Luttwak, is a serious researcher on military and other history, and has written among other things a highly-regarded study of the strategy the Roman Empire used to grow itself, as well as a guide to military strategy used in armed forces training. Coup d’Etat was an unusual case. Opinion writer David Frum called it “that astounding thing: a great work of political science that is also a hilarious satire.” And it sort of is: Serious, factual and well-researched (he includes detailed lists of recent coups, successful and failed, referenced in the body of the book).

If on the surface it seems almost like an invitation to anarchy, the introduction (written by another writer) makes the case that “this book is as much a matter for the prevention of the coup as for initiating one.” (It was not, by the way, the first book on the subject. There was at least one predecessor, Technique of the Coup d’Etat, by Curzio Malaparte.)

Overthrows along the lines of what we might consider a coup go back to the days of ancient empires, but the modern form of the coup, in Luttwak’s telling, is made possible in the last couple hundred years or so by modern governmental bureaucracy and communications and transportation systems. He comes up with this definition: “A coup consists of the infiltration of a small but critical segment of the state apparatus, which is then used to displace the government from its control of the remainder.” The full term coup d'etat means in English a blow against the state. It does not have to be violent (though it might be), and it need not rely on support from the constituency (though it might obtain that).

That gives the sense that a coup is a long shot, that a number of elements have to fall into place to make it work, and Luttwak seems to make that case; his basic list of coups and attempted coups from 1945 to 1967 includes about as many failures as successes. He suggests that coups are much more likely to succeed when a set of preconditions are in place, such as “economic backwardness,” political independence (no close entangling alliances) of the target country, and a basic unity of the country (it’s not likely to break into pieces under pressure). It also depends on the standing, non-political parts of the government not being strong enough to push back against an illegal change in leadership. Coups work best, then, in developing countries where institutions and economies are not large and stable. They also may be on the decline; 2018 was only the second year in a century – 2007 was the first – to report no coup attempts internationally. See also the Coup d'etat Project.

A coup is an abrupt, generally unexpected, wrenching of the power of a state from whoever was legitimately installed to lead it. Luttwak refers to using the tools of the state to aggressively change its leadership, but that’s not the same as changing leadership using legitimate procedures. The rise of Adolf Hitler in Germany, for example, wasn’t a coup despite all of the activities of Brownshirts and others in the street; he was handed high office in that country according to constitutional procedures, at least at first.

The charge of one side or another fomenting a “coup” turns up periodically in recent American politics. On unusual occasions there were rumblings to that effect from the left, about Republican efforts to kick out President Barack Obama (or, before that, Bill Clinton). As determined as some of those efforts were, none rose to the level of a coup; even the impeachment effort against Clinton was undertaken through constitutional processes.

Former television personality Bill O’Reilly wrote, for example, about what he described as a coup attempt targeting President Donald Trump: “The story of our time is the coup d’état that is being planned in this country. Sounds pretty bad, doesn’t it? In most countries, coup d’états happen when the military tries to overthrow the government. The United States military would never do that… but the national media certainly would.” News organizations, in other words, were trying to engineer a coup.

But even if you assume they were trying that, the description misuses the word “coup.” There is no forcible overthrow here; the governmental system of elections and succession remained in place, and O’Reilly wasn’t really trying to contend that it hadn’t. He was comparing criticism by news organizations to a violent military overthrow of the government, but the two things are wholly different; the most news organizations could do would be to influence the opinions of various sectors of the public.

A coup is not criticism or opposition. It is an illegitimate seizure of political power.

On that basis, it might be worth reviewing what the Russian government was trying to accomplish in the American elections of 2016. That did not involve direct seizure of the governmental levers of power. But, as a quiet, well-placed attempt to grab power, it comes close to meeting the definition of a coup; how successful it was is a subject for further review.