
Political Words

Race realism


To be realistic & understand that stereotypes within each race exist for a reason and are usually backed by hard data. To understand that although facts surrounding a particular race can sometimes be hard to hear, it's still a fact, therefore it is more valid than how you feel.
► Urban Dictionary, definition 1

A term used by idiots to justify racial generalizations against nonwhites by using out-of-context statistics and pseudoscientific studies. The overwhelming consensus among evolutionary scientists is that zero evidence exists of inherent behavioral, intellectual, or anatomical differences between races. But don't tell that to the idiot race realist; it will hurt his ego and his false sense of superiority.
► Urban Dictionary, definition 2

The overarching reality of race shows how slippery and elusive it is. Barack Obama has typically been described as the first black president of the United States, and photos of him appear to confirm that; but he’s actually bi-racial. And that makes him not at all unusual in the racially complex United States (not to mention the world).

“Individuals often share more genes with members of other races than with members of their own race,” Gavin Evans, who has written extensively on genetics and race, wrote in 2018. “Indeed, many academics have argued that race is a social construct – which is not to deny that there are groups of people (‘population groups’, in the scientific nomenclature) that share a high amount of genetic inheritance. Race science therefore starts out on treacherous scientific footing.”

If you want a rationalization for something, notably an argument or data point that makes you look better, you usually can find it, especially in the age of the internet. But the search for justifications for racial superiority goes back to ancient times, and has continued through human history, reaching perhaps its peak of awfulness in the Nazi holocaust.

What is now called “race realism” is sometimes presented as a non-racist way of looking at race, but in practice usually is a more centrist-friendly term for the former “scientific racism.” And that, Wikipedia reports, is “the pseudoscientific belief that empirical evidence exists to support or justify racism (racial discrimination), racial inferiority, or racial superiority. Historically, scientific racist ideas received credence in the scientific community but are no longer considered scientific.”

If this had no specific social or political connection it might not really matter very much; but those connections are significant, and long have been. It is a central project of spokesmen for the “alt-right” (which see), who, as Evans noted, “like to use pseudoscience to lend intellectual justification to ethno-nationalist politics. If you believe that poor people are poor because they are inherently less intelligent, then it is easy to leap to the conclusion that liberal remedies, such as affirmative action or foreign aid, are doomed to fail.”

Attitudes about race, as about many other things, are not, as it were, black and white, but exist through endless shades of gray. The range of people who accept hard white supremacy is small, but larger segments do agree with pieces of what supremacists have to say, and more skilled communicators in the new millennium have targeted those softer ideas to try to advance their overall racist agenda.

In his last book, Martin Luther King, Jr., argued that “To live with the pretense that racism is a doctrine of a very few is to disarm us in fighting it frontally as scientifically unsound, morally repugnant and socially destructive. The prescription for the cure rests with the accurate diagnosis of the disease.”

Then there’s also a very different concept: racial realism. Turning the first word into an adjective made quite a difference.

One 2013 book argues that “Race is now relevant not only in negative cases of discrimination, but in more positive ways as well. In today's workplace, employers routinely practice ‘racial realism,’ where they view race as real – as a job qualification. Many believe employee racial differences, and sometimes immigrant status, correspond to unique abilities or evoke desirable reactions from clients or citizens. They also see racial diversity as a way to increase workplace dynamism. The problem is that when employers see race as useful for organizational effectiveness, they are often in violation of civil rights law.”

Shorter: It’s not a subject we can avoid.



Corporatism


Clean it up or sweep it out.

The word corporatism seems, in a day when massive corporations occupy so large a part in society, like a natural: It’s short, simple and seems to relate to something specific.

It’s not that simple.

It has been used on the right to attack the left (notably during the Obama Administration) as a big-government approach; some on the left have attacked others on the left, using the word to describe a too-close relationship with big corporations and the finance sector. It has been used on the left to attack the right for its close association with large business interests.

Author Michael Lind pointed out a few years ago that “there are at least four different and incompatible meanings of ‘corporatism’: political representation by vocational groups; centralized collective bargaining among employers and organized labor; modern industrial capitalism; and ‘crony capitalism’ or the corruption of public policy by special interests.”

Benito Mussolini preferred the term to fascism, which was the ideological label that stuck to his regime in Italy, and that might make for a fifth definition. Other dictators, including António de Oliveira Salazar of Portugal, also have used it.

Lind argued that “As an epithet, ‘corporatism’ fits into the worldview of Jeffersonian populists, for whom large corporations have always been suspect. And ‘corporatist’ as an insult also makes sense when deployed by libertarians who insist that they are pro-market, not pro-business.”
One libertarian writer suggests the term can be dated to much earlier than a century ago, as a reaction to capitalism that proposed a return to the governmentally controlled guild and mercantile system, to “avoid the ills of laissez-faire capitalism, with its accidental (‘atomistic’) agglomerations of unconnected individuals.”

Lind’s complaint that corporatism has no clear discernible meaning seems well taken.

That doesn’t mean it couldn’t, and the word itself – forking off the base of corporate – offers something to work with. Merriam-Webster defines that original word as “formed into an association and endowed by law with the rights and liabilities of an individual,” and that could offer a concept “corporatism” could be drawn from.

Evidently, though, no one is especially eager to identify themselves with it.

Alternative facts


“Who you gonna believe, me or your own eyes?”
► Chico Marx (Usually attributed to Groucho Marx as, “Who are you going to believe, me or your lying eyes?” The Groucho-attributed version scans better, but he may not actually have said it.)

When President Trump’s Counselor Kellyanne Conway appeared on Meet the Press on January 22, 2017, she was asked about the president’s press secretary’s recent description of the size of the crowd at the recent inaugural.

Host Chuck Todd asked why he would “utter a provable falsehood” (which it was, as photographic evidence soon showed).

Conway responded that he was delivering “alternative facts.”

Todd: “Look, alternative facts aren’t facts. They’re falsehoods.”

One actual fact is that this conversation took place; it was watched live by many people, and video evidence of it exists. Another apparent fact is that Todd was correct. Merriam-Webster Dictionary defines fact this way: “In contemporary use, fact is generally understood to refer to something with actual existence, or presented as having objective reality.”

Conway tried to walk back her statement, indicating that the press secretary was referring to additional facts which had not yet been included in the discussion. But “alternative facts” is a modified version of “fact.”

As in, a modified version of “truth.” (It might barely fit the Stephen Colbert coinage of “truthiness.”)

One of the top Twitter hashtags, #AlternativeFacts, was swiftly born.
But as one Psychology Today article points out, the concept at least is not new: It derives from the same stream, and uses the same structuring, as the messaging of the totalitarian state in the George Orwell novel 1984 – “newspeak.” Newspeak, among its other features, is designed to avoid “negative” words; these ideas instead are conveyed by using modifiers of “positive” words. The word “bad” would be made over, for example, into “ungood.” Similarly, “falsehood” or “lie” is translated into “alternative fact” – not a negative in sight.

Just place a modifier in front of “fact” – which choice of modifier almost does not matter – and you get the same effect, both as a matter of language and in real world practice.

If you do need an actual (real) alternative, try fict.

Used (invented?) by Greg Jenner, a history consultant at the British Broadcasting Corporation, a fict “is a falsehood that is widely believed to be true.” This sounds like a commonplace word waiting to be unleashed.



Blame-shifting


Just a joke? Sounds like gaslighting to me.
► internet meme

A Google search for blame-shifting also brings up the phrase “blame-shifting and gaslighting,” which suggests that an opening word about gaslighting is in order.

It entered our common vocabulary in 1944 with the movie Gaslight, a thriller in which a treasure hunter who has committed murder is at risk of being found out by his bride – whom he attempts to manipulate by adjusting her perception of reality to the point that she begins to doubt her own sanity. (The “gaslight” here is a reference to his ploy of turning up or down the light, then denying it had happened, leading the bride to think she was misunderstanding reality.)

Wikipedia lists these as the most common strategies a gaslighter may use:

“Hiding: The abuser may hide things from the victim and cover up what they have done. Instead of feeling ashamed, the abuser may convince the victim to doubt their own beliefs about the situation and turn the blame on themselves.

“Changing: The abuser feels the need to change something about the victim. Whether it be the way the victim dresses or acts, they want the victim to mold into their fantasy. If the victim does not comply, the abuser may convince the victim that he or she is in fact not good enough.

“Control: The abuser may want to fully control and have power over the victim. In doing so, the abuser will try to seclude them from other friends and family so only they can influence the victim’s thoughts and actions.”

The concept has been richly mined over the years, mostly in fiction (it was even turned into an episode of the old Dick Van Dyke Show), but it has become a significant factor in politics. As an article in Vox explained, “The term ‘gaslighting’ has gotten thrown around a lot over the past year, mostly in reference to political campaign tactics – when candidates claimed something had (or hadn’t) happened, and refused, when confronted with contradictory evidence, to acknowledge otherwise. Lauren Duca most famously wrote about the term for Teen Vogue in a piece titled ‘Donald Trump Is Gaslighting America,’ for which she caught some heat and also raised the profile of Teen Vogue.”

Blame-shifting, an age-old device in human personal relationships, adds a small twist into the concept.

In this version, the viewpoint of the victim (or observer) is changed not only to disorient, but to shift guilt from the actual perpetrator to someone else – often the victim. This too increasingly has come to the fore in politics. One definition, on the personal level: “Blame-shifting is when a person does something wrong or inappropriate, and then dumps the blame on someone else to avoid taking responsibility for their own behavior.” The same source lists five techniques – tactics for this mental jujitsu – including playing victim, minimizing feelings, arguing about the argument, telling self-pitying stories and “the stink bomb” (a major, loaded, counter-accusation).

All of this has clear political application; look around many ideological web sites and you’ll find larger or smaller examples of it.

One such case was raised by Washington Post columnist Jennifer Rubin, after the Senate hearing for Supreme Court nominee Brett Kavanaugh, when his accuser Christine Blasey Ford fielded criticism from Kavanaugh’s defenders:

“Right-wing male politicians such as Sen. Tom Cotton (R-Ark.) have the audacity to declare that Ford has been victimized … by Democrats. (Maybe ask her?) Even if you thought that, why would anyone say such a stunningly condescending thing? Telling someone who has said she is the victim of a sexual assault whom she should and should not hold responsible for her pain represents a new low in Senate Republicans’ twisted exercise in blame-shifting.”

On any given day, check the political news out of Washington and count the instances of blame-shifting. You may be surprised at the number. (Warning: Don’t try this as a drinking game.)



Anti-trust


A century ago, “anti-trust” was a serious political subject, a crusade even, around which political careers – Theodore Roosevelt’s to name but one – were partly or wholly built.

Today, when the concept could use serious activation more than at any time in generations, it has become a sad joke.

Here’s a semi-clear Google definition: “relating to legislation preventing or controlling trusts or other monopolies, with the intention of promoting competition in business.”

It’s semi-clear because it raises another question in the new millennium, which is: What’s that trust thing about? We sometimes hear about trusts in the sense of “living trusts,” a personal financial structure; or a “charitable trust,” an irrevocable organization to hold funds for charitable uses; or a “land trust,” which does something similar for property; or else we think of it in the sense of having confidence or faith in someone or something else (as in trusting a friend to carry out a request).

The “trust” part of “anti-trust” came from something else altogether. An 1888 academic article, reflecting the sense of that time, said a trust is “one person holding the title of property, whether land or chattels, for the benefit of another, termed a beneficiary. Nothing can be more common or more useful. But the word is now loosely applied to a certain class of commercial agreements and, by reason of a popular and unreasoning dread of their effect, the term itself has become contaminated. This is unfortunate, for it is difficult to find a substitute for it. There may, of course, be illegal trusts; but a trust in and by itself is not illegal: when resorted to for a proper purpose, it has been for centuries enforced by courts of justice, and is, in fact, the creature of a court of equity.”

Anti-trust, then, is opposition to monopoly, which is control of a business sector by a single person or interest. In that sense, it has also been defined as a rule intended “to make sure that businesses are competing fairly.”

The interest in trusts coincided with changes in business structure; those trusts so popular toward the end of the 1800s largely have been superseded by other forms of business structure. But “trust” in the older sense was the issue at hand when, in 1890, Senator John Sherman led passage of the federal anti-trust law that still bears his name, and he proclaimed, “If we will not endure a king as a political power we should not endure a king over the production, transportation, and sale of any of the necessaries of life.” It was intended as a tool for keeping monopolies from controlling the American economy, a (per the Federal Trade Commission) “comprehensive charter of economic liberty aimed at preserving free and unfettered competition as the rule of trade.”
The Sherman law, which contained criminal as well as civil components, held some sharp teeth, useful in the hands of regulators willing to employ them. The FTC description of the law says: “The Sherman Act outlaws ‘every contract, combination, or conspiracy in restraint of trade,’ and any ‘monopolization, attempted monopolization, or conspiracy or combination to monopolize.’”

It added, “Long ago, the Supreme Court decided that the Sherman Act does not prohibit every restraint of trade, only those that are unreasonable. For instance, in some sense, an agreement between two individuals to form a partnership restrains trade, but it may not do so unreasonably, and so may be lawful under the antitrust laws. On the other hand, certain acts are considered so harmful to competition that they are almost always illegal. These include plain arrangements among competing individuals or businesses to fix prices, divide markets, or rig bids. These acts are per se violations of the Sherman Act; in other words, no defense or justification is allowed.”

The counter-argument from some business advocates is that the law is a restraint on business operations, and that when enforced it can be very costly for important businesses; both assertions are true. The counter to that is that the cost of unregulated monopoly, to consumers, fellow businesses, and the citizenry at large, can be much higher.

Anti-monopoly law was strengthened in 1914 with passage of two major laws (the Clayton Anti-Trust Law and the Federal Trade Commission Law, setting up an enforcement agency). And there have been two more, largely strengthening, laws passed since then, Robinson-Patman in 1936 and Celler-Kefauver in 1950. That last was the most recent substantial legislative update on the subject.

You may have noticed a little slowdown here on the anti-monopoly front.
That slowdown has coincided with an explosion in the types and reach of business organizations and new varieties of financing and ownership. To say that federal law hasn’t kept up – most especially in the years since the rise of the internet – would be drastically undertelling the story.
That point from the Google definition about “promoting competition in business” ought to mark the law as inherently pro-free market, since the marketplace is supposed to depend and thrive on competition. In the real world, few business people relish competition unless they see a clear path to dominating or eliminating it. In the latter part of the 20th century, “anti-trust” law was steadily redefined, from the pro-marketplace measure it was designed to be into a supposed crippling regulator of business.

Chris Hughes, who co-founded Facebook, has said that anti-trust law should be used to break up that online megalith. Discussing why that law has been used less in recent years, he wrote: “Starting in the 1970s, a small but dedicated group of economists, lawyers and policymakers sowed the seeds of our cynicism. Over the next 40 years, they financed a network of think tanks, journals, social clubs, academic centers and media outlets to teach an emerging generation that private interests should take precedence over public ones. Their gospel was simple: ‘Free’ markets are dynamic and productive, while government is bureaucratic and ineffective. By the mid-1980s, they had largely managed to relegate energetic antitrust enforcement to the history books. This shift, combined with business-friendly tax and regulatory policy, ushered in a period of mergers and acquisitions that created megacorporations. ... The results are a decline in entrepreneurship, stalled productivity growth, and higher prices and fewer choices for consumers.”

It can’t be said that the anti-trust – anti-monopoly, to put it more plainly – laws aren’t used at all. The Trump Administration, for example, did unveil the machinery in throwing a roadblock before the massive AT&T and Time-Warner merger (albeit that the motivations weren’t clearly centered around promoting competition). But that and a few other instances in recent decades are anthills compared to the mountainous mergers and combinations of recent generations – most of them excellent news for executives, insiders and some stockholders, and bad news for employees, vendors and almost everyone else.

Economist and former Clinton Administration Labor Secretary Robert Reich warned (during the Obama Administration) that the weakness of anti-trust regulation was creating serious damage:

“Wall Street’s five largest banks now account for 44% of America’s banking assets – up from about 25% before the crash of 2008 and 10% in 1990. That means higher fees and interest rates on loans, as well as a greater risk of another “too-big-to-fail” bailout. But politicians don’t dare bust them up because Wall Street pays part of their campaign expenses,” he wrote. “Because U.S. airlines have consolidated into a handful of giant carriers that divide up routes and collude on fares. In 2005 the U.S. had nine major airlines. Now we have just four.”
And so on across the economy.

The implications of severe economic consolidation are much broader than that, and can turn frightening.

In 2018, writer Tim Wu noted, “We must not forget the economic origins of fascism, lest we risk repeating the most calamitous error of the 20th century. Postwar [World War II] observers like Senator Harley M. Kilgore of West Virginia argued that the German economic structure, which was dominated by monopolies and cartels, was essential to Hitler’s consolidation of power. Germany at the time, Mr. Kilgore explained, ‘built up a great series of industrial monopolies in steel, rubber, coal and other materials. The monopolies soon got control of Germany, brought Hitler to power and forced virtually the whole world into war.’ To suggest that any one cause accounted for the rise of fascism goes too far … [but] extreme economic concentration does create conditions ripe for dictatorship.”

Power combines are not an easy thing to stand up to. The last century of American history has been ample demonstration of that.

Politically motivated


If you don’t like what a politician says, a quick response (and sometimes the only one) is to decry their statement as politically motivated. Or attach the phrase to a policy, or a criminal prosecution, or a smear, or …
The purpose of saying so is to cast a sense of distrust on the statement or action. But what does it mean?

Look first at motivation.

The site Business Jargons calls that word (in a not-unusual definition among dictionaries) “a driving force which affects the choice of alternatives in the behavior of a person.” I chose a business-oriented source for the word because the study of motivation is so central to modern business activity. (One book on my shelf is Why We Buy: The Science of Shopping, by Paco Underhill.) Successful and modern businesses know a lot about our motivations, sometimes a scary lot. But they keep researching, because there’s always much more to learn; they’re smart enough to know they never know all about what motivates us – and what might motivate us to buy from them.

There is, of course, the motivation to fulfill basic needs (shelter, food, water, and so on). One report suggests motivation can be split into inside and outside factors: “intrinsic motivation and extrinsic motivation. Intrinsic motivation states that people are motivated by internal rewards like fulfillment and contentment. Conversely, extrinsic motivation states that people are motivated by external rewards like a bonus or raise as well as negative external factors like getting fired.”

In any one life, many things are going on, and for any person (even a politician) the cross-currents can run unpredictably. We may jump to a conclusion about why a person did a particular thing, but the truth is that we often don’t know perfectly clearly why we ourselves do some of the things we do. That complexity is what keeps whole economic consulting businesses in business: There’s a lot we don’t know.

Why did a politician do X? We can guess. They can proclaim. But the answer may be hard to determine conclusively.

Was something done with the motivation of gaining some advantage in a political situation? Maybe.

Prove it.



Compromise


Compromise: An agreement or settlement of a dispute that is reached by each side making concessions.
► Oxford English Dictionary

Truth is found neither in the thesis nor the antithesis, but in an emergent synthesis which reconciles the two.
► Georg Wilhelm Friedrich Hegel

I first heard the word “squish” used by a group of college Republicans, to refer to someone who is not an absolutist and is open to compromise. It seems to have remained much more in use on the right than on the left, though there’s no particularly good reason for that: The point it implies is applicable about the same on either side of the fence.

The point is that compromise is caving – that it amounts to giving in and giving up, and an abdication of principle.

Compromise is not that – people who engage in politics often wind up fighting fiercely because they are principled – but it does involve the mature, as opposed to childish, idea that in a society where a variety of people want different things, you can’t (to coin a phrase) always get what you want. The word comes from 14th century France (a compromis), which refers to a willingness by two or more parties to submit to a joint arbitration, as opposed to, in their case, fighting to someone’s death.

Compromise, in other words, is in the DNA of politics in non-dictatorship situations; it is absent in authoritarian states, where only one point of view is allowed ever to win out. Compromise means that two sides have to come together and try to find common ground where they can, and agree each to concede a little in return for a larger agreement – one that can be lived with, if not become beloved, by both sides. The idea of a settlement reached by an easing back of demands, a willingness to make concessions, grew from that.

Compromise may not be as unpopular as it’s sometimes made out to be. In 2017 the Pew Research Center found “In general terms, the public continues to express a preference for elected officials who seek political compromises. About six-in-ten (58%) say they like elected officials who make compromises with people with whom they disagree, while fewer (39%) say they like politicians who stick to their positions. About seven-in-ten Democrats and Democratic leaners (69%) say they like elected officials who compromise. Liberal Democrats (76%) are more likely to hold this view than conservatives and moderates (63%). Republicans and Republican leaners have much more mixed views: 52% say they like elected officials who stick to their positions, while 46% say they like elected officials who make compromises with people they disagree with. By 56% to 41%, conservative Republicans prefer elected officials who stick to their positions. By contrast, a greater share of moderate and liberal Republicans say they like officials who make compromises (55%) than say they like officials who stick to their positions (43%). Those with higher levels of education are especially likely to have a positive view of officials who make compromises.”

Compromises often are messy, incomplete and unsatisfying. The congressional compromises of 1820 and 1850, two of the leading achievements of 19th century American politics, were stopgap measures, entirely pleasing almost no one; but they did keep the nation intact, for a while. (The breakup came when fire-eaters in the South decided they would compromise no more.)

But then in politics, issues are never over, completely: They’re always subject to relitigation. Compromises are temporary fixes but then, in the larger picture, there is never any other kind – even if, at a given moment, one side or another seems to have prevailed utterly.

In a 2016 column, New York Times writer David Brooks said, “Over the past generation we have seen the rise of a group of people who are against politics. These groups – best exemplified by the Tea Party but not exclusive to the right – want to elect people who have no political experience. They want ‘outsiders.’ They delegitimize compromise and deal-making. They’re willing to trample the customs and rules that give legitimacy to legislative decision-making if it helps them gain power. Ultimately, they don’t recognize other people. They suffer from a form of political narcissism, in which they don’t accept the legitimacy of other interests and opinions. They don’t recognize restraints. They want total victories for themselves and their doctrine.”

What a failure to compromise often leads to in politics – and in American politics notably – is trench warfare, a long-running series of battles between two dug-in sides, with no reasonable resolution in sight.

The only way out involves climbing up from the trenches and starting some serious, and honest, discussion.

Overton window


What shall we discuss? Or, what shall we discuss and be taken seriously? A person can throw out almost any idea, but many of those ideas may be batted aside as nonsense. At least, they may be batted aside as nonsense today; tomorrow, the idea might be more acceptable, or even a majority opinion.

That’s the concern of the “Overton Window of Political Possibilities.”
Joseph Overton, an academic at the Mackinac Center for Public Policy in Michigan, developed the concept in the mid-90s. The center described it this way:

“Imagine, if you will, a yardstick standing on end. On either end are the extreme policy actions for any political issue. Between the ends lie all gradations of policy from one extreme to the other. The yardstick represents the full political spectrum for a particular issue. The essence of the Overton window is that only a portion of this policy spectrum is within the realm of the politically possible at any time.”

This doesn’t amount to a value judgment, but it does suggest what’s realistic, as a matter of public policy, at a specific moment.
Same-sex marriage would be a useful case study of how a subject once considered out of bounds – an abomination or a joke if considered at all – could move over time into the window of political realism. Marijuana legalization may be a similar example.

Ideas move in and out of the window with some regularity, over the span of time. Judgment comes into play when we decide which ideas should or shouldn’t move, and in which direction.

Why ideas move is a question for political scientists, and many have weighed in (whether or not specifically citing Overton).

And there are other uses. Conservative talk show host Glenn Beck released a novel called The Overton Window (2010), a political conspiracy potboiler about a powerful elite seeking to take over the United States by moving an unacceptable concept – “one world, ruled by the wise and the fittest and the strong, with no naive illusions of equality or the squandered promises of freedom for all” – into the Overton window.

Whatever the virtues of the novel (few, reviewers seemed to agree), it got the point of the “window” backward: It is not something that can be manipulated by a “wag the dog” strategy, but rather serves as a measure of how the public changes its mind.

Writer Maggie Astor described it this way: “The key is that shifts begin with the public. Mr. Overton argued that the role of organizations like his own was not to lobby politicians to support policies outside the window, but to convince voters that policies outside the window should be in it. If they are successful, an idea derided as unthinkable can become so inevitable that it’s hard to believe it was ever otherwise.”



Integralism


The pieces of the word suggest an integration of several things, and they do, but we’re not talking here about racial integration, in the civil rights sense. This is something far different.

The Wikipedia description says that it suggests “a fully integrated social and political order, based on converging patrimonial (inherited) political, cultural, religious and national traditions of a particular state, or some other political entity. Some forms of integralism are focused on achieving political and social integration, and also national or ethnic unity, while others were more focused on achieving religious and cultural uniformity.”

To get more specific, a lot of the discussion grows out of a long-running – centuries-long – argument within the Roman Catholic Church that the state should serve the church. (Presumably the Vatican, in which church and state always have been integrated, is exempted from the discussion.) The term seems to come from the 1905 decision by the Third French Republic to formally separate itself from the Catholic Church; French Catholics who opposed that decision called themselves Catholiques intégraux (integral Catholics). The idea has been adapted since, and the argument has ebbed and flowed since the Second Vatican Council of 1962-65, which among other things seemed to loosen the church-state relationship.

Integralism in anything like a doctrinaire sense is not likely a majority view, or even more than a sliver-sized minority view, in the Catholic community.

R.R. Reno of the Institute on Religion and Public Life noted that there’s plenty of deep church doctrine against integralism: “Our supernatural destiny is other than our natural end. As St. Augustine put it, though they are intermixed in this age, the City of God is ordered to a different end than the City of Man. As a consequence, a Catholic politics never seeks to be a sacred politics, never proposes a full and complete integration of statecraft with soulcraft.”

It was that point John F. Kennedy made when he addressed Baptist ministers in Houston in the 1960 campaign, speaking to concerns about whether his Catholicism would lead to subservience to the Pope. He said, “I believe in an America where the separation of church and state is absolute – where no Catholic prelate would tell the President (should he be Catholic) how to act, and no Protestant minister would tell his parishioners for whom to vote – where no church or church school is granted any public funds or political preference – and where no man is denied public office merely because his religion differs from the President who might appoint him or the people who might elect him…. I believe in a President whose views on religion are his own private affair, neither imposed upon him by the nation or imposed by the nation upon him as a condition to holding that office.”

The speech won a great deal of praise, but some – not all – Catholics disapproved. Catholic writer John Courtney Murray (who had seen the speech before it was delivered) said that “to make religion merely a private matter was idiocy.”3 (Be it noted that religion being a private matter was a central point in the ideology of Thomas Jefferson.)

There are many perspectives pro and con, but one of the most concise arguments in favor may have been expressed by the philosopher Thomas Pink:

“Integralism – the need for a confessional Catholic state – is part of Catholic teaching about grace. Grace is required to repair and perfect all of human nature. Human nature involves the political not as a mere expression and instrument of the private, but as a distinctive sphere of existence and understanding in its own right. Unless we commit ourselves to Christ as a political community, a vital part of human reason will remain untransformed by grace. The result will be spiritual conflict and degradation. … [The Catholic tradition] takes the state, and coercive authority in general, to have a teaching function. One central mode of teaching is through legal coercion. The supposition that it is the proper business of the state to teach, and teach coercively, extends back to Aristotle’s Nicomachean Ethics.”

Another analysis by Timothy Troutner (in describing integralism advocates, not his own view) takes this a step further, to the point where the argument becomes fully joined: “The maintenance of a neutral public square forbids the dominance of any thick conception of the good not shared by all. This quickly leads to the privatization of religion and the secularization of society, manifesting liberalism’s hostility to any religion which sees itself as more than a private concern.”

And Troutner adds, “Integralists argue that if Catholics do not dictate terms, others will dictate to them.” That fully gives the game away, making clear the real point: Integralists simply want to be the ones who dictate the terms. To everyone. Including everyone who believes something other than what they do. Under such a system, freedom of religion belongs only to one group, not to others; it’s a redefinition of what “freedom of religion” means (freedom only for certain believers).

The term integralism is primarily Catholic in usage, and then mainly among a core of scholars and pundits, but the concept is open for adoption elsewhere. Many fundamentalist Protestants could adopt it structurally – and a good many have already, even if they haven’t used the specific word. In that quarter, many extend the concept further: There’s an expressed fear not only that they may be dictated to, but that if their views are not the diktat for all of society, all of (unsaved) humanity may be condemned to hell … or something like that.

Shorter version of integralism: My way or the highway; believe as I do, follow all the rules I deem righteous, or get out of town, Jack.