The biggest problem in American politics is not the Republicans. It’s not the Democrats, either. It’s not even Donald Trump, the man who broke and domesticated the former in order to run roughshod over the latter.
No, all these things are mere symptoms of the disease. But what is the disease? We have to understand the affliction before we can cure the body politic.
The disease is nothing less than a fundamental breakdown in human communication itself. It takes time to analyze something and appreciate all the nuances of a given issue. And people don’t have time for that. They would rather pass judgment immediately than take the time to think things through.
Indeed, people who even attempt to think about things in-depth are automatically condemned as traitors by their own side. Pointing out nuances or subtleties is never something zealots are interested in, and in today’s climate, you’re either a zealot or you’re intimidated into silence by the zealots. “The best lack all conviction,” etc.
Back in the ’90s, there was an extremely popular business book by Stephen Covey called The 7 Habits of Highly Effective People. Like all self-help books for business types, it contained its share of platitudes and buzzwords, but there was also some very sound advice. The part I remember most was habit number 5: “Seek first to understand, then to be understood.”
This is extremely good advice, and it’s something that seems to be rarely heeded these days. Certainly not in the world of online political debate, where humanity seems to have regressed to its most primitive societal constructs: small villages of like-minded individuals who venture out only to engage in raids against rival tribes.
There is some historical precedent that we can use to guide us in understanding how social media has changed communication. In the late 1500s, the spread of the printing press made it easier for people to create and distribute pamphlets. These were used to attack or defend certain people, ideas, nations, religions, etc., much as social media is today. As Wikipedia helpfully summarizes: “In addition, pamphlets were also used for romantic fiction, autobiography, scurrilous personal abuse, and social criticism.” The more things change, the more they stay the same.
The most famous pamphlet in history is probably Thomas Paine’s Common Sense, which advocated for the independence of the American colonies and attacked the British monarchy. This was pretty late in the pamphlet game, though. The real high point of pamphlets-as-propaganda seems to have been in the 1600s, when they played a major role in fomenting and prolonging the English Civil War.
Governments gradually adapted and shut down such publications, mostly by use of copyright and libel laws. It’s possible that down the road, the same thing will happen with social media. However, this is not a great solution, since it could very easily turn into a totalitarian dystopia where all speech is controlled. Paradoxically, history suggests that nothing clears the path for rigid totalitarian control so smoothly as anarchic mob rule. I suspect the internet is no exception to this pattern.
Besides the role of laws and censors in reducing the relevance of pamphlets, there was also a change in social norms. Now they are ignored or seen as the hallmark of political fringe elements. If somebody gives you a printed pamphlet about their cause, it makes them seem slightly kooky. These days, if you want to be seen as legitimate, you have to have a website and a Twitter account, or at least a blog.
It’s possible that with time, social media as we currently know it will fall out of favor, and be replaced with something else. It’s already skewing away from the written word and towards pictures: in 2004, blogs were all the rage. By 2010, it was Twitter. Now it’s moving towards things like Instagram, which by design is meant for pictures, not words.
In a way, I think this is a good thing. People who like fashion (and by fashion, I don’t just mean clothes, but everything, from movies to political views, that is seen as fashionable) can have their site, and people who don’t care about fashion—that is, people who do care about substance—can stay on their stodgy old blogs and have real discussions.
The internet isn’t the only issue, though. The rise of mass media, which acts as a force-multiplier for charismatic leaders, has been gradually paving the way for this for decades.
I’ve talked about this at length in other posts, but I want to briefly make some points about the role of charisma, because it’s the single most important force there is in modern politics. Televised political events, debates, ads, and so on were the equivalent of atomic energy as far as revolutionizing politics, and charisma is the reason why.
The average person does not have the time to understand all the issues they are voting on. It’s hard enough to hold a job, raise a family, take vacations and live a normal, healthy life without having to also be an expert on the multiple dimensions of policy that they are electing officials to manage.
A person naturally looks for shortcuts to make the decision easier. This has been true certainly throughout U.S. history, and probably the history of all democracies. Once mass communication technology became widespread, politicians were quick to leverage it to their advantage, just as those in an earlier era used bribes and grift.
It will always be easier to vote for the candidate who “seems like a better person” than it is to study and fully understand all the potential policy implications of a candidate’s platform. I would say that no one person can fully understand all the different spheres of policy that the president, for example, can affect. People dedicate their entire careers to understanding just one of them.
People vote for the person they like better. And what determines whether you like someone or not has very little to do with a rational weighing and measuring of objective facts, and a great deal to do with hardwired human instincts combined with subconscious associations based on your past experiences.
Thus, politicians try all kinds of tricks to associate themselves with things that people like–they seek the endorsements of movie stars, championship-winning athletes, other popular politicians, etc. They try to prove that they are “just regular folks” like the voters. But that only helps with the subconscious association part of the equation. The instinct part was decided centuries before, as people developed their instincts to survive in a very different world than the one we live in now.
Here’s an example: the fundamental thought-process underlying sexism is that, in our primitive mind, we think of men as stronger than women because men, on average, have greater upper-body strength, and in ancient times, that was important because you wanted your leader to be able to climb, or carry heavy animal carcasses, or win a physical fight.
Of course, that’s irrelevant to the present day for two reasons: first, the strength gap between men and women is narrowing, and second, the modern-day leader doesn’t need to do any of that–but the hardwired instincts in the average human brain don’t know that.
Charisma is about appealing to our instincts, our so-called “lizard brains.” And we voters are all too happy to let politicians appeal to us this way, because it’s much easier than the fundamentally impossible task of learning about all the issues.
The way mass media has changed politics has been a gradual shift. It started with small things, like Kennedy beating Nixon by knowing he needed to use makeup in televised debates. A half-century later, a reality TV star won the Presidency.
I’ve tried to avoid talking about Trump too much on this blog, partially because it’s nearly impossible to get away from news about him as it is, and partially because the mere mention of his name tends to bring out strong negative emotions in people–both his detractors, who become enraged, and his supporters, who viciously attack his detractors. It’s unproductive.
But there is no way of writing about this subject without discussing him. Trump’s entire PR strategy depends on appeals to deep, instinctual feelings. Tribalism, nostalgia, fear of the unknown, etc.–Trump taps into all of these things in order to galvanize his supporters. And he largely relies on TV and social media to do it.
Of course, he isn’t the first politician to do this. All of them try, to some extent. Trump is just better at it. His competitors in 2016 felt like they had to keep at least one foot planted in the world of policy. But they were living in the past. In the new system of politics, being a reality TV host is far better training than service in government or the military.
This is where charisma-infused, cult-style politics, with mass media acting as a catalyst, combines into an extremely potent brew that tells voters to revert to their most basic urges, and do what is easy and comes naturally.
Taking the time to understand others does not fit into that equation. Nor does analyzing policies and examining complicated issues with ambiguities and shades of grey. Ironically, in this regard as well, modern technology has once again just made it easier for people to revert to the ancient practice of following the tribal chieftain.
The human tendency to fall in line behind a charismatic leader and the acceleration of technologies that gratify our desire for easy answers and acceptance by our tribe have combined to make politics poisonous.
Is there a way out?
For a lot of people, I think the answer is no. Many people have no interest in thoughtful debates or analysis; they just want to say their piece and have instant agreement. Trying to debate such people is a waste of time for everyone. It just makes both sides mad.
One of the most common pieces of advice for dealing with a toxic relationship is simply to leave it. Unfortunately, it’s also one of the hardest pieces of advice to follow, because usually people feel some strong urge, be it guilt, money, fear, or something else, that tells them to stay in the relationship.
The same dynamic is at work in most political arguments. In the majority of debates, no minds will be changed, and all that will happen is that people will get angry. That’s practically the definition of toxic. And yet, to just quit arguing altogether seems wrong. It feels like giving up on your own beliefs. After all, if you don’t argue for your own beliefs, who will?
You should stand up for your beliefs, absolutely. In that regard, it’s actually OK to follow the crowd and just put your opinion out there. Say what you think and why you think it’s true. Instead of reacting to someone who you think is wrong, just say what you think is right. That’s what’s really important anyway. After all, there are a theoretically infinite number of wrong ideas in the world; right ideas are a far more limited and therefore valuable commodity.
“But won’t that in itself lead to group think and insularity?” you ask. “Isn’t this how the dreaded ‘epistemic closure’ begins?”
I agree that it certainly sounds like it could, but it’s going to take a lot to prevent like-minded people from flocking together. As we’ve seen, technology and human nature are both pushing us strongly towards doing that. We can’t fight that trend; nor would we even necessarily want to, as like-minded people grouping together can produce great things. But we can and do want to mitigate the trend of different groups getting into protracted and pointless fights with each other.
The key part is that when people try to argue with you–and inevitably, they will–you will have to use your judgment as to how best to handle them. I don’t want to offer too much advice on this, as there are lots of possible angles from which they might attack, from the most childish insults to actual threats to strong, well-reasoned arguments. Each one requires a specific response.
That said, here are two key things to keep in mind: first, every argument feels like a personal attack, whether it is or not. In fact, almost none of them are, even the ones that are designed to seem like it. The natural instinct is to strike back immediately (I’ve been guilty of this), but it’s better to take a little time to ask yourself “Is this worth responding to?” Often, it isn’t. If it is, it probably means that somewhere, it contains a nugget of useful or interesting information. Address that, and disregard the chaff.
The second thing is that the vast majority of arguments online are all formulaic lines that the arguers themselves didn’t originate. They just got them from some source of pre-made arguments for their side. If you read an online political debate as a neutral observer, you’ll realize that it’s not organic—it’s a choreographed dance where each side unwittingly follows the pattern their party has set down for them. It’s an understatement to say both sides do this—all sides do this. Most people don’t know how to argue, so they look to others (often charismatic leaders) to show them how.
Don’t be like most people. Focus on having something new to say, both in your original statement and your counter-arguments. You can quote others as supporting evidence, but your central point should be your own. After all, if somebody else already said it, why should you say it again?
This method has two good results, which act as antibodies to the disease that’s killing communication. One is that if you strive to create something original, whatever ideas you come up with are likely to be well-thought-out and robust, because you’ll have to work hard to think of them. And the second benefit is that to a degree it protects you against the charismatic leaders who are trying to cajole you into echoing them.
Ultimately, political debates will be settled by the test of which ones have the most success in the real world. So don’t worry about trying to correct people who are wrong, unless they signal that they’re open to correction. Wrongness is its own punishment, in the end. Focus on getting your own ideas right, engage with the people who have something useful to contribute, and ignore the others.
I blogged about Mark Paxson’s story The Marfa Lights a while back. This week I finally got around to reading the rest of the stories in the collection, and I enjoyed them tremendously. I think my favorites were the post-apocalyptic poem (bonus points to Mark for his use of the excellent word “gloaming”) and the sci-fi tale laced with David Bowie references. All the stories are quite good.
Some of the stories have a bit of a Twilight Zone-like feel to them, which I liked quite a lot. Like Phillip McCollum, Mark has a knack for setting the reader up for a surprising ending in a subtle and economical way.
Speaking of Phillip, I blogged about him recently as well, and since then he’s just kept on putting out more terrific stories. Branded and Halfway are two of his most recent works that I’ve enjoyed lately.
Both Phillip and Mark are very adventurous in their writing. While there are certain themes that recur, they are always experimenting–trying on different voices, styles and genres, and it never fails to make for an engaging read.
Ever since I first started dabbling in the writing business, I’ve read numerous people claiming that short stories aren’t read much outside of schools and small literary circles. If you want wide acclaim as an author, goes the conventional wisdom, you’ve got to write novels.
This has always baffled me. Modern audiences are famous for their short attention spans. If anything, you’d think they would be more interested in a short tale that can be finished in a few minutes or an hour than a long, drawn-out novel. (Or, as is even more popular, series of novels.)
Think about it: when it comes to other entertainment, most people watch sit-coms or hour-long episodic dramas. A sizable but somewhat smaller audience goes to two-hour movies. And only hardcore artsy types go to sit through really long movies or, for the truly committed, operas. Why is this situation reversed when it comes to literature?
Maybe in the past you could have said it was because novels were all that was widely available, but the internet changed that dynamic in two ways. The first is simple economics–you can get a good short story collection like The Marfa Lights for ninety-nine cents on Kindle. Phillip publishes his work on his blog. You can get good writing while spending less of your time and money than a novel requires.
The second thing is that the internet makes it easy to discover authors that big publishing outfits haven’t taken on yet because they are too risk-averse. I would never have read the work of Mark, Phillip, and other terrific indie authors if not for the internet.
So why aren’t the short, independently-published stories flourishing? Talented writers are all around us and easier to find than ever. The big publishers’ stranglehold has been broken, just as the major traditional news outlets have lost out to bloggers and independent, specialized news services. What is holding so many readers back?
In a way, novels from big-name authors and publishers are like major Hollywood movie franchises, in that they are a relatively safe investment. Audiences go to them because they know pretty much what to expect. Similarly, when it comes to novels, people feel like they can be confident about what they’re getting–especially once they know a certain genre or author. And moreover, once you get into a novel, you (usually) don’t have to worry about changing gears and getting reintroduced to a new situation and set of characters with every new chapter.¹
Short story collections, by definition, can’t be like this. There has to be variation in them, or reading the collection will be a slog. For that matter, writing such a collection would be a slog. Almost every writer likes to try out different things now and then.²
So consumers are still playing it close to the vest with their entertainment choices. Most of them would rather invest in novels from major authors and publishers, from which they think they know what to expect. (Ironically, consumers of news couldn’t wait to jump at any excuse to ignore the traditional news outlets. They’re more careful with how they invest their entertainment budgets than who they trust to tell them the news.)
Don’t be like typical consumers. Give independent authors and short stories a shot. Reading is like anything else in life–if you want better than average return, you can’t just do what everyone else is doing and hope someone will give you exactly what you want. You have to be willing to be different if you want the best.
1. Lest anybody misinterpret what I’m saying here, I’m not claiming that novels are somehow intrinsically inferior to short stories. Some stories really do need to be 40,000 words or more in order to be told well. My point is just that I can’t see why novels should attract more readers than short stories. A satisfying story is a satisfying story, regardless of its length.
2. The King in Yellow, by Robert W. Chambers, which contains one of my all-time favorite short stories, “The Repairer of Reputations”, is a good example. Chambers loosely tied the first four stories together using the sinister title character and some other elements, but the later stories gradually turn away from the weird and toward the romantic. But all the stories contain elements of weird horror and fin de siècle romance, so the reader is always a little uncertain of what’s going to happen next. That’s what makes it good.
This book gives a comprehensive history of the United States government’s plans for surviving a nuclear war. It spans the Atomic Age, with detailed information from the Truman through Obama administrations, along with occasional references to the comparatively primitive security measures under earlier presidents.
There are a number of interesting stories in the book, from the day that President Truman practically shut down Washington as he stepped out to go to the bank to the total chaos and confusion that reigned on 9/11, when the emergency procedures were implemented rather haphazardly.
For all the programs aimed at “continuity of government”, the ultimate conclusion of Presidents, generals, CEOs, and bureaucrats throughout the decades seems to invariably have been that in the event of a nuclear attack, the United States as we know it would cease to exist, and survivors—if any—would live under martial law at best for a considerable length of time.
And yet, the preparation proceeds anyway, as the government tries to figure out a way to survive the unsurvivable. In one memorable section, Graff discusses a secret bunker at the Greenbrier resort in West Virginia, complete with underground chambers for the House and Senate to convene, all maintained without the knowledge of even the CEO of the resort himself.
Throughout the book, I repeatedly thought of this exchange from the British political sitcom Yes, Minister:
Sir Humphrey: There has to be somewhere to carry on government, even if everything else stops.
Minister Hacker: Why?
Sir Humphrey: Well, government doesn’t stop just because the country’s been destroyed!
That really summarizes the absurdity of the whole enterprise. The book’s subtitle, “The story of the U.S. government’s secret plan to save itself–while the rest of us die,” is a bit unnecessarily hysterical and sinister-sounding (they can’t really be expected to save everyone, can they?), but it does underscore the inescapable problem of attempting to preserve a way of life that can’t exist in the unimaginably horrible new world that would be created after the bombs went off.
Graff did a lot of research for this book, but too often sacrificed readability in the interest of being thorough. There are plenty of paragraphs that bog down in the alphabet soup of government programs, plans and agency acronyms. (This is perhaps inevitable to some degree—the government loves acronyms.) Even more confusingly, information is sometimes poorly organized, and occasionally repeated in different sections. Once or twice this caused me to think I had accidentally gone back to a section I’d already read.
There’s also at least one flat-out error: on page 278 of the Kindle version, Graff asserts that “Reagan was the first president shot in nearly a century.” This is obviously not true, and probably the result of some kind of copy/paste error. That’s one that anybody would know is wrong, but it made me wonder what other, less-apparent-but-equally-serious errors the editors might have missed.
So, should you read it? A lot of the negative reviews say things like “I could have gotten all this from Wikipedia”. Which is true, but also raises the question, “Then why didn’t you?” A journalist like Graff isn’t required to discover new information—compiling and correlating existing information into one convenient book is also useful.
Unfortunately, Raven Rock isn’t as convenient as it could have been. A bit more editing and condensing would have improved the book a great deal. As it is, though, there’s a wealth of information for those willing to slog through and find out what secret projects the government has been spending our taxes on in the hopes of surviving Armageddon.
I still use an old flip phone. It makes calls. It can send texts, albeit not long ones. It even has a camera, although the lens is so smudged it’s basically useless.
Would it be fun to have a phone with apps and a better camera and a connection to Cloud storage? Sure, it would. In fact, that’s exactly the problem–I’d spend all of my time on it.
Carrie Rubin tweeted this earlier today:
Never imagined this would be a headline I’d read in my medical journal one day. pic.twitter.com/7z2cMnHS8K
— Carrie Rubin (@carrie_rubin) July 10, 2018
By coincidence, I was reading Paul Graham’s 2010 essay, “The Acceleration of Addictiveness” earlier in the day, in which he says:
“Most people I know have problems with Internet addiction. We’re all trying to figure out our own customs for getting free of it. That’s why I don’t have an iPhone, for example; the last thing I want is for the Internet to follow me out into the world.”
He’s right. Our challenge now is to get away from all the technology. Like I wrote the other week, technology keeps growing faster and faster, and it’s getting harder and harder to avoid. We are getting swamped by it.
The flip phone is bad enough as it is. Recently, I read that keeping your phone in your pocket (where I’d always kept it) can cause male infertility.¹ So I started keeping my phone in a briefcase, and leaving it behind when I go for a walk or go to the gym. It was amazing how liberating this felt—rather than checking the time every couple minutes, or looking to see if I had new messages, I just figured “it can wait”. And it can.
I realize that sometimes you want to have your phone. I’m fortunate in that my gym is practically next door to where I live. If it were farther, and I wanted to take my phone, I’d take a gym bag. But I’m rapidly getting addicted to going for walks without it. If you feel unsafe walking alone without your phone, I suggest trying to find a friend or group of friends to go with you—you can have better conversation and get some exercise as well.²
When I wrote The Directorate, I ran up against the problem of how to devise some even more powerful and omni-present technology than smart phones for the characters to use. It seemed like they’d have that by 2223. But the more I thought about it, the more I started to think our current technologies dominate life to a degree that already seemed like something out of sci-fi. And at that point, I realized the really futuristic innovation might be if people would opt out of being constantly attached to their communication devices.
I’m not anti-technology by any stretch. I couldn’t do most of the stuff that I do for work and for fun without computers, game consoles and, of course, my trusty iPad. I wouldn’t have anybody to write this for if the internet didn’t connect me with wonderful people all over the world. But as with all good things, you need to have some discipline so you don’t overdo it. A smart phone just makes it that much harder for me to maintain that discipline.
1. To be fair, the evidence on this is mixed. When I researched it, I found plenty of places saying there was “no clear link” as well. Cell phones are relatively new; it’ll probably be a while yet before the researchers come to any definite conclusions. But I’m playing it safe on this one.
2. I know, there’s something to be said for solo walks, too. Believe me, I’m a misanthropic introvert; I get it.
A couple years ago, I read the Jonathan Safran Foer book upon which this film is based, and at the time I wrote that it made me feel very glad to have been a vegetarian all these years.
Well, the movie also does that, and then some. It’s one thing to read about how the proverbial sausage gets made. Seeing it is stomach-churning. A word to the wise: skip the snacks before this one, or make sure you eat them all during the previews.
But Eating Animals isn’t just a glimpse into the sickening nature of the meat industry. It’s partly that, for sure, but it also explores alternatives, interviewing organic farmers and animal welfare advocates who offer other, less horrifying systems for farming.
One of the key points that the film and the book raise is the way that modern farming has corrupted the biology of the animals. What we think of as “normal” chickens aren’t where the meat comes from—instead, meat chickens are bred to be morbidly obese, barely able to walk once they reach adulthood. (I’ve seen these first-hand; it’s incredibly sad.)
And it gets worse: because modern animal farming conditions are so horrible, the animals need to be pumped full of antibiotics just to survive to adulthood. And those antibiotics end up in the meat that people eat, and in turn cause antibiotic-resistant “superbugs” to breed.
This is really the big takeaway from Eating Animals: the modern farming system is hurting humans too. Whether it’s dumping animal waste in cesspools that drain into rivers or allowing pus from diseased cows to seep into milk, the problems with the present-day meat industry aren’t simply related to animal welfare, but ours as well.
As a film, it works pretty well, though it is a bit disjointed as it hops back and forth to tell the stories of various farmers and activists. For the most part, it’s done in a straightforward interview style, although there was one cut from a KFC commercial to the interior of a corporate chicken farm that had a darkly ironic tone worthy of a Michael Moore film.
The film makes a number of strong points about the ties between the meat industry and the U.S. government charged with regulating it. As with so many things, the lobbying interests are able to control the bureaucrats who are supposed to regulate them.
This brings me to one question that the film never fully answered: the role of government regulation. The general theme of the film is that the huge, centralized nature of the meat industry is responsible for most of the appalling practices. (In the film, Christopher Leonard from something called “New America” likens the meat industry’s structure to the Soviet Politburo.) The better alternative, the film implies, is local, organic farming–in other words, farming as it was prior to 1960 or so.
The problem here is that it would be hard for the government to regulate such small, decentralized outfits, which in turn raises the risk of food produced in a non-standardized fashion that could very easily become contaminated. Say what you want about the current system, but it at least hasn’t caused a major pandemic yet. That might be due to pure luck, but still, I would have liked to see more of an explanation of how, exactly, the FDA or the USDA or whatever is supposed to regulate a nation of small, independent organic farmers.
This, by the way, is one of the less obvious points about political economy that neither the Republicans nor the Democrats like to acknowledge: that government and big business need each other. Government needs big business because it’s too hard to regulate (or raise money from) small business. Big business needs government because it can lay a foundation for it to maintain its monopolies or oligopolies.
Eating Animals makes a strong case that the current, horrible system of factory farming has developed as a result of deals and organizational hierarchies devised by huge organizations, but from there, it doesn’t address how we’re supposed to get back to the “old” style of farming. After all, the fundamental factors that caused organic farming to vanish in the last half-century are still present. How do we change that?
By the end, the film suggests that nature will change things for us—perhaps in the form of a pandemic or severe global climate change. In the meantime, the best we can do is try to think long and hard about our food choices, and choose options that are healthier and less destructive.
Watching Eating Animals was a surprising experience for me personally because of how close to home it hit–much of the film is shot in the rural Midwest, and the farms and fields look like the ones I remember from my childhood. Many of those interviewed could have been my neighbors. And, most disturbingly, some of the footage of animal cruelty came from a farm in Plain City, Ohio, a mere 20 minutes from where I grew up. (You can read about the case here–be warned; there are some disturbing pictures.) The horrible consequences of modern farming are all around; it’s just that few people bother looking for them.
After seeing an early sequence in the film showing aerial footage of cesspools outside pig farms, I decided to check online and see if they really looked like that. Sure enough, if you go on Google maps and look at the satellite images, you can see the pink-tinted pools outside the long, grey buildings that house the pigs. They’re all over the place in North Carolina.
Of course, most people know, in some vague, abstract sense, that the way their meat got made was not pretty, and frankly, most of them would just as soon remain ignorant of the details. When I recommend this movie to my meat-eating friends, most of them react by saying “I’d rather not know.” Some of them go a step further and try to justify eating meat with a hard-nosed, “just-the-way-of-the-world” realism that only naïve idealists ignore. And some of them say simply “I have to eat meat.” (They assert this without ever having tried to do otherwise.)
Eating Animals isn’t arguing that everyone should abandon meat altogether. (I might argue for that—but then, I’m awfully fond of cheese and eggs, so I can’t claim total innocence in this.) But it is arguing that we need to think long and hard about the way we get our meat, and whether this system is one that can continue indefinitely without causing massive, deadly problems. And to do that, we first need to be willing to confront the current reality. There may be some nasty things in the world that are best left unexamined—the comments sections on most news articles come to mind—but this isn’t one of them.
Chances are that most people who voluntarily go to see Eating Animals are people who have read the book or who are already aware of the problem of factory farming. And that’s well and good, but it isn’t enough, because the film is most effective as a form of aversion therapy to make people reconsider what they eat. So I not only recommend that you go see it, but drag some of your carnivorous family and/or friends along as well. Say you’ll treat them to dinner afterwards—and then see if they don’t suddenly become interested in organic or vegan food.
In P.G. Wodehouse’s 1938 novel The Code of the Woosters, there’s a great character called Roderick Spode. A parody of Sir Oswald Mosley, Spode is the dictatorial leader of a fascistic group called “The Black Shorts”. Bertie Wooster, the protagonist, describes him as looking “as if Nature had intended to make a gorilla, and had changed its mind at the last moment.”
Ultimately, Spode is thwarted when Bertie’s valet Jeeves reveals that he knows about “Eulalie”–which Bertie learns later is a ladies’ lingerie shop called Eulalie Soeurs that Spode operates. Spode fears that he will lose face if this becomes known to the other members of the Black Shorts.
Wodehouse was one of the greatest humorous writers of all time, but Spode was a rare instance when he satirized a particular public figure. And a clever satire it was too; suggesting that a would-be dictator moonlights as an underwear designer instantly reduces him to a figure of fun.
Of course, even in Wodehouse’s comic world, he still assumed that such people could be cowed by such basic things as shame. It was a more genteel universe that Wodehouse imagined, in which even the villains played by the rules.
The number one issue that humanity faces today is technological growth. If you look under the surface of most political issues, what drives them is the way technology has given us abilities we did not previously have, or information we could not previously have accessed.
What makes this especially powerful is that technology evolves much faster than human beings do. Technology can go through many generations in the course of one human’s lifetime.
This is important, because in evolutionary biology, new traits usually emerge over the course of generations. (This is why biologists usually study traits in organisms with short generations. You can observe multiple generations of flies over the course of a year.)
But since technology moves faster than humans can evolve new traits, it means that we are always playing from behind. When I was born, cell phones were huge, unwieldy things used by rich people, sales reps, and techies. Now they’re everywhere, and are more powerful than the top-of-the-line supercomputers of three decades ago.
For the last 200 years, technological progress has been increasing at an incredible rate. And humans have often suffered by being slow to adapt. This is illustrated most dramatically by wars: in World War I, the officers had all been trained in tactics derived from the Napoleonic era. This resulted in huge massacres, as cavalry and infantry charges–which would have worked against men with inaccurate single-shot rifles–were torn to pieces by machine guns. Technology had made a huge leap in the century between the Battle of Waterloo and the Battle of Artois. And that was as nothing compared to the leap it would make in the next thirty years, with the advent of the atomic bomb.
The thing is, while it may seem to us like a long time since the days of cavalry charges and flintlock rifles, in terms of human history, that’s a drop in the bucket. Homo sapiens first emerged roughly 200,000 years ago. On that scale, Waterloo might as well be yesterday, and the Roman Empire was just last week.
For the vast majority of our existence, life was pretty much the same: people worked the land and hunted and raised families. Periodically, they organized into larger tribes to make war or trade. If you took a citizen of Athens from, say, 450 BCE and transported him to Renaissance Italy—nearly 2000 years later–he’d still have a pretty good handle on how things worked once he got past the language barrier. Whereas if you transported somebody from 1890s London to the present day—a mere 128 years!—he’d have no idea what was happening.
When you read history, it’s easy to be struck by how human nature seems unchanged over the centuries. We can recognize things in the texts of the earliest historians and philosophers that seem analogous to modern phenomena. While it may seem like this means human nature is eternal, what it really signifies is that it hasn’t been that long, in biological terms, since what we think of as “ancient times”.
It’s commonplace to observe that technology changes, but human nature remains the same. But observing it is one thing; grasping the full implications is another.
For instance, there is a major “culture war” debate in the U.S. over the issue of transgender rights. Those who favor transgender rights view their opponents as closed-minded bigots. Those opposed see the others as radicals bent on destroying the social order. What both sides ignore is the fact that until very recently, transgender people had no medical treatment available to them. For hundreds of thousands of years, transgender people had no option but to live in the body they were born with. And the rest of the population scarcely even knew they existed, and so built whole societies premised on two rigid gender roles. It wasn’t until very recent breakthroughs in medical technology that any other option became viable.
Once you view it in these terms, you realize it isn’t a liberal plot to destabilize society, but simply a group of people able to access treatment that previously did not exist. Likewise, you also realize the reason so many people are responding with fear and suspicion is that history and tradition provide no guidelines for how to deal with the issue. It simply wasn’t possible in the past.
A number of social conflicts, I suspect, are in fact the result of people being optimized for a very different world than the one we actually live in. Ancient prohibitions against homosexuality, sodomy, and other non-reproductive sexual behavior made some sense in the context of their time—in the past, when mortality rates were high, people needed everyone who was physically capable of reproducing to do so, personal feelings notwithstanding. It was about survival of the group, not any one individual.
Nowadays humanity is threatened more by overpopulation than by extinction—but we’re still adapted to the world of thousands of years ago. That’s just one example. I think people in the developed world still have a slightly irrational fear of famine, simply because we evolved over millennia where food was, in fact, extremely scarce. (This is why philosophies like the so-called “abundance mentality” seem so counter-intuitive. In the past, it would’ve been suicide to assume there were enough resources for everybody.)
Instinct is a powerful thing, and incredibly dangerous when it gets outdated. To borrow an example from Paul Graham: because human beings have only recently gained the power of flight, it’s easy for our senses to be fooled in bad visibility.
Of course, this is something where we use technology to make up for our own shortcomings. A human being would have no idea how to fly a plane if not for instruments that correctly show the position of the aircraft. And this leads to another obvious point about technological evolution—it is, in many ways, nothing short of miraculous for humans. It allows us to accomplish things our ancestors could never have imagined. Whatever bad side effects it has, no one could ever rationally argue that we’d be better off getting rid of all of it and returning to primitive life.
The saving grace is that technology has been designed by humans and for humans, and so generally is compatible with the needs of humans. The things that conflict with human needs aren’t usually a direct result of this, but rather side-effects the designers never thought of.
But side-effects, almost by definition, are insidious. Any obvious, seriously harmful side-effect gets fixed early on. The ones that don’t usually fall into one or more of the following categories:
- Not obvious
- Don’t seem harmful at first
- Can’t be fixed without destroying the benefit
The designers of automobiles probably never thought the exhaust would cause pollution; even if they had, they probably wouldn’t have realized that cars would be widely used enough for it to matter. Marie and Pierre Curie had no idea the new element they had discovered was dangerous. It seemed like just a useful luminous substance. And pretty much every communications technology in history, from the printing press on, has the potential to spread pernicious lies and propaganda just as much as news and useful information. But no one can figure out a way to remove the bad information without also getting rid of the good—the only option is censorship, which can pose a danger in its own right.
I’ll say it again for emphasis: technology is evolving faster than humans. As a result, our instincts will often lie to us when it comes to dealing with technology. It’s the same way modern junk food is engineered to please our taste buds while poisoning our bodies—it’s designed to set off all the right sensors that tell us “get more of this”.
The rise of nationalism throughout the world in the last decade has gone hand-in-hand with the rise of social media. It’s not a coincidence. Social media plays to an old instinct that takes human society back to its most basic state: the tribe, and the desire to win approval from that tribe. But in the past, when we were in tribes, we didn’t know what the rival tribes or nation-states were doing—they were in far-off lands and rarely encountered each other. But now, we always know what they are doing—they are just a click away. And because they are a different tribe, our instincts tell us to fear them, as our ancestors feared invaders from distant places.
What can we do about this? We can’t get rid of technology; nor would we want to. And I don’t think it’s a good idea to make it into a political question. Politicians want easy, non-nuanced issues, where they can cast themselves as leaders of a huge, virtuous majority against a tiny, vaguely-defined band of evildoers. That would be a terrible thing to happen on this issue. As we’ve already seen in the social issues I mentioned earlier, politicians tend to cast these things as moral questions rather than as consequences of technological change.
We’re going to have to deal with this one on our own. But how? After all, technology brings huge benefits. How can we keep getting those while minimizing the side effects? We don’t want to completely ignore our instincts—not all of them are outdated, after all—but we can’t always trust them, either.
The best advice I can give is to always be on the lookout for what side-effects technology produces in your own life. Always ask yourself what it’s causing you to do differently, and why. Then you’ll start to look for the same in the wider world. We know human nature doesn’t change that much; so when you see or read about a large number of people behaving in an unusual way, or a new cultural phenomenon, there’s a decent chance that it’s in response to some new technology.
It’s easy to look at the dangers of technology and decide you want to opt out, throw it all away, and return to the simple life. This is probably healthy in small doses but it’s impractical on a large scale or for an entire lifetime. What I’m advising is cultivating an attitude of extreme adaptability, where you are so flexible that you can both use new technology and see the potential problems with it coming before they hit you. Listen to your instincts, but know when you need to disregard them. Remember, your instincts are optimized to give you the best chance at survival in a world of agrarian societies and/or tribes of hunter-gatherers. And they are damn good at it; but their mileage may vary in a world of computers, nanomachines, and space travel.
Before I begin, let me give a special shout-out to my blogger friend and loyal reader, Pat Prescott: yes, Pat; it’s finally happened! I don’t know how many years it’s been since you first told me about this series, but I finally have gotten around to reading it. Many thanks to Pat for the suggestion, and for all his support over the years.
Also, for those of you who don’t want to wade all the way through my long-winded review, I made a short video review for your convenience. (And also just for my own amusement.)
Casca begins with military doctors in war-torn Vietnam finding an American soldier named Casey suffering what should be a mortal wound that miraculously begins to heal. As the doctor examines him, he feels himself drawn into a vivid recollection of the man’s past: a flashback to his time as a Roman soldier, Casca Rufio Longinus, a legionnaire assigned to the province of Judea.
During his time in Judea, Casca torments a prisoner about to be crucified–Jesus of Nazareth, who curses him to an eternity as a soldier of fortune, until the Last Judgment. (Note that in the above video, I mistakenly said Casca stabs him on the way to the crucifixion. I meant to say he guards him on the way, and then stabs him.)
Casca dismisses the curse as the raving of a mad prisoner, but as he fights and receives wounds and does not die, he begins to realize that it truly is his doom to live forever, always moving from one battle to the next.
He is sent into slavery for a time, where he is mentored by a kindly Chinese man who teaches him martial arts as well as philosophy. Eventually, he makes his way into the gladiatorial arena and battles his way to freedom. Ultimately he rejoins the Legion, centuries after he originally knew it, when Rome has seen many emperors rise and fall, and the once-mighty empire verges on collapse.
The book flashes forward again to the hospital in Vietnam–Casey having gone, and the doctors shaken by the experience. In the final chapter, the action moves to Egypt, where young Israeli soldiers fight alongside a grizzled mercenary–Casey again, who recalls fighting in the same desert many centuries before.
The writing is straightforward with no frills, so the book is a quick read. The description is limited, with the most heavily described parts being those relating to battles and Roman tactics.
There is a lot of violence, naturally, and quite a bit of sex as well. Actually, one of the things that bothered me about the book was the sexism–women are described exclusively in sexual terms, and rape is commonplace. The worst part is, this probably is an accurate depiction of attitudes during the time period. There was also one section, during Casca’s time as a gladiator, about his rivalry with a cruel Numidian (African) gladiator; it was dripping with racism (and sexism, in how the man is depicted preying upon women) to a degree that rivaled Lovecraft in appalling the modern reader. I could have done without that.
Maybe I shouldn’t be surprised by this from a book written in the 1970s by a man who grew up in pre-Civil Rights America. All in all, Sadler had a strange life–maybe one that would have been worthy of a novel in its own right. His military career was cut short when, to quote Wikipedia, “he was severely wounded in the knee by a feces-covered punji stick”. Before writing Casca, he wrote and performed the patriotic song “The Ballad of the Green Berets”. Later on, he shot and killed a romantic rival, for which he served 28 days in jail. Years later, he himself would be shot–whether accidentally by his own hand or by a would-be killer is unclear.
Honestly, people who don’t like to learn the biographical details of authors are missing out on a lot.
Anyway, back to Casca: for me, the most memorable character in the book was the Chinese slave whom Casca meets when sailing back to Rome. He’s also a bit of a cliché–an Asian philosopher-warrior-monk who dispenses wisdom as well as being a master of martial arts–but it kinda works anyway. Unlike most of the characters, he does a bit of introspection, and seems to grasp the horror of Casca’s curse even before Casca does.
What I liked most about the book was the concept: the idea of a man condemned to live forever is an ancient one. Or, as Harlan Ellison wrote for an episode of The Outer Limits:
“Through all the legends of ancient peoples — Assyrian, Babylonian, Sumerian, Semitic — runs the saga of the Eternal Man, the one who never dies, called by various names in various times, but historically known as Gilgamesh, the man who has never tasted death … the hero who strides through the centuries …”
This idea of an immortal condemned to live through endless cycles of fruitless quests is a great one. It’s the premise for the legendary video game Planescape: Torment, as well as Stephen King’s Dark Tower series. (I’ve also heard some claim that King’s protagonist Roland was influenced by the soldier-of-fortune character “Roland the Headless Thompson Gunner”, from the song by Warren Zevon, which features the line “the eternal Thompson gunner”.) It’s a great premise for exploring themes like the futility of war, “man’s inhumanity to man”, etc.
Because the concept is so fruitful, the Casca series currently spans 47 books and counting, following Casca’s adventures across pretty much every war in recorded history. It surprises me the series was never made into a movie. I could see it very easily being adapted into one of those over-the-top, hacking and slashing and/or guns-blazing action films like they made in the 1980s. Or maybe that’s the problem: the teenage boys who would probably have been the perfect audience for these books in past eras are now spending their leisure time watching action movies and playing online first-person shooter games, and don’t even know about them.
Imperial Passions is a sweeping historical novel told from the perspective of Anna Dalassena, who at the beginning of the tale is a 14-year-old orphan girl living with her grandparents. Over the course of the novel, she grows up, marries, becomes a mother, and through it all is witness to many major events during a tumultuous time in the Byzantine Empire–emperors and empresses rise and fall, wars are waged, and all the while daily life goes on in what was then one of the most powerful cities on Earth.
A major plot thread is Anna’s hatred for Constantine Ducas, a powerful official in the imperial court who viciously abuses his wife, Anna’s cousin Xene. Ironically, by the end of the book, she finds her family in an uneasy alliance with the man–though he is clearly maneuvering to gain power for himself, just as many of the other palace bureaucrats do.
One of the things I liked most about the book is the way the political machinations cause real effects in the characters’ daily lives. Another plot thread is how the government levies taxes on its citizens to build extravagant churches and palaces, while failing to pay soldiers on the empire’s edge. It’s one thing to read that someone is an officious bureaucrat–it’s another when you read that their corrupt tax collection scheme is robbing the main character. (The Econ major in me also liked seeing an early example of Ricardian equivalence.)
The large cast of characters is composed largely of actual historical figures, though in a few cases Stephenson takes understandable liberties, given the relative lack of historical information. Some of the most memorable characters are Anna’s uncle Costas, who teaches her about strategy through their frequent chess games, and the bureaucrat Psellus, a “Vicar of Bray”-like character who manages to retain his high office by constantly courting the favor of the various rulers.
Imperial Passions is a truly ambitious work, and Stephenson clearly has done extensive research. Almost every aspect of Byzantine life is covered–food, clothing, travel, religion, marriage and almost anything else you can think of is discussed in some fashion. As a result, the story is rather slow to unfold. If you like a rapid-fire plot with lots of sudden twists and turns, it might not be your cup of tea. And there are times when the otherwise commendable commitment to authenticity hurts the flow of the tale–for example, since many of the characters are historical figures, there are a lot of duplicate names. I wish I had a solidus for every “Marie” and “Constantine” who crops up.
Also, because there are few historical novels about Byzantium (compare with how many there are about, say, Tudor England), some readers may be intimidated by the unfamiliar setting and the forbidding Byzantine terminology, although there is a helpful glossary in the back. But it’s well worth sticking with it, even–maybe especially–for readers unfamiliar with the setting, because you will end up learning quite a lot about a fascinating and unjustly neglected period in history.