
The number one issue that humanity faces today is technological growth. If you look under the surface of most political issues, what drives them is the way technology has given us abilities we did not previously have, or information we could not previously have accessed.

What makes this especially powerful is that technology evolves much faster than human beings do. Technology can go through many generations in the course of one human’s lifetime.

This is important, because in evolutionary biology, new traits usually emerge over the course of generations. (This is why biologists usually study traits in organisms with short generations. You can observe multiple generations of flies over the course of a year.)

But since technology moves faster than humans can evolve new traits, it means that we are always playing from behind. When I was born, cell phones were huge, unwieldy things used by rich people, sales reps, and techies. Now they’re everywhere, and are more powerful than the top-of-the-line supercomputers of three decades ago.

For the last 200 years, technological progress has been increasing at an incredible rate. And humans have often suffered by being slow to adapt. This is illustrated most dramatically by wars: in World War I, the officers had all been trained in tactics derived from the Napoleonic era. This resulted in huge massacres, as cavalry and infantry charges–which would have worked against men with inaccurate single-shot rifles–were torn to pieces by machine guns. Technology had made a huge leap in the century between the battle of Waterloo and the battle of Artois. And that was as nothing compared to the leap it would make in the next thirty years, with the advent of the Atomic Bomb.

The thing is, while it may seem to us like a long time since the days of cavalry charges and flintlock rifles, in terms of human history, that’s a drop in the bucket. Homo sapiens first emerged roughly 200,000 years ago. On that scale, Waterloo might as well be yesterday, and the Roman Empire was just last week.

For the vast majority of our existence, life was pretty much the same: people worked the land and hunted and raised families. Periodically, they organized into larger tribes to make war or trade. If you took a citizen of Athens from, say, 450 BCE and transported him to Renaissance Italy—nearly 2000 years later–he’d still have a pretty good handle on how things worked once he got past the language barrier. Whereas if you transported somebody from 1890s London to the present day—a mere 128 years!—he’d have no idea what was happening.

When you read history, it’s easy to be struck by how human nature seems unchanged over the centuries. We can recognize things in the texts of the earliest historians and philosophers that seem analogous to modern phenomena. While it may seem like this means human nature is eternal, what it really signifies is that it hasn’t been that long, in biological terms, since what we think of as “ancient times”.

It’s commonplace to observe that technology changes, but human nature remains the same. But observing it is one thing; grasping the full implications is another.

For instance, there is a major “culture war” debate in the U.S. over the issue of transgender rights. Those who favor transgender rights view their opponents as closed-minded bigots. Those opposed see the others as radicals bent on destroying the social order. What both sides ignore is the fact that until very recently, transgender people had no medical treatment available to them. For hundreds of thousands of years, transgender people had no option but to live in the body they were born with. And the rest of the population scarcely even knew they existed; and so built whole societies premised on two rigid gender roles. It wasn’t until very recent breakthroughs in medical technology that any other option became viable.

Once you view it in these terms, you realize it isn’t a liberal plot to destabilize society, but simply a group of people able to access treatment that previously did not exist. Likewise, you also realize the reason so many people are responding with fear and suspicion is that history and tradition provide no guidelines for how to deal with the issue. It simply wasn’t possible in the past.

A number of social conflicts, I suspect, are in fact the result of people being optimized for a very different world than the one we actually live in. Ancient prohibitions against homosexuality, sodomy, and other non-reproductive sexual behavior made some sense in the context of their time—in the past, when mortality rates were high, people needed everyone who was physically capable of reproducing to do so, personal feelings notwithstanding. It was about survival of the group, not any one individual.

Nowadays humanity is threatened more by overpopulation than by extinction—but we’re still adapted to the world of thousands of years ago. That’s just one example. I think people in the developed world still have a slightly irrational fear of famine, simply because we evolved over millennia in which food was, in fact, extremely scarce. (This is why philosophies like the so-called “abundance mentality” seem so counter-intuitive. In the past, it would’ve been suicide to assume there were enough resources for everybody.)

Instinct is a powerful thing, and incredibly dangerous when it gets outdated. To borrow an example from Paul Graham: because human beings only recently gained the power of flight, it’s easy for our senses to be fooled when flying in bad visibility.

Of course, this is something where we use technology to make up for our own shortcomings. A human being would have no idea how to fly a plane if not for instruments that correctly show the position of the aircraft. And this leads to another obvious point about technological evolution—it is, in many ways, nothing short of miraculous for humans. It allows us to accomplish things our ancestors could never have imagined. Whatever bad side effects it has, no one could ever rationally argue that we’d be better off getting rid of all of it and returning to primitive life.

The saving grace is that technology has been designed by humans and for humans, and so generally is compatible with the needs of humans. The things that conflict with human needs aren’t usually a direct result of this, but rather side-effects the designers never thought of.

But side-effects, almost by definition, are insidious. Any obvious, seriously harmful side-effect gets fixed early on. The ones that don’t get fixed usually fall into one or more of the following categories:

  • Not obvious
  • Don’t seem harmful at first
  • Can’t be fixed without destroying the benefit

The designers of automobiles probably never thought the exhaust would cause pollution; even if they had, they probably wouldn’t have realized that cars would be widely used enough for it to matter. Marie and Pierre Curie had no idea the new element they had discovered was dangerous. It seemed like just a useful luminous substance. And pretty much every communications technology in history, from the printing press on, has the potential to spread pernicious lies and propaganda just as much as news and useful information. But no one can figure out a way to remove the bad information without also getting rid of the good—the only option is censorship, which can pose a danger in its own right.

I’ll say it again for emphasis: technology is evolving faster than humans. As a result, our instincts will often lie to us when it comes to dealing with technology. It’s the same way modern junk food is engineered to please our taste buds while poisoning our bodies—it’s designed to set off all the right sensors that tell us “get more of this”.

The rise of nationalism throughout the world in the last decade has gone hand-in-hand with the rise of social media. It’s not a coincidence. Social media plays to an old instinct that takes human society back to its most basic state: the tribe, and the desire to win approval from that tribe. But in the past, when we were in tribes, we didn’t know what the rival tribes or nation-states were doing—they were in far-off lands, and we rarely encountered them. But now, we always know what they are doing—they are just a click away. And because they are a different tribe, our instincts tell us to fear them, as our ancestors feared invaders from distant places.

What can we do about this? We can’t get rid of technology; nor would we want to. And I don’t think it’s a good idea to make it into a political question. Politicians want easy, non-nuanced issues, where they can cast themselves as leaders of a huge, virtuous majority against a tiny, vaguely-defined band of evildoers. That would be a terrible thing to happen with this issue. As we’ve already seen with the social issues I mentioned earlier, politicians tend to cast these things as moral questions rather than questions of technological change.

We’re going to have to deal with this one on our own. But how? After all, technology brings huge benefits. How can we keep getting those while minimizing the side effects? We don’t want to completely ignore our instincts—not all of them are outdated, after all—but we can’t always trust them, either.

The best advice I can give is to always be on the lookout for what side-effects technology produces in your own life. Always ask yourself what it’s causing you to do differently, and why. Then you’ll start to look for the same in the wider world. We know human nature doesn’t change that much; so when you see or read about a large number of people behaving in an unusual way, or a new cultural phenomenon, there’s a decent chance that it’s in response to some new technology.

It’s easy to look at the dangers of technology and decide you want to opt out, throw it all away, and return to the simple life. This is probably healthy in small doses, but it’s impractical on a large scale or for an entire lifetime. What I’m advising is cultivating an attitude of extreme adaptability, where you are so flexible that you can both use new technology and see the potential problems with it coming before they hit you. Listen to your instincts, but know when you need to disregard them. Remember, your instincts are optimized to give you the best chance at survival in a world of agrarian societies and/or tribes of hunter-gatherers. And they are damn good at it; but their mileage may vary in a world of computers, nanomachines, and space travel.

Most histories of Napoleon’s downfall begin with his disastrous invasion of Russia, and at first glance, this seems appropriate. Napoleon suffered huge losses, failed to gain much of anything, and never won a campaign again after the invasion. It seems like the obvious point where his fortunes turned for the worse.

But the truth is, Napoleon’s downfall started much earlier. And it wasn’t due to any “nearest-run-thing-you-ever-saw” kind of bad luck that happens in battle, either. It was due to the fact that Napoleon didn’t understand economics nearly as well as warfare.

In 1806, the British Empire began a naval blockade against France. In retaliation, Napoleon–who at this point controlled most of continental Europe–enacted an embargo against trade with Britain, forbidding all French-controlled nations from importing British goods.

By all accounts, it didn’t work. Even the Empress Josephine herself purchased smuggled British products.¹ And Britain simply made up the losses in revenue from Europe in other parts of the world.

Finally, it was in an attempt to impose his ban on British goods that Napoleon invaded Russia to begin with! If he hadn’t been trying to enforce the embargo, he would never have had to make such a risky move at all.

At the time, France had a very strong military tradition. Nowadays we tend to stereotype Germany as the most militaristic European nation, but German militarism is heavily rooted in reforms introduced in Prussia following their losses to Napoleon. So in the early 19th century, it was French militarism vs. British capitalism.

Napoleon was a great military strategist and leader, but he seems to have been pretty ignorant when it came to economics and trade. Napoleon fell into the error of regulators everywhere, in that he assumed he could end demand for goods by making them illegal. In fact, all he did was create a lucrative black market for the British and punish his own people simultaneously.

It would have been different if Napoleon had been able to defeat the British Navy. Then he maybe could have enforced the embargo more effectively. But then, if he could defeat the British Navy, the whole problem of Britain would have been solved anyway.

Napoleon was seeing everything in military terms–that was what he was trained to do, after all. British policy was designed more in economic terms, and the military (mainly the Navy) was just a tool used by Britain to secure their material wealth. The results of the differing philosophies speak for themselves: Napoleon got to be in a lot of famous battles, sure; but eventually lost his Empire and died alone on St. Helena. Britain became the dominant superpower in the world for the next century.

Napoleon should have been patient. Yes, the British were constantly financing uprisings against him, but they weren’t working out very well, and they couldn’t keep it up forever.

There are a couple lessons here. First, you can’t ignore the laws of economics, even if you are the greatest military strategist of your time. And second, though it may be more dramatic to depict Napoleon’s downfall with a retreat from a burning Moscow or a failed charge at Waterloo, he sealed his own fate much earlier with a serious error of his own design.

Study economic policy, or this could be you. (“Fire of Moscow” by Viktor Mazurovsky. Image via Wikipedia)

It’s easy to point to one battle or one bit of bad luck as being “Where It All Went Wrong”, but oftentimes, such events are really just the culmination of a less dramatic, more systematic bad decision made much earlier.

So instead of saying “So-and-so met their Waterloo when…”, look instead for when So-and-so made their Continental System.

UPDATE 5/22/2018: See Patrick Prescott’s post on this subject for more info–he has a lot more expertise on this than I do.

CITATION

  1. See Napoleon: A Life, by Andrew Roberts. p. 429

Most fiction is treated as entertainment and nothing more. You watch a movie for two hours, maybe talk about it a little with your friends afterward, and that’s it. There are some works here and there that are so dazzling they make a more lasting impression on you. Really spectacular special effects in a movie, or a particularly good line of dialogue, or a moving character death in a novel can do this.

This is as much of an impression as most fiction makes upon its audience. But there is another level on which a story can function. It is the most powerful, and also the hardest to achieve. That is the type of story that actually makes the audience look at the world differently, and act differently as a result.

This is, I think, pretty rare. There may be many stories trying to achieve it, but only a few succeed. And even those that do succeed probably only do so for a small percentage of their total audience.1

Note that when I say “act differently”, I’m not referring to the people who saw Star Wars or Harry Potter and decided to start attending fan conventions in costume, or to name their children “Anakin” or “Hermione”, or to have themed weddings based on the stories. That’s fandom, and can happen with anything.

What I’m talking about is general knowledge that you can apply to a wide variety of situations. And it has to be something that wasn’t obvious or easy, at least not for you. Lots of stories try to have some overarching theme on the order of “You can do anything if you believe in yourself”. Which may be true, but is so obvious most audiences probably have heard it already.

Naturally, the idea for this post began when I asked myself, “What works of fiction changed how I act?” This is the list I came up with. Long-time readers will probably not be surprised by most of the entries:

  • Star Wars: Knights of the Old Republic II. (In a nutshell, the big takeaway is that every action has consequences, often ones we don’t foresee. So choose wisely and think about how your actions will influence others.)
  • Jane Got a Gun. (The lesson here is that you should never assume you know the whole story. You should listen to what other people have to say, even if you think you know better.)
  • Nineteen Eighty-Four by George Orwell. (This one is pretty well known, but for me the lesson is that people try to seize power not only by force, but by controlling the thoughts of others. You have to resist them.)
  • Eating Bull by Carrie Rubin. (The point here is that what people eat is driven by a number of personal, societal and economic factors. Your diet is a more complicated business than you might realize.)

KotOR and Jane changed how I approach day-to-day interactions with people. Nineteen Eighty-Four changed how I read political news and think about government. And Eating Bull changed how I eat.

Obviously, this isn’t an exhaustive list of fiction I consider “good”, though it is a sub-set of it.2 In fact, I was shocked at how short the list is, given how many works of fiction I enjoy in different genres and media.

I am a big fan of weird fiction, but I can’t say I did anything different after reading Lovecraft et al. (Other than trying to write weird fiction myself, I guess.) I love the movies Lawrence of Arabia and Chinatown, but they didn’t change how I approach the world. And the works of Gilbert and Sullivan are also absent from this list, even though it was from a G&S critic, Gayden Wren, that I first learned how to analyze fiction in terms of “levels” of storytelling.

Now, it’s probably true that the stories I listed above weren’t the only way I could have learned these lessons. Maybe the reason I needed fiction to learn them at all is that I’m an especially unobservant person, or else I would have figured them out myself from observing the real world.3

But if so, that speaks to the power of fiction: it can teach people things they would otherwise never have learned.

NOTES

  1. To a degree, it’s a personal thing. The unique circumstances under which somebody sees a film, plays a game, or reads a book probably play just as much of a part as the work itself.
  2. It’s important to realize that a story can also be pretty bad, from a technical perspective, but still change how people see the world. Many people seem to get life-altering epiphanies from reading Ayn Rand’s novels, but they still have many flaws as works of drama. This raises an important point, which is that some people “cheat” and try to tell a story about big, powerful themes without first having a solidly-constructed plot and characters. If you do this, you usually just end up making something incoherent and pretentious.
  3. I guess this is the central difference between fiction and non-fiction. Fiction is entertainment, and it’s a bonus if you learn something from it. Whereas every work of non-fiction should teach you something new, or it’s a waste of time.

Many moons ago, when I was in college, I had to take what they called a “writing course”, which was a class designed specifically to teach writing, but about subjects in our chosen major. (Mine was Econ.) I think the point was to prevent a bunch of mathematics geniuses from taking over the field with equations and graphs strung together by incoherent babble.

It doesn’t seem to have worked.

Anyway, the section I was in was unpopular, because the professor assigned not one, not two, not three, but four books. Now, they were all short books, and one of them (The Ghost Map) actually became one of my favorites. But that’s not the one I want to talk about here. I want to talk about the first one we had to read: The Doctors’ Plague.

The book is about Ignaz Semmelweis, a Hungarian doctor who, in the 1840s, tried to reduce the so-called “childbed fever” then prevalent in the hospital where he worked. Germ theory was not widely understood at the time, and Semmelweis’s radical proposal was that doctors and nurses who treated infants and mothers should wash their hands.

This sounds absurdly obvious to us modern readers, but at the time it was heretical, and indeed, Semmelweis wasn’t taken seriously by the medical establishment. Whether due to his difficult temper, some unknown mental disorder, or possibly a language barrier, Semmelweis failed to prevail upon the medical community to adopt hand-washing as a regular practice. He died in an insane asylum, and his work was not recognized until long after his death.

Naturally, we Econ students were all puzzled by this. (Those of us that read it, that is. I suspect a quarter of the class just looked up the book’s synopsis online, and another quarter didn’t even do that.) What on God’s Green Earth does this have to do with Supply and Demand?

After the week or whatever our allotted time to read the book was, the professor started the class by giving his summary of the book–I assume for the benefit of the ones who didn’t read it. He finished up by raising the question we were asking ourselves: why did he assign this?

The point of the book, he said, was that Semmelweis couldn’t communicate his ideas to his colleagues. “So,” he concluded, “You have to learn to write well! It doesn’t matter if you discover something great if no one can understand you.”

I think he intended this as a carpe diem moment, but most of the class felt like they’d just been told the world’s longest shaggy dog story. But he was right; you do have to be able to write well, no matter how good your underlying point is.

I’m not even sure if that was really the main lesson of the Semmelweis story, but nevertheless, it’s true. And regardless of whether writing well has anything to do with Semmelweis or not, the professor created a helpful mnemonic: writing well is as important as good hygiene in a hospital.


Most days, it’s a real struggle for me to get started on writing even a paragraph in one of my stories. Once in a great while, I’ll be struck by some inspiration and then it’s just a matter of getting the words down as fast as I can, but that’s rare. The more normal case is something like this:

I need to write something where X happens.

[Write a word or two]

Huh, I wonder what’s going on in the news.

[Half hour later, force myself to write another sentence or two]

Are there any good videos on YouTube?

I have to consciously force myself to stay on task and write something down. If I manage to do that, most of the time I hate what I’m writing up until I finish, at which point it starts to seem possibly decent. But the whole time I’m doing it, I feel like I’m doing lousy work, and moreover, it takes all my willpower to even do that.

Why is this? Writing is supposed to be what I like doing. No one is forcing me to do it—it’s what I want to do.  But then why am I strongly tempted to avoid doing it, like it’s a job or something?

At first, I thought maybe I was just a lazy bum. But I follow lots of hard-working writers on Twitter, and they frequently report this same problem. I even did a poll of my followers, and while the sample was small, 100% reported that they procrastinated.

So, it’s not just me being lazy. Other writers face this problem too.

The simple and obvious explanation is that writing is active. You have to consciously do something to make it happen. Whereas reading the news or watching cat videos is passive—you just find your way to the site and put your mind on cruise control.

But this doesn’t totally explain it. One of the ways I procrastinate is by playing video games. And that’s not passive; I still have to press buttons and make decisions to get the outcome I want in the game.  Yet it’s far easier for me to play a game of FTL or computer chess than it is to write. I don’t have to will myself to play a game.

My next-door neighbor has had all kinds of hobbies over the years I’ve known him, from shooting guns to building model airplanes to mixing drinks to, yes, playing video games. And he doesn’t seem to need a huge amount of willpower to make himself work at any of his hobbies. Why is my hobby different?

Part of the problem is that I’ll write something down and then think, “Well, that’s not any good”. This feels unsatisfying. And at some level, I think procrastination is a defense mechanism. Skimming the sports headlines may not yield much satisfaction, but at least it won’t be as disappointing as writing something imperfect.

But why should that be disappointing? After all, no one else is going to judge me by the first draft. No one else will even know it existed unless I show it to them. So why am I bothered if it’s not right the first time? I don’t get discouraged if I don’t win a video game right away. On the contrary, losing a game just makes me want to try again.

Writing, unlike other activities, is more closely associated with having an audience. After all, if you’re just writing for yourself, why bother writing? You know the story already—the only reason to write it down is to communicate it to others.

That’s the heart of the difference: When I play a video game or exercise or any of the other things I do for fun, my only audience is myself. If I’m satisfied with my performance, that’s all I need.

We are trained very early on that writing is different. Writing is what you do when you want to tell other people something. As a result, when you write, you are subconsciously trying to please other people.

Ta-da! This explains the mystery of why writers procrastinate. Procrastination is something you do when you are assigned a task by other people, and writing feels like that because that’s how we’re trained to regard it. It’s the same reason we all procrastinated when our teachers assigned us to write a paper on such-and-such-thing-no-one-cares-about.

Some of the most common advice I’ve seen from successful authors is stuff like “Write for yourself,” “Ignore your inner critic on the first draft” and perhaps the most common, “Lose your fear of writing”.*

This advice always puzzled me. Of course I was writing for myself! Who the hell else would I be writing these weird stories for? And my inner critic? Who’s that? As far as I knew, I didn’t have one. The fear thing seemed the most sensible, although for me, the fear wasn’t so much of writing as it was of publishing.

But now I see what all those famous writers were saying: you think you’re writing for yourself, but you aren’t really. In your unconscious mind, you are still trying to figure out what the readers are going to think of what you wrote. It’s a deeply-rooted habit, probably one that evolution instilled in us—the societies where people could clearly communicate their ideas to one another were the ones that flourished.

I’m not saying you shouldn’t write so that other people can understand you. But the point is, that has to come later. First, you have to treat writing as a personal challenge between you and the part of your mind that wants to stop you from doing it. It’s like working out: you know it’s good for you, and you know you will feel great afterward, but you have to overcome the natural instinct that tells you it’s easier not to do it.

The precise way to do this can vary from person to person. You’ll discover the method that works best for you as you go along.

One exercise that I think can help teach how not to write for an audience is to just try writing stream-of-consciousness. For this post, I deliberately tried an experiment where I turned off my sense-making filter and just spewed forth whatever came to mind. This is what resulted:

Grey window skies empty noises and duahgter nothing al dhpauiw hope thjat move listen coffee  righ fjor wdesk need time hope sk

Sitting on a cold day that is grey and deporessing why am I doing this write exercise imagine plains vision skies weird black nebulous

This seems like incoherent babble, but it’s really not all that random. For context: I was sitting at my desk by a window on a cold grey day, drinking coffee. I could hear people outside talking and someone said something about a daughter.

For the second paragraph, the other people shut up, and I started to let my imagination roam, which led to visions of Lovecraftian weird cosmic horror, because that’s my favorite genre, or at least the one I’m most familiar with.

As sloppy and gibberish-filled as that is, you can see my thought process even through all the errors and downright nonsense. Which brings me to my point: as in many other fields, “true randomness” is actually pretty hard to achieve in writing. Your brain will work very hard to force you to make sense. Which is helpful in many other ways, but the problem is that our brains have become so good at it that they will try to prevent us from writing anything less than the perfect sentence on the first try. That part of the brain would much rather procrastinate than risk writing something nonsensical.

This is what all those famous writers mean when they say “Write for yourself” or “Don’t worry about the audience” or “Ignore the inner-critic.” It’s all true, but it’s not specific enough, because when you are tempted to put off writing and procrastinate instead, you don’t realize you’re writing for someone else, or that it’s your inner-critic, or your fear of the audience. It feels like you’re just trying to write something that makes sense, and for some horrible reason, you can’t.

That’s because it doesn’t make perfect sense, and your brain hates that. But it’s okay. You can fix it later. Editors and beta readers will make sure of that.

So my advice is: don’t worry about making sense. In fact, I’ll go even further: actively try to avoid making sense on the first draft. Just put down the most basic, sub-literate version of what you want to convey. You’d be surprised how hard it is to not make sense—your unconscious mind will keep you at least within saluting distance of it most of the time. After that, you can just iterate until your visceral idea has been refined into something your readers can understand.

FOOTNOTE

* As Phillip McCollum has observed, fear can also be extremely useful for writers. But that’s fear of other things, not writing itself.

In the last year and a half, two things happened that made me understand calories better. The first was that I started doing cardio workouts and monitoring the calorie counts on the machines.

A half-hour of jogging burns about 300 calories. (The machine estimates a bit more, but I’ve heard these things tend to add about 15%-20% over the true amount.) Then with a bit of time on a machine called “Jacob’s Ladder”, I can usually add another 100.

On my best day of cardio ever, I got to 500 calories. I was exhausted and sweaty, but it still felt good. 500 calories! I thought that was pretty awesome.

The second thing that happened was that I started following author Carrie Rubin on Twitter. She frequently discusses health/nutrition issues, and specifically menu-labeling. I never thought about it until I read what she has written, and after that I started paying attention to calorie counts on restaurant menus and food labels.

What I saw was horrifying. There’s no other word for it. For example, the typical plain bagel with cream cheese at most restaurants seems to be about 450 calories.

Before I started tracking calories in my workouts, I had no frame of reference to tell me whether that was good or bad. But now, I can roughly translate the number on the menu into how hard I have to exercise to burn that many calories. And the results aren’t pretty: I have to do my maximum cardio workout just to negate the calories from one bagel.

Once you see things in these terms, you take a whole different attitude towards food. When you see a delicious thing that contains thousands of calories, you don’t think: “Yum! I want that.” You think: “My god, I’m already tired from all the running that’s going to require.”
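If you want to make that translation concrete, here is a minimal back-of-the-envelope sketch in Python. It uses only the rough numbers from this post (about 300 calories per half-hour of jogging, machines overstating the burn by roughly 15%-20%, and a 450-calorie bagel); the exact figures vary by person and machine, so treat it as an illustration rather than a formula.

# Rough calorie-to-effort translation using the numbers mentioned above.
# These constants are assumptions taken from this post, not measured facts.
JOG_CALORIES_PER_MINUTE = 300 / 30   # ~300 calories per half-hour of jogging
MACHINE_OVERSTATEMENT = 0.175        # assume readouts run ~15-20% high (midpoint)

def true_burn(machine_calories):
    """Discount a cardio machine's readout by the assumed overstatement."""
    return machine_calories / (1 + MACHINE_OVERSTATEMENT)

def jogging_minutes(menu_calories):
    """Translate a menu item's calories into minutes of jogging to burn them off."""
    return menu_calories / JOG_CALORIES_PER_MINUTE

print(round(true_burn(500)))        # prints 426: a "500-calorie" readout overstates the real burn
print(round(jogging_minutes(450)))  # prints 45: a 450-calorie bagel costs about 45 minutes of jogging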

Many food sellers are, naturally, reluctant to do menu-labeling, precisely because they know that people will see those calorie counts and change their purchase decisions accordingly. The good news is that they are–or at least, will be–required to do so.

(My fear is that restaurants will raise prices to make up for it. This leads to an even bigger problem: the fact that healthy food is also more expensive. It already seems like only the middle class and above can afford to eat healthy, and the poor are stuck eating junk food because it’s cheaper.)

But menu-labeling is only half of the battle. The other half is for the consumer to be able to translate the calorie counts on those menus into something meaningful—specifically, the amount of effort it costs to burn those calories later on.

What he said:

“I mean, had Andrew Jackson been a little later, you wouldn’t have had the Civil War. He was a very tough person, but he had a big heart, and he was really angry that he saw what was happening with regard to the Civil War. He said, “There’s no reason for this.” People don’t realize, you know, the Civil War, you think about it, why?”

Like so many things Trump says, this makes no sense.  But I think I know what he meant.

I think he is alluding to the Nullification Crisis–a conflict between the Federal Government and South Carolina during Jackson’s presidency.  The stated reason for the crisis was that South Carolina claimed they didn’t have to abide by Federal tariff laws.  The real motives were a bit deeper, and are an obvious prelude to some of the issues that sparked the Civil War.

Jackson himself wrote: “the tariff was only a pretext, and disunion and southern confederacy the real object.”  It was sort of a trial run for the South, which would later use similar states’ rights-style arguments as a reason to preserve slavery, ultimately leading them into conflict with the North.

Trump, of course, knows none of that.  But Stephen Bannon, an admirer of Andrew Jackson, probably does know it, and Trump vaguely remembered him saying something about it once.  Of course, he couldn’t remember specifics, like that it was about the issue of Federal vs. State power, or that it led to Southern states claiming they had a right to preserve slavery. He just remembered “Andrew Jackson” and “something that led to the Civil War”.

(I don’t know this for sure, but I suspect Bannon is one of those guys who argues that the Civil War wasn’t about slavery, but was instead about “states’ rights”.)

The end result is the totally rambling and nonsensical quote above. But I think on this one, it’s pretty easy to trace Trump’s incoherent babble back to the primordial Bannon-stew that spawned it.

These are two errors people make in all types of organizations.  They seem to be complete opposites, but in fact they stem from the same failure in logic.

“The Competition Is Doing It”: People in business, sports, politics etc. will often say this to justify doing something.  “We need to spend the big bucks on this.” “Why?” “Because the competitors spent big bucks on it–we don’t want to be left behind.”

The problem is, this makes you susceptible to fads and fashions.  If the other guys are doing it and it’s actually a bad idea, then you are copying their mistakes. It’s an advanced form of peer-pressure. People who don’t know what they are doing will just copy other people on the assumption they do.

This doesn’t mean you shouldn’t see what the competition is doing–of course you should–but rather that the fact that they are doing something is not in itself a reason to copy them.  Only if it’s working for them is it a reason to copy them.

Of course, people sometimes make the complete opposite mistake…

“Not Invented Here Syndrome”: This is where people are too concerned about keeping their own insular culture, and refuse to adopt new ideas. A variant is “we’ve always done it that way” as a justification for something.  People are too afraid to try something new and justify it by saying it’s not “who we are” or “how we do it”.

Now, on the surface, these errors are in complete opposite directions.  One is about taking ideas from the outside, the other is about refusing to do so.  But the common theme in both is that people are unwilling to do something no one else is doing. They are afraid of the risks involved with trying something no one else has tried.

So, how to avoid making either of these errors?  It seems like a delicate balancing act, where if you try too hard to avoid one, you end up making the other one.

The answer is to focus on what actually works. That way, when someone says, “The competitors are doing it”, you can say, “And is it working for them?” And when someone says, “We’ve always done it that way”, you can say, “And has it worked for us?”

The truth is, many screw-ups occur because someone was afraid to do the thing that they knew would work, either because no one else was doing it, or because they themselves had never done it.


Pilate therefore said unto him, Art thou a king then? Jesus answered, Thou sayest that I am a king. To this end was I born, and for this cause came I into the world, that I should bear witness unto the truth. Every one that is of the truth heareth my voice.

Pilate saith unto him, What is truth?

–John 18:37-38, King James Version

After a lecture on cosmology and the structure of the solar system, William James was accosted by a little old lady.
“Your theory that the sun is the centre of the solar system, and the earth is a ball which rotates around it has a very convincing ring to it, Mr. James, but it’s wrong. I’ve got a better theory,” said the little old lady.
“And what is that, madam?” Inquired James politely.
“That we live on a crust of earth which is on the back of a giant turtle,”
Not wishing to demolish this absurd little theory by bringing to bear the masses of scientific evidence he had at his command, James decided to gently dissuade his opponent by making her see some of the inadequacies of her position.
“If your theory is correct, madam,” he asked, “what does this turtle stand on?”
“You’re a very clever man, Mr. James, and that’s a very good question,” replied the little old lady, “but I have an answer to it. And it is this: The first turtle stands on the back of a second, far larger, turtle, who stands directly under him.”
“But what does this second turtle stand on?” persisted James patiently.
To this the little old lady crowed triumphantly. “It’s no use, Mr. James – it’s turtles all the way down.”

–J.R. Ross, Constraints on Variables in Syntax, 1967, via Wikipedia

Everything sticks until it goes away / And the truth is we don’t know anything.

–They Might Be Giants, Ana Ng.

I got into a debate the other day with a Trump supporter. Our disagreement was originally whether or not Russia had attempted to influence the U.S. Election by hacking into Democratic Party files and releasing them via Wikileaks.

My position was that the Russians did it. As evidence, I cited the fact that they had motive, opportunity, ability, and that the U.S. Intelligence agencies have now said that the Russians did exactly this.

My opponent conceded that the Russians did have motive and opportunity, but argued that many other nations did as well.  Moreover, he argued, there was no evidence the Russians had done it, and no one at the CIA had said the Russians did it. That was propaganda from the liberals to delegitimize Trump.

“What about the Director of the CIA saying as much?” I asked.

“Made-up story,” he countered. “Fake news.”

According to my opponent, this is a typical strategy used by Democrats to undercut Republicans who win Presidential elections.  He claims that they have done similar things in the past–for example, they told everyone that Al Gore won the popular vote in 2000.

“Al Gore did win the popular vote in 2000,” I responded.

He shook his head.  “No–liberal propaganda.”

“You can look up the vote count online,” I persisted.

He was dismissive. “The government is run by liberals–they lie about the votes.”

It quickly became clear that there was no way we could ever conclude this argument.  Both of us had to invoke authorities the other considered unreliable. If I referred him to the National Archives count of the votes, he deemed it liberal propaganda. Similarly, if he referred me to Breitbart or Rush Limbaugh supposedly refuting the published vote tallies, I would deem that conservative propaganda.

The only way it could possibly be resolved would be if the two of us were able to personally count all the ballots ourselves. And even then it wouldn’t work–if it came out against him, my opponent would no doubt insist that liberals had secretly removed some ballots before the counting.

And when you get right down to it, I can’t absolutely prove that’s false. I can make all sorts of educated guesses, assert things with 99.99% confidence, but I technically can’t prove it beyond all doubt.

If you push it far enough, no one truly knows much of anything with “absolute metaphysical certitude”, as John McLaughlin would say.  People are just proceeding based on logical assumptions. We don’t know for absolute certain that aliens didn’t secretly replace all our family and friends with evil body doubles overnight–but it’s fair to feel confident they probably didn’t.

Taken to an extreme, this need for absolute certainty is a hallmark of Obsessive-Compulsive Disorder: people with the disorder experience crippling anxiety and disturbing thoughts because they cannot tolerate uncertainty about something.

You have to either accept some level of uncertainty, or live a miserable life.

At the moment, the entire country suffers from this crippling anxiety, because people have lost faith in all the old institutions–the Press, the Government, and even Religious organizations. (Except on the issue of abortion, where Priests and Preachers still have some influence.)

The real problem is that people have not only lost their faith in old institutions, but put their faith in new, highly dubious ones, that promise to assuage their anxieties. It reminds me of a quote often attributed to G.K. Chesterton:

When a man stops believing in God he doesn’t then believe in nothing, he believes anything.

This may not always be true of single individuals, but I think it is true of populations. Once a whole culture has lost faith in the institutions they used to believe in, they are vulnerable to being taken in by any charismatic con man with a compelling tale.

Scientific reasoning is about analyzing data gathered via scientific methods. It does not allow for appeals to authority.  However, the average person does not have time to rigorously test every single issue that might affect his or her life. This means that it is sometimes necessary to either believe authority or, if the authority is thought to be untrustworthy, find a new one. As my vote-count problem above illustrates, there are some matters that cannot be personally verified by every single person.

But, in a quest for reassurance from authority, people will not seek the authorities who give them the most truthful answer, but rather the most comforting. A man with the supreme confidence to assert “I alone can fix it”, whether he can or not, will inevitably be more popular with people adrift in a world of doubt and uncertainty than one who seems unsure.

There’s a final irony to this: Trump himself talks about the importance of making decisions while uncertain.  In The Art of the Deal, he discusses how many of his deals involve some element of risk-taking.  He says he simply makes decisions by gathering information from as many people with knowledge of the issue as he can, and then going with whatever his gut instincts tell him.

Most executives, military commanders, and other leaders throughout history learned to cope with the idea of uncertainty or risk.  They simply made the best decision they could with the information available. They did not constantly question all information or demand it be replaced with new information that was favorable to them.

(Interestingly, people like Stalin and Hitler would require that their intelligence be favorable to them, and filled most of their officer corps with politicians and “yes-men” who wouldn’t give them the full story.)

An argument strategy like the one I described above works by first devaluing all information, emphasizing the tiny element of uncertainty that exists in everything not witnessed first-hand, and then appealing to charismatic and reassuring authorities who promise to fix all problems.

The best way to counter it is as follows: argue based simply on facts everyone–or at least, the person with whom you are arguing–agrees on, and extrapolate logically from there. As I said, even my bull-headed opponent had to admit the Russians had motive and opportunity for hacking the election.

Above all, when arguing with someone like that, don’t make any appeal to authority, or cite any source, because they will immediately dismiss it.

I had a friendly bet with Barb Knowles on the AFC Championship game.  The loser had to do a post about the winner’s blog.  But, I like her blog “saneteachers” so much that I am going to post about it even though I didn’t lose.

She has a delightful post about the dialect differences she encountered on coming to Ohio Wesleyan University from New York. As she puts it:

They don’t speak New York in Ohio.  They speak Ohio in Ohio. Of course, to me it sounded more like Ahia.

As a lifelong Ohio resident–I grew up about a half-hour from Ohio Wesleyan’s campus–I know what she means.  Non-Ohioans have frequently pointed out that central Ohioans sound like this when listing our home country, city and state:

I’m ‘Merican, from C’lumbus, Ahia.

but then again, they might be from a place a little way east of Columbus: Newark, which is pronounced something close to “Nerk”.

I took a linguistics class in college where we had to do an assignment on regional dialect differences.  For instance, when informally addressing a group of people, Southerners would say “you all” (often rendered as “y’all”) whereas Midwesterners say “you guys”.

That of course was small potatoes next to the big dialect difference: what do you call those glowing insects we get in the summer–fireflies or lightning bugs?

In her post, Barb also mentions the age-old debate of “soda” vs. “pop”.  (Some also call them “soft drinks” or “fizzy drinks”.)  This one I missed, because in my family we called the drinks by their brand name, but I remember the first time I heard someone call it “pop” I was puzzled.

I’d also never heard of the confusion over “bag” and “sack” that she describes–I’ve always heard both used interchangeably. Maybe that’s because regional dialects have declined over time with the prevalence of television. I also never heard “rubber” for “rubber band”.  I shudder to think of the mix-ups that could cause.

I once got into an argument with two of my friends–both of whom are also native Ohioans–about whether you call this a “flathead” or a “slotted” screwdriver. (It’s “slotted”.  Don’t let my evil friends tell you otherwise.) I don’t know if this is a generational or regional thing, but it was interesting.

I’m lucky in that I have relatives all over the country, so I get to hear a lot of different regionalisms.  Even if it does cause some confusion sometimes…

Anyway, you guys–and you all–should check out Barb’s blog.  She’s a terrific writer, and has some very witty observations.  I wouldn’t have made my bet with her if I didn’t think so–and the fun of a bet like this is that everyone wins.