It’s surprising that people would invest themselves so heavily in a concept, like race, that has been constructed, contrived, and only serves to divide us.

I suspect most people reading this would not be excited to hear Richard Spencer talk about race. Because were he to talk about race, he would talk about the fact that it is a source of pride for him. The fact that he believes it is a source of power. The fact that he believes that his race is beautiful.

When someone makes those claims about whiteness, one can immediately and easily recognize just how retrograde and backward they are. It should then be startling that when one makes the same exact claims about some other race, few will recognize how regressive those ideas are. This is a sign of myopic thinking.

What’s deplorable isn’t the embrace of whiteness, it’s the embrace of race.

Race is not the sort of thing that defines who you are. So when President Obama told a room full of students that they should “take pride in your blackness,” he was telling them to take pride in this ephemeral thing they happen to be, rather than in the things they are actually able to accomplish. That rings as something very hollow. And vapid. And probably something we should shy away from.

I understand that there has been a history of discrimination. And for some people, this has resulted in some shame associated with their skin color. But the appropriate response isn’t to root pride in blackness; it’s to regard race for what it is: an intangible thing that isn’t of any particular consequence. Something we should always be trying to move beyond, not embrace and hold close to us.

Race pride works in two directions. If you embrace race pride, you also have to embrace racial shame. The idea is that we have a common bond or cause simply because we kinda sorta look like each other. People who happen to look like you will do things that might be virtuous, or not, and if they do things that you find problematic, you have to own those as well.

And that concept should sound ridiculous to you, because it is. We don’t need to take pride or shame in things that are unearned, like skin color or sexuality or anything else we might not have any real control over.

It seems odd to announce these professions of pride, and then be so sensitive that you lash out at the universe any time anyone says something that might possibly sound like it could be racist or homophobic or something else. That is weakness.

Most racists hide. They have eggs as their avatar, because everyone who’s against racism has won the war. It’s time to come to terms with that victory and own it. Maybe you didn’t get to march with Martin Luther King; instead, you get to inherit the world that he helped win, one where racism as a function of power has, in any meaningful sense, been defeated.

Which is not to say that racism doesn’t exist, but that it is cowardly and powerless to stop you. The ideas of racism are deplorable and abhorrent and everyone knows it. And everyone knows that the people who don’t are losers.

Literally losers.


Reasonable People Can Keep Disagreeing

There are a few reasons people can reasonably disagree. But these are forms of disagreement that persist longer than they need to.

What constitutes something that can functionally be disagreed upon by reasonable people?

Take someone who holds the stance that no reasonable person can, at this point, think Trump is acceptable as POTUS. By taking it, they paint everyone who disagrees as necessarily “unreasonable people” – a handy trick for dismissing opposing viewpoints with maximum satisfaction and minimum intellectual effort.

The funny thing is that the “acceptability” of a president is a fully subjective judgment. So long as something is not a literal fact, or a logical chain of facts leading to a conclusion, reasonable people can reasonably disagree about it. You may disagree completely with their preferences, but that in and of itself doesn’t make the preferences unreasoned or the people unreasonable.

That someone professes to be open-minded and objective doesn’t seem to matter that often; they seem just as likely to use what reasonable people can disagree about as a sort of weapon against others’ preferences or values. But you won’t change other people’s values by attacking them as irrational.

It makes sense, then, to first try to discern what’s important to someone, in their own terms. If you can agree on, or at least understand, those values, then you can start working through the reasons they have for getting there. You can look for inconsistencies between what they claim to value and their reasons for why something, like supporting Trump as President, is right or wrong in achieving that value. You might find that their reasons are irrational, or you might not. But without doing all this, you’re likely to just be arguing chains of thinking that apply to two different universes, built on different rules.


There is a common failure mode when people adopt true/false, qualitative reasoning in determining what they believe: any evidence you receive either confirms the theory or abolishes it. This is much like the “Go, No-Go for Launch” procedure that NASA uses to decide whether a launch attempt will be made. Every system status check that returns a “Go” lets the launch proceed, and any single “No-Go” aborts it.
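This decision rule amounts to a simple conjunction: a single dissenting check vetoes the whole decision. A minimal sketch, with made-up system names rather than NASA’s actual poll:

```python
# "Go, No-Go" reasoning as a conjunction: one "No-Go" vetoes the
# launch no matter how many systems report "Go".
# The system names below are illustrative, not NASA's actual poll.
def launch_decision(status_checks):
    """Return True (launch) only if every system reports 'Go'."""
    return all(status == "Go" for status in status_checks.values())

print(launch_decision({"booster": "Go", "guidance": "Go", "range": "Go"}))     # True
print(launch_decision({"booster": "Go", "guidance": "No-Go", "range": "Go"}))  # False
```

Note that under this rule the number of “Go” reports is irrelevant; only the absence of any “No-Go” matters, which is exactly the all-or-nothing character described above.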

This mode of operating is frustrating for flight crews who hope to go to space that day, and for launch teams who hope to send them there. As a general mode of holding beliefs, it can be crippling for those who adopt a theory and then encounter even a single piece of contrary evidence. Not only is abandoning a belief hard, it gets in the way of developing a theory more fully, keeping us from integrating a full spectrum of evidence.

If instead we reason about theories probabilistically, shifting our beliefs up or down in likelihood as we encounter new information, we lose nothing and gain a whole lot.

You could have a theory that the Earth produces a single 8.0-magnitude earthquake per year. Then suddenly one year the Earth produces four. Is your theory wrong? Well, it’s certainly not exact. But very few theories are. And in this alternate mode of probabilistic beliefs, it’s okay if your belief isn’t perfectly without dispute.

If 95% of the time your theory does accurately predict only a single 8.0-magnitude earthquake per year, it would be a foolish mistake to hold up a single piece of contrary evidence as disproof of your theory. You would lose all the utility of making predictions about the world around you.
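One way to make “shifting beliefs up or down” concrete is a Bayesian update. This is a sketch under invented assumptions: it treats the one-quake-per-year theory as a Poisson model with rate 1, pits it against an arbitrary rival with rate 3, and starts from an illustrative 50/50 prior.

```python
import math

def poisson_pmf(k, lam):
    """Probability of observing k events when the expected yearly rate is lam."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

# Two rival theories about yearly 8.0-magnitude quake counts, held
# probabilistically. The rates and the 50/50 prior are illustrative.
prior = {1.0: 0.5, 3.0: 0.5}

def update(prior, observed):
    """Reweight each theory by how well it predicted the observation."""
    weighted = {lam: p * poisson_pmf(observed, lam) for lam, p in prior.items()}
    total = sum(weighted.values())
    return {lam: w / total for lam, w in weighted.items()}

# A surprising year with 4 quakes lowers confidence in the
# one-per-year theory without abolishing it.
posterior = update(prior, 4)
```

The point of the sketch is the shape of the result: the surprising observation shifts weight away from the one-per-year theory, but the theory survives with some probability rather than being aborted outright.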

If you state your belief about earthquakes at 95%, you are also saying that 5% of the time you expect to see some counter-evidence. This actually makes you more accurate, not less. Say you have 20 different beliefs, each held with a healthy 80% certainty, and it turns out, as it should, that 4 of them are wrong. You successfully predicted that you’d get 16 correct and 4 wrong. Which is, in the sense of calibration, a perfect score: you were exactly as accurate as you claimed you’d be.
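The 20-beliefs arithmetic can be checked directly. This sketch computes the expected hit count for a well-calibrated forecaster and runs one illustrative simulation (any single run will only hover around the expectation, which is itself the point):

```python
import random

# Calibration check: hold 20 beliefs at 80% confidence each.
# A well-calibrated forecaster *expects* to be wrong about 4 of them.
n_beliefs, confidence = 20, 0.80
expected_correct = n_beliefs * confidence       # 16.0
expected_wrong = n_beliefs - expected_correct   # 4.0

# Simulate one batch of outcomes; the realized count will hover
# around 16 but need not hit it exactly in any single run.
random.seed(42)
correct = sum(random.random() < confidence for _ in range(n_beliefs))
print(expected_correct, expected_wrong, correct)
```

Finding a handful of wrong beliefs among the 20 is not counter-evidence against your calibration; finding zero wrong beliefs actually would be, since it would mean your 80% was an underestimate.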

Why is it more ideal for launches to operate qualitatively but not most other beliefs? As I mentioned earlier, very few theories are exact. Despite this, it’s overwhelmingly preferable to the flight crew that a prediction of launch success be 100% exact, even if it never can be. Go, No-Go is the best option when lives depend on treating inherently inexact predictions as though they were exact. If any counter-evidence at all comes up in the systems check, then you should abort. The downside of this mode, and why we don’t want to use it for everything, is that you often end up preventing forward progress, aborting a lot of missions. Suddenly anyone can hold up a single piece of what they think is counter-evidence and smugly demand that you explain its accordance with your theory. And if you can’t, then your theory is trash.

This all may be another indication of why public disagreements persist. A probabilistic model of belief can take a few hits of counter-evidence and survive just fine, and an array of conflicting evidence could actually result in agreement.

Risk Preferences

Do you think individuals or groups are more risk averse?

My first instinct is that individuals are more open to taking risks, and groups less so. I made this claim in a previous post here, but didn’t take the time to expound on it. I dug up a few studies suggesting this is the case (1, 2, 3).

Let’s think about what this might mean.

Not every individual can have a high risk preference; that would invite chaos. Everyone having too low a risk preference would lead to stagnation. Some ratio between the two should be ideal. And structures designed to draw good risk-takers into positions of influence might perform with better utility than those that favor a collective’s risk analysis.

If a group of ten people is trying to make a decision with some level of uncertainty, and only two of them are open to taking the riskier path, those two are going to get voted down. This seems like it might introduce problems into institutions that are organized democratically. Progress moves slowly and opportunities are missed, because risks are not favored in a collective setting. It’s probably more complicated than this in the real world, but I believe that’s the gist.
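The ten-person example can be sketched as a toy majority vote; the two-versus-eight preference split is the hypothetical one from the paragraph above, not data from any study.

```python
# Toy model of group risk aversion under majority rule: ten members,
# only two of whom prefer the riskier path.
def majority_approves(preferences):
    """True if more than half the members vote for the risky option."""
    return sum(preferences) > len(preferences) / 2

members = [True, True] + [False] * 8   # True = prefers the risky path
print(majority_approves(members))      # False: the risk-takers are voted down
```

Under this rule the risky option loses whenever risk-takers are a minority, regardless of how good their judgment is, which is the dynamic the paragraph above describes.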

People who take risks and are good at it (meaning they take lots of risks that succeed), rise in hierarchies. The bad ones (meaning they take lots of risks that fail), are weeded out of hierarchies entirely. Or else they stay and the rest of the hierarchy suffers for it.

As good risk-takers rise in hierarchies, they gain the power to push those under them in risky directions. I’m thinking of monarchs, CEOs, and to a lesser degree, representatives in a republic. In studies in which group preference is measured against individual preference, where the same people are surveyed for both their group decisions and their personal decisions, the group setting tends to pull choices in a more risk-averse direction. Even when the group is then broken up to make personal decisions, its influence remains.

This also leads me to think that something like nepotism or lines of succession might be a problem. Someone who assumes rulership without having run the gauntlet to prove they can make good risk-taking choices is more likely to push the group they rule over in a bad direction. What you would hope for in these situations, at least, is some sort of risk-preference heredity, either genetic or learned, in which those close to members at the top of the hierarchy somehow acquire good risk-taking ability.


Public Disagreement

The rationalist might find it incredible that we have such persistent public disagreement about so many things, even very basic statements of fact.

Beliefs arise from information, and people’s beliefs should only differ because each person holds different information. This isn’t in itself such a terrible thing, because you should find belief differences informative. Our different views ought to inform each of us that there’s more going on than we are each individually aware of.

The consequence of people not finding belief differences informative is that the missing information won’t make its way across to the other person. The information becomes siloed, and we won’t revise our beliefs based on our collective information. And thus, new information can’t usefully enter public conversation.

Now let’s throw some human-shaped monkey wrenches into that.

If I take your public expressions of belief and information to be a sincere report of what you know, then knowing this should set us in the right direction toward whatever belief our combined information supports. But if instead I see your expressions as a strategic move in some game, in which you have a preference for an outcome that differs from my own, then I might discount what you say because I suspect you are trying to manipulate me or the situation by falsely reporting what information you have. I might simply doubt what you’re saying because I assume some ulterior motive.

The more charitable view is that people simply question and doubt the rationality of others: that others are mistaken in interpreting information to form beliefs, even if they are sincere.

Both of these situations incur real-world risks. Moving forward, The Universe will award penalties for someone’s report (sincere or not) of their incorrect beliefs. This is one reason why I think these public debates become ossified: groups have more risk-averse preferences than individuals.

If information leads to beliefs, what do beliefs lead to? Well, they lead to actions, which lead to effects. These effects can be functional or social, or sometimes both. Functionally, beliefs help guide us when we choose our actions. Beliefs about how traffic operates on the road help you get to your destination safely. Having the wrong beliefs in this context is quickly self-correcting: mistakenly drive on the wrong side of the road and your beliefs will change quickly, or you are unlikely to reach your destination in one piece.

Our beliefs are social when others notice and react to our beliefs. They help us identify with groups. Having the right beliefs about some religious doctrine has the effect of endearing you to others in that religion, and all the social benefits that provides.

When there is little social monitoring and a strong personal penalty for having the wrong belief you should see functional beliefs dominating.  And if there is a high social interest and negligible personal penalty, we should expect the social role of beliefs to dominate.

I think this adds another layer to why public debates freeze up. Individuals who use their beliefs as social currency benefit from disagreement. If everyone has the same beliefs, you can’t use them to distinguish yourself from the outgroup; they have little utility. This only works, though, when the situation has negligible real-world effects. Try to build a bridge while holding contrarian views of engineering principles and people are going to get hurt. And if it’s your fault people got hurt, then they won’t like you very much.