18 October 2017

Passive Voice and Crime

This is a response to a Tweet that showed up in my Twitter stream a couple of days ago. We often speak about crime in the passive voice. And there's a thing about passive voice that will be clear to anyone who has learned Pali or Sanskrit - the passive voice has no subject. Which is why I prefer agent/patient to subject/object when discussing grammar.

In the active voice, a subject does an action to an object. In the passive voice, an action is done by an agent to a patient. But we can and do use the passive voice without an agent, i.e. with just an action and a patient.

In terms of crime we can say things like:
"The woman was raped."
"A man was mugged."
"A child was run over."
"The official was bribed."
"The house was burgled."
This is a pretty common way of talking about crime. What's missing in all of these statements is the agent of the action: "... by a rapist", "... by a mugger", "... by the dangerous driver", "... by the developer", "... by a thief". And so on. Just because the verb is in the passive voice does not mean that the action is not carried out by someone.

Similarly, there is a trend for people who have responsibility to skirt it by saying bullshit phrases like "mistakes were made". In which case we can always ask "By whom were the mistakes made?" Just because they shift to the passive voice does not mean that we are forced to abandon the notion of a grammatical (and real) agent of the action.

Use of the passive voice without an agent is a problem to the extent that it shifts the conversational emphasis onto the grammatical patient, i.e. the victim, the location, or the nature of the crime, while obscuring the agent of the action. Of course crimes happen to us, against our will, so the passive voice is designed for exactly these situations. But if we leave off the perpetrator of the crime, we may create an unfair situation.

Why? Because when we comprehend actions we typically understand them in terms of agents with motivations. And why is this? It's because the archetype for a willed action is our own experience of turning our head to say we've had enough milk. Or our first experience of grabbing something and pulling it closer. The archetypes in other words are our own willed actions.

So if we only mention the patient of the criminal action, then we leave a conceptual gap in which the victim (potentially) becomes the agent: i.e. we blame the victim. Someone has to make the action happen, and if the actual agent is out of the picture, then we look to the only other participant. Crime is emotive, and perhaps no crime more so than rape. If someone was raped, then yes, I think it is vital that we insist that it was an action carried out by someone.

In rape, for example, resistance often makes things worse, because the assailant may become more violent, and it both intensifies and prolongs the experience. Each case is different, but no woman ever wants to be raped, or "asks for it". That much has to be clear. And it ought to be clear in how we talk about it. But it's not justice to hold a whole section of society to blame for the crimes of individuals. This is an important principle of our justice system: collective punishment is not just. I cannot be blamed or punished for crimes I am not involved in committing. I don't accept that just being a man makes me complicit in violence. I've been the victim of more violence than most people I know. Quite a bit of that was from women or girls, by the way.

With that said, I do want to continue to think more about the use of passive voice verbs in the way we speak of crimes generally. For example, with respect to the example of bribery, you may have thought, "hang on, the official who was bribed actively committed a crime by accepting the bribe." Yes, they did. The bribe was accepted by the official. Interestingly, this is still the passive voice, but discussion of bribery always seems to specify an agent. The verb is passive in this sentence, but it is clear who is doing what. So this makes it an interesting one to think about. For every crime there is a criminal.

It's important to specify the agent of the criminal action, especially in the case of groups who tend to be oppressed or disempowered. The story is not, to take a topical example, that some actor was raped, but that an actor was raped by Harvey Weinstein (allegedly). The criminal becomes the focus rather than the victim of the crime. Our justice system is skewed towards punishing perpetrators and so we have to identify them, or we consider that justice has not been served. A more restorative justice system would go about it differently and would require us to focus on the victims. 

A lot of crime verbs are used in both the past active and the past passive voice: "he raped..." and "she was raped...". Similarly with murdered, robbed, etc. We almost always talk about crimes in the past - unless we are in the process of being mugged or whatever.

Of course this is more difficult when the criminal is as yet unknown or as yet not proven guilty (as in Weinstein's case). But the thing about the passive voice is that it cries out to be qualified "by....". Which is why one amusing way to identify a verb in the passive voice is to see if following it with "by zombies" still makes sense. e.g.

  • The man was being pursued [by zombies]. Makes sense, verb is passive. 
  • The man pursued [by zombies] his dog. Doesn't make sense, verb is active. 

The person on Twitter who inspired this little rant was insistent that perpetrators should particularly be identified as men. To me this smacks of the old "all men are rapists" bullshit. A man might have raped a woman, and yes, it is usually a man, but actually the number of men who are rapists is pretty small. I have known many hundreds of men, and I know of one who was accused of rape. I'm not sure that anything is gained by emphasising the gender of criminals. In the case of violence, men are very much more likely to be the victims of violence than women are.

In any case, people sometimes say we should strive to eliminate the passive voice. When I looked at a few news headlines, I did not see much use of the passive voice. Many crime stories do use the active voice and of course are therefore forced to attribute the crime to someone. So maybe the prejudice against the passive voice is having an effect. In which case the original complaint might have overstated the problem.

On the other hand, because we attribute crimes to someone, and are often lazy about the adverb "allegedly", some people are splattered with guilt by association. The "no smoke without fire" fallacy. I have seen no evidence that anyone thinks that Weinstein did not rape, molest, and pester women, though he has yet to be charged by the police, let alone appear in court to be judged. He is being tried in the media and punishment has already commenced.

On the other hand, when the police raided Cliff Richard's house, and tipped off the media so that they could film it, the man's reputation was severely damaged by the allegations. A crime was more or less deliberately attributed to him, when in fact, as far as anyone knows, he is innocent (false accusation is also a crime). Same with Paul Gambaccini, who was caught up in the same furore, but was always innocent. Accusations of child sex-abuse are extremely damaging, especially to someone who makes their living in the public eye. And that is balanced against the damage that sex-offenders cause if left unchecked (as they have been for decades in the entertainment industry).

So, even if we were to switch entirely to using the active voice, the way we talk and think about crime is not a simple matter. We usually have less than perfect knowledge and people are unreliable witnesses (both passively and actively).

There is nothing inherently wrong with the passive voice. Especially when things happen to us against our will, the passive voice is exactly what we need to express that directly. If someone punched me in the face we could look at it in different ways. If I wanted you to empathise with me and perhaps comfort me, I might say "I was punched in the face". The focus is on me. But if I want you to get angry I might say "Phil punched me in the face." Now I am directing your attention to Phil. If you report this to your friend you (unconsciously) make similar determinations, i.e. who is the focus? What emotion am I trying to elicit? Who is to blame? And so on. A good deal of subtlety is available to us by adding extra words, stress, and facial expressions to the mix.

16 October 2017

Technological Frogs

The universe as we know it began 13.7 billion years ago. The earth formed out of the solar disc about 4.5 billion years ago. The first definite evidence of life can be dated to about 3.5 billion years ago. Mammals evolved a bit over 200 million years ago, and primates about 60 million years ago. Modern humans first appear between 300,000 and 200,000 years ago; they left Africa about 100,000 years ago and settled in Europe about 40,000 years ago (having bonked a few Neanderthals along the way).

Electricity was discovered in the 19th Century. The triode amplifier was invented in 1906. TV was invented in 1927. The first electronic computer was built using vacuum tubes or "valves" in 1943. The transistor followed in 1947. Integrated circuits combining multiple transistors were invented soon afterwards, but were not mass-produced until the early 1970s.

The first TV broadcast in New Zealand was in 1960. I was born in 1966. I remember the manual telephone exchange where you told the operator the number you wanted and they manually connected you. I remember the first time I saw colour television (ca. 1972), and the excitement of a second TV channel in 1975. I remember my older brother getting an electronic pocket calculator ca. 1976.

The first personal computer based on ICs was marketed in 1977 (just 40 years ago). You had to assemble the circuit board yourself!

I first saw a personal computer at school in 1980 and learned to program it in BASIC and Assembly Language (though I realised that I didn't really enjoy programming that much).

Computers double in power every 18 months or so (Moore's Law). So my current PC ought to be roughly 17 million times more powerful than those Apple II computers at Northcote College. But with, like, a billion times more RAM and a trillion times more external storage.
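The arithmetic behind that estimate can be checked with a quick sketch. The start year (roughly 1981, the Apple II era) and the 18-month doubling period are my assumptions for the sake of the calculation, not exact dates:

```python
# Back-of-envelope Moore's Law estimate: computing power doubles
# roughly every 18 months. Counting from ~1981 to 2017:
years = 2017 - 1981            # 36 years
doublings = years / 1.5        # one doubling per 1.5 years -> 24.0
factor = 2 ** doublings        # 2^24 = 16,777,216
print(f"{doublings:.0f} doublings, a factor of about {factor:,.0f}")
# -> 24 doublings, a factor of about 16,777,216
```

Twenty-four doublings gives a factor of about 16.8 million, which is where the "roughly 17 million" figure comes from; shift the start year or the doubling period slightly and the answer moves by a factor of two or more either way.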

When I was born, a single-channel black & white TV was the most advanced consumer electronics device I knew. An adult could just about lift one on their own.

Now a computer is my TV, record player, clock, telephone, camera, video recorder, tape recorder, library, teacher, publisher, recording studio, translator, etc. And I can carry it around in my pocket.

I worry a bit that we're like the apocryphal frogs being slowly boiled alive and not noticing until it is too late. And I think it is too late already.

14 October 2017

Deliberative Democracy

"When a sample of citizens is brought together, divided into small groups, and, with the soft prodding of a moderator, made to discuss policy, good things happen. The participants in these discussions end up better informed, with more articulate positions, but also a deeper understanding of other people's points of view." Mercier & Sperber, The Enigma of Reason, pp. 309-10.


Last week on the radio, a BBC presenter interviewed Dr Carol Dweck. She was initially a child psychologist interested in why some kids succeed and why some fail (I'm leaving these undefined on purpose). She identified an important pattern that was predictive and found that it applied to adults as well.

She called the discovery "mindset". And it sounds deceptively simple. If you go at a problem with the mindset that you can learn then you will. It doesn't matter what the problem is, if you believe you'll make progress, then you will.

However, if you start with a fixed mindset that says you can't do it, then you won't learn, you won't make progress.

I sort of naturally have a growth mindset when it comes to certain things. I've taught myself to paint, play music, read Pāḷi and Chinese, and a bunch of other stuff, because it never occurs to me that I can't learn. I get interested and just work away at it. Nothing I've ever done was simple. I was never a natural at music for example. I sang incessantly as a kid, but so badly that my mum sent me to a singing teacher so that at least I would sing in tune (so she tells me). When I started playing the guitar nearly 40 years ago, I had no clue. I struggled with everything. I constantly made mistakes. But I just kept at it. I learned. I got better, slowly. It was hard. After 10 years I played pretty well. After 40 years I'm beginning to really understand the instrument. The learning never stops for me.

Now you may say that I have some kind of talent that perhaps you lack. But the research suggests that talent makes much less difference than we think. Mindset is what makes the difference. It's the approach, one that encompasses failure and is not destroyed by it, which makes the difference.

One of the upshots is that we should focus on process - an insight that keeps popping up. If you praise a kid, focus on what they tried, rather than what they achieved. Keep them excited about the process of learning rather than making praise contingent upon success. Ironically, if we make praise contingent upon success, then kids don't succeed as often. In fact they often give up.

How many times have we heard someone say "I'm no good at maths"? That is a mindset problem, not an inability to do maths. Actually, everyone can learn to do high-school maths - it's just a matter of learning, and of being convinced that learning is fun. If society or our teachers manage to suck the joy out of learning, this is not an indication that we are stupid. Yes?

And actually all along the way we fail. When you start playing the guitar or learning to drive or whatever, you fail every few seconds to start with. At the start, it's almost all failure. But you learn more from a failure than you do from a success; and if you learn then success starts to outweigh failure. If you are focused on *learning* then a failure is no big deal, because you learn more and actually enjoy it more. And with this mindset you succeed more often anyway.

A lot of people come to learn to meditate and the first time their mind wanders they say "I can't meditate" or "it's not for me". This is a fixed mindset. A growth mindset makes the mind wandering a fascinating learning exercise - you first of all realise that your mind simply wanders off without your permission(!), you start to understand why, you start to learn how to focus, and before long you are experiencing the incredible sensations of having a pinpoint focused mind. Then a whole new world can open up in which you use that pinpoint focus to examine your own mind. But only if you have a growth mindset, only if you approach it as something to learn, only if failure at first is not an obstacle to eventual success. Everyone can learn to meditate, with very few exceptions. Everyone would benefit from learning some basic meditation techniques, whether or not they want to take it further.

Learning goes on in a lively mind; it never stops. Every kid starts off with a lively mind. Staying lively has real benefits too. You are less likely to suffer dementia and other brain problems in later life. But you're also more likely to find meaning in what you do, because meaning emerges from being immersed in the process, not in achieving goals. Achieving a goal is a cadence, or punctuation point, in an ongoing process. And it is the process that really satisfies.

Very little else is satisfactory about my life and things have certainly not gotten any easier lately. But I'm still learning, still curious, still willing to take on new ideas and challenges. It's the process of learning that I love. It gets me out of bed each day and literally keeps me alive some days.

02 October 2017

Dunbar and Brain Size and Triratna

One of my colleagues wrote something a little vague about the importance of the number 150 in human society, and since I have a long fascination with this, I thought I would write a brief introduction.

Dunbar and Brain Size

In 1992 Robin Dunbar published a paper in which he compared the average neo-cortex-to-brain-volume ratio in wild primates with the size of their social groups. There was a linear relationship which enabled him to predict that the average human social group would be 150.

Given that many of us live in cities with millions of people, what does this mean? It means that we use the most recently evolved parts of our brain to keep track of relationships - and to imagine how other people see the world, especially how they view their own relationships. This is an essential skill for a social mammal.

For example, all social mammals understand and operate a system of reciprocity. Sharing food, resources, grooming, guard-duty, or mates, etc., creates obligations for other group members. If I share with you, you have a social obligation to share with me. And vice versa. Apes and humans also keep track of obligations between third parties. I may share with you, knowing that you share with Devadatta, and that way come into indirect relationship with Devadatta. Devadatta will probably notice that I share with you, and my reputation with him increases. And so on.

Humans can routinely track these abstractions into 4th and 5th order. Shakespeare could imagine how his audience would feel that Othello would feel about Cassio, after being convinced that Iago thinks that Desdemona loves Cassio; while we also know that Iago is lying. Shakespeare could imagine our tension as the story progresses. What if Iago is found out? What if he is not? This is part of what makes Shakespeare a great story teller.

Keeping track of these social obligations takes brain power. The more of our brain given over to keeping track of such things, the more relationships we can keep track of.

The Magic Number 

Dunbar predicted that on average the maximum number of relations humans could keep track of in this way would be about 150. And it turns out that the average community size in the New Guinea highlands, the units of Roman armies, and the average village size in the Domesday book (and a whole range of other measures) was .... 150.

But 150 is not the whole story. 150 is the size of an intimate community where everyone knows everyone's business. But we are usually involved in both smaller and larger groupings. If 150 is a tribe, then a tribe is usually made up from several clans of about 50 members. Clans comprise several families of about 15 members. Each person has approximately 5 intimates. These groupings may overlap. On the other hand, tribes may be part of larger groupings of 500, 1500, 5000, and so on. The smaller the grouping, the more intimate and detailed the knowledge; and conversely, the larger the grouping, the less intimate and detailed the knowledge. The limits seem to go roughly in multiples of three, starting with 5 as the smallest.
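The layered sizes described above can be put in a quick sketch; the layer values are the ones given in the text, and the point is simply that each layer is roughly three times the size of the one below it:

```python
# Dunbar's layers, from intimates (5) up to the loosest groupings,
# using the sizes listed in the text.
layers = [5, 15, 50, 150, 500, 1500, 5000]

# Scaling ratio between adjacent layers: alternates between 3.0 and
# ~3.33, i.e. "roughly multiples of three".
ratios = [b / a for a, b in zip(layers, layers[1:])]
print([round(r, 2) for r in ratios])  # [3.0, 3.33, 3.0, 3.33, 3.0, 3.33]
```

The exact ratios wobble because the canonical layer sizes are rounded to tidy numbers; the underlying scaling factor reported in the literature is close to 3.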

What we expect is that, in a society of 150 people who live together, relying on each other, each will know all of the others, and who is friends and relations with whom. They will be intimately familiar with trysts and disputes. And they will know who has what status under what circumstances.

In larger groupings there may be people we don't know. Larger tribal groupings may adopt symbols of membership with which to recognise other members. For example, they may hang a strip of white cloth around their necks. They know that anyone who has one of these white strips is a member of the tribe. They can expect to have some basic values and interests in common, and thus are open to each other socially in ways they might not be with complete strangers. It may even be that the tribe mandates certain levels of hospitality. Some cultures require this even for strangers, when travellers are particularly vulnerable (as in the desert).

In larger groupings there are a number of ways of ensuring that everyone gets a say in how things are run. But let's face it, beyond 7 ± 2 people, everyone having a say is unrealistic. This is another magic number (aka Miller's Number) and relates to the capacity of our working memory. Groups bigger than ca. 9 tend to schism into separate conversations, unless formal procedures apply.


With respect to schism, the 150 level is the limit of a sense of knowing everyone in a society living together on a daily basis. Much beyond it, and some of the people are going to start seeming like relative outsiders. We don't know who they are friends with, for example. This may explain why, when humans who are part of a larger, less intimate grouping meet, they often exchange information that establishes *who* they know. It's likely that some above-average connectors know many more people, and across social networks. They are the glue that holds larger groupings together.

There is no absolute requirement to schism at any number. Schisms happen in small groups and large. But primates feel more comfortable in groups where they know the others. Being surrounded by strangers is often quite stressful for a social primate, because they have none of the knowledge they need to relate to everyone. On the other hand, being experts at empathy, primates pick up this info very quickly.

What tends to happen is that we are comfortable being relatively informal members of several larger groups, but prioritise our most intimate relationships and family.

The Order

The Order is complicated because most members of the Order are still enmeshed in other groups, particularly family. Even if there were only 150 of us, we don't live together as one community, relying on each other to survive. It is already a somewhat looser grouping than that, so the fact that it has crossed several thresholds (in total membership) is not a clear cut indicator of anything. Those who were around in the early days do tend to reminisce about how good it felt when you knew everyone. I would expect nothing less. But they all still had friends and family outside the movement too.

The Dunbar Number describes the dynamics in close-knit societies living together. Beyond 150, such communities do tend to split into more manageable groups. However, it doesn't really say anything about the Order, because we are not that kind of society.

Note that the more plugged into other groups we are, the fewer relationships we can track in the Order. And vice versa. Note also that pair-bonding makes no difference to Dunbar's numbers. Primates adopt a wide variety of lifestyles and these are secondary.


Dunbar's original article rapidly became a classic of anthropology and evolutionary psychology (the latter being Dunbar's main subject of interest). His predictions became known as Dunbar Numbers while he was comparatively young (he is still alive and working at Oxford University). If there were a Nobel for evolutionary psychology, he'd have won it for this discovery.

As a final caveat I would insist that Dunbar's numbers are theoretical averages, albeit with considerable empirical support. Individuals will sit on a bell curve. Some will easily cope with 300 relations; some will barely cope with 50. There will always be outliers, but the existence of outliers does not alter the theory or the empirical evidence supporting it.

For further reading on the Dunbar Numbers and other concepts mentioned above, I very highly recommend "Human Evolution" by Robin Dunbar, published by Pelican in 2016. Aimed at a general readership, and highly readable, it nonetheless takes a cutting edge look at human evolution by incorporating Dunbar and his group's research on group sizes and theory of mind. Dunbar explains how we went from being general purpose apes, to highly specialised humans. How we solved the energy gap required by our big brains and big social groups through cooking, dance, laughter, and religion.

05 September 2017


In the new definition, reasoning is the process of finding reasons (justifications, rationalisations, etc.) for decisions made and/or actions taken. First comes the decision, then the reasons. It's always this way around for us, and unless someone enquires, we may not even have reasons for things we do, think, or say. Unconscious processes guide all of our actions, but we are equipped to explain them to others if required. We do this in a post hoc manner: reasons come after the fact and on demand.

Unfortunately, humans have biases in this department. For example, we stop searching when we find any plausible reason; we don't keep searching for the best reason, unless we are arguing with someone who shoots down our reasoning. Reasoning is a group activity, and solo humans don't do it very well.

Even when we don't have strong intuitions about a decision, it is still better to go with our gut. When we stop to reason about a decision, it drives us towards decisions that are easier to justify. But in the long run, such reasoned decisions turn out to be less satisfying.

One of the reasons we do this is to appear rational to our peers. This is very important for humans. We are social, and in the modern world appearing to be rational is an important aspect of group membership. Rational is defined locally, however. What is rational for the Girl Guides is not rational for the Tory Party, or the Hell's Angels, or my family.

Rationality is being able to offer reasons for actions and decisions that one's peer group accept as being rational.

Sometimes when trying to fit into our social group we make decisions that seem less than rational to an outsider. "Would you jump off a cliff if they told you to?" Anyone who has heard this in earnest will know what I mean. As it happens, my paralysing fear of falling kept me from jumping off cliffs, but it was a situation I faced in real life, and yes, had I not been phobic, I would have jumped. I wanted nothing more than to jump off that cliff and be one of the gang. I did other brave things. Just don't ask me to jump off a curb, let alone a cliff. Although I was always fascinated by space, I knew at a very young age that I did not want to be an astronaut for this very reason.

An outsider may see this as irrational. But as human beings, it may be more rational for us to do some mildly irrational things that assure us of group membership because group membership is a long term survival mechanism. We evolved to live in groups.

While making irrational decisions may be suboptimal, losing my social status, let alone being ostracized, is a catastrophe. So there is a delicate balance that we all know. We allow ourselves to be pressured into conforming because instinct tells us that acceptance is more important than rationality. And this is true.

Or it was true 12,000 years ago in our ancestral environment. In that milieu, living as hunter-gatherers, satisfying the expectations of our peers was probably a good rule of thumb for life. More so when we consider that our "peers" included the older, more experienced members of the tribe.

So yes, people succumb to peer pressure. They behave in atrocious ways. But at the time, in their milieu, it may have been the rational thing to do, no matter how ugly it seems to us now. Until you're in the situation, you don't know how you'll react. This is why surveying someone's opinion of how they would react is meaningless. What we do in crucial situations cannot be predicted, especially by ourselves. Asking people about the trolley problem (where you can rescue 5 people by killing 1) for example is meaningless. No one knows what they would do in that situation.

All we can do is imagine that we have done something and how easily we can justify it. If we are further asked to explain ourselves, it will often change our answer, since we have to say the reasons out loud and watch the reactions of the person asking the questions. We get a better idea of how the justifications sound, and we choose the best justification, which tells us what action we might take in that situation. I'd be willing to bet that there is no long-term relationship between what we say we might do in these extreme hypothetical situations and what we actually do when it comes down to it. Although in more realistic scenarios that we actually have experience of, we can turn to that experience to guide us.

So rationality is not what we were taught. It is not what philosophers have classically defined it to be. Most solo humans are poor at reasoning and only reason well when arguing against someone else's proposition. Reasoning certainly uses inference to produce reasons, but it does not help us find truth or make better decisions. It may help us convince people that the decision we have already made is the only decision they could have made, or the best one; or it may help us describe why someone else's decision is the worst one.

The problem with the classical view of rationality and reasoning is that it is completely at odds with the empirical evidence. It is a fiction maintained in spite of the evidence. The classical view of rationality and reasoning is so far past its use-by date that it approaches being intellectual fraud or hoax. What is actually happening is a lot less grandiose, a lot more banal, but it is what it is. We are what we are. Living a fantasy is the epitome of irrationality.

04 September 2017

Fermented Foods

I'm not into food fads. Not at all. But I am intrigued by a recent documentary I heard about fermented foods. Foods transformed by microorganisms are very common: cheese, wine, beer, yoghurt, pickles, soy sauce, etc. But in most of them, the bugs are either dead or we kill 'em.

Yogurt, sauerkraut, tempeh, blue cheeses, and other foods contain living microorganisms: bacteria and fungi (including yeasts).

Giving rise to the joke.

Q. What is the difference between yogurt and {country X that you wish to ridicule}? 
A. Yogurt has a living culture. 

And the idea is that these bugs take up residence and help make us healthy.

One of my great science heroes is microbiologist Lynn Margulis (d. 2011), one of the great scientists of the 20th Century. Margulis established that the mitochondria which live in all of our cells were once free living bacteria. She emphasised the role of symbiosis in evolution (in contradiction to the fetishisation of competition amongst male biologists). This has been one of the strongest influences on my thinking about the world: the importance of symbiosis, hybridization, communities, and cooperation. We are not only social animals, but in fact, we are colonies of cells, with many different symbionts living in our gut. A colony of colonies.

For many decades the existence of bacteria and fungi living in our gut, as symbionts rather than pathogens, was scarcely acknowledged. In the last ten years or so it has started to dawn on the world of biology that Margulis was on to something big. These intestinal flora are not passive hitch-hikers. They are actively involved in homoeostasis - the collection of processes by which we maintain our internal milieu at the optimum for life.

We now know, for example, that gut microbes participate in and contribute to our immune system. They are involved in processes that govern blood-sugar. And so on. Our gut is full of symbionts - a mutually beneficial association. Thousands of species of them and in vast numbers (perhaps as many as 100 of their cells for every cell in our body, though this figure has been challenged).

I think most people are probably aware that yoghurt has this reputation for repopulating the gut with healthy bacteria. But now expand that out to every food with living bugs. And keep in mind that the gut contains a community of bugs, all "communicating" and working together. Thousands of species are involved. And it seems the more the merrier.

I'm certainly not conducting a scientific experiment, but as part of an effort to eat healthily, I'm now regularly including sauerkraut in my diet and some soy-based yoghurt. The sauerkraut is a bit of an acquired taste, but tastes can be acquired with repeated exposure (like olives). And actually, sauerkraut is *very* easy to make so I might have a go at it.

02 September 2017

Life Goes On

The cells that make up our bodies all come from that single fertilised egg created at our conception. It divides into 2, 4, 8, 16, etc. None of our cells was ever dead and infused with life. All of our cells were always living because each cell was created by a mother cell dividing into two daughters.

The sperm and ovum that fused to become our first cell were also living cells, produced by cell division in our parents. All of our parents' cells were also always alive and multiplied by dividing.

All the cells of every animal, going back into the mists of time, originated by one living cell becoming two living cells. The same is true of all plants, fungi, and bacteria. All cells come from dividing cells. All cells, that is, except the original cells.

We have a pretty good idea of how such cells might have formed, but we don't know for sure. In any case, everything alive today, literally every living cell, was produced by cell division. Every living cell, and thus every living thing, is a direct-line descendant of those first living cells. Every living cell is directly related to every other living cell.

Along the way, some of the cells recombined to make more complex cells or formed symbiotic relationships. Combining is as important as division in evolution, though it happens less often.

The lines of living cells, going back to the original cells, are unbroken for at least 3.5 billion years, possibly longer. Each individual cell eventually dies, but the processes of life continue, without interruption. And even if humans manage to wipe themselves out, bacteria will survive literally anything we can do. Some bacteria live in boiling pools of acid, so nothing we do is going to kill them all. Life will continue on earth at least until our sun expands out to become a red giant, engulfing the earth in fire, about 5 billion years from now. But there is a good chance that by then humans will have seeded life on other planets, if only in our solar system. So in all probability, life will go on indefinitely.

The only limit is that life requires an input of energy which can be put to use. And this will have completely run out in our universe by about 10^100 (1 followed by 100 zeros) years from now. Then it's curtains for life in this universe. Until then, however, life goes on.

01 September 2017

Theseus's Boat and Grandfather's Axe

My writing in the last couple of days has been exploring the ancient philosophical problem known as the Ship of Theseus, which you might know as grandfather's axe: granddad says it's his favourite axe, and that he has replaced the head 3 times and the handle twice. The question philosophers usually ask is, "Is it really the same axe?"

Unpacking this problem and establishing useful ways of thinking about it has been very enjoyable.

My way into the problem was to notice that no matter whether we think it is the same axe or a different axe, we never doubt that it is an axe. Because the parts are generic we can replace them at will without changing the intrinsic properties of the object. Any correctly assembled combination of axe-head and axe-handle makes up an axe. Change of a part does not affect the identity of the complex object as a whole.

So, at least at this level, the object has identity and continuity as an axe. It is an axe and we know it is an axe. These are objective facts. The first is an objective fact about what is (ontically objective), the second is an objective fact about what we know (epistemically objective). The object either has the relevant properties or it does not. The fact that it is an axe is dependent on the observer knowing what an axe is. But any observer who knows what an axe is (no matter what they call it) will correctly identify it as an axe.

But is this grandfather's axe? Ownership depends entirely on the minds of grandfather and his community. He asserts "this is my axe" and the community either assents or it doesn't. So ownership is some kind of subjective fact. In which case, there is no one right answer. Some might feel that property is theft, in which case grandfather's assertion carries no weight. Or grandfather might have confused it with another, similar axe.

Maybe it's not so much a matter of ownership, but of close association. In which case this is also a subjective fact. Recognition is a matter of seeing the object and having a feeling about it. In the bizarre neurological disorder, Capgras Syndrome, people visually recognise their loved ones (usually, but it might also include pets or familiar objects like one's home), but the identification does not set off an emotional reaction. The spouse looks exactly right but feels wrong. The person with Capgras is usually at a loss to explain this. And the explanation that they have suffered brain damage doesn't help much. They often confabulate stories - the spouse has been replaced by a duplicate or doppelganger for nefarious purposes. Again there is no right answer. If grandfather feels that this is his axe, then that is what he feels. That we do not feel it only tells us that we are not grandfather (which we already knew).

Objective facts are independent of observers. Metal is hard and can be shaped into a cutting edge. Wood is firm but flexible and can be shaped into a handle. None of these statements depends on an observer or what they believe. Subjective facts are not always shared. They do depend on the observer. Money, for example, is based on us all agreeing that bits of paper or plastic represent units of wealth. A £5 note is intrinsically almost worthless. But £5 of wealth is enough to redeem for a cup of coffee and a slice of cake (outside of London). If we stop agreeing that those special bits of paper or plastic are valid tokens, then the system breaks down. This is what happens when there is hyper-inflation, for example.

The Athenians maintained a boat that at one point in its history carried Theseus and his companions to Crete, where he overthrew the Minotaur, and then it ferried him back to Athens. Theseus went on to become a great general/admiral. So for Athenians the boat is a symbol of a national hero; of someone they feel epitomises their national character. For the Athenians it is definitely Theseus's boat. If we don't know who Theseus was, or his story, or anything much about ancient Athens, then we may not feel any connection with the symbol. We may conclude that it is not Theseus's boat. But even if we had lived at the time, what we believed would probably not have changed the minds of the Athenians.

If they had been celebrating a goat as the boat of Theseus, then we could have made an objective argument that a goat and a boat are not the same. A goat might be Theseus's goat, but it cannot be Theseus's boat. But because it was a boat, and remained a boat despite repairs, we can only make subjective arguments. And, frankly, why should the Athenians care what we think about their hero and his boat?

And of course it gets much more interesting when we get to the fact that the boat or the axe is a metaphor for ourselves.

31 August 2017

Asimov and his Laws

In the original Asimov books, robots are conceived of as servants to humans, hence the original Laws are formulated the way they are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. 
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law. 
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The robots in the stories become more autonomous and are portrayed as civil servants, especially a detective named R. Elijah Baley. The name Elijah is surely no coincidence. As a detective in a world with almost total surveillance, Elijah is confronted with highly devious and irrational human behaviour. He has to put himself in the shoes of the criminals in order to solve the crimes.

Asimov, like most writers on robots, was basically retelling the Pinocchio story over and over. How does a machine think like a human? It can only do so by becoming ever more human. There is no other solution to this problem.

After writing a bunch of robot stories (and seemingly thoroughly exhausting Pinocchio as a trope), Asimov moved on to the Foundation novels - two sets of them, written decades apart. In the first set, a shadowy organisation, headed by Hari Seldon, is guiding humanity through an impending crisis. In other words, Seldon is also a prophet, though armed with science rather than righteousness. Seldon has invented a calculus of human behaviour, psychohistory. He sees patterns that only become apparent when trillions of us span the galaxy. Using the methods of psychohistory, Seldon sees the crisis coming and he prepares for the knowledge of humanity to survive.

But it gets very weird after this. Asimov becomes increasingly interested in telepathy, and it begins to permeate all the stories. And now he goes back to robots. What if a robot is like a human, but also a telepath... of course he would see how human frailty would lead to suffering. Any robot cursed with telepathy would suffer an existential crisis. And so was born the Zeroth Law:
  0. A robot may not harm humanity, or through inaction allow humanity to come to harm.
Elijah returns, able to read minds. He can understand what motivates humans and tries to stop them from destroying themselves. It is he who guides Seldon to psychohistory and pulls many other strings behind the scenes. Note that Elijah is still bound by the Three Laws.

Asimov's earlier books place Pinocchio in a future utopia that is marred by humans who are what we might call psychopaths - incapable of or unwilling to behave according to the law, despite universal surveillance. Asimov becomes consumed by contemplating impending disaster and how a great empire might avoid collapse. In other words, he reflected some of the major social issues of 1950s USA, through a rather messianic lens.

By the time he came to reinvent Elijah as a telepath, in the second set of Foundation novels, the Cold War and arms race were in full swing. Asimov was apparently fantasising about how we could avoid Armageddon (and I know that feeling quite well). If only someone (messiah/angel) could come along and save us from ourselves, by reading our thoughts and changing them for us so we didn't mess things up. But what if they could only nudge us towards the good? Note that at present the UK has a shadowy quango--"The Behavioural Insights Team"--designed to nudge citizens towards "good" behaviour (as defined by the government, mostly in economic terms).

Ironically, Asimov's themes were not rocket science. He sought to save us from ourselves.

Humanity is going through one of those phases in which we hate ourselves. We may not agree with the Jihadis, but we do think that people are vile, mean, greedy, lazy, untrustworthy, etc.; that most of us don't really know how to behave; and that the world would be a much better place if humans were gone. We are, the central narrative goes, "destroying the planet".

For example, we drive like idiots and kill vast numbers of people as a result. In the UK in 2016, 24,000 people were killed or seriously injured on our roads, including 1,780 fatalities. AI can drive much better and save us from ourselves. The AI can even make logical moral decisions based on Game Theory (aka psychopathy) - the trolley problem is simply a matter of calculation. Though of course, to describe a person as "calculating" is not a compliment.

It's a given in this AI scenario that humans are redundant as decision makers. This is another scifi trope. And if we don't make decisions, we just consume resources and produce excrement. So if we hand over decision making to AIs then we may as well kill ourselves and save the AI the trouble.

If we want AIs to make decisions that will benefit humans, then we're back to Pinocchio. But I think most AI people don't want to benefit humans, they want to *replace* us. In which case it will be war. In a sense the war over whether humans are worth saving has already begun. A vocal minority are all for wiping us out and letting evolution start over. I'm not one of them.

Computers are tools. We already suffer from the bias that when we have a hammer everything looks like a nail. May the gods help us if we ever put the hammer itself in charge.

22 August 2017


Thinking about uniforms. Most schools I attended were run like North Korea.

Inmates wore uniforms. Uniform codes were strictly enforced.

There were many arbitrary rules. Breaking rules resulted in arbitrary detention and in my day beatings, some of which were quite brutal. Prisoners were often kept in solitary confinement.

There was "nationalism", school songs and so on.

We were all indoctrinated with the same useless knowledge designed to make us better citizens.

In my day this included systematic lies about the history of our country and especially the wars of aggression we fought against the Māori in order to steal their land. I believe this has changed to some extent in NZ. Here in the UK, they mostly still seem to believe that the British Empire was a benign force for spreading civilisation.

The leader or headmaster generally had a funny haircut and we had to treat them with exaggerated deference. They held assemblies in which we were forced to listen to interminable speeches which extolled the ideology of the state. [An obvious difference is that we did not have to salute].

The schools were surrounded by fences and no one was permitted to leave.

The staff were frequently paranoid about what inmates got up to and we were constantly under surveillance. Teachers had networks of informants.

I've never been to school in the UK, but looking at the uniforms and the environments, as well as what I can glean from TV, the whole set up is far worse here.

A lot of work places are also like North Korea these days. Democracy has seldom extended to the workplace or school. And they wonder why we don't take it seriously?

20 August 2017

Persuasion (reprise)

A consequence of Einstein's theory of relativity is that we can no longer think of space and time as distinct:
“Henceforth space by itself, and time by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality.” — Hermann Minkowski, 1908.
I more or less understand the reasoning behind this (if not the maths), but I admit that in terms of my experience it is completely counter-intuitive. So in fact, a century after Einstein, space by itself and time by itself have not faded into mere shadows. Maybe they have in the higher echelons of university physics departments, but not in general use.

And this is the thing about intuition and counter-intuitive ideas. For many people, evolution is simply counterintuitive. It feels wrong. So facts presented without values don't make much difference to how people *feel* about evolution. And how they feel about it determines how they think about it. This is simply a fact about how humans work.

The question, then, is not why ordinary people who find science counter-intuitive don't change their minds. Why would they? The question is why scientists are so bad at communicating. In fact, there is a well-developed science of persuasion, which we see at work in our daily lives across the media in advertising, promotions, political speeches and so on.

A single example will suffice. A century ago in the West, very few people were in debt. Since the 1970s this has changed, so that now almost everyone is in debt. From credit cards to payday loans, we all seem to have forgotten the virtues of thrift, saving, and financial prudence. That we would borrow money rather than save up for something would have been considered counter-intuitive 100 years ago. If my great-grandparents had talked about borrowing money at 30% APR while inflation was at 2%, just to buy something they wanted but did not absolutely need, their family and friends would have thought them mentally ill. Now it is just what everyone does.
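
The arithmetic behind that intuition can be sketched in a few lines. This is a back-of-the-envelope illustration only, using the 30% APR and 2% inflation figures above; the £500 purchase is a hypothetical figure, and real credit agreements compound in more complicated ways.

```python
# Illustrative sketch: the real cost of borrowing at 30% APR while
# inflation runs at 2%, via the Fisher equation.
nominal_rate = 0.30   # credit-card-style APR (figure from the text)
inflation = 0.02      # annual inflation (figure from the text)

# Real interest rate: how much purchasing power the borrower actually gives up.
real_rate = (1 + nominal_rate) / (1 + inflation) - 1

# A hypothetical £500 purchase, paid off after one year.
borrowed_cost = 500 * (1 + nominal_rate)

print(f"Real interest rate: {real_rate:.1%}")            # about 27.5%
print(f"£500 borrowed costs £{borrowed_cost:.0f} a year later")
```

In other words, the borrower pays roughly 27.5% in real terms for the convenience of not waiting, which is the deal the great-grandparents would have found incomprehensible.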

I have no credit rating in the UK, having never borrowed money here, and I am still regularly sent credit card applications by major banks.

The counter-intuitive becomes intuitive and vice versa. Persuasion is not rocket science, but it is science. It's about time that scientists cottoned on to this and stopped blaming other people for their failures to communicate.

16 August 2017

Buddhism and Cessation

I was talking with my friend Satyapriya last night. We were discussing my work on the Heart Sutra and his experiences in meditation.

The non-Buddhist approach to life is generally to cram in as much experience as possible. In NZ people used to say they "lived life to the full" and this meant having as many experiences as possible, and as intense as possible. Extreme sports, bungy jumping, white water rafting, night-clubbing, and so on.

The Buddhist approach is the opposite. Buddhists, ideally, strive to calm down, to eliminate unnecessary distractions, to reduce the intensity of experiences. Ultimately the goal is to meditate in such a way that one is aware and alert, but there is no sensual or mental experience whatever. A state traditionally called cessation (nirodha) or emptiness (śūnyatā).

I should emphasise that this is not ceasing to be. It is not non-existence. It is a state of perfect balance and contentment, with no attention being paid to the senses or to superficial mental processes (like our inner monologue). One is emphatically alive and *existent*, just without all the distracting effects of experience.

Which already sounds weird to people oriented towards experience. Why would you want to experience nothing?

Cessation is not an end in itself. The experience of no experience is *profoundly* transformative. It reorganises how you perceive the world. It often results in an attenuation of the first-person perspective so that "ego" or self-seeking drops off. One stops being selfish and self-centred because there is no self to centre on. (This practical result has led to much unhelpful metaphysical speculation, but I'm not going to get into that today).

The trouble is that it takes a particular kind of person to experience cessation. In our Order of some 2,000 members, we have a handful with any experience of cessation, and only a minority of them have any great depth of experience.

The rest of us know fairly early on that we're not that kind of person. If you discover meditation and just naturally start doing it for two hours a day, then you're in with a chance. If you struggle to sustain 20 minutes a day, then you're not in the running. You still *benefit* from calming down, but you'll always be too over-stimulated for cessation. We don't often state this up front. Indeed we tend to maintain the myth that anyone can experience cessation. In theory, maybe, but in practice, no.

One has to be thoroughly uninterested in the pleasures of sense experience. To be happy with very low levels of stimulation. To be fascinated by just watching one's mind for hours on end. One has to be quite non-reactive to other people. Most of these qualities cannot be learned, at least not to the extent required. We can get better at all of them, but unless we have the temperament or talent to start with, we're always going to be mediocre.

So the rest of us form an auxiliary that ideally would support the people who are experiencing cessation/emptiness, or who genuinely have the potential to.

For example, I try to write about the issues of conceptualising this process and the philosophy that is often invoked. In doing so I'm trying to clarify things, to eliminate wrong or unhelpful views, and to assess whether or not certain ideas serve the greater goal of our community (i.e. cessation). On the whole, our conceptualisation of the process and the goals appears to be highly convoluted and confused. Our metaphysics are a mess. I advocate a radical clean out - we could eliminate all the history and 90% of the metaphysics we talk about without any deleterious effect on those who seek cessation.

Indeed, the history and a lot of the stories are there to gee up the auxiliary. Because, deep down, we know that we're not going to be anything special. We're not going to experience cessation or anything like it. So we constantly have motivation problems. Pursuing a low-stimulation lifestyle against one's natural inclinations is pretty difficult. Without the payoff of deep meditative states, it is not very rewarding, and we end up getting a bit nihilistic or cynical. There is only so much reward to be gained from taking the moral high ground and criticising people who seek pleasure. There's a lot of that about. A lot of criticising other people for not being good enough Buddhists, from people who will themselves never experience cessation.

It's a weird thing to be involved in. At first, it seems like a cornucopia - a solution to all of one's problems. Many of the people who get religion have major problems (or they wouldn't be looking). Religion promises the universe. We all start off with convert zeal. What religion delivers, on the whole, and at its best, is a supportive group of like-minded friends and one or two inspiring role models. If you have the kind of talent required, you'll find an outlet for it one way or another. If you don't, you'll be filling the pews, making financial contributions, and hanging out with the talented people. At its best, this set-up does allow some people to shine in mundane ways. Me as a writer, for example. Someone else as an administrator. Another as a teacher of values or basic principles.

Still, the ideal of cessation inspires many people to slow down, to calm down, to stop being overstimulated, and so on. And on the whole, I think many of us who live simpler, calmer lives, find them more satisfying than the usual alternatives.

12 August 2017

The Evil of Mercantilism

When I was studying library management I clearly remember reading a book on technology published in 1971. It noted that immediately after WWII there were very significant gains in productivity due to mechanisation of work. The early prediction was that everyone would work less and retire early. Filling up our leisure time was predicted to be our pressing problem. ROFL.

Here it is, 2017, and productivity is something like hundreds of times higher than it was in 1945 and we are working longer and retirement as a concept is being phased out. What went wrong?

One answer is that the share of the wealth created by the economy going to the ruling classes has increased exponentially. So despite the fact that productivity has increased by so much, inequality has grown even faster.

Capitalists will rightly point out that everyone has benefited - we are all richer than we were in 1945. We all eat better, live longer, child mortality is down, etc. This is all true. But the rich have benefited more.

The thing is that if you worked hard to get by in 1945, your family are probably still working hard to get by in 2017. The poor still have to work very hard just to get by. And that is the plan. That has been the plan for 600 years. Marx and Engels noted it 150 years ago, but even then it had been going on for more than four centuries.

The plan is always for the poor to have to work hard all their lives just to get by.

600 years ago it wasn't like this. Poor people mostly worked in the fields and had little supervision. Staying alive was quite a good motivator. They might have paid a tax once per year, but the rest of the time ordered their own lives. They worked hard at planting and harvest time; moderately in the middle, and not much at all over winter. They grew all their own food, mostly on common land. If they were lucky they might own a cow or a goat or two. At that level, they all had to look after each other and work together. At that point it was probably the Church who inflicted artificial rules on the people, telling them how to live.

The ruling classes technically provided law and order to enable trading on a wider scale (between towns for example) but in practice, they often just fought amongst themselves for profit. The taxes paid for a standing army, and crimes like theft and murder were adjudicated by a ruler, if at all.

Gradually work and wealth took on moral tones. Being rich or working hard were good. Being idle or poor were bad. Working hard but being poor was OK; being idle but rich was also OK. Working hard and being rich was the ideal. Working hard was linked to being rich, though for most of history and now, the two are usually unrelated. The people who work the hardest, doing physical labour, are paid the least.

Since the ruling classes wanted to see the poor working hard, they took away the common land and forced the poor to pay for food. The industrial revolution offered crippling hours and dangerous conditions for the poor, so they could just about earn enough to live in unsanitary conditions and eat food that was often unfit for consumption. Sometimes whole families had to work for 12 hours a day to achieve this. And this was seen as a good thing by the mercantilists. It also broke up communities and the networks of care and assistance that had existed for centuries.

The mercantilists gradually took over running things from the aristocracy and the church. Hereditary wealth replaced mere birth as the mark of the ruling class, and morality changed from saving souls to ensuring that people were useful.

Increased wealth and reach required increased administration and bean-counting. Universities that used to train priests now trained civil servants. The middle classes were inculcated with the values of mercantilism: consumerism was born. From the middle class, some hoped to ascend into the ruling class - though opportunities for outsiders were strictly limited. Others simply became acquisitive.

As technology destroyed more and more of the jobs of traditionally working class people, the idea of social mobility was born. Let the working poor become middle class. Infect them with the virus of consumerism and acquisitiveness to distract them from the fact that their communities were being destroyed. Flood the market with cheap imitations built by their even poorer counterparts in Asia.

The thing is that this story arc is hardly affected by the politics of the government or by wars. Women hail the "progress" of re-entering the workforce, but they mostly did so at rock-bottom wages. Nowadays only a two-salary family can afford to own a home. 70 years later, women have almost reached pay parity, but generally speaking wages are falling and the poor are getting less and less from participating in production. Far from winning, they have simply played into the hands of mercantilists. The idea is that we all work very hard to just get by. Nothing we do is going to change this unless we stop acting like mushrooms. A smart woman might have fought for her right not to work. Nowadays women's empowerment seems to mean parading around in your underwear, while the idea of empowering men is seen as akin to genocide or eugenics.

Humans need time for socialising. For sitting around chewing the fat, telling stories, and laughing. We need time to make music, to sing and dance together. Working together for a common goal is uplifting, but what is the common goal of most workplaces now? Certainly screwing workers out of their fair share is inherent in all workplaces these days. We thrive in small communities where most people are social equals but merits are acknowledged. We still have not figured out a good way to organise ourselves in larger units. Democracy is, as that epitome of the ruling classes, Winston Churchill said, the worst form of government, except for all the others that have been tried.

But until workers get their fair share of production; until workers own the means of production; this world is going to be unfair and unjust and it will continue to break the backs of the poor so that the ruling classes can be comfortable and fight wars when they get bored.

I have no hope that technology is going to change the basic philosophy of mercantilism. Look at the internet. It was supposed to give power to the people. But it is clearly just another tool for enslaving people now. I get to say what I like, but amidst millions of conflicting voices, what I say doesn't register or matter. Those who do register are part of the system and therefore part of the problem.

Mercantile capitalism, or mercantilism, has been winning, largely in the background, for 600 years, despite changes in technology, revolutions, wars, and empires.

01 August 2017

Are We Living in a Simulation? No, we aren't.

Anyone who has listened to the latest Infinite Monkey Cage (BBC Radio 4) and is worried that we might live in a simulation can relax. Anil Seth was talking bollocks. He and a lot of other bad philosophers have this method that is mostly hand-waving. It breaks down like this:

To yourself:
1. State your belief.
2. Derive assumptions from this belief.
To others:
3. State your starting assumptions as axioms.
4. Use straight-line deduction to produce a paraphrase of your starting assumptions.
5. Claim that *logic* supports your conclusion.

Assumptions are propositions that you believe in the absence of evidence or things you take on faith. Axioms are propositions stated as universal truths. If you are reduced to stating assumptions as axioms, you're already floundering. Far from being "logical", this is completely irrational.

And then deduction is a very weak logical operation. All you can do with deduction is draw out the implications of your starting axioms. And what this usually boils down to is a paraphrase of your axioms.

All of the assumptions that Anil Seth stated last night struck me as demonstrably false or at best highly questionable. Here is his "logic".

1. Assume we live in a simulation
2. State some fact consistent with living in a simulation
3. Restate that fact as a universal truth
4. Deduce from this that we *must* live in a simulation
5. Therefore it is only logical that we do live in a simulation

For example, he glibly stated that it would be possible to replace a neuron with an electrical device in such a way that you would not notice. For a start, to do this you'd have to crack my skull open, and I promise you I'd notice! Second, this is a bold claim for which there is absolutely no empirical evidence. No one has ever accomplished this or anything like it and had the recipient *not notice*.

The surgical techniques needed to operate at the molecular level do not currently exist. And really, there's no plausible way to do this type of surgery - our synapses are chemical, not electrical. It's not remotely plausible to transplant an identical neuron, let alone some electrical device that imitates one. So Anil Seth is asking us to take a science fiction idea as a universal truth. And he can just fuck off as far as I'm concerned. He's just making shit up and giving public intellectuals a bad name.

Furthermore, there is a 1mm long roundworm called C. elegans. We know that it has exactly 302 neurons, with 6393 chemical synapses, 890 electrical junctions, and 1410 neuromuscular junctions. Its whole nervous system has been mapped out in exquisite detail at the cellular level. So you'd think that we'd be able to exactly simulate the worm. Yes? No. Not even close. Otherwise, modelling the brain of C. elegans would be easy, and you'd be able to buy scaled-up working models with all the same behaviours by now.

So Seth takes this idea as trivially true, but in fact it is very, very complex and almost certainly false. His starting assumption is nowhere near plausible, let alone "true". And if this is so, then his subsequent "logic" is dubious at best.

I call bullshit. This is bullshit philosophy. And it's not the only bullshit philosophy I've seen associated with Anil Seth. He is a bullshitter and no one need be perturbed by anything he says.

23 July 2017


I've been saying for a while that Triratna, like many modern Buddhist organisations, is as much a Romantic organisation as it is a Buddhist one. A lot of people have no problem with that. Romanticism is seen as a way to the "Truth". The Romantic poets, especially—despite being a bunch of degenerates—are seen to express some kind of "higher" truth in their poems. There is a religious belief in a "higher reality", a "transcendental reality", over and above the reality we normally interact with. And Buddhists are supposed to seek this reality.

Romanticism values emotion over intellect. As an ideology and methodology, it seeks truth and reality in feelings and imagination, rather than in reason and analysis. Reason and analytical modes of thought are suspect at best. Intellectuals are suspect, except where they embrace mysticism.

I've been listening to a documentary about truth on BBC Radio 4, and this occurs to me... President Trump is the apotheosis of the Romantic valorisation of emotion over intellect. He has become the god of Romanticism. That's the problem with Romanticism in a nutshell. We live in an age where the manipulation of emotions is achieved with precision (ironically) on a vast scale that the despots of earlier centuries could only dream of. The masses feel what they are *told* to feel by the media, and what they think follows.

And there is a deeper irony: reason itself depends on emotion. While we assess the accuracy of facts using rationality, the value placed on facts is encoded as emotional responses. We decide on the basis of the strongest emotions. But then, having decided, we use the reasoning part of our brains to produce reasons to support our decision. It is always this way around, for everyone.

Thus, the way to take over is not to have the best facts or the most facts. It is to sway the emotions of the people. Once swayed, they will produce their own justifications. One doesn't need to give reasons.

Some may doubt this, but I would say: look at big-budget advertising. In my lifetime these ads have gone from information-rich to information-poor. Advertising a car, for example, is all about *image* now. It's all about how the consumer feels about the product. Ads seek to manipulate how we feel about products, because if we feel well disposed, we will produce our own reasons.

Those of us working with old models of rationality look on and scratch our heads. How can someone who is an obvious liar and cheat take the top job? The facts are all against him. He was helped by his opponent also being incredibly unpopular and widely perceived to be a liar. But Trump, quite consciously I believe, used his knowledge of the US electorate to manipulate how they felt about him. He did not need to supply reasons to vote for him. Having decided, on the basis of feelings, how to vote, voters came up with their own rationalisations.

Ironically, it is conservatives who have embraced this new understanding and manage to exploit it most successfully. Liberals still tend to believe that arguments are won by people with the best facts. So politically, it is conservatives who are aligned with Romanticism, and liberals who are the rationalists. I'm not sure why this is.

In the battle for hearts and minds, we can safely ignore the minds. We just have to win hearts, because of the way they work together. Where hearts go, minds follow. And the opposite doesn't work.

It goes against the grain for me, because I value rationality very highly. But I've watched so many rationalists utterly fail to win arguments, that I have to accept the truth of this proposition. Until liberal politicians get this, they'll always be weak compared to conservatives. And people like Trump will worm their way into positions of power.

We all need a radical shift in perspective on how these things work. A rationalist utopia will never exist. But a Romantic nightmare, like we have now, is not inevitable.

22 June 2017

Crown Estates

Her Maj opening parliament with her pro-EU hat on.
Partly just because they're in the news again, there are the usual complaints about the Royal family sponging off the taxpayer. I'm always surprised that British people believe this. As far as I can tell it's simply not true.

As I understand it, a badly indebted George III, on his accession in 1760, signed over all rents and other income from his portfolio of land and forestry holdings, currently valued at around £12 billion. In return, the government administers it all and pays the monarch a stipend. In 2016 the Crown Estate earned the UK government about £305 million in profit.

The Queen gets about £45 million a year to run the Royal household, most of which is not discretionary, leaving HMRC roughly £260 million better off. Prince Charles has his own private income of around £20 million p.a. from lands in Cornwall. Both of them now pay taxes.

The Royal family make a large net contribution to the UK economy and the tax base without even considering factors like tourism. And they don't get to hide their money offshore like other rich people.

I'm inclined towards republicanism and redistribution of the vast unearned wealth of the ultra-rich, though seeing the Queen out there comforting victims of the tower block fire (at her age) and wearing that EU hat to parliament yesterday, I feel well disposed towards her personally.

It's a bit depressing how much of British public opinion seems to come from the gutter press, and how negatively this affects how Brits feel about themselves and their country.

18 June 2017

HIV and Intelligent Design

If I were going to provide evidence for an intelligent design argument, I might well choose the Human Immunodeficiency Virus (HIV). It really is a finely honed and efficient system for killing human beings.

HIV attacks the immune system. Our immune responses mostly come in the form of various types of white blood cells. Amongst this variety are the helper T-cells. When they come across a pathogenic cell in the body, say a bacterial cell, it is T-cells that release chemicals to attract the other kinds of white blood cells that clean up the infection. They also release another chemical to induce other white blood cells to multiply, so that there are plenty of them. And a third type of chemical, an antibody, marks the pathogen and makes it easy for other white cells to find, identify, and destroy.

In short the T-cells coordinate the body's immune response to pathogens. HIV infects various white blood cells, but infecting T-cells is crucial to understanding how HIV kills humans. By disabling T-cells, HIV gives rise to Acquired Immune Deficiency Syndrome or AIDS. A person with AIDS becomes susceptible to every other type of infection - viral, bacterial, fungal, and even parasitical. Normally the body just swats down infections. We only occasionally succumb. And even then our body's immune response helps keep the disease from killing us. What kills the host is not HIV per se, but the range of opportunistic infections that benefit from the weakened immune response.

HIV has a long incubation period. Once infected, it can take anywhere from two years to two decades before any symptoms begin to manifest. In that time the host can be infecting other people. The one limiting factor is that it only spreads through direct exchanges of body fluids - through sex, sharing needles, childbirth, or breast-feeding. Were it spread like influenza, we'd all have it by now.

The virus has two layers. The outer layer is made from the lipid bilayer of a human cell membrane - creepily, the HIV virus drapes itself in a human "skin". It is studded with proteins that recognise and bind to T-cells. The inner layer is a protein capsule containing two copies of the viral genome, plus some protein-based enzymes: e.g. reverse transcriptase, integrase, ribonuclease, and protease.

When HIV attaches to a T-cell, proteins contract, drawing the two together so that their cell membranes merge, and the virus then inserts the inner capsule, which breaks up, releasing the strands of genetic material and the enzymes.

Since human cells use DNA to encode their genetic material, in order to hijack the human cell the virus needs to produce DNA. The enzyme reverse transcriptase is what does this. But here's the thing: HIV reverse transcriptase is inherently buggy. The HIV genome is about 10,000 base pairs, coding for just 19 proteins. By contrast, the human genome codes for tens of thousands of proteins. Crucially, when converting RNA into DNA, the enzyme makes on average 1-10 errors every single time it copies the viral genome. Since an infected host produces billions of copies, this means billions of random variations on the HIV virus.
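The arithmetic of this error-prone copying is easy to sketch. The toy Python simulation below assumes only the figures mentioned above (a ~10,000-base genome and roughly one error per 10,000 bases copied); everything else, including the helper name `copy_with_errors`, is purely illustrative:

```python
import random

# Toy sketch of error-prone genome copying. Assumes a ~10,000-base RNA
# genome and a per-base error rate of about 1/10,000 (i.e. ~1 error per
# copy on average), per the figures above; the rest is illustrative.

GENOME_LENGTH = 10_000
ERROR_RATE = 1 / 10_000  # per base, per copy
BASES = "ACGU"           # HIV's genome is RNA

def copy_with_errors(genome, rng):
    # Copy the genome, flipping each base to a random *other* base
    # with probability ERROR_RATE.
    return "".join(
        rng.choice(BASES.replace(b, "")) if rng.random() < ERROR_RATE else b
        for b in genome
    )

rng = random.Random(42)
genome = "".join(rng.choice(BASES) for _ in range(GENOME_LENGTH))

copies = [copy_with_errors(genome, rng) for _ in range(1000)]
mutated = sum(c != genome for c in copies)
print(f"{mutated} of 1000 copies carry at least one mutation")
```

With these numbers, roughly 1 - (1 - 1/10000)^10000 ≈ 63% of copies carry at least one mutation, which is why the virus presents the immune system (and drug designers) with a constantly moving target.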

Darwinian evolution is driven by random mutations. Most organisms have mechanisms for preventing copying errors and suppressing localised mutations which might otherwise, for example, cause cancer. As our cells produce proteins from DNA templates, they proof-read as they go and correct mistakes. Mutations caused by radiation damage can be repaired up to a point. HIV goes in the other direction and creates mutations, by design. Of course many of these mutations will be dead ends. They will not be viable. But many of them are viable and so HIV quickly and constantly evolves into new forms. This helps to defeat any immune response to HIV itself, but it also makes the disease very difficult to fight with drugs.

Having turned the viral RNA into a strand of DNA, the virus has it transported into the nucleus, where another enzyme inserts it into our genome. Viruses that do this are relatively rare and are called retroviruses. Quite a large chunk of our genome is junk DNA, some of it inserted by previous retrovirus infections. In theory these ancient retroviruses could be reactivated - it's a science fiction trope. But in practice the process is so complex that it's unlikely to happen.

Once it becomes part of our genome, the viral genome is copied in the normal run of things, though it can remain dormant for a period as well. Our standard cellular machinery starts to produce the building blocks of new viruses - strands of RNA, the four enzymes, and the proteins that encapsulate the package, as well as some other proteins involved in identifying host cells and infecting them. Finally, a last enzyme helps to assemble viral capsules inside the cell, which are then transported to the cell membrane. As they leave, the virus particles take a little of the cell membrane to wrap around themselves, studded with the proteins needed for infecting more cells. The fully formed virus is now in the body fluids, waiting for a chance encounter with another T-cell, preferably in another host.

This presentation is obviously simplified. For example, it's likely that HIV first infects another kind of white blood cell that is less detrimental to the host, building up numbers so that when the assault on T-cells begins it is devastating. And the whole process is now understood in a good deal more detail.

At present only one person is known ever to have been cured of HIV - out of 70 million cases. Although drug treatments do exist, they can only slow the disease down rather than cure it.

One of the fascinating things about these kinds of pathogens is how non-specific they are. It is true that some people are resistant to some strains of HIV, but on the whole the virus can infect any human. If we get the wrong type of blood in a transfusion, we die because the body rejects it as foreign. Patients who receive transplants have to have their immune responses artificially suppressed for the rest of their lives to prevent rejection. The virus, however, is not at all choosy about blood type or tissue type or any of these factors. Indeed, we really get into trouble when viruses from animals mutate to infect humans. For example, when an influenza virus in birds and/or pigs mutates and jumps the species barrier, we get influenza epidemics.

All in all, HIV is a devastating pathogen, seemingly engineered to kill humans. A number of conspiracy theories suggest that it was engineered, though I don't find any of them plausible. We still don't really have the depth of understanding to design and make something like this. On the other hand, some of the conspiracies suggest that it was a mistake that came from attempts to create Frankenstein's-monster bugs by breeding different viruses together. This might work with bacteria, which can share genetic material, but it wouldn't work with viruses, which cannot. So it looks like HIV just evolved.

Intelligent Design?

If you were sceptical about evolution, however, and were looking for an organism to support an intelligent design argument, HIV is certainly a great candidate. The specificity of the mechanism is complex enough to be astounding and yet simple enough for most people to understand it. A series of events have to occur in just the right order, in just the right way, for the virus to be effective, but they do happen. It's almost too perfect, hence the conspiracies.

In particular, HIV seems designed to defeat medicine. It can rapidly counteract an effective drug. The standard treatment in wealthy countries, or for wealthy people in poor countries, is a cocktail of three drugs which target three different aspects of the viral life cycle. This makes it much harder for the virus to circumvent the effects, but it's not enough to kill it outright. The virus's DNA is copied into our DNA, where it is very difficult to get at - it's difficult enough to get drugs into the cell, and near impossible to get them into the nucleus. The cell itself acts to prevent this: molecules that disrupt our DNA are almost always detrimental - retroviruses being a case in point.

In the West, the communities who were most affected by HIV happened to be hated by Christians, so they could rationalise it as God's punishment. This is tricky because the Christian God is supposed to love everyone, and having people die horribly, but not before infecting dozens of other unsuspecting, often entirely innocent people, is difficult to reconcile with this view. Why is God using a shotgun to remove a splinter? There's far more collateral damage, e.g. AIDS babies, than actual punishment for evil.

However, the real twist is that the HIV epidemic in the West is tiny compared with Sub-Saharan Africa. In some African countries, HIV infection rates are one in four of the population. In Africa, roughly ten times as many people have AIDS, or have so far died from it, as in Europe and the Americas combined. And the final irony? A large number of these Africans are conservative Christians. They are the Christians fighting the modernisation of the Church of England, for example, resisting the ordination of women or homosexuals. AIDS is more prevalent in countries where homosexuality is illegal than in those where it is legal.

So if HIV is an example of intelligent design, what is the designer telling us? First of all, the designer seems to be a homicidal, but highly intelligent, psychopath. Secondly, he is targeting poor Christian people, who often live in crushing poverty with little education, while the wealthy capitalists of the world continue to steal all the wealth from poor countries. If an intelligent designer were going to let loose a plague on us, why would he target Africa of all places? Is he racist? And lastly, very many of the people who contract AIDS now are babies, born to infected mothers. Why is the designer killing babies?

I suppose one might still argue that the HIV virus is too specialised to have evolved through random mutations. The specificity, the argument goes, requires a designer; and this design would have required considerable intelligence. But that intelligence is utterly lacking in empathy. The designer, if we believe in it, is chillingly inhuman and follows an agenda that does not include any thought for our well-being. HIV may well be intelligently designed, but it is intelligently designed to kill human beings indiscriminately and wantonly. Worshipping such a designer would be as pointless as a fly worshipping the child that is pulling off its wings.

In fact, when it comes down to it, the situation makes an intelligent designer seem extremely unlikely. Intelligence completely without empathy could hardly have created anything, because it would lack the motivation to do so. Things like HIV make random chance seem by far the most likely explanation. Random chance can be productive, but it doesn't care about the outcome. Given how indifferent the universe is to human values and desires, a process with no view to a particular outcome seems the only plausible explanation for how we got here.

15 June 2017


I'm reading a completely fascinating book at the moment: The Enigma of Reason by Mercier and Sperber. It's about reasoning. It's been known for most of my lifetime that we're not very good at solo reasoning tasks. In the classic experiment used to test how reasoning works, the Wason Selection Task, only 10% of people were able to reason through a fairly basic logic problem. And yet 80% of the participants were 100% sure about their method.
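For anyone curious why the task trips people up, the correct strategy can be checked mechanically. In the standard presentation (which I'm assuming here; the helper names are mine), four cards show A, K, 4 and 7, and the rule under test is "if a card has a vowel on one side, it has an even number on the other". A card must be turned over exactly when some hidden face could falsify the rule:

```python
# Minimal sketch of the Wason selection task logic (standard card set assumed).

def is_vowel(face):
    return face in "AEIOU"

def is_even_number(face):
    return face.isdigit() and int(face) % 2 == 0

def could_falsify(visible, hidden):
    # The rule is falsified by a card with a vowel on one side and an
    # odd number on the other, in either orientation.
    pair = (visible, hidden)
    for a, b in (pair, pair[::-1]):
        if is_vowel(a) and b.isdigit() and not is_even_number(b):
            return True
    return False

def must_turn(visible, possible_hidden=("A", "K", "4", "7")):
    # Turn a card over only if some hidden face would falsify the rule.
    return any(could_falsify(visible, h) for h in possible_hidden)

cards = ["A", "K", "4", "7"]
print([c for c in cards if must_turn(c)])  # → ['A', '7']
```

Most people pick A and 4, but the logically correct picks are A and 7: whatever is hidden behind the 4, it cannot falsify the rule, whereas a vowel behind the 7 would.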

The authors argue that the main purpose of reasoning is for coming up with reasons. Yep, the reason we reason is to produce reasons.
"Why do you think this? Why do you do that? We answer such questions by giving reasons, as if it went without saying that reasons guide our thoughts and actions and hence explain them." (p. 109)
The authors point out, though again this is not news, that in fact most of our reasons are after-the-fact rationalisations. We decide first, based on criteria we're mostly not even aware of, and then we come up with reasons that we hope make that decision seem reasonable. Reasons are how we explain things to ourselves and others. But on the whole, our reasons are fictions that we make up to explain ourselves to ourselves and the world.

Simplistically, in a court of law, reasons are sought and given, then tested and weighed for veracity. A reason has to be consistent with the physical facts. But it also has to be consistent with the psychological facts, i.e. how the jury think they might act in similar circumstances (for which they ask themselves how the reason feels). If the jury find the defendant's reasons plausible, then they are not guilty. If not, then they are guilty and owe us and/or society a debt.

Ask yourself... Why do I believe the things I believe? You've probably got reasons already. But now ask: Why is that reason a justification for believing anything? What is it about the reason that makes your belief reasonable?

For instance, I believe that the UK is probably better off in Europe so I voted to remain in it. The reasons are actually a little vague. I don't like the Tories. I think the world is safer if we work together more closely. But those who voted to leave also had reasons. Maybe their reasons were less vague - the EU is an inefficient bureaucracy, with too many unelected officials making decisions, it costs us too much, it's run by foreigners, it allows too much immigration, and so on.

If reasoning were anything like the classical view of it, this kind of divided opinion couldn't happen. We'd all weigh up the evidence and decide the most rational course to take, and most of the time there would be broad agreement. But we don't do this.

What we do is have a feeling about it, and then fish about for reasons, which the media provide for us. Or we're confused, then we hear a reason that resonates and stick with that. Which is why, when people give reasons for political decisions, they often unconsciously repeat, word for word, a political slogan, like "I want my country back" (a line uttered in a TV programme around the time - ironically uttered by a spy who was helping the Nazis subjugate his country).

We're all doing this. Deciding on what feels right, then producing reasons ourselves, or reproducing reasons we've heard from third parties. And since we also accept the myth that reasoning and "being rational" are the highest faculty of humans, we assume that our reasoning must be the best. We think that our reasons are good. And why? Well for reasons. And the criteria for judging those reasons? Well they are also reasons. And so on down into the unconscious functioning of our minds that we cannot yet fathom.

Things happen for a reason. Yeah, right!

02 March 2017

Buddhism and Philosophy

I've been contributing to Reddit's r/Buddhism a bit lately. It has required me to block a large and growing number of unpleasant people, but trying to explain how I think, and why, in this format, in the face of some intense scepticism, has been a welcome distraction. One continuing frustration is that Buddhists seem to think that philosophy boils down to an argument between Cartesian dualism and reductive materialism. This essay aims to show what a tiny little corner of philosophy this covers, and to sketch out how we can do better if we take a more sophisticated view.

In this essay I will outline a number of metaphysical views, not all of which are actually held, but which represent a full spectrum of possibilities. There are two fundamental approaches to metaphysics: reductive and antireductive. One focusses on fundamental substances, the other on encompassing structures. In the Chinese yin-yang symbolism the former is yang and the latter yin. Within each approach we can subscribe to there being nil, one, two, or many types. This gives us several modes of metaphysics, which I will now run through.

Modes of Metaphysics

Reductive metaphysics asserts that there is a fundamental substance that is real, or that there is an underlying "true nature" of reality to which everything can be reduced.
  • Reductive nihilism: In this view there is no fundamental substance, everything is structured. Certain forms of Madhyamaka take this view.
  • Reductive monism: There is a fundamental substance and it is of one kind. Quantum field theory is the modern representative of this mode.
  • Reductive dualism: The world is divided into two substances, typically mind and matter, or mind and spirit. There are four sub-modes of reductive dualism:
    • Reductive dualist realism (aka Cartesian dualism) - mind and matter are both real
    • Reductive materialism - only matter is real
    • Reductive idealism - only mind is real
    • Reductive nihilism - neither mind nor matter is real.
  • Reductive pluralism: In this view there are many kinds of substances each of which is real.
Antireductive metaphysics argues that there is no ultimate substance and that everything is systems, or indeed one big system. 
  • Antireductive nihilism: In this view there is no real universe. We may be living in a simulation, for example. In the film The Matrix, the heroes had grown weary of the simulation and craved something "more real", even if it was less satisfying. This view has similarities with Gnosticism.
  • Antireductive monism: the universe, taken as a whole, is the only real thing, no subset of the universe is real.
  • Antireductive dualism: The universe is divided into superstructures consisting of mind and matter, or mind and spirit. Again there are four sub-modes of antireductive dualism:
    • Antireductive dualist realism - mind and matter are both real on the universal scale, but cannot be subdivided.
    • Antireductive materialism - only the material part of the universe taken as a whole is real.
    • Antireductive idealism - only the mental part of the universe taken as a whole is real.
    • Antireductive nihilism - neither mind nor matter is real.
  • Antireductive pluralism: In this view there are many kinds of substances each of which is real. An example of this view is the Shingon idea of the three mysteries (triguhya), in which the figure of Mahāvairocana represents the entire universe. All forms are his body, all sounds are his voice, and all mental activity is his mind. 
From this we can see that the possible views are quite diverse, including some that may not be held by anyone presently, or at all. And note that reducing debates on metaphysics to reductive dualism versus reductive materialism confines the play to a small corner of the field.

But these are also the extremes. For example, the materialist biologist, who knows that their organism is only interesting when whole and alive, takes an antireductionist approach when studying its behaviour. Or the chemist, who understands that atomic theory is sufficient to describe chemical reactions and to analyse an unknown compound, takes a reductive one. Neither may assert that their view is ultimate, but each displays a tendency in the respective direction. The chemist may also take an antireductive approach when dealing with the synthesis of a new compound; while the biologist may dissect an example of their organism to better understand its physiology.

So within each category there are degrees of membership and different people may pragmatically take different approaches depending on the kind of knowledge they are seeking. The latter gives us a clue to a more general approach to metaphysics.

When we are interested in substance, we take a reductive approach. In describing substances we may use a reductive epistemology, and in approaching metaphysics we may argue that one or more substances are fundamental. However, reductive approaches to structure do not produce knowledge. The biologist who dissects a dead specimen learns nothing about its behaviour. Indeed, once the organism is taken apart it no longer even exists. So in dealing with structures, systems, or complex objects, we need to take antireductive approaches: we look at things as wholes, or as parts of larger systems.


In my essay on Theseus's ship I described how both the planks and framing that make up the ship, and the structure they are made into, are required to obtain an object with the intrinsic properties of a ship. I argued that it was the structure itself that allowed the emergent properties to emerge. Structure is both existent and causal, and thus, by most definitions, real.

Everything we experience with our human senses sits somewhere in the middle of the scale between minimally simple and maximally complex. All the objects we experience are complex: everything is made up of parts, and those parts are not simple. So everything we experience requires us to consider both substance and structure - both reductive and antireductive metaphysics, epistemologies, and methods. Substance and structure exist together in a gestalt or dialectic. Focussing exclusively on one or the other excludes a considerable part of the universe from our purview.

Thus taking a fixed position on metaphysics that sides either with reductive or antireductive ontologies makes no sense. And the fact that so many Buddhists see philosophy as a dichotomy between two extreme forms of reductionism means that debates have unsatisfactory outcomes. The world in which we live is one that requires us to adopt strategies for knowledge seeking that are appropriate to the kind of knowledge that we seek. 

When it comes to understanding mind we also need to be aware of bias. For example, though we experience mental phenomena and material phenomena through different sensory modalities, and though mind appears to us as subjective and matter as objective, this does not mean that our experience accurately reflects reality. 

In my essay on experience and reality I noted that in the case of the sunset illusion, our motion and acceleration sensors (proprioception, kinaesthetics, inner ear, vision, and viscera) inform us that we are at rest with respect to the earth. In reality, the earth is turning, meaning that a person at the equator is moving at about 1,670 km/h. However, the circle being described is roughly 6,400 km in radius, and thus the acceleration is tiny and below the threshold of our senses. Hence when we watch a sunset we intuit that the sun is moving, because our usually reliable senses are telling us that we are at rest. In reality we are moving and the sun is still with respect to us (though of course it too is in motion).
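A quick back-of-envelope check of those numbers (assuming an equatorial radius of about 6,378 km and a sidereal day of about 86,164 seconds):

```python
import math

# Centripetal acceleration felt by a person standing at the equator,
# assuming Earth's equatorial radius (~6378 km) and a sidereal day (~86164 s).
radius_m = 6.378e6
day_s = 86164.0

speed = 2 * math.pi * radius_m / day_s   # tangential speed in m/s
accel = speed ** 2 / radius_m            # centripetal acceleration in m/s^2

print(round(speed * 3.6), round(accel, 3))  # → 1674 0.034
```

So we move at roughly 1,670 km/h, but the associated acceleration is only about 0.034 m/s², around 0.3% of gravity, far below anything our senses can register.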

The point is that experience and reality do not always coincide, and experience can be very misleading as a guide to reality. We should not set too much store by the fact that the experience of mental and physical phenomena seems real. It seems much more likely, given the evidence discovered by scientists, that there is only one kind of substance. Mental and physical phenomena are not fundamentally different. The universe is made of one kind of stuff (reductive monism), but that stuff is made into a myriad of complex and beautiful things (antireductive pluralism).

I have been mulling over what to call this view that combines substance reductionism and structure antireductionism and have been using substance-structure dialecticalism. I see substance and structure involved in an exchange that "creates" the universe, or at least makes it possible. 


23 February 2017

Morality and Metaphor

Looking at the metaphors that shape the way we think about morality, combined with some insights from evolutionary biology, helps explain why people take fairness and justice so seriously.

English has two main metaphors for morality, both of which are ultimately based on the schema of balance. In one, we quite literally see acts as having weight. In the Egyptian Book of the Dead, the heart of the deceased is weighed in a set of scales against a symbol of the law (the feather of Maat) on the other side. In this view, justice involves either lightening the weight of evil, or adding to the weight of good.

Theravādins used this metaphor in discussing karma, which can be weighty (garuka) or light (agaruka). Early Buddhists saw karma as inescapable. This is actually what Buddhaghosa meant when he referred to the "restriction on karma" or kamma-niyāma. Mahāyānists introduced many ways to avoid the consequences of actions through religious exercises, including confession. There is a list of such practices in Śāntideva's other book, the Training Almanac (Śikṣāsamuccaya).

The other metaphor is more abstract and involves book-keeping. When the credit and debit columns of the ledger match, we say the books are "balanced". It's the same schema, but a different metaphor. In this view an evil action is a debit, or a debt; a good action is a credit. In most human societies debts have to be paid, often with interest. This is why, when we've done something wrong, we say metaphorically that someone is owed an apology. The apology settles a debt. It balances the books.

The least sophisticated version of this is like for like (an eye for an eye). Other models allow substitutions, which is why we think of locking people up as "paying your debt to society". Moral debts can be settled by various methods: confession, atonement, restitution, reparation, etc. And of course debts may be forgiven. The same Hebrew tribes that gave us "an eye for an eye" had built-in mechanisms for forgiving debts as well. Later in this worldview, Jesus came down to earth to settle all our debts with God and leave the books balanced. This is religious genius and has played very well with the punters. Buddhism has never been so daring in its forgiveness of moral debts.
The various Buddhist versions of karma do not use this metaphor explicitly, but there is always a sense in which the rebirth one gets balances out how one has lived in this life. The metaphor is implicit - the principle at work is the balancing of good and evil.

Now this schema of balance and the resulting metaphors are neither accidental nor random. They emerge from the two fundamental features that all social mammals and birds share: empathy and reciprocity. Clearly reciprocity is the more important in the idea of moral balance. At base it is simple give and take. It is why the statement "actions have consequences" seems so intuitive to us (and why it is universally recognised as a moral principle). Reciprocity in the fullest sense, though, requires us to recognise and respond to the needs of others, i.e. empathy.

Social animals have to practise give and take to make a group successful. Sharing resources and making sure that even the weaker members of the group have enough is important because the evolutionary strategy of social animals is "safety in numbers". The coherence of the group is what makes it effective as an evolutionary strategy. Where those animals have a hierarchy (which they always do), being higher up the hierarchy is associated with greater privileged access to resources, but also greater obligations to the group. Groups gang up on predators, for example, and being higher up the hierarchy means being on the front line. Except in civilised humans, where leaders are often not on the front line physically. Leaders are seen as too precious to put at risk of death in combat.

But actually "actions have consequences" is not quite specific enough for morality to work. And here we can refer to Buddhaghosa's use of the term niyāma "restriction". The consequences of actions must be appropriate to the action (bīja-niyāma) and they must be timely (utu-niyāma). By bīja-niyāma Buddhaghosa meant that a kuśala action was restricted in such a way as to have a kuśala consequence, and an akuśala action had to have an akuśala consequence. Hence the image of a rice seed (bīja) giving rise to a rice plant. And by utu-niyāma he meant that consequences were restricted to arrive in the right season (utu), just as the monsoon rains come at the right time (at the end of three months of baking hot dry weather), or fruits and flowers all happen at the same time. Utu means "season" and can also refer to other cyclic processes like menstruation. Buddhaghosa added another restriction, which was that the karma had to ripen and could not be avoided, and this he called kamma-niyāma.

Where the consequences of actions are seen to be avoided, we call that unfair or unjust. Where the consequences are not appropriate to the action, we call that unjust. And when consequences are delayed, we call that unjust. We share this basic view not just with all other humans, but with most other social animals. Buddhism does not have a unique take on morality; it just has the same package in a different wrapper.

Now as regards kuśala/akuśala, it is apparent that these do not balance out in this life. Hence an afterlife is required, and a primary function of the afterlife is exactly to provide this balance. If the world is just, or the universe is moral, then an afterlife is necessary to make up for the obvious injustice that prevails in saṃsāra.

Timeliness can vary. Aṅgulimāla, for example, found all his karma ripening in this life (though for a mass murderer he got off very lightly). The Loṇaphala Sutta describes how someone poor in the Dharma might experience lifetimes in hell, but someone rich in the Dharma might experience only a trifling sensation in this life for the same evil action. Mostly the early Buddhists saw rebirth as the fulcrum of the balance - any imbalance in how we live in this life directs our rebirth. Later views changed, especially in relation to the extent to which a Buddha may intervene in this process (more and more as time goes on).

Of course Buddhists introduced the radical idea that one could escape from this cycle of actions having appropriate and timely consequences by ensuring they removed the conditions for rebirth. If one is not born, then none of these arguments apply. One is free of all these constraints and goes beyond explanation.

The acme of Buddhist debt forgiveness is the Vajrasattva mantra, which is said to purify all our karma in one go. It may well give us subjective relief, but it doesn't change how society sees the balance of our actions. The bottom line is that we are social animals, and all morality has to be seen in terms of how our actions impact on others and how their actions impact on us. We all understand morality in terms of "balance"; we intuitively know when things are out of balance, and we desire to see balance restored.

In other words, Buddhists seem to say that we can forgive ourselves for transgressions and that this will somehow magically translate into social forgiveness. It does not take much effort to see that this is never the case in practice. Society wants to see justice done, and it doesn't much care whether you have forgiven yourself.

This desire for moral balance can be frustrated in many ways, by the exercise of power for example, or because other demands are weightier. But the desire doesn't go away. If reciprocity breaks down, then the message we get is that our survival is threatened. The desire for justice is visceral and powerful for this reason. This is why people will kill if they perceive that it will restore the balance.

I haven't gone into how conservatives and liberals see things differently. This is another fascinating dimension of the cognitive approach to morality. But people will only read so much on the internet, and this rave is already too long.

This rave is based on ideas found in John Searle's book "The Rediscovery of the Mind"; George Lakoff's long essay "Metaphor, Morality, and Politics"; and Frans de Waal's book "The Bonobo and the Atheist". I highly recommend all three.