
12 December 2016

Anālayo and Momentariness

This was originally supposed to be a comment on a Facebook post by Kamalaśīla. He posted a video of Anālayo talking, and then an article about mindfulness in different traditions. But my comment was too long and broke Facebook.

~


I respect Anālayo, but I'm not convinced by this analysis, especially with respect to the need for and the impact of the Doctrine of Momentariness. This subject is a big part of the book that I'm currently writing.

Across the early Buddhist world there was a recognition, especially within the Abhidharma "schools", that karma cannot work with pratītyasamutpāda. The former requires that the consequences of conditions manifest long after the condition has ceased. The latter says that the condition must be present for the effect to manifest. The two, as found in the suttas, are mutually exclusive. This is my independent observation, but it is also one that Nāgārjuna makes in the Mūlamadhyamakakārikā (the opening verses of chapter 16). Nāgārjuna says that to admit that an effect does not cease immediately when the condition ceases is tantamount to eternalism.

Something is wrong with either the theory of karma or with the theory of dependent arising. And Buddhists of all stripes chose to modify the theory of dependent arising in order to allow karma to function. My conclusion is that the doctrine of karma was far more important to early Buddhists and central to Early Buddhism than is usually recognised.

The solution to this problem that came to dominate Buddhist doctrine was the doctrine of momentariness, but it was not the only contender at the time. The Sarvāstivāda ("always existing" theory) dominated the intellectual landscape of North India for a few centuries around the time of Nāgārjuna. And there were many others, some of which are documented only by their opponents' arguments against them. Nāgārjuna's own, wildly unpopular, solution was to relegate the whole mess of karma (agents, actions, results, rebirth) to saṃvṛti-satya or relative truth. In other words, none of it is real. But the primacy of karma in Buddhist intellectual life asserted itself once more, and Nāgārjuna's solution lost out to Vasubandhu's invention of the ālayavijñāna.

Momentariness modifies dependent arising to say that, rather than a single step between action and consequence, as implied in many suttas, there are an infinite number of infinitesimal steps. It is the calculus to the algebra of Early Buddhism.
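One loose way to formalise the analogy (my own illustration; nothing like this appears in the texts): the sutta model posits a single step from condition to effect, while momentariness interposes n intermediate cittas and lets n grow without bound, just as calculus refines a finite sum into an integral:

\[ C(t) \to E(t + \Delta t) \qquad \text{(sutta model: one step)} \]
\[ C = s_0 \to s_1 \to \dots \to s_n = E, \quad n \to \infty \qquad \text{(momentariness)} \]
\[ \sum_{k=1}^{n} f(t_k)\,\frac{\Delta t}{n} \;\longrightarrow\; \int_{t}^{t+\Delta t} f(\tau)\,d\tau \qquad \text{(finite sum refined into an integral)} \]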

Where Anālayo and I begin to converge is on the problem that, though the doctrine of momentariness is a step forward, it did not actually solve the continuity problem. That is to say, it did not provide a substantial enough link between action and consequence across lifetimes for karma to operate.

It is axiomatic that karma *accumulates*, and momentariness doesn't seem to allow for this. Another axiom of Buddhist Abhidharma, that cittas happen one at a time, means that continuity is not possible when two or more karmas must accumulate to cause effects in the future. On its own, momentariness doesn't provide the continuity required for karma to work.

Contra their own preserved texts, Theravādins promoted the idea that the viññāṇa is what carries karma through lifetimes (an idea that can still be found in modern expositions of karma); and, as Anālayo says, Yogācārins, especially Vasubandhu, invented a new viññāṇa/vijñāna, the ālayavijñāna: a place for karma to accumulate before manifesting. Yogācārins also accepted momentariness, so, contrary to what I was taught, the karmic seeds do not lie dormant until ripening, but in fact immediately ripen into identical cittas that create a stream of identical cittas connecting result to condition.

Now, Anālayo is very diplomatic about it, but the fact is that the ālayavijñāna doesn't exist. It's a hypothetical entity made up solely for the purpose of fixing a broken theory of karma - the theory that lies at the heart of Buddhism (though it is down-played in modernist accounts of Buddhism). It's just a post-hoc rationalisation, and not a very good one, as it doesn't really solve the problems with momentariness. So you cannot meditate on it or be mindful of it. At best it is an imaginative exercise in picturing the accumulation of karma and how that might affect your next life (which is where it will most likely manifest, according to most of the very many versions of the theory).

The continuity that we experience from moment to moment, and the accumulation of experience into habits of thought, cannot be explained by traditional doctrines. I tried for many years to go along with it all, but the internal logic doesn't work. The phenomenology we experience requires a wholly different set of propositions than what we get from medieval Buddhism. For example, what we experience must be smeared out across time, or we could not process change: change is the present being different from the immediate past, and we have to hold both in mind simultaneously to appreciate it. One citta at a time would not allow us to experience change. Similarly, nothing could surprise us if we did not constantly construct and hold in mind a probable future with which we constantly compare the present. There is very good evidence to suggest that anticipation is an important aspect of perception. In certain experimental situations we cannot tell what a percept is until we are told what to expect, and then it is clear as day. (Cf. my essay critiquing momentariness: http://jayarava.blogspot.co.uk/2016/07/the-citta-bottleneck.html)

I can of course see the continuing appeal of these medieval models of the mind, and why they persist despite being inaccurate and imprecise. After all, we still use the language of the four humours in everyday life. I know people who still understand the common cold to be related to catching a "chill". That is a remnant of the ancient Greek four humours theory, in which cold + wet produces an excess of phlegm, and imbalanced humours lead to illnesses such as melancholia and "colds". This is emphatically not how people catch colds, but it is widely believed to be the case, probably because colds are more prevalent in winter. All of us carry around superstitions and legacies of medieval thinking about the world.

But at some point the fact that our model is wrong will hamper our efforts to awaken and to pass on our understanding of the cultivation of awakening. If we are going to prosper, then we need to update our models according to the best scholarship of the day.

There are two problems with this. Firstly, authorities who change their minds lose their authority. We see this regularly with politicians. Critics will hound a government minister to change their policy; but if they do, the same people will turn around and savage them even more for "doing a U-turn", "dithering", or "lacking conviction". The public perception of an authority is that they must be decisive and resolute, especially in the face of criticism. They do not go around changing their minds. This is one of the great problems with climate change. In the early days the scientists involved trumpeted first one and then another dire warning, each different from the last. The media aren't good at subtlety, so the way they portrayed it didn't help. The public just interpreted this as "they don't know what they are talking about" or "they're just making it up - for some reason they are not revealing". Now that the scientific consensus has emerged more decisively, it's an uphill battle to convince most people, because climate scientists are not seen as authoritative.

A second difficulty is that the people in the best position to talk about the phenomenology of awakening have mostly been steeped for decades in the medieval worldview of Buddhism, and like everyone else in the world they fall victim to confirmation bias. The tradition itself is hyper-valued and almost never critiqued or criticised (in the way that I do, for example), so there is very little motivation to change: it all feels right, the scholars are not challenging the view, and it's our connection to a long tradition that validates our approach to life. And since that approach to life often entails considerable sacrifice and hardship, endorsing and emphasising the "truth" of the motivation to undertake that sacrifice seems intuitively right.

I said in an earlier comment that Anālayo is truly non-sectarian. But he is still a celibate Buddhist monk. His Theravādin scholar colleagues are quite openly biased towards the Theravādin sect. They have given up family, career, sex, and sexual relationships to commit themselves to being Theravāda monks, so it's no surprise that they find plenty of confirmation for their chosen lifestyle. And no surprise that, in this day of challenges to their authority from outside Buddhism, they have started to write impassioned apologetics for the medieval worldview they follow.

Anālayo is not sectarian in this narrow sense. But he is still partisan for the Buddhist tradition more broadly. He is no doubt brilliant, industrious, and dedicated; and he meditates a lot as well; but he is wholly engaged in confirming the medieval worldview that motivates him to do what he does in the lifestyle he has chosen. This is great for people who share his worldview and his lifestyle, including many of my friends, colleagues, and acquaintances.

Somewhere along the way that medieval worldview fell apart for me. I realised that it was inaccurate. I'm sure *something* does happen when one experiences emptiness, for example. And I'm sure that people who have these experiences find them enormously valuable. But my experience certainly does not fit the medieval models I've spent most of the last 10 years studying. From talking to friends, colleagues, and acquaintances (some of whom have very strong insight experiences) my sense is that the models don't really describe their experiences either.

Let me give a couple of examples. We all still talk about viññāṇa, but for the life of me, after more than 10 years of learning Pāḷi, reading texts in Pāḷi, and saturating myself in the academic literature on the language and doctrines, I cannot tell you what viññāṇa meant to the people who composed those texts. I can confidently say that it does *not* mean "consciousness", even though it is still almost universally translated with that word. It seems more to relate to a conscious state, but the meaning was so transparent 2000 years ago that the term was never defined in detail. I have similar problems with a number of common Pāḷi terms that seem not to give anyone else any trouble: nāmarūpa, saṅkhāra, vedanā, dhamma, manas, citta. We all think we know what these words mean, but inevitably we define them in ways that make sense to us. So the worldview is medieval, but much of the terminology is given a modern spin because its actual meaning is no longer clear. The etymology of vedanā, for example, is that it is related to veda and to the root √vid 'to know, to see, to find'. It's from an action noun vedana 'knowing', or perhaps 'seeing', and cognate with either our "wise, wisdom" or our "video, vision". And we translate it as "feeling"? That has to be wrong.

Another conundrum is that in Pāḷi, emotion is not a category of experience separate from thought. In our world we have thoughts, emotions, and physical sensations. In Pāḷi there are just sensations related to the body (kāyika) and sensations related to the mind (cetasika). They have names for states that we label "emotions", but they don't understand an emotion to be different from a thought. There is a mismatch here that I, for one, now find confusing. Why would I insist on using medieval (or even Iron Age) Indian ways of thinking about my experience when I grew up in 20th century New Zealand and now live in 21st century England?

What I would love to see is that we start to move out of the medieval world and into the time we actually live in. Living in the present, as well as in the present moment. Which is about a lot more than using Facebook to keep in touch. I would love to see those with a fascination for meditation start finding a fascination for Antonio Damasio, Thomas Metzinger, et al., so that we examine the phenomenology of experience and express what we find anew; so that, rather than looking for confirmation of our existing view, and celebrating when we find it, we describe what is going on in the language of *our* day. There's no legitimacy in couching everything in archaic terms any more.

People sometimes argue that the words don't exist in English. But English has a much larger vocabulary than either Sanskrit or Pāḷi (and most other modern languages, in fact), and it is supremely welcoming of neologisms and loan words. So do what Shakespeare, Milton, and other writers did when they couldn't find the exact word they wanted, and make something up!

But then we come back to the motivation to break from tradition, when tradition is what validates us as community members and/or leaders and as experts in the field. Facility with the traditional jargon has its own kudos, even if the words are not wearing any clothes. Unless we are sure that others will follow, most of us are not willing to go out on the limb that I'm on. I happen to have the kind of personality disorder that means I'll be out here sabotaging my membership of the group anyway, so I might as well do everyone a last service by shouting my conclusions as I fall from the tree...

08 December 2016

The Hard Truth Behind Post-Truth

This post-truth thing is a blessing in disguise. We've been labouring under a massive misapprehension for a couple of centuries, i.e. that human beings are fundamentally rational.

Those who understand this have been exploiting it since at least the 1920s, when Freud's nephew, Edward Bernays, convinced US women that smoking cigarettes was a symbol of their freedom, thus dooming millions of them to miserable deaths from lung cancer and/or emphysema.

Of course, using fantasies to change minds on a mass scale has been stock in trade for religion over millennia, but the priests were probably no more informed than their flock and were going on instinct.

There is something sinister about this knowing manipulation of our decision-making processes by psychologists. The government now do it as a matter of course. And it's out in the open that, in the right mouth, lies can be more persuasive than truth.

We knew all this, if for no other reason than that it came out in the close examination of how Nazi propaganda turned the German people against their neighbours near and far. Of course, our governments are almost as bad. The British Empire was a fucking disaster outside of Britain, a genocidal monster, but the government has continuity and controls the narrative at home. They made sure we knew what monsters the Nazis were, without ever admitting to the atrocities that they themselves committed. None of us want to believe that our side are the monsters. But in this case we are.

A marketing executive explained this to me 25 years ago (for Kiwis: he invented the hugely successful "Trim Pork" marketing campaign). People, he told me on the marketing course for librarians that I attended, make emotional decisions and then look for rationalisations after the fact. All my research, reading, and experience since then has borne this out. Reasoning is often just an afterthought to make sense of how we feel about things. It has very little to do with how we decide things.

Sit down to think a problem through and most individual humans immediately fall into one or more of dozens of cognitive biases and/or logical fallacies. It turns out, however, that we do much better in small groups. Here, the ubiquitous confirmation bias allows me to present the strongest case I can for my idea, while the group will look for and find flaws, because they have no investment in confirming my bias. Small groups of like-minded people are by far the best approach to decision making. Of course, groups are also susceptible to group-think. Nothing is perfect.

Democracy, as we now employ it, is pretty hopeless, because it is predicated on voters having accurate information about who they are voting for and making a rational decision about who would best represent their interests. Since neither of these propositions has *ever* been true, it's probably best that we see the system comprehensively failing, because this might provide the motivation to fix it.

But the fact is that the lesson has not yet gotten into the core of our understanding of ourselves. We probably have a couple more generations of ruthless exploitation of our myopia with respect to ourselves, by knowing and unscrupulous parasites, before we start to clock that the story was wrong all along and think about rewiring society.

And there is no point in demonising ordinary people in any of this. This is not happening because ordinary people are stupid. If anything, it is happening because intellectuals are stupid. After all, it is intellectuals who have promoted this completely false view of humanity. Mind you, they generally replaced a religious view that was even more wrong, so generally speaking the trend is towards less stupidity. As I've said before, I'm mildly optimistic about the species; it's just the individual members I don't like.

In which case the question becomes: can we finally discover what we really are (i.e. social monkeys) before we cause our own mass extinction? I'm not necessarily against the mass extinction of Homo sapiens, but it is a shame that we seem so determined to take so many other species with us. But in the long run, life will continue on well beyond any ecological disaster we might cause.

Bacteria are the dominant life-form on the planet, and have been for the 3.5 billion years since they appeared. They've survived much worse than humanity in those millions of millennia. Much worse! So that's a happy thought, eh?



07 December 2016

Frequentatives

Here's a nice little English thing. Those words that end in -le are frequentatives. I tramp once, but if I do it a lot it's a trample. A lot of the original words are now lost. If you are in fine fettle, for instance, there is no word for the single instance of being fet (fettle is from the Lancastrian dialect).

If scrambled eggs are too intense, then just ask for them to be scrammed once and leave it at that. Cuddle? No, just one cud, please! Too many wrangs make a wrangle (wrang is a past tense of wring). And so on. On the other hand, a bell rings, and what a telephone does is, in fact, ringle.
Sometimes a frequentative is just avoided. I can stomp, but no matter how many times I stomp, I am not stompling.

However, there are a few faux amis. A "single" is not singing all the time. Single comes from a Latin root sim- with a diminutive suffix, giving singulus (one, individual, unaccompanied). From this root we also get simple. Singulus became Old French sengle or sangle, and by the 14th century, English single.

I often ride, but this is not riddling. In fact riddle is an imposter that ought to be spelled riddel. Here the root is related to read (originally "to advise or counsel") and the original noun suffix was -els. Some genius thought the s was an incorrect plural and so it was dropped. And then the -el became -le. Why this happened is a riddle in itself. There is another kind of riddle, a kind of coarse sieve, which ultimately comes from a Proto-Indo-European root *krei-.

"Disgruntled" is interesting, because we still have grunt, but we've lost the positive frequentive, gruntle. We can't be gruntled, but we can be dis-gruntled (which I often am). Gruntle means to grunt a lot. Something a pig is thought to do when content. Though I suspect a lot of things that people say about pigs are made up.

It gets more interesting when you create an agent noun by adding -er (almost the same as the Sanskrit form with -ṛ). To frequently whit is to whittle, and one who does this is a whittler. Or if you often whis, you are a whistler. Politicians are bullshittlers.

The origins of babble are not, as is sometimes supposed, connected with the story of the Tower of Babel, but with a now unknown word bab for a sound, perhaps onomatopoeic or imitative of baby talk. Bab bab bab... babble.

I'll stop prattling now.

30 November 2016

Recreational Drugs and the Law

It ought to be clear by now that people are not going to stop taking drugs recreationally. Prohibition hasn't worked. It won't work. Prohibition never works.

So the question now is, how long are we going to tolerate the supply of, and profit from, recreational drugs being in the hands of multi-billion pound, international criminal gangs? How long do we let our kids and peers buy drugs from criminal gangs, with no standards or guarantees on content or purity, no advice about safe use, and an active discouragement from seeking help with drug problems?

Government policy is supporting the criminal gangs by ensuring that they are the only ones who can supply recreational drugs. Vast amounts of money are wasted on pursuing drug users and drug gangs to almost no effect, since drugs are freely available and we cannot even keep drugs out of our prisons.

Some people advocate harsh prison sentences. This has been tried in the USA, where they introduced mandatory minimum sentences. The prison population tripled, mainly due to the extra sentences given for non-violent drug crimes. The three-strikes rule means that a lot of people have life sentences for non-violent drug crimes. And has drug use abated in the USA as a result? No, it hasn't. In the UK it costs about £50,000 per annum to keep someone in prison. The average wage is just £30,000. Due to budget cuts and privatisation our prisons are dangerously overcrowded and understaffed, and many of the buildings are suffering from decades of neglect. If we start jailing more drug dealers for longer, where is the money going to come from? Where do we put these people? If harsh penalties don't work elsewhere, what makes us think they'll work here? And remember, drugs are freely available in prison.
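To make those figures concrete (a back-of-envelope sketch; the headcount and sentence length are mine, purely for illustration): jail an extra 1,000 dealers for five years each and the bill is roughly

  1,000 × £50,000 × 5 years = £250 million in custody costs,

on top of perhaps 1,000 × £30,000 × 5 = £150 million in forgone average-wage earnings. Call it £400 million per thousand people jailed, with no evidence of any reduction in drug use to show for it.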

We've had prohibition for about 100 years. Vast amounts of money (trillions) have been spent on prohibition; hundreds of thousands of people have been killed in the course of enforcing it; millions have been turned into criminals and spent time in jail for trivial drug possession. And nothing has changed.

Governments should be obligated to make rational policies and to abandon policies that are demonstrably unfair, unworkable, ineffective, or harmful.

This is not to say that no one who uses drugs will suffer ill effects; a small percentage will. But guaranteed levels of strength and purity, combined with accurate safety advice, would mitigate most of the accidental harm. Proper education that was practical rather than moralistic would also help. Being able to seek help without fear of criminal prosecution might also mean that problems are less likely to escalate.

Addiction would continue to be a problem for the minority of drug users who get addicted. But such people would no longer be criminals. They could openly seek help. Chronic opiate addicts could be prescribed heroin, and past experience tells us that this would be a much more cost-effective way of dealing with the problem. Addicts frequently turn to crime to support a habit. Give them drugs that cost us pennies and they stop doing petty crimes that cost us thousands; and we stop locking them up at a cost of tens of thousands. Give them drugs of known strength and purity, clean needles, and counselling if they want it, and they will most likely stay relatively healthy. They won't burden the health system with serious problems like AIDS, hepatitis, septicaemia, or accidental overdose. To clean up, an addict needs stability and supportive conditions, not easily found while trying to get money to score drugs from street dealers.

Remove the irrational prohibition of drugs and their therapeutic uses could be explored more easily. Some of these drugs have important effects that could help many people.

Current drug laws are irrational, unfair, unworkable, and inefficient, and they criminalise a lot of people who are really not criminals. They don't work and they cost too much. And there are better ways to reduce the harm from drug use. Not all recreational drug use is drug abuse.

I don't necessarily endorse drug use. Certainly we would still want to restrict children's access to drugs, as we do with other things that might harm them. But adults ought to be able to make their own informed decisions about these things, just as we make our own decisions about who we have sex with, what kinds of sexual practices we enjoy, and who we marry. It's not up to me, or the government, to dictate anyone's lifestyle, as long as that lifestyle is broadly compatible with the continued existence, prosperity, and security of society. In the vast majority of cases, no one is harmed by drug use. Far fewer people would be harmed if drugs were made safer by being out in the open and regulated. There would be less motivation to seek out alternatives of unknown properties. Make the old favourites freely available and the novelties would be much less attractive.

We've had a century of irrational drug policies and laws. It is time to have rational policies and laws.

29 November 2016

The Aliens are Coming

I went to see the film Arrival yesterday. This review/essay will contain spoilers, so don't keep reading if you want to see the film without foreknowledge (which would be ironic). I don't recommend paying to see it however. The film is dumb and boring.

As with the film Contact, I was very disappointed. These are sciency fantasy films, not science fiction. I say this because the laws of physics are simply abandoned, the stories employ multiple deus ex machina devices, and magic is the dominant paradigm, followed by the Romantic myth.

I find the lack of distinction between science and magic irritating, and more recently I have become bored with the standard tropes: time travel or knowledge of the future, aliens with improbable body plans, faster-than-light travel, substances impervious to analysis, telepathy, etc. Such fantasies may make useful plot devices for Hollywood writers, but they are fantasies with no basis in reality. And they are used so often that they have become clichéd and passé. The key aspect of good science fiction is that it has a basis in reality. The best science fiction mostly obeys the laws of physics, or breaks them knowingly and makes it clear that this is unusual.

Here's the thing about aliens. If they ever come, which is massively unlikely, it will have taken them centuries or millennia to get here. The necessity of solving all the same engineering problems to get off-planet places strict limitations on what they will be like. They'll be roughly the same size as us. Too big, and getting out of their gravity well in the first place wouldn't be feasible. Too small, and developing metallurgy (mining, smelting, forging, etc.) wouldn't be feasible. They'll be physically strong, but capable of fine, dexterous manipulation of objects. They have to get around and make stuff, so some kind of analogue of legs, arms, and hands can be expected. They will be intelligent and will use language and writing. They will be social, and thus prosocial, because getting into space requires collaboration on a massive scale. Being social, they will understand reciprocity, fairness, and justice. They will empathise, at least with their own kind, and have social mechanisms for limiting and managing intra-group conflicts. Still, they may well be hostile to outsiders, just like every other social species.

Their metabolism will be carbon-based, because nothing else is feasible. Indeed, their chemistry is likely to be very similar to ours, because there are only a certain number of elements, and only certain conditions under which both complexity and continuity over time are possible. If their planet was too cold, too hot, too acidic, too alkaline, etc., then either complexity or continuity would be impossible. Silicon simply does not allow for the required flexibility; at the kinds of temperatures and in the chemical environments where complexity and continuity occur, silicon tends to rapidly oxidise and form extremely inert compounds like silicon dioxide.

Getting out of a gravity well uses up enormous quantities of resources and is a very complex engineering problem: suitable fuel/oxidiser combinations and cryogenic storage of same; pumps and reaction chambers for freezing cold but incredibly volatile compounds; materials capable of withstanding extremes of cold and heat. These tasks would only be possible for a narrow range of life forms. No solitary apex predator would ever make it into space on their own, for example.
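The scale of the problem is easy to quantify (a standard back-of-envelope calculation, mine rather than anything in the film). The Tsiolkovsky rocket equation relates the achievable change in velocity to the exhaust velocity and the ratio of initial to final mass:

\[ \Delta v = v_e \ln\!\left(\frac{m_0}{m_f}\right) \]

Reaching low Earth orbit takes roughly Δv ≈ 9.4 km/s once drag and gravity losses are included, and the best chemical propellants (hydrogen/oxygen) give vₑ ≈ 4.4 km/s, so m₀/m_f = e^(9.4/4.4) ≈ 8.5. In other words, nearly 90% of the launch mass must be propellant before anything useful is carried at all. A bigger home planet makes this exponentially worse.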

Space is a vast desert, far more extreme and inimical to life than any terrestrial environment. Getting up there is one thing; surviving for centuries in inter-stellar space, with no pit-stops or comfort breaks, is another thing altogether! Having continuity of purpose over centuries is something we have yet to achieve.

Of course the Arthur C. Clarke dictum will hold: any sufficiently advanced technology will seem like magic. But so will the McLuhan dictum that the medium is the message. Technology extends the human senses and sensibilities. I've commented to several people recently that my current mobile phone would seem like magic to my younger self. But when it comes down to it, the device enables me to do things like talk to people, educate myself, make music and art, record and access memories, keep track of time, and so on. The how seems like magic; the what is utterly mundane and predictable. Aliens will also use technology to extend their senses and fulfil their desires for connection and continuity in predictable ways.

In short, the universe places limitations on what aliens will be like. And in all likelihood they will have evolved in parallel with us and not be so very different. Indeed, a fish living on the deep ocean floor might be more alien, precisely because it has solved a completely different set of problems from those of us who live on land.

The other main theme of the film is linguistics. In this, the film follows in the footsteps of books like Babel-17, Children of God, and Embassytown, all of which I recommend reading. The film centres on written communication, which is fine; it's quite a likely scenario. In fact, I have thought about this myself: in my imaginary sci-fi novel it is the Chinese who make the breakthrough with the aliens, precisely because of the way their written language works and how that affects the way they think about language. The Heptapod writing is also logographic, like Chinese, but the aliens seem to have no trouble understanding the English alphabet. In a real encounter a lot of time would have to be spent introducing the signs themselves and the idea that they represent sounds, but this gap in the plot is understandable given Hollywood's attention span. Logographic writing would simplify this process enormously, since sound is not inherent in it the way it is with an alphabet. The actual Heptapod writing is less credible - again, it requires magic to shape the ink. Why resort to magic at this point? What does it achieve that a more mundane approach would not have?

The idea that the language we speak shapes our worldview is referenced by name in the film as the Sapir-Whorf hypothesis. Unfortunately this hypothesis is largely discredited amongst linguists, though it does get an outing from time to time from amateurs and journalists. And perhaps this is fitting, because technically Whorf was an amateur linguist, in the sense that it wasn't his main job. The hypothesis was one of the main themes of the novel Babel-17, in which a language is used almost as a trojan-horse virus to take control of people's minds. The viewpoint angle might have been an interesting one to explore (one day I may even do so in my sci-fi novel), but again the film introduces a magical element, because learning the Heptapod language somehow allows the speaker to perceive time differently and thus know the future. Again, why does magic trump science in this way? What are the film-makers thinking at this point, where they act like genies granting impossible wishes? Who is served by this form of entertainment in which magic dominates reality? (Hint: it's not the workers.)

Part of the reason I'm annoyed is that this was a lost opportunity. The ability to see the world through another's eyes, to appreciate their worldview by learning their language, is full of potential as a story line. And if there was ever a time when Americans needed to expand their horizons and see the world through other eyes, it is certainly now. But the simple wonder and value of this perspective is lost in the magic bullshit about knowing the future. As a plot device, knowing the future is about seeking certainty and security in the known; it is the opposite of expanding one's horizons by embracing the unknown. So the opportunity is wasted.

So, Arrival is a bad film because it does not pay enough attention to physics and misses valuable opportunities. It uses magical tropes in order to pursue a Romantic agenda (fundamentally the film is about a couple getting together and having a doomed child). This is nowhere more evident than in the last five minutes, which are overlain by the most intense, extended violin-wanking I think I've ever heard in a film. The film blatantly attempts to wring emotions from the audience with the device of a dying child, had with the foreknowledge that she would die young from a rare form of cancer. Could it be any less subtle? Hardly. It's sledgehammer stuff, and at the end of a long film in which science and linguistics are buggered many times, I felt less than charitable about it.

There were a few moments when, if I'd had a compass or other sharp object, I might well have stabbed myself in the leg to try to block out how awful the film was. The worst moment, perhaps in the whole film, was when the male lead, asked whether he would do anything differently if he could have his life over (and this with the knowledge that they can now know the future), says that he might express his feelings a bit more. That's his life's biggest regret? Kill me now. Of course he's only saying it because he wants to fuck the main character. Feminism has achieved a lot for women, but it's made many men into contemptible fucking idiots.

I said there were spoilers in this review. But the real spoilers, i.e. the things that spoil the movie, are in the movie itself: the bad plot, the magic, the lost opportunities, the Romantic bullshit.

24 November 2016

Hierarchy of Otherness

In Orson Scott Card's novel Speaker for the Dead, the author invents a hierarchy of otherness, or as he calls it, a hierarchy of exclusion, based on words from Nordic languages. The levels are similar in some respects to the community limits discussed by Robin Dunbar in his work on human evolution, particularly on group sizes (see Dunbar's book Human Evolution).

Card Anglicises the words for the different categories and leaves off the diacritics that would accompany them in the original. Briefly the levels, with restored Swedish diacritics, are:
  • Utlänning (Swedish 'foreigner'), literally an 'out-lander', is someone who is recognisably human but from a different country. 
  • Främling (Swedish 'stranger') is also human, but from a different world. In the novel there are 100 human colonies out amongst the stars. 
  • Råmän (probably from Swedish rå 'raw, crude' + män 'men'; sounds like raw-men) are not Homo sapiens, but are recognised as 'human' and can be communicated with. 
  • Varelse (Swedish 'creature') are true aliens, not human, possibly sentient, but so different that communication is impossible. In the novels, this category also includes animals. 
  • Djur (Swedish 'beast') are beings so alien that we cannot guess what their minds are like. All we can do is fight them. 
Arguably some primates are more råmän than varelse, because we can communicate with them to some extent. Also, in observing primate behaviour it is not, in fact, that difficult to see that their minds are much like our own. They are more different from us than any other human, except perhaps a psychopath, but they are more similar to us than has been popularly conceived. Watching them, it is relatively easy to recognise emotions and motivations, for example. However, a recent attempt to have chimpanzees granted human rights in the USA failed. I don't know the reasoning, but presumably it was because, though chimps do need the protection that such rights might afford them (mainly protection from humans), they certainly would not be able to respect those rights in others, nor the obligations entailed in the declaration of human rights. They have their own kind of morality, but it is almost entirely in-group focussed. Killing an out-group person would not trouble their conscience for a second. Nor would infanticide.

Card's books explore the råmän/varelse distinction following the apparently complete destruction (the xenocide), in the book Ender's Game, of the "buggers", an alien species, by Earth's military, with child-soldier Andrew "Ender" Wiggin as commander. The buggers were considered varelse, with no possibility of a political or negotiated settlement to the dispute between them and Earth. To Earth's leaders, killing them all seemed the only possible approach, so they took the brightest children and gave them intensive training in strategy before getting them to play war-game "simulations", which turned out to control real fleets of spaceships in real battles with the buggers. Ender's subsequent discovery of an egg containing the germ of a bugger hive queen and her consciousness is the starting point for a number of sequels. In making psychic contact with the Hive Queen, Ender realises that the buggers are not varelse but råmän, and that the xenocide was in fact a mistake, a crime of unprecedented proportions. And of course he feels responsible.

22 November 2016

Life After Death and All That.

What is life?

Life is a collection of chemical reactions in an energy gradient across a membrane. Fundamentally, what drives life is the reduction of CO₂ by hydrogen. This results in the production of complex carbon compounds, which I call macro-molecules. Life as we know it involves four main kinds of macro-molecule: proteins, nucleic acids, lipids, and organo-metallic complexes. We are ~90% water, ~9.9% macro-molecules, and ~0.1% salts of various kinds. Some of the macro-molecules, particularly the nucleic acids, have the ability to self-replicate. Macro-molecules function as structural elements, as catalysts, or as templates for the production of the first two.

The original energy gradient was probably hydrogen and methane gas bubbling up from warm, alkaline undersea vents into cold, acidic, CO₂- and iron-rich sea water, through porous structures made of precipitated calcium carbonate. Nowadays the most important energy gradient is provided by sunlight falling on the surface of the earth.
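Schematically, the driving reaction is the reduction of carbon dioxide by hydrogen; the simplest version is methanogenesis (a textbook equation, given here only by way of illustration):

\[ \mathrm{CO_2 + 4\,H_2 \longrightarrow CH_4 + 2\,H_2O} \]

This reaction releases energy under vent conditions, and more complex carbon compounds branch off the same chemistry; the vents supply both the hydrogen and the gradient across which the reaction runs.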

The dominant form of life is bacterial (taking in the domains of Bacteria and Archaea), and has been for at least 3.5 billion years; probably nothing will change that. Eukaryotes (all other forms of life) are certainly everywhere, and multi-cellular eukaryotes certainly make a lot of fuss, but bacterial life is more numerous, more diverse in form and genetic variation, adapted to a greater range of ecological niches, and greater in biomass. Furthermore, all other forms of life rely on bacterial symbionts to survive: from mitochondria and chloroplasts within animal and plant cells, to our gut microbiome. Without our bacterial symbionts, we'd be dead. The next most dominant form of life is fungi. 99% of life on earth is bacterial or fungal; 99% of eukaryote life is plants; 99% of animal life is invertebrate; 99% of vertebrates are fish. Whatever led humans to consider themselves the dominant life-form on the planet?

Life is intrinsically interesting because it is exceedingly complex and self-sustaining. Life modifies the environment to make it more suitable for life, consuming resources and converting them into waste products, which in turn become resources for some other form of life. Life shifts the environment far from its natural (or chemical) equilibrium. Life is all interrelated and interactive. Everything relies on everything else.

In this age of individualism, winner-takes-all, and survival of the greediest, the fundamental themes of life—interconnectedness, communities, cooperation, symbiosis, ecological networks, recycling, equilibrium (or homoeostasis)—give us an alternative starting point for thinking about how we understand the world, our place in it, and how we ought to live. In the long term, our birth and death are simply short cycles of resources being used to create structures and then being returned to the pool for reuse. Ideally how we live will be conducive to life generally, but life is incredibly adaptive and it won't matter how we live in the long run: life will adapt. And when we die, our molecules and elements will be recycled just the same. We are waves in a field of resources; rising, falling, rising.

But what we mean by "life", in the context of life after death, is usually tangled up with notions of conscious life. When we talk about "life after death" we don't mean life per se. We don't mean the chemical reactions. We mean life in the sense of our conscious existence. What we seek in talking about life after death is continuity of our inner lives. "Life after death" plays on the ambiguity of the word "life". To be more accurate we ought to say "consciousness after death", rather than "life after death".


Consciousness After Death

On one hand, life does continue after death. Our bodies become food for a host of bacteria and fungi, which recycle everything we are made of and return it to the environment to be used by other forms of life. Nothing is wasted in life. Indeed, some have observed that most of the molecules that make up our bodies came from other living things originally. Even the air we breathe is recycled: 99% of the oxygen in the atmosphere is excreted by plants and algae.

But this doesn't solve the problem of our attachment to our mental survival. Most people don't care about physically coming back after death, though zombies are a very popular meme at present. Most people accept that their bodies won't last, but want their memories, their personality, and their opinions to survive.

This way of thinking is only possible because we routinely divide the world into two: physical and mental. It is true that we know about the world in two main ways, which for convenience we may label "physical" and "mental". But to generalise from this that there are two corresponding modes of being is a leap of faith. And it's not one that is supported by our systematic investigation of the nature of the world. Everything points to one mode of being, of which there are various manifestations, and which we experience in a variety of ways because of the windows we have on the world, i.e. our senses.

The idea of a dual mode of being continues to appeal for a variety of reasons. Since our knowing seems to come in two varieties, the idea that the world is literally divided in this way seems plausible. There are a number of experiences, including dreams, sleep paralysis, out-of-body experiences, and so on, that make a non-material mind seem highly plausible and almost certain to exist. We are predisposed to confirmation of our opinions, and tend to stop seeking explanations once we have one that is even vaguely plausible. So dualism is the norm. It is wrong, but the reasons for this are subtle, complicated, and counter-intuitive. So it's hard to convince most people on this score.

Also, we see the dissolution of bodies at death. The sights and smells of putrefaction elicit disgust because the by-products of this process are poisonous to us; disgust protects us from eating poison. The naive view, however, sees the putrefaction of the body and rebels against the idea that the mind goes the same way. We deeply desire our inner lives to continue. And the certain knowledge of death creates a cognitive dissonance.

So humans mostly create this split in their minds. They divide the world into physical and mental, or into matter and spirit. Each side has strong associations and metaphorical entailments. Physical is cold, hard, heavy, unresponsive, lifeless, typified by rock and by putrefaction. Mental is warm, soft, weightless, responsive, living, typified by light and renewal. Spatial metaphors are important: we are standing, upright, and up when alive and well; prostrate, flat, and down when asleep, ill, or dead. Physical is down; mental is up. Punishment is bad, hence down; hence Hell is the underworld. Reward is good, hence up; hence Heaven is up. Matter is corrupt; spirit is pure. Matter is temporal; spirit is eternal. And so on. There is a network of entailments and associations that makes up a self-consistent worldview. It's just inconsistent with reality.


Dualism is False

Dualism is intuitive and its consequences are desirable. It implies that we can escape the fate of our bodies, escape putrefaction. We can, in short, survive death and live forever. Hallelujah. On its own this is too simple. But we are social animals, hierarchical, and moral. We are also biased towards perceiving things as conscious: animism is the most widespread belief there is. According to one survey I read, 100% of modern hunter-gatherers are animists, while only 80% also believe in an afterlife. If life really resides on the spirit side of things, then disembodied living things become plausible, and our bias towards seeing consciousness in the world reinforces this. So most pre-modern humans live in a double world: a world of matter, with beings made of matter but enlivened by spirit; and a world of pure spirit. Special people, shamans, can bridge the gap and communicate between worlds. In civilisation, shamans become priests.

As intuitive and plausible as it seems, dualism is false. The two ways of knowing create the illusion of two kinds of world, but in reality they are two kinds of window on one world. All the reliable evidence we have about the world points to this conclusion. There is in fact no distinction between mental and physical being. We live in one world, at most. So none of the stories we tell that are based on duality are true: God, ghosts, spirits, the afterlife, ESP, rebirth, karma, etc. None of it is true. This is a tragedy. A wrench. A blow. A crisis. A source of cognitive dissonance. Most people will not accept this argument because it conflicts too much with what they think they know.

Even atheists often accept this intuitive dualism. Scientists tacitly accept the dual nature of the world even when they argue that only the material is real. You cannot have an argument over which is real (or more real)—mental phenomena or physical phenomena—unless you first accept that the distinction is valid. If, like me, you reject this distinction, then the scientific materialist argument starts to look as suspicious as any religious argument.

Unfortunately, this insight into the true nature of the world, the one world, means that the life after death that we crave is not possible. This is because what we think of as mental is not separate from what we think of as physical. To put it another way, the part of the world that we view through the window we label "mental", is not different from the part of the world that we view through the window we label "physical".

A dramatic demonstration of the oneness of the world, and of the relation between mind and matter, can be found in the aetiology of Alzheimer's Disease. In this disease, protein plaques form that disrupt the connections between neurons and eventually kill them, especially in the hippocampus, where memories are made and stored. As the connections in the brain are disrupted, the person progressively loses their ability to function in the world. New memories stop forming, then older memories are lost. One gradually loses the ability to recognise people, places, and things. One's sense of identity, which is based on memory, is degraded and gradually lost. Sufferers have increasing problems with reasoning, concentration, and orientation. Changes in personality, such as aggressiveness, may appear. Eventually a person with Alzheimer's loses the ability to perform basic functions like eating, and they die.

The progressive destruction of the brain destroys everything about a person that makes them unique and special; it destroys everything that makes them a person. It destroys their inner life, their personality, and their opinions. And it does all this before it kills them. Presuming they survive long enough for the disease to progress that far, the person is gone long before the body finally stops metabolising. A more tragic end for a person is difficult to imagine. If one believed in a God, one would be tempted to conclude that a God who included Alzheimer's in their creation was incorrigibly cruel.

Dualism would predict that a disease like Alzheimer's would have no significant effect on the mind, because the mind is not dependent on the brain. Monism, the one-world theory, predicts exactly what we see. Modified versions of dualism exist which try to offer workarounds for cases like this, such as the brain-as-radio-antenna theory, but these fail to explain other aspects of mind functioning. Monism is the only worldview which correctly predicts the effects of Alzheimer's.


One World, One Life

The hard truth is that we only live once, and we live that life in one world. But the way we evolved makes us susceptible to all kinds of beliefs about life and the world that are not true. So, weirdly, most of us live out a delusion. And many people are happy to exploit that susceptibility to delusion for their own benefit. Some even sincerely believe that their delusion is a better delusion than your delusion. The thing about genuine delusions is that they are compelling. A genuinely deluded person has no conception that they are deluded. They understand themselves to be seeing things as they are. Given that delusion is the norm, it's better to adopt a sceptical stance and assume that one is not seeing things as they are. There is always room for improvement.

I tend to state things as I see them, since it is only by doing this that one can be clear about what one thinks at any given moment. But I think one can see that my thinking evolves over time. I'm prepared to accept new information and to change my mind.

11 November 2016

Day Three

Day Three of the Apocalypse

What is that smell? Perhaps people should take a break from flinging turds and wash their hands!

Empathy at its most basic level is emotional contagion. We unconsciously just pick up on the level of emotional/physiological arousal of our peers and tend to match it. If they are alert, we become alert. If they are relaxed, we relax. Everyone is responding to everyone else. When we become aroused or alert, we scan the environment for threats or opportunities and what we find, or think we find, becomes the rationalisation for how we feel. How we feel is pretty much a function of how the people around us feel. This is why there is truth in the meme:
"Before you diagnose yourself with depression, make sure you are not surrounded by arseholes!"
At the moment, a lot of people I know feel threatened by Donald J Trump, though really they seem to be picking up on the media hype about him, i.e. suffering from emotional contagion planted by companies that trade on stimulating negative emotions. They are pouring out emotional signals of distress and arousal. Threat signals. There's a kind of hysteria. This is not new.

When Obama won the Presidency the first time, the same kind of thing happened, but for the people I know it was jubilation. Everyone seemed to catch it. But I just said, "He's a politician. You cannot judge him on what he said he would do; you have to judge him retrospectively on what he did." For example, Obama said he would close the Guantanamo Bay prison in the first 100 days of his Presidency. Eight years (~2900 days) later it is still there. With a hostile Congress the whole time, Obama was mostly pretty ineffective. Some of his acts, like the systematic assassination of US enemies by drone strikes, seem far from the optimistic, morally upright figure that made everyone so jubilant 8 years ago with his "Yes, we can!" slogan.

Similarly, Trump is unlikely to be as bad as the current hype suggests. He'll also face a hostile House and Senate, full of people he climbed over to get to the top. But we won't really know what he does or can do until he's sworn in and starts being President. He has yet to name his cabinet, let alone have it approved by the Senate.

I wish everyone would calm the fuck down in the meantime. You're driving me nuts! Can't we face the apocalypse with some dignity?

10 November 2016

Day Two

Day two of the Apocalypse. Donald Trump confirms that he is indeed the Anti-Christ. The head of the US Treasury is yet again going to be from Goldman Sachs, the firm that brought you such diabolical events as the 2008 global financial crisis. Remember when people voted for Bill Clinton but got Alan fucking Greenspan? Happy days for the spawn of Hell.

So right now, Hell is ahead on points and Heaven is scrambling to catch up, having been caught out by the misleading polls. So much for the much vaunted omniscience of Yahweh! These are the most exciting End Times since... well, since time began! Who will win?

09 November 2016

Dear America

Dear America,

Congratulations on holding your elections without large scale violence, corruption, or electoral fraud. That's really something to be proud of.

We all knew that the consensus form of politics that has dominated not only your country but the whole of the industrialised world was not working. Millions of people lost their jobs, homes, and savings in the bonfire of vanities that was the Global Financial Crisis. Some of us hoped it might be a left-leaning candidate who got the protest votes, and Sanders had a certain appeal. But America does not elect socialists, so Sanders was never going to become president. So Trump got the protest votes. See it for what it is: not an endorsement of Trump, but an indictment of Neoliberalism (especially so-called "free markets" and globalisation).

While middle America was desperately clinging to what they had acquired and had managed to hold onto during the financial crisis, the rich prospered as never before, and a lot of poorer Americans lost everything, including any chance of ever getting ahead. Too many Americans live in poverty; too many are trapped in minimum-wage jobs. Of course these people are angry at a government that allows their jobs to be exported to South-East Asia (or Mexico) and allows banks to parasitise the economy. Of course they were desperate for change. How do you feel when you look down and see a mosquito, bloated and red from gorging on your blood, about to fly off and lay thousands of eggs to produce more of its kind? Most people have a visceral urge to swat the little bastard. Voting for Trump can be seen as an attempt at swatting those parasites. Just as voting for Brexit was here in the UK.

In any case, you seem to have elected Donald J. Trump as President. And now he gets to form a government. Trump will change his tune now. You can hear it in the speech he gave on accepting Clinton's concession (what a phone call that must have been!). It's all about coming together and all that. He even praised Clinton's long years of service to your country. Taken in isolation, it was quite statesmanlike. Of course it ought not to be taken in isolation, but seen in the context of his campaign (at least). And of course Trump wants all the hostility to his candidacy to go away, along with all the rape allegations. The point is that he has shifted gears already (something that some left-field commentators predicted he would do). Campaigning is over and now he has to face being President, with everything that entails. "The American people have spoken", as they say.

Something that seems to be overlooked is that Congress is still solidly Republican, i.e. conservative and authoritarian. Consider that Trump climbed over those people to get where he is. He hijacked their party and has no regard for their values and traditions. And these are the people who have to pass his budget and his legislation. Like Obama, Trump will not find it easy to proceed without offering major concessions to Republicans in Congress. Even now the machinery of Washington is winding up to prevent Trump from achieving anything in office. They love having this power and will exercise it with glee.

Some would argue that tying the hands of politicians is exactly what the disenchanted and disenfranchised electorate wanted. Disrupting the system is the best they can currently hope for, because a candidate who genuinely shares their concerns is not an option any more - only millionaires can afford to run, and millionaires will never share the concerns of the average American, let alone Americans working on minimum wage and living in a crime-filled neighbourhood.

In the USA you have a distrust of government that exceeds even that of many people who live in totalitarian states. You seem to resent paying taxes at all, even with representation. You know the govt spies on you, often illegally (thanks again Edward Snowden). And you know that many of the institutions of govt are systematically discriminatory. Americans seem to fear and resent government telling them what to do. Many people believe something along the lines of one of Frank Zappa's aphorisms: "Government is the entertainment wing of the military-industrial complex."

What better way to disrupt the machinations of government than by electing a combative outsider who ran in order to be disruptive of the status quo? The fact that he might be an asshole or even a criminal is secondary to the very real desire for substantive change. Since substantive change is not on offer, the next best thing is disruption. A restive electorate will do anything to kick an unresponsive government into paying attention to their concerns. In the UK it was the Brexit referendum - which cost most of the government their hand on the ouija-board of power, though they all still have government jobs!

Trump has promised to invest heavily in infrastructure and to aim to have the best infrastructure in the world. To my mind this is the best possible policy. I only wish UK politicians had any plan to invest in the UK, but they don't. Investment creates jobs and returns that can ease the tax burden. It remains to be seen what Congress will allow him to do in this line.

Trump understands investment because it's the core of what he does as a businessman. Yes, he is guilty of cutting corners on many occasions and ending up in court on many occasions, but he's an American libertarian who resents government interference, so a disregard for the government's rules is more or less what you would expect. It seems that it was exactly his disregard for the rules, and for etiquette, that made him appealing to voters.

The big question now, after "Who will be in his cabinet?", is "Will Trump get his spending plans approved by Congress?" There has to be serious doubt about this. Congress is still dominated by the kind of old-fashioned conservatives who set up the current system and who benefit from it. They won't be in a hurry to disrupt it, and most of them probably hate Trump.

Clinton is history now. Given that she actually lost to Trump, that makes her probably the most unpopular candidate ever to run for President. A lot of people are trying to make out that it was a gender issue, but it wasn't. Had her reputation been for scrupulous honesty, and had she not been seen to be far too close to Wall St, she might have done a lot better. Remember that Wall St effectively planned, engineered, and caused the 2008 financial crisis. And they got away with it because they themselves, in service to successive governments, had drafted the regulations that determined what was legal and what was not. Wall St made a lot of money from betting against those people who lost everything. Clinton was too close to the bonfire and was burned by it. Yes, I know it is ironic that the biggest liar, Trump, had the reputation for honesty because he appeared to just say whatever came into his head. But reputation is very important among social primates and hard to shift.

When Obama was elected, there was jubilation to match the wailing and gnashing of teeth today. But Obama was stymied and unable to do much because Congress opposed him at every step. In the end he was an OK president, and came out of his time seeming like a nice guy. But he also made assassination of America's enemies in the Middle East by drone strikes routine and systematic. The man is a stone cold killer. This is the thing about politicians. They are never the best of us. Sometimes they are the worst. To rise to the top in any country is difficult, but the US seems to produce an array of dubious figures, until you see them as entertainers who distract from the exercise of power throughout society and the world. Obama poses with his family, rails against Trump, and emphasises his concern for ordinary people, but every day he is authorising drone strikes to assassinate his enemies. You have to see politicians in the round. The same will be true of Trump. Most of the predictions of doom and disaster will not come true.

Obama changed very little; he made little or no difference to the lives of African Americans, for example. In the end they tried to help themselves by starting the Black Lives Matter campaign to dissuade the police from shooting them on sight. Clinton would not have made life better for women, either. Nor have either of them made it easier for those who come after them. You still have to have enormous privilege and wealth to start with, to do what they did. And most of the people who have the privilege and wealth are white men, and will continue to be so for the foreseeable future. Obama did not change this. Clinton would not have. Trump won't (though since he's a white man we don't expect him to).

America voted against the status quo. That's the take home message. They only had one effective choice to do this. Though of course there were other candidates in the race, they received almost no media coverage and were not invited to appear in TV debates. Had the alternatives to the Republicans and Democrats had a higher profile, had they been granted the kind of saturation coverage that the media gave to Trump, the result might have been quite different. It's one of many things America needs to think about this morning. Mind you, the system where I live is also deeply flawed and in desperate need of reform. Our House of Lords (= Senate) is not even elected: most members are government appointees, have inherited the privilege from their fathers, or are appointed by the Church of England! Although we were the first modern democracy, we have yet to fully embrace the concept!

We have now to wait and see what comes next. But the media cycle cannot wait. Twitter is flooded with anxiety (and I only follow 23 tweeters!), the papers and TV news will be constantly hyping everything about the events of yesterday and ruminating wildly on what might come next. Beware of hyper-stimulation in the next few days. The come down is a bitch.

18 October 2016

Cybernetics Seminar

I went to a seminar at the uni today on cybernetics. Most of the participants seem to be speaking different versions of English - a kind of self-defeating vocabulary in which none of the questions seem to make sense to the invited guest (and didn't make sense to me). I wasn't very impressed. Smart people often seem to make things more difficult than they need to because otherwise they get bored.

The invited guest kept talking about an "ontology of unknowability" and unfortunately I didn't get the chance to point out that this is an oxymoron - knowability is the domain of epistemology. Ontologies tell us nothing about whether or not something is knowable. Which is why questions about knowledge of non-existent things cause such confusion - the ontological status of a phenomenon tells us nothing about its knowability.

He also kept insisting that science was an ontology in which everything was knowable. But this hasn't been true since the 1920s, when quantum mechanics established limits like the Heisenberg Uncertainty Principle. The quantum universe clearly exists (ontology) but it is almost completely unknowable (epistemology). What we do know has radically changed human culture, since it made electronics possible.
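
The uncertainty principle puts hard numbers on this unknowability. Here is a quick sketch in Python (my own illustration of the textbook relation Δx·Δp ≥ ħ/2, using standard physical constants; it is not from the seminar readings):

    HBAR = 1.0546e-34       # reduced Planck constant, J s
    M_ELECTRON = 9.109e-31  # electron mass, kg

    delta_x = 1e-10                 # confine an electron to about an atom's width (m)
    delta_p = HBAR / (2 * delta_x)  # minimum momentum uncertainty, from dx * dp >= hbar/2
    delta_v = delta_p / M_ELECTRON  # expressed as a velocity uncertainty

    print(f"Minimum velocity uncertainty: {delta_v:.2e} m/s")  # roughly 6e5 m/s

Pin down where an electron is to within an atom's width and its velocity is unknowable to better than about 600 km/s. That is an epistemic limit, and no ontology makes it go away.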

The takeaway was a pragmatic question, as opposed to the usual philosophical one. In ontology we ask what the world is like, but in pragmatics we might ask, "What does the world do?" I think this is an interesting question. One that takes us out of a purely cognitive approach to understanding the world and moves us into an experiential approach.

It's like we focus on what impact the world has on us, which leads to the question of how we respond to that impact, rather than worrying what to think about things.

The other takeaway is that cybernetics seems to be closely allied to behaviourism and game theory in its distrust of minds and people. The aim seems to be to remove people from decision making processes and replace them with automatons that specifically do not employ knowledge-based or cognitive approaches to problem solving. The human analogue is the physical reflex, in which a pain stimulus in the limbs travels to the spinal cord where a response is initiated without involving the brain. And people do not seem suited to this role.

I'm reminded also of that important component of Neoliberalism, i.e. free-market economics, where the "market" is a black box that magically produces the optimum price for commodities (even though economists have known since the mid-1970s, from the Sonnenschein-Mantel-Debreu results, that the mathematics of supply and demand theory don't work: with more than one product or consumer the aggregate demand curve can be *any* shape and slope, so there is no linear relationship between demand and price in any real-world case).

Cybernetics and Society

I'm going to a seminar later today on cybernetics. The idea has a certain appeal because of the obvious way that feedback operates in organisms and ecosystems. However, in reading for the seminar I'm also learning why the field is not more mainstream.

The readings focus on Stafford Beer, a management consultant and self-confessed Marxist. Beer was fascinated by how cybernetic systems could replace human beings as decision makers. Just as a reflex is faster and more responsive to a simple stimulus such as pain than a cognitive response is, Beer thought he could make cybernetic feedback systems respond faster than traditional computers. This was back in the mid-20th century, when computers first began to escape from academia and the military. To be fair, using computers to do things in exactly the same way as humans had done them might have been short-sighted. But it never seems to have occurred to Beer that replacing human beings with machines was a monstrous goal completely out of kilter with the thought of Marx as I understand it. Beer seems to have considered all kinds of organic substitutes for human beings as well - at one point trying to map factory inputs and outputs onto a pond.

The readings are problematic in other ways. For example the author continually refers to what he calls an ontology of unknowability. I may not know much about philosophy, but I do know that what is knowable or not is the domain of epistemology. Ontological views, even Realism, tell us nothing about what may be known or what must remain unknown. They tell us about what can be inferred to exist. So an ontology of unknowability is an oxymoron. I plan to ask about this.

The author of the two papers, who will also lead the seminar, seems to see machines that use feedback in an animistic way. He focusses on the homoeostat, a kind of current regulator that can hold an output steady under different input currents, but in a quite inefficient way: by changing the internal resistance of the machine randomly to one of 24 values in response to rising input currents. As the current rises it forces the machine to adopt first one value and then another. With four linked together, the output of one connected to the input of another, one can create a feedback system which stabilises the output current.
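
To make the mechanism concrete, here is a minimal toy in Python (my own sketch, not Ashby's actual circuit; the resistance values, target band, and function names are all invented for illustration). The unit watches its output and, whenever the output drifts outside a target band, blindly re-selects its internal resistance from a fixed set of 24 values until the output settles:

    import random

    RESISTANCES = [10 * (i + 1) for i in range(24)]  # 24 arbitrary values, in ohms
    TARGET, TOLERANCE = 5.0, 0.5                     # acceptable output band, in amps

    def regulate(input_voltage, max_steps=1000):
        """Randomly re-select resistance until the output falls inside the band."""
        r = random.choice(RESISTANCES)
        for step in range(max_steps):
            output = input_voltage / r          # Ohm's law: I = V / R
            if abs(output - TARGET) <= TOLERANCE:
                return r, output, step          # equilibrium found
            r = random.choice(RESISTANCES)      # blind random re-selection
        return r, input_voltage / r, max_steps  # failed to settle

    for v in (100.0, 250.0, 400.0):
        r, i, steps = regulate(v)
        print(f"V={v}: settled at R={r} ohms, I={i:.2f} A after {steps} steps")

The point of the toy is that the "search" is pure random trial and error plus a stability criterion. Chaining four such units, each one's output feeding another's input, gives the stabilising network described above, without anything resembling exploration or constructive reaction.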

The author wants us to see this as agent-like behaviour. It isn't. He says that such devices "explored the unknown". They didn't. That they "reacted constructively". They didn't. And there are better ways to make current regulators using transistors!

Sadly Beer ended up becoming a "tantric yogi" and this opens the door to all kinds of nonsense. It links all this to "spirituality", that empty modern word for values we can no longer articulate, but which has something to do with what we feel when we walk into a grand church or a forest, or when we shut ourselves off from ordinary sensory perceptions as in meditation. The ontology of unknowability is also a neo-Taoist ontology. And at this point I begin to doubt I will get anything at all from the seminar. However, I have watched a long interview with the author on YouTube and in person he is quite a bit less flaky than he appears to be in the seminar readings.

So my expectations for the seminar are pretty low to say the least. However, this is the first in a series with a lot of guest speakers and I hope to attend them all to see if there is any value to be had in talking about cybernetics. Going by what I've read so far I'm unlikely to adopt the language of cybernetics even though the ideas are clearly related to things I've been writing about. Indeed it seems to me that the more urgent need is for Amistics, the study of the impact of technology on humanity and the world.

Since R. D. Laing is praised in one of the articles, I revisited part of Adam Curtis's documentary The Trap, which describes the baleful influence of Game Theory on Laing (and on society in general). Game Theory was the invention not of a beautiful mind, but of a mind warped by paranoid schizophrenia. The influence of Game Theory on Laing seems not to be widely recognised.

Ironically, Laing, who had so viciously chided the medical profession for medicating their patients, was an inveterate user of drugs and alcohol who became an alcoholic. On the plus side, though it was far from being a cure for psychosis, Laing's practice of actually talking to the insane as human beings did relieve their suffering. Game Theory led him to see the family as the cause of insanity. His view of the family was bleak and paranoid in the way that John Nash's view of humanity, as expressed in Game Theory, was. Curtis suggests that Laing was sublimating his feelings about the Cold War and projecting them onto the families he studied, seeing them purely in terms of tacit and deceptive struggles for control. This is a view of humanity lacking in reciprocity and empathy - i.e. lacking the basis for morality that is found in all primates.

17 October 2016

METHODOLOGY

Common Sense: If it looks like a duck, and quacks like a duck, then it is a duck.

Common Skeptic: If it looks like a duck, and quacks like a duck, then it is probably a duck.

Science (General): If it looks like a duck, and quacks like a duck,
then proceed to test the null hypothesis that it is not a non-duck.

Science (Physics): If it looks like a duck, and quacks like a duck, then the detector needs recalibrating.

Science (Biology): If it looks like a duck, and quacks like a duck, then compare its DNA to other waterfowl in the family Anatidae to determine the degree of genetic relatedness.

Philosophy: If it looks like a duck, and quacks like a duck, proceed to question the existence of the universe and/or consciousness.

Buddhism: If it looks like a duck, and quacks like a duck, it is not a duck, but not a non-duck, nor is it both nor neither. Forget about the duck(!) and just watch your mind ducking.

After 50 years, it turns out that I'm a fan of common sense.

14 October 2016

Black Hole Bounce

I just heard Carlo Rovelli explain a scenario for black holes; let's see if I can reproduce it.

Take a supermassive star. As it reaches the point where it has fused together the last elements it can fuse, it begins to cool. Since the heat generated by nuclear fusion has been the only thing preventing gravity from collapsing all the matter into the centre, it begins to collapse. Matter from the star rushes towards the centre very fast, causing an implosion. This throws most of the matter back out into space, but at the very core matter becomes super-compressed. All the protons and electrons are converted into neutrons and these are squeezed into very dense neutronium that takes up very little space.

Relativity says that above a certain mass the neutron core of the star continues to be compressed by gravity and that there is no limit to this compression. If there is enough matter to start with, the neutronium becomes so dense that nothing can reach escape velocity and it becomes a black hole. Compression continues forever, making the star effectively infinitely small with infinite density.

However, regions of infinite density are ruled out by quantum mechanics. Even a black hole must have a finite density - the smallest possible scale is the Planck length of about 10⁻³⁵ m, and the smallest possible volume is a Planck length cubed. Rovelli suggests that a black hole collapses down to a minimum, but finite, volume and then explodes outwards again. And this process takes about 1 millisecond.

What? If we go back to Relativity, it says that the closer we are to a large mass, the slower time goes for us compared to a distant observer. This is because of the fixed speed of light. Near a large mass, space is tightly curved and distances between points are compressed, but light goes the same speed (300,000 km/s), so in order for the speed (distance per unit time) to remain the same, time must slow down. If a black hole reached infinite density, then time would have to cease. But since matter cannot be infinitely dense, due to the limit on how small a volume of space can be, time must continue to pass, however slowly. But since the time dilation occurs to everything at once, subjectively time would probably continue to pass at the same rate near the mass. It is only a distant observer who would see things slowing down.

If we were able to travel down into the heart of a black hole as it collapsed and bounced back, it would seem to take about 1 ms to us. Subjectively, inside the black hole time would continue to pass at normal speed. But if we were looking at a black hole collapsing from several light years away, the process would appear to take billions of years. And this is why we don't see black holes exploding.
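
To put rough numbers on this, here is a back-of-envelope sketch in Python (my own illustration, not Rovelli's calculation). It uses the standard Schwarzschild exterior result that a clock hovering at radius r = rs(1 + ε), just outside the horizon rs = 2GM/c², runs slow by a factor of 1/√(1 − rs/r) relative to a distant observer. Writing r in terms of ε keeps the arithmetic stable for tiny ε, since 1 − rs/r = ε/(1 + ε):

    import math

    def dilation_factor(eps):
        # dt_far / dt_near for a clock at r = rs * (1 + eps), since
        # 1 - rs/r = eps / (1 + eps) and the factor is 1 / sqrt(1 - rs/r)
        return math.sqrt((1 + eps) / eps)

    SECONDS_PER_YEAR = 3.156e7
    proper_time = 1e-3  # "about 1 millisecond" of proper time for the bounce

    for eps in (1e-6, 1e-20, 1e-40):
        outside = proper_time * dilation_factor(eps)
        print(f"eps = {eps:.0e}: 1 ms near the mass appears as "
              f"{outside / SECONDS_PER_YEAR:.2e} years to a distant observer")

At ε = 10⁻⁴⁰ the millisecond bounce stretches to a few billion years for us, which is the flavour of the claim. Strictly speaking the Schwarzschild exterior solution says nothing about the region at or inside the horizon, so treat this as flavour rather than physics.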

If black holes do explode then we ought to be able to see it, and this is something that astronomers can look for. Perhaps such explosions would be visible to the LIGO gravitational wave detector or some future detector which is more sensitive?

At the moment this is just hand-wavy stuff. Something other than a singularity must exist inside a black hole, because of quantum mechanics, but we're not sure what it is yet. This is quite a cool scenario though.

13 October 2016

Faith

Extract from The Atheist and the Bonobo, by Frans de Waal.
"People simply believe because they want to. This applies to all religions. Faith is driven by attraction to certain persons, stories, rituals, and values. It fulfils emotional needs, such as the need for security and authority and the desire to belong. Theology is secondary and evidence tertiary. I agree that what the faithful are asked to believe can be rather preposterous, but atheists surely won't succeed in talking people out of their faith by mocking the veracity of their holy books or by comparing their God with the Flying Spaghetti Monster. The specific contents of belief are hardly at issue if the overarching goal is a sense of social and moral communion.  To borrow from a title by the novelist Amy Tan, to criticize faith is like trying to save a fish from drowning. There's no point in catching believers out of the lake to tell them what is best for them while putting them out on the bank, where they flop around until they expire. They were in the lake for a reason." (p.96)

12 October 2016

UK Government Mental Health Policy

My comment on the story in The Conversation: Many wealthy countries face a mental health crisis – here’s what governments can do. 4 August 2016.

Your policy approach is entirely focussed on what to do after the horse has bolted. Palliative approaches are always more expensive in the long run than preventative approaches. It is better to vaccinate against the disease than to wait for the epidemic and treat everyone in hospital. This is axiomatic in health care, and yet it is almost never applied to mental health.

You say the most worrying trend is the rise of mental ill-health in young people. But you want to wait until they are unemployed to help them. You miss the most obvious government intervention, which is to introduce resilience training in schools to help young people stay mentally healthy and not succumb to mental illness. A variety of approaches are available, though most of them threaten to undermine the education system's churning out of obedient workers and insatiable consumers. Which may be why think-tanks and governments are unconsciously reluctant to consider them.

Secondly, we need to look at underlying causes of mental illness. The most serious problem we have is social dislocation and alienation. Not only is this implicated in problems like depression, it is also the main underlying cause of addiction problems.

Free market capitalism is predicated on ignoring human needs in favour of profit (this is what Marx and Engels complained about 150 years ago and it is just as true under the modern version). Families and communities are torn apart because investment does not go where the workers are; rather, workers are forced to go where the investment is. With no extended family and no community, workers are reliant on nuclear families, and these cannot sustain the load - they are increasingly breaking down. Capitalism as we currently practise it is driving the atomisation of society. And the atomisation of society is driving the rise in mental health and addiction problems. At which point the welfare safety-net becomes stretched to breaking point. And at this point we watch in horror as the government begins to dismantle the safety-net, making it considerably less safe, and to demonise and punish people who use it, with the heaviest punishment falling on those who need the safety-net the most.

So government policy needs to invest in and strengthen local communities. Yet lately it has massively cut local government spending - two 30% cuts in funding since 2010 in my county. Though in our county we are fortunate to have high employment, we also have a very acute housing shortage, with attendant high housing costs and high homelessness. Local communities need to be able to provide work, housing, and social services. At present many cannot.

More and more people are struggling to afford shelter and food. The government pays out nearly £20 billion a year subsidising rents through Housing Benefit. Meanwhile there is a chronic lack of housing, and an acute shortage of affordable housing. If basic needs are not met comfortably, then people are stressed. If they are stressed and there is no community support, then they may become mentally ill. This is not rocket science. The government needs to build 250,000 mainly low-cost houses. But it knows that if it does this the housing bubble will collapse, the middle classes will be left with negative equity, and it will be voted out at the next election. So it promises to build 25,000 houses at some point in the future, at a cost of £1 billion, and proclaims itself the party of the workers. The irony is so acute that one could impale a rhino on it.

In order to formulate effective policies any government needs to clarify whose needs it prioritises. Clearly there are many constituents, many communities, many powerful lobby groups, divided loyalties within political parties, and pressures from international trading partners. All we can hope is that the cost of mental illness becomes unsustainable and forces the government to formulate effective policies. As it is, the crisis has not yet peaked and the government is still betting on pandering to powerful business lobbies and the 1%. I predict it will have to get a lot worse before the government takes any action in the right direction, and we may continue to see effective cuts to mental health funding as in recent years.

~

Note. George Monbiot has written on the same subject: Neoliberalism is creating loneliness. That’s what’s wrenching society apart. The Guardian. 12 Oct 2016.

11 October 2016

Women in Capitalism in the 19th Century

"Our bourgeois, not content with having the wives and daughters of their proletarians at their disposal [as 'mere instruments of production'], not to speak of common prostitutes, take the greatest pleasure in seducing each other's wives." 
"Bourgeois marriage is in reality a system of wives in common and thus, at the most, what the Communists might possibly be reproached with, is that they desire to introduce, in substitution for a hypocritically concealed, an open legalized community of women. For the rest, it is self-evident that the abolition of the present system of production must bring with it the abolition of the community of women springing from the system, i.e. of prostitution both public and private" - The Communist Manifesto, 1848. 

Apparently the Communists were already controversial in 1848 when the manifesto was written, and one of the protests was that they would destroy the structure of society. Indeed, where Communism was imposed on societies, they often did this on purpose. Marx and Engels seem to have seen it as a natural consequence of changing the system of production, however.

We have to remember that before the Industrial Revolution women worked mainly in and around the household - not as "housewives", but as members of a team that produced enough food for the family to live on. Men spent time away from the household in order to bring in what the women could not grow or collect. Common lands made it possible to survive lean times. With industrialisation, women and children began to be employed in factories on much smaller wages than men. This is one of the changes that the Communists were reacting to. Also, as now, people had to move to where the factories were built, and this fragmented traditional communities. The enclosure of common land meant that women and often children had to seek industrial work to survive. The bourgeoisie happily put the children of the proletarians to work, while pampering their own children. In a sense they still do this, though it is far more subtle.

Prostitution was certainly not new to this time, but a growing number of women were unable to rely on community support or to support themselves if they became detached from a man. Meanwhile the bourgeoisie were newly rich and indolent. They increasingly felt that the rules of propriety did not apply to them - the English Romantic poets epitomise this attitude. They were a bunch of toffs with no need to work, too much money, and free access to drugs. 

The Communists saw the irony in a bed-hopping high society, who viewed women as property, units of production, or sexual objects, complaining that abolition of private (i.e. bourgeois) property would result in the abolition of the ownership of women and allow women to (re)emerge as a power in society in their own right. The bourgeoisie were afraid of losing property, power, and control. They still are. 

Since 1848 the power of the bourgeoisie has been consolidated and internalised. State education is aimed at creating obedient workers and greedy consumers. Mass-media has replaced religion as the opium of the masses and has the advantage of not making any claims on the morality of the viewer. So-called democracy has been set up so that the bourgeoisie always win - they even won the global financial crisis that destroyed the homes, savings, and jobs of many workers. 

Unfortunately the solutions to the problems identified by the Communists are not obviously workable. Communism as enacted by various regimes (usually in the form of Stalinism or Maoism) has been brutal and disastrous. As my landlady recently quipped, the communists under-estimated the aspiration of the middle classes.

So what works? The places with the best overall living standards and happiness are the smaller socialist democracies, even after a few decades of the Neoliberal wrecking ball: New Zealand, Sweden, Norway, the Netherlands. Britain has a variable record, but has done best when it was more obviously socialist - public health, education, broadcasting, etc. A well regulated and supervised capitalist economy, and a government with the well-being of the people as its main goal, seems to be the least worst option at present. The capture of democracy by business interests is an ongoing disaster.

08 October 2016

Greed and Folly

Everyone loves the Taj Mahal, right? The romantic story of one man's undying love for his wife, blah blah. Take a step back and look at Shah Jahan. He led an incredibly opulent and extravagant life which required him to appropriate something like 40% of the GDP of his kingdom to pay for it (this is an estimate). This is not just 40% of workers' incomes, but 40% of profits from business as well.

The modern day Royal family of the UK is paid a 15% cut of the rents from the Crown Estate (144,000 hectares of rural land, plus the whole seabed, and many commercial properties including all of Regent St in London), or about £43 million, which is about 0.003% of GDP (note they are not paid out of taxes!). The government pockets £255 million. So the Royals actually make us money. 40% of UK GDP would be £600 billion. The Prince of Wales has a private income of £20 million from 53,628 hectares of land that go with the title.
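
As a quick sanity check on these proportions (my arithmetic in Python; the GDP figure of roughly £1.5 trillion is my assumption, chosen for the period in question):

    gdp = 1.5e12            # assumed UK GDP for the period, in pounds
    sovereign_grant = 43e6  # the 15% share of Crown Estate profits quoted above

    print(f"Grant as a share of GDP: {sovereign_grant / gdp:.4%}")           # about 0.003%
    print(f"A Shah Jahan-style 40% of GDP: £{0.4 * gdp / 1e9:.0f} billion")  # £600 billion

So the two figures in this paragraph are at least mutually consistent.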

Jahan's son was concerned about the impact of these taxes, but also concerned to get his own slice of the action, so he overthrew his father. But the damage had been done and the Mughal Empire was in terminal decline. This created a power vacuum, into which stepped the British. At which point vast amounts of wealth were transferred from India to England - which is partly why today India is a poor country and the UK is rich.

Another example is Angkor Wat in what is now Cambodia. King Suryavarman II bankrupted his kingdom building this elaborate temple complex and when he died the kingdom erupted into bloody civil war. The temple was a complete disaster, though it is now appropriated as a symbol of piety and holiness.

Capitalism involves a part of society appropriating the wealth created by the labour of others. This is justified on the basis that capital is at risk of being lost if the business fails. But when the balance of the economy shifts from production to appropriation and gambling (as it has now) then inequality starts to build up. Empires tend to fall when this happens.

Note also that the high tax rates routinely levied in Scandinavia are not detrimental to society because the taxes are spent on the people, rather than on monuments. In fact the socialist countries are widely regarded as the best places to live on many measures including standards of living and health care. People in socialist democracies are more prosperous and happy compared to people who live in free market democracies. Singapore is often cited as an example of a successful free market economy, but this comes at a severe cost to civil liberties and other freedoms.

The Taj Mahal and Angkor Wat are both symbols of greed and folly. We shouldn't romanticise them. Both caused a great deal of suffering.

Based on an article in the Financial Times, 2012. Direct links hit the paywall; search for "The monumental folly of rent-seeking" and you can get to it that way.

07 October 2016

Divisive Politics and Politicians

On hearing that Diane Abbott is the new Shadow Home Secretary, opinions among the Twitterati are sharply divided (funny that).

On one hand she's a black woman, so all the people wanting to promote equality for minorities are thrilled. Though breaking through these barriers is subject to moral licence - having one assuages any guilt the hegemony may feel at not having any others. It's like the token woman on Mock the Week. Not that Corbyn is guilty of tokenism; I'm sure he admires Abbott - after all, they had sex many years ago.

But on the other hand, check out this lovely English backhander from Owen Jones, the youthful leftie commentator:
"There's plenty of useless, bungling and/or mediocre white male politicians who don't get the bile Diane Abbott does. Why is this exactly?"
Some agree in part, saying she wouldn't get so much criticism if she were a man. But this is countered by pointing to the coverage of Gordon Brown, Ed Miliband, and Jeremy Corbyn on the left, Nick Clegg in the middle, and Boris Johnson and Nigel Farage on the right.

This does not show that Britain is not biased against prominent black people or women, but it does show that rich white men are also targets.

The poor black woman argument is also countered by people who point out that her son went to a private school. So actually she's an old-style social elitist, which plays badly with the left and the right (too posh/too common).

One serving Labour MP tweeted:
"I only know one white male who's as consistently mean-spirited to fellow Labour people as Diane, and I respond same to him."
A number of other Twits said much the same. She is apparently quite rude to fellow lefties.

Several people commented that they thought she had made a number of racist comments against white people. But then some people still feel bad about slavery and colonialism and are willing to cut her some slack on this. I think being black is still very difficult here.

The divisions in the political left seem to be festering. I can't think of a single Labour politician who does not divide opinion. And Lord help us, Tony Bliar is musing about getting back into politics. Meanwhile the Tories are making a decent show of unity, despite the changes the new PM is making, and understand that if they just settle down and keep possession, they will continue to score.

And those of us who think socialism is a good thing continue to wait for the messiah who can unite the warring tribes and bring peace and prosperity.


06 October 2016

Immigration

Another stray paragraph that I'm rescuing from the cutting-room floor, plus a follow-up.

One of the defining political issues of our time is immigration. Primates tend to think of this as strangers coming to live amongst us. It's stressful and it takes time to accommodate and/or assimilate them. But they also invigorate our gene pool and bring new ideas, attitudes, and practices, and so there are benefits to immigration as well. In recent decades net immigration to the UK has been in the hundreds of thousands each year. In 2015, 300,000 migrants arrived. This is less than half a percent of the population, but it is also the population of a large town. If you set up a new town of 300,000 people it would require considerable investment in infrastructure: roads, schools, shops, health care, governance, police and so on. And yet government has been cutting funding to all these functions at the local level, creating strain on resources. Even the mainstream are now using the phrase "housing crisis". As social primates, having and maintaining group norms is one of our main survival strategies. If our communities are unstable, if our standard of living is in decline, then we are unlikely to welcome strangers coming to live with us, because we're already anxious about our society.

But let's not pretend that the UK is not also a very wealthy country with relatively high wages and a high standard of living compared to many nearby countries, for example in Eastern Europe or North Africa. So of course enterprising people will want to come here to seek a better life. Chances are that if someone reaches escape velocity from their own country to wind up in ours, then they are enterprising or desperate (I find the practice of referring to refugees as "migrants" puzzling). People often say things to me about the national character of Aotearoans based on Kiwis they meet in London. But they never meet the people who are put off by the high cost of travel, daunted by the difficulties, or who just want to stay home. Of course the young folk they meet in London are a lively, outgoing, friendly bunch. But they would be, wouldn't they? I expect many of the people I grew up with never made it out of our small town, let alone all the way to Britain.

I write this as a sort of inadvertent migrant. I think Britain is undergoing a crisis of identity. Unlike many other nations, the national character here has few unifying characteristics and many divisive ones. The idea of Britain in the post-imperial era is up for grabs. Populist politics means that we'll get no help in this from our political leaders - their "vision" is simply to remain in power. Society is fragmented and possibly atomising. And it is into this lack of cohesion and clarity that outsiders are pouring, fuelling the uncertainty. The most obvious result has been the vote to leave the European Union, yet another cause of division.

04 October 2016

The Nature of Rules

I'm writing up this idea for a larger essay, but I'm so struck by it I wanted to try to encapsulate it on its own. Let's start with speech. When we speak we make a series of noises. Our vocal cords vibrate in a noisy way, creating a sound rich in harmonics, and we shape our mouth as a resonant chamber to selectively amplify some of those harmonics (vowels), while at the same time using our throat, mouth, and tongue in various ways to impede the air flow, creating articulation points (consonants). The minimal unit of spoken sound is a phoneme. We speak quite quickly, so in fact many of the phonemes blur into each other, and the sounds before and after a phoneme can affect how we perceive it.

A sentence of English comes out as a series of sounds. We don't put audible spaces between words, so a sentence is a more or less continuous stream of sound, from which we parse out words. In English, word order and prepositions give us grammatical information so that we can tell how the speaker intends the words to relate to each other. So the speaker must not only make the right sounds, they must make them in the right order to get words right and to make the grammar apparent.

English is an umbrella term for a large number of regional variations on a language that results from the cultural collision of a group of closely related Germano-Scandinavian languages with a Romance language (Norman French), beginning in the 11th century in what is now England. It continues to develop, but took several centuries to reach its modern form. Later, through trade, imperialism, and colonialism, it became widely established around the world. It was always a language with many variants and dialects and continues to be so, even or perhaps especially in England. For my purposes I'm going to look at the language at a level that largely obscures the variations. But those variations are real and important at other levels and I will refer to them at times.

English has considerable leeway in word order, but if we examine a large number of sentences, we can discover conventions which apply most of the time. Mostly English speakers use the order: subject, verb, object, but there is considerable variation. Consider: "Is it you?" (VSO); or "words fail me" (OVS). In Star Wars Yoda's English consistently breaks the rules, but is perfectly comprehensible as English.

Influenced by a cult of grammar spawned by the European obsession with Classical Greece, the Roman Empire, and the 18th Century discovery of Pāṇini's treatise on grammar, the Aṣṭādhyāyī, generations of scholars dissected the English language to discover patterns of use that we now think of as the "rules" of English grammar. And they attempted to standardise written English, though they seem to have done a very bad job of this.

There are two tendencies in the study of grammar. One tendency is to think that if one can discover rules then they ought to be followed; we call this the prescriptivist tendency. Unfortunately for English speakers, the cult of grammar did not just discover rules, it also invented arbitrary rules, like not splitting infinitives or leaving prepositions dangling. In spoken English, people split infinitives all the time. There is no natural rule against doing so, i.e. no rule that emerges from how people speak English. Even so, the prescriptivist tendency insists that such rules must be followed, and proponents produce turgid and unnatural-sounding speech as a result.

By contrast, the descriptivist tendency acknowledges that there are conventions about how to speak English, but concedes that the variations are valid forms of English and that the leeway allows English speakers to make novel constructions if they wish. Yoda's English is still English. Descriptivists see their role as grammarians as describing how people use language, rather than being judges of good and bad English by some arbitrary standard. And they tend to reject the underlying power structures in such standards, which are often related to class and privilege.

So there are rules that govern spoken English and most of the time when we speak we follow these rules. But it's apparent that we can break the rules and still be understood to be speaking English. It's also apparent that almost no one (except obnoxious pedants) is consciously parsing their sentences and making conscious decisions about vocabulary or word order; when we speak we may obey rules, but we are not using rules to construct speech. Indeed, a good deal of humour arises from breaking the rules of speech in subtle ways - puns, substitutions, and other plays on words. Similarly, in poetry rhyme and rhythm may override syntax and grammar.

How does this work? Are we unconsciously following the rules and then sometimes breaking them? Probably not. Instead what happens is that as we grow up we learn first vocabulary and then syntax and grammar. It's like any skill. As we learn a sport, or to drive a car, or to do handwriting, we start off consciously applying rules. At first we are slow and clumsy. But gradually we develop competencies that mean the rules fade into the background and we become fluent. John Searle describes this as developing dispositions to behaviour that is consistent with the rules, but which does not itself follow the rules either consciously or unconsciously.

This makes good sense to me. Following rules is too slow, even if the rule following is happening unconsciously. Parsing a sentence in Sanskrit can take me a long time because I have never developed the fluency that comes with leaving the rules behind and internalising the script, morphology, syntax, and grammar of the language. I can understand written Sanskrit with some effort, but I cannot speak the language. It is the same with any other skill at which I am competent but have not achieved mastery. With physical skills, such as playing the guitar, we call this muscle memory - my hands just know where to go to play certain chords or patterns; some songs remain accessible to my hands even when I struggle to consciously recall how they are played.

This result is not good for the people who think of the brain as a computer. Computers can only follow rules. When they get powerful enough they can give rule following a certain grace, but they are not doing what we are doing. They do not, and at present cannot, develop a disposition to behaviour that is consistent with the rules. Computers are bound by rules in ways that human beings are not. We can deliberately cheat, for example. Or we can try to distract our opponent. Or we can appreciate that our opponent has made a particularly good or bad move. People say that the computer can "play" chess, but all it does is calculate chess moves very fast. The verb play does not apply here. The computer does not even move its own pieces. Arguably powerful neural networks that are tuned to do one activity, like calculate chess moves, might be approaching this capacity to develop rule following dispositions. However, to date no computer has needed to be programmed with correct use of the word j'adoube - which a player says when they idly touch a piece they do not intend to move.
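
To illustrate what "following rules" amounts to for a machine, here is a trivial sketch in Python (my own example, not drawn from any chess engine): the legality of a knight's move is exactly a pattern match, applied the same way every time, with no disposition behind it:

    def knight_move_is_legal(src, dst):
        """src and dst are (file, rank) pairs, each 0-7. Pure rule application."""
        df = abs(src[0] - dst[0])
        dr = abs(src[1] - dst[1])
        return sorted((df, dr)) == [1, 2]

    print(knight_move_is_legal((1, 0), (2, 2)))  # True:  b1 -> c3
    print(knight_move_is_legal((1, 0), (3, 1)))  # True:  b1 -> d2
    print(knight_move_is_legal((1, 0), (1, 2)))  # False: b1 -> b3 is not a knight move

The program applies the rule; it has no feel for knight moves, cannot choose to bend the rule when the referee isn't looking, and has no use for j'adoube.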

Consider a sportsman playing at the highest level in a team sport. There are a number of rules which govern the conduct of the players and the progress of the game at any given moment. The players have usually played the game from an early age. They know very well what the rules are. If they were simply following the rules there would be no need for an umpire or referee. Of course players may deliberately break the rules because it is to their advantage if they do not get caught. If they are caught they are penalised, so there may be an element of calculation in this. In many sports the best players have a disposition to exploit grey areas (in soccer, at the point of tackling; in rugby, the off-side rules; in basketball, the charging rules; and so on). But the majority of the penalties are not given for deliberate fouls, but for mistakes.

We make mistakes because we are not following rules, instead relying on our behaviour to be rule consistent while pursuing the goal of the activity without any direct reference to the relevant rules. The same happens with speech. Slips of the tongue may go unnoticed, be hilarious, or confusing. It also happens when we visit a foreign country and find the rules for social intercourse are different from what we grew up with.

A sportsman making a heroic effort to kick a ball between two sticks is completely focussed on that goal: they are not thinking about how to run, how to control the ball, how to kick, how to aim, how to avoid the opposing team. If all this were happening in real time, at a sprint down the field, under pressure from the other team, and if we were relying on rules, it would all come crashing down. And we do see this in amateur games. Where there is less skill and someone has to think about what they are doing, even unconsciously, they are less efficient, less effective.

We all know that the master of anything makes it look effortless. In a sense it is effortless. When Michael Jordan would sail into the air, float there for a second or two, and slam the ball down into the hoop, only to land gracefully on his feet, all seemingly in slow motion, it was astounding. He did it again and again. Think about everything he was coordinating in those moments: his whole body had to be coordinated in just the right way; he had to judge exquisitely accurately where everything was in space: himself, the hoop, the other players; and all at the top speed at which a very fit, 1.98 metre tall man can sprint.

[Embedded video of Michael Jordan in action.]
Does anyone genuinely think that because they can discover patterns in what Michael Jordan was doing in this video, he was following rules? Please.

So yes, there are rules; or at least rules can be discovered. And yes, our behaviour is tuned to harmonise with those rules. But no, we do not consciously or unconsciously follow these rules. Instead, after a period of learning the rules, we adapt our behavioural range to be more or less compliant with the rules, and to exploit grey areas to our advantage. Anti-social behaviour is another question altogether that I'll have to deal with separately.

But here's the thing. This observation applies to all kinds of agents doing or causing all kinds of actions. The atom does not think, "Oh, I'm experiencing the curvature of space, now I have to lean to the left"! The atom has a disposition to follow the curvature of space. We can explain the behaviour, but not the disposition, in this case. The universe is just like that. A eukaryotic cell undergoing mitosis does not follow rules either - it has no mechanism for following rules. But the process does follow a pattern. Rules can be discovered in nature at all levels. Science is all about finding and describing these rules. Atoms must follow the rules that govern them. But as we go up the hierarchy of scale and complexity, flexibility emerges as a property. A cell also follows patterns, but it has vastly more degrees of freedom than an atom does. Its behaviour is more sophisticated given the circumstances. But also, because it is interacting with other complex entities on its own level (i.e. other cells), the cell experiences a vastly greater diversity of circumstances than an atom does.

Patterns can be discovered in human behaviour at various levels. We might begin by learning rules, but we rapidly progress to developing dispositions to behave in ways that are rule-consistent without actually being rule-determined. This may be why certain behaviours that developed early on and were constantly reinforced can be hard to shift. It explains not only our incredible successes, but also our inexplicable failures. If we simply followed rules, we'd be computer-like in our accuracy. But we are not. We effortlessly produce rule-consistent behaviour, though our levels of consistency may vary.