Old Blogs

This is the space where I put blogs which are more than six months old, with the more recent blogs at the top.

  1. BUTTERFLIES
  2. TALKIN’ ‘BOUT MY GENERATION
  3. THE GLITTERY ALLURE OF TRASH
  4. LET’S GET REAL ABOUT CLIMATE CHANGE
  5. THE DUTY TO DIE
  6. OUR SEARCH FOR ULTIMATE REALITY
  7. CREATIVITY & THE LOSS OF SELF
  8. ART & SPIRITUALITY: A REPLY TO ANDREW KLAVAN
  9. GRAMMAR SCHOOLS
  10. ECHO CHAMBERS & KEYBOARD WARRIORS
  11. SURVIVING RETIREMENT
  12. THE SWANSONG FOR CLASSICAL MUSIC?
  13. LOVE IS IN THE AIR
  14. ART, PUBLIC IMAGE, & PRIVATE SELF
  15. MY FIRST TIME
  16. RESEARCHING THE PARANORMAL
  17. LIES, DAMNED LIES, & STATISTICS
  18. A BELATED HAPPY NEW YEAR
  19. GAY CHATROOMS
  20. CHOICE & FREEDOM
  21. IS CAMP DEAD?
  22. THE MARBLE INDEX & DESERTSHORE
  23. FANTASY GIRLS & DISNEY WORLD
  24. MEN’S COMPLAINTS ABOUT MODERN WOMEN
  25. ARE WE REALLY WHAT WE EAT?
  26. HOW I WRITE MY POEMS
  27. THE SEXUALISATION OF SOCIETY
  28. GAY MARRIAGE
  29. ART & INFORMATION OVERLOAD
  30. SOME QUESTIONS ABOUT MEMES
  31. FOOLS RUSH IN
  32. SWEET DREAMS
  33. GOOFING OFF
  34. WRITING HAIKU
  35. ART & AI
  36. CENTRIFUGAL & CENTRIPETAL POETRY
  37. NICHE
  38. WHAT IS TRADITIONAL POETRY?
  39. POP POP POP MUSIC
  40. FLASH FICTION
  41. WELCOME TO MY SUNDAY UPDATES

BUTTERFLIES

When I was a young man, I wrote a fairy tale about a boy who tried to catch the butterfly of his soul but accidentally killed it by breaking its wings. I imagine I must have read somewhere about the ancient Greek word psyche meaning both ‘butterfly’ and ‘the soul’ (although I have no recollection of this). I don’t have the story any longer – it got lost on my travels around the world – but I remember that it had a happy ending when the boy went up into the mountains and discovered a new soul.

My fairy story drew heavily on one of the most common connotations of ‘butterfly’: something that is beautiful but transitory, and never quite within our reach. This was the image in a successful pop song of the mid-sixties, Bob Lind’s Elusive Butterfly, with its chasing of ‘the bright, elusive butterfly of love’. The metaphor is hardly surprising to anyone who has ever watched a butterfly skip from flower to flower. I know that when I see a butterfly, I often wish it would rest on a flower for a while and spread open its wings to take in the sun, so that I have time to look at and enjoy its outrageous beauty rather than the dull surface of the underwings it shows whenever it lands. But almost at once it’s in flight once more, and all I can do is try to follow its flapping, which is much too fast and unpredictable for the human eye. Butterflies have a beauty almost always hidden from us except for the briefest of flashes.

This restlessness is reflected in the host of words for ‘butterfly’ in different languages: butterfly (Eng), papillon (Fr), mariposa (Sp), borboleta (Por), farfalla (It), fluture (Rom), Schmetterling (Ger), vlinder (Dut), sommerfugl (Dan), fjäril (Swe), babochka (Rus). What is really striking here is that even closely related languages which usually share cognates, such as the five most important Romance languages listed, all have completely different words for the same insect. It is as if the nature of the creature flitting from flower to flower has somehow created a linguistic flightiness that demands a new word in each language. Meanwhile in Indonesian, as I know from my time there, the word for butterfly is kupu-kupu, with a rather poetic rhythm which suggests the flapping of wings, while prostitutes are kupu-kupu malam, or ‘butterflies of the night’.

There is a negative side to this freedom of spirit, expressed in a related but more unfavourable connotation of butterfly, used to describe someone who has no substance or loyalty, as in the phrase ‘a social butterfly’: a person who feels no deep commitment to anyone or anything. In matters of the heart, they are promiscuous, and have no meaningful friendships: when the going gets tough, they get going. The sense of beauty remains, but it is a narcissistic beauty that attracts in the way that emeralds and rubies attract, for their glitter and their monetary value, not because of any deep intrinsic worth. Their lightness becomes a lack of moral seriousness: life as an aesthetic game. Which is fine until the Gestapo comes knocking on your door and you need a neighbour with courage and principles to hide you in their own home.

Another key idea surrounding the butterfly is its transformation from the dull grub of a caterpillar to a creature of great beauty: if you’ll forgive my clumsy mix of metaphors, the ugly duckling transforms into a swan. On this issue of beauty, it is amazing how many of our impressions of animals and things in nature are essentially aesthetic. As biologists point out, there are good reasons why human beings should fear snakes, and why such a fear should be built into our genes; however, there are also many creatures and things in nature which we should definitely avoid and yet we find their beauty alluring, sometimes dangerously so, including even the most venomous of snakes. We are influenced far too readily by aesthetic considerations: a snail is cute because it bears its house on its back, while a slug is an ugly blob of snot which leaves behind a trail of slime. If a butterfly had no wings or the wings of a fly or a cockroach, we would almost certainly feel repulsion rather than attraction.

Perhaps this partly explains the distinction between butterflies and moths in the popular imagination. Butterflies are beautiful creatures that flit from flower to flower sipping nectar in bright sunlight; moths are stupid creatures that immolate themselves in candle flames, or pests that chomp at the clothes in our wardrobe. As creatures of the night, moths are generally far less colourful and tend to be chunky rather than slender, and sometimes even rather hairy. They are Morlocks to the butterflies’ dazzling Eloi.

Returning to my fairy tale, my talk of ‘the soul’ might draw derision from materialists as an outdated and superstitious way of thinking that humanity needs to transcend. They may be right, yet even hard-nosed scientists are not immune to the lure of the butterfly as metaphor. For example, when they seek to explain chaos theory to the general public, scientists turn to the butterfly flapping its wings in Brazil and causing a typhoon in China (the countries and details differ between versions, but the core metaphor remains the same). More predictably, philosophers and poets have also turned to butterflies for inspiration. There is the famous example of Zhuangzi, who dreamt he was a butterfly and, on waking, wondered whether he might now be a butterfly dreaming he is a man. Or, in the field of the arts, there is Apollinaire’s experimental concrete poem – what he termed a calligramme – Papillon, with its words arranged on the page in the shape of a butterfly.

Butterflies (along with bees) are also often used as indicators and symbols of how we are destroying planet Earth and replacing nature’s diversity with identical slabs of concrete and stretches of asphalt. Rats and cockroaches thrive in our cities; in contrast, bees and butterflies are rapidly vanishing. It’s certainly true that there are fewer butterflies around these days compared to when I was young. One of the nice things about living where I do now, on the island of Gozo in Malta, is that I sometimes see butterflies on my walks through the countryside, which I doubt I would if I were walking the streets of the town where I was born. In fact, I suspect that many young people born in cities have never seen a real butterfly in their lives. It is not surprising, therefore, that butterflies should become a symbol of our broader alienation from nature.

This brings me to a shameful confession: when I was in my early teens, I collected butterflies. I had a little book listing all the butterflies in Britain (part of the Observer’s series, if my memory is correct), but instead of merely sighting them and ticking them off my list like a good train-spotter, I caught them in my net and then preserved them in my collection. At the time this was seen as a perfectly normal or even praiseworthy thing for a boy to do, keeping him off the streets and teaching him about the natural world. Nothing could better express the instrumentalist relationship to nature which is actively encouraged in western thinking, with human beings as lords and masters of the planet, which is there for our pleasure and use. Now that I’m an adult, however, when I see a butterfly in a glass case, I don’t see something beautiful but something dead.

My interest in lepidoptery also led to my first awareness that nature was not always the benign goddess of the Romantic poets I was reading at school, with their rainbows and nightingales and odes to autumn. I kept some cabbage white caterpillars in a jar in my bedroom, along with some leaves for them to feed on, hoping to see them turn into chrysalises and emerge one day as butterflies. One night, as I was going to bed, I looked at my jar and saw maggots crawling out of their bodies. There is a parasitic wasp which injects its eggs into caterpillars, and when the eggs hatch the grubs eat the caterpillar from the inside out. I panicked, took the jar outside, emptied its contents onto the ground and crushed them under my feet, and then had nightmares all night. For a boy who had grown up in an industrial area and had very little day-to-day contact with nature, this was my first real encounter with its indifference to the individual suffering of its creatures – or, from our human point of view, its mindless cruelty.

One thing I would love to do before I die is to witness the mass awakening of the monarch butterflies in Mexico before they head off northwards. I’ve seen videos and it looks truly incredible; I would love to be surrounded by thousands of them fluttering all around me. Sadly, though, I have read that this amazing gathering is under threat because of climate change. Could it even come to an end one day? At first this seems impossible, simply because there are so many of them. But then we remember the billions of passenger pigeons, so numerous that their migrations could block out the sun, yet driven to extinction by human beings within a very short moment of evolutionary time – so we must surely wonder whether the monarch could go the same way. We are making a total mess of this beautiful world.

Normally at this point I try to sum up my essay. But it’s hard to do that here, because it feels as if, in some spooky way, my essay has taken on the flightiness and transience of the subject I have chosen. My thoughts have skipped from point to point, as my essay has become a kind of butterfly gathering pollen and transporting it around, bearing little inner structure or the thread of a coherent argument. So all I can do to try to describe the labyrinth I have wandered through and to wrap up this piece is offer a suggestion: do your best to get to a place where you can see butterflies because they are truly elusive and will become even more so in the future, as human cities grow and grow and more of the world is covered in concrete.

TALKIN’ ‘BOUT MY GENERATION

I’ve become a cultural dinosaur. I’ve switched off from contemporary music, film and TV and I know almost nothing about them. I realise that many will see this as a surrender, a sad retreat into the past and a checking out from current social and cultural reality. But I’d counter that there comes a time when we are no longer in the loop and it’s better to accept this and be ready to pass on the baton to those who are much younger. I’m happy to live in the past, at least culturally, and to take only those features of the contemporary world which I can use, such as the internet, and pay little attention to the others.

I was part of the Baby Boomer generation in post-war Britain. Born too late to be sent to fight in a war or to have memories of rationing, and lucky enough to avoid conscription, we had youthful years that were charmed compared to those of our parents, especially those of us born into the working class. This was the era when the poor had free healthcare, welfare and education, including, in my own case, seven years of grants to study at university with all the tuition fees paid by the government. We also faced a job market with almost zero unemployment, so we could pick and choose what we wanted to do, and companies had to compete to hire us by offering things like sports and social clubs with subsidised food and beer. Happy days.

We could also choose not to work, as I did for long stretches, because there was a benefits system which allowed me to ‘sign on’ once a fortnight and pick up my social security giro, which I could exchange for cash at the local post office. (My friend Nelly and I used to go and play bingo with this, among an army of little old ladies who were unbelievable bingo demons able to work ten cards at the same time.) If ever there’s a good time to be young and poor, surely this was it. I was effectively financed by the government to play bingo and then further my personal education by lazing around on the dole, staying in my home, reading books and listening to music.

Plus the counter-culture was happening all around us. I was born a little too late to be a bona-fide hippie and a fraction too early to be a fully-fledged punk (these tiny differences in age matter a lot when we’re young), but the rebellious ideas were swirling around in the air, aided by the music we loved. Looking back, how hugely I must have disappointed my working-class parents, who had been told by my teachers when I was five years old that one day I would be a professor (as my mother loved to recall), and there I was, throwing it all away. My father died just before I finished the sixth form (the UK equivalent of high school), and it would have been especially hard for him – a man forced to leave school at fourteen to go out and earn money for his family – so I’m glad he never saw me ‘dropping out’.

How pampered and privileged we were, how little we realised it, and how ungrateful we seem in retrospect. With the arrogance of youth, we thought we knew it all, dismissing the blood, toil, tears and sweat that our parents had gone through to give us all they could, things which had not been available to them. But we were above this day-to-day struggle, or so some of us thought; we didn’t want to become ‘breadheads’ or part of the corporate machine. We liked to think we rejected the grind of the nine-to-five as part of a noble anti-materialism, and we were going to live our lives to higher standards and worthier ambitions than the benighted generation which brought us up.

By no means everyone saw things this way, though, even if documentaries about ‘the sixties’ suggest a mass movement among the young. For most of my classmates at school, the future was going to be a lifetime in the factories (or so it seemed, until Thatcherism led to them all closing as manufacturing was outsourced to Asia). The group to which I felt I belonged was a small tranche of young people who saw themselves as an intellectual, political and even spiritual avant-garde that imagined it could transform this fallen world. We believed we were the cause of the social changes happening around us, when essentially they were the effect of post-war reconstruction and a brief economic moment when the ruling classes desperately needed labour – perhaps similar in nature to the weakening of feudalism after the Black Death. It was naive of us to overestimate our own agency, of course, and we were riding on the back of our parents’ sacrifices, but there was also a genuine idealism in there somewhere.

But if we compare life now for young people with life as it was for us then, we have to say that we failed and the system survived us, and even thrived. In my own case, for example, faced with a lifetime of tedious office work once it got harder to game the system and get my dole cheque, I soon opted to go to university as a ‘mature student’ (although I was only twenty-five), all expenses paid. Practicality, and probably boredom, pushed us towards this move to drop back in, and the heady hippie dreams were soon a disappearing sight in our rear-view mirror. I guess we helped along some significant trends in terms of changing the general culture, changes which are now the target of right-wing ideologues intent on igniting culture wars to push gay people back into the closet, women back into the kitchen, and trans people to the J.K. Rowling Internment Camp for detransitioning. But if our aim was to take down capitalism and set up a more caring system, we didn’t merely fail: we accelerated the car down the cul-de-sac of mass consumerism.

My generation was effectively bought off. The most important part of this revolved around housing policy. When I was born, many of the working class lived in what was called council housing, which meant it was owned by the local authority which rented it out to people at a relatively low price. This more or less guaranteed that everyone had a roof over their head (although the famous drama, Cathy Come Home, showed that this was not always the case). But council housing – and particularly council estates – soon gained a stigma, while owning your own property was mostly just a dream for the poor. Then legislation was introduced which enabled and encouraged working class people to take out a mortgage and buy their council property at way below the market value and, unsurprisingly, many tenants rushed to do so.

I don’t like to put forward ‘human nature’ as part of an argument because often this is just a veil for lazy thinking – who can confidently say what ‘human nature’ is? – but those of us against this sell-off of public housing never really understood why people in their millions jumped at this chance and we lost both the argument and the political struggle. The ruling classes had realised that one way of making people resistant to change and conservative in general outlook, and also in party allegiance, was to offer them something which would raise them one small step above their neighbours; most importantly, however, this was a higher status they could lose if they didn’t stay on the treadmill and do the nine-to-five (a treadmill which has now turned for many into three zero-hour contracts in order to merely survive).

So in the end, in my opinion, the working class lost much more than they gained, including many of the benefits which my generation took for granted – we never realised how much could be taken away, or how easily. Homelessness, for example, has rocketed now that only a rump of council housing remains. But the biggest long-term loss to social mobility is possibly free tertiary education, because unless they do a degree like medicine or law which will eventually pay them back and truly be an investment in their future, teenagers from poorer families, with parents who cannot subsidise their education, are going to leave university burdened with enormous debt and in possession of a degree which might not even help them to do more than land a job flipping burgers. Not surprisingly, all the data suggests that social mobility has gone backwards since I were a lad. Social resentment, on the other hand, seems to be positively thriving, stoked by a malevolent right wing.

So I understand young people today if they curse my g-g-generation. We were given so much and took it and kept on taking, with little concern for those who followed us. Far from being anti-materialistic, we embraced consumer capitalism with gusto, abandoning the ashrams and the retreats for the shopping malls. Meanwhile, the leading lights of hippiedom turned into entrepreneurs heading the kind of greedy corporations they had once castigated and threatened to smash. So the generations after us Boomers are left in a world on the brink of ecological collapse and menaced by the return of neo-fascism, and all that is left of our legacy is some pretty good music and a heap of sanctimonious bullshit.

My only defence of my post-war generation is that people often do things which result in long-term harm to themselves, not because they are wicked and not necessarily because they are stupid, but because intelligent collective action is very hard to plan and manage. We like to imagine that we are generally in charge of our destinies, on both a personal and societal level, but the reality is usually the opposite. And things were ripe for change after the economic failures of the 70s. Plus the right-wing politicians who emerged in the UK at that time were not the demented James-Bond villains of today’s crop of neo-fascists: half melodrama monsters twirling their moustaches, half vicious trolls motivated by malice and greed, and dreaming of a goose-step future. The Thatcherites were smart and knew what they were doing: they knew exactly which buttons to push to get people to buy into their agenda even at an ultimate cost to themselves.

So I guess I’m making a plea not to judge my generation too harshly, even if we sold you down the river. Although as I pick up my state pension while you contemplate a future where there will be no such thing and you’ll be left to die on the street if your bank account is empty, I understand completely if you thrust your middle finger in my face.

THE GLITTERY ALLURE OF TRASH

As Art moved into a role that was once the domain of religion, some philosophers, artists and critics began to see it as a way of asking deep, metaphysical questions or as a dose of medicine for the soul. Much of the Art of the 20th century, especially Modernist painting, literature, music and film, took itself very seriously indeed, and we were encouraged to approach it in an austere, reverential state of mind. Films by the likes of Bergman were seen as much more meaningful and worthy than thrillers or westerns or musicals, and ‘literary fiction’ twittered at a higher level than hard-boiled pulp or Mills and Boon romance. However, there was a movement against this earnestness even in the first half of the century, in things like the Dada cabarets. This rejection of lofty aims and claims grew stronger in the second half, spurred on by the social and political problematising of the canon by the Birmingham School and then by the theories of postmodernism, and there was increased academic interest in works of art that would once have been dismissed as disposable trash. Plus the internet now makes available a raft of lesser-known work that would once have been lost forever. There is plenty of trash out there.

We all have our favourite trash, of course: mine is 1950s sci-fi movies with their characters wrapped in tin foil, their risible special effects, and their heroines who spend the whole film making coffee until they’re screaming like Fay Wray as the monster carries them off. Then there’s the outrageous camp nonsense of Carmen Miranda in full fruit salad or the dyke drag of Johnny Guitar. And in music, I’ve always had a weakness for silly novelty songs like The Purple People Eater or They’re Coming to Take Me Away, Hahaaa!. The difference these days is that such divertissements are no longer guilty pleasures, shameful moments when we slink off from the heady realms of serious Art and read a comic book under our bedclothes with a torch. Nowadays we glory in our decadence and any opprobrium is reserved for people who enjoy movies like Death in Venice, who are often seen as pretentious poseurs rather than members of an aesthetically sensitive elite.

So what are the attractions of trash? The most important one is obvious: trash is a way we can switch off, relax, and have fun. We aren’t under any pressure to have a meaningful experience or to contemplate the puzzles and bitter twists of an angst-ridden existence. Trash offers us all the fun of the fair with no hard work or responsibility; it is candy floss for the heart and mind and soul.

And let’s face it – sometimes we need that. Life can be hard enough without having to stare into the void each time we step off the treadmill into the welcoming arms of the arts. In that sense, trash performs a similar role in our lives to traditional comedy, with the crucial difference that often the very best trash does not set out to make us laugh; to be laugh-your-ass-off funny, it usually needs to take itself rather too seriously and woefully fail. Nor does it offer laughter only; it sweeps us through a full range of cheap emotions, such as sexual attraction for preppy boys and busty heroines, a frisson of fear when we see them having nookie in their car in the woods and the chords go from major to minor, and relief and joy when the monster is finally destroyed and all is right with the world. Another similarity between trash and comedy is that both work better in front of an audience, amid collective bursts of laughter. There is little pleasure, for instance, in an ironic reading of Jackie Collins or Jeffrey Archer, since this is a solitary activity and offers none of the camaraderie of a Eurovision night or a Rocky Horror party.

Trash also has the appeal of simplicity, of both content and form. Novelty songs, for example, are nearly always musically inane, with silly lyrics (Itsy Bitsy Teenie Weenie Yellow Polka Dot Bikini) and moronic, repetitive tunes (Mah Nà Mah Nà). This helps to explain why one person’s gloriously trashy song is another person’s screech of chalky fingers down the blackboard. The morality is equally simple (good versus evil; all-American hero versus people named Boris or Olga). One of the problems I personally have with modern action movies, especially superhero films, is that they now insist that the hero is morally and psychologically complex, and personally I think this muddies the waters and we can’t enjoy cheering for the good guys like we used to. It’s true that I love the moral ambiguity of film noir, but that follows different rules: when film noir is bad, it just becomes plodding and dull, not funny. Bad film noir is generally much worse than bad sci-fi or horror because it never blossoms into trash.

Trash is democratic, especially in comparison with what used to be labelled ‘high culture’. Traditionalists might not see this as a virtue, of course, but as a symptom of decline. What both sides agree on is that we now exist in a world where there is much less confidence about the distinction between high and popular art. Once, perhaps, this was often a gap in quality between the national/international and the local; we couldn’t expect a local orchestra in a small town to be as good as the New York Philharmonic, but we wouldn’t have seen what they created as trash, simply as less professional and accomplished. Once works of art become products, though, which is how most of them are viewed these days, this sense of local pride is generally lost and we judge them much more harshly or even cruelly compared to the more professional stuff that we see online each day. And while we would no more laugh at a local orchestra for their shortcomings than we would at a group of children performing a nativity play, it is much easier to ridicule products which are industrially constructed and of which no single individual claims ownership. (A quick detour here to say how I hate the phrase ‘the culture industries’: this is human beings painting or writing or playing music, not corporations churning out cans of beans.) There is often a cruel element to the mocking of trash and this is made much easier by its anonymity, because generally no one person is the target. Finally, there is simply a lot more art available to us nowadays, so we’re going to see or watch or hear much more z-level stuff which is fair game for ridicule.

Another possible reason for the growth in our love of trash is that aesthetics has been downgraded as a way of evaluating art compared with things like social issues and identity politics; we tend to care more these days about how works measure up to desired social standards, and we are acutely aware of the stereotypes which are so prevalent in trash. Thus, we laugh or shake our heads in disbelief when a character says something embarrassingly sexist or speaks in an accent like the BBC of old. This is certainly part of my enjoyment of 50s sci-fi, and sometimes even of work which is definitely not trashy, such as Brief Encounter. As we grow less sensitive to features like beauty and form, we notice only the grossest aesthetic failings or risible special effects, such as a monster that looks like a person under a rug. The out-of-date social norms, in contrast, we spot immediately, and we often find them hilarious (perhaps because of an unadmitted social anxiety?).

A couple of months ago, I wrote an essay about camp, and much of what I say about trash here is also true of camp. Often a work of art can be trashy and camp at the same time, and the two identities clearly overlap. Trash offers similar pleasures to camp: a sense of superiority as we look down at what other, less discerning people take seriously. Just as bling is worn to show off wealth, nights spent with other aficionados at Eurovision or Rocky Horror can be critiqued as sly demonstrations of sophistication, but I think this is unfair: mainly they’re just an excuse to party and we don’t need to overthink it. And, as with camp, this ironising gaze has spread out beyond like-minded subcultural cliques into the population at large, so any sense of intellectual superiority from enjoying trash is well and truly gone. We are all devo now.

One field where there is little public enjoyment of trash is contemporary painting, sculpture and installation art. Many traditionalists, of course, argue that this is because a lot of highly-regarded contemporary art is trash to begin with, and that we have been hoodwinked by critics and art dealers into accepting it as meaningful and valuable because this oils the greasy wheels of the art market. Very often, too, as in the work of people like Koons, the Chapman Brothers, and Gilbert and George, the irony and pastiche are built into the artwork, so it becomes almost impossible to ironise it further. Contemporary art is also supported by abstruse and impressive-sounding theory, so many of the people who feel confident about laughing out loud at Plan 9 from Outer Space or Glen or Glenda may not feel the same confidence when looking at Hirst’s spot paintings in a gallery. And even if they do think his art is grossly overrated, there won’t be the same sense of fun in rejecting it, so I think they’re more likely to categorise it as rubbish (which has no glitter) than trash (which usually has).

Does this mean there is good trash and bad trash? After all, people commonly say that something ‘is so bad that it’s good’, and a cult builds up around it, as happened to the soap opera, Neighbours, on UK university campuses. The aesthetic response becomes ambivalent because there is often a genuine love of the trashy art form, with irony as a way of enjoying it while silencing a feeling deep inside that perhaps one shouldn’t. Plus, of course, there is the simple social pleasure of belonging to a gang of friends having a good time together. As I said earlier about noir, I think the key difference between trash and mediocrity is that the latter is flat and boring whereas the best trash has an outrageous, in-your-face liveliness that is as confident and brash as it is awful.

Personally I have no issue at all with a love of trash, and I don’t think we should judge people because they haven’t read Dostoevsky or watched Persona. Let’s not forget that the theatre staged at the Globe was generally seen as disposable, and who knows how today’s work will be judged in the future? Hard-boiled novels, for example, are now more highly rated than a lot of work which was seen as far more worthy and significant at the time – work which was said to ‘explore the human condition’ while pulp was just about guns and gals and gangsters. I’ll finish by making it clear that I certainly wouldn’t want everything to be trash, and I worry that the commodification of Art is sending us in that direction. I also personally think that a lot of modern art is pretentious rubbish and doesn’t deserve to be classified as trash. But as we open our carton of popcorn and sit back to enjoy The Wasp Woman or The Brain That Wouldn’t Die, let’s celebrate a simple reality: sometimes girls, and even boys, just want to have fun.

LET’S GET REAL ABOUT CLIMATE CHANGE

When I was teaching in Vietnam, I remember the subject of climate change came up in the staffroom one day. It’s embarrassing when I recall it now because I strongly disagreed with the other teachers in being what environmentalists call a ‘denier’ (although I hate this word because of its associations with people who say the Holocaust never happened, and therefore I much prefer ‘sceptic’). I argued that the idea that human beings could change the climate of the entire globe was typical of the arrogance of our species, believing that we’re so special that we can alter the future of the whole planet. What makes the memory rather shaming is the confidence with which I said this after having read a few articles about things like the medieval warm period. I was shooting off my mouth with very little evidence, like the worst type of YouTube pseudo-intellectual pontificating about a Marxism he has never even bothered to study in any depth. One of my fellow teachers accused me of being contrarian, and in retrospect he was almost certainly right.

OK, confession over. I’m no longer a sceptic: I’m now convinced that climate change is real and homo sapiens is almost surely the main cause. But if I’m honest, I still can’t say that I get all that worked up about it and I share little of the terrified concern that many people feel. Ironically, if we’re talking carbon footprint, I’m probably on the side of the angels these days in that I don’t drive a car, I hardly ever buy consumer goods unless the ones I have are broken, I wear clothes until they are faded and falling apart, I haven’t been on a plane for almost five years, I rarely eat meat, I use a fan rather than AC, and I put all my trash into the designated bags. But I can’t claim I do this to save the planet: my actions are simply the result of my personal lifestyle preferences. Any pretence that this is noble sacrifice on my part would be a self-promoting lie.

And that’s often the problem: we do what we want to do and dress it up in noble motives afterwards. The truth is that I don’t really care very much about the future of the planet. I will soon be dead, I have no children or grandkids to worry about, and most of my close friends are more or less my age, so they won’t suffer if or when the planet cooks. I know this sounds selfish and short-sighted, but I think Hume was right to say that we feel the pain of a splinter in our finger more than we do the destruction of a whole world if it doesn’t personally affect us. It’s easy to parade our moral virtue when it doesn’t really cost us, and I’ve known lots of people who lectured me about climate change, patted themselves on the back for being ethical consumers, buying eco-produce and saving the globe, and yet took three or four foreign holidays a year on long-haul flights, the latter morally justified by the dubious practice of carbon offset. At heart we all say one thing and do another (and the tragic thing is that those who don’t share this hypocrisy are often the most monstrous of us all).

A second reason that I can’t get very worked up about climate change is a sense of helplessness. In the modern world, we have a surfeit of information from a range of different media but generally all this does is create anxiety because it traps us in an atomised system in which it becomes extremely difficult for individuals to take meaningful action on a collective level. We just sit in front of our screens and either fret or fume. As a result, we have little option but to look after our own interests, but what makes sense for an individual (owning a private car) does not necessarily make sense for the collective (who all end up stuck in traffic jams in polluted, hellish cities). This leads to widespread cynicism, as politicians who fly around the globe in private jets advise us to minimise our more limited use of finite resources. They urge us not to buy SUVs and to turn off our air-conditioning, to make everyday sacrifices to save the planet, while the governments of which they are part – especially those which should take the strongest action because their countries are the biggest consumers of fossil fuels – do little except trot out hollow, self-serving exhortation.

Another reason I don’t focus much on climate change is that I think there are more pressing worries, specifically the emergence of a world in which the divine right of billionaires is taking chunks out of democracy and doing all it can to remove the human rights we have clawed back over the centuries from the monkeys on top of the tree. If surviving climate change means living in a neo-fascist globe where decency and the rule of impartial law no longer exist, the game isn’t worth the candle in my opinion, and the vast majority of human beings will be better off never being born. Frankly, I get a kind of pleasure from knowing that the oligarchs will fry just like the rest of us if the planet goes up in flames (unless their fantasies of a colony on Mars or their consciousness being downloaded onto software ever become more than fantasies). I know this sounds vengeful and self-defeating, and people will argue that saving the planet trumps even the fight against the oligarchs, but, as the sacrifices of the generation before mine against Nazism show, and despite the claims of evolutionary biologists, sometimes there are things that are even more important than survival.

Will humanity survive if the earth heats up by six degrees? Probably, in a few isolated places, like the jungles of New Guinea or the frozen fringes of the Arctic. But what we like to call advanced civilisation, possibly not. Does that really matter? From our human perspective, of course it does, but from the perspective of a Martian observer, not in the slightest. Even if our species becomes extinct, this is merely what has happened to 99.9% of all the species that have ever existed (according to my extensive research: i.e. Wikipedia). What would make us so tragically special if we go the way of the dodo?

This is where environmentalism falls prey to double standards and questionable logic in my opinion. One of its favourite arguments is that homo sapiens is just another species, no more relevant or important than a warthog or a worm, and therefore, even though we have the power to dominate the planet, we should refrain from doing so, because, unlike other creatures at the top of the food chain such as the tiger or the shark, we possess the special quality of moral agency. This turns us into custodians of the earth and imposes on us a responsibility to take care of the globe, including plants and non-human animals. In short, we should cease to be anthropocentric and embrace posthuman thinking, although the idea that we are a uniquely moral species is anthropocentric in itself. Why should we be the only species which can choose to act against our nature in this way if we are nothing more than a vehicle for our genes? The reality is that whatever happens to us, even if we are wiped out as a species, the planet will be fine. It will go on spinning. In many ways, I suspect it may thrive as it shakes off this cancer that is destroying it, and our absence will almost certainly be beneficial for biodiversity.

I must admit that I’m far from comfortable with the arguments I make in the previous paragraph. The truth is that we have to believe we matter even if cool, objective thinking suggests that we don’t, if only for a variation on the famous argument that if there is no God, all is permitted, which in this case would definitely include the slide into worldwide fascism. There are lies, or at least empty platitudes, that are necessary if we want to lead a life that is bearable, but that doesn’t stop them being platitudes. A stronger argument against much of what I wrote in the previous paragraph is that I’m creating a false binary between the struggle to reverse climate change and the fight against fascism, when in truth they are interrelated: we live in this fractured world in which we are destroying the planet because of the attraction of the strong leader and the lure of populist demagoguery that this will always entail, so we can never reverse climate change without silencing the poisonous lorelei of neo-fascism. I’m not sure I totally buy into this idea that the two struggles are actually just one, but the argument is definitely worth debating.

To conclude, I know this essay opens me up to the charge of nihilism, but I really believe we need to stop pretending that we care so deeply about climate change because the evidence suggests that most of us don’t. Or, if we do care, it’s only up to the point where it negatively impacts on our own wishes and desires. Or perhaps it’s simply a lack of imagination, our habituation to the reality around us right now, our inability to think beyond this. Whatever the reason, we must stop saying what we believe sounds like the right thing to say and begin to genuinely care if anything is ever going to change. But even if this personal epiphany takes place and we start to genuinely push to try to force substantive action by our governments, this needs to be global and systemic or very little will actually alter. The problem is that it’s almost impossible to imagine this happening in a world of competing nations run by leaders drunk on their power over a population that remains so stubbornly tribal. In that sense the evolutionists are probably right: for all our much-vaunted intelligence as a species, we can never transcend both our basic biology and our inability to organise smartly as a collective.

THE DUTY TO DIE

Modern medicine has led to a huge decrease in mortality from acute disease at the cost of an explosion in cases of chronic, disabling illness, at least in the developed world. As a result, there has been a slow and steady rise in public debate about assisted suicide, including the debate currently taking place in the UK around a Bill in Parliament. I read an article this week about this subject, Is There A Duty to Die?, written by John Hardwig (available on JSTOR and easily googled), and it was its focus on duty which I personally found most interesting. Debates about assisted suicide tend to concentrate on the rights of the individual to end a life that belongs to him or her alone, but Hardwig’s paper focuses instead on responsibilities to family and friends and society at large. I see this as a welcome corrective to a lot of current thinking.

I’m not saying that the standard, rights-based arguments for assisted suicide aren’t valid, but sometimes they seem to stem from the highly individualistic mindset of modern society, as if the person making the decision exists in a kind of vacuum set apart from the rest of the world. I imagine that the person considering this option of taking their own life does feel terribly alone and suffers from an aching sense of isolation as the debate unfolds inside his or her own heart and mind. But these decisions will have repercussions for everyone involved, and especially those who are fighting to take care of debilitatingly ill loved ones, and who are struggling to do so, emotionally, practically, and often financially. The thinking behind Hardwig’s paper is that the person contemplating assisted suicide has a duty to weigh its effects on everyone concerned, so, in a perfect world, these matters should be spoken about openly and decisions should be collective rather than individual.

No man is an island, as the famous quote goes. I see myself as a perfect example of the essential truth of this statement. In many ways my lifestyle is as close to that of a hermit as it could get without my going up a mountain and contemplating my navel. I live alone here in Gozo without a single local friend; my closest human contact is with the people who run the restaurant where I often go to lunch. Such friendships as I have are all online or happen via phone. And yet even someone like me who lives alone and whose parents and siblings are long dead has a web of people who would be affected if I decided to end my life. Yes, I’m sure it would be less painful for them to overcome any grief they might feel because I’m not a constant part of their everyday lives. We don’t speak on the phone every day, I’m not waiting in the house when they get home, I’m not chatting with them over meals together, my clothes aren’t hanging in the wardrobe, my books aren’t cluttering up the room. So it would be easier for them to forget me, and that would be a blessing in my opinion. But even the death of someone as socially isolated as I am would send ripples through the water for a while.

When we turn to the more common situation of someone living as part of a family or with a spouse, those ripples become tsunamis, so ideally any decision should be a collective one in which all close parties are involved. I say ‘ideally’, because when we are in the depths of pain and despair, we often have no energy left to consider anyone else; of necessity we become selfish and close ourselves off. But hopefully, in the calm at the centre of the storm, the thinking of the person considering suicide becomes quite clear and turns to their responsibilities towards those people they love, and they see themselves as part of something larger, something that will live on even if they as an individual are no longer there to share it. This is asking a lot of any human being, but perhaps this overcoming of self is possible with support and kindness and courage.

It might be argued that a rational decision is impossible in these extreme conditions and all that open discussions will do is set off conflict and negative emotions, and that of guilt in particular. But guilt is inevitable in these situations; it will certainly come following a suicide if these issues have never been discussed. You can’t have love without guilt because no one is perfect and we often hurt each other, even more so the people who have the deepest place in our hearts. The only people who don’t feel guilt are sociopaths who have no genuine feelings for others and see them as merely a means to an end. The painful difficulties of speaking openly shouldn’t be used as a reason for not speaking at all.

The truth is we don’t like to speak about these things because they make us uncomfortable and indeed do risk setting off all kinds of negative feelings. If someone does raise these issues, the conversation is often labelled ‘morbid’ and shut down, for in the modern world, there is almost no public acknowledgement of death. I remember many years ago I was on holiday in a little town in the Peloponnese, and one day there was a funeral procession through the streets. I don’t mean to sound disrespectful if I say that the corpse was like an ageing rock star being taken on a kind of farewell tour of the town and everything stood still as people had their chance to pay their last respects (and I felt pretty sure this was not a local dignitary who had passed away – it was just an ordinary coffin in an ordinary hearse without any of the pomp of an official event). Even in my own childhood in the UK, if someone died in my street the neighbours pulled shut their curtains for a day as a public gesture to honour the deceased. If I compare that to what happens in the UK nowadays, where death is handed over to professionals who make it as discreet and invisible as they possibly can, we can see that death, like so many other things, has become privatised in modern life.

This returns me to the argument at the heart of Hardwig’s piece: we can never actually be isolated units. Yet in public spaces today, we often are effectively alone: sealed off in our cars as we drive down the street, chatting on our smartphones as we walk, listening to music on our headphones. We have become molecules in an empty space which occasionally bump into each other, and thoughts of death are an unwelcome disruption of this atomised, semi-public existence. Acknowledging the reality and commonality of death is a way of connecting back with everyone else and putting shared meaning back into our lives. The line which stood out for me in Hardwig’s paper was, ‘We can conquer death only by finding meaning in it.’ I totally agree. But we tend to see it instead as an unwelcome reminder of our meaninglessness and flee from it as quickly as we can.

Many people will strongly disagree with Hardwig’s thesis and argue that an ethics which is based on one’s duties to loved ones is just as dangerous as one based on individual rights because both turn life and death into something which is negotiable and sacrifices the sanctity of life. This argument is most commonly advanced by Christians: that it is a sin to end a life which is given to us by God. My thoughts on this are twofold. First, the western world is no longer monolithically Christian and Christians must learn to live in a world where they co-exist with others of different faiths and none: they can believe what they want, of course, but they can no longer expect their chosen faith to dictate the wider rules of our society. Second, we can’t pick and choose when something is deemed sacred: an absolutist core is built into the concept and life is either sacred or it isn’t. Yet there are many occasions when this is conveniently fudged: in wars, including religious battles; in heroic sacrifices like that of Captain Oates, which is mentioned by Hardwig; in execution as a form of punishment for heinous crimes.

Another reason why assisted suicide should be a collective decision by the person in consultation with closest family and friends is that it removes a lot of burden from the shoulders of medical practitioners. At the moment, they are often the de facto decision-makers in a process which is deemed to be scientific and objective but I suspect frequently isn’t. For instance, the practice of triage in deciding who gets treatment and who is effectively left to die is a reality according to medical practitioners, even if we don’t care to admit it. Of course, doctors must remain a central part of any legal system which allows assisted suicide as a safeguard against its misuse, but their burden should be shared and the process in which this happens should be honest and transparent.

I’d like to detour slightly here in order to briefly mention the dangers of idealising the biological family. Many families are dysfunctional, so I intend the term in a broader sense: any group of people who love and support each other. Most commonly this is based on blood ties, but it need not be. In most cases, though, we will be talking of parents and children and siblings, and also in most cases, I hope and believe, there will be bonds of affection and love, and these bonds need to be supported by systems that bring out all that is most noble in human beings.

Hardwig’s contention that there are situations in which someone might choose to die in order to cease being a terrible burden on those they love and who love them in return may sound harsh and uncaring towards a person who is already suffering horribly. And I can understand the argument that legal assisted suicide could be another perilous step towards a utilitarian society in which human relationships become even more instrumentalist. But this is where modern technology has brought us, and we can no longer close our eyes and block our ears: this issue is going to grow increasingly important as populations age and chronic health conditions become pandemic.

I would add as a final thought – and as my opinion – that Hardwig’s focus on responsibilities rather than rights is part of a more general needed realignment of the balance between them. Of course we must still fight for our rights, especially at this moment in history when many of those in power seem to want to strip us of them, but, to adapt a famous example, we must not only ask what our loved ones and our communities can do for us, but what we can do for our loved ones and communities. With rights come responsibilities, and a society must reflect that fact if it wants to remain in good health.

OUR SEARCH FOR ULTIMATE REALITY

This week I came across a quotation from Niels Bohr: ‘Quantum mechanics does not describe reality – it describes our knowledge of reality.’ Now obviously I know zilch about quantum mechanics, but this quote seems to counter the popular notion, easily picked up from a trawl through YouTube videos, that the vast majority of scientists are trenchantly materialist and believe that science based on reason and observation can give us direct apprehension of reality. It seems to be physicists who most often make statements like Bohr’s, while biologists tend to display less epistemological modesty, presumably because they are studying living organisms in their very material existence on planet Earth, not aiming to discover the ultimate nature of the universe.

To me, there seem to be four options regarding the latter: it is Matter, with mind as its epiphenomenon; it is Mind, creating the illusion of matter; it is a dualism of Mind and Matter linked in some mysterious, unfathomable way; or it is a singular substance and it is only our limited human thinking which experiences it as a binary opposition. I don’t see any way that we can choose between these options (other than subjective preference), since there seems to be no method that could ever provide sufficient and satisfying evidence for any of them as an underlying ontology.

In arguments about the potential and the limitations of science, scientists, and physicists in particular, will always hold the advantage because they can understand what non-physicists say, while we generally cannot understand them, so we have to take it on trust both that what they say is accurate and that they are not using their scientific background to gain a rhetorical advantage. If physicists tell us they have discovered the Higgs Boson, for example, we have no choice but to believe them, just as we must believe an expert in ancient Chinese pointing at a text and telling us what the characters mean. To understand the claims of quantum physicists, we also need a grasp of higher mathematics, and most of us are limited to the ability to count, to add, subtract, multiply and divide, which places us at a huge disadvantage. So we simply have to accept that when they talk about multiverses or string theory or a nothingness in which mysterious particles occasionally bubble up from nowhere into being, their words are grounded in observable and knowable facts rather than being closer to the vagueness of religious or mystical language, and that their claims about ultimate reality have much more substance than the poetry of creation myths.

If indeed the concept of an ultimate reality has any meaning. The idea of a foundational reality, whether God or some scientific alternative, may be a construct of the human mind and not necessarily exist. A lot of modern physics – which of course I admit I can’t begin to comprehend – seems to consist of a search for the ultimate particles of existence, as if there is a bottom to this rabbit hole, but perhaps there simply isn’t. Scientists, unlike artists and philosophers, tend to work from the assumption that if a question can be asked, it has a definite answer, at least in theory: if they get bogged down in defining abstract terms rather than observing the natural world, what they are doing is philosophy, not science or what used to be called natural philosophy. In short, on some level they have to believe that what they are seeing is real and, most importantly, knowable, at least until they leave the lab and step back into the murkiness of everyday life.

One attempt to bring our subjective experience of everyday life into philosophy is the concept of qualia, although many philosophers such as Dennett deny that the concept has any coherence. I struggle to understand why they have such a problem with allowing subjective experience into their thinking. Why should the stone that Johnson kicked be real, but his sensation of pain when he did this not be real? We experience both, and this happens internally in both cases. Why should the hills I see as I walk through them each morning be real, while the feeling of calm they evoke in me somehow not? Admittedly, the pain Johnson felt can be explained by nerve signals to the brain, and my feeling of calm by the release of chemicals like dopamine. But these explanations are rooted in a kind of hierarchy of reality, in which substances like dopamine and electrical impulses are granted a kind of existential solidity that is denied to subjective feelings, which are dismissed as somehow less real. Is the word ‘real’ doing anything more here than justifying an unacknowledged metaphysics?

I suspect a lot of the difference in thinking can be explained by personality type and then rationalised through the use of the intellect: clever people are highly skilled at turning a disposition into a thesis dressed in logic and garnished with what is purported to be evidence. None of us likes the vision of the world which we have built up during a lifetime to be shaken and stirred. People like Dennett, Dawkins and Crick seem to relish thinking of themselves as mentally tough, seeing the world for what it really is, above the self-delusions and cowardly consolations of the religious believer and the mystic. The opposite holds true for those of a non-materialist bent. People with a propensity to believe will see the image of the crucifixion in the Turin Shroud and be quick to turn coincidence into meaningful synchronicity. Human beings are tricky creatures. One thing about which I agree with the advocates of materialism is our capacity for self-deception: I just see that capacity being as active in Dennett and Dawkins as it is in everyone else.

When we think about scientists, we usually bring to mind people like Copernicus, Galileo, Newton, Einstein and modern quantum physicists, all of whose work has sparked deep philosophical speculation and has also led to prevalent metaphors that have influenced our whole way of thinking, but the majority of scientific work is humdrum and everyday, Kuhn’s ‘normal science’. That doesn’t mean it’s not important – far from it – but it does mean it doesn’t require a specific outlook or philosophy or metaphysics; there are procedures which must be followed, but not a fixed way of thinking other than a general commitment to logic and reason and observation. This flexible relationship between theory and practice is very enabling and isn’t limited to science; we need it in any field of human endeavour. In that of poetry, for instance, there is room for the fascist Pound and the communist Neruda: the practice of writing literature overrides the political distinction. The disciplines of specific fields are what determine the processes we follow, while the personal beliefs that we bring to the laboratory or telescope or blank page are much less important.

Although science is often perceived as a search for absolute truth, and despite the proclamations of some of the advocates of this idea, science is essentially pragmatic. The evidence for the correctness of theories in the abstruse realm of quantum mechanics, for instance, is that it works in practice in things like quantum computing even if the scientists themselves struggle to understand why. Theories come and go, but this test by efficacy remains. As someone who knows little about science and is at heart a writer, if I were forced to choose whether we can ever really know an absolute truth, I would come down on the side of its being impossible. Maybe this truth we are seeking is a chimera; on the other hand, perhaps those who see science as a form of teleological evolution are correct when they depict the history of science as an edging ever closer to this final truth. One thing is for sure, the universe is weird: so is it beyond the realms of possibility that, despite it breaking the basic principles of logic, both of these things may somehow be ‘true’ at the same time? But that’s mysticism, I guess, and our response to this idea may depend on our whole way of thinking rather than be something we can prove or disprove.

CREATIVITY & THE LOSS OF SELF

Years ago, I watched a documentary about Ray Davies, the creative force behind the Kinks. I remember him saying at one point that he felt Waterloo Sunset was better than the person he could ever hope to be in his everyday life, the better human being he wished he could be. I understood at once what he meant. As writers, painters and musicians, we step outside of our quotidian reality for a while: the messy, trivial reality of cleaning our apartment or making sure we have enough money to survive until the end of the month. Our art can be good, bad or ugly, it makes no difference: for a while we become Dorothy, leaving Kansas and landing in Oz.

I know saying this might sound pretentious, but it’s not really such an extravagant claim. It does not mean espousal of some transcendental realm or world of Forms where superior beings called artists explore landscapes of meaning and truth beyond the reach of the philistine masses. In one way, it’s nothing more than the distinction between our public and private selves, but at the same time it’s also more than that: it’s the difference between writing something in a diary which we never intend anyone else to read and forging a poem for public consumption. When something becomes ‘Art’ and moves into the public sphere, it gains a more general significance, an extra layer of meaning. It turns into a symbol or a metaphor and accrues a level of abstraction, even if it’s strictly representational and aims only to hold a mirror up to nature.

One idea that has been kicking around since at least the days of Baudelaire’s flâneur is that an artist can turn his life and person into his work of art (in those early days, it was always a ‘he’), an idea picked up by people like Wilde and Whistler and Dalí in their various times, and which then took full flight in the avant-garde in the second half of the 20th century in the work of groups like Fluxus, one-off happenings, the theatrics of performance art, and the careers of working artists like Gilbert and George. But the distinction between the human beings and the work remained. The Ono of Cut Piece was not the Ono who brushed her teeth and put out the garbage.

How is something different when it takes on the mantle of Art? To return to the brushing of teeth, how is doing that on a stage or in a gallery in front of an audience different from doing it in the bathroom at home? First, of course, in the latter case it is essentially practical, something we do to stop our teeth rotting. We are not meant to interpret it as a metaphor for something else, as always happens once something has been singled out as Art. Then the brushing of teeth becomes a statement or a message – perhaps a comment on our contemporary obsession with hygiene and cleanliness, perhaps a satire on the meaningless routines of suburban life, perhaps a Freudian reference to the oral stage and our subsequent sexual development – you pays your money and you takes your choice.

The key thing is that there will always be this choice. It is almost impossible in this staged performance to see another human being brushing their teeth as a brute physical action and nothing more. It becomes metaphorical. One problem with criticising modern art for its obsession with theory is that such criticism falls prey to the illusion that it is possible to have Art without a theory. Yet even if the critics and historians haven’t caught up with the work that is being produced, as perhaps may happen at first in low-status art forms such as early blues, there is still an assumption, a consensus, in the minds of the artists and the audiences who encounter it about what is being presented. If theory is rejected for so-called ‘common sense’ or vague claims about ‘human nature’, covert meaning still always remains, even when the artist, like Warhol in many interviews, attempts to avoid or deny it.

This is clear from a quick look at photography, an art form in which it might be claimed that a machine copies the reality that we see before our eyes with limited human intervention. But a glance at the work of a photographer such as Doisneau walking the streets of Paris and recording what he witnesses around him makes clear that a thousand choices are made: what to shoot, from which angle, with which lens, where to crop the negative, which ones to discard and which to keep and sell and publish, and so on. Even his most famous photo, Le Baiser de l’hôtel de ville, which captures a spontaneous moment of real life on the streets, becomes a symbol of young love or an expression of Paris the city, rather than merely a brief snatch of objective daily reality caught on celluloid. It is a testament to Doisneau’s talent that there remains a freshness, a sense of casualness, about his photographs, but it is not true that they are simply reflections of real life in all its haphazard flow. Everything which we choose to label Art is constructed at some level and then later decoded by its audience.

Art also, of course, takes place in special spaces designed in order to house it, despite various attempts – happenings, Land Art, graffiti, public murals, the wrapping up of buildings – to move it away from the galleries and concert halls and theatres. These venues signal very clearly that we have entered a world which is different from real life and we are therefore meant to approach it in a special way. This world is full of meaning, not the ordinary significance of events in daily life, but a meaning that is abstracted and collective and generalised. Many attempts have been made in modern Art to capture the randomness of everyday life, such as Tzara’s nonsense sound poetry in the Dadaist cafés, the automatic writing of the Surrealists, and Burroughs’ cutting up of text and rearranging it based on pure chance. But meaning tends to slip through even in these extreme cases as soon as an audience becomes involved and makes its interpretation. Art unfolds in a world removed from everyday reality and all attempts to dissolve this difference ultimately fail.

In the last part of this essay, I’d like to turn to the more psychological aspects of this topic, which are probably closer to what Davies was thinking of when he made his statement. The person he was in his daily life somehow seemed to him a much less successful and worthy human being than the one who wrote Waterloo Sunset. And even if we never experience the success that Davies had in his career, there is a sense of wearing our Sunday best when we compose a song, write a poem, or paint a picture. We are transported outside of ourselves, and our self-centred desires are stilled for a while as we succumb to the rules of the material in our hands, be it words or paint or marble. We are no longer flawed and limited creatures struggling with the wayward trivia of daily life.

This ability of Art to transport us beyond ourselves when we create it forms the rationale for Art as therapy. There is something meditative about the state of mind we enter when we dance or paint or sing, even if in most art forms, unlike in meditation, the body remains highly active. This is more than just the joy of singing in the shower or the recapture of childlike play: it is blended with an adult seriousness that is working under rules that we don’t set. Nor is Art alone in this power to transcend the ordinary; doing sport can offer the same sense of escape, the same sacrifice of self to something bigger while we are doing it. Perhaps any activity can offer this if we undertake it with sufficient concentration. Our restless mind is silenced as we immerse ourselves in the activity, and for a short time we are possessed. Or, if this sounds too dramatic and other-worldly for those of a practical bent, we become rigorously focused. Or perhaps it is even simpler and our everyday mind is emptied, at least for a precious passage of time.

Anyone who has ever had anything published will probably recognise that sense of dislocation when we see our own words printed on a page, as if they no longer come from us, and this is even more true when we watch ourselves on video or listen to a recording of our own voice. This is a moment of shivery unease, as happens when we catch ourselves by accident in a mirror or a shop window. We are used to experiencing ourselves from within and suddenly we view or hear ourselves from without, as another human being sees us. This moment is truly uncanny. The unheimlich can be terrifying, of course, and our capacity to dream also gives us the power to conjure nightmares. The wonderful thing about Art is the structure it provides to tame these monsters, like Goya painting his Black Paintings. The discipline of Art can spirit the monsters away, at least for the period of time when the work is being created.

When we leave this total absorption in our act of creation, our loss of self vanishes in a flash and can feel like a passing illusion. We are back in our own skin with all the failings and anxieties of our compromised humanity. Davies is no longer the composer of Waterloo Sunset but a man who brushes his teeth and is full of parochial worries and agitated desires. The happiest moments in life, I would suggest, are those when we forget ourselves completely, and making Art is one way we can reach this emptiness and bliss. Our urge to creativity is truly a blessing in life as it nourishes us with these transitory spells of release that help to keep us sane.

ART & SPIRITUALITY: A REPLY TO ANDREW KLAVAN

This week I came across a YouTube vlog entitled Abstract Art Reflects Our SICK Culture by Andrew Klavan. His basic argument is that the modern west is morally sick, and this sickness is reflected in modern Art, and particularly in abstraction. I watched the video twice, and took notes on the second occasion, because Klavan is clearly passionate and intelligent and knowledgeable, and I didn’t want to dismiss his ideas out of hand, but with the best will in the world it seemed little more than a rant at times, or at best a piece of polemic for his personal brand of Christianity, an assortment of pet peeves about decadent contemporary life rather than a structured, coherent argument against abstraction in painting. There is little aesthetic content in his vlog.

My first question to Klavan would be: why focus on abstraction as especially pernicious? I can understand why many people dislike what they see as the ugly distortions of Expressionism or the triviality of Pop Art, the dryness of Conceptual Art or the shallow cleverness of Postmodernism. But abstract art, with its focus on colour and shape, is generally inoffensive; indeed, I would personally argue that it makes more sense to criticise it for its tendency towards the merely decorative rather than to damn it as a symptom of societal and moral decline. I can see why a Christian might dislike Bacon’s reworking of Velázquez’s portrait of Pope Innocent X or Serrano’s Piss Christ, but canvases by people like Kandinsky or Klee hardly seem to contain enough discursive content to be agents of moral degeneracy.

Klavan, as a well-educated man, must know that perhaps the most important theoretical work about western abstraction is Kandinsky’s Concerning the Spiritual in Art, and that other key figures such as Mondrian, Malevich and Rothko also saw abstract art as a movement that strove to express the spiritual rather than merely record the phenomenal. Yet Klavan chooses not to mention any of these people or the ideals that informed their work, but focuses solely on Pollock. I can understand an argument that abstract painting failed in its desire to express the inexpressible because it was attempting the impossible, but not that its intention was malign. Nor can I dismiss a suspicion that Klavan singles out abstraction for criticism because, unlike Pop Art or Photorealism, which are radically materialistic, it is grounded in a different concept of the spiritual from that of his own brand of Christianity, which makes abstraction a more dangerous opponent to the one true faith he espouses.

If Klavan’s main problem with abstraction or the modern world in general is its materialism and superficiality, I share his reservations. As belief in traditional Christianity has dwindled in the West, it is common for people to say that they don’t believe in God, but they believe in something, and the word which often comes to people’s lips in this discussion is ‘spiritual’. But does this word actually mean anything or is it just something we say to console ourselves or to flatter ourselves that we are deeper than we really are? Where I disagree with Klavan is his remedy for this loss of deeper meaning: a return to old-time religion with its dogma beyond question, its cosying up to authoritarian regimes, and its barbarity towards heretics. Christianity has always had a problem with syncretism and in my opinion that makes it far more dangerous than paint dribbled onto a canvas.

In his video, Klavan never discusses the spiritual or questions what it might mean; he just takes it for granted that everyone knows what it is and Christianity alone can offer it as the only cure that can rescue our sick society. For me, this avoids the difficult questions which he skirts around at points but never faces head on. He speaks of a new Renaissance when we combine what we have learned from science with a re-emergence of Christian faith, but this assumption that a revived Christianity will solve the woes of the world seems hopeful at best if the history of the established Christian church is anything to go by. And there is the added question, of course, of what any of this has to do with colours and shapes on a non-representational canvas.

Another concern of Klavan which I share to some extent is the loss of Beauty in painting in the 20th century, by which I assume he means the work that came after the Impressionists and with the advent of Modernism (though he seems to have no issue with basing much of his vlog on quotes from the arch-Modernist Eliot, presumably because Eliot later converted to Christianity). In focusing on a painting by Pollock and making it his key example of abstract art, Klavan is being highly selective: rhetorically astute, but disingenuous. Pollock’s work is much more disorderly at first glance than a Mondrian or a Klee, and much more likely to attract comments that ‘my five-year-old daughter could do that’ or ‘it looks like someone has just thrown the paint at the canvas’; if there is beauty in Pollock’s work (and I am not an enormous fan, although I feel it has a clear integral structure and is not just the random splashing of paint and other substances onto a canvas), it does not yield itself up in the easy manner of many other examples of abstract art. Elsewhere, contrary to what Klavan claims, some of the most intensely beautiful paintings of the 20th century were the work of abstractionists, with Rothko as a prime example.

But, like many other topics in the video, this lament about the loss of Beauty is briefly mentioned and then passed over for other, different examples of things of which Klavan disapproves: de Sade, rap music, the ‘idolatry of scientism’, the Hindu concept of maya, feminism that sees being a woman as ‘a social construct’ and aims to make her ‘an imitation man’, transgender activism that believes that ‘men can become women’, art as a product of ‘theory’ rather than of ‘lived experience’. If I interpret Klavan’s words correctly, even Blake gets a rap on the knuckles, presumably because his Christianity edged towards a mysticism which transcends the dogmas of official faith and is therefore perilously close to being heretical. With the possible exception of maya as one of the eastern philosophies which influenced many western post-war artists, I fail to see a connection between any of the things which Klavan condemns and abstract art. I assume that this scattering throughout the video of what he sees as examples of contemporary decadence is offered up as proof for his contention that the modern world is sick, but his argument would be much stronger if it didn’t read like a list of personal bêtes noires.

Like many modern-day Christian advocates I watch on YouTube, Klavan seems to long for a past that exists only as nostalgia. He states that our modern morality is ‘basically sadism’, while never mentioning the horrors of a religion which waged religious wars and burnt people at the stake for the slightest hint of heresy. Klavan wants a future Renaissance and predicts that ‘a new dawn is coming’, but the past Renaissance that he eulogises for creating the greatest of art also paved the way for the scientific revolution and the modern rejection of absolutist Christianity and a contemporary world which he seems to so deeply despise. Ultimately, therefore, I feel there is little in his video that genuinely engages with the arguments for and against non-representational art, what it can and cannot do, and consequently my response has centred on religious belief rather than artistic practice. Klavan’s core concerns and motivation seem religious rather than aesthetic, with the latter used merely as a wedge to slip in a piece of polemic for his personal brand of theology.

GRAMMAR SCHOOLS

As I write my weekly blogs, I become more and more aware of how important going to a grammar school was for me personally. It took me beyond my social class of birth, at least on a cultural level, and opened up a whole new world of knowledge and opportunity. The UK state education system in which a minority of children from all backgrounds attended grammar schools was scrapped in most of the country not long after I entered mine, replaced by what was argued to be the greater egalitarianism of ‘comprehensive’ schools in which all pupils went to the same institution regardless of academic performance.

I will begin with a short description of what happened in state education when I was young. At the age of eleven (or sometimes ten, as was the case for people like me because of my date of birth), every child took an examination called the Eleven Plus. Those who achieved a high enough score in this exam were placed in a grammar school; the rest ended up in what was called a secondary modern. I’ve not been able to find an average figure for the percentage of children who were placed in grammar schools after taking the Eleven Plus because it varied from county to county, but I’ve seen on Wikipedia regional figures as low as 10% and as high as 35%. I imagine that in my region it was closer to the former.

The grammar schools creamed off the most academically gifted and taught a curriculum that was modelled on what happened in private education. In my case, further streaming took place on arrival at my secondary school, with pupils divided into three streams based on our score in the Eleven Plus, which sometimes led to our following a different curriculum (Latin rather than woodwork, French instead of German). Unsurprisingly, given the school’s classical music in assembly, its anthem of Gaudeamus Igitur, and its mimicking of schools attended by the upper classes, the streams were given Greek names: Alpha, Beta and Delta.

All these years later, arguments still rage about the changes to public education that happened in the 1960s. Many people advocate a return to grammar schools (although ‘return’ is something of a misnomer, since in some places grammar schools continue to exist), while others staunchly support the move to comprehensives. I have to admit I’m conflicted and unsure where I stand on this.

The awful thing about the Eleven Plus was that it more or less decided the entire future of a child at such an early age: once you failed this test, it was almost impossible to move to the other type of school. I remember one boy managed it because he had what we would now call pushy parents. They seemed terribly posh to me at the time, but in retrospect I’m pretty sure they were at most lower-middle-class. All the same, they had the wherewithal to get him upgraded to the grammar school, and they were vindicated six years on when he became Head Boy and got excellent ‘A’ level results. He could have become the poster boy against the Eleven Plus.

But most of the families in my industrial area didn’t have the skills or the confidence needed to take on the authorities, and their children who attended secondary moderns were effectively seen as ‘factory fodder’. This created a split in our class between those of us, like me, who were being offered an escape from the factory floor and those who were left behind and were labelled as much too lumpen to do anything other than work there. This led to a definite tension. I remember walking to my school each day and having to face a gang of girls from the local girls’ secondary modern who threw things at me and called me a ‘grammar grub’. My own sister, eight years older than me, had gone to that school. If the secondary modern boys were destined for the factories, the girls were viewed as nothing more than future housewives and mothers who might also clean the homes of the rich for a bit of cash on the side.

However, it can still be argued that education in those days offered more social mobility than exists now, although we like to pretend as a society that class is a thing of the past and anyone can reach the top if they work hard enough (possibly because of our aping of American culture). A lucky few moved up into the middle classes: some, like me, mainly on a cultural level, but a handful into positions of influence and power and a comfortable middle-class salary with a home in the leafy suburbs. It can also be argued that despite the aspirational snobbery of things like Gaudeamus Igitur, the system genuinely sought at the time to educate at least some of the children who hailed from a lowly background and increase their cultural capital, and had a real desire to raise societal standards and make lives better. We may criticise them now for their patrician attitude, but our representatives in parliament often genuinely cared about the people in their charge in those days.

I said earlier that I felt conflicted about the return of grammar schools. Perhaps it would have been more accurate to say that I think we are asking the wrong question. Rather than proselytising for or against grammar schools, and as always in Britain obsessing over class (and I know I’m as guilty of this as anybody), we should overhaul the whole state education system, which seems unfit for purpose, and drag it kicking and screaming into the 21st century. Many of its critics would agree that the system needs radical change, but the problem is that we cannot agree on what it should change to, so what has tended to happen since I was born has been a new initiative every ten years based on the latest fashionable thinking, which is then reversed in the following decade.

It’s hardly a new idea, I know – it’s been around since at least the 1960s – but our state education system as it stands seems based on the concept of the factory, where the raw materials (children) enter at the age of five, are processed in an identical way, and eventually emerge as a product at the end of the conveyor belt. I can see the economic imperatives that dictate why this occurs, but it feels like herding cattle and takes no account of the specific skills or needs of the individual child. It is mass education for the masses, while the children of the establishment and the elite go to the same schools that their parents, and often their ancestors, attended. This is not only bad for social mobility, but eventually makes a country politically and economically stagnant.

So why, in an age of IT and now AI, can’t learning be more individualised? What is the point of teaching bored children geography, for instance, if they want to learn about computer programming or the practical skills which will help them find a job? Is knowing the capital of Mali or how ox-bow lakes are formed really going to be helpful to them in adulthood? And it’s not as if they leave school knowing lots about the world even after those five long years of geography. We all fail to learn things if they bore us and seem to have no purpose.

One possibility is to place students (but not as young as eleven) onto varying learning paths but in a transparent way, as happens in Switzerland, where, after a few years of secondary school, pupils at the age of around fifteen choose between two options: the general or the vocational. The first is geared towards academia and university; the second is more practical and usually includes an apprenticeship. I have some personal experience of this, having taught English to groups of Swiss apprentices (although many years ago now), and I must say that they were diligent and respectful, far from the sullen teenagers I hear teachers often complain about in the UK. They got their heads down and worked, not because of some overpowering desire to improve their English, but because they needed a certain score to pass the overall course and earn the qualification (a qualification which really mattered, unlike GCSEs for most pupils in the UK). The Swiss youngsters were far from bursting with curiosity, but they were disciplined because they recognised the need to succeed at what they were doing.

I have no idea how the young people themselves felt about this, and whether there was any stigma attached to following the vocational path. Human beings always stratify by making comparisons between each other and class consciousness is not confined to England. So I’m not necessarily advocating this type of system, but it does seem to make more sense than having students who are bored out of their skulls sitting through lessons about the Treaty of Westphalia. Those who disagree may argue that this reduces the purpose of education to merely preparing someone for the world of work, and we should aim for a more rounded, liberal education which teaches critical thinking and more general intellectual skills. In my opinion, this is rather wishful, because in a world where education is centred on tests and exams, teachers will teach to the test, so very little rounded education ever takes place in reality.  

At the moment the system as it exists seems to be more about policing our children and keeping them off the streets and out of harm’s way while their parents are busy at work. This process of socialisation, although sometimes criticised as normative, or even as quietly oppressive, has to be an essential part of education of the young. But is it best served by classrooms of thirty studying subjects in which they have little interest, stuff that they rapidly forget when they leave school even if they learn it and parrot it in examinations?

I have focused so far on the young, but the idea that we cease being educated once we are squeezed out of the pedagogic sausage machine is limiting and depressing, and education for life should be more than merely a fine-sounding slogan. In theory, at least, developed countries could and should offer their citizens a chance to study and develop their knowledge and thinking, especially those who missed out when they were young. At that age, many of us don’t see the value of learning because we are obsessed with other, more pressing issues such as sorting out our hormones. When people discover a passion for something later in life and would love the chance to study it in a structured environment, an affluent society should do all it can to help them to do so. This world of learning and leisure was what we were promised lay ahead when I was growing up. In reality, though, we have somehow created a world where many people have to work two or more jobs simply to keep a roof over their head and food on the table. Few have the luxury of time to think, and one of the consequences of this is the political slanging match that envelops us today.

Education, like many other societal systems such as taxation and benefits and pensions, will always be out of date: we are so often fighting yesterday’s war and playing catch-up. But for all the furious arguments and swings of the educational pendulum over the last sixty years, little has really changed, and the same sense of gradual, inexorable decline in standards still pervades. The upper class on the whole don’t care about state education because they take no part in it, the middle class will game any system that comes into existence, while the poor will take what they are given and do as they are told. Young people are often berated for dreaming of instant celebrity rather than studying hard to forge a future career, but when the latter seems no more achievable than the former for most of them, who can blame them? In a world dominated by technology, the few who rise from the bottom will do this in spite of, rather than because of, their official education. We need to think more radically than merely squabbling over whether to return to grammar schools.

ECHO CHAMBERS & KEYBOARD WARRIORS

This week I’ve been reading a book by John O’Farrell, Global Village Idiot, which is a collection of opinion pieces he wrote for The Guardian at around the time of the Millennium. I want to like these short articles, I really do, since I recognise that they are clever, knowledgeable, witty and well-written. But somehow my response is lukewarm.

One obvious reason for this is that I buy most of the books I read these days at a second-hand sale held by an animal charity here in Gozo where each book costs a euro, so they’re hardly hot off the press. This kind of topical writing which responds to issues which were buzzing at the time suffers when it is twenty-five years old and the topics and the people in them have lost their immediate relevance. But there’s another reason for my tepid response, I think: I agree with most of what O’Farrell says.

Compare this to how I respond to Clarkson or O’Rourke when I read stuff written by them in the same genre. I want to release my inner McEnroe and scream at the words on the page: ‘You cannot be serious!’. In contrast, O’Farrell seems eminently sensible to me, and all I do is nod my head in silent agreement and wonder why some people cannot see it, whereas I engage with the former pair of authors at a much more visceral level.  Occasionally I wonder if they write what they do simply to make money and don’t believe a single syllable of it, but most of the time I take them at their word and spit feathers. They are deliberately and entertainingly provocative, and I quite enjoy being provoked, and generally we are far more passionate in dispute than we are in agreement. Which brings me to the focus of this week’s essay, which is the weird world of online vlogs and their unholy mix of echo chamber and gladiatorial conflict.

People like me who fret about the effects of YouTube tend to focus on its cultish nature, the way social media gathers us together into like-minded groups reinforcing one another’s biases and demonising anyone who thinks differently. The vegans tut at the moral turpitude of carnivores, the atheists chortle at the stupidity of the religious, the misogynists rail against feminism, the right-wing warn us of the evils of Marxists, the left-wing vilify all business and corporations. Then, when we leave our computers and look at the real world around us and see the social and political polarisation, especially in the US and the UK, we blame the online world.

The polarisation in society at large is certainly mirrored on the internet. There is a tendency on sites like YouTube to view everything as a struggle to the death. It’s very common to see a title that says something like ‘Jordan Peterson DESTROYS woke feminist’, or ‘Atheist unable to answer ONE simple question’, but when we click and listen to the videos, no one has been destroyed and the atheist has given a clear answer, although it is only their opinion and has been proved neither right nor wrong: two people have simply disagreed, and rarely does one of them seem the obvious victor. Another word that is used all the time is ‘debunk’: the vlogger searches the internet for the videos of opponents and claims to refute what they have found in them, but normally all they have done is restate their own opinion, and the final truth, if any final truth exists, remains undecided.

Another favourite trick which is allied to this is to put oneself up against a loony-tunes opponent. Hosts scour YouTube for ridiculous exemplars of whatever they want to argue against, as if the person they have found represents everyone on that side of the argument. Vloggers who agree with J. K. Rowling’s opinions on transgender issues, for instance, find the most ludicrous trans activists with hairy chests, rings in their noses, and half-baked gender ‘theory’ spouting from their mouths. Dawkins has sometimes been guilty of this straw man approach, although these days he does tend to restrict himself to opponents of higher quality, and in my opinion this exposes some of the weaknesses in his arguments, which is no bad thing, since this testing of ideas is what serious discussion is supposed to be all about.

A genuine debate sometimes happens, but this is relatively rare. Two ‘big names’ are often pitted against each other, but this is less an academic exchange of views than twelve rounds in the ring, and the title of the video will generally make this clear by using words like ‘versus’ or ‘face-off’. Interestingly, the better examples I have seen of these discussions are often those which are set up like a traditional debate, with clear roles of moderator and affirmative and opposing teams, and strict rules about timing and behaviour. These videos are still set up as contests, though, so we rarely get two people trying to reach some kind of common ground, using the discussion as a way of coming to a compromised conclusion rather than a situation where both parties merely reiterate and reinforce their opening position. And to show that it’s all a competition with a winner and a loser, there is often a vote among the studio audience at the end to see which team ‘won’.

But can we really blame YouTube for this, or the hosts of the vlogs? The truth, I suspect, is that we rather like this tribalism in Anglo-American culture, where everything is a zero-sum game (is this a product or a result of our political system of winner-takes-all?). It’s what we want and we vote with our feet if we don’t get it, so that’s where the advertising revenue goes, and reasonable, civilised discussion will inevitably lose out to pistols at dawn. The comments section below the video underlines this: it is often vicious, with people piling nastily into each other, sometimes giving the impression that they haven’t even watched the video but have just come spoiling for a fight. If the public intellectuals in the video sometimes seem unnecessarily aggressive, they come across as doves compared to many of the general public who post below.

This antagonistic approach seems largely cultural to me, and I will detour briefly to describe my experiences as a teacher of English to non-native speakers as my anecdotal evidence for this claim. The first time I taught groups of students from east and south-east Asia, I had to learn a totally different approach in order to engage them. With people from European countries, I simply had to raise a topic for debate – say, euthanasia – go through some of the relevant vocabulary, and then let them loose. The problem wasn’t to get them speaking, but to prevent the exercise descending into chaos and to persuade them to take turns and not speak over each other, as they rapidly split into two opposing, heavily committed groups.

In contrast, I can still remember my first experience with a class that consisted entirely of Asian students. I set the exercise up in my usual way and waited. At first there was an embarrassed silence as eyes glanced down and no one wanted to speak. Eventually, when they finally got going, there was an immediate attempt to reach a collective opinion on the chosen topic. Learning from this, the next time I elicited some common arguments for and against beforehand, split the class into two groups, and gave each group instructions whether they should argue pro or con. Yet still the class soon morphed into a search for group concord. In the end, in my teaching practice, I gave up on this kind of activity and gave them tasks instead which were a collective exercise: for example, designing a shopping mall. For me, as a westerner, this was boring in the extreme, but, in terms of my teaching goals of getting them speaking and using the language, it worked.

I know I am stereotyping, and not all Asian students are like this, especially nowadays (the lesson I have outlined here happened more than twenty-five years ago), but I think there is still some truth in the stereotypes. If most of the vlogs online came from people from Japan or China, I feel sure the online world would be very different and there would be much less ‘debunking’ and fewer keyboard warriors. In general, we welcome conflict in Anglo-American culture because we see it as creative, and I don’t doubt this is sometimes true. The problem, though, is that we westerners tend to think we know more than we do and speak far more than we listen, especially given our recent denigration of experts as made famous by Michael Gove.

The irony is that although we in the UK and the US love to think of ourselves as rugged individualists who would never follow the crowd, we gather together in echo chambers when we are not taking disputatious positions in free-for-alls. We seem to end up with the worst of both worlds: an aggressiveness which sees winning as the most important thing, combined with an unadmitted groupthink. Most of us could do with a lot more modesty and a greater awareness of how little we really know. This is nothing to be ashamed of, and maybe both of these countries would not be the shambolic messes they currently are if we used the internet more frequently to learn rather than to divide into two tribes who simply shout at each other.

SURVIVING RETIREMENT

Many among my Baby Boomer generation dreamed of the day they retired, and especially early retirement at fifty or fifty-five. No more struggling to get up on cold Monday mornings to do a job that they found meaningless, no more self-important managers swanning around in meetings while treating them like underlings, no more working like a dog to make money for The Man. They could finally step off the treadmill and write that masterpiece or set up their own little company to do what they’d dreamed of doing all of their lives. Freedom at last.

But the reality is often very different. Assuming eight hours’ sleep (which is far from guaranteed, since the quantity and quality of our sleep tend to deteriorate rapidly as we age), that still leaves sixteen hours in a day, or well over a hundred in a week. That’s a hell of a lot of hours to fill, even for the busiest of bees.

That’s assuming we’re still here to fill them, of course. My father, for example, having started work at the age of fourteen and then worked for just over fifty years, retired on a state pension at the age of sixty-five. At the time, he was slim and seemingly fit. Nine months later, he was dead.

Before I started writing this piece, I had assumed my father’s death shortly after retirement was a very common phenomenon, but a quick skim through the research on this suggests my thinking might have been swayed by a mix of availability and confirmation biases, and perhaps this death spike is a myth. Nevertheless, my guess remains that it might have been more common fifty years ago for working-class men of my dad’s generation, for two reasons: firstly, because they went from physical jobs that kept them very fit to a situation where they were largely inactive, and, secondly, because that generation of men saw themselves as providers for their family and lost that role on retirement, a massive psychological blow.

This brings us to the problems of retirement, and there are many. The first and most crucial, of course, is health, which rapidly gets worse as we age. It’s true that this would happen regardless of retirement, but perhaps it is exacerbated by a sedentary lifestyle, often lived alone, and a consequent indulgence in too much comfort food and all-day booze. Adverts targeting the elderly always depict them as couples on park benches amid blossoming spring trees, the men in sensible cardigans and the women in fashions which few retirees would actually wear these days because they look so Woman’s Weekly. In the media in general there is little sign of what I see almost every time I go down to the shops here in Gozo: old people with bent backs wheezing as they climb the hills, stopping to rest every twenty steps, and struggling simply to walk.

The people in the adverts always look comfortably middle-class. But for old people dependent on a state pension, lack of money is a huge problem: believe me, I know, I am one of them. In the worst cases, in a phrase recently popularised by the media, it is a choice between eating and heating. But having money is not just about avoiding the stress of living on the financial edge; good levels of disposable income also offer a range of positive benefits. They mean vacations to warmer places in the winter, trips to the local pub or visits to National Trust buildings, not having to wear old clothes with holes in them, being able to afford a good haircut, a trip to a local restaurant: simple things that give people a sense of purpose and pride.

Another big problem for the elderly retired is social isolation, especially for those whose spouses have passed away after sharing a lifetime together. This makes me think of my mother and what she said to me once on one of my visits home from abroad: ‘Sometimes these walls feel like a prison’. She was talking about the little apartment that the local authority provided, a place she had always spoken of with gratitude and great fondness. During my visits, we’d walk together to the local Sainsbury’s and have a coffee and a piece of cake in the cafe there and she’d tell me how wonderful it felt to get out of doors. How sad is someone’s life when a trip to Sainsbury’s is a special treat?

Work may be a drag at times, and we may dislike some of our colleagues with a vengeance, but it also tends to supply many of our social contacts. So the elderly retired who live alone suddenly go from a life where they meet other people on a daily basis to one where they can go for days when the only people they see are the check-out staff at the local mini-mart and the faces on their TV screens. Snobby people like me sometimes sneer at TV and cite the famous put-downs such as ‘chewing gum for the eyes’ (Frank Lloyd Wright) or ‘the idiot box’ (original citation uncertain), but it often provides the only human company for the elderly and single.

Family helps here, of course. In those adverts I’ve already mentioned, the ageing couple, when they’re not in the park in springtime, are pictured in their living rooms surrounded by rosy-cheeked grandchildren. But in the modern, mobile world, it’s very common for old people and their adult offspring to live in different cities many miles apart, or even in different countries, so visits from grandchildren can be few and far between. And, believe it or not, not all families are harmonious, and, even if the weekly Sunday visit is possible, it can become a bothersome chore which neither party enjoys.

Another problem with all those empty hours is boredom. Even retirees who are comfortably well-off are often unable to do the things they most love doing due to the natural decline of old age. Two of my closest friends are a married couple who are thinking of moving out of the house where they’ve lived happily for many years because they can no longer cope with its demands. This is especially hard for the husband, who has tended a beautiful garden all that time but now finds himself unable to maintain it to the standard he would wish. The retired are often advised to take up or spend more time on a hobby, but lots of hobbies become impractical for the elderly, especially those which require bodily strength and flexibility.

Nor does the pace of change, especially technological change, help the elderly. It’s certainly not true these days that all pensioners are technophobic and I see many people in their dotage who can click on their screens with the dexterity of digital natives. But many cannot, and even relatively simple tasks like internet banking can become a source of anxiety. My sister, eight years older than me, was a perfect example. As far as I know, she never touched a computer keyboard in her life, and her ability with her smartphone was restricted to answering calls and being able to ring three or four listed numbers. Nor could she afford an expensive monthly package, so, in the days before WhatsApp, that ruled out calls abroad. This means many of the elderly do not have the option of modern technology as a source of entertainment or an aid against social isolation. The idiot box must suffice.

It’s not all doom and gloom, though. Many older people do take up new hobbies, have an active social life, and even take part in sports or go to the gym: the media loves to feature stories about grannies who first go parachuting when they are in their eighties. It’s all a matter of attitude, the self-help books declare in their towering wisdom (as long as one has health and money, of course). And I think society is adapting to this new world in which we tend to have more contact with people of our own vintage than we do across generations, with social groups of all types springing up, often based on personal interests. Some pleasures can be free or very cheap, and do much to counter the avalanche of loneliness in a modern world where so many people live alone.

I imagine many younger people reading this won’t feel all that sorry for me and my Baby Boomer compatriots because we were the pampered generation, especially those of us who weren’t born with a silver spoon in our mouths, in an age when governments cared about all of their citizens, not just the rich, and felt some sense of responsibility for helping them to prosper and thrive. And for the moment at least we are able to retire on a state pension, even if it offers only a rather grim day-to-day subsistence, and if we are sick we aren’t just left to die unless we are able to work (although as our governments cosy up to billionaires rather than take care of their people, I suspect this may not last: old people in their thousands gasping their last breath in cardboard boxes on the streets cannot be far away). Bad as this may become for us in the next few years, future generations will have it much worse. We have sold them down the river with a state pension system which is a kind of Ponzi scheme; the day of reckoning will have to arrive at some point, and those left holding its empty promises will look at the IOUs in their hands and realise they are confetti.

Before I wrap up this rather depressing blog, I’d like to make a few comments about my personal situation, if only to make clear that I recognise how lucky I have been. I teach part-time online, but I only have to do this because I want to live abroad and I have to top up my state pension in order to pay the rent. If I had chosen to go back to the UK to retire, I would probably have received some help with rent and council tax, and just about been able to survive, although I suspect my mental health would have been much worse. Future generations, left to their own devices regardless of their health or disability, are going to have to work (if they can find it) until they croak it.

I’m also lucky because of the kind of person I am. I don’t really suffer from the social isolation because I’m introverted and frankly a little anti-social and I’m glad I no longer have to face being among groups of people and struggling to enjoy myself: something which I’ve always found exhausting. Nor have I needed to give up any of my hobbies. As an idler and a coward who is also something of a nerd, I promise you’ll never find me in a gym or see me dangling on the end of a parachute: I can still do most of the things that I love, such as writing, reading and walking. One day, I’m sure, these things will become impossible as my eyesight worsens and I’m no longer mobile, and I dread that day, but until then I don’t have to get out the Pringles and switch on the idiot box.

I realise that I’ve said nothing particularly original in this glum piece, but I don’t want to come across as totally negative about retirement. I’m sure the world is full of retirees who are very happy; however, getting old and no longer working is not always the golden age of freedom that the adverts like to portray. It can be good as long as you are lucky enough to be healthy, have enough money to get comfortably by, can still enjoy your hobbies, and maintain the level of social contacts that you prefer. The bad news is it’s not going to get any easier, and the generations behind me are in a desperate position if they are poor. Stuck in McJobs in a world where renting costs more than buying, unable either to get on the housing ladder or to save for the future, and living under governments which are ever more self-serving, corrupt and authoritarian, they will find themselves deep in the brown stuff when their bodies start to creak and their hair turns grey.

THE SWANSONG FOR CLASSICAL MUSIC?

In the unlikely event of my being invited to a soiree in Islington or Primrose Hill, I could probably hold my own in discussions about poetry or painting with aesthetes whose vowels sound like those of the old Queen in her Christmas message to the nation. But the purists among the chatterati might bar me from their dinner parties because of a huge lacuna in my cultural repertoire: I know almost nothing at all about classical music. And I am far from alone in this, I’m sure. Classical music, very much like poetry, has become something of a niche interest in the contemporary world.

For me, one of the big negatives of classical music is that it reminds me of school assemblies, even after almost sixty years. I know this is unfair, but it’s definitely true. I went to a grammar school in a working-class area and one of the aims of our headmaster was to introduce the oiks to the canon and high culture. So each morning after singing a religious dirge indoctrinating us into Christianity, we had to sit and listen in silence to a piece of classical music. On the whole, of course, this had the opposite effect to that which our headmaster sought: most of us grew to hate it, just as most working-class kids learned to hate Shakespeare and Keats in English Lit. Both were seen as medicine which we were forced to swallow because they were good for us in some way that was never explained. Our unsurprising response was boredom and sullen resentment.

And we had our own music to love instead. This was the mid-60s, when pop was flourishing, rock and soul were taking off, and the music in the charts distinguished us from our poor benighted parents. Unlike the classical pieces in assembly, pop had lyrics that spoke about our daily lives and which we understood: Dead End Street by the Kinks, with its theme of unemployment and poverty; Friday on my Mind by the Easybeats, about people doing a boring job and longing for the weekend when they could go out and have fun by splashing all their wages in a night of pleasure; the far from subtle message of Let’s Spend the Night Together by the Rolling Stones. In contrast, what did we care about 1812 or the Virgin Mary? Our teachers told us how much greater Bach and Mozart and Beethoven were than the substandard trash that we adored, but the trash meant so much more to us, so much more than the symphonies and concertos that the school stuffed down our throats and told us we were just too dull to understand.

On a personal level, another important fact was that there was almost no classical music in my home: the closest we got was something titled The Dream of Olwen by someone named Charles Williams. My father listened to his generation’s popular music, which meant Jim Reeves and Winifred Atwell (groans from me, of course), and I remember printed scores for songs like Little Brown Jug and When the Saints Go Marching In. We had a piano, which was not as surprising as it might seem nowadays, when pianos are largely the preserve of the middle classes for little Zoe and Tristan, but in those days many working-class families had one, even if, like ours, it needed tuning and had a couple of dumb notes (we must remember that there was little entertainment other than the radio when I was very young, and singing around the piano was a common way of passing winter nights, twee as that might sound these days). Despite this, the absence of classical music in my home meant I needed to actively seek it out, and there was a contemporary music all around me which I was much more keen on discovering.

And for those of us who had intellectual pretensions (moi?), there was always jazz. Somewhere between the ages of sixteen and twenty, I discovered Coltrane and Coleman and Davis and Sun Ra, plus more mainstream artists like Holiday and Waller. These guys offered cachet without any association with the tedium of school assemblies. And despite my self-mockery here, I can honestly say that I responded to them more readily and naturally than to the classics, for they seemed much closer to the modern world that I lived in. They were urban and cutting-edge, while classical music, much like the novels of Hardy, seemed to come from a different world, a world where I didn’t belong.

So far this essay has been personal and anecdotal, but I’m sure I was just one of hundreds of thousands who had a very similar musical upbringing. If we multiply my personal experience by those numbers, it is clear why classical music took such a hit, and the roots of this were social, political and economic as much as purely aesthetic. At the aesthetic level, however, classical music had at the same time become ‘difficult’, following the Modernist experiments of Stravinsky, Schoenberg, Webern, Boulez, Stockhausen et al, so, in a similar way to poetry, painting, architecture and ‘literary fiction’, it distanced itself from the masses. A recognisably contemporary situation emerged, where an intellectual class centred on critics and academics, rather than a class defined mainly by upbringing and money, became the setters of artistic standards. An increasing intellectualisation and theorisation of the arts took root, which left most people feeling stranded, unable to enjoy and understand the ‘avant garde’, and totally intimidated by modern developments in Art.

Film (and sometimes TV) has filled the gap this left behind, along with popular music. While it is true that there is an arthouse film circuit which features films that alienate the general public just as much as piles of bricks, there is also a prodigious mainstream film industry which is driven by demand and produces what most of the public want to see. The evenings gathered around the piano have become trips to the local multiplex. Social and technological changes have consolidated this move from print to music to video. When they weren’t being pale and interesting and covering up their chair legs, the Victorians spent their spare time reading; hence the chunky novels, and a love and awareness of poetry far greater than anything that exists nowadays. Classical music had its heyday in the days of radio before TV. But in what is an increasingly visual and technological age, film and recorded popular music have become the dominant art forms of our society.

Some people argue that classical music is superior to popular forms because it is more complex. I don’t know enough about music to give a confident response to this idea, but I can talk a bit about poetry, where greater complexity does not necessarily mean greater art in my opinion, with Blake’s Innocence and Experience as the perfect example. Another common idea is that classical music requires superior technical ability, so Maria Callas, for example, is necessarily a greater artist than Billie Holiday. But technical ability is only one part of what makes any work of art great: sincerity matters as much or even more. Art is also not just the output of gifted, creative individuals: it is an expression of a society and a culture and cannot be separated from this. It generally springs up from the grass roots, as blues, jazz and a lot of 60s pop did.

Complexity is often then added to these grass-roots art forms as they are influenced by other cultures, both domestic and foreign, as, for example, when Japanese prints began to arrive in Europe in the 19th century and inspired the Impressionists and Post-Impressionists, or indeed when pop itself began to borrow from the classics and from jazz in the late 60s and 70s. The increase in complexity that results from these infusions may lead to a concomitant gain in subtlety and skill, but often something of value gets lost at the same time, as arguably happened with blues and jazz as they evolved and spread beyond their immediate social background and took on the mantle of ‘serious’ musical genres.

I also mistrust the tendency to judge the value of a work of art by its scope or ambition. The simplicity of pop, the fact that in its purest form it is crystallised into a mere three or four minutes, can be a virtue as well as a limitation, and, just as a haiku can require as much artistry as an epic poem, a short piano piece by Satie can speak to us as much as a symphony that lasts longer than an hour. In Art, for me at least, less is often more, and this frequent valorisation of intent over final result owes much to the feeling I’ve already mentioned, that Art is somehow good for us, a spoonful of linctus that is morally and intellectually uplifting.

The passage of Time often does much to add to a work’s cultural reputation, of course. Just as Shakespeare’s plays have gone from mass entertainment in his own day to Culture about which we whisper in hushed tones, it is possible that contemporary pop may one day be much more secure in its ultimate standing than it is at the moment, as it gains the patina of age and is transformed into high culture. Alternatively, the contemporary classical pieces which are largely ignored by the general public may become recognised for the great art they are, as the famous pop and rock bands recede into the background and become little more than footnotes in musical history. Who knows?

In short, none of us can have any sure knowledge of how the music of our age will be regarded in the future. Since I get annoyed when people who think they are being smart declare that painting is dead in an age of installation and computer art, it would be hypocritical of me to say the same about classical music, especially since I have so little knowledge of it. I would reiterate, though, that the pop of the 1960s spoke to me and my peers in a language that we recognised and cherished, and, even if it ends up with a reputation closer to that of Victorian melodrama than that of Bach or Mozart, it was a genuine reflection of a time and a place, and an entire generation for whom it provided an authentic voice. I’m not sure that what we label classical music can do that any more.

LOVE IS IN THE AIR

It was Valentine’s Day this Friday, so what else could I write about this week other than love? The card shop windows were glittering confections of impassioned red and pink, the restaurants were offering special love menus for dating couples and placing a rose in a vase on every table, the champagne flowed (or at least the Prosecco), the florists and chocolatiers had grins that stretched from ear to ear, and I’m sure someone somewhere got out their Barry White collection, crooned joyously along, and dreamed of running their fingers through the undergrowth of his chest.

It’s easy to mock it all, and to say that the greatest love of all is that of big business for money. But big business is only snatching the opportunity to fill a real need that most people have to be loved and to love in return. Perhaps those of us who had parents who loved us unconditionally (which is by no means guaranteed) are trying to recapture the feeling of security this engendered, while those who never knew that kind of parental love are desperately searching for something to fill a lack that they’ve felt for the whole of their lives. And sadly for some, I think, this lack of early love has scarred them for life, so that they are unable to feel worthy of love and tragically make sure they never find it, even if their trashing of their relationships is subconscious. Having loving parents is a gift we take for granted if we were lucky enough to enjoy it.

But Valentine’s Day isn’t about parental love or any of the other kinds of love which the ancient Greeks had many different words for, unlike the impoverished English language which lumps them all together under a single moniker. We’re talking Romantic love with a capital R, the subject of thousands of saccharine songs which send us messages we can’t help but buy into even if we know we’re being played for fools. I’m not saying that finding a love that lasts is impossible: I’m well aware that it’s not, since I have friends who have been together for fifty years and still show all the signs of profound and lasting devotion. It’s just not romantic love anymore. It’s something deeper.

So is Valentine’s Day just a bit of harmless fun or does it send out messages that make many of us unhappy? The studs and the belles love it, of course, as they line up the cards they receive and display them proudly on Instagram, but what about the nine-stone weaklings and the plain Janes whose mantelpieces are notably empty (if mantelpieces still exist)? As we get older, it’s easy to forget just how desperate most of us are for affirmation from our peers when we are teenagers and how rejection can cut to the quick at that age. In some ways little has changed since I was in my teens, but in other ways contemporary social media accentuate the pain of rejection by making the suffering so public.   

My favourite whipping boys, the evolutionary psychologists, will state that things like Valentine’s Day and romantic love in general are just nature’s way of making sure we procreate and then stay together afterwards until we’ve brought up our children in a way that is essential for a primate social species. It’s hard to argue with this, or with the second point at least, although if all that matters is sufficient procreation to replicate the genes, sexual desire alone is surely enough without any need for a bouquet of red roses and a box of Ferrero Rocher. And there are also cultural roots to western notions of romantic love which are specific to the European tradition and quite unlike the way desire was integrated into other societies before their brush with Europe: medieval chivalric romance, the troubadours who sang about it, the plaintive ballads, Petrarch’s sonnets.

One fascinating aspect of romantic love is its association with tragedy. Everyone knows that Romeo and Juliet didn’t end up in a cottage in the countryside with roses above the door, and, although much less implanted in the public consciousness, poor old Abelard truly suffered for his passion for Héloïse. All of this can of course be seen as mere literary conceit, but the core has survived the ages remarkably well. Leap forward not that much short of a millennium, and there are songs from 1960s girl groups like the Shangri-Las with their stories which inevitably ended in tears: boys who couldn’t wait to crash their motorbikes or cars, while their helpless girlfriends looked on and clutched their Kleenex. Doomed love, secret sex, and gruesome death seem entwined in the western consciousness.

This is a very teenage thing, though: once we hit the middle-aged market and the books that, as a baby boomer, I still label Mills and Boon, romantic tales must needs end happily. In their classic form, the bad boys became doctors and lawyers and the teenage female protagonists found work as nurses and secretaries, and marriage was de rigueur on the final page. It won’t surprise you that I don’t read a lot of this stuff, but I’m told that the content can be quite spicy nowadays as these books catch up with contemporary sexual mores. But I fear this faux sophistication and somewhat tame carnality will detract from their campy content. Call me old-fashioned, but I want doctors who are strong-jawed and nurses who have never been kissed.

In the second wave of feminism, these books attracted a lot of criticism for selling girls and women the fantasy of romantic love and acting as pernicious agents of patriarchy. And although I’m a guy, I can testify to the power of these illusions of everlasting love, for as an adolescent I spent a lot of time dreaming about how some day he’d come along, the man I love. Looking back, though, there was much that was condescending about our theorising in the 1970s, as we congratulated ourselves on being the enlightened ones who had seen through the false consciousness, and we put down women who read Mills and Boon as brain-dead dopes and dupes. There was a reaction against this not long after, as academics in cultural studies departments, heavily influenced by postmodernism, argued that girls and women were using these books as tools, fully aware of what they were doing, extracting what they wanted from them, an escape from everyday reality, and never really buying into the dream. The truth, I suspect, lies somewhere in the middle.

Men, of course, at least those who like to think of themselves as real men, pretend they are above this kind of thing and that they only join in the game because their girlfriends will make life unbearable if they don’t. Don’t believe a word of it in most cases. Yes, men might be disdainful of the chocolates and the dozen red roses, but they are every bit as desperate to become the people society tells them they should become: those public statements of gender normality, husbands and fathers. While it’s true that, much to the consternation of traditionalists, a surf through social media suggests that more men these days are beginning to reject these roles, the pressure to get married is still very strong, and once we leave the affluent west seems almost as obligatory as ever.

The same traditionalists, though, often seem conflicted about Valentine’s Day. On the one hand, as opponents of everything ‘woke’, they intensely dislike what they call the Dianafication of society, where everyone wears their heart on their sleeve and blubbers out their inner feelings at the drop of a hat (an opinion that I have to admit I largely agree with, although this will weaken my woke credentials). So they ought to hate the roses and the overblown sentimentality. On the other hand, Valentine’s Day glorifies the gender roles which traditionalists believe are natural and inevitable and therefore transcend culture, and encourages the me-Tarzan, you-Jane world to which they hope we will one day return, once we’ve found a way to convert all the pooftahs and trannies and given all the newly fey males a booster dose of testosterone. Kinder, Küche, Kirche, but on one day in the year the little lady gets a pat on the head and a box of chocolates.

What I definitely think is regrettable is the guilt trip that comes with Valentine’s Day (and other days which are opportunistic commercial inventions, such as Mother’s and Father’s Day). There is an ineluctable pressure nowadays to buy cards and presents on these days as a measure of our depth of feeling, and the more elaborate and expensive the gift, the deeper our love is purported to be. As in almost every sphere of life, consumer capitalism has sunk its hooks not only into our skin but also our deepest desires, in a society where the ultimate pay-off comes not to consumers but corporations. As Tina sang, possibly as she remembered her days with Ike, ‘What’s love got to do with it?’

But in the end, despite the frenzy of contemporary marketing, as I said near the start of this piece, I have to question how much has really changed since I was a teen all those centuries ago. The skill with which adverts hit their spot may have been honed to the nth degree, but the need to be loved and to love is more or less the same. In some ways, perhaps, things have got worse. The pity that we used to show to women who were ‘left on the shelf’ has morphed into the hurtful, competitive shaming of a girl with a tiny roll of fat in her Instagram pic. As with every game, there are winners and losers, and I feel sorry for young people who haven’t yet learned how to shrug their shoulders and take all the sweetness with a massive pinch of salt.

ART, PUBLIC IMAGE, & PRIVATE SELF

Sometimes I worry that my poetry and my essays are a bit like prog rock: earnest, elaborate, portentous, too deliberate. Art walks a tightrope between banality and pretension, and the sweet spot – the wafer-thin line that we must tread or else we tumble – is devilishly hard to find.

It’s a very English (British?) thing, I think: this dread of taking oneself too seriously and of trying a bit too hard. Whether we’re talking high culture (Wilde) or popular entertainment (Peter Cook), there’s this wish to be seen as a gifted amateur knocking off works of art on rainy afternoons when there’s nothing better to do, or coming out with stunning ad libs and off-the-cuff bon mots. Making any kind of effort is deemed rather vulgar, something that the middle classes might do.

This is tied up, I suspect, with the fear of embarrassment which stalks the English soul much more than any grim reaper. I remember a work colleague of mine who turned up one day with a grotesquely swollen face from some kind of dental problem. The first thing she did was apologise profusely, as if she was somehow to blame for the bacteria: however much pain she was in, her embarrassment hurt much worse. (She was well-bred and Scottish, so this trait seems to cut across class and maybe isn’t restricted to the English after all.)

I’m far from immune. When I talk about my writing, I’m wary of taking myself too seriously lest I sound ridiculously self-important, someone whose friends indulge him in his fantasies of being a serious writer but probably think he should give it all up and become a fully-fledged wino instead. An author who is competent at best, but no more than that. And in many ways in England, being a competent writer is worse than being a bad one because nothing is quite as shameful as trying very hard and achieving mediocrity: better to go the whole hog and assume the mantle of the modern McGonagall.

At school I took pride in the fact that I could pass exams easily while doing almost no work and was part of a tiny smartass clique who looked down our noses at anyone who studied hard, fellow pupils we dismissed as ‘swots’. (I was a horrible adolescent – my only defence is that I was pretty screwed up at the time.) This shows that upper-class disdain for those who put in an effort had trickled down, unlike their money, to the great unwashed. And even now, I sense that when I judge artworks, I have a largely unconscious preference for the quirky and the offhand: compare my high regard for Pink Floyd’s Piper at the Gates of Dawn, with the kooky, childlike, almost slapdash poetry of Barrett at its heart, with what I consider to be the dreary worthiness of the albums that followed once he was no longer the driving force of the band.

All of which is a long-winded introduction to a piece about the two different selves that all artists who exhibit or publish their work must negotiate, regardless of their success or failure: the public and the private. Mostly this is out of our hands if we are lucky enough to break out from obscurity: the myths begin to be spun around the work and especially the private life. Artists then face the same decision as any public figure, be that president, thinker or athlete: whether to shrug their shoulders and not give a damn about their public image, or to try to forge and take control of this persona.

If they do the latter, artists have little choice but to go with the grain. There has to be some kind of consonance between the work and the human being for this persona to be in any way credible. It’s hard to imagine Plath, for instance, downing a few beers in the pub and having a good laugh with her mates. Her image has to be tragic and long-suffering like her poetry, although there is a tough streak in her work that belies this persona in my opinion. Nor can we imagine a fey Jackson Pollock lounging around in a smoking jacket with a languorous look on his face as he sucks on a cigarette holder like some latter-day Somerset Maugham, or Warhol decked up in masculine drag like one of his pictures of Presley.

No one seemed to get more pleasure out of performing his public self than Salvador Dalí, with his waxed moustache and his diving bell that almost killed him. He self-consciously created himself as a joker, or even a charlatan, but his manufactured weirdness failed to impress Freud, who found his work too contrived to be the genuine fruit of the unconscious. Dalí was part of a modern trend where the artist becomes the artwork, which we can see in embryo in fin-de-siècle figures like Whistler and Wilde. At first, though, these flâneurs were far from lacking in technical ability – Dalí, for example, is generally recognised as a superb draughtsman. By the time of Gilbert and George or Tracey Emin, in contrast, technical skill had become an optional extra. The obligatory ability in the contemporary art world is a gift for self-publicity, to make the most of a new reality in which we have lost all confidence in a defining aesthetic and anyone can go viral and become a cause célèbre.

Personally, I have mixed feelings about this. It adds to the gaiety of nations, for sure, and does a lot to prick the pretensions of the artistic elite. But in another way it has become a method of adding to those pretensions in a world where name-dropping the correct intellectuals (ideally French) can conceal an emptiness of thinking and an art that can talk the talk but often fails miserably to walk the walk. Ultimately, Art is pretty simple, or should be, and the poem or the painting or the music should speak for itself. My saying this, however, betrays a certain naivety on my part, a belief that there can be a world in which a purity of response to a work of art is possible, free of any distortions that arise from social class and pretensions and expectations. I have serious doubts that it is possible to evaluate a work of art with genuine disinterest.

And almost all of us love a good story: Rimbaud and Verlaine, Plath and Hughes, Orton and Halliwell, Kahlo and Rivera, the Bloomsbury Set – these add a sense of ordinary humanity to what might otherwise feel like daunting and obscure aesthetic concepts. Narrative will trump theory nearly every time. Also, few of us have the leisure or the inclination to delve really deeply into the work of an individual artist, so these thumbnail personae have to do the heavy lifting. The stereotypes may be shallow but they serve a useful function.

Then there’s basic economics. Artists have to eat (or at least be able to pay for their absinthe). They may no longer need to suck up to the church or to rich patrons or to an intellectual cognoscenti but in the modern world they do have to know how to work the media and the marketplace. I don’t want to sound too cynical here but it’s hard not to conclude that the most famous painters of the last fifty years or even longer have been those who were dab hands at self-publicity. Perhaps talent, like murder, will eventually out, but a dash of notoriety can make a wonderful lubricant in the meantime.

So what exactly is the relationship between the private person and his/her public art? In many ways, a poem or a song or a painting, or sometimes even an entire oeuvre, can capture only a tiny part of the complexity of a human being. Someone reading my poems, for example, might imagine me a miserable neurotic who spends his days fighting off the black dog of depression, for I tend to write when I am down. When I’m up, I’m chatting online or reading a good book or scoffing food in a restaurant by the sea. Does that make me fake? I hope not, because the mood I’m in when I write feels authentic enough; I’m honestly not trying to come across as some throwback to the Romantics, sharing my existential angst with the world.  But it does mean there’s a gap between my public poetry and my private self.

We live in a modern space in which there remains very little art created by a society as a whole, where everyone becomes a performer as, for example, in ‘tribal’ music and dance. In the west, there were once elements of this communal artistry in things like the mystery plays, where an entire town took part in the making of the work and acted as both creators and spectators. Even today, remnants of this collective invention remain in the flamboyant excesses of the modern carnival or artforms like gamelan orchestras. But in general we now rely on a special caste of people called artists who create work for the mass of passive spectators, and an inevitable consequence of this is the rise of the individual artistic persona.

In a sense this persona is merely a more complex version of what we all do in everyday life: present a self to the world. This self is partly under our control, but is always open to differing, and sometimes even perverse, interpretations. And once our self-image spins beyond the people we meet face to face and branches out into the abstract realm of the public domain, it can metamorphose into something we can no longer hope to control. Ultimately, even in a case like my own, where my obscurity limits this public persona to friends and acquaintances who read my work, the fact that I choose to self-publish it means that I have no right to complain about what I feel are distortions in its reception. Like most poets and writers, I feel the need for an audience to complete what I create and in the process both the poem and the persona take on a life of their own.

In the end, though, I believe we can only write or paint or dance in a way that goes with the flow of whatever talent we possess and our basic disposition. The public image may be skewed in the sense that our art only reflects a part of who we are, as in my personal tendency towards gloom, but it cannot be skewed too far. With some artists, this manipulation of the image is conscious and deliberate; with others, it is merely an unavoidable consequence of going public. In a modern, consumerist world where we all become brands to some extent, building a distinct public persona may make for a better career and more money and renown, but I hope that over time evaluation genuinely becomes disinterested and that the bubbles of reputation built on self-promotion will burst, while the best work will earn the reward that it deserves. But then again, perhaps, this is me being hopelessly naive.

MY FIRST TIME

Don’t worry, I’m not about to go all icky and tell you how I lost my virginity – not that sort, anyway. I’m going to describe my first encounters with various works of art, the ones that are imprinted on my memory in the sense that I remember where I was and exactly how I felt at the time. I apologise for what is a rather self-indulgent piece this week as I wallow in nostalgia.

A few weeks ago in another essay, I described when I first heard The Marble Index by Nico. I was in what we called ‘the middle room’ of my childhood home where I used to listen to John Peel’s radio programme and suddenly this music came on:  distorted bells, the drone of a harmonium, and finally the Gothic gloom of Nico’s voice. It chilled me to the bone, and I can still remember the icy streak that shivered down my spine as I listened. It felt like I’d found the perfect music for my teenage angst (I recognise that word sounds pretentious, but I don’t know what else to call it: I was scared a lot of the time in my mid to late teens and early twenties.)

My first meeting with Captain Beefheart was the Strictly Personal album and the track called Ah Feel like Ahcid. I was sitting in the front room of my next-door neighbour and schoolfriend, and his elder brother put the album on. I knew nothing at the time about Delta or Chicago blues or Howlin’ Wolf and I had never heard music remotely like this before. On this occasion, my strongest emotion wasn’t chilly fear, but a kind of shock, a very pleasant shock. I know that people, including Beefheart himself, criticise the album for the hippie-trippie bits which were added to make it sound more ‘weird’ (as I also do in retrospect), but I fell in love with the album immediately, partly, I have to admit, because of those awful hippie-trippie bits, and it was the start of a lifetime of loving Beefheart.

I often read people (for example, Matt Groening of The Simpsons fame) say that it took them many listenings to like Trout Mask Replica, but I fell in love with it at once, although I can’t remember when or where I first heard it (perhaps at the house of another friend – sharing musical finds was a huge part of our social life in those days). I misunderstood it completely, believing it to be improvised by a group of musicians who couldn’t play their instruments very well, something that now embarrasses me deeply, but the love was immediate. Even if I didn’t ‘get it’ intellectually or technically, I got it on another level.

Although I can’t remember the first time I heard Trout Mask, I can recall the first time I listened to Beefheart’s next two albums: Lick My Decals Off, Baby and The Spotlight Kid. Both happened in a record shop in Dudley where you could listen to an album before you bought it, which must be about the only time in history where Dudley has been anywhere remotely near the cultural cutting-edge. I didn’t take immediately to Decals in the way that I had to Trout Mask (although it was soon to become my favourite Beefheart album, and still is), and I absolutely hated The Spotlight Kid. I can remember shouting out in the shop, my earphones still on, ‘He’s sold out! Beefheart’s sold out!’ God, I was embarrassing at times.

It wasn’t only ‘avant-garde’ stuff that I remember, though: I recall my first hearing of Wilson Pickett’s Land of 1000 Dances very clearly. I was on the back seat of a coach (although we called it a ‘charabang’) on its way to Blackpool for my family’s annual holiday and the song came on the driver’s radio. Again my love was instant. It just sounded so exciting and I couldn’t wait to find out what this song was. I also remember the first time I heard Reach Out, I’ll Be There by the Four Tops. I was back in our middle room, this time listening to the Sunday programme which played the Top Twenty (and later, I think, the Top Thirty). The intro to the song knocked me out, and I still think that somehow, and I can’t begin to explain why, it shows instant class, something that demands that you listen to the rest of the song.

OK, enough of music. Let’s turn to painting.

Almost the only time I can definitely remember first seeing a specific painting was Richard Hamilton’s collage, Just what is it that makes today’s homes so different, so appealing?. This was on the front cover of a book about Pop Art that I found in my little local library (at least, that is how I remember it – it is possible that it was inside a book about post-war painting because I also seem to remember a Jackson Pollock in the same book, which I didn’t like at all). I suspect that my memory here has more to do with carnality than with Art. It is hard for young people in this age of mass porn to imagine, but at that time any public show of male flesh was exceedingly rare, so, for a gay teenager like me, this was about as good as it got. Anyway, the male flesh led to my borrowing a book which spelled out that Art didn’t have to be boring religious pictures from hundreds of years ago which I felt no connection with at all: it could be about now, just as music didn’t have to be Mozart and Beethoven and the stuff that my school tried so desperately to make us appreciate.

After the previous paragraph, I recognise that I need to up my cultural capital, so I want to add that the only other painting I specifically remember seeing for the first time is The Potato Eaters. When I first went to the Van Gogh Museum in Amsterdam, I was aware of only the most famous paintings by Van Gogh such as the various Sunflowers, The Starry Night, The Night Café, Wheatfield with Crows, and so on. At the time, the exhibition was broadly chronological, so one of the first things I saw on entering was a painting which felt totally unlike a Van Gogh work to my untutored mind, The Potato Eaters, plus several other paintings from his time in Nuenen. I liked it at once, though, and I feel that I have never learned so much in one museum trip as I did during that visit (at the time there were not the crowds with cellphones that often make visiting famous galleries such a joyless chore nowadays).

One movie I remember seeing for the first time is Eraserhead by David Lynch, who passed away this week. To be more precise, I don’t remember sitting in the cinema and watching the film, but I do recall the journey home with my friend. It felt as if the bus was taking us through the twisted, surreal urban landscape of the film and we had entered its nightmare world. It wasn’t a comfortable feeling at all and we both felt it. Other than Eraserhead, I have many vivid memories from films, an artform which generally seems to implant its visions in the brain, probably because of their oneiric quality. One in particular I remember was watching Hitchcock’s The Birds on TV and then going to the pub across the road to buy some cigarettes and seeing the birds gathered on the telephone wires and feeling very scared indeed. The ship slowly drifting into the port in Murnau’s Nosferatu, followed by the mass exodus of rats, the visual arrival of evil, is another scene I will never forget.

I’d like to end my reminiscing now and talk more generally about our immediate reaction to works of art. There are artists with whom we feel an immediate connection: in my case, these would include Munch, De Chirico, Magritte, Ensor, Nolde, Bosch, Hopper, Rothko, Caravaggio, the Goya of the Black Paintings. Although I don’t remember the first time I saw the work of any of these, I do know that my liking for them was instant. Something clicked. I would contrast that with painters who I didn’t like that much at first but learned to appreciate: Degas, Velázquez, Manet, Matisse, Rubens, Pollock. But no matter how much I now admire this latter group, I don’t think there will ever be the effortless connection that I felt, and still feel, with the former group. Nor should there be in my opinion. It’s good to learn about Art with a capital A, but when that destroys our instinctive preferences, I think something essential has been lost.

I also wonder why I have very little memory of the first time I read poems or works of fiction. Some of the plays of Strindberg and Genet had a strong effect on me when I first read them, as did fiction like The Catcher in the Rye and Siddhartha, but I don’t remember myself in the process of actually doing the reading (although I do remember sitting on a bus and reading The Pit and the Pendulum for the first time and being totally engrossed in the horror). I also remember reading Aesop’s fables as a child, in a set of blue encyclopedias in my home, and specifically a drawing of the fox who says that the grapes are sour. But other than that, I generally don’t remember my first reading of poems or novels. I think one possible reason why I don’t store literature in my memory in the same way as music or painting or film is that I am a writer myself and therefore I never quite switch off my critical faculty and lose myself in the work in the way I do when I approach artforms which I could never in a million years aim to emulate.

There may be something intrinsic to specific artforms that tends to make them more or less memorable. On one level, painting hits us in the face. We see the whole work at once before we begin to look at the details and start to take the work apart in our mind. There is a moment of instant reaction that we don’t get when we read literature as a process taking place in time. Despite also being temporal, music is different from literature in the sense that it surrounds us in a way that writing doesn’t and has a kind of immediacy and power which the latter can sometimes lack, as well as the visceral quality of sounds, especially drums and bass, reverberating through our body. A sense of occasion may also be important; there is something special about going to a concert or to a gallery which is not replicated by sitting quietly at home reading a book. Thus, a visit to the theatre is likely to be remembered more than reading the same text in an armchair and a movie may be more memorable in the cinema than when we watch it on a DVD.  Lulled by the darkness and the presence of an audience, we enter the world of a movie or a stage show. I know this entrance into an imaginary world is also true of reading in the sense that the story unfolds in our mind as we read – like me reading Poe on the bus – but we are more in control of the process and perhaps this makes our reaction less spontaneous.

In a way, I feel rather sad that we can never recapture that first moment of meeting a work of art that we love and which we sometimes become so habituated to that its shine begins to fade. I guess this is balanced by the gain of growing familiar with works of art that didn’t speak to us so instantly and elementally, and broadening aesthetically as we learn. But I personally find there are limits to this process. I don’t know if I’m typical, but my deeper loves rarely change all that much, and I will go to my grave loving Beefheart, Strindberg and Munch.  

RESEARCHING THE PARANORMAL

My Sun sign is Cancer, I have Moon in Aquarius, and Scorpio rising. How do I know this? Because as a late teenager and a young man in my early 20s, I became obsessed with astrology, and I can still remember where all of the other planets are in my birth chart even though it is at least forty years since I’ve looked at it. At the time I also read Tarot cards and used the I Ching, plus I was interested in lots of other paranormal phenomena such as ESP, telepathy and clairvoyance. Then I gradually lost interest as the years went by and my opinion swung towards scepticism about these things, apart from the I Ching, for which my admiration continued to grow, and which I still regularly read and consult. These days, though, I use it more as a guide to living than as an oracle.

I think it’s fair to say that this younger version of me wanted the paranormal to be real. This was part of who I was, and who I still am as a person: I have always had poetic leanings and a deep need to be creative and I suspect that anyone who writes poetry or creates artworks must have some kind of yearning to step outside the monochrome everyday world at times and enter the technicolour realm of Oz. Also, late adolescence was a very difficult time for me as I struggled to adapt to adulthood, and I found in astrology and related beliefs some kind of gentle nudge towards the person I might with good fortune become as an adult. Fifty years on, I am much less certain about the paranormal. I imagine sceptics would shake their heads at the gullible youngster I used to be, but be happy to hear that I’ve seen the folly of my ways and given up this other-worldly nonsense and replaced it with good old common-sense materialism. I’d argue, however, that committed physicalists have an impulse similar to mine, but in the opposite direction: their psychology requires that material reality is the only reality there is. I suspect they see themselves as paragons of logic and rationality in a world of deluded fantasists, but like all of us they have their psychological reasons for what they choose to believe. Despite what the economists say, there are limits to the rational potential of human beings.

However, sceptics basically talk sense when they argue that, from a logical perspective, it is hard not to be dismissive of claims for the paranormal. Take astrology, for instance: how can Neptune from all its millions of miles away possibly influence what is happening here on Earth? This would break all known rules of physics and everyday experience. I know some advocates of the paranormal like to point to quantum mechanics and say this has been a game changer, and that theories such as the uncertainty principle and phenomena like quantum entanglement render things like astrology more credible. But these advocates are usually not scientists and they are drawing on things that they don’t really understand as a convenient way of justifying their beliefs.

One of the biggest problems with claims of the paranormal such as telepathy and clairvoyance is that most of the evidence is anecdotal and therefore cannot satisfy the exacting requirements of science. For example, there are many stories of twins separated by thousands of miles suddenly knowing that one of them is in mortal danger: one twin has a heart attack and the other feels a terrible pain in the chest at exactly the same moment. The first obvious explanation is deception or hoax, although I personally feel that there are too many tales like this for them all to be the work of liars. But even if the people involved are sincere, they may have slowly added details to the story over the years or unconsciously embellished it to make the timing more exact. Ultimately, these anecdotal claims offer nothing concrete we can point to as clear proof that they are true. Another common anecdote is someone saying they dreamed of an air disaster only to wake up to headlines about a plane crash. But the obvious rejoinder is that every night billions of people dream, some of them will dream about a plane crash, and occasionally a plane will crash. It might look spooky, but it seems reasonable to conclude that the most likely explanation is probability and/or coincidence.
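For the statistically minded, that rejoinder can be put into rough numbers. Every figure in the little sketch below is an assumption I have invented purely for illustration, not real data about dreams or air accidents:

```python
# Back-of-the-envelope arithmetic for the 'prophetic dream' argument.
# Every number here is an invented assumption, for illustration only.

dreamers_per_night = 4_000_000_000   # people assumed to dream on any given night
p_crash_dream = 1 / 10_000           # assumed chance one of them dreams of a plane crash

# Expected crash dreams on any single night, purely by chance:
crash_dreams_per_night = dreamers_per_night * p_crash_dream

# If a crash makes the headlines on, say, 20 mornings a year (another assumption),
# this is how many 'premonitions' chance alone manufactures annually:
headline_days_per_year = 20
prophetic_dreams_per_year = crash_dreams_per_night * headline_days_per_year

print(f"Crash dreams per night by chance: {crash_dreams_per_night:,.0f}")        # 400,000
print(f"Apparently prophetic dreams per year: {prophetic_dreams_per_year:,.0f}")  # 8,000,000
```

Even with these deliberately cautious guesses, chance churns out hundreds of thousands of crash dreams every single night, so some of them are bound to land on the eve of a real disaster.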

Because of these difficulties, attempts have been made to put the study of the paranormal on a more scientific basis through laboratory experiments. One of the most famous was the series conducted by J. B. Rhine at Duke University in North Carolina, which used Zener cards (cards each printed with one of five simple symbols) to test for telepathy and clairvoyance. Rhine originally published what seemed like amazing results, apparently proving that some individuals really did have an uncanny ability to predict what the next unseen card would be. His results were never replicated, though, and modern opinion rejects them on the grounds of sloppy methodology and poor use of statistics, plus a naive trust in his subjects which didn’t make sufficient allowance for the likelihood of cheating. Other famous research, this time in astrological circles, was the work of Michel Gauquelin, a French psychologist, who seemed to show that there was a correlation between the position of planets in a person’s horoscope and their likelihood of success in a specific career. Again, though, his results were not replicated by other researchers, and methodological and statistical weaknesses were again advanced to discount them.
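As an aside on the statistics: a standard Zener run deals 25 cards drawn from five symbols, so pure guessing should score about five hits. The sketch below is a simplified binomial model of my own, not a reconstruction of Rhine’s actual methodology, but it shows why the occasional high-scoring run is less impressive than it looks:

```python
# What pure chance predicts for a Zener-card run: 25 guesses, 5 symbols,
# so each guess has a 1-in-5 chance of being right. A simplified binomial
# model for illustration, not Rhine's actual analysis.
from math import comb

n, p = 25, 0.2
expected_hits = n * p  # 5.0 hits expected from guessing alone

def prob_at_least(n, p, k):
    """P(X >= k) for X ~ Binomial(n, p): the chance of k or more lucky hits."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(f"Expected hits by luck alone: {expected_hits}")
print(f"Chance of 10+ hits in one run: {prob_at_least(n, p, 10):.4f}")  # roughly 0.017
```

A 1-in-60 fluke is rare in a single run, but across the thousands of runs Rhine’s subjects sat through, such scores are bound to turn up by chance now and then, which is partly why his statistics came under fire.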

One problem that I personally have with many of the claims for the paranormal is that they are often so trivial and banal. Uri Geller, who was famous when I was young for bending forks and spoons, was a perfect example of this. He was claiming special psychic powers that broke all the known laws of physics, but was he using these special powers to cure cancer or eliminate world hunger? No, he was bending cutlery. So people like me who want to keep an open mind about the paranormal have to admit that the field is full of charlatans and fakes, whether they be mediums conjuring up ectoplasm while communicating with the dead, or trained magicians using their dark arts of distraction and their sleight of hand to fool us. It is very telling that these people can never reproduce their magic powers when a working magician such as James Randi is watching (a failure they always put down to the negative vibrations caused by sceptical onlookers). It all reminds me of the old cartoon of a notice on the door of a psychic consultant: ‘Closed due to an unforeseen emergency’.

Another problem I have is that claims for the paranormal normally come with a load of baggage about religion or ‘spirituality’. We are never sold a mere phenomenon – for example, the ability of one mind to ‘read’ another without any physical contact – but an emotional package that seeks to provide solace from a life that is often hard to bear, with promises of a better world somewhere out there in the ether. But why should the paranormal have to be supernatural? Why can’t it be nature that we don’t yet understand, as has happened throughout human history, where trial and error made us able to use natural forces that we couldn’t begin to explain? Also, the label ‘paranormal’ bundles together a range of disparate phenomena which might be better studied and tested individually. Avid believers in the paranormal tend to believe anything which can’t be easily explained, whether it be skies that rain frogs, ghosts that haunt buildings, spiritualism, cartomancy, numerology, or a host of other mysteries that we can’t fathom or things that we can’t even prove exist. But these should be studied as discrete phenomena, not as a bag of spooky conundrums. The fact that there are almost certainly occasions on which it does rain frogs should have no bearing at all on the possibility of telepathy or the ability to predict the future by reading palms.

So far I have sounded like a total sceptic, and any new atheist reading this would probably nod their head in agreement. However, I want to go on to look at some of the problems with what I have just written. There is something rather facile and convenient about immediately grasping at obvious reasons to prove that claims for the paranormal are false. Just because something might be a hoax does not prove it is a hoax. Just because people who claim to have been abducted by aliens might be the victims of hallucination does not necessarily make them so. There is a QED quality to many of the common refutations of paranormal claims, plus often a thinly disguised contempt towards ‘ordinary people’ and the value of the evidence they offer. A good example of this is when sceptics dismiss the accounts of local people who claim to have sighted a creature in Loch Ness by suggesting that they have seen a floating branch, as if these yokels who walk past the loch on a regular basis are incapable of recognising a log on the water when they see one. Broadening out to more general considerations about the evidence for the paranormal: freakish coincidences will clearly happen, as will events with a probability of millions to one, but the unthinking, automatic assumption that anything which breaks the rules of a materialist universe must of necessity be faulty or due to pure chance rests on a personal metaphysics and not, despite the confidence of physicalists, on a proven, built-in premise of existence.

Similarly, it seems unfair to demand that the paranormal pass laboratory tests which are not suited to its exploration. For example, if telepathy happens, the anecdotal evidence that exists suggests that this often occurs in moments of great emotion or stress. Nothing could be farther from that than sitting in a lab going through packs and packs of Zener cards in a process that has no urgency at all for the person being tested. Meanwhile, science itself has a range of methodologies; a cosmologist or a zoologist has very little use for the kind of experiments in a lab that are often demanded of the parapsychologist, and yet no one is suggesting that their disciplines are not valid sciences. Cosmologists and zoologists are just two of the many types of scientists who have to use a radically different methodology from the hypothetico-deductive model based on experiment. Why should the study of parapsychology not be granted a similar leeway?

So in my opinion any scientist who says that paranormal phenomena cannot be real simply because they are impossible is indulging in tautology and, I would even suggest, being unscientific, since their view is based on an underlying metaphysics which they are unwilling to open up to challenge. Scepticism should work both ways: things like hoaxes, hallucinations, coincidence, and distributions of probability should not be a get-out-of-jail card automatically produced to trash claims that physicalists deem to be impossible simply because that’s the way reality is. I accept that there are some things of which we can perhaps be certain – for example, the principle of non-contradiction – (although even this might be uncertain in a universe that is fundamentally random), but I would counter that disagreements about the paranormal should be empirical rather than ideological in nature. But once we enter the realm of metaphysics, from the point of view of the non-scientist, or perhaps I should more accurately say the non-physicist, why should I believe, for example, that the concept of a multiverse is any more credible than that of a creative God since I have no proof of either? In my opinion, at this level we can only speculate and our human ability to reason breaks down as inadequate, perhaps forever, but certainly given our current state of knowledge.

Ultimately, and somewhat ironically, therefore, I find myself largely in agreement with those scientists who argue that it is not worth spending precious time, money and resources on studying the paranormal, but for very different reasons. They consider it a waste of finite resources because the paranormal is almost certainly hokum, while I arrive at this point from a belief that the ‘scientific method’ is not capable of this task and cannot give us any useful answers. I know this sounds defeatist but I can’t see any way around the enormous difficulties of research into the paranormal. Personally, I have a problem only when scientists categorically state that any belief in the paranormal must of necessity be rank nonsense; if they do this, I don’t see how they are behaving any differently from a credulous believer who simply knows that these phenomena are real even if they cannot offer any strong evidence for them. At heart we all have our core beliefs, our ways of seeing and conceptualising the world, and we continuously confirm them to ourselves throughout our lives. In my opinion, no one, not even the most objective scientist, is exempt from this innate psychological bias.

LIES, DAMNED LIES, & STATISTICS

The PinkUn, the press and online outlet that reports on all things Norwich City Football Club, runs a fan forum which I visit almost every day, and on it there is a poster who loves statistics. In many of his posts, perhaps even a majority of them, there’ll be a reference to xG or xGA, or a heat map of where a player has been on the pitch during a game, or more obscure and convoluted diagrams which leave me totally flummoxed, all serving as evidence to support the argument he’s making. But it’s common for the two of us to disagree strongly on the pertinence of this data. Perhaps it’s generational, perhaps it’s because my background is in the arts rather than the sciences, but we have widely different opinions about the significance and value of these numbers.

The most basic stat in football, of course, is a goal scored. This is simple: if a player kicks the ball and it ends up in the net, he has a goal to his name. There is a slight complication about what happens if the ball is deflected in off one of the defenders: the general rule is that if the ball would have ended up in the net anyway without this intervention, it counts as a goal chalked up to the forward; if not, it is registered as an own goal.

A second simple stat these days is the ‘assist’: this is the last touch by any player on the same team before the scorer nets his goal. It is usually a pass or cross of some kind. However, there are problems even with something which sounds so simple. First, a wildly misplaced pass counts as an assist if a second player on the same team is the first to touch the ball afterwards and scores a goal. Indeed, there is no need for the first player to have even tried to pass the ball: if it bounces off his knee and falls favourably for a second player who scores, the first player is awarded an assist. Also, if the first player makes a wonderful pass which gifts the striker an open goal but he fluffs the shot, there is no assist. So there is obviously an element of chance involved in these figures and not all assists are equal in merit. This wouldn’t matter if the assists for each player over the course of a season numbered in the hundreds, since probability would even out the element of luck, but most footballers get a very limited number of assists in any one season, generally in single figures. So if our best player of the 23/24 season, Gabriel Sara, got 14 assists, that is almost certainly meaningful, but the difference between players who get four and six assists is likely to be down purely to chance.

The measure xG is even more tenuous. This is a figure which shows the number of goals a team might expect to have scored in a game based on a range of criteria, such as where the player who shoots is placed on the pitch, whether there are defenders between him and the goal, the difficulty of the angle involved, and so on. These things are weighted according to the historical likelihood that a shot will hit the back of the net under these conditions. I assume this is all done by machine algorithms and there is no little man watching the game and ticking boxes, which in theory rules out subjective influence, although it can be argued that the original decision that, say, a player being 20 yards from the goal is worth 0.2 points must have been to some extent a subjective evaluation of its goal potential relative to other criteria.
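For anyone who likes to see the arithmetic spelled out, the basic idea can be sketched in a few lines of Python. The shot probabilities below are invented purely for illustration; real xG models derive their weightings from enormous historical datasets of shots and outcomes:

```python
# A toy illustration of how xG accumulates over a match: each shot is
# assigned a probability of scoring (the numbers here are made up for
# the example), and the team's xG is simply the sum of those probabilities.
shots = [
    {"description": "tap-in from six yards", "p_goal": 0.70},
    {"description": "shot from 20 yards out", "p_goal": 0.05},
    {"description": "header from a corner", "p_goal": 0.12},
    {"description": "one-on-one with the keeper", "p_goal": 0.35},
]

x_g = sum(shot["p_goal"] for shot in shots)
print(f"Expected goals (xG): {x_g:.2f}")  # 1.22
```

So a team can rack up an xG well above 1 without scoring at all, or score twice from an xG of 0.3 – which is exactly why single-match xG figures need handling with care.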

Obviously football is no longer merely a game: it’s big business. This means that measures which are available to the fans such as xG are relatively crude compared to the mountain of stats that clubs now gather behind the scenes. In the past, the managerial team on the bench would be watching the game and judging by the ‘eye test’; these days they are usually poring over their laptops getting the latest data on the match as it evolves. In a business where the potential profits are huge but the margins between success and failure are often paper-thin, nothing escapes measurement. For example, the latest buzz phrase is data-based recruitment, by which new players are no longer signed mainly from the reports of scouts watching them in games, but according to their figures on a screen, which are based on a huge accumulation of facts about them, such as how fast they can run, how many miles they cover each game, what percentage of successful passes they make, and so on. And even a sceptic like me has to concede that this has worked wonderfully for Brighton, propelling them to the top half of the Premier League, so every other club is rushing to jump on the bandwagon and setting up similar recruitment systems.

So statistics now play a crucial role in football, but some will argue that ultimately football is only a game, so if we want to make important judgements about the value and usefulness of stats, we should turn our attention to a field where they can make the difference between life and death: medicine. In modern medicine we have available a plethora of stats to measure red and white blood cells, haemoglobin, cholesterol, triglycerides, blood sugar, vitamins, minerals, liver function, and so on ad infinitum, and major decisions, such as whether to put someone who is asymptomatic onto statins, are made based on these figures. As a result, a sizeable proportion of the adult population in the US (around 35% in 2018/19 according to the NIH) now takes statins, many of whom have never had a heart attack or even any symptoms to suggest that one might happen soon. Markers are treated as if they are symptoms, but many people with high LDL levels do not suffer cardiac problems while others who have low levels sometimes do. All we can ever do, it seems to me, is play the odds and hope for the best, while figures and statistics can suggest a sense of certainty and control which isn’t justified.

About six years ago, when I was living in Portugal, my doctor arranged for me to have a CBC (complete blood count). This showed that I had a high level of LDL (around 180) and he suggested that I might want to consider taking statins, especially since there is a history of heart disease in my family. He put the data from my CBC into the computer, including other facts like my age, my height and weight, and my family history, and the NHS programme he was using calculated the likelihood of my having a heart attack within the next five years as a percentage down to a single decimal point. It all felt impressively scientific, but I still declined the drugs. To be fair, my doctor listened to my arguments and didn’t harangue me even though I must have been a nightmare patient armed with bits of information which I’d picked up from the internet and talking as if I knew it all. I know my experience is anecdotal, but I feel it shows how figures can gain a life of their own and give an unwarranted impression of exactitude, and how doctors under the pressure of time are going to use them to make key decisions, while most patients, unlike an awkward sod like me, will accept their doctor’s best advice and start taking the tablets.

Another problem with medical statistics is how poorly they are reported in the press, especially how journalists ignore the difference between absolute and relative risk. There is a huge difference between stating that eating a specific food doubles our risk of getting colon cancer and describing the same statistics as an increase in the risk of being diagnosed in any one year from 1 in 10,000 to 1 in 5,000. The newspapers, not unnaturally, will always go for the more dramatic figure in their search for readers or clicks. Then, of course, there is cherry picking of data, where advocates of a plant-based diet will highlight different research from someone who aims to push the benefits of a carnivore one. And this doesn’t even start to address the host of problems around nutritional research in general, such as the drawbacks of self-reporting or the existence of publication bias or the over-reliance on meta-analyses which clump disparate studies together in order to get an adequate sample size. Finally, academics are not always innocent victims in skewed reporting, with some of them making ridiculously exact claims such as the claim that every glass of wine takes twelve minutes off your life expectancy, knowing full well that this is what newspapers will pick up on and feature in their headlines, so their research will get near to the top in search engines after all those clicks, and they are therefore more likely to get further funding in future.
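To make the gap between the two framings concrete, here is the same pair of figures from the newspaper example worked through in a small Python sketch (purely illustrative):

```python
# The same change in risk, expressed two ways. The figures are the ones
# from the text: annual risk rising from 1 in 10,000 to 1 in 5,000.
baseline = 1 / 10_000
with_food = 1 / 5_000

relative_increase = with_food / baseline   # 2.0 -> "doubles your risk!"
absolute_increase = with_food - baseline   # one extra case per 10,000 people

print(f"Relative risk: x{relative_increase:.1f}")
print(f"Absolute risk increase: {absolute_increase:.4%} per year")
```

Both numbers are true; it is the headline writer’s choice of which to print that does the misleading.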

Another area where stats have become enormously significant is education. The desire for higher standards has led to schools being placed in league tables based on the quality of their education, which tends in practice to mean their exam results, both because these may be what parents care about most and because they are easily quantifiable in a way that quality of teaching isn’t. The problem is that this changes the way that teachers teach and they start to teach to the test rather than taking a broader approach to giving a child an education. Many other key decisions, such as where people choose to live, follow on from this, with the so-called postcode lottery where two families who live close to each other but in different administrative districts might end up with their children going to different schools with very different reported standards, so house prices soar in the area with higher-ranked schools. In their desperation to get higher up the league table, some schools might choose not to enter weaker students for examinations, since failures will drag down the school’s score and lower their position in the league tables. In short, as soon as we have statistical gathering of data, we will get individuals and organisations gaming the system.

Obviously it’s easy for people like me, who are not trained in the collection of data and its interpretation and don’t fully understand all the ins and outs, to use a famous quote such as ‘Lies, Damned Lies, and Statistics’ to come up with smartass headings, and to pooh-pooh the use of data. But it is certainly true that the amassing and presentation of big data has become a powerful tool in public discourse, and for this reason we should approach it with more than just a quip and a cynical shrug of the shoulders. On the plus side, from comments I see online, even on my PinkUn message board, I sense that the ubiquity of data online has led to a growing sophistication among the general population about statistics and a greater scepticism (compare the fairly basic content of a popular book of the 1960s, How to Lie with Statistics by Darrell Huff, with some of the more advanced material we find these days on the best of the internet if we look carefully). As a result, there seems to be more awareness of the potential to twist and misuse data, and it is no longer only scientists who come out with the hoary old truth that ‘correlation is not causation’.

Despite this, however, I still feel that figures and charts and tables can create a sciency feel and it remains easy to be dazzled by them and to switch off our critical faculty, not least because they often back up our biases and set off our weakness for logical fallacies. Also, many of the problems mentioned in this essay are not intrinsic to statistics but are unforeseen secondary consequences of their use (one potentially crucial outcome I haven’t mentioned, for example, is the power of opinion polls to influence voting behaviour during elections). Many of these problems, perhaps, could be addressed by the compulsory teaching of data analysis at secondary school. This might not have much of an effect in the here and now, but should make future generations better able to fight off the tendency to be bamboozled by what is sometimes the deliberately sly and illegitimate use of statistical data and their graphic representation.

A BELATED HAPPY NEW YEAR

When I was much younger, in my twenties and thirties, the start of a new year always felt very significant. I much preferred New Year’s Eve to Christmas. OK, we had to put up with Andy Stewart and Moira Anderson and bagpipers in Edinburgh and all the other clichés about Bonnie Scotland on New Year’s Eve, which was much less enjoyable than watching The Wizard of Oz on Christmas Day. But apart from that, the New Year was a lot more fun.

And for me, as a teapot atheist, it felt more meaningful. It represented a fresh beginning, a chance to transform my life and try to move towards being a better person with a better future. Like most people at that time, I made and rapidly broke resolutions each year. And over the years I did most of the obvious things at least once, like Trafalgar Square at midnight, despite my hatred or perhaps I should say my phobia of crowds. My long-term partner and I hosted parties where our gang of friends gathered and drank ourselves silly, waited for the bongs of Big Ben to welcome in the New Year, and then we all hugged and gave each other shy pecks on the cheek. At other times I even embraced strangers in city centres, on one of those rare occasions when we British allow the ice to thaw a little and awkwardly accept the risk of letting other people enter our tenaciously guarded private space. Admittedly, when I was abroad, which was most of the time after 1996, I would play the Phil Spector Christmas Album and sing along with the Ronettes and Frosty the Snowman on Christmas Day, then try not to guffaw at Spector’s huge dollop of schmaltz in his Yuletide message to the listener at the end of the LP. But the bulk of my enthusiasm was still reserved for the last day in December.

This year, just as I did in nearly all of the new years I saw in during the 21st century, I went to bed way before midnight, too tired and disengaged for even a celebratory glass of wine. So I’d survived another year – so what? Was that really worth losing a good night’s sleep?

A lot of the reason for this is doubtless that I’m older now and have slowly grown increasingly glum and anti-social. But also I suspect that I reflect a general foreboding people have that the world is spinning out of control and heading for a very dark place, so no one wants to think too much about the future. It seems a place where few of us want to go, except for a handful of control-freak neo-Fascists who can’t wait to see the reopening of Dachau. I recognise that this sentence sounds somewhat hysterical, but what else are we to make of a world where a man is voted back into office even when he has made it clear that he doesn’t really believe in democracy unless he wins and actually did his best to overthrow it when he lost; a world where a bilious billionaire openly funds neo-Nazis, bitter culture wars rage and public opinion seems to be completely and aggressively polarised, hatred of foreigners and Muslims and transsexuals and a host of random Others is escalating day by day, and we still have all the old familiar nightmares such as nuclear armageddon to worry about, but also a whole new bunch of fears around climate change? Surely it makes perfect sense to want to slip under the duvet and not come back out?

And the idea of special days is generally a load of tosh. The cosmos doesn’t know that the 25th December was the day of Jesus’s birth (and anyway, it almost certainly wasn’t), or that 2024 has become 2025, or even whether it’s a Tuesday or a Thursday. Human beings can divide the endless fabric of time into almost any system they wish: the use of the solar or lunar calendar being an obvious example. Yet divide it we must, for both practical and psychological reasons. On a practical level, it helped us to survive by making it possible to predict the future and, in agricultural societies, to know when to plant and harvest crops, while dates such as the summer and winter solstices make psychological sense in cultures which had no scientific basis for understanding the seasons and needed the reassurance that the dead sun would be reborn.

But we shouldn’t feel too superior to our ancestors because even in our modern, shiny, technophilic world, we seem to have retained this psychological need to designate certain days as magical and celebrate them. Business has not been slow to recognise this atavistic imperative and has gleefully added a whole new set of red days to the calendar and changed and extended the meaning of the old ones. This has led not only to the crass commercialisation of religious festivals such as Christmas, but also to the addition of days which are much more plainly the brainchild of business, such as Mother’s Day, Father’s Day, Valentine’s Day and the modern Halloween, and these relatively recent inventions are spreading across the globe due to western cultural influence. Some of these special days, such as birthdays, seem completely natural to us in the west, but they were not always celebrated throughout the globe. For instance, when I first went to live and work in Indonesia, I was shocked to learn that many older people didn’t even know the date on which they were born and even those who did seemed to have little interest in celebrating it. Nowadays, of course, modern Indonesians have birthday parties with cakes and candles which are little different from those in the west.

So my attitude to most of these red letter days, especially modern consumerist monstrosities like Black Friday (which some of my Asian students now refer to as a ‘festival’ for heaven’s sake), is bah humbug. But New Year’s Day still has some attraction for me. I retain a fondness for the idea of a day when we reflect upon the past and look into the future and try to become better people. So I wish the handful of people who come on here all the best and a belated Happy New Year. Let’s hope my gloomy prognostications about the future are proved totally wrong and we still have the desire, and more importantly the freedom, to imagine transforming ourselves after I am long gone from this earth.

GAY CHATROOMS

I’ll be honest; I sometimes go on gay chatrooms. In the early days of these chatrooms, only text was available; nowadays there is often the option to use a microphone or a cam. Most initial contacts in these rooms are short and test the water to see if the other person is compatible, while longer chats will usually develop into sexting (stimulating each other by means of sexual chat) which ends with one or both parties climaxing. In many ways chatrooms remind me of old-style cruising in their randomness and impersonal approach to sex. Also like cruising, they’re a totally mixed bag. Sometimes they can be exciting and fun; at other times they seem a total waste of energy and effort because so much of our time on them is spent in repetition, frustration, and even a kind of boredom.

I have noticed that many of the people in gay chatrooms are young guys aged between 18 and 30, or much older men who are discovering aspects of themselves that they denied or suppressed or perhaps didn’t even realise earlier in life. Another common set of users are married bisexuals looking for clandestine sexual pleasures that they find hard to get in real life. (Throughout this essay, of course, I am merely reporting what people have told me about themselves and this may well not be true: in the virtual world of chatrooms it is easy to create a completely fake persona and this is often a big part of the pleasure).

A key attraction of gay chatrooms is that they offer a safe space both emotionally and physically. They involve none of the risks of cruising (getting queer bashed) or of hook-up apps (where the guy who invites you to his home could be the psycho from the movie, Cruising). A sense of emotional safety may be the reason why there are so many young guys there: chatrooms are places where they can explore their romantic and sexual feelings through role play and fantasy. Research suggests that most young guys these days have watched porn, and the next stage often seems to be acting out online the porn scenarios they have seen, discovering how they feel as they mimic and take part in them. These rooms are a perfect place to do this, with the reassuring knowledge that it is happening virtually and they can click and leave at any time. They also have the chance to speak to more experienced, older gay men, to try to find out what gay life is like in reality, especially in the bedroom.

Older guys who are new to being gay can use them for the same purpose; however, in my experience, they are far outnumbered by married bisexuals who are looking for furtive virtual sex. The arguments for and against this are complex: it may help to keep their marriages alive by providing an outlet for these husbands and even prevent them from physically straying in the real world, but at the same time it is very unfair on their wives because this is virtual cheating (unless the marriage is open and the wife knows that this is happening), even if the sex is only fantasy. As a result, one of the most common adjectives in gay chatrooms is ‘discreet’, used mostly by heavily closeted gays and these married men who secretly like men.

Another advantage that chatrooms offer is the opportunity to act out fantasies that users might not want to act out in real life because they feel too threatening and extreme, such as bondage and master/slave. This kind of chatting, with or without a microphone or cam, is close to certain types of gay porn in a way, except that in porn surrogates perform these acts, rather than the chatters themselves. For many, I imagine, chatting is more enjoyable because there is genuine contact with another human being, not the fabricated passion of porn, which reduces the watcher to the role of passive spectator. This may make sexting in a chatroom feel more spontaneous and less contrived since the real human being in the text window feels like a normal guy, not an unnaturally handsome and hugely endowed performer. Perhaps most important of all, the man to whom you are speaking is also truly experiencing pleasure and not merely acting out fake desire.

Many gay men have difficulty finding someone for sex, for both personal and practical reasons. First, there are those who lack confidence in their looks or physique, who can transform in these rooms into muscled, well-hung studs or halve their age at the click of a key (although obviously this rules out camming). Second, there are guys who live in the middle of nowhere, far from big cities, and even those in smaller towns may fear exposure so much that they prefer the total anonymity of virtual contacts. Next, as already mentioned, there are those, for example married men, whose personal life situations mean they prefer not to find sex in the real world around them. Another group are the disabled, who, depending on their disability, can use a keyboard to act out fantasies which they might otherwise find impossible. Then there are those with fetishes they feel ashamed of, but who can find in these rooms someone else who shares their fetish, no matter how unusual. Finally, men with HIV who like to bareback but don’t want to endanger others, or bottoms who are too scared to do what they want most can at least bareback in fantasy. So sexting offers freedoms and opportunities to a wide range of people and can even perform a useful social role, as for example during Covid, where online sex almost certainly helped to reduce the spread of the disease among both gays and straights.

And despite their meat market atmosphere, unlikely as it may seem, a small community often builds up on these sites, gathered around the ‘lobby’ or home page, regulars who get to know each other and take part in general chat, a substitute for what used to happen in gay bars but is less easy to find in an age of dating apps. These little groups texting each other on the home page are like online families, sometimes supporting each other and sometimes squabbling and bitching just like any family except for the Waltons.

The greatest drawback of gay chatrooms is that they tend to be addictive. As with physical cruising, there is always the lure that if you hang around long enough, the perfect sexual catch might come along. The entrepreneurs who set up these sites know this and organise their garden of earthly delights with all the expertise of a supermarket chain. The lobby is like the entrance to the store and in many chatrooms there are ‘open cams’ there, where those who tend towards exhibitionism (and believe me there are many, especially big boys) reveal what they have to offer, and these tempt the visitor in (a kind of reverse of what happens in supermarkets, where the chocolate is usually at the check-out). Once inside the store, the mix of text, microphone and cam offer a full range of options: texting for the shy and those whose primary concern is discretion; camming (either openly or in private rooms); group action where one guy becomes the room ‘slut’; a range of specialist rooms for fetishes and kinks. There is even often a place for those who are looking for love, or at least a steady relationship, so the minority who are doing this can dream that somewhere in the store they will find their soulmate. The odds don’t seem very promising, but in the end who knows?

So the visitor wanders through the store and views the products on display. Many chatrooms request that people make a profile when they sign up, in which they describe themselves and say what they have to offer and what they are looking for. In practice these profiles tend to be rather predictable. This is reflected in the names that people choose as their nick, and in personal outlines which include their age and the age they are seeking, which country they come from, and, most importantly, their sexual preferences: top or bottom, sub or dom, if they cross-dress or have a particular kink. Penis size is often included in these profiles, and if we were to believe what the posters claim, most guys are blessed with at least seven inches (except for some of the sub bottoms who emphasise how small and unworthy they are.)

We usually find out very little about the people we are chatting with other than these titbits. They become reduced to objects, a masturbatory aid, with little more character or individuality than the actors in porn. As a consequence, these chat rooms can easily turn into a dead-end. There is often an obsessive quality to sexual desire and most of us have preferences and fantasies we repeat again and again. So in reality, just like porn, these sites can end up limiting our imagination and also trapping us in one role, as we repeat the same one with a succession of different contacts whose obsessions complement our own. Also like porn, this makes the experience one-dimensional in the sense that it reduces sexual activity mostly to vision (and sound if a microphone is used), whereas non-virtual sex is above all a tactile and sensual meeting of bodies and minds. In a chatroom there’s no touch or feel, so the rich and complex joys of sex become flattened and predictable and, unless we’re careful, obsessive, repetitive and sterile.

Finally, and worryingly, there is the potential threat of the grooming of underage boys. I suspect that this danger will eventually lead to legislation making people prove their age and identity before joining these rooms (most of them already require registration nowadays, but this tends to be perfunctory and only enables abuse to be reported after the event through the use of the IP). And even this tightening of the law, which sounds sensible on first thought, might lead to chatrooms being set up and based instead in countries which either do not have strong laws against underage sex, or have these laws but rarely enforce them, and many users of chatrooms, wary of giving their private details, would leave the monitored sites for these more dubious alternatives. It is also worth pointing out that chatrooms are at least safer than hook-up apps simply because many of them are global and geographical distance limits the possibilities for grooming, whereas hook-up apps operate at the local level with the express purpose of meeting up.

I acknowledge that much of this essay is anecdotal and is based solely on my personal experience. And I cannot speak about straight ‘dating’ chatrooms since I find that the idea of watching or taking part in heterosexual sex, or even merely texting about it, makes me feel queasy, just as I imagine any thoughts of same-sex activity repel the majority of straight men. However, I suspect that a lot of what I have said would also be true of heterosexual chatrooms which are built around sex; there might be a few different emphases, perhaps, but otherwise I don’t see why they should be massively different. Also, in any research or discussion about sex, we must always remember that people often fail to tell the truth, especially when we are discussing phenomena which are grounded in fantasy. Because of this, we should always maintain a certain scepticism about what people say and claim when they are in chatrooms.

I’m sure many people who read this article, both gay and straight, will find the whole topic rather sordid and would prefer that I don’t raise it. But these chatrooms are a feature of contemporary life and do much to influence our thinking about romantic and sexual relationships and behaviour, and crucially they are helping to forge how young people regard these things. They are an important aspect of many people’s lives, especially the younger generation, so instead of being the target of moral condemnation, they need to be studied, openly discussed, and their advantages and drawbacks recognised, not least because, in a world with a global internet, it’s hard to imagine a future where they don’t exist.

CHOICE & FREEDOM

In the modern world we are overwhelmed by choice. Much of this is trivial – which brand of toothpaste we should use, whether to drink Coke or Pepsi, or heaven forbid even water, which colour we want to decorate our bathroom – but much of it is also vital, even life-changing: whether to go to university and incur a lifetime of debt and, if so, which major to take; at what age to get married or whether to get married at all; whether to leave that steady job which bores us and to risk going back on the job market or moving into a completely new field. But even the trivial stuff has a real influence on our lives simply because it accumulates and we have to make literally thousands of these tiny decisions on a daily basis. Walk down the aisles of a modern supermarket and there are sometimes hundreds of variations on a basic product. We are drowning in this world of the drip drip drip of needless, vacuous decisions.

In theory this glut of options should be a wonderful thing. On the surface we have so much more freedom than the generations who preceded us, whose lives were usually far more circumscribed. It was a brave woman who chose to remain single when a man proposed to her and thereby risked being ‘left on the shelf’; there was usually little chance for someone to move up and out of their social class of birth, so, for instance, in the industrial, working-class area where I grew up, most boys were destined for a life in a factory while girls were supposed to become housewives; the majority of people until quite recently lived and died close to where they were born. In contrast, in today’s world, at least in the UK and many other western countries, many people never marry at all, or have a patchwork career where they move from one field to another, often out of necessity. In addition, we are much more mobile than in the past and move not only for work but also in response to relationships or simply a desire for novelty.

Is this freedom genuine or is it just smoke and mirrors? Do we really have more freedom than in the past or are our horizons simply limited in a different way? And if we use Berlin’s distinction between positive and negative freedom – freedom to do something and freedom from things such as want and oppression – are both equally available or unavailable to us in the modern world? Crucially, are we free not to choose, particularly in situations when the choice is insignificant? Do we even want all this choice and freedom?

Recent politics in both Europe and America suggests that perhaps we don’t, or at least that a significant number of us don’t, possibly even a majority. Less than one hundred years after the horrors of Nazism and Fascism in Europe, Stalinism in Russia, Pol Pot’s ‘socialism’ in Cambodia and neo-Fascism in Chile, to mention just a few, it seems there is now a fresh longing afoot for the strong leader, the man (and it is nearly always a man) who will tell us what to do and make us safe, at the cost of the removal of our freedoms, especially those freedoms from things like oppression, discrimination and injustice.

There seem to be two strands in all of us as human beings. The first is a desire for security, for the certainty of rules and limitations, even if they become onerous and no longer really serve to protect us. The second is a curiosity and an openness to difference and novelty and a longing for a world without restrictions. In evolutionary terms, this makes good sense because in order to survive, our species needs to find a balance between these two ways of thinking. But over the last twenty years or so, the pendulum has swung heavily towards the first desire, for security, and we seem almost blasé about endangering the democracy which nearly all of us in theory support.

When we remember all the terrible things that happened under Nazism, Fascism, and Soviet Communism, surely we should fight harder to maintain the freedoms we have gained, so why aren’t we? I think the first reason is fear of the Other, something I doubt we can ever eradicate in a species which evolved to live in tribes of around a hundred and fifty people. This makes it easy for provocateurs to stir up hatred to further their own agendas, especially since they are often supported by big business and the very rich, whose greed seems insatiable. Another possible reason is that people have become complacent in western democracies and view the right to vote as an eternal right which can never be taken away, but history and recent events in America suggest that this is far from true. To use the title of a song from the Freak Out! album by Zappa and the Mothers, there is a sense that ‘it can’t happen here’. A different type of complacency leads us to assume that we will always be part of the in-group who won’t end up in a concentration camp: they’ll never come for us. Most of us surround ourselves with people who are similar to us in background and thinking, so we tend to assume that the whole of society is very like our tiny bubble, which makes it easy to place the blame for bad things that happen on a handful of miscreants and/or outsiders. We’ve seen this so often in history that it is very difficult to have any hope that it will ever change.

Another reason why people fear freedom is that it entails responsibility, which brings us back to the issue of choice. Being placed in jail sounds like a terrible thing to happen, but it takes away a lot of our need to make decisions since our whole life is reduced to following the set routine of the prison. Inmates become habituated to this rigid environment and often have problems adapting to the world outside once they are released. Less extreme than this, but still a potent reality in rapidly changing modern societies, is the fact that we now have to make key decisions such as those I mentioned earlier (e.g. do I get married?) which we used to make on autopilot, doing something because almost everyone else did it and anyone who chose not to was seen as rather suspicious or perverse or simply weird. Also, when we are free to make decisions, especially major ones but even fairly trivial ones about purchases, there is no one else to blame if things go wrong. In my opinion, the mountain of self-help books that clutters our libraries and bookshops is destructive here because most of these books preach freedom as an ultimate good and gloss over the fact that it comes with a price; we should all get in touch with our ‘inner selves’, they say, and follow our individual wishes and desires, and that will make us all happy.

Except perhaps it won’t. Because the research suggests that we are becoming less happy in developed countries, less satisfied, despite our relative freedom compared to the past. There is so much pressure on us to maximise our lives, and as someone who is now old, I can see huge differences between the world in which I grew up, a world where people often enjoyed really simple pleasures such as working in their gardens or even stamp collecting, and the world which young people now grow up in, an unstable world where everyone is treading the hamster wheel, fearful of falling off, so there is much less time for these pleasures. I notice, for example, that when I teach English and ask my foreign students what their hobbies are as a way of breaking the ice, often they have none. Their answer will be something very generic such as watching movies or listening to music, but when I probe further there is little depth of interest in these activities. Most damning of all, at least to me, is the fact that they often say shopping.

Finally, I’d like to explore the generally accepted belief, to the point of it being a truism, that we really have more freedom nowadays. But are these freedoms illusory, or at least meaningless? The freedom to choose among a hundred different types of shampoo is in many ways not a freedom at all: it becomes a burden. In contrast, radical freedoms such as the hippie dream of opting out of society, getting a place in the countryside and living a simple and natural life as depicted in the 1970s comedy series The Good Life recede further and further into the distance and become a silly and comforting fantasy, one which is only available to the very rich who can afford to step off the hamster wheel. Of course, this was always largely true, as the failure of my hippie generation proves, but as the vice of consumer capitalism gets tighter and tighter, in many ways we have less freedom than we used to have, even as we are fed the lie that we have hugely more.

Ultimately the concept of freedom is a complex one which is riddled with paradox. In some senses, the slave has more freedom than the ‘free’ individual because he or she is released from the burden of choice, and people adapt to restrictions and grow to accept them, which ironically can lead to greater happiness, or at least complacency. The dream of perfect freedom is simply that: a rather infantile fantasy in a world which will always limit us physically, psychologically and socially. I want to make it quite clear that I am not arguing for a ‘strong man’ or limitations on our democracy; as Churchill famously said, it is ‘the worst form of government, except for all the others’. But I do believe we need to be careful because the extreme right is twisting and abusing the concept of ‘freedom’ as a way of trying to remove it. As always, they use the distraction of bread and circuses and choices we don’t need so that we fail to notice as the rug is pulled from under us.

IS CAMP DEAD?

After a discussion with one of my students, I began to wonder if camp is a dying mode of feeling and behaviour, or is perhaps already dead. This led to my re-reading Sontag’s essay (or what she calls her ‘jottings’), Notes on ‘Camp’. Every time I doubt Sontag as a thinker and critic and feel she is sometimes obscure and portentous, a quick flick through these ‘jottings’ makes me realise just how perspicacious her writing often is. Consequently, I strongly recommend that anyone who is interested in this topic should read Notes on ‘Camp’, and I fear that I do little more than paraphrase her thoughts in the first part of this essay (all of the direct quotations I use here come from that essay). Once I have talked about what camp actually is, shamelessly stealing many of Sontag’s insights, I will go on to question whether camp has become impossible, exploring four possible reasons for this: homosexuality has become mainstream, traditional splits between high and low culture have more or less dissolved, no one believes in the possibility of serious discourse anymore, and feminism has led to massive changes in traditional gender roles. Sontag argues at one point in her essay that camp is the antithesis of tragedy: perhaps for this reason it can no longer exist in a world where the only option is irony, even if we have not quite yet reached Marx’s eventual destination of farce.

It is often argued that camp is a sensibility rather than a style or a fashion or a way of behaving: that while it is impossible to define camp, we know it when we see it, at least if we are in tune with this sensibility. Because of this difficulty in pinning camp down, we often resort to making a list of people and things which are generally agreed to be indisputably camp: famous actresses and singers (Judy Garland, Greta Garbo, Carmen Miranda, plus Bette Davis and Joan Crawford as a catty double act); certain films (The Wizard of Oz, Johnny Guitar, Beyond the Valley of the Dolls, What Ever Happened to Baby Jane?); movements in art (Mannerism, L’art pour l’art, Art Nouveau), art forms that tend towards the overblown and which take themselves very seriously indeed (ballet, grand opera); things that are very obviously covers for other things (Health and Efficiency magazine). Camp can be instantiated in certain objects and fashions (feather boas, peacock chairs, beehive and bouffant hairstyles, ostentatious make-up and jewellery). Finally, it can manifest as an androgynous way of looking and behaving: men dressing and acting like women, women dressing and acting like men. Drag queens have always played a key role in camp with their absurd, exaggerated portrayals of femininity. And sometimes the opposite becomes camp, too: the alpha-male displays of masculinity of narcissistic, muscle-bound bodybuilders, cowboys, leather queens.

But listing is not analysis and Sontag attempts to go beyond it to aim for the essence of camp (if indeed camp has any essence, something that is often denied). Firstly, camp is against the natural and in favour of artifice: as Sontag says in a marvellous aperçu, ‘Camp sees everything in quotation marks’. There is an emphasis on the difference between how something appears and the underlying reality, although at the same time the whole idea of an inner reality is sometimes ridiculed. Style is favoured over content, so camp is resolutely superficial and glories in the surfaces of life and is highly aesthetic in approach. Long before postmodernism questioned our concepts of surface and depth, camp was busy mocking and undermining them. The external look of a person or object is simultaneously a false veneer and the only thing that matters.

This love of surface wallows in extravagance and excess. The Golden Age of Hollywood musicals was the perfect example of this, with its fabulous, over-the-top Busby Berkeley extravaganzas and its ludicrous, colour-splashed images of Carmen Miranda with half an orchard on her head surrounded by dancing girls waving bananas in the air in The Gang’s All Here, a sequence that still has me laughing uncontrollably no matter how many times I see it.

This brings us to a second key feature of camp: it makes fun of all attempts at meaning or profundity. Sontag again: ‘The whole point of Camp is to dethrone the serious’, and ‘Camp convert[s] the serious into the frivolous’. This lack of seriousness is used to differentiate what is camp from what is not. Thus, Abstract Expressionism isn’t camp; Pop Art often is. Sometimes there is no attempt at seriousness, as we see in movies such as Barbarella and the work of John Waters, TV shows like Lost in Space and the 60s Batman series, and pop groups like the Village People. In other cases, camp ridicules art forms which take themselves extremely seriously and throws them in the trashcan along with the subcultural junk it tends to prefer with no regard at all for high art status (ballet and grand opera). And sometimes it appropriates things which originally took themselves seriously and then drags them down (with the emphasis on drag) to the level of campy trash (the Eurovision Song Contest, Miss World).

Thirdly, camp is amoral, ‘a solvent of morality’. This links it clearly to earlier historical phenomena such as the 19th century dandy, the figure who Sontag argues acts as a bridge between the aristocratic, fin de siècle decadence of Wilde, Whistler and Beardsley and the phenomenon of camp that followed in the century of the masses: ‘Camp is the answer to the problem: how to be a dandy in the age of mass culture’. No one expresses this flippant rejection of morality better than Shelby Carpenter, a character in the film noir, Laura: ‘I can afford a blemish on my character, but not on my clothes’. Sometimes one senses that perhaps a moral message is intended – a criticism of insincerity, bullshit and hypocrisy – but more often camp comes across as a refusal to accept any kind of overarching moral standards.

Camp is almost exclusively an urban phenomenon, a big-city sophistication which mocks the suburban and the homespun, and plays with concepts of innocence and experience as adroitly as a Restoration comedy. Within these urban spaces, as Sontag points out, one of the key functions of camp has been to act as a kind of ‘private code, a badge of identity even, among small urban cliques’. This makes it a child of industrialisation and the cities that grew up around the subsequent affluence, where these urban cliques could thrive in the anonymous demi-monde in which they clustered.

The most important of these cliques by far was the homosexual subculture that grew up in post-war American cities and other global capitals such as Amsterdam and London, which Sontag identifies as the ‘vanguard’ of camp. While I agree with her on this, I would question her statement that if homosexuals had not invented camp, someone else would have done so. I think here she is underestimating the role of gay people in the development of camp because for them it was not a mere game or divertissement but a way to survive in a world where having a gay relationship could land them in prison. In the UK, for example, a language called Polari grew up in the gay subculture, a secret slang that gay men used to identify themselves to each other and protect themselves from a hostile world, and this language was linked to an effeminate self-image and behaviour personified by Quentin Crisp. Camp emerged not only because it was fluffy and delightful and fun, but also because for some people it was essential.

So why do I feel that camp may be dying? First, I’d like to draw on Sontag’s division of camp into two strands: the first, which she calls ‘pure’ camp, seriously believes in what it is doing and is largely unconscious of its own campiness (grand opera), while the second strand is knowing, flippant, self-conscious and deliberately ironic (the films of John Waters). These strands have always co-existed, of course; I cannot believe that the people making The Gang’s All Here or the 60s Batman ever took themselves even slightly seriously. But when the social changes after World War II became a mass movement and camp slowly trickled beyond its subcultural roots into society at large, it ceased to be a secret code and began to lose some of its power, so having a collection of Judy Garland records was no longer a telltale sign to a minority in-crowd: everyone was suddenly in on the joke. So the Eurovision Song Contest, for instance, has become an opportunity for all kinds of people to relish its campy tackiness, while Rocky Horror evenings attract audiences which are a mix of gay and straight dressed up as the characters in the show. This has arguably watered camp down, and many things are now accepted as camp which perhaps would not have qualified in the past. This may just be because of my age, but I find it hard to see Kylie Minogue as a camp icon, nor Madonna, nor perhaps even Cher. They are simply too knowing. Camp shouldn’t have to wink quite so blatantly and to such a large audience.

The second reason is a cultural drift towards a world in which very few people are placed on pedestals anymore, so much of the pleasure that camp offered from deflating the glamorous, the pompous and the high and mighty has been lost. The Hollywood stars of today are not as distant and unapproachable as those of the past; yet camp depended for its existence both on the glamorous, hugely popular icons it travestied and on the high culture it mocked, and there was often a genuine love of these things even as they were sent up. Ridiculous as it may sound, there was a genuine glamour and colour and beauty in Busby Berkeley and Carmen Miranda, and camp recognised and adored this.

But alongside this adoration, camp (and certainly 19th century dandyism) was often also associated with an ennui and a cynicism which has now spread more generally through society at large, to a point where there is a sense that people believe in very little in the contemporary western world. Both on the abstract, intellectual level of critical theory (no more grand narratives), and at the level of a confident populism (‘I don’t know much about Art but I know what I like’), there is a feeling that there are no standards anymore and that attempts at seriousness are risible and futile. Camp is reliant on seriousness for its existence, though, even as it mocks and undermines it: if there is no sacred, there can be no profane. There has always been a twilight zone between camp and kitsch, so that one person’s camp is another person’s kitsch, and contemporary scepticism has blurred this even further, to a point where I would personally label kitsch much of what some people now label camp.

Feminism has been another factor sometimes diluting camp. It has problematised the figure of the drag queen, questioning whether the comic, outlandish figure of a thousand drag shows is progressive or reactionary. Do camp and drag challenge accepted gender roles and undermine sexist stereotypes through ludicrous exaggeration and humour or do they mock and belittle women in a show of blatant sexism? When I was a young(ish) man, there was a theatre troupe called Bloolips which grew out of post-Stonewall radicalism and used drag as a kind of political tool (which makes it sound very dry and worthy – I can promise you it wasn’t, it was very funny). In 2024, though, we are much less confident about this kind of performance in a world in which everyone is much more sensitive about how groups of people are presented, the feminist and trans movements are often at loggerheads with each other, and the drag queens’ ruby slippers are treading on eggshells. Could such a group exist nowadays or would it become a target for activists and ideologues?  

In general, my answer to the title of this essay is that camp isn’t dying, and is certainly not dead, but is changing in response to larger social changes in the western world. This is a positive phenomenon in the sense that it reflects a world where gay people are much more integrated into society, gender roles are more fluid, and people have greater freedom about which roles to adopt. But this has come at a cost in my opinion: camp has lost a lot of its cutting edge and Sontag’s ‘pure’ camp is no more. When we watch Russo’s documentary, The Celluloid Closet, or listen to Julian and Sandy, it is hard to believe that audiences were so blind to the more than obvious subtext of camp, but it seems in many cases they genuinely were. That collective naivety has gone forever. I welcome this on the whole because camp is generous and inclusive at heart and I like the idea that everyone, bar a few rancid bigots, is in on the joke and having fun. There is just a part of me – a typical old fart, I guess, bemoaning that fings ain’t what they used to be – that regrets the spread of camp into the mainstream. And that when people like me pass on, a secret strand of our culture will have disappeared forever.

THE MARBLE INDEX & DESERTSHORE

I remember very clearly the first time I heard The Marble Index by Nico. I was listening to John Peel on the radio and he played the first song on the album, Lawns of Dawns. Instantly I felt chilled to the bone. My teenage self was very attuned to what later became Goth and I fell in love at once with this music from the graveyard. Nearly sixty years on, I still regularly play both this album and its follow-up, Desertshore.

The familiar public image of Nico is dominated by her relationships with famous men. For most people she is seen as little more than a beautiful model plucked from obscurity by Warhol and transformed into a chanteuse with the Velvet Underground, one of the beautiful people he used to decorate the Factory, along with other women like Sedgwick and his well-known troupe of drag queens. In a way she fought against her beauty all of her life and welcomed the day when it was gone, but she used that beauty very astutely to get where she wanted to go. She was a mass of contradictions as a person, much too complex to slot into any pigeonhole. She said that her only regret was not being born a man and yet so much of her life involved her being seen as a kind of female accessory to various famous men – Warhol, Morrison, Jones, Delon, Fellini, Reed, Cale – or as the glamorous ex-model performing for the benefit of male fantasies in teasing adverts. And while she always expressed a belief that she would have been taken more seriously as an artist if she were a man, she often name-dropped these famous men to whom she was attached in the public eye, as if her relationship to them in some way validated her as an artist.

Most people know only that she sang with the Velvet Underground for a while and nothing about her own music. But her music is what I want to focus on here because for me she was a greater artist than many of the more famous people around her. I will begin with my favourite song from her catalogue, Frozen Warnings. (Not a particularly original choice, I admit.) This is an utterly simple song but very little music gets anywhere close to its bleakness and sense of despair. And there is nothing insincere about this angst, none of the callow teenage posturing of a lot of later Goths who copied her. Many of the songs in both albums share this mood, particularly No One is There and Facing the Wind from The Marble Index, and Abschied from Desertshore. And this sense of loneliness, desolation and doom perhaps reaches its climax in what was her most famous song at the time, Janitor of Lunacy, as she unleashes a swirl of madness from her harmonium which surges and swells as she wails out the lyrics.

But despite her reputation for Germanic iciness and her notoriety as a monster of a mother who introduced her own son, Ari, to heroin, Nico’s music could be surprisingly gentle and tender at times. Ari’s Song, for example, has been described as a kind of lullaby, but it is far from a comforting one. It is generally thought that it was written at a moment when she believed that Ari would die; it is loving and caring and yet lacks the slightest trace of sentimentality: there is almost an acceptance that Ari will leave and a sense of his release from suffering – ‘Let the rain wash away your cloudy days’ and ‘Now you see that only dreams can send you where you want to be’. Also untypically soft for a Nico song, but equally unsentimental despite its surface sweetness, Afraid, from the Desertshore album, is close in many ways to a traditional love song. (And later in her career, she recorded many versions of My Funny Valentine, a romantic song which I felt suited her perfectly because its romance was quirky and not at all clichéd or saccharine.)

Technically her voice was far from perfect. Apparently she was deaf in one ear and this often made her sing off-key, as the Velvet Underground complained to Warhol when he insisted that she become an important part of their debut album. And her voice lacked the softness and control required for lighter songs such as many of those on her first album, Chelsea Girl. But when she needed to deliver an aural version of Munch’s Scream, as on Janitor of Lunacy, the voice was a Teutonic howl of terrifying sublimity. This was not only a personal wail of pain, but the wail of existence as Munch had described it when he spoke about The Scream. She was also fortunate to discover the harmonium as an instrument because its morbid drone made it the perfect tool for the spirit of her music and singled her out from other female singer-songwriters. (In contrast, she detested the flute on Chelsea Girl, which she felt destroyed the album – this doesn’t surprise me at all because no one radiates flute lady less than Nico.)

There is a sense in which the music on these two albums could only have emerged in the 1960s, when experimentation was encouraged and funded, and artists who were way outside of the mainstream could end up getting record contracts, but at the same time it has a timeless quality which makes it hard to date, unlike much of the work of the period (a lot of psychedelia, for example, has dated rather badly). Both albums have an almost medieval feel at times and a rather religious quality, especially tracks such as My Only Child, No One Is There and Abschied. To me, this is serious existential art rather than entertainment. It is in contact with something deep and real in the human soul.

Which brings me to what I want to discuss in the last part of this essay: the role of John Cale as producer and musician in these two albums. Opinions range from those who see his contribution as annoyingly intrusive, a bag of tricks detracting from the gaunt simplicity of Nico’s songs, to others who feel that he really enriched her simple songs with a great sensitivity for, and understanding of, her art. I lean strongly towards the latter opinion.

On YouTube there are versions of the songs on The Marble Index as sung and played by Nico alone on her harmonium before Cale added to them in the studio. I have to admit that I like the version of Frozen Warnings there every bit as much as I like the one on the album: there is a starkness to this utterly simple version where Nico reaches deep into her soul. However, if I imagine every track on the album being simply Nico singing and playing her harmonium, I am sure that it would soon become dreary, a funereal dirge dragged down by a lack of variety. Cale’s task was to avoid this, and in my opinion he did a good job. For example, I feel that in the final version of Frozen Warnings on the album, the icy ambience of the background sound broadens out what is personal despair into a sort of cosmic bleakness, a vision of a universe which is cold and indifferent and inhospitable to human beings.

Cale uses and personally plays an impressive array of different instruments on the two albums to counter any slippage into fatigue or repetition. Desertshore, in particular, has deft changes in mood and aural quality from track to track, from the full-on Gothic blast of Janitor of Lunacy, to the rhythmic patterns of piano on The Falconer, to the use of a cappella on My Only Child, to the nursery tinkle of Le Petit Chevalier, to the grim and weighty seriousness of Abschied, to the romantic piano chords of Afraid, to the expressionist distortions of Mütterlein, to the Middle Eastern feel of the final track, All That Is My Own. And throughout both albums Cale’s viola does much to create the classical feel to the music which I have already mentioned, a dark and moody chamber music with its solemn religious tone and its medieval ambience. I’m in danger here of repeating Nico’s ambivalence about being born a woman, not a man, and of drawing on sexist stereotypes, but the heart of these two albums lies within Nico while the thinking was executed by Cale.

I feel sure that very few people will share my extremely high regard for Nico as an artist, but her music definitely speaks to me in a way that almost no one else’s has. It’s hard to explain why some simple work strikes us as banal while other work feels profound, an intangible distinction that Blake, the ultimate Romantic poet, made when writing about Milton. Perhaps my love of Nico’s music reflects my bias towards art that is simple and sincere rather than clever and technically accomplished, and also betrays my personal leaning towards Romanticism, and usually a rather dark Romanticism. I recognise that Nico’s music will never be popular and mainstream, but I hope that in each generation a few people who are in tune with her will continue to discover her work and find that it speaks to them somewhere deep and mysterious inside.

FANTASY GIRLS & DISNEY WORLD

We’re strange creatures, poets and artists. We spend so much of our lives in fantasy worlds, and if we’re one of the lucky few, we even get paid for it. But this power to escape the world of rocks and stones is not only true of artists. It’s true of almost every human being, even the greatest dullard.

I suppose this ability to fantasise is the logical outcome of big brains that enable us to step outside the reality around us and imagine ourselves in different places and times. I don’t think this is unique to us as a species, but it is clearly far more highly developed in humans. In evolutionary terms, this has been a roaring success judging by the way we have planted our footprint firmly upon the whole planet. As individuals, though, it is a mixed blessing. Fantasy can take us to heavens of joy, refreshing us when we are tired and lifting us above the tedium of mundane reality. But it can also lead to terror, imagined hells with their terrifying shadows and demons and instruments of torture.

The greatest bonus of fantasising is the creativity it nurtures. We can step beyond the physical world around us to create imagined worlds, a cultural universe which is full of meaning and every bit as real as those rocks and stones. I’m not talking only about the glories of Art and the discoveries and inventions of science, but simpler things which enrich our everyday lives, such as diversions and games. But even this has its drawbacks: it is creative, after all, to invent and use those instruments of torture, and fantasy increases our opportunity to be cruel and inhumane. Perhaps it even enhances it because it enables us to imagine what the other person is feeling and offers us the pleasure of sensing how their pain is experienced inside. Perhaps it is an inevitable side-effect of those big brains and is the wellspring of our capacity for cruelty.

One of the obvious areas where this is true is sex. Whereas many other species have a very small window when mating occurs, and even when it does occur it is often quick and perfunctory, we are sexually receptive more or less twenty-four hours a day, and, as the cliché goes, the biggest sexual organ is the brain. I sometimes go on gay chatrooms and at first I was genuinely shocked by the prevalence of fetishes and fantasies on there: spanking and bondage, slave markets, humiliation, being the slut in group sex sessions, sex in public places, sadism and masochism, exhibitionism and voyeurism – almost the only thing that wasn’t on the menu very much was ‘normal’, everyday sex with a regular partner. Perhaps this is because only a certain type of person uses sexual chatrooms, but I doubt it; I suspect that these fantasies are common, even in the dreariest cul-de-sacs of suburbia. I also doubt whether this is peculiar to being gay and suspect a lot of straight chatrooms harbour a similar range of secret, exotic delights.

Of course this doesn’t mean that people would necessarily want to live out these fantasies in reality: the realm of fantasy is a garden protected behind a wall and this is much of its attraction. It is simultaneously dangerous and safe, like a roller-coaster ride at the fair. It also offers the frisson of transgression. To what extent this is due to societal taboos about sex and to what extent it is a natural feature of human behaviour is probably an indefinable calculus, but the childish pleasure in breaking the rules and being naughty breathes life into sexual adventures and games.

Romance and happy-ever-after is the flip side of this sexual darkness, but it is every bit as much of a fantasy. The title I chose for this essay comes from the lyrics of a Beach Boys song, Disney Girls, which captures perfectly what many would see as darker sexual urges sublimated beneath bouquets of roses and valentine cards. Personally, I have my doubts about this concept of the ‘Beast Within’ (I am currently reading an excellent book by Mary Midgley which questions this assumption), and I prefer to see these two types of fantasy – the saccharine and the tart – as co-existing rather than one being the romantic wrapping paper which serves as a respectable veneer to conceal the true animal inside.

Fantasies also take more banal forms, closer to daydreams and wish-fulfilment: winning the lottery, getting a promotion, telling our boss exactly what we think of him, and so on. These tend to be less complex because the emotions involved do not conflict with each other in the way that sexual fantasies often do, or threaten our self-image as a nice, decent, civilised human being. They are pleasant in the way that a cool beer on a hot day is pleasant. I suppose some people might argue that they deflect us from taking action since we seek refuge in them instead of confronting and changing an unfulfilling life (or really telling our boss some hard home truths). But if we imagine life without the comfort these flights of fancy offer, it seems tedious and bleak.  

This raises a key question: is our ability to fantasise a benefit overall because we can vicariously enjoy things that we wouldn’t want to do in real life, things which might threaten social harmony if they became commonplace, or is it harmful because it is a retreat from truth, a refusal to face up to reality? Do fantasies sustain us through times of unhappiness and loneliness and mental pain or do they trap us in unreal dead-ends and repetitive, often obsessive rituals of behaviour? The obvious answer is both, of course, and we must judge individually whether we have a healthy balance of reality and fantasy in our lives. It is not an easy judgement call to make.

A key downside to our ability to step outside our immediate reality is that it means we can regret the past and worry about the future. ‘Live in the moment’ is a motto often propounded by self-help gurus, but this is far easier said than done because in order to thrive in the world, or simply to survive, we need to continually reflect upon the past and make plans for the future. People often write about the benefits of meditation in helping us to find that ability to live in the moment and not inside the cage of our everyday mind, but I suspect that this release from the strictures of time and place can only ever be short-lived, and fantasy can also provide this temporary escape, although perhaps in a less healthy way.

Finally, I’d like to look at the question of whether modern life is reducing our ability to fantasise creatively, in a capitalist world which packages our fantasies so that they become standardised, homogenised and monetised. This has always been done to some extent, of course, and one of the key functions of Art is to create these fantasy worlds, whether it be a Greek tragedy, a romantic novel, or a Batman movie. But it can be argued that nowadays we often no longer make our own fantasies but rely on the so-called creative industry to sell us mass-produced dreams (movies, TV, pulp fiction, Disneyland, porn), turning us into mere passive recipients of cultural products. Watching a film requires less of our creative imagination than reading a Victorian novel; pornography guides, or perhaps even dictates, our sexual fantasies down to the slightest detail, whereas in the days of erotica we would need to build our own internal sexual scenario around a mere picture or photo; entire industries churn out romantic dreams of candlelit dinners and love everlasting with the perfect partner for life. We have outsourced our inner world.

Let’s face it, most of us love and cherish our fantasies – I know I do. They are a magic carpet that lifts us above the trivialities, disappointments and frustrations of daily life. The problem is that we end up at some level believing in them even if we know they aren’t true. I’m not talking here of an imagined stereotypical teenage girl screaming at a K-Pop concert; I’m talking about all of us, no matter how smart we think we are, or how cynical. At some level deep inside us, the fantasy world glitters and the Disney girls lure us like lorelei.

MEN’S COMPLAINTS ABOUT MODERN WOMEN

Lately I’ve been spending a lot of time watching YouTube videos focused on the current relations between men and women and I can’t help but wonder if they’re in crisis, especially in the US. The recent election there highlights this divide, with a clear victory for Harris among women and an even clearer landslide for Trump among men. Statistics show that an increasing number of heterosexual people in developed countries are choosing to live alone, and this wish to remain single applies to both women and men. None of this is at all surprising to me, having watched a cluster of videos hosted by men, often between the ages of twenty-five and forty, who seem extremely bitter and who consider males to be the oppressed gender nowadays in a culture with double standards that discriminate against them. The root cause of this, they argue, is feminism, which has upturned traditional views of male and female gender roles to the detriment of both sexes.

I’ve watched at least twenty of these male videos over the last few weeks, since the YouTube algorithms tend to flood you with almost identical videos as suggestions once you’ve watched even one video on a theme (whether this is because people really prefer to be in an echo chamber or whether YouTube merely assumes this, I have no idea). As evidence for their claims, the angry male hosts use clips from YouTube videos made by women who seem equally unhappy and bitter, angry that men are ignoring or disrespecting them or using them purely for sexual gratification and then ‘ghosting’ them. These women also feel that modern men are wary of any kind of lasting relationship or sometimes even close contact. Although I have to admit to the guilty secret that I find these YouTube videos by both men and women rather entertaining, I come away from them a little depressed that relations have soured so much and become so confrontational.

I realise this may just reflect the dysfunctional nature of much of social media in encouraging extreme positions and pushing people towards them, and when I watch couples in restaurants and shops in real life, I see few signs of this conflict, so perhaps it’s all a storm in a YouTube teacup. I also realise that I’m stepping way beyond my area of experience in writing about this issue since I’m old and know nothing about the modern dating scene. Moreover, I’m a gay man, and I accept I would feel annoyed if I read a straight person pontificating about gay culture and lifestyle. So I hope I’m not being presumptuous in raising this topic, but ultimately it affects everyone in our society, whatever their sexuality, and in my opinion needs to be discussed openly.

The complaints of the male protagonists in their videos are manifold. The first is that modern women want to have their cake and eat it. For example, they say that women want equal pay and education and career prospects, but still insist that men pay for meals in restaurants, buy them gifts and flowers, propose marriage on one knee with an expensive engagement ring in hand, and generally treat them with old-fashioned gallantry. Secondly, they state that women criticise men for being superficial since they judge women purely on their looks, but then show videos in which female hosts make it clear that they would never even consider dating a man who was too short, and especially if he is shorter than themselves. Thirdly, they believe that many women regard men as a meal ticket, both in the early stages of dating and during a marriage, and bleed the man dry if the marriage ends up in divorce. Fourthly, they state that they are afraid of being accused of harassment if they come on to a woman but then show videos in which women criticise them for being too timid to approach them, labelling such men as unmanly. Perhaps most worrying of all, these male hosts claim that they are concerned about facing charges of assault if sex takes place, especially after alcohol.

They put a lot of this down to the Princess phenomenon, where modern young women want to be treated like princesses and believe that they deserve only the crème de la crème of the men on the dating scene, leading to a situation where the vast majority of women are chasing the same top 5% of rich, handsome guys. This makes women open to exploitation by these desirable males, who use and discard them because there are plenty more fish in the sea, and there’s always another fish which is keen to be hooked. The more relaxed attitudes to sex and the greater openness about it mean that it tends to happen more quickly and more readily in a relationship than in the past, often on the first date. Many of these male YouTube hosts are unhappy about this new sexual freedom extending to women, and are especially critical of events like hen nights or all-girl vacations where casual sex might take place away from a steady male partner. Finally, older, divorced men, especially those who are active in men’s groups, rail against custody laws which they say always favour the woman and alimony laws which they feel discriminate against them as husbands even in situations where their ex-wife earns the same salary as them or more.

I’ll turn to these arguments one by one to consider how justified I feel they are. First, I generally agree that in a situation where the man and the woman are earning roughly the same amount of money, it only seems fair that they share the costs of dating, and I suspect this splitting of the bill is more common nowadays among young people, but I admit I have no evidence to support this idea. When this doesn’t happen, I suspect it is often the man who insists that he should pick up the tab because he feels his masculinity is undermined if the woman pays or they go Dutch. At the same time, I think many women enjoy, or sometimes even need, the boost of feeling special, which takes concrete form in gifts of chocolates and flowers. This may sound like an imbalance in favour of women, but we shouldn’t forget that statistics show that they still earn less than men throughout the world, often for the same work. In my opinion, a good compromise could be that a realistic budget is set to which both partners can contribute equally, but psychological and cultural factors clearly play a huge role in what happens on the ground and may prevent this.

I broadly agree with those men who say that a woman rejecting a man solely on account of his height is little different from a man rejecting a woman because her breasts aren’t large enough. However, we all have our own tastes in romantic partners, and it seems perverse to go against them deliberately; still, there is a difference between making something a factor guiding our choice and having an inflexible determination to date only our ideal man or woman. It remains true, though, that women are still much more likely to be judged by their appearance, even in the modern world, although the pressure on young men to be buff and have muscles and a six-pack is growing stronger all the time. Perhaps we should blame dating apps for much of this, since they encourage this objectification of the human being. I also think that both men and women are sometimes loath to give up traditional masculine and feminine roles and the physical stereotypes they foster. When men are not muscular and buff and do not behave in a ‘masculine’ way, the YouTube videos of some of the women show that they regard these modern, sensitive men as wusses and reject them. Despite all the social changes, at heart many of them are still looking for a ‘real man’.

As for their claim that many women are gold diggers, I have no sympathy with the male hosts. Yes, a few of the female videos are hosted by women who want to be wined and dined in luxury and are shallow and materialistic, but equally there are sugar daddies who will use their wealth to exploit women. Don’t judge everyone by the bad apples. Problems around divorce, alimony and custody are due more to shortcomings in the law. Laws always lag behind reality, and these laws need updating in a world which is very different from the traditional 1950s world of man at work earning the corn while wife stays at home and looks after the house (a reality which was often far from true – both my parents needed to work to make enough money to run a family). Surely no one wants to go back to the laws that backed up this model: a world in which women often had to stay in unhappy marriages because they could literally be on the street in the case of divorce. The law and the practice of the law still need to catch up with the social and economic realities of modern relationships, I feel, although making them fair to both partners in a more diverse world seems a Herculean task.

Regarding harassment and assault, this seems to have become a minefield, with salacious cases in the mass media of celebrities and top people getting dragged through the mud in the gutter press. We can only do our best to deal with these claims and counterclaims since so often it becomes the word of one person against that of another, which becomes a hideous problem when the accusation is one of rape. The greater readiness of women to report rape, assault and harassment is surely a positive step forward, but in a society as litigious as the US, I can understand why men have become very wary. And despite the greater transparency that now exists, I suspect that we are seeing only the tip of the iceberg and most assault remains hidden. So ultimately although I understand why men are nervous, I think it’s very unfair to put the burden for correcting this on women.

If you watched only YouTube videos and had no access to other media, it would be easy to assume that the Princess phenomenon was rife. The women in the videos that the men use as evidence against them are mostly spoilt, neurotic, wannabe divas who believe that the world owes them a living just for being gorgeous. There have always been spoilt prima donnas, just as there have always been men who treat women badly, but to what extent this exists outside of this weird, narcissistic, exhibitionist online world I have no idea. We have to ask how much of what we are talking about is a symptom of a greater selfishness in a society based on consumer capitalism where everything becomes a product to be sold or bartered. In the YouTube videos I’ve watched by both men and women, there is an underlying instrumentalism, an attitude of ‘What’s in it for me?’ and very little consideration of the other person. And if some of the women I see are little princesses, some of the men are unreconstructed chauvinists railing against feminism and the modern social and sexual world.

The effects of these changes in mores could fill a library of books, so a paragraph seems woefully inadequate. Briefly, it does seem that women have changed faster and farther than men in the sense that ladettes and girls’ nights out mimic a certain type of male behaviour whereas the alleged feminisation of men that these YouTube hosts disapprove of has been much more marginal or perhaps just less publicly visible. But for all the publicity around ladette culture, it certainly seems true that the old double standard of men being studs and women being sluts is alive and well. More generally, confusion about gender and gender roles seems more prominent nowadays as the endless arguments in the media about the trans movement show. Again, though, we must ask: how much of this is real at grass roots level and how much is a media fabrication?

I have listened to thoughtful online hosts, both women and men, who argue that the ultimate result of all the trends I’m discussing here is a glut of single, available women who want a long-term relationship but are unwilling to compromise on standards and end up feeling lonely and frustrated. This is somewhat stereotypical, but I think women have always tended to think more long-term in terms of partners and relationships and therefore do tend to set higher standards than their menfolk (whether this is due to biological imperatives, as the evolutionary psychologists believe, or the result of their social and economic vulnerability is irrelevant), so I suspect there is nothing new about this tendency for women in general to want higher-level men who will provide for a family.

Overall, I feel that some of the grievances of these male YouTube hosts have some substance to them. However, when they become a desire to go back to a past that perhaps never existed and certainly can’t be replicated, I think these men need to question and change their way of thinking. Blaming it all on what they label ‘feminism’ seems little more than a convenient scapegoat. Society is struggling to cope with massive changes in sexual and gender behaviour, and even if some of this is overstated in the media, many of these issues are real and rooted in larger social and technological changes and therefore perhaps cannot be solved, but only ameliorated.

ARE WE REALLY WHAT WE EAT?

I’m something of a sucker for reading or watching anything to do with nutrition and whether a given food is good or bad for us. I’m not alone in this, I’m sure, and the internet has done a lot to feed this obsession with the effect of food on our health. But this fascination with nutrition is not simply a contemporary trend fuelled by a life spent online, as is made clear by a book I recently read called Eat Your Heart Out by Dr. James le Fanu. This was written as long ago as 1987, but in my opinion its core questions are still relevant today. Le Fanu contends that the range of vastly different cuisines around the world proves that there is no single diet which guarantees better health and wellbeing, but that most diets are adequate as long as they contain enough calories, vitamins, minerals and essential amino acids, and that expecting more than this from a diet is misguided. There is no such thing as the healthy diet and certainly no such thing as a superfood, and searching for them is a waste of time.

The main evidence le Fanu offers for his argument is the contrast between the nutritional advice given by experts in the 1930s and the contradictory advice offered in the 1970s and 1980s which was de rigueur at the time he wrote his book. In the earlier period, there had been a focus on protein and calories and the key message was to eat more meat and dairy. In contrast, the post-war years saw a reversal of this advice as saturated fat became seen as the main cause of higher rates of cardiac arrest, strokes, diabetes, and even cancer. Red meat became a dangerous food which should be eaten sparingly and replaced when possible by less fatty meats like chicken, full-cream dairy should be jettisoned for low-fat or fat-free alternatives, salt was starting to become the bad guy it is still often seen as today, and the food pyramid emerged, with lots of basic carbs from things like pasta bulking up the low-fat diet.

Nutritional science has obviously moved on a lot since le Fanu’s time (for example, there is no mention of the microbiome in his book) and expert advice has changed in response to this. So for almost a century now, the guidelines of reputable organisations like the NHS or the American Heart Association have often fluctuated wildly, and this has not only been true of the foods which science never seems sure about (e.g. eggs and coffee) but also of the broader macro-nutrients. The worries about meat, dairy and salt of le Fanu’s days have, if anything, grown stronger and have now been joined by concerns about sugar, carbohydrates, ultra-processed food (UPF), and sometimes even fruit.

These changes in the official guidelines have created a scepticism about nutritional science among many people. As the 180-degree turn within the period 1930-80 highlighted by le Fanu shows, fashions come and go in the world of nutrition with very little consistently agreed on except for the health benefits of vegetables. When I read his book, I was struck by the confidence with which scientists and dietitians expressed their advice, and it feels as if the advice has changed but not the certainty with which it is expounded. Also, the official guidelines of the medical establishment all tend to say the same thing at any given moment in time, with little recognition of new or alternative ideas until the paradigm shifts and the herd gradually moves to its new collectively agreed position.

This scepticism has been exacerbated by the rise of internet gurus, the host of doctors and nutritionists offering advice to the general public on sites like YouTube. I have to admit that there is something addictive about these ‘experts’ and their pronouncements even though I realise that I have no way of knowing whether any of them are in fact qualified doctors. I suspect that most of them are, and not just quacks putting on white coats and wrapping stethoscopes around their necks, but even if they are genuine medical practitioners, so little of medical training involves learning about diet that they are unlikely to have much more knowledge in this field than many laypeople. These self-declared gurus are a cacophony of voices saying opposite things: for every person arguing against meat and dairy, there is a person arguing for them; for every vegan there is a carnivore; for every expert saying that carbs are the core problem there is another placing all the blame on UPFs. Even foods which are usually beyond question (e.g. avocados, olive oil, fruit) have their occasional detractors. Recently, for example, I watched a woman screech into the camera that ‘fruit is evil’ like some deranged religious believer warning us about the temptations of the Devil.

But it’s too easy to lay all of the issues surrounding research into nutrition at the door of internet quacks with their weird obsessions or to blame the media for sexing up scientific research to turn it into clickbait (scientists themselves are not above this kind of sexing up in their press releases in their search for future research funding). A quick look at the history of nutritional guidelines by official sources based on research suggests that a dose of humility on their part would not go amiss. Le Fanu’s question about whether a specific diet can really make such a huge difference to human health and wellbeing remains a valid one in my opinion, and even if it seems reasonable to assume that diet can have this effect, how can we ever confidently identify what that diet is?

We are always told that randomised controlled trials (RCTs) are the gold standard of any research into nutrition, but the problems associated with them are far from trivial. Most trials have a small sample of subjects, which becomes even smaller once the subjects are split in two so that there is a control group. They also tend to take place over a very short period of time, but it seems logical to assume that any effects of diet on our long-term health are unlikely to show after just a month or so. RCTs almost always depend on markers for health (blood pressure, cholesterol levels, blood sugar, and so on) rather than actual health outcomes. And how do we know that heightened levels of these markers after eating a certain food (for example, the sugar spike that follows eating foods with a high glycemic index) are genuinely harmful rather than just an unimportant temporary effect? After all, blood pressure will often rise to extremely high levels during vigorous exercise, but no one is suggesting that exercise is bad for us. Then there is the fact that we are talking about statistical significance which is often very small and could simply be one of the chance results that a 5% significance threshold allows. Meta-analyses are a common way of trying to see past all these issues, but it is rare to get two or more pieces of nutrition research which are exactly the same, so there is a lot of room for confounders. The problems of RCTs are considerable.

In addition, is it really useful to look at specific foods or even types of food and study them in isolation? If bread is bad because it causes a sugar spike, is this true whatever else we eat with the bread? We know, for example, that eating fats at the same time as vegetables helps with the absorption of fat-soluble vitamins, so shouldn’t we be focusing on combinations of food rather than single items? In practice, though, this sounds totally unrealistic. Also, nutritionists are realising that no two people are the same and what is good for Jack may not be good for Jill. In extreme cases, some people can die from eating a peanut while other people derive great benefits from them, so aren’t far subtler effects of specific foods also likely? Zoe, an organisation which makes some of the most convincing internet videos on nutrition, talks more and more about the need for a personal nutrition profile, but is this a realistic aim for anyone except the rich in a world where health services are coming under increasing financial pressure?

We are often warned that epidemiological studies are much inferior to RCTs, mainly because of the sharpshooter fallacy. This means that if we have a mass of data there are bound to be clusters that look meaningful but are actually random since these clusters will appear purely by chance if our data is big enough. However, the sheer size of this data presents opportunities because some of those clusters may identify genuine phenomena which can then be explored in RCTs. For example, this has led to the identification of ‘blue zones’ where people regularly live long and healthy lives – real outcomes and not markers, based on a lifetime and not a few weeks or months – and some scientists are investigating the possibility that these zones have things in common which lead to these healthy outcomes. But from what I have read, the diets in these blue zones are incredibly diverse and seem to support le Fanu’s thesis that human beings can be healthy and have long lives on completely different diets and that the search for the perfect diet is a chimera.

I’d like to turn briefly to the role of the media in all of this. From a combination of self-interest (a desire to get more readers and therefore more advertising revenue) and a lack of understanding of, or time to study, the press releases of researchers, the media massively simplify and exaggerate the conclusions of research. To this we must add our own role as the general public. Put simply, we will tend to believe the research that we want to believe, and to cherry-pick that which suits us. Vegans will focus on research suggesting that meat is deadly; carnivores will root out that which refutes this idea. I’m as guilty of this as anyone. A few years ago I found some research by a Finnish scientist which suggested that wine only had serious negative effects if we drank more than one bottle a day. I jumped on this at once, and even though I consciously realised the dishonesty in what I was doing, at some deeper level I at least half-believed this research because I wanted to.

I know I’m in danger of coming across as a smartass in this essay, someone who knows better than the scientists when I list these problems, but I don’t think for a moment that nutritionists are unaware of them: they are an integral challenge of their field which they have to negotiate all the time. My concern is that these factors added together – the need of scientists for research funding, the role of much of the media, our own bias towards what we want to believe, plus the bewildering complexity of the field – are making us neurotic about food and turning what should be one of life’s greatest pleasures into a source of fear and loathing. So even if le Fanu’s thesis turns out to be flawed, and the intuitive feeling that we are what we eat is at least partly true, I think he encourages a healthier attitude to food, which will benefit us in the long run.

HOW I WRITE MY POEMS

I write a lot of my poems – perhaps even a majority of them – when I’m out walking. Living in Gozo, I’m very lucky to have quick access to the ‘countryside’. I’ve put the word in inverted commas because Gozo is so small that I’m never far from a residential area even when I’m somewhere with hardly any buildings and almost no cars. But because Gozo is so small, I can be in the middle of this ‘countryside’ in just a few minutes from my apartment. I love these moments in nature for they provide me with solitude and silence and a place where I can think, dream and be creative.

I’d like to make clear that I never go walking with the express purpose of forcing out a poem or set out with the intention of writing one – I just go on my daily walk of at least an hour or so and sometimes my imagination is fired and the shape of a poem takes form in my mind. If not, it’s no big deal. The moment of inspiration often comes as a first line and I know almost at once whether this line will become the beginning of a poem I’ll be happy with: something just rings true about it although I haven’t the slightest idea why. By the time I’ve finished my walk, I usually have the skeleton of a first draft. Then as soon as I get home I type it on my computer; if I don’t do this, it will disappear into the ether. Then I work on it.

Other poems take shape when I wake up in the middle of the night and I find a poem is writing itself in my head. Then I know I must get up immediately and put it down on paper; otherwise it will be lost. I find these middle-of-the-night poems come quickly and fluently and I can turn around and go back to sleep confident in the knowledge that I can begin the editing process on the following day. These nocturnal verses tend to change much less than most of my other work when they are edited. This may sound a bit pretentious but sometimes I honestly feel as if these poems are written by some sort of spirit and I am merely its shaman. Perhaps this is why I tend to be more satisfied with these works, since they feel fresher to me and less contrived.

One thing I almost never do is set out with the goal of writing a poem about a specific topic: the theme emerges from the words and not the other way around. What I must do is find the heart of the poem. It’s hard to explain exactly what I mean by this: it’s certainly not an idea, nor anything directly related to the intellect. It’s often an image, or a flow of words that creates an image, and at this early stage of the work I’m not aware of what this image means. It’s largely mysterious and unconscious, a vague feeling or emotion or an empty, nebulous space with the potential to become a poem. At other times it’s like a colour, and my task is to discover and express its exact hue and saturation and brightness.

The only time I do begin with a deliberate goal is when I write ekphrastic work. I see a painting I love, or listen to a piece of music that I have liked all of my life, and I decide I want to write a poem to explore it and pay tribute to it and to the artist who created it. All the same, I can’t force it into existence. I often know the works I want to use as a foundation for a poem – for example, I always knew I wanted to write poems based on Van Gogh’s The Night Café and Nico’s Frozen Warnings – and once I’ve made this decision I send a kind of message to my subconscious mind and tell it to get to work. If I’m patient, it almost always delivers the goods, even if it takes time.

I can’t speak for other poets but personally I can’t begin to write a poem when its heart or essence still eludes me: I can only wait for it to emerge. This is very different from the process of writing a novel, at least for me. During the writing of my three novels, if I had a day when I felt uninspired and I didn’t really want to write, I could compel myself to sit and work on the next chapter and eventually the words would start to flow. I think this was because I had a general idea where the novel was heading next even though I hadn’t formally set out the plotline on a piece of paper, as I know many novelists do. And even if my words often sounded less natural and successful on the following day than the stuff I wrote when I was in the zone, they still represented an advance that I could build on. In poetry, at least for me, this is not an option. If I’m not in the mood, I’m better off going to the beach and forgetting all about it until I am.

So much for what we might call the ‘inspirational’ part of writing a poem. Next comes the hard work, which is editing and further editing. In the early days, I visit the poem every day and sometimes make changes. This editing process can unfold over several months and even when I feel that a poem is finished and ready and I’m broadly happy with it, I still like to put it away for another six months before I even consider trying to get it published.

Unlike many writers I enjoy the editing process. I’m very fussy and I brood over every comma and full stop and capital letter, let alone every word or line. I suspect most poets are like this, even the ones who pretend not to be so punctilious. Although our modern preference is for naturalness rather than artifice, the concision of most poetry means that every single word must pull its weight and an infelicitous choice can ring as crude and ugly as a cracked bell. I’m not denying the part that inspiration plays in the creation of a poem, but the later editing is every bit as vital in my opinion.

Some of my early drafts are closer to being ready than others, and any changes that I make to them are limited, restricted mainly to vocabulary. Others are tougher meat and simply don’t feel right and need a lot more chewing: my instinct tells me there is a poem inside there somewhere, but I haven’t found it yet. These sometimes require wholesale changes, the moving around of stanzas, or even a completely new structure before I can find that heart which I spoke about earlier. In a few cases I give up and surrender to the inevitable; the poem has eluded me and will be discarded and chalked up to experience.

Regarding more mundane issues, I’m not too proud to use a thesaurus – the real skill at the editing stage is choosing the ideal word and while browsing through synonyms in a thesaurus may sound rather laboured and prosaic, it’s the quickest way to bring to mind a list of possible alternatives, much better than simply sitting there and hoping that the right word will pop up magically from some mysterious corner of the mind. As for what I use to do my writing and editing, although I’m generally no fan of electronic technology and I used to struggle to write creatively on a screen, these days I find working on a computer much easier and more productive than on paper. (I still much prefer to read poems on paper, though.)

Despite writing all types of poems from haiku and sonnets to free verse, I personally find creating or editing a poem with a clear structure much easier than free verse, so the latter usually takes me much more time. This might sound overly formulaic, but simply counting up to seven, five and seven, or going through a list of words that rhyme, or following a regular metre narrows everything down very quickly. Fewer alternatives, greater focus. And as some of my other essays on here make clear, I generally welcome the discipline of a tight structure when I write (and read) poetry, even if this makes me rather unfashionable.

Once a poem feels finished, I tend to fall in love with it for a short time and it may even seem the best I’ve ever written. This is why I put it away for six months: later I’ll often find it has lost much of its early allure, and I feel a bit embarrassed that I had such great hopes for it. Then it’s not exactly back to the drawing board, but it may involve everything from a set of minor changes to major surgery. This is much truer of my free verse poems, which often seem fragmented and shapeless once I no longer have the feeling I had at the time of their creation. I think this is because at that time, I unconsciously added those thoughts and emotions and buried links as I read the poem, something the reader cannot do. Six months on, those submerged feelings and ideas have become much less present to me, which places me closer to the position of the reader. I think a second reason why I find working in metre and rhyme easier than in free verse is that they instantly announce a clear structure that allows me to quickly enter a poem, get into its mood, and start working.

I’d never pretend that my personal approach to writing poetry is in any way typical or universal: why should it be and even if it was, how could I possibly know? And ultimately trying to identify how I create a poem is largely an intellectual exercise: interesting perhaps, but of no real use when I’m carving out a first draft or editing a poem. Writing is clearly less spontaneous than art forms such as dance or music which happen in the moment and must therefore largely take place at an unconscious, bodily, muscle-memory level because the conscious mind cannot operate at that speed. But even in writing, and especially in poetry, there remains an element of mystery which no amount of grey-cell analysis can ever begin to explain.

THE SEXUALISATION OF SOCIETY

We don’t need to go back far in time to find a period when there was almost no open discussion about sex and very little public acknowledgement of its existence. We are not talking about the Victorian age with its alleged covering of the legs of furniture, or the era of the Hollywood Production Code stating that a man and a woman could not be shown on a bed in a movie unless one of them had at least one foot on the floor, or the post-war causes célèbres such as the Kinsey Report or the trial of Lady Chatterley’s Lover. Even in the supposedly swinging sixties, when I was growing up as a teenager, the shroud of secrecy still largely rested over everything except conventional sexuality at its most abstract and anodyne, and discussion even of that was buried under layers of obfuscating euphemisms. The post-war period, however, was the time when the ice began to melt and the silence started to crack, and a completely different public discourse about sexuality was ushered in.

Within the last fifty years things have swung full circle and it is hard to avoid sex in contemporary public space. Nudity is now commonplace in mainstream culture and simulated sex is possible in mainstream films. Formerly taboo topics such as female desire and homosexuality are now openly portrayed. So-called dating apps, which are in reality often little more than hook-up apps, mean there are many more opportunities to enjoy a sexual life, both openly and secretly. The internet has made porn an everyday phenomenon and research suggests that most young people in countries like the US and the UK have seen it by the time they are sixteen. It is increasingly normal for people, especially young people, to post pictures of themselves naked online or even videos showing themselves having sex. Clothing for girls aged as young as ten is frequently whorish and slutty. In America there are disturbing beauty pageants where pre-pubescent girls wear outfits which seem designed to feed the fantasies of paedophiles. Sex is ubiquitous and it is loud.

The societal attitudes to sex which existed when I was a child are now seen as prim and puritanical, and sometimes even interpreted as a form of mental disorder or psychological illness. Young people seem to be far more sexually experienced at an early age than in my generation, and those who are not experienced appear desperate to lose their virginity, which is seen as an embarrassment rather than something to be proud of. I suspect that some of this is the bravado that has always existed, usually among boys, to boast about conquests that never actually happened. And while it is true that there is a counter-movement against this sexualisation of public discourse, usually from Christian groups, it is small and rather fringe. Purity rings have become a fashion among some young Christians to show the world that the wearer is a virgin and is saving herself for Mr Right (I don’t think young guys do this very often and say they are waiting for Miss Right). A few people (generally not motivated by religion, but influenced by gay, lesbian and trans identity politics) openly declare that they are asexual and have no interest in sex. But these reactions against modern sexualisation feel like straws in the wind.

In the 1960s, the idea that sex could be a route to happiness, self-fulfilment and even meaning in life began to become widespread. The gay movement of which I was a very small part in the following decade was one of the most important causes of this, as a minority which had been oppressed and stigmatised for most of history began to identify themselves through their sexuality. The fashion for a half-digested borrowing of Eastern concepts such as tantrism was another major factor. Psychoanalytical theories about the harmfulness of self-repression, especially sexual self-repression, in a profession dominated by Freudian thinking were also incredibly powerful. Among the intelligentsia there was a revived interest in texts like the Kama Sutra or the works of the Marquis de Sade, an idealisation (often based on a misunderstanding) of the dominant sexual values in classical Greece and Rome and in ‘primitive’ cultures, and appreciation of writers like Swinburne and artists like Beardsley. Eliminating repression, or in the more vernacular phrase, ‘letting it all hang out’, became a rallying cry that urged us all to throw off the shackles and embrace our sexuality. Personally, I am very sceptical of the claim that we can find our ‘true selves’ through our sexual behaviour and identity – I feel this places too much weight on just one aspect of our human nature – but it has undoubtedly been influential and has rippled out from the counter-culture to the general public, as a quick trawl through the internet today proves.

For me, one of the worst aspects of this current sexualisation is its objectification of the human body, turning it into a thing that can be commodified. This is most obvious in porn, but sneakier forms of the reduction of the body to an object to be used to make money exist in areas such as advertising and marketing. The argument that porn sets us free from Victorian repression and frigidity is a very convenient one to justify the commoditisation of the human body, and particularly the female body. Porn stars are essentially meat and are not chosen for their cute eyes. So a generation of young people who are growing up watching porn have unrealistic expectations of beauty and physique, feeling inadequate as men if their penis is not as large as the specimens online or as women if their breasts aren’t as pert or as ample. But singling out porn for opprobrium is a little too easy for we are talking about a spectrum here and porn is on the margin. The fashion, diet and cosmetic industries, for example, have similar effects, ones which may be even more harmful in some ways because porn is clearly fantasy and most people realise that. This public objectification of the body beautiful feeds into everyday life, and every young man is now expected to be muscled and buff and every young woman to be glamorous. Meanwhile being average (which logically is what most of us must be) is not acceptable any more. These ideas then trickle down into day-to-day social media and lead to phenomena such as fat shaming and bullying of people who fail to come up to the mark. We are increasingly defined purely by our bodies.

I realise that so far I’ve probably sounded like a stuffy old prude, so I’d like to turn now to some of the benefits of the modern, more open public discourse surrounding sex, and in my opinion they are significant. The first huge plus is increased public availability of knowledge rather than myth, whispers and hearsay. It is hard to overstate the sheer ignorance there was about sex when I was a teenager, as is clear from the time when I came out as homosexual to my mother; she looked at me, puzzled, and asked me what a homosexual was. The number of unwanted teenage pregnancies was sky high if my local area was typical, largely hidden by the fact that there was often severe pressure on the couple to marry; the alternative, of course, was the horror of a backstreet abortion. There was almost no public information surrounding STDs (one of the very few positives to arise out of the AIDS crisis was a need to talk openly and honestly about sexual health). I realise that some opponents of this new openness will argue that it encourages more sexual activity among the young, and therefore a higher rate of teenage pregnancies and STDs, but I find the idea that ignorance is in some way more desirable than knowledge unacceptable, and most serious scientific research suggests that no such increase in teenage pregnancies actually occurs. Advocates of celibacy tend to imagine that sex wouldn’t happen if no one spoke about it, but I can assure them that it did and does happen, even under a blanket of silence, and often at a young age when those advocates would prefer it not to happen. For me, an acceptance of reality will always be an advance on wishful thinking.

A second benefit of the new public openness around sexuality and sexual behaviour is that it makes people realise that, whatever their sexual desires are, they will share them with somebody else. The main beneficiaries of this have been gays and lesbians, who suddenly realised they were far from alone in their desires, and were consequently able to unite into a political force for change. But openness has helped lots of people who have what were once considered to be shameful and very rare sexual perversions – cross-dressing, S&M, foot fetishism, an almost endless list of variations on the missionary position, not to mention something as common and uncontroversial these days as oral sex – to realise that these are far from rare. It has turned the word ‘normal’ from a morally loaded term into what it should be – an expression of what actually happens (there are plenty of censorious alternatives if you wish to make a moral judgement). There is less shame nowadays about what are common and normal practices undertaken by consenting adults, and surely that has to be a good thing.

Having looked at what I consider to be clear advantages and disadvantages of our current public openness about sex, I will now turn to some arguments frequently put forward by proponents of traditional silence and discretion in order to evaluate them. The first is that openness places a huge pressure on young people to begin their sexual life too early, before they are emotionally ready for it and able to make wise decisions. This is a substantive criticism in my opinion. For example, when I chat online nowadays I am very aware of young gay men, often barely past the age of consent, wishing to become sexually active as soon as possible or even boasting that they are. The language they use when they do this reflects the language they’ve learned from porn or seen in chatrooms – sub, slut, fag, bitch, top, dom, and so on – and I fear that they may get involved in real-life situations that they are not equipped to handle. I see no reason to think that the same pressure to have sex at an early age is not also true of young heterosexuals, especially girls, who now have to tread a much more careful line between whore and virgin. For all this, though, in terms of what people actually do, I have to question whether much of what seems a real trend is being hugely magnified by greater visibility, and whether any actual changes in sexual behaviour are being over-estimated.

Another pertinent argument made by traditionalists against the sexualisation of society is that getting into a habit of one-night stands and casual hook-ups, now easily available at a click on a dating app, makes people unable or unwilling to eventually settle down in a stable relationship. It’s hard to know whether this idea has any substance in reality; statistics show that people are becoming less keen to marry or are doing so at a later age, but there are so many factors involved in this decision that it’s almost impossible to disentangle them. Even so, the majority of people do still opt for marriage or a settled relationship at some point. It can also be argued that sexual compatibility is a crucial component of a good marriage, and that learning about one’s sexual preferences before marriage is better than discovering a disastrous difference in desires after the wedding.

Finally, I would like to look at the feminist argument that sexualisation has been bad in general for women. Whereas women had been somewhat protected by the concept of gallantry, they are now expected to have sex on demand, condemned as sluts if they do, and called frigid if they don’t. More than ever, they struggle to negotiate the line between whore and virgin. In heterosexual society at least, they are still more objectified than men (though in gay culture, men are massively objectified as well). Again it is difficult to draw confident conclusions about all of this, since we can never know how much that was not gallant happened behind closed doors in the past, or the incidence of unreported rape and assault. My own intuition is that little has changed in terms of behaviour and that the greater openness about sex has at least made women more willing to report sexual harassment and assault, but I’m far from confident about this intuition.

So often when I write these weekly essays my final paragraph ends up being somewhat wishy-washy and inconclusive, and the same is true here; I definitely lean towards greater public openness and welcome, if somewhat unenthusiastically, the present environment of sexualisation compared to what existed before it. The greater personal freedom and choice, the decrease in ignorance and prejudice, and the implications for sexual health are what swing my opinion. I remain uneasy, however, about what often feels like cheap exploitation of human sexuality, done not to make people happy or to benefit the human race but simply to make money, and wish we could find a better balance between the dishonesty of silence and the tawdriness of much of the current public discourse surrounding sexuality.

GAY MARRIAGE

When I was growing up as a young gay man in the 1970s, those of us who were active in the gay movement tended to belong to one of two wings. The first took the view that gay people should assimilate to the sexual norms of society and focus on working towards goals such as equal legal rights; the second believed that the gay movement should be revolutionary and challenge the very roots of what queer theory now calls heteronormativity. In Britain, the main group that took the first approach was a non-confrontational organisation called the Campaign for Homosexual Equality (CHE), with a strategy based on appealing to the traditional liberal values of tolerance of minorities. The other strand, influenced by the Gay Liberation Front (GLF) in America, was much less formal and less centralised, and used tactics such as demonstrations and street theatre as its principal way of trying to achieve its aims. In my local area, for instance, we called ourselves the Gay Activist Alliance (yes, we took ourselves ridiculously seriously in those days).

There was often no love lost between these two groups. Fifty years on, I think it is fair to say that the assimilationist strand has triumphed over the revolutionary, and in retrospect, although I feel that both strands of the movement were essential to change, I have to admit that the circumspect approach of the assimilationist wing possibly achieved more than the confrontational politics of GLF-inspired groups (one thing the GLF approach did very well, though, was to make homosexuality and lesbianism much more visible, if to a lesser extent in the case of the latter). A perfect example of a change brought about largely through a traditional liberal campaign appealing to tolerance and respect and freedom for the individual is the introduction of gay marriage in many western democracies. However, the radicalism of GLF has not totally disappeared and there are still gay people, especially of my generation, who reject gay marriage as an adaptation to a society which remains homophobic at its roots and which encourages an unhealthy attitude towards human relationships in general, gay or straight. In this essay, I’ll look briefly at this difference in thinking, and particularly the arguments for and against gay marriage.

The first major benefit of gay marriage is that it gives same-sex partners the legal protections that heterosexual couples automatically enjoy. One of the cruellest aspects of the days when long-term gay relationships had to be hidden away was the extra pain experienced by one partner when the other fell seriously sick or, in the worst case, passed away. With no recognised legal status, they were often denied access to hospitals and had no right to see and be with the person they loved. They often faced problems with inheritance too, as greedy families who had earlier rejected their gay relative decided they were not about to reject the chance to inherit their goods and property, all at a time when the surviving partner was grieving and often had no emotional support whatsoever. This access to a partner in extremis is clearly the greatest emotional benefit that a marriage contract brings, but there are other practical benefits, such as the chance to enjoy the tax breaks of heterosexual married couples. (It can be argued that it is now partners who choose not to get married, gay or straight, who are discriminated against on this level.) It is almost impossible to over-estimate the importance of these rights now available to gay spouses.

On a cultural level, gay marriage will almost certainly further the normalisation of homosexuality in our society. When I was growing up as a teenager there was almost no public expression of homosexual love and desire, and young gay people like me felt very isolated and alone, and I’m not exaggerating when I say that I thought there were maybe a few thousand people like me in the whole of England. I know that sounds hard to believe now, but the veil of silence over the topic was almost total, with the only public acknowledgement that gay people existed taking the form of depressing documentaries on TV showing the silhouette of a gay man talking about how lonely and miserable his life was, and occasional sex scandals splashed on the front pages of the tabloid press. In contrast, in the contemporary western world, there are famous singers, actors, politicians, and public figures who are openly gay, pictures of them with their partners online, and advertisements featuring gay couples targeted at the pink pound. As always, though, there is still far less visibility of lesbianism.

Another huge boon for many gay male couples is that they can adopt and bring up children, while lesbian parents can now be publicly open about the nature of their relationship. Personally, I have to be honest and say that never having to bear the huge responsibility of being a parent has been one of the biggest pluses of being a gay man, but I’ve known so many gay men who desperately wanted to be able to raise a family (often because they grew up in a big, close-knit family themselves), and in order to enjoy this kind of family life they chose to live a secret life of fleeting sexual encounters under the cover of a bogus and often deceitful marriage. Gay marriage must surely be slowly bringing about a greater openness and honesty; it is hard to think of anything better at integrating gays and lesbians into society than meeting other parents at the school gate.

Gay marriage is also contributing to the removal of the invisibility of homosexual life (much less, though, in the case of lesbianism). For men, it has done much to dispel the myth linking homosexuality and paedophilia. It has also been one of the factors giving them the chance to live an openly sexual life without fear of being attacked in cruising grounds or arrested in public toilets, as frequently happened when homosexual behaviour was illegal. Lesbians have much farther to go in terms of public visibility, and the silence which once protected them from some of the horrors meted out to men, such as imprisonment or blackmail, is probably now limiting their public acknowledgement. Sadly, however, most gay people of both sexes still prefer to remain in the closet, most often because of negative responses from parents and family.

We’ll now turn to the reasons why some people feel that gay marriage is a negative development overall. These critics hold that traditional marriage is a form of possession of the partner, especially the woman, and in the 1970s the radical wings of both the gay movement and the feminist movement argued that marriage was intrinsically sexist, a way of transferring ownership of the woman from the father to the husband. While contemporary lesbian activists are often more measured in their rejection of marriage, they still criticise it as an institution which accepts, promotes and relies on sexism, while many gay men refuse to take a proprietorial approach to partnerships and consciously decide to have open relationships. They see the institution of marriage and monogamy as a restriction of human potential, forcing people into stunted relational structures which limit their personal growth and inculcate both a naive, romantic view of long-term relationships and an unrealistic, insincere view of human sexual desire.

Traditional marriage of all kinds, including gay marriage, can also seem out of date in a world where increased longevity means that people who marry might be expected to live together for fifty or sixty years. There are many people who do this, of course, but there is always the danger that the two people who tie the knot subsequently move in different directions and drift apart. Moreover, society in general has become much more fluid and mobile as fewer people live their whole life in one area, while women are often as keen to have a career as men and are less willing to take the traditional home-builder role, both of which can put a strain on a relationship and can trigger a break-up. This has led to what has been called serial monogamy, where people have a succession of long-term but not lifelong monogamous relationships, a cycle of marrying, divorcing and then settling down with a new partner. Also, the idea of open marriage (or polyamory, to use its more fashionable contemporary title) has spread beyond gay culture and become popular as a way of trying to avoid the secrets and lies of a monogamy that is often more a public show than a reality.

Some male gay activists contend that gay marriage is a threat to gay culture. An important part of male gay culture has always been its willingness to embrace transgression, especially with regard to sex. Sexuality lies at the core of male gay life – cruising, bathhouses, group sex, and so on – and some gay men see gay marriage as a kind of suburbanisation of gay sexuality, making it as dull and uninspiring as official heterosexual marriage, and think that a homogenisation of sexual desire is taking place. They suggest that a strange mix of stereotypes and standards of male gay life is emerging. One is highly sexualised – young, handsome, promiscuous, hedonistic – while the other is respectable, decent, conventional and essentially dishonest. Many of the old GLF generation argue that gay marriage accepts a society which is built on lies, where people cheat on their partners while pretending to be faithful, either through having affairs or, in the case of men, by visiting prostitutes. However, other people who worry about the increasing sexualisation of our culture, a culture in which girls of ten or twelve are sold clothes which look like the stuff that a street whore might wear in a movie, see gay marriage as a welcome bulwark against it.

The fact that the benefits of gay marriage have taken up more space in this essay than the risks suggests that I am broadly supportive of it, which indeed is my overall opinion. 1970s gay activism was thrilling and the arguments it spawned were interesting, provocative and serious. But when I look at society now and the place of gay men and lesbians in it, at least in much of the west, I can’t help but feel it represents progress. And hopefully it will protect us from any backlash from people who love to hate, which is a constant danger and one which, in my opinion, will sadly never totally go away.

ART & INFORMATION OVERLOAD

In some ways, we are the luckiest generation ever. We have masses of information literally at our fingertips, an embarrassment of riches which would have been unimaginable to past generations. For much of the past, the study of painting, for example, was a privilege largely available only to the rich, who had the leisure time and the money to travel to art galleries around the world. In the field of literature, meanwhile, I can now instantly look up a word I am thinking of using to check whether I am using it correctly, or to be sure that I’m spelling it right, or to find a list of possible synonyms in order to avoid repetition or to get the mot juste. Similarly, if I’m working on an essay, I can do a quick Google search and include information which makes me sound far more erudite than I am. No other generation has ever enjoyed this immediate and effortless access to so much knowledge. However, this is a mixed blessing, and this essay will focus on some of the problems arising from this abundance of goodies, specifically in the fields of art and literature, before going on to offer some thoughts about its more general negative effects.

With regard to painting, books showing the great works of the past have existed for many years, but the reproductions tended to be black-and-white, poor in quality, and limited in number because of the cost of illustration, particularly in colour. This meant that good art books were often very expensive, priced well out of the range of the poor. They also tended to focus on a handful of ‘Old Masters’ – Titian, Leonardo, Michelangelo, Rembrandt – or a select list of the ‘modern greats’ – Picasso, Matisse, Monet, Van Gogh – but it was far more difficult to find out about their obscure contemporaries who were largely neglected and forgotten. In contrast, we can now see online reproductions of the works of both artists who are household names and those who are little known, who lived in the distant past or are our contemporaries, and who come from every corner of the globe.

In theory, this should be a wonderful asset for anyone hoping to become a working artist. Rather than going to their local gallery and copying its very limited offering of paintings in order to hone their skills, they can surf the internet and access the websites of the finest galleries in the world and see a bewildering range of different styles and approaches from across the centuries and from a range of cultures. There is no longer any need for them to study under a master in his studio, as often happened in the past, being required to finish off some of the minor details in the master’s work while slowly learning and building up their own skills. No such lowly position is necessary for modern wannabes. They can gain inspiration from pictures on the net and, as the cliché goes, the world is their oyster.

However, is this abundance always such a boon? Having more models to learn from as a step in their artistic development means more choices to make and there is the danger that the neophyte flits from one style to another, never really settling on one, and ends up an artistic dilettante who never develops a truly individual style. The potential artist also cannot watch the master at work and follow how he creates his piece or uses his brush or mixes his palette: all he or she sees is the finished product detached from any context outside the canvas. And while it may help the isolated artist living in the back of beyond to have a virtual community, it is doubtful if this is ever as stimulating or as supportive as daily (nightly?) discussions in real cafes and bars.

The modern lover of art will also see most paintings on screens these days. This may not seem radically different from the previous generation’s reliance on illustrations in books, but I suspect that the internet makes us more promiscuous. Books on art written by experts do some of the structuring of ideas for us because they tend to focus on one artist, or a group of artists who worked together, or one period and place, which creates a kind of fundamental ordering of the data. (Admittedly, this is not always an unalloyed blessing because it can blunt some of our instinctive reaction to works by placing them in pre-existing categories.) In contrast, on the internet we are free to click to our heart’s content, flicking from French landscape painting to Japanese ukiyo-e to the works of Frida Kahlo, and I fear that we can end up sated by this flood of images, unable to make sense of this glut, uncertain about how to evaluate it, unsure which works we might learn from or wish to emulate if we are artists, or how to react to radically different works if we are not.

Viewing art mostly in books and on the internet also has stylistic repercussions. Screens (and glossy reproductions in books) flatten the art work and remove much of its material presence. Looking at a painting on a page in a book or on a computer screen is a very different experience from seeing the same painting in a gallery: its two-dimensionality is emphasised, any texture of the brushstrokes is often lost, and, perhaps most importantly of all, the reader gets no sense of the work’s size or scope. Sculpture in particular suffers horribly since we cannot walk around it and share its space, but have to be satisfied with merely imagining ourselves doing so; and, of course, we can only see it from one angle. Also, the material of which the sculpture is made is a crucial part of the work, and in books and on the screen this tends to become reduced to its visual properties seen from a distance.

One final concern I would like to mention (although I could have listed several more) is that the ubiquity of famous images can suck out almost all of their power. Who can see any of Van Gogh’s Sunflowers with a fresh eye now, having seen them reproduced on tea towels and coffee cups, or Munch’s Scream, when it is photocopied above a list of exam dates and has gained an almost comical edge? Unless we dig deep, Constable is forever associated with biscuit tins and chocolate boxes and jigsaw puzzles, making him seem conventional or even twee, although in some ways his landscapes were a departure from both the French and English traditions. This reflects the superficiality which is always a threat with a mass of information, all of which we cannot possibly absorb: we learn the cliché only, what is most obvious. And all we can end up claiming is, to quote from a song popularised by Earl Hines in the 1950s: ‘I know a little bit about a lot of things’.

In many ways, the changes wrought by modern technology have not been as radical for the wannabe fiction writer or poet, probably because novels and anthologies have always been more freely available and are more or less the same regardless of the medium we use to read them (although personally I feel that my poems subtly change when I read them on-screen compared to when I see them on a page in a book). Unlike a painting or sculpture, there is little of Benjamin’s ‘aura’ in a particular iteration of a piece of literature because its impact is generally unaffected by the medium in which it appears (although the difference between a poem read aloud in public and one read silently in a private study is obviously very marked).

One key difference between art and literature is that the online world has done little to dislodge dealers and galleries from their role as the ultimate gatekeepers in the art world, whereas publishing has changed enormously over the first quarter of this century, with options such as self-publishing and online webzines democratising the world of writing to some extent (the same has been true of music). Anyone can put their work on the internet and hope that it goes viral (the sad reality is that it almost never does, of course), and try to bypass completely the traditional routes, although I don’t want to exaggerate the importance of this as a road to success – there are literally millions of books out there which sell almost no copies at all (I know this all too well, as a writer of some of those books). And for every Arctic Monkeys, there are thousands of bands languishing in obscurity.  

I think one contemporary trend driven mainly by modern technology which is not helpful to writers is that genre has become all-important. Whereas we tend to stereotype artists as individuals, in fiction the focus is on genres, and it is often writers who pin these labels on their own work: go to the online sites for groups of people who hope to become professional authors and the discussion is dominated by genre. It is true that painters are sometimes corralled into tight categories or placed within a movement or -ism, but I think the process is more relentless with fiction writers, who have a greater number of slots into which they can be fitted: thriller, romance, sci-fi, fantasy, detective, literary, gothic, and so on. In my opinion, these rigid categories encourage wannabe authors to place their writing in a sort of box, limit their imagination, and focus too much on the selling of their work rather than the work itself. I am especially concerned about the category of ‘literary fiction’, which encourages the idea that this is somehow superior to other work, which can be relegated to the level of ‘genre’, something commercial and inferior. (This is nonsense, as a quick read of Chandler shows, or indeed of some of the pretentious trash which calls itself literary fiction.)

Another problem writers face because of modern information overload is that the internet tempts all of us to read rapidly and impatiently. I’m certainly guilty of this: I read more restlessly online, always thinking of the next article or the next poem rather than giving time and thought to that which is in front of me. I find it hard not to believe that the sheer quantity of stuff available is leading to a reduction in quality. I’m not just thinking of grammatical errors or lexical carelessness such as blurring the famous distinction between ‘disinterested’ and ‘uninterested’ which so agitates language mavens, but the general slapdash nature of so much online writing. I recognise that journalistic writing has always had this trade-off between speed and quality, but it seems that the latter is now often totally sacrificed for the sake of the former. But I can’t blame writers for this: why should they bother to write carefully, to take time to choose exactly the right word or to use language with sensitivity, if the reader’s approach is that of someone gulping down junk food in McDonalds? I honestly believe that we are losing some of our sensitivity to language, especially written language, in a modern world dominated by images, and our writing is becoming increasingly coarsened as a result.

This is where I broaden out my argument to claim that the internet is coarsening our culture in general, and not only on a linguistic level. I know I am sounding like an old curmudgeon wrapping myself in nostalgia about an idealised past, but I’m not denying the huge positives of the virtual world and the wonderful opportunities it offers. I simply feel that the level of public discourse has declined, with cultish conflicts exacerbated by too much time spent in echo chambers, in a world where emoticons are a cheap and instant solution to the problem of trying to say exactly what we mean or feel. And that knowing so much about the world at a superficial level is not really making us more informed and is certainly not making us happier: as surveys show again and again, levels of happiness have dwindled in the west over the past fifty years. In my opinion, the decline in the standards of art and literature over this period is symptomatic of a larger malaise. With twenty-four hour news and an avalanche of random information, we know too much about a world we cannot change or even influence and this is making us simultaneously cynical and depressed.

Ultimately, however, we cannot unlearn modern technology any more than we can unlearn splitting the atom, so we have to live with the consequences of a virtual world that bombards us with information 24/7. I only hope that we will adapt to this new reality and that it becomes a normal, but not an overwhelming, part of everyday life. Then perhaps we can make the most of what it offers while avoiding its potential to coarsen not only Art, but what it means to be human.

SOME QUESTIONS ABOUT MEMES

I have problems understanding memes. I have no problem if ‘meme’ is just a fancy, sciency-sounding way of saying ‘idea’: then it’s fashionable, but superfluous, jargon. On the rare occasions when attempts are made to tie down the definition of a meme, though, they seem to run all the way from its being little more than a cultural unit that spreads through cultural processes to something that is a ‘real’ entity operating physically within a material universe. Certainly people like Dawkins or Dennett at times seem to understand the term, and want it to be understood, in this more ambitious way, which is logically consistent with their underlying metaphysics of physicalism, but which seems to me to create a different set of logical difficulties.

In my own attempt at a working definition, memes are cultural ideas that spread from one brain to another through a kind of viral process, but this process is more than merely a metaphor and operates in some unknown way at the level of the material. Physical viruses, however, transmit physically. There is a point of contact at which a material object – a virus – enters another material object – the nose or mouth or broken skin of a human being. I struggle to understand how this is possible for an idea, which has no physical reality. I recognise that there are media of transmission – sound waves in the case of speech or markings of black ink on white paper in the case of writing – but how can non-material ideas somehow leave body A and travel on one of these magic carpets to enter body B?

It might be argued (although I am sceptical) that every idea has a physical correlate in the brain – neural or electrical or chemical activity, or a mixture of these – which in some way builds that idea from matter, but this still does not explain how this physical correlate could continue to exist once outside the brain nor how it could be transported through space. In Dawkins’ view, everything is ultimately matter. Mind is perhaps an epiphenomenon which evolved because it helps genes to replicate, but it cannot exist outside of a material environment (i.e. a brain and body). This is presumably the reason, for example, why Dawkins believes that telepathy is an impossibility. So how can a meme suddenly become paranormal and break these rules?

One possible argument is that, due to natural selection over the course of millennia, creatures of the same species developed a propensity to respond to signals from each other in the way that a peahen, for instance, responds to the display of a peacock without the need for any direct tactile contact to happen between them, but through sight or sound alone. This communication might then be explained as a kind of semiotic messaging similar to that which takes place in the bee dance, and exchanging information through language could be argued to be simply an immensely more sophisticated version of this semiotic signalling. A huge leap, perhaps, but we are told that a few cells which were slightly more sensitive to light eventually evolved into the eye, and that given the millions of years in which natural selection has to work, this is possible. One problem with this idea in the case of human language, though, is that it did not evolve over untold millions of years. In evolutionary terms, it’s a recent phenomenon. And, if I understand him correctly, Dawkins is not a fan of punctuated equilibrium.

A better explanation may lie in the human ability for mimesis. If I watch you do something and then copy what you do, I don’t need any abstract or cultural concepts in my mind: I just need to repeat what I have seen you do, something which children in particular are very good at. Then I have ‘learned’ the new behaviour without any need for an intellectual exchange of ideas. This may certainly help to explain the ‘softer’ version of memetics, where it is essentially a form of mimicry, as in the case of a slogan or a snatch of song going viral. More complex cases of exchange of ideas through mimesis could be aided by simple signals like pointing at an object, although the receiver must understand, of course, that the signal refers to the object in the distance and not the pointing finger: something which I read that chimps in captivity can do but, interestingly, not those in the wild. We also know that mimesis occurs in several other species of mammals and birds. I remain unsure, though, that it satisfactorily explains ‘harder’ versions of memetics, since there is a huge difference between pointing to where a deer is hiding in the trees and using language to explain, for instance, quantum entanglement, and we have to make the assumption that such a leap is possible, again in a relatively short period of evolutionary time.

I mention quantum entanglement for a reason. (I choose the word ‘mention’ quite deliberately here since I don’t want to pretend that I have any real understanding of what it is.) As someone way out of his depth, I merely want to tentatively suggest that this phenomenon might be helpful in explaining the transmission of memes. If particles can be correlated across vast expanses of space with no possibility of physical interaction of any kind between them (although physicists stress that entanglement cannot actually be used to transmit information), why couldn’t something similar happen at a more local level between two minds? However, once this is accepted as a possibility, Dawkins would need to also accept the possibility of lots of other phenomena which he considers impossible, telepathy being an obvious example.

My essential problem is not with the meme as such but with the combination of the concept of the meme with a metaphysics of physicalism. If we accept dualism – matter and mind as two distinct substances which co-exist in some inexplicable way – or a monism which declares that the ultimate reality is mind and matter is a substance somehow created by mind, there is no problem accepting the reality of memes. Physicalism, however, needs to explain the transfer of immaterial ideas from mind to mind. The fact that it clearly happens all the time makes us take it for granted, as if it needs no explanation, and I feel physicalists take advantage of this familiarity to camouflage assumptions that they do not want examined and questioned. But in my opinion that doesn’t make the questions disappear.

This all reminds me of the physicalist response to another phenomenon that calls materialism into question: the placebo effect. Physicalists cannot deny this exists since it clearly does, so they accept that it is real without feeling any necessity to explain how it can take place in a purely material universe, how something that has no material reality – the mind – can influence matter (the brain and body). Tellingly, although physicalists usually avoid the word ‘mind’ whenever possible, preferring to use ‘brain’, when the placebo effect is described, it is almost always theorised as a way that the mind, not the brain, influences the body. It is a belief that does the trick – my belief that the person in a white coat is a doctor and therefore I will benefit from the medicine given to me – but how can a belief affect the physical organism, since a belief must surely be immaterial and not in any way ‘real’ (and said belief can also be mistaken and therefore not correspond with reality)? Does the placebo effect happen in and through the brain, as generally supposed? But what then is the link between an immaterial mind and the brain with its chemical and electrical activity? But if we remove mind from the equation, where is a belief situated? It seems to me that much is being assumed here rather than explained or explored.

To trained scientists and philosophers, I am probably coming across as very naive, asking a basic question – how can immaterial things exist in a universe which is solely material – which they feel does not need to be asked because it seems so obvious and commonsensical that they do. But either these things which seem immaterial must be in some way ultimately material, or the physicalist edifice has problems it needs to address. I can’t shake off the feeling that Dawkins and Dennett want to have their cake and eat it when they reduce mind to a substrate of matter and argue for a universe which is strictly material while using words and concepts that their overarching theory rejects as impossible. I accept that an entity of a different quality can emerge from a base belonging to another category – for example, mathematics emerging from the separation of physical objects into discrete entities, or culture more generally emerging from the material reality of human bodies and brains – but why do we accept this so readily, as if it needs no explanation? Common sense and our daily experience are the reasons, perhaps, but elsewhere science has so often proved that common sense and daily experience can lead to misguided assumptions (the sun going round the earth being an obvious one), so why not here?

In short, as a concept utilised to describe the transmission of cultural entities, I think the meme is an extremely useful piece of shorthand, especially in the fields of the social sciences. But once it becomes more than a metaphorical tool, I have doubts about its utility and difficulties understanding how it is even possible once it is combined with a physicalist metaphysics.

FOOLS RUSH IN

… where angels fear to tread. Well, I’m certainly no angel, but I probably am a fool for wading into the controversy surrounding J. K. Rowling and Imane Khelif. The ‘trans issue’ has become completely polarised since it now functions as a kind of marker of where one stands in the broader culture wars that are raging at the moment. The wisest thing to do is probably to stand on the sidelines and not get involved, but that seems rather cowardly when the culture wars are having such a disastrous effect on society at large.

I might as well be clear upfront and say that I found what Rowling said about Khelif to be reprehensible. Rowling, of course, has every right to publicly argue that the trans movement is putting women’s rights at risk, to state that MTF transsexuals should not compete in women’s sport, to insist that they should not be allowed into women-only spaces, and to contend that the trans movement is a kind of Trojan horse which is threatening the hard-earned advances of feminism. These are claims which someone else can support or dispute. But to describe Khelif in a photograph as having ‘the smirk of a male … enjoying the distress of a woman he’s just punched in the head’ was unacceptable on so many levels.

First, as far as I am able to find out, there is no proof at all that Khelif is, or ever was, male. She was assigned as female when she was born, she was raised as a girl, she identifies as a woman, and her birth certificate and passport state that she is female. The chance that she is an MTF transsexual is infinitesimally small since this is not allowed in her country of birth and she would probably be either in jail or perhaps even dead if she were.

It is possible that she has a Y chromosome, but there seems to be no clear proof of this. The IBA, the boxing association which banned her from competing in Russia, has stated on occasion that this was the reason for the ban but, to the best of my knowledge, has never produced any evidence to support this claim. Indeed, on other occasions it has said that the decision was made because of the hormone levels in her body, specifically those of testosterone. But this would not make her a man any more than my having high androgen levels would make me a woman: hormone levels exist on a spectrum and there will be differences within one sex as well as between the sexes. It is also worth pointing out that the IBA is hardly an organisation with a faultless reputation: the IOC stripped it of its oversight role in the world of boxing because of accusations of corruption.

And even if Khelif did turn out to have a Y chromosome, this would not excuse Rowling’s personal attack on her. Whatever the reality, Khelif is totally innocent in this matter: she has clearly lived all of her life as a woman and believes herself to be a woman. It is very sad that what was probably the happiest moment in her life – the moment she had worked so hard to achieve – has been tainted by this controversy and her achievements brought into question. Rowling should remember all the times that her manuscript was rejected by publishing companies, and the feeling of pure joy that she felt when it was finally accepted or when she first held a copy of her book in her hand, and recognise how her hurtful words are trashing a similar moment for Khelif.

Play the ball, not the person. Rowling could raise all of her concerns without attacking Khelif personally; she could keep the argument on a theoretical level, but seems more interested in throwing petrol on the fire than in genuinely discussing the difficult issues involved. In addition, as far as I know, despite moral panics in the tabloids and on sites like X, almost no one is seriously suggesting that a man should be able to put on a wig and a pair of falsies and then go into a woman’s toilet or a rape crisis centre. It seems to me that Rowling is attacking a straw man (or a straw transsexual).

I don’t deny that there are difficult issues here that have emerged because of modern sex-change technology. How do we decide who can enter a sporting competition? I don’t know enough about biology to know whether having a Y chromosome should be the sole determining factor when assigning sex or just one factor among many. If it should, I suppose one possible way of making the decision is to declare that anyone entering an important sporting contest must take a DNA test and will be disqualified if they turn out to have a Y chromosome. But there are also arguments against this. First is the dangerous precedent it sets of using DNA as a way of singling out and taking action against people who have not committed any kind of offence. Second, it feeds discrimination and prejudice against people who are not the norm. Third, it may have serious psychological repercussions for athletes who identify as female and then have to face the trauma of realising that they are biologically male.

So how do we successfully balance the right of transsexuals to live happy, secure lives free from hatred and prejudice with the right of women to be safe in public spaces? The question of which toilets to use might sound a rather trivial one, but it raises a crucial point: if a woman is potentially endangered if a man in a dress is allowed to use a women’s toilet, a man in a dress is also endangered if forced to go to the gents. It’s very hard to find a compromise which safeguards both groups. I have to admit I have no idea how to square this circle but vilifying one of the groups does not seem helpful or appropriate.

Many of the disputes currently happening remind me of the 1970s and the splits which appeared in the feminist movement at the time. Many women of colour, women from poor backgrounds, and lesbians argued that the feminist movement was a movement for nice, white, middle-class women only, whose main aim was to get the same professional privileges that white, middle-class men enjoy. This split within the movement was a negative development in the sense that it splintered the movement into small groupings who were often in conflict with each other, much to the delight and advantage of people (mainly men) who wanted to retain the status quo. On the other hand, it had the positive effect of broadening the feminist movement beyond its predominantly middle-class origins and bringing a wider range of women under its umbrella. It made clear that a freedom which minimises or ignores discrimination against minorities within its own group is a partial freedom at best.

I really wish Rowling would apologise for her comments about Khelif rather than dig in her heels and make grandiloquent statements about being willing to go to jail for the sake of women. I’m not suggesting that she should retract her arguments – she should make them as forcefully as she wishes as long as they are arguments and not personal attacks – but I wish she would simply say that she regrets the way she expressed them towards another human being who genuinely identifies as a woman and who did nothing but win a gold medal. With her public platform and her fame, Rowling could do so much to take the sting out of these arguments and lessen the culture wars but she seems determined to stir them up and make them worse.

I admit that I find it impossible to be neutral in this discussion because I am too coloured by my own experiences as a young gay man in the 1970s who was part of a nascent gay rights movement. At the time there was deliberate conflation by our opponents of homosexuality and paedophilia, just as there is now conflation of the trans movement and male aggression against women. I understand Rowling’s fears that the gains which women have made over the last fifty years will be watered down or even lost, for I have similar fears about the advances made by gay men and lesbians. I understand her passion about this. I just feel that this passion would be better expressed in a positive way and in a different direction.

Trans rights and women’s rights do not have to be a zero-sum game. The two groups should unite and fight together against the real enemy, which is all discrimination based on prejudice and hatred.

SWEET DREAMS

In one of my poems, Painting the Sky, I describe a dream that I had when I was a teenager, in which I sat at the horizon and painted pictures on the sky, which then moved upwards and became constellations shining far above me. I had highly visual dreams like that quite often then. In another dream which has stayed with me for the whole of my life, I died in an earthquake in Mexico City. Even today I would feel very nervous if I travelled there.

Nowadays my dreams are very humdrum in comparison. I get quite a few anxiety dreams, but they never involve anything as remotely exuberant or colourful as painting the sky or as terrifying as being swallowed up by the ground. Last night’s was typical. I was trying to catch a train but I couldn’t find my ticket and I was rummaging around in my pocket while a ticket collector eyed me suspiciously and wouldn’t let me pass through the gate. I remember a little bit of colour in the dream – the train was bright red and streamlined – but other than that the dream felt totally monotone.  

I searched online and apparently there is strong evidence that we are less likely to remember our dreams as we age, and some support for the idea that they lose a lot of their vividness. Despite being in my dotage, I personally remember quite a few of my dreams, at least for a few moments, due to waking up needing to go to the bathroom during the night (isn’t old age wonderful?). But while the dreams I have nowadays may leave me with an uneasy feeling at times, they never feel meaningful in the way that the two dreams I described at the start of this essay did. None of my current dreams stay with me for more than a few minutes, and they certainly won’t spook me fifty years from now even if I do live to be a hundred and twenty (eat your heart out, Elon).

Our attitude to dreams is similar to the one we have regarding children and pets: our own are fascinating while other people’s are tedious. But as a general psychic phenomenon, they intrigue me as much as ever. In a world dominated by physicalist beliefs, we no longer tend to see them as having any intrinsic meaning; people who believe that matter is the only reality will either simply dismiss the tales of Jacob’s ladder and the Pharaoh’s cows or offer rationalist reasons to explain them. Increasingly, dreams are not even accorded psychological significance: Freud’s dream analysis has been confidently placed in the pseudo-science bin along with palmistry and phrenology. The sense of mystery that we often feel when we remember a vivid dream has been replaced by various attempts at scientific explication: brain software cleaning up and ordering its files or transferring that day’s fresh inputs from short-term to long-term memory, or our conscious mind’s creation of stories in an attempt to make sense of electrical impulses that happened in the brain during sleep. Meanwhile, unsurprisingly, those one-trick ponies, the evolutionary psychologists, claim that we dream because it helps us to survive and pass on our genes.

What almost no one believes any more is that they are harbingers of the future. I have to accept that there is no strong evidence for this predictive power of dreams. As sceptics will point out, millions of people have millions of dreams every night, so it would be very strange indeed if none of them ever came true. On the other hand, these dreams foretelling the future generally feel significant to people when they have them. Many scientists would argue that this is irrelevant, but I find this a little insulting: if something has subjective meaning to somebody, then we should accept this subjective response as a psychological reality rather than simply dismiss it as a delusion. But whatever our opinion on this, it’s hard to see how we could set up an experiment to test whether dreams could have this clairvoyant potential: problems of veracity and trusting the recall of the dreamer; issues regarding what exactly constitutes a ‘hit’; other issues about how much time should elapse before a ‘hit’ becomes a ‘miss’; what degree of similarity is required for something to be classified as predictive; how to calculate the astronomical probabilities involved, and so on. The consensus among most scientists seems to be that it doesn’t happen because it can’t happen, which is at least consistent with a physicalist outlook.
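The sceptics’ numbers argument can be made concrete with a quick back-of-the-envelope calculation. This is only an illustrative sketch: the figures (ten million remembered dreams in a night, a one-in-a-million chance that any single dream coincidentally matches a later event) are assumptions of mine for the sake of the arithmetic, not data from any study.

```python
# Sketch: how likely is at least one coincidental 'prophetic' dream,
# given many dreamers and a tiny per-dream chance of a match?

def prob_at_least_one_match(p_single: float, n_dreams: int) -> float:
    """Probability that at least one of n independent dreams 'comes true'.

    Complement rule: P(at least one hit) = 1 - P(no hits at all).
    """
    return 1 - (1 - p_single) ** n_dreams

# Assumed numbers: 10 million remembered dreams, each with a
# 1-in-1,000,000 chance of coincidentally matching a later event.
p = prob_at_least_one_match(1e-6, 10_000_000)
print(f"{p:.6f}")  # ≈ 0.999955 – a chance 'hit' somewhere is all but certain
```

On these (invented) numbers, a seemingly prophetic dream is virtually guaranteed to happen to somebody every night purely by chance, which is exactly the sceptics’ point; the dreamer to whom it happens, of course, still experiences it as uncannily significant.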

But I feel uncomfortable about reducing dreams to mere function, which seems part of a wider trend to filter out the mystery from life. The resulting feeling is often termed ‘disenchantment’: a sense that all the magic is being squeezed out of our existence. But can we live without dreams, both nocturnal and diurnal? Regarding the former, science accepts that dreams are essential, and all the evidence shows that if our opportunity to dream is removed, we eventually become mentally unstable and begin to break down and hallucinate. Regarding the latter, I think few things express the need for these better than a couplet from Ruby Tuesday by the Rolling Stones: ‘Lose your dreams and you will lose your mind. Ain’t life unkind.’ (Words and music are attributed to some combination of Richards and Jones, although the exact details are disputed.) I know we are speaking of a different kind of dream in this case – something closer to a wish or desire – but it is interesting that the same word is used for both phenomena, and this suggests that there may be some kind of underlying relationship between the two mental states.

Even in the scientific community, there is some acceptance that mysterious things can happen in the course of dreaming, such as Kekulé’s famous dream in which he visualised the ring structure of benzene as a snake eating its own tail. The rather vague and catch-all concept of the ‘unconscious’ or ‘subconscious’ is then trotted out as an explanation, sometimes even among confirmed physicalists. Artists of all kinds, of course, are likely to turn actively to dreams as a source of inspiration (and to other states in which the power of the logical mind is at least partly attenuated, such as those induced by mind-altering drugs). This is an area about which we know very little, and it’s hard to make any confident statements.

Then there is the phenomenon of lucid dreaming, in which the mind becomes aware that it is dreaming. It seems that many researchers in the field accept the reality of lucid dreaming, and some experiments seem to confirm that communication can take place between a waking person and a dreamer. Much more controversial is the idea that two lucid dreamers can communicate with each other when they are both asleep and dreaming. Sadly, I have been unable to track down a book I read many years ago in which two scientists claimed to have communicated with each other during lucid dreaming. They had proved this to their own satisfaction by agreeing on a number during the dream, which they then independently wrote down on awakening, before making any contact with each other to check if their numbers matched (this was a stretch of digits, not a single number, and according to the scientists, they did match). This would be earth-shattering if true, because it would suggest that the mind does indeed travel to some kind of ‘astral plane’ when asleep and that communication can take place without any physical medium such as voice or gesture. Such research is always open, of course, to the charge of being fraudulent, since we have to trust the report of the people involved, so I think it would be far too much of a stretch to say that it constituted any kind of ‘proof’.

Another strange experience that I often had as a teenager but no longer have was suddenly knowing that I had previously dreamed the waking moment that I was going through. I say ‘knowing’ because that was how these moments felt; there was an absolute certainty that this was the case, even though when I look back rationally at the experience I realise that there are lots of reasons why this psychological certainty could be mistaken and many alternative explanations for the experience. I guess that it is related to déjà vu, the difference being that in the latter we have a feeling that the event has happened before in our waking life rather than in a dream. I’m not up to date on the latest thinking about déjà vu, but down the years I have read various explanations put forward by scientists, and I generally found them unconvincing because they depended on concepts about the brain’s experience of time that could not be verified or claims about neural activity that could be neither proven nor falsified. I don’t know if neurology has moved on now and feels it can make stronger claims based on the activation of specific areas of the brain during a scan.

I suspect that we are a long way from finally solving the mystery of dreaming. For all the theories based on metaphors from computing and digital technology, even if these theories are broadly correct, there is also a relationship between dreaming and creativity, so the idea that dreaming is simply the equivalent of sorting out our files and making sure they go into the right folders seems too reductive. One thing does seem certain, though: the mind, even the portion that we separate out and label the conscious mind, is highly active during sleep, and especially during REM sleep. As a poet who retains a fondness for enchantment, I like that idea.

GOOFING OFF

As a pensioner, I have a lot of free time. I need to teach online to top up my state pension, but sometimes there still seem to be so many hours in a day stretching shapelessly ahead. Despite being a lazy person, I feel a pressure to find something to do to fill those hours, to make them productive. Most of us have a sense of guilt when we goof off (this is one Americanism I really love because it captures perfectly the idle mindlessness which is the ideal of this activity). Once I have finished my teaching, I fill my time writing poems and maintaining this website and the inevitable struggles with WordPress that this entails, but this is not enough to keep me occupied until bedtime. So what else can I do?

One absolute no-no is cleaning my apartment. This only happens in extremis, when my fridge has begun to look as if it may be harbouring E. coli, or the landlord is due to collect the rent. Alcohol is a good distraction for people like me who avoid overly strenuous physical activity – all those trips to the fridge for a refill, all that heavy lifting of the wineglass – but it has obvious perils as a long-term solution. Filling those empty hours is a genuine quandary. Fortunately for us lesser individuals, the people who write self-help books feel they have the answer: permanently bettering ourselves as human beings. Only the stupid and the indolent, they tell us, ever stand still.

But not all activities, it seems, are equally meritorious. Doing things like learning a language or deepening our understanding of statistical analysis is apparently much more worthy than hanging around with friends in Starbucks or watching Netflix, but is this idea justified? Surely it comes down to personal choice and preference. For instance, I spend a lot of my free time doing laudable things like reading books because I’m the kind of person who enjoys this more than socialising and going to parties. But do we really need this hierarchy of activities? Aren’t the two ways of spending time just different? The self-help gurus might counter that my activities are more useful and progressive because they may help me to advance in life, whatever that means. But at my age, who needs to advance? And anyway, I very much doubt this is true. Judged as a route to personal progress, a lot of free-time activity is wasted. For example, I spend several hours each week working on new stuff to put on this website, but almost no one comes here and reads it. I haven’t progressed an inch.

The self-help gurus reflect a culture in which we are becoming obsessed with moving forward, both societally and personally. The biggest sin of all is standing still. In contrast, in the working-class England in which I grew up, most people were happy with a much simpler life: food on the table and a week’s holiday in Blackpool once a year to see the illuminations. For a man, life meant a few nights down the pub, football every Saturday at 3pm, perhaps a touch of gardening, some kind of hobby to get away from the wife: the sort of life gently satirised by Ray Davies in his song, Autumn Almanac. For a woman, there wasn’t much free time once she had finished looking after the kids and cleaning the house and doing all the chores, but a trip to the hairdressers for a perm or a night at the bingo were enough to keep her spirits up. For both, there wasn’t this constant pressure to improve themselves, nor the idea that only losers were content with their lot.

Is the modern world in which there is this diktat to improve oneself really an advance? In many ways, I suppose so. Young people of my generation often rejected the rather staid culture that I’ve stereotyped rather unfairly here; we felt that life had to be more than just working to survive, and we upbraided our parents for accepting this way of seeing the world. But the daily life that has emerged since then has huge drawbacks. In modern consumer capitalism people are units to be used and then discarded once they are burned out, to be replaced by the next unit of human labour. This obviously has a long history: Victorian mills and factories segue into the anti-human stopwatches of Taylorism, and then very neatly into a modern world where we are expected to work loads of unpaid overtime, pay for our own training, enjoy almost no job security, have to accept zero-hours contracts where we take on the risk which, in capitalist theory at least, is borne by the employer, and so on. Nor is this restricted to countries which are nominally capitalist: modern China has the concept of 9-9-6, which means working from 9am to 9pm on six days of the week. The Victorian factory owner would be delighted at how history has turned out.

Even if we accept that this has all resulted in greater productivity, has it led to greater happiness? I very much doubt it. We certainly have more toys, but are we any better off when it comes to the real necessities, or simple pleasures, of life? The family in which I grew up is an example of what we have lost. We were a working-class family and yet we lived in a rented three-bedroomed house with a large garden. My parents weren’t in debt. We had food on the table and that annual holiday in Blackpool. Nowadays someone at a similar level in the social hierarchy to the one we occupied is much more likely to be squeezed into a tiny apartment, burdened by debt, and forced to do two or three jobs merely to stay afloat. Something has gone wrong, and it is the poor, and increasingly even the middle class, who are paying the cost of our so-called progress. All the goodies go to the one per cent. Life has become a never-ending run on a hamster wheel, and the vast majority of us can’t afford to get off.

It won’t surprise you after reading this that the slow movement appeals enormously to me: although I may be psychologically incapable of just closing my eyes and doing nothing, I think there is a wisdom in doing so. In lots of ways I now think my generation were wrong, and we have foolishly given up so much for what has effectively been a con, or at best a mirage. For the ordinary person struggling to make ends meet, goofing off simply isn’t possible anymore.

WRITING HAIKU

I included five haiku in Digging for Water, the collection of poetry which I self-published in June of this year: my first experience of writing this poetic form. Recently, I have entered two of these in a haiku writing competition where all of the entries are visible to everyone else. Being able to read the entries of the other contestants is fascinating, as is trying to analyse why some of them (in my opinion, of course) are more successful than others.

Haiku are a strange mix of sensuous physicality and abstract emptiness, and in this sense they remind me of a lot of Japanese and Chinese nature painting. What I mean by this is they paint a picture of a moment in time and objects within nature (leaves, birds, lakes) in a clear and sensual manner, but they do this in a subtly different way from how the Impressionists, for example, captured their moments in nature on their canvases. There is an emptiness at the heart of a haiku, but this is not emptiness as generally understood in the west: an absence of being or perhaps even an existential angst. It is more like a potentiality, a background which exists in order that things might unfold within it, a yin that enables the yang. It is as essential to what we read or see in the poem or painting as the physical objects and images which populate it.

There is research using eye trackers which shows that people from the East tend to spend more time scanning the background of a photograph than their western counterparts and consequently less time on the objects in the foreground. In a lot of western art, nature is something we observe from outside rather than something which surrounds us and to which we belong. Thus, in many representational paintings, the background is often a kind of theatre or stage in which the foregrounded events happen, a tendency that goes back at least as far as early Christian paintings in which the stories of the Bible were the important content and the natural background was often little more than an appropriate setting.

In contrast, I feel that the background has a much more active role in a lot of eastern nature painting. Human beings become part of nature, not detached observers of it. I recognise the danger of falling into a kind of orientalism here and drawing on stereotypes of the mysterious and inscrutable East, but I don’t think I’m imagining this pregnant emptiness in Japanese and Chinese nature paintings. Basically, it’s why I like them so much and feel that I never quite grasp their essence (although paradoxically I find great pleasure in this sense of a mystery that I can’t fully comprehend).

English versions of haiku, however, even if they conjure up attractive pictures of birds or blossom or snow, rarely manage to suggest this hint of the transcendental glimpsed through the prism of the material. When features from one culture are borrowed or appropriated by another, or hybridised with it (choose whichever verb you wish according to your attitude), there is always a lot of unconscious slippage. At a formal level, the 5-7-5 of a haiku is alien to the English language in a different way from, say, that of the alexandrine, but the outcome is generally the same: we struggle to achieve the effects we want when we use either of these forms and are almost bound to distort the original, just as Buddhism or Daoism were distorted when they became popular in the counterculture during the 1960s. In this essay, therefore, I am commenting only on haiku written in English: I have no knowledge of Japanese language or culture, and even if I did I doubt that it could outweigh my linguistic and cultural baggage.

I felt that one of the main problems with many of the haiku in the competition was a struggle with rhythm, a difficulty which I imagine was rooted in the unfamiliar structure of 5-7-5. The odd number of syllables in each line is probably harder to manipulate smoothly in English, with its strong bias towards iambs and therefore an even number of beats. Perhaps a greater use of anapests and dactyls would have helped build greater fluidity, for a lot of the lines had a sort of clunkiness that some poets display when they are trying to squeeze their work into rhyming or metrical patterns: too much focus on following the template and not enough on the overall flow of the words. So in the haiku submitted, short stretches of language that would have worked perfectly well in a piece of free verse or even a more traditional English metered structure sometimes landed like the thud of heavy boots, especially in the middle seven-syllable line.

Another problem was achieving the required simplicity and sense of ease. In a good haiku, there is no straining whatsoever for effect. (To be fair, this may well be true of all of the very best art and poetry - e.g. Blake - but that’s an idea for a different essay.) Perhaps the strain and lack of naturalness in some of the haiku also arose from their authors feeling a pressure to include things from nature which were also objects of obvious beauty: raindrops on roses and whiskers on kittens. But describing these kinds of things well requires a fineness of language which we seemed to find hard to achieve within the unfamiliar 5-7-5 structure.

One common weakness I found in many of the haiku in the competition (and I saw the same problem in my own) was a tendency for each of the three lines to stay resolutely separate from the others, like three snapshots laid out sequentially which didn’t develop into any kind of progression or narrative but also failed to intertwine thematically into a satisfying, coherent whole. The better examples managed at least to turn three strands into two, in what I understand is a common feature of many traditional haiku, but creating a link between the two strands that was subtle enough and yet somehow felt instinctively ‘right’ proved beyond most of us. Only the best of the haiku, in my opinion, avoided a sense of fragmentation.

Achieving this inner coherence is far from easy. It is not as simple as just running two of the three lines together or reducing the number of images in the poem. Nor is it solely a matter of staying on a fixed path and avoiding detours, because the detour seems to be an essential part of a haiku; without it the form becomes too literal and lacks that glancing indirectness which hints at the noumenal beyond the phenomenal surface reality. Online I have read people talk of the ‘satori’ moment in a haiku, a point at which it lights a flame within the mind: Bashō’s famous plop. To demand this of every haiku is surely too exacting, since very few poems can be expected to help us reach zen enlightenment, but in my favourite examples from the competition there was always a point at which the world of nature and the world of the human mind met, generally in the third line: a moment when the two qualities I highlighted at the beginning of this essay – the sensuous and the abstract – came together and the poem became whole.

Returning to Bashō’s plop, several of us made deliberate efforts to incorporate such a moment into our haiku, often with an onomatopoeic word, to mark the moment when the angle of the poem shifts, but I feel that on the whole we failed. There was something just a little too calculated about how we did it. An external mark of this problem was our uncertain use of punctuation to incorporate the satori moment into the poem. A dash was the most popular solution, but in such a small poem a dash at the end of a line can feel forced and obtrusive, at least to me. More generally, punctuation is crucial in an English haiku: with so few syllables, the addition or the absence of a comma or a capital letter, or the choice of a dash rather than a colon, makes a vital difference. The sheer concision of a haiku is obviously a very strict discipline and we must learn how to use punctuation almost as a free extra syllable.

I felt we also caused ourselves problems by an overly literal interpretation of the requirement that a haiku should be placed within one of the four seasons. Thus, the words ‘winter’, ‘spring’, ‘summer’ and ‘autumn’ were commonplace, but this was often telling rather than showing (to use a phrase which is more often used of fiction than poetry). The successful entries didn’t do this: they approached the subject more elliptically, and this lack of directness seems to me the very essence of the haiku and Japanese art in general. There is something simultaneously both very literal and yet also numinous in eastern paintings whereas the West tends to separate these two qualities into opposing categories (realism or naturalism as the obverse of symbolism or expressionism or abstraction).

I am aware that this is all sounding rather negative. But we western would-be haiku writers were attempting something truly difficult and shouldn’t beat ourselves up for not reaching the heights we hoped for: flip it around and imagine asking a Japanese poet writing in Japanese to use iambic pentameter while also capturing the spirit of western verse. The result is almost bound to be something that is not an authentic variation on the original, but an uneasy and clumsy hybrid. Fusion food in my opinion often fails; I suspect the same is generally true of fusion poetry. That doesn’t mean we shouldn’t try, though: so much that is good in art comes when cultures borrow, clash and hybridise.

ART & AI

This week a picture came up on my Facebook feed which called itself a painting in the style of Edward Hopper. My immediate reaction was that it looked like Hopper but didn’t feel in the slightest like Hopper. The woman in the railway carriage was much too calm and composed; there was none of that unease and loneliness of a classic Hopper, nor any of its sense of inner emptiness. I suspect many other posters felt the same as me because several of them openly wondered if this was an AI-generated work.

There used to be a saying that ‘the camera doesn’t lie’. This wasn’t always true, of course, even in the days before photoshopping – careful cropping, for example, could create a variety of different realities, or at least interpretations, from the same photographic negative. But in general people believed a photo: it offered a technological version of ‘I saw it with my own eyes’, and it was therefore assumed to present the truth.

Few of us are so trusting now. We know that not only photographs, but even whole videos, can be concocted, and that if we see a film of Kamala Harris beating a cute puppy with a stick, we may well be looking at a fake. Unable any longer to believe our own eyes, we have come to rely on experts who declare whether a picture is real or has been tampered with, or if a video is fabricated. The problem with this is that we no longer have responsibility for what we see and what we therefore believe to be true; we need to pass that responsibility on to a third party. And then we are trapped in an infinite regress: how do we know that these experts are real and can be trusted? Who, or what, monitors the monitors?

In the world of art, this has updated and exacerbated the problems of detecting a forgery. At the moment, perhaps, like some of the people on my Facebook post, we may feel able to sense the difference between an AI-generated Hopper and the real thing. But as AI develops, who is to say that it will not be able to reproduce not merely the surface of a work of art, but also its essence? And while works by historical figures like Hopper may be relatively easy to verify or dismiss because we can demand very solid evidence to show that a newly discovered painting is genuinely his, living artists and writers are unlikely to be vetted so rigorously.

AI certainly complicates the issues of copyright and intellectual property. In a world where anyone can task AI with writing a poem or painting a picture, how can we be confident that a new work is created by a human being? It can be argued that AI is currently only churning out the kind of dross that poor genre writers have been producing for decades – boy meets girl, boy loses girl, boy and girl get back together and walk off into the sunset. I’m sure AI will be able to crank out cheesy romantic novels and scripts for action movies and good-versus-evil fantasies. But how will human artists earn a living in this world? And how do we prevent the further decline of art and literature into banality, cash-cow sequels created by dull middle-managers in marketing departments who have the aesthetic sensitivity of a friendly dictator and the creative spark of a concrete mixer?

In painting, at least, there is the requirement for a physical object to be created. But eventually AI will be attached to robot arms or a 3D printer, I’m sure, and even this role for the human being will be lost (although the physical nature of the art work would allow for more opportunities to check its veracity through processes such as material appraisal of the canvas). Literature, especially fiction, is under much greater immediate threat. On my Facebook feed, I regularly see adverts inviting people to ‘write’ and publish fiction by using AI (and I don’t mean using it as a guide or a source of inspiration, but literally giving AI a set of instructions, pushing a computer key, and out pops a novel). Why pay writers for the script of the latest dreary sequel if the sausages can be squeezed out just as efficiently by AI?

There has always been a delicate balance between reality and fantasy in Art. Part of our enjoyment of literature or theatre or film comes from entering a world which we know is make-believe. This becomes explicit at times: Calderón’s Life is a Dream, The Matrix, the twists and turns of Borges, the trompe l’oeil of Op Art, the work of Magritte, the impossibilities of Escher. But our pleasure always rests on a conviction that there is a real world to go back to, a real world we are taking a break from. Like the big dipper at the fair, we enjoy its otherness because we feel safe. We are not going to wake up to find ourselves transformed into a giant cockroach.

But this sense that AI is undermining the solid foundations of our lives is something that is affecting more than Art. There have always been individuals who questioned the reality of reality - Zhuangzi and his butterfly dreaming it was a man - but they have tended to be marginalised on the hermetic fringes of society. Their ideas had little traction with the vast majority of people who treated these ideas - if they thought about them at all - as conceits, the odd twitterings of mystics and madmen. But soon we may all be living in a world where the ground under our feet is unstable and we have very little confidence about what is real and what isn’t. Already so much of the world we inhabit is virtual (walk down the street and see the pedestrians with their noses pushed up against their mobile phones), and we often spend more than half of our waking lives staring into a screen and communicating with people we have never met in person. Slowly we are losing the security of feeling that there is a normal reality that we can return to when virtuality is over. It is never over. This radical uncertainty is our new reality.

I haven’t even touched on the political threat of AI in a world in which life is increasingly mediated. Our leaders and politicians are now figures we see on TV or on the internet, and this is our only way of judging them. Again this makes us dependent on experts who can separate the wheat of the truthful from the chaff of the liars, but who pays these experts and who controls them? Will they really be disinterested? Will we be able to trust this media class to act as honest go-betweens? Or will he who pays the piper call the tune? I recognise that this gatekeeping role has always existed, but the gatekeepers were at least visible and, in theory at least, answerable to us. In a world of the huge online conglomerates that shape our daily lives, I am not confident that this is any longer true.

And what will this do to us psychologically? Eliot’s dictum that human beings cannot bear too much reality is a favourite quote among pessimists like me. But perhaps it is equally true that human beings cannot bear too much unreality. Will we descend into madness in a world where we can never be certain that anything is real? Are Deleuze and Guattari, for all their impenetrable language and pseudo-scientific pretensions, or earlier counter-cultural figures like Laing and Szasz, correct when they argue that we are already on our way to a schizoid world where we have lost all our moorings and only the madman is sane? Personally I find it increasingly difficult not to agree with them that our modern, mediated world is fundamentally sick.

Perhaps civilisation will not end in the bang of an asteroid hitting the earth or a thousand nuclear warheads, but in the whimper of a billion AI mutations which undermine and eventually shatter our sense of reality. Whom the gods would destroy, they first make mad.

CENTRIFUGAL & CENTRIPETAL POETRY

There’s nothing I enjoy more than penning a diatribe about what I see as the failings of much of the contemporary poetry I come across online. But I’ll do my best not to rant today. I’ll try to explain in a more measured way why I find it so difficult to warm to a lot of this work. My key problem is that I often feel marooned as I make my way through a poem. There seems nothing for me to hang on to. I often feel as if I’m reading a list or browsing through a catalogue.

I would distinguish here between two basic approaches to poetry. The first is looser, less strictly disciplined, and moves outwards. It’s expansive and open. The second folds in on itself like a flower. It’s concise, and the structure is much tighter and clearly signalled through techniques such as rhyme and metre. Borrowing terms I have found online (although they seem far from common), I’ll call the first centrifugal and the second centripetal. From my Google trawl, it seems the former has been used by scholars discussing biblical poetry while the latter has been used to describe the work of Yeats. In this essay, Whitman can serve as an example of the centrifugal and Frost of the centripetal.

Centrifugal poetry tends to eschew rhyme and consistent metre, and in terms of content it accumulates a succession of images as the poem progresses. Its most common structural device is anaphora at the beginning of lines. Centripetal poetry often uses rhyme and metre as an anchor and is generally based around one or two key metaphors or symbols which are woven into the poem throughout the verse.

Thus, in I Hear America Singing, Whitman begins each line with a person - mechanic, carpenter, mason, boatman, shoemaker, wood-cutter, mother - and strengthens this by repetition of the word ‘singing’, plus other repeated phrases such as ‘as he + verb’ and ‘or + prepositional phrase’. If at times in other essays I have unfairly accused contemporary poets of creating a ‘catalog’, this word (along with ‘list’) has been used by analysts of Whitman’s work. But his listing in the poem discussed here has an internal logic. It isn’t just a random collection of disparate images.

In The Road Not Taken, in contrast, Frost uses a consistent ABAAB rhyming pattern in each of the four stanzas. The metre is complex and not restricted to iambs, but there is an underlying regularity largely missing from the Whitman. Thematically, the poem is exceedingly tight, hardly straying from the central symbol of a fork in a road as a metaphor for a major choice in someone’s life. In that sense, and only in that sense, it is very simple.

I am not arguing that the Frost is better than the Whitman - which of the poems you prefer, if either, is clearly a matter of personal choice. My taste is for the former, but I accept that the Whitman has a clear structure and is not merely prose masquerading as poetry. My gripe is that I often read attacks on poets who do Frost badly - whose rhymes clunk, whose rhythm becomes tedious and predictable, and who twist grammar unnaturally in order to fit the metre - but I almost never read any criticism of people who do Whitman badly. And there are lots of them.

My argument is that bad Whitman is just as frequent and damaging as bad Frost, all the more so since few poets seem to write in anything but free verse nowadays, but this poor quality is almost never called out. For example, instead of Whitman’s variations on a theme (people in their different social roles) in I Hear America Singing, many contemporary poems pile up random images one after another with no obvious connection to link them thematically. It is like a sales catalogue where different items for sale - a pair of shoes, a filing cabinet, a tin of soup, a mattress - are placed randomly together on the same page.

Another common problem is verbosity: a lot of contemporary poets don’t know when it’s time to end the poem (I’d personally argue that Whitman was also guilty of this at times). So the verses multiply, and the images pile up, but the poem as a whole goes nowhere: it is an accumulation rather than a coherent statement. The same ideas could be expressed in a fraction of the number of lines.

It is unfair, though, when people who call themselves traditionalists accuse Whitman of destroying what they consider to be traditional poetry. The roots of poetry in the west are oral and based on narrative, so Whitman’s verse is closer in many ways to these origins. It’s also worth stressing that Whitman was deliberately and consciously borrowing from the use of language in the Bible, a language which underpins much of English-speaking poetry. In many ways, it is the careful verse of Frost or Yeats which is a movement away from the roots of western poetry, even if the written tradition they write within also began a long time ago. The kind of tight-knit poem that Frost or Yeats wrote could not have existed before mass printing other than for a privileged elite because the nuances of language that only register when we get the chance to read a poem again and again aren’t possible in an oral tradition. We hear it and move on, swayed by the delivery of the speaker as much as by the poet’s exact choice of words.

If the roots of poetry indeed lie in narrative, this may be one of the reasons why contemporary free verse poems are so often based on a personal vignette or a journey, using narrative as a structural device in the absence of technical features such as rhyme. Obviously these narratives are much shorter and tend to have more internal psychological content than the epics of the classical period or the sagas, but a story or a journey of some sort is used to provide the structural framework that the poem would otherwise lack. In the absence of rhyme and metre, the story takes us by the hand and guides us through the poem, at least if it is written well.

Spoken aloud - and let’s not forget that Whitman believed that poetry should be spoken aloud rather than read on the page - Whitman’s style of poetry can be enormously powerful, and perhaps in an age where film and the internet have made video more central to our daily lives than the written word, it is not surprising that variations on his style remain so popular in the contemporary scene. In America, certainly, his free-ranging model has largely become the norm.

I just wish the same high standards were demanded of centrifugal poems that are demanded of centripetal ones. There is room for both in the world.

NICHE

I need a niche. Every artist needs a niche these days.

Take Mary Oliver. Almost every day one of her poems pops up in my Facebook feed. I’ve no idea why. She’s much too upbeat for my taste.

Yet up she pops, with her spiritual cheer. Best of luck to her, I say. She does it well. She’s found her niche.

So what is mine? What else is in the catalogue?

Mmm, stiff competition. A person of colour. A woman. Lesbian/gay male. In the process of transitioning. Disabled. Hey, whoa there, a bit too depressing. Where’s the last verse redemption?

My problem is, I’m a bit of everything. Sometimes I rhyme, sometimes I don’t. Sometimes I argue, sometimes emote. I know it’s out of fashion, but I even do a bit of philosophical musing. Ooh la la! Where’s my USP?

How about being old? Could that be my USP? Coffin-dodger poetry, the Baby Boomer bard. Old is just boring, though. Smells of urine. Who wants to read a poem and be reminded that they’ll be old as well one day?

Quick, click on a link, get me outa here, guys!

Ah, an echo chamber. Phew, that was close. Hey, that’s just what I was thinking, bud. Great minds, uh? Can I take this opportunity to tell all you guys that you’re so unbelievably cool?

Maybe my niche could be having no niche. Kinda clever in a way, don’t you think? Rather French. But some bastard probably got there first. Like half the poets in history.

A niche of having no niche. Hmm. Best of luck with that. Let’s just call it brave.

WHAT IS TRADITIONAL POETRY?

This has been a good week for me. Three of my poems have been published on an online site called Mediterranean Poetry (https://www.odyssey.pm/). I’ve also been fairly creative.

Other than that, I’ve spent a lot of time researching possible markets which might be receptive to my brand of poem, so I’ve been reading some of the works in online poetry magazines, and, although I wasn’t expecting a lot of ABAB, I was shocked to find very few poems that used a ‘traditional’ rhyme pattern. Nor did a large majority of poems have any kind of regular metre. Only one of the magazines categorically ruled out these things (although several stated a preference for free verse and what they termed ‘experimentation’), but the reality was that ‘traditional’ poetic features were largely absent.

Although many sites stated that they were looking for original, innovative, and groundbreaking poems, I found many of the poems I read depressingly similar. They generally avoided ‘poetic’ language - the range of adjectives and verbs which were once the staple of poetry - in what felt like a declaration of their down-to-earth naturalness, with none of that elitist arty-farty crap. This was allied to what seemed to be an attempt to avoid any kind of artificial cadence, the rhymes and rhythm of ‘traditional’ verse. The favoured punctuation was often the dash and the slash - no colons or semi-colons for these tell-it-like-it-is poets - in what I assume they saw as an innovative use, especially of the slash or double slash. So innovative that I read about five poems that used it in this way: the dog//cried out//and the man in the car//who was wearing a dress//ate his burger (the double slash here does not represent a sentence break; it represents a trendy but meaningless double slash).

There also seemed to be a deliberate rejection of concision to revel in its verbose opposite: lines that spread out towards the edge of the page and ended when there was no more white space so another line began. The main structuring element which did exist was the use of anaphora at the beginning of lines, repetition of a word or a phrase followed by a long string of words that lacked any effort to sound beautiful or rhythmic, which often in fact gave the impression that these things were deliberately expunged if, by accident, they somehow appeared.

These linguistic features were mirrored in the content, which flitted from image to image rather than focused on one or two central images or metaphors which might serve as the integrating heart of the poem. Content was often formulaic and predictable and, damningly in view of the claim that this kind of poetry is dangerous, it was utterly safe. I’ll use one of my own poems to illustrate my point, because I don’t want to single out any individual poet for what was a collective conformism. My poem, A Cemetery in Scotland, describes men cruising for sex in an Edinburgh cemetery. The idea that anyone who reads a lot of poetry would find this subversive or challenging or threatening is utterly ludicrous. Fucking in a crypt, been there, done that, got the t-shirt. But imagine I wrote a poem saying what a great thinker Jordan Peterson is (which I promise you I won’t) - I feel sure that would ruffle a few feathers among the new literati. How very dare he. Anyway, I found little of interest as I worked my way through these poems; they had all the predictability of an action movie. Oppression and self-identity seemed de rigueur in terms of content and a mushy leftish sentimentality the mandatory politics.

I can’t help but see a lot of this as a reflection of the MTV generation and the internet world of surfing and clicks and links. Just as scenes in movies now tend to be much shorter than they were in the days of classic Hollywood, as if boredom might set in among the audience if three seconds passes without any change of camera angle or people rushing through corridors as they speak, most poems now have a kind of restlessness that alights on an image for a very short time and then moves quickly on to another, like items in a sales catalogue. It is the poetry of consumer capitalism from people who claim to be opposed to consumer capitalism.

I can picture the rolling eyes - the sooner this almost dead white male pops his clogs, the better, so we don’t have to read this antediluvian crap. But the idea that contemporary poetry threatens anyone is at best a fiction, and at worst a downright lie. This poetry is not subversive, not original, not groundbreaking. People like Stein, e. e. cummings, Lautréamont, Apollinaire, Pound, the Futurists, Dadaists, Surrealists, Whitman, Ginsberg, and Bukowski were doing a lot of these things at least sixty years ago and generally much further back, often taking risks which went way beyond what our timid contemporaries are offering.

This isn’t unique to poetry - the same is true of the art world. Painting was pronounced dead by trendy young things at least thirty years ago, and yet it survives. It’s just that artists working in ‘traditional’ styles don’t get rich and they certainly don’t win the Turner Prize. That is reserved for artists who pretend to despise the moneymen while making sure to implant their tongues in the nearest available orifice, all in the name of irony of course. At least the poetry world avoids the worst of this hypocrisy, if only because there’s far less money to be made since there’s no unique product to buy and sell for millions. The new conformism in poetry seems more to do with a need to be au fait and cutting-edge than with making a buck (or a million).

Yes, I am an almost dead white male, and I’m sure I sound cynical and old-hat. But I’m not calling for poetry to return to identikit verse of contrived metre and rhyme; I don’t want any kind of identikit verse. In this sense I have to hold up my hands and admit I’m a consumer capitalist, too, who believes in a pluralist marketplace. I want the same poetic world as the editors of most of these mags claim they want: a place where a thousand blossoms can bloom with lots of different colours and shapes and fragrances. I dislike this unadmitted conformism not because I want to return to dull formalism, and not even because it makes it much harder for me to publish my work. I dislike it because it rests on the flattering lie that these contemporary poets are risk-takers and pioneers. They’re not. And while art is in one sense a glorious lie, it abhors insincerity.

I have been placing ‘traditional’ within quotation marks throughout this blog because the reality is that what people generally mean when they use this word is no longer our poetic tradition. Free verse without rhyme or metre and with strictly delimited content is the norm of the new poetic establishment, and a lot of it is as revolutionary as Barbie. The least the new elite who claim to hate elitism could do is admit this fact.

POP POP POP MUSIC

When I was studying at Warwick University in the late 1970s/early 80s, I lived in a hippie-trippie household in Coventry in the days when hippies were becoming something of a joke. We were all into music that was still marginally trendy even if punk was gobbing on our incense sticks: I was a big fan of Beefheart, Brett was into Syd Barrett’s solo work, while Jon, who looked very much the stereotypical hippie, naturally went for Jefferson Airplane.

A couple occupied the room above mine: Di and Charlie (no, not that Di and Charlie, although everyone’s favourite royal misfits did get hitched towards the end of my time there. My friends organised a not-the-royal-wedding picnic by a river - in Ludlow if I remember correctly - and we all had lots of fun tossing a Di and Charlie frisbee around).

The Di and Charlie chez moi were strange in a completely different way from the royal couple. Neither of them were students any longer, having dropped out of Warwick, although she was learning Serbo-Croat in her spare time for some reason I can’t remember or never discovered, while Charlie worked in catering at the university when he wasn’t drunk. Anyway, the strangest thing by far about Di was that her favourite band was the Seekers. Yes, that group from Australia who did stuff like I’ll Never Find Another You. I couldn’t help but admire Di for this: preferring a cheesy, folksy Australian band when she was surrounded by a peer group still cocooned in 60s weirdness and flower power.

But in retrospect Di had a point. When I hear the Seekers now, I realise how good they were in their way, even if a lot of their stuff was twee. First, Judith Durham had an amazing voice. Next, their harmonies were up there with the Hollies or the ultimate harmonisers, the Beach Boys. And forget the cheesy, folksy stuff. Did anything encapsulate Swinging London like the whistling in Georgy Girl? Does anything else set off visions of bouffants and mini-skirts and Carnaby Street in quite the same way?

But my personal favourite among their songs is The Carnival Is Over. I know, I can hear the groans. Yes, it’s sentimental and obvious and predictable, but every time I hear it, it still tugs on my heartstrings and my eyes get tearful. If ever a song proved Coward right when he wrote in Private Lives, ‘Extraordinary how potent cheap music is’, it’s this one.

Except it isn’t cheap in many ways. That’s what makes Pop into Pop – the direct arrow to the heart. And Pop often fails when it tries to go beyond this. Personally I really like Days of Pearly Spencer and The Windmills of Your Mind, but neither was a smash hit, even with all the publicity Radio Caroline gave to the former. They weren’t big hits because they didn’t go directly to the heart: they detoured through the brain. They’ve both been covered since, of course, and got some of the acclaim which I feel they deserve, but neither offers the instant gratification of classic Pop.

What got me thinking about all this was a YouTube video I watched by Samuel Andreyev. He is a composer and musician whose channel I first found because of an analysis he did of Frownland by Captain Beefheart and then his interviews with members of the Magic Band, but he is classically trained and ranges across many different musical genres (and I’d highly recommend his channel to anyone interested in music).

Anyway, this particular post was titled 20 Songs You Need to Hear. And I wondered which twenty songs I would have included, but then I realised it would just be a list of my favourite twenty songs. Then I thought about Andreyev’s list, and it felt much the same. It seemed to have no organising principle with regard to genre and arguably didn’t have much pop music at all, for many of the chosen pieces (by Soft Machine, Beefheart, the Velvets, Leonard Cohen etc) weren’t really Pop.

Now it’s his channel and he can do what the hell he wants with it, of course, and I won’t argue with many of his choices because lots of the musicians he chose are my favourites, too. And I especially admired him for choosing Cohen and then selecting Everybody Knows. I say that because Andreyev is a musician and I get the feeling that he inhabits an aural world in a way that I, as a writer, inhabit a verbal world. Yet he could appreciate a musician whose greatest gift is as a lyricist (he also recognised this quality in Cole Porter). And the song he chose was perfect as an example of everything that is great about Cohen as a lyricist – his mix of irony, cynicism, a barely submerged idealism, and more than a splash of Romance with a capital R. (Since writing the paragraph above, I have found out that Andreyev has also published several books of poetry, which I guess debunks a lot of what I say here.)

Anyway, as always I digress. This got me thinking about Pop as a genre and I felt none of Andreyev’s selections were pure examples of the form. Yes, some of them had made the charts, but they were all slightly off-centre compared to an imagined archetypal pop song. If I were choosing twenty great pop songs I would definitely have included something by Spector and perhaps another girl band such as the Chiffons, The Sun Ain’t Gonna Shine Anymore, a different Beach Boys song from the one Andreyev chose (God Only Knows, perhaps, or Wouldn’t It Be Nice), a Burt Bacharach number or two (Walk On By by Dionne Warwick and The Look of Love by Dusty), and perhaps some soul (Sam Cooke? Wilson Pickett?) or even Motown.

So why do I feel that these songs are in some way the purest expression of Pop? Well, Pop, as I said, is direct. It was originally made to be played on the radio, so it had to grab your attention and it had to do it quickly. In general it couldn’t afford too much subtlety. It especially needed an opening that smacked you between the eyes, such as that of Reach Out, I’ll Be There. The Stones in particular were very good at this (I’m not a big fan of the song as a whole, but the opening to Gimme Shelter is stunning). Pop also tends to be emotional. Love and its trials and tribulations is by far the most common theme, of course, and the feelings expressed tend to be personal. When Pop gets political, it generally fails. Rock and punk, with their harder edge, are much better suited to it.

Another essential of Pop is that it is manufactured, created in a studio by professionals aiming to make the perfect three-minute product. Whereas jazz is usually far better live, a lot of Pop loses its magic in performance, which is why I suppose there was so much lip-synching on Top of the Pops. Unlike a lot of modern jazz, Pop tends to be short, again a result of its history of needing airtime on radio. When it gets to stuff like Hey Jude, which in my personal opinion wastes a good song by tagging on a long ending that sounds more like a football chant, it has lost its way. For me, Ticket to Ride and Paperback Writer are vastly superior.

Because at heart Pop is commercial – it exists in order to make money. In Pop, though, this often becomes a virtue – it prevents the kind of self-indulgence that happened in the late 60s/70s when the simplicity of Fats Domino or Little Richard had become the tedium and faux-profundity of rock operas. Pop is honest in this regard. There is none of the pretence that often exists in other art-forms, especially the art world, that this is art for art’s sake rather than for filthy lucre. Many of the musicians who created the wonderful Pop of the 60s, of course, almost certainly did it purely for the joy they found in creating it, but not the moneymen who mattered (although I have to concede that things were much less corporate in those days and there were lots of entrepreneurs at that time who did put their money where their mouth was, especially in black music).

For all its lack of pretension, though, Pop can be strangely promiscuous and will borrow from anywhere: the various love songs that rely heavily on The Moonlight Sonata, for instance, or the moments of jazziness at the end of Dead End Street, or Malcolm McLaren raiding opera, or the baroque excesses of Bohemian Rhapsody or Wuthering Heights. But, and this for me is important, every time it does this, it risks moving away from the essentials of its genre and getting lost, even if that particular borrowing is successful, and eventually needs to be dragged back to a singer, a lead guitar, a bass, a set of drums, and probably a piano, all in a recording studio.

Before I finish, let me quickly say that I understand that I am creating arbitrary categories when I suggest this clear divide between Pop, Rock, Soul, and so on, especially in the artistic fervour that was popular music in the 1960s. Andreyev has a Kinks song in his list, for example, but where would one place them? They were clearly Pop in the sense that they had a string of singles that were big hits, but they also went off in their own idiosyncratic direction as the decade wore on. And how about bands like the Stones? Are they Pop or Rock? Clearly these categories are fictions on one level. I can only say in my defence that I have been trying to isolate what I see as pure Pop, even if I know this Platonic form doesn’t exist in reality and every song to some extent is a mix of genres.

I’d also like to make clear that I’m not claiming that Pop is mindless froth and is incapable of dealing with ‘serious’ issues. Popular music has always spoken for the poor and the oppressed, for example in forms such as folk and reggae. However, modern commercial Pop takes place in a more compromised situation because of its focus on making a profit. Not even this, though, prevents social or political comment, even in songs which many people would claim are mindless froth: for example, the girl in the Shirelles song wondering if he will still love her tomorrow (the subtext being after sex has happened), or one of the Moonlight Sonata songs, Past, Present and Future by the Shangri-Las, with its hints of sexual pressure, abuse or even rape (‘but don’t try to touch me, don’t try to touch me, cos that will never happen again’).

I have no idea what happened to Di and Charlie from my hippie house, not even whether either or both of them are still alive. But if Di is, I really hope that from time to time she gets out her black vinyl version of her Seekers Greatest Hits album, puts it on her turntable, and whistles along to Georgy Girl. Because another thing that Pop is very good at is capturing the spirit of an age and reminding us all of how daily life used to be.

FLASH FICTION

I recently rediscovered flash fiction. I first came across it when I was living and working in Singapore in 2016, had a go at writing a few pieces, but assumed they had got lost in my move to Portugal a few months later. Several weeks ago, however, I found them on a USB. So I dusted them off, rewrote them a little, and am now trying to sell them to various magazines.

There seems to be little agreement about what exactly constitutes flash fiction, other than that it’s shorter than standard fiction, and a host of names have emerged to differentiate various word-limits: micro, flash, short shorts, and so on. From my trawl of Google, it seems that below 1500 words is often chosen as the point at which a short short becomes a piece of flash. I would personally go stricter than this and place the cut-off at 1000 words at most. My stories from Singapore were around 300 words, except for one which clocked in at slightly over double that.

Over the last year I have also written several stories for a short-story writing competition I enter quarterly, which often has a limit of around 900 words, but I’ve tended not to think of these as flash fiction because they still contain elements of plotting and characterisation similar to those you might find in a standard short story. Overall, I feel that as the count goes below 500 words, the things I talk about in this essay grow more essential and a different genre of literature begins to emerge, whereas, in my opinion, the ‘normal’ rules of fiction start to kick in at around 1000 words. So the space between 500 and 1000 is a bit of a grey area.

During my Google search, I came across many websites arguing that two essentials of flash fiction were a pared-down plot and a restricted list of characters. Certainly the traditional whodunit seems out of the question, but must flash always be so skeletal? In flash fiction, it is true, everything must be established almost immediately - setting, mood, characters, storyline - with little opportunity for subtleties and extra layers to be added later. I will briefly look at these four elements in turn in the rest of this essay. I have to say, though, that I am not at all confident that what I write here will be universally relevant to the writing of flash fiction or whether it will merely reflect my own individual practice.

With regard to setting, there is clearly no time in flash fiction for Hardy’s leisurely descriptions of nature or the meticulous detailing of furniture and rooms that Chandler delighted in. In flash fiction the simplest of statements usually has to suffice to establish all that readers need to know: the action takes place in a hospital ward or a school playground or a street. This detail tends to come very early, often in the first line, so that readers can immediately feel secure within the setting and place themselves mentally in that space. I can imagine a flash fiction which consists of nothing but dialogue for the first half of the story - something similar to Hemingway’s The Killers (although even there the first line tells us we are in ‘Henry’s lunchroom’) - but I suspect a story that happens nowhere in particular - Waiting for Godot (‘A country road. A tree.’) - will rarely work well in flash. So the setting will often be somewhere commonplace with which the readers are familiar so that they can fill in the details for themselves. Keep it simple seems the obvious advice for writers of flash fiction as far as setting is concerned.

Mood must also be quickly established, by a combination of factual information about the setting with a choice of words that sets the mood, often by means of a single adjective - trees become ‘skeletal’ or ‘lush’, rooms ‘bare’ or ‘crowded’, beaches ‘remote’ or ‘hectic’, and so on - thereby killing two birds, setting and mood, with one stone. As I argue later in more detail, I feel there are many similarities between writing poetry and writing flash fiction, and this spare use of descriptive adjectives is an essential tool to set the mood when there is a severe limit on word count.

The introduction of several characters in depth is almost impossible in flash; there is simply not enough space to do this without confusing the reader and risking the focus and intensity of the writing. So there will often be only one main character, and he or she will be introduced very early in the story, usually in the first line, with the author making clear that this is going to be their story. (If first-person narrative is used, other snippets of information about the character may be slipped into this first sentence that immediately begin to fill out the first-person speaker.) If there are two main characters, and the story is essentially a dialogue between equals, both will be mentioned early, and some background information will generally be supplied about their relationship (whether they have just met, are old friends or lovers, boss and employee, and so on).

The literary equivalent of movie extras, walk-on characters, is possible, but they tend to be anonymous and tangential to the main action. They will rarely be given names (because this sends a message to the reader that they are important as individuals in some way, while they are not) and are usually reduced to a function (e.g. ‘the waiter’). Stereotyping will come into play here: for example, the waiter may be quickly shown to be French and supercilious, since we English speakers believe we know that all French waiters are supercilious. This adds some colour to the picture without risking sending the work off on a tangent or in a different direction altogether. Flash is therefore well suited to satire and parody and irony since even more than most fiction it relies on a reader’s preconceptions about people, which can then be used for a quick injection of detail or humour. It is much less well suited to the subtle portrayal of fully rounded individuals in social groups and situations (e.g. real French waiters).

Obviously there is not much room for complex plotting in flash fiction but this does not necessarily mean that plot is absent or even necessarily hugely truncated. All the same, the writer will need to find ways of including the plot in the most economical way. A plot takes time to unfold in a story if it follows a traditional, chronological ABC, but this amount of time can be reduced if it is introduced by means of the main character remembering the past or reflecting on the situation in which they find themselves, and in the process identifying and focusing on the most important past facts and moments for the reader’s benefit. This is similar to the use of flashback in film: an interesting switching between time frames that not only tells the story in a swift and concentrated fashion but can also pique the interest of readers through variety and a more imaginative unfolding of the plot.

On my trawl through Google, I found people who felt that a good flash story should finish with a sudden, surprising twist, but in my opinion this kind of ace up the sleeve can easily be overplayed. First, creating a genuinely surprising ending that also somehow seems inevitable once it happens requires immense skill on the part of a writer, who must insert tiny clues throughout the story, sometimes even in as little as single words, which the reader brushes past and doesn’t even notice on first reading, but then feels rather dim not to have spotted. Second, it does not take long for brilliant endings to become clichés. For example, the murder mystery in which one of the murdered characters turns out not to be dead (perhaps first used in And Then There Were None/Ten Little Indians by Agatha Christie?) was a coup de théâtre, but readers wise up very quickly and this kind of coup rapidly becomes just another cliché and a yawn. Third, if readers expect a surprising ending and then one doesn’t arrive, they may feel cheated or disappointed. Yet a simple ending may actually be the best way by far to round off a perfect little tale.

I have personally found that a radical change in mid-story can work well, where the reader has been led to believe that a certain situation exists but suddenly the reality turns out to be different. Readers then need to quickly readjust their thinking mid-stream, which puts pressure on them but can also create a feeling of novelty and pleasing surprise. In my opinion, this may come across as less contrived than the sudden, dramatic final twist, while also keeping readers on their toes. Another useful possibility is to imply, but not directly state, certain aspects of the story (sexual abuse is a classic example), so that readers need to keep these possibilities in mind but never feel confident that they are not making a leap they shouldn’t make. Keep it simple gives way here to keep them guessing.

A common pattern I have noticed in my own flash stories, although I am unsure whether this is merely the way I write or something that may be a more general feature of the genre, is a slow ratcheting up of tension as the main character’s fears slowly grow or the situation gradually worsens. This avoids sudden lurches of plot which may be hard to pull off in such a restricted number of words, and is a way to keep the focus on the main character and the overall situation, while preventing a feeling of stasis, thereby retaining and building tension towards the story’s climax.

One thing that intrigues me and appeals to me about flash fiction, especially once it slips below the 500-word mark, is that it seems to operate in a space somewhere between fiction and poetry in the sense that every single word must pull its weight, and so concision is essential in both forms. (Although I’m not sure that some contemporary poets regard concision as a desirable quality in poetry any longer, since they often seem happy to write as if they were journalists getting paid by the word.) Those adjectives and adverbs we all use and frequently overuse as writers must be ruthlessly expunged from flash fiction unless there is an absolute conviction that they add something significant to the cooking pot.

Another feature of poetry - the ability to say two things (or even more) at the same time in the same stretch of language - is also a highly desirable quality in flash. Thus flash writing is often rather elliptical, and I am personally of the opinion that symbols play a more central role in flash than in other forms of fiction. Realistic fiction, which slowly builds up a replica of everyday life, a mirror of nature, takes time to develop its effects, time which is not available to the writer of flash. It is often the very ordinariness of language and scene, the mundanity of the everyday life portrayed, that slowly draws the reader into a story written in a realistic genre. Central symbols can have their role in realism - Ibsen’s wild duck or Chekhov’s cherry orchard - but these act almost subliminally, subordinated to the mimesis of social life. In fiction of normal length, the characters take over the story and have a life of their own, and there is less space, or need, for symbols. Sometimes, in many ways, the characters become symbols themselves. In the restricted space of 500 words, on the other hand, it is harder for characters to dominate, whereas a central symbol can easily perform the role of being the very heart of the writing, its essential meaning, as is true in poetry.

This makes me wonder if flash could be a good home for surrealism. I am not talking about fantasy here, because although fantasy clearly isn’t ‘real’, it tends to be heavy on plot (good versus evil, and so on). I am talking rather of disparate items that have no obvious connection to each other but somehow feel correct when placed together, as in much surrealistic painting and writing. (Note to myself: I must give this one a try.) So while modern fantasy stories à la Harry Potter might not thrive under the conditions of flash, one obvious area where flash fiction can succeed, and has clearly succeeded in the past, is that of fairy tales and fables, such as those collected by Grimm or written by Aesop, which are often very short in terms of word count.

Finally, let me turn to the role of the reader in flash fiction. It seems to me that he or she has a far more active role here, one which demands close attention and a sensitivity to language in a genre where language is paramount. The tiniest of hints, cloaked in veils of words, can move the story on, doing the work that entire paragraphs might be called upon to do in longer realistic fiction. Whereas readers can miss out whole passages or even chapters in longer fiction and still be fully aware of what is going on, they need to be much more attentive when reading flash. They are often required to paste the story together for themselves using subtle clues half-hidden in the writing. For me, a good flash fiction will offer up a lot on the second reading that isn’t noticed on the first.

So although I am prone to ranting about modern life, with its pointless speed, its short attention spans, and its superficiality of clicking ‘Like’ on social media, I can see a huge potential in flash fiction. I can’t see the point once we hit the extremes - micro-stories of one paragraph or even a handful of words - but I can see a role for 300-1000 words as a crystallisation of literary expression, similar in many ways to the haiku in poetry. I am well on the way to becoming a big fan of flash fiction, I think.

WELCOME TO MY SUNDAY UPDATES

SUNDAY, 28 JULY 2024

I launched this website last Sunday, so this is the first of my Sunday updates.

I’m always ranting about social media and how it’s destroying the world as we know it: turning us into idiots who take pictures of our food in restaurants, persuading us to click Like on posts by a thousand Facebook friends while not finding time to stay in contact with the handful of close friends who really matter, selling our private details on the sly to commercial outlets, making us all feel imperfect and inadequate, turning talentless nobodies into celebrities, quacks, and influencers, and generally adding to the shallowness of life.

But over the past week I’ve found something to love about Facebook. Its algorithms, in their trawl to find out about me so that I can be monetised, have clearly worked out that I’ve just published a book of poems, and so all week poems have been popping up in my latest feed or whatever they call it, and I’ve read some wonderful ones by Yeats, Frost, Dickinson, Stevens, and so on. And not only poems by poets I already knew, but also interesting new work from random contemporaries - plus some, of course, that I didn’t much care for.

The algorithms have also clearly realised that I’m interested in art, and this week I’ve been deluged with posts of Hopper paintings, which is great because I’m a big fan. No one does urban loneliness better than Hopper. But to label him as only the painter of works like Nighthawks is to reduce him to a stereotype. I love his use of light, and you can really see that he spent a lot of time in Paris studying the Impressionists, even if his overall style looks very different at first glance.

Going back to the poems, reading them was in some ways a bit depressing, to be honest. My book of poems is hot off the press, but then I read this fantastic stuff by other people, which just seems so much better. I have to tell myself that this is not a good way of thinking: writing poetry is not a competition with gold medals and wooden spoons. First, I shouldn’t compare myself to others - every artist is an individual with something unique to offer, even McGonagall. Second, comparing myself to the greatest poets in the English language (and I don’t mean McGonagall) is the height of stupidity: how do I expect to feel after I do that?

One of the poems I read again this week was the one by Yeats that begins with ‘When you are old and grey and full of sleep’. A brilliant poem in its subtlety and gentleness; my own work seems like a sledgehammer in comparison. The problem is comparison itself. It’s a dead end, and you should just never do it. It’s like someone who plays park football on Sundays comparing himself with Messi or Ronaldo.

This reminded me of something I read once about Brian Wilson’s reaction to Sergeant Pepper. Apparently he felt devastated when he first heard it, because he’d spent so much time on Pet Sounds and he felt the Beatles had put him in the shade. I disagree, partly because - and this is only my personal opinion, of course - Pet Sounds is vastly superior to Sergeant Pepper, more textured and more coherent as an album. But above all, it’s depressing that such a talented composer and arranger should have thought in this way rather than being proud of his own achievement of masterminding one of the best pop albums ever.

I guess artists tend to be sensitive (a less kindly word might be ‘precious’). And I have to admit that, even at the age of 71, I still find it difficult to enjoy writing simply for its own sake, without thinking or caring about how it might be received. The important thing is that when I write, for a short time I step outside my daily world with its petty concerns and worries. I lose myself for a while, and that really is precious.

This is the cliché about happiness, but it’s basically true: we are happy when we are so engrossed in something that we don’t even think about whether we are happy or not. The journey, not the destination, if you like. Or the process, not the result. This is where we can find the real joy in life, even if only fleetingly.

Oh my God, I’m starting to sound like a self-help guru. Time to sign off, I think.