Paul Éluard’s “The Life” and behind the veil of translation

I’m starting on one of my New Year’s Resolutions early: to spend more time presenting fresh translations here. I expect a sizeable portion of them will be from French. Today’s piece, from the poet Paul Éluard, will be one example. Since there will be others, I’m going to start today with an aside about the translator’s tasks and my tactics and credentials for doing them, before we get to today’s combination of poetry and music.

I am not a native French speaker, nor do I have great facility with that language. Growing up in a little Iowa town, I had the luck to be able to take French at our small community high school, and later attempted to study it in college. French was Hobson’s choice for any foreign language class there, but I welcomed that particular chance. In at least one previous post I’ve mentioned some of my accidental connections to the French language, but let me summarize them for newcomers. I first encountered French during my father and his brother Bill’s fishing trips to Ontario, Canada, where I, a grade-school-aged kid, was amazed to find the labels on many boxes and cans were bilingual, French and English. Around the same time my beloved Auntie Red found herself and her young family stationed in France when her military husband was posted there. Back stateside, she would amaze me by reeling off French phrases still retaining elements of her Southern US accent. That there was such a thing, an entire other language to describe the world, presumably as rich as English, seemed marvelous.

My academic career with French was nonetheless fraught – both in high school and college. Much of the work was based on getting conversational mastery, and I was terrible at it. I have something (I’ve always suspected it’s neurological) that frustrates me with vocal mimesis. It’s likely part of the reason I struggle with singing. Helpful correction of the “no, it’s pronounced like this” kind only made me seem stupid or uncooperative, because my second and further attempts would still be way off in trying to make the right mouth sounds. Though my academic career was eventually stunted anyway, I sometimes wonder what might have happened if I had lucked into a class based on silently reading and appreciating literature in another language.

But that wasn’t on offer, or affordable, to me. So, in my 20s, largely out of school, I started to translate French poetry. This was a laborious process. I would page through a French-to-English dictionary for all but the most common words. I still retained a smattering of French syntax and grammar back then, and putting the two together I was able to produce a handful of translations. One that survived from this work was my translation of Paul Éluard’s “L’Amoureuse,” which was presented here some years back.

My activity then, and my activity now, might occasion questions, ones I don’t know that I have good answers for. Should I be doing this? Wouldn’t it be better if someone fluent in both English and French did any such literary translation? Isn’t the answer to that last question obvious?

My answers (then and now) would be: some of these poems don’t seem to have English translations I have access to, and even if translations exist, isn’t an attempt to do my own translation just another variant of doing one’s own “deep reading” of a poem? If, for example, Helen Vendler has written an essay on what she found in a poem, does that mean I shouldn’t look at the poem myself and ask what all is in there – not because I think of myself as more learned or insightful than Vendler, but because I’m another human consciousness engaging with the consciousness of the poet?

What gave me such audacity, with so small a mastery of French, to do this? Well, I wanted to – enough for a stubborn young man. Now as an older man, still translating without mastery in the source language, I also tell myself that I did (and do) consider myself a poet, a chooser of words, focuser of images, composer of word-music. Part of the task of translation is to do the primary work of literal translation, but to produce the full pleasure of poetry for the reader, the poetic work is at least as important. Decades ago, I read that Ezra Pound used only someone else’s English glosses of Li Bai to create his landmark Cathay collection. Eventually I became aware that Pound’s Chinese translations were not very accurate depictions of Li Bai – and since learning that as a young man I’ve sometimes “checked” translations of poems to see how far the translator’s version varies from a literal word-for-word rendering, or from other translators’ versions. I was too uncertain of my own translations to think I was doing better work, but what I read as taking liberties bothered the younger me. Are translators like Pound “cheating” by not serving the original poet faithfully? The resulting English poems (I would say of Pound’s Li Bai) were as much the translator’s poems as the source author’s, or more so.

But what if I publish my translations? There are what I call “guild concerns” there. In the same way that I worry that my naïve musical compositions and make-do musical skills are, in their small way, part of a flood in the musical culture that further reduces the already shrinking opportunities for “real musicians” and trained composers, am I doing the same to translators with better knowledge and cultural grounding? Back to Pound: his work overshadowed other translators who knew Chinese, and I’ve featured here a contemporary of Pound, Shigeyoshi Obata, whose Li Bai translations are largely unknown.

As a guild concern for people who depend, or wish to depend, on income from their art, this can be considered an existential issue, and I wish them no harm. Yet they may think: no matter, you are  harming us. And now there’s another monster in the forest that they might view me as riding in on: computer-based artificial intelligence.

Since my early thumb-worn French-English dictionary forays, computer translations have become quite facile. An instant’s click will produce on one’s screen a literal gloss of a poem such as today’s selection. Let me stipulate to all, and especially to those who fear and dislike AI: these instant computer glosses are not good poetry.* I will click for them, but I will still spend time with dictionaries. What are the various contexts of a word? Which choice in English brings the most to the poem?**

Herein lies one problem. I’m trying to read the source author’s mind, and that brings my own mind, experiences, and knowledge in to filter the process. This part of translation unavoidably produces both errors and accuracy! As I’ve grown older, I now often understand those “cheaters” as other blind ones assessing the elephant of the poem. If, as Frost had it, poetry is what is lost in translation, then a translator’s job is to reclothe the poem’s bones in English poetry, using modern English poetic expression. Doing this has limits, dangers: readers may like their foreign poetry to sound, well, foreign, with an exotic awkwardness – and having ancient poets sound like your contemporaries at a local poetry reading risks unintentional humor.

So, here we go: an early poem by French Surrealist Paul Éluard, “La Vie.”   I start with a machine translation.

Life x2

The 1926 poem in its original French, and a computer translation

.

The syntax comes out somewhat scrambled or hard to follow. Éluard was a prime Surrealist, one who knew and went through Surrealism’s Dada predecessors, so I suspect that scramble is there in his original French. My first questions are about the images: what does the author want me to see, or otherwise sensually appreciate? This can be hard with Surrealist poetry or the like: they often seek strangeness or even nonsense in images. All good imagery works with some degree of mystery or novelty; anything less risks cliché. Even one of those images that you read and think “I’ve never seen it like that before, but once I’ve read this, I’ll always think of this comparison” has to surprise you, cause you to take the leap of likeness. Surrealism says you need to react outright that it’s impossible or outrageous in order to fully free and implement the imagination. So, my primary task is difficult with Surrealist poetry – they may want to be impossible or impenetrable, yet I still try to make the images clear, and this may be subject to mistakes. It’s also possible that in psychoanalyzing the poem I may be putting things in there that the original poet never consciously intended.***

Examining the gloss, I think Éluard is describing a woman whose consciousness is either in a dream state mimicking waking life or living her waking day informed by, or as if, in a dream state. I think the image wants that ambiguity, to have it both ways. Either way, the people and things she meets are like strangers who have been hiding and she has found them for the first time.

The Life translated by Frank Hudson

My translation, used in today’s short musical performance

.

I take the liberty while translating this image of giving double meanings to some words, using more than one English word to stand for a single French word. I’m doing what “rescaling” a lower-resolution digital image does: I intersperse additional pixels/words to bring the image out. “Neu” is rendered “naked” and “new;” “fraiche” becomes both “cool” and “fresh.” This is a judgement call, and my choice may be wrong. In making those word choices, a French dictionary with multiple meanings and usage examples of the word in context remains useful. Another thought, one that AI will miss but bears considering: the author may have intended a pun or other wordplay.
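
For the technically curious, here is a minimal sketch – in Python, purely illustrative, and no part of my actual translation process – of what that “rescaling” analogy refers to: simple linear upscaling invents in-between pixel values from the original ones, much as I interposed a second English word between a French word’s senses.

```python
def upscale_row(pixels, factor=2):
    """Linearly interpolate new values between neighboring pixels.

    pixels: brightness values for one row of a low-resolution image
    factor: how many output samples per original gap
    Returns a longer row: the original values plus in-between ones.
    """
    out = []
    for a, b in zip(pixels, pixels[1:]):
        for step in range(factor):
            t = step / factor            # fraction of the way from a to b
            out.append(a + (b - a) * t)  # a value part way between the two originals
    out.append(pixels[-1])               # keep the final original pixel
    return out

# A dim-to-bright gradient gains intermediate shades when "rescaled":
print(upscale_row([0, 100, 200]))   # [0.0, 50.0, 100.0, 150.0, 200]
```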

The final two lines gave me a word-music chance to put in a rhyme to tie things up, what with “gaze” and “sways.” I was so pleased with how that worked out that I overlooked one word, a mere possessive pronoun, “ses.” I’m enough of an idiot regarding French usage that I can’t be sure if it’s a male pronoun such as “his” or a general pronoun, a “his/her/its” equivalent. Whose gaze is it? The woman in the poem? The poet, the male Éluard? Something else, life or imagination?

In my ignorance, and as an admitted failure of craft, I just put down “her,” because by the time I finalized my translation my focus had moved from the word-for-word elements to the vivid image; and I thought: this woman Éluard is admiring in the poem, living the Surrealist outlook, is confident in her own gaze as she sways in either the intoxication of fresh experience or the artistic refinement of dancing her day forward, and so I wrote “her” in my translation. I thought the poem was about the woman – even should be about the woman in its conclusion if I were writing it – but Éluard might have chosen to end it with his gaze evaluating the woman’s experience: I’m the artist, they’re just “life.”

So unintentionally a feminist recasting of the poem? Surrealism does have a problem there: open to women as muses, yet not as open as it should have been in allowing them to be concrete artists themselves. Shades of Éluard and Breton, may I call my ignorant choice a “Freudian slip?”

Today’s music? This was a little exercise on my part using a depiction of a couple of chord progressions from a Joe Pass performance as the basis for the music. Pass was a great Jazz guitarist – but for external practicalities, once more there’s no guitar in this version at all! Dada composition! You can hear my musical performance of Paul Éluard’s “The Life”  with the audio player gadget below. No player visible? Not a mistake or slip, it’s just that some ways of reading this blog won’t translate into showing the player. If so, here’s a highlighted link that will open a new tab with an audio player.

.

*They don’t generate good literary prose either – producing as they do an estimate of the most probable word, not the one chosen by another human consciousness, which may well not be the most common one-for-one match. Moreover, the AI translations ignore the word-music issue with poetry, and poetry is musical speech. I generally don’t do rhyming accentual-syllabic translations, the cause of many an “inaccurate” translation, but I do want the resulting translated poem to sound like poetry in modern English.

**These choices are a reason I highly recommend translation as training for poets. I don’t believe I undervalue topic, message, or prose-level meaning in poetry, but many poets get stuck finding the best combination of what they want to say with how it’s said. While I acknowledge the real issues with AI, for a monolingual poet, working with a literal/prose gloss of another poet lets one develop those selection skills – the right word, right order, right connection – while staying one step away from one’s own experience and desired message.

***This may be proper since André Breton, another founding Surrealist, thought Sigmund Freud was on to something crucial with his then-recent theories and psychoanalysis. That Breton may be wrong about Freud, or that Freud may be wrong about how the layers of consciousness and personality work, only reinforces my earlier stipulation that this engagement of a fellow consciousness with the poet’s produces both accuracy and errors.

The “Guild Concerns,” and mine, and yours, around Artificial Intelligence

I hope the hardy, but smaller, summer readership here has enjoyed this diversion from our usual literary poetry combined with original music subjects. It’s been somewhat difficult to write. Why?

When I run across comments or longer-form writing about artificial intelligence – given my interests, mostly from folks in artistic fields – the feelings and cold convictions I read come in hot. AI gives me a lot of feels too: frustrations, fears, disgusts, distrusts, worries, even amusements at its fails. Yet, earlier in this series I’ve honestly talked about AI features I’ve tried. I wonder if I’m alone in these mixed feelings – if I’m just a wishy-washy old guy who won’t say it plain. For my final installment let me focus on those concerns.*

I’ve referred to some of those “guild concerns” earlier in this series. Let me expand on that. Let’s say you are a professional, semi-professional, or aspiring visual artist, voice talent, translator, editor, writer, composer, musician. AI claims it’s achieved parity with your field’s trades. “No!” you reply to any such suggestion, for you are informed of all the small things that a master in your field provides that AI, as yet, can’t. But along with that comes the fear that most customers and many consumers of your art may judge as inessential the elements you’ve learned to provide and appreciate – that your professional value-add may be judged dispensable. Capital’s royal decision makers may not hear your objections, or give them any bottom-line weight. There’s an unavoidable term for a resulting outcome: enshittification. Everything then may drop to just above the level that would drive commoners to revolution.

And there’s a tsunami of salt to be poured into artists’ wounds from the use of Large Language Models in current AI. LLMs digest reams of work by artists, almost entirely without compensation to them, and apply pattern and categorization processes to this hoard to make it into reusable parts that can be recombined into other work – work whose ownership has been severed from artists and transferred in part to oligarchical corporations. This injury isn’t speculative. It’s already occurred in titanic amounts to create current LLMs, and ex post facto attempts to get paid for this seizing of work, or to prevent future accumulations of scraped-up art, are being resisted by the AI industry, which is seeking government protection for this reuse.**

So, where organized as unions, workers in the arts have attempted to counter this, concerned both as keepers of artistic excellence and as counter-forces seeking to protect incomes for their members. Will this succeed? Who am I to predict, watching ignorant beach-sand techbro armies sweep across the darkling plains amid alarms? But I understand the anger and fear of the artists, and endorse it.

But I, myself, am an odd case. Poetry has low capital needs – a loaf of bread, a jug of iced tea, and a roof, and I’m good to go there – and the remuneration market for poetry is scant. I used to inconstantly chase after giving readings to a couple dozen attendees, or the small paper presses aspiring to three-digit sales. I still admire those things and support them, I just don’t see them as precious scraps to struggle over at this point in my life. With the Parlando Project I most often use other people’s poetry, using and promoting work from dead and/or public domain poets or small excerpts of words from the living. With this Project I can aim for my hundreds of readers or listeners for a piece – a tiny audience in Internet stats, but an appreciable reward by poetry standards. With my music production and distribution here (aided by affordable computer technology) I find that I’m part capitalist and part worker-in-song. And there’s a conflict there.

I’ve already confessed in this series that I sometimes use what is called AI to extend the long-standing feature of computer music arpeggiators, programs that suggest and play patterns of notes on command. Honestly, I don’t feel good about using these – there’s shame mixed in with the approval I feel, with my producer’s hat on, at the effective results they bring to the finished musical piece.*** It’s not just breast-beating when I confess it feels fraudulent to me to use some computer-aided line or expression played with an accomplished verve. A human should do that, and I can’t do that, and yet that part of the ensemble is there – I’ve allowed it, and its level of success to some listener could be assigned to me. The alternate path I left some time ago was organizing bands of musicians to realize the music I create. I may wonder about that untaken path, but then I consider how dissatisfied those musicians might be with my non-commercial aims, how frustrated or dismayed they would be with my musical naivete, how stressful and ill-fitting it would be for the composer-hat-me to wear the bandleader-hat as well. Yet those struggles, despite unfitness on my part, may be the necessary dues to engage in musical work. Guild concerns might hand down a harsh judgement on what I’ve done: “If you can’t do that, you shouldn’t do that – you’re taking away jobs from skilled tradesmen.”

In this I support the guild with one side of my heart, and yet I could be charged with working against its union shop.

A musical piece from a pair of DVDs issued decades ago that my child and I treasured when we both were younger. I don’t have details about how this music was produced, with what technology, but this is so much better than the trite AI slop illustrations I could have chosen to use instead. The Animusic web site is defunct, and I don’t know how you could still purchase this.

.

Full-fledged AI music? The examples I provided in my last post satisfied my curiosity in my quick attempts to see what the current state of the art can do. Even more so than with my frustrations with AI illustrations I discussed in the first part of this series, I’m not tempted to continue to use that level of AI music creation. I don’t have to test my ethics in this: AI-generated songs can’t get close enough to what I want, what I intend to communicate. I like playing instruments, and despite my not uncommon artist’s ability to procrastinate on getting down to composing new work, once I’m into the process, I find it absorbing. If what results isn’t always a perfect realization of intent, so too it is with AI – and typing a few words into a prompt has no visceral rewards.

As I wrap up this series today, I’ve honestly tried to report my contradictions. If I’ve done anything, it’s my hope that you, my widely curious readership, will use what I’ve written to spur your own considerations of the challenges AI brings to art. I’ve used music as the main example, but literature and many other arts – as well as work that isn’t viewed as artistic – have like dangers, allied concerns.

.

*Let me mention that I also share the environmental concerns about the energy used to provide AI. Earlier in this series I wrote that we likely don’t really know what those energy needs are with precision – and that our existing general use of ubiquitous computers both saves and costs energy in some balance that’s hard to calculate.

Another issue: brevity keeps me from delving today into the important risk of extended capitalist and/or authoritarian control of expression by ceding tools of production to oligarchs.

And lastly, there is a great deal of techbro hype around AI. In some ways it’s encouraging and scary how well it works, and in others it’s risible and scary how badly it works. I don’t mind so much laughing at its limitations in the world of musical art – like the satire in the last post where it created outrageous protest songs that can still sound sonically plausible – but the thought of non-analog safeguards in life-and-death contexts is concerning. It’s already hard enough to hold capital to account for grievous errors and oversights. Granting another level of kings-X to the passive voice of “computer error” worries me.

**As I was finishing a draft of this on Saturday I read an egregious example of AI theft from a musical artist. Emily Portman (and others, it appears from the linked news story) had their artistic presence on leading music streaming sites invaded by someone greedy enough to try to steal the widow’s mite that independent artists receive.

***If I were to play advocate in my own defense, I could say that the uses I make of these tools are not the same as typing in a few generally descriptive words and having AI generate an entire song (or painting, or story, or essay), such as the song examples I supplied in the last post. I work iteratively with the specifications and adjustments for the patterns – though so do many who work on elaborate prompts for generating entire songs – and I’ve supplied the harmonic structure by playing or composing the chords or melodic centers from which the pattern is generated. Those substantive contributions make a case for these uses being collaborative extensions of the human.

I’ve so long used drum machines – and entire accepted genres of music are built around the expectations that they will be used – that using computers to play drumbeats in patterns seems more allowable to my inner ethicist. If I dig deeper, and acknowledge that I know and appreciate the musicianship and sound of a good percussionist, this is inconsistent, but this is my honest emotional report.

Summarizing and speaking here in guild specifics: the composer in myself may feel justified, while the internalized musician’s guild inside my soul still feels shame at my stooping to this.

I ask AI to write a protest song, and…

A funny thing happened on my way to winding-up my Summer diversion series of thoughts on Artificial Intelligence. I’d concluded last time: since current AI was capable of producing musical pieces in popular styles that could pass for human works in casual listening – or plausibly even more exacting listening – those who’d prefer music expressed by humans might need to change the things they look for and value in music. What kind of things? Accept more imperfections in the music, cultivate an appreciation for the humanness inherent in live performance, and increase their consideration of the intent and motivations of the musical organizations they support.

That last point, about more significantly honoring intent, had hardly inscribed itself as a blog post here when a mischievous thought came over me: while AI is created by businesses with commercial intent, human-made music doesn’t have to be. As difficult as it is to discern authentic intent in music made by strangers and distributed in a marketplace, could we be fooled about intent by entirely software-generated music? So, what if I asked AI music-generating software to produce a protest song? What if I went further and presented it in a misleading context?

Disregarding my environmental footprint for the duration of the experiment, I created a free account on an AI music generating site, and I set about creating a new protest song. Out of the many outrages of 2025 so far, I picked the authoritarian assaults on academic independence which have sought fines/bribes/tribute from some of the U.S.’s most prestigious universities (known in America as “the Ivy League”) while demanding oversight into their operations and academic programs on flimsy pretexts.

Like a lot of AI, the one I used for this works on a “freemium” structure, with limited features for non-paying users. To make a song I only needed to enter a text prompt (length-limited for free users) describing it. I asked another AI engine to suggest a prompt and asked it to create lyrics for a song (though the song-creating AI site would have been glad to generate its own lyrics). The more general AI answer-bot suggested including artists whose style the music-generating AI site should seek to emulate. I picked Phil Ochs and the Fugs. I wanted something with real anger and satiric bite.*

I created around six songs. None of them gave me that, even when I tweaked my prompt. What came out was sweet-voiced singers with an attitude of pop-music yearning, or acceptably sorrowful disappointment in their delivery. The AI lyrics did come up with a few phrases that had some charge to them, but the lyrics generally suffered from what I personally call “Horse With No Name” defects.** My prompt specified “gruff,” “angry,” “rough” or even “sloppy” to describe the vocal delivery I was looking for, and out came the singers with an air of polished regret, and lyrics that groaned under their attempt at machine-constructed sincerity. The best I could say for the lyrics on the songs? They might pass as modern recording-production-style versions of the parodies created for the Spinal Tap acting company’s folk-music parody A Mighty Wind.***

These results fed into the context I chose to present them in. I wrote a script for a podcast, supposedly devoted to American folk and Americana music. I decided the podcast presenter would be earnest, but a bit removed from the less-commercial segments of American folk music, and so I made her British. She would be portrayed by the machine speech that I use on my writing computer as a proofreading aid.**** As the token human in this enterprise, I’d appear as a hype-man for the Parlando Project.***** Over the next day I wrote the podcast script and recorded it, folding in sections of the machine-generated protest songs. I slightly degraded the audio quality for the British host’s dialog, though now that I’ve finished I think I should have done that for my own dialog instead, as I’d more likely be the guest relying on a remote overseas link for the imaginary podcast.

I had fun doing this, trying to gauge how many tells that this wasn’t on the up-and-up I should drop before revealing, in the last minute, the near-total AI nature of the content. For the names of the Americana acts purported to be performing the AI songs, I decided to burlesque the names of the U.S. 19th century Fireside Poets. I think “Greenleaf-Whittier” is a great name for a band in that genre – failing that, Jeff Tweedy, if you’re reading here, you’re welcome to it for the next Wilco album title.

Greenleaf Whittier

Featuring the exciting new song “University Surrender” you heard on the “Kit That Sounds So Real” podcast.

.

The audio player below will let you hear the 18-minute program. The program opens with a snippet of an AI-generated folk instrumental for which the only prompt I supplied was its title: “Obey in Advance.” Though only a small selection, I think it demonstrates that AI-generated music without vocals is particularly “real” sounding. The program continues with parts of three versions of a song called “University Surrender” where the AI program supplied the words, music, and fully produced recording in three slightly different Americana styles it thought appropriate. The three versions resulted as I tried tweaking my text prompt – and while distinct, on repeated listening they seem somewhat “samey” to me. Smoother than I was asking for, “Ralph Waldo Bryant’s” version, rising to a falsetto delivery, almost works for the material despite the pitch-control artifacts I can detect in the computer-generated performance – but remember, as I said earlier in this series, the same artifacts are now common with recorded human vocalists in current pop. “Greenleaf-Whittier’s” cover did add one nice, out-of-left-field touch: the flagrantly computer-voiced, autotuned opening refrain of the title before continuing into its bouncy two-step country groove. And then there’s “Oliver and the Rolling Homes’” version of “University Surrender” whose arrangement serves up a country-music playlist/station format sound. I was laughing hard as I heard the small-town-worshiping, my-truck, my-girl, I-may-get-a-little-drunk-sometimes-but-I’m-a-hardworkin’-man-like-my-daddy sonic approach, but this time holding forth on tenure and syllabus issues. And then there’s “Ivy Towers Bow,” which is said to be written and performed by “J. R. Lowell.” The lyrics here were written by an AI chatbot, and then those lyrics were given to the AI music-generating program to make this song. Musically this one doesn’t give me anything – so generic. I almost didn’t include it, but I decided it was an example showing that a generate-songs AI is on par with a text-focused AI when writing lyrics. The final song on the fake podcast might be the one of the group that does the best emulation. If I was listening casually and “E. E. Peterbuilt and the International Harvesters’” “The Emperor’s New Chains” came on, I’d think it better than many songs in its style. Oddly enough, the AI program produced it when I goofed and clicked generate when I’d only partly written the prompt “Folk or Americana protest song, gruff voice…” – and by not having to lyrically add the academic details that made Oliver and the Rolling Homes’ version of “University Surrender” so unintentionally hilarious, its Horse-with-No-Name lyric faults are not as exposed. If I wanted to pick one AI song from the ones I generated to fool a careful listener, I’d pick this one. You know you’re in the Uncanny Valley when the guitars have faded out and the robot vocalist gives us a little aside into the still-open mic. Spooky.

If you don’t see the audio player gadget to play this imaginary podcast, this highlighted link was human supplied to let you hear it, and will open a new tab with its own audio player.

If my courage and energy hold out, I still want to write one more post about what I call “the guild issues” that concern some artists engendered by plausible AI results.

.

*The AI program didn’t object to those two 20th century folk-rock artists of outrage and cutting satire being supplied for models – but it completely ignored trying to emulate them. When I tried “Bob Dylan” – suggested by the separate AI that’d given me a prompt I could use elsewhere – the song AI immediately refused to do so, presumably due to a specific concern about IP.

**”A Horse With No Name” was a 1971 song, recorded in England by a band led by expatriate Americans. The recording, done by humans, not AI, sounded like someone had anachronistically entered our future and asked AI to “Create a song that sounds exactly like a Neil Young record.” The lyrics went forth despite including some awkward lines like “There were plants and birds and rocks and things,” “the heat was hot,” “’Cause there ain’t no one for to give you pain,” and “Under the cities lies a heart made of ground, but the humans will give no love.” To spare us from more lyrical howlers, the song also featured a lot of repeated “la la la’s” in its chorus, well-performed in a CSN&Y style of harmony.

The song was a substantial hit in both the U.S. and Britain, indicating that it worked as a song for its audience nonetheless.

***Hey, I’m a fan of Spinal Tap. Everyone is! And rating art is a fool’s game – but “A Mighty Wind” is every bit as good, maybe better.

****The “read aloud” feature in the current versions of Microsoft Word is a huge aid to my self-proofreading. With my neuro-wiring, it lets me catch a great many errors I’d otherwise miss, and using the female British voice enhances the “hearing this anew, as if I didn’t write it” factor that makes it so effective.

*****The stuff I say in the middle of the satiric podcast concerning the Parlando Project is how I actually feel about the nine-plus years of stuff I’ve put out here.

AI music may be telling us something about how music works for listeners – and we might want to change that

I had to catch myself editing the last post – as I discussed my use of virtual instruments in place of the actual instruments and the new plausibility of thoroughly AI music, I was tempted to overuse the word “verisimilitude.” Is that really something essential to the art of music? I like the cranky not-quite-real sound of the Mellotron after all. If musical art should be imagination, music itself certainly doesn’t care if the instruments are real – though musicians might, from legitimate guild concerns. Then we moved to having the computer play the instrument, and that too asks about human-displacement – and now we have AI creating songs outright from very generalized prompts. If you’re a composer, a musician, or a listener, this raises questions.

Let’s start by being honest with ourselves as listeners in avid or casual modes: as we pass through life, music becomes a sort of sonic homeplace – a location where something sounds similar to what we’ve heard before, with just enough difference to stave off boredom, just enough new to add the spice of novelty. Some musical ears live in homogenous towns, others in more diverse ones, but we go to music for the effects we’ve learned to appreciate.

Current entirely-AI music exploits this: taking what we know of form and sounds, following its predictability in a way listeners have been known to appreciate, and serving our aural expectations back to us. When they do that, the robots are telling us something about ourselves. As I ended my last post, if we object to AI music, it may be from the romantic feelings we retain for human artists. We want fellow humans to make these sounds with and for us, and our response may rise to disgust when we are tricked. And here’s a problem: it’s getting harder to say you won’t be tricked.

If this is so, what hopes do we have? One: imperfection, at least of a kind. Let me interject here that I’m not talking about the imperfections of boredom, of which there are many. I’m talking about music that may be a bit more haphazard and unpolished. If machines can precision-target our musical comfort-center receptors, then let us distrust that response at least in part.

Commenting reader rmichaelroman has already guessed that might be part of it, mentioning the performance, rough in recording quality and musical finesse, from the LYL Band at an Alternative Prom in someone’s basement years ago. Even stored on honest recordings, live music – particularly live music that is truly live, with unplanned-out moments, with instruments reveling in their specific bodies, breaths, and vibrations – offers vivid imperfection.

Or too: voices with less talent than intent. I try to not over-burden my listeners with self-made excuses for my singing voice – but for all its limitations, it remains the one I have handy to realize the songs. Would AI be able to duplicate those imperfections? Perhaps, but it’s unlikely to want to.

When music practices and equipment reached points of greater mastery in the 20th century, reaction in the form of purposely avoiding those felicities arose. Midcentury pop music was opposed by the rising Folk revival and by early Rock’n’Roll. Then later, perfected Rock recording technology and improved musicianship found themselves met with Punk and Hip-Hop premised on the idea that a minimum of tech or muso-chops can still make an effective statement. By the way, I believe those technical hierarchies produced worthwhile music, but those that dispensed with them did so too.*

And when I wrote about voices with more intent than talent: for all the romantic imprecision of assigning internal motivation from a separated artistic product, what we believe we understand about why a piece of music was produced has importance. AI-music, however good it is at mimicking the technology and sound of music we like, presently offers only the weakest and least admirable answers to the question of why it was called into existence. To make some money? To make inoffensive sonic décor? To sell drinks to dancers? To show it can be done, as if that “verisimilitude” was the most significant thing about art? Some music I have liked was made for such mundane reasons, but in the future we may find intent more necessary to weigh.

I’ll leave with one more brief metaphor as AI music reaches a level of musicological competence: we may have come to something analogous to painting’s position as photography entered the realm of visual representation. AI music in artistic hands may eventually seek out flagrantly subjective uses of the technology – and music made by humans holding physical objects in real time will increasingly begin to value qualities beyond sounding customary and “correct.”

If my energy holds out, there’s at least one more post in this AI series before I return to our regular combinations of literary poetry with original music; it will address in more detail some of those music things I call “guild concerns.” If you miss the usual Parlando Project fare, there are over 800 examples of that here, so feel free to look around.


I wouldn’t want to call this performance imperfect, but there’s a human unexpectedness to it that satisfies me

.

*The 1950s-early ’60s folk music revival had elements that I found closely mimicked by the Punk/Indie movement of following years: the DIY convictions, the gumption to form or transform venues and record labels, the opportunities for out-of-the-mainstream ideas and sounds to sneak in between the more polished and “professional” acts. Similarly, Hip-Hop followed the folk process: use what instruments were at hand, assertion before sounding “correct,” recombining shared cultural materials (floating verses and borrowed tunes for the banjo brigades; turntables, cheap drum machines, and samples for Hip-Hop; contemporary social comment for either). Musicologist Ethan Hein said in a BlueSky post that helped spur me to write this series, “You can get across the essential elements of hip-hop and house with buckets (Hein here is referring to overturned buckets used as drums – FH) and voices. Computers and sound systems are nice to have but inessential. Long after Spotify is gone, people rapping over beats will still be with us.”

Artificial Intelligence in Music: the last wall of the castle

Just a note to readers coming here for the experience of literary poetry combined with the original music stuff we do – I’m still doing some “summer vacation” writing that breaks from that form this month. This post does deal with music – if from another angle – and I expect to fully return to our traditional presentations this Fall.

So, I’m at my frequent breakfast place on a fine August morning that has not yet reached the AQI-alert level of smoke. In an unplanned coincidence, Glenn walks in. We’d talked last week about, of all things, Herb Alpert and his 1960s instrumental hits, particularly “A Taste of Honey,” which was a chart topper in our youth.*

Glenn has some Herb Alpert and the Tijuana Brass CDs, but like many he’s as likely to have a CD player as he is to have a way to play 78 rpm shellac records. He’s been trying to get their music onto his new Mac Mini, but his old USB Apple Super Drive won’t recognize a music disc.** Somehow (likely because of my current preoccupation with finally writing about it) we got to talking about AI. I mentioned that I’ve been struggling to use my collection of Virtual Instruments (VIs) to realize recordings with brass instruments that capture the full level of articulation the real thing can produce.

We talked a little bit about the various ways these instruments can be controlled: little plastic keyboards, various guitar pickup schemes, even wind controllers. Glenn has a bit of engineering background – this had (I hoped) some mutual interest.

I have little or no guilt in using VIs here for the Parlando Project. Not only is a VI grand piano highly affordable, it takes up no space, requires no fancy mic’ing, and produces a pleasing sound. Given my musical eclecticism, I think of how much more cluttered my studio space would be if I continued to collect odd instruments that I would experiment with to add unusual colors to pieces. And though I can’t actually play a real cello or violin, I can use a MIDI guitar controller to add those sounds. I’m grateful for those options for realizing my music.

Then I told Glenn about the Mellotron – a pioneering virtual instrument before such a thing had a name and acronym. Rather than the hard-drive databases my computer VIs use – digital recordings of actual instruments playing a range of notes in different articulations – this primitive mid-20th-century machine used strips of analog tape, each strip a recording of an instrument playing a single note. When professional musicians (among them: The Beatles, the Moody Blues, King Crimson, The Zombies) started to use the Mellotron, some objected: could the Mellotron put real musicians out of work? When the Beatles and their producer George Martin wanted a high trumpet part on “Penny Lane,” a real musician was contracted and played that difficult and memorable part. But flip the “Penny Lane” 45 RPM record over, and on “Strawberry Fields Forever” Paul McCartney pressed a Mellotron’s keys to produce an eerie flute sound. Listening closely, it wasn’t quite like a real musician blowing into a real flute. It was maybe 80% there – but if it sounded a little fake to a discerning ear, one might think it was still an interesting sound, whatever its level of verisimilitude. But imagine you’re a flutist in 1967 – the Beatles could certainly afford to pay for your services. Though bands moved on to use more complex synthesizers and other devices, real instruments still retained a level of preference when their fully-authentic sound was called for.
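
For readers who like to see the mechanics, here is a toy sketch in Python of the sample-playback idea described above – the names and file paths are invented for illustration, not taken from any real virtual instrument: a modern sample-based VI is, at heart, a lookup from note and articulation to a pre-recorded sample, where the Mellotron had only a single tape strip of one sustained note per key.

```python
# Toy model of a sample-based virtual instrument. Every name here is hypothetical.
# Each (note, articulation) pair maps to a pre-recorded audio file on disk.
SAMPLE_MAP = {
    ("C4", "sustain"):  "flute_C4_sustain.wav",
    ("C4", "staccato"): "flute_C4_staccato.wav",
    ("D4", "sustain"):  "flute_D4_sustain.wav",
}

def trigger(note, articulation="sustain"):
    """Return the sample a VI would play for this note and articulation,
    falling back to the sustained sample if that articulation was never recorded."""
    sample = SAMPLE_MAP.get((note, articulation)) or SAMPLE_MAP.get((note, "sustain"))
    if sample is None:
        raise KeyError(f"No sample recorded for {note}")
    return sample

print(trigger("C4", "staccato"))   # flute_C4_staccato.wav
# The Mellotron, by contrast, was roughly one fixed "sustain" strip per key.
```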

Could I pay or otherwise record real musicians instead of using my computer VIs? It’s hard for me to imagine a cello or violin player that would accept my chaotic and self-imposed quick-turnaround schedule, naïve/inconsistent musicianship, my shifting moods, and my no-revenue-project budget.

In my defense, this human being may well be playing the instruments,  just as I play guitar: this note, here, this loud, this long. Other times I’m scoring the music the VIs play, writing or modifying the MIDI event data rather than on a music-staff leger.

Still, there are some gray (or even darker) areas. For me, that started with using arpeggiators: ways to tell a computer you want it to take a chord and play the notes within it in a rhythmic series. I can tell it what note-length to use, something about the order of the notes, but the precision is then all the computer’s – and arpeggiators will have presets to suggest, and I might agree to one. Numerically quantifying the level of plausibility of my own work is problematic, but VI technology is such that even with my limited musical-instrument-operator skills, I may approach 90% there – but my musicianship, with its intents, and also its limits, is still involved. I can’t help but think my brass VIs sound bad because they are so far from the families of instruments I have played in “the real world.”
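
For the curious, here is a bare-bones sketch of what an arpeggiator does as I understand it – a Python toy of my own, not the code inside any product I use: given the notes of a chord and a note length, it emits the chord tones one at a time in a chosen order.

```python
def arpeggiate(chord, pattern="up", note_length=0.25, cycles=2):
    """Turn a block chord into a rhythmic series of single notes.

    chord: MIDI note numbers, e.g. [60, 64, 67] for a C major triad
    pattern: "up", "down", or "updown" ordering of the chord tones
    note_length: duration of each note, in beats
    cycles: how many times to run through the pattern
    Returns a list of (note, start_beat, duration) events.
    """
    if pattern == "up":
        order = sorted(chord)
    elif pattern == "down":
        order = sorted(chord, reverse=True)
    else:  # "updown": rise, then fall without repeating the top or bottom note
        rising = sorted(chord)
        order = rising + rising[-2:0:-1]
    events, beat = [], 0.0
    for _ in range(cycles):
        for note in order:
            events.append((note, beat, note_length))
            beat += note_length
    return events

# C major triad, rising pattern, half-beat notes, one pass:
for event in arpeggiate([60, 64, 67], "up", 0.5, cycles=1):
    print(event)   # (60, 0.0, 0.5)  (64, 0.5, 0.5)  (67, 1.0, 0.5)
```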

But a greater temptation arrives: more sophisticated computer “players” that take a chord sequence and duration I supply – from composition or by my playing something – and augment them by playing those cadences musically in a style they supply and I consent to. These “players” have multiple adjustments; I can (and often do) modify what they supply as defaults, but this further development bothers me. Am I still the composer? In a human-musician world the answer would be clear by well-established tradition: yes, they’d say, I’m still the composer. Professional musicians, working before computer algorithms, have long supplied “feels,” timbres, expression, and entire decorative lines. They might even revoice the chords or play extended harmonies. They can do all that my computer does for me – and more, and better. So why do I feel bad when I ask my computer to do this? Well, there’s the impersonality of it. I’ve worked with others who’ve made important musical contributions to work I’ve originated, and that doesn’t feel the same. While I think it would be problematic to impose this on human musicians for the rewards I can offer, there’s more to it than not offering them that opportunity. I can’t help but think I’m cheating, that these realizations are fraudulent.

Yet guilt hasn’t stopped me from using these computer functions, and you’ve heard some of the parts they’ve played in Parlando Project recordings. The term artificial intelligence is elastic – it’s become a marketing buzzword – but these enhanced arpeggiators and play-with-this-feel-or-articulation variations could fairly be called AI, even when the same musical piece has my vocals of hoped-for subjective quality or my it’s-supposed-to-sound-like-that guitar playing.

That said, over breakfast I tell Glenn about how far AI music generation has come in the past few months. Just by entering a prompt or making a menu selection, often made up of generalized summary words for genre or playing style, one can create an entire song including vocals and all the musical accompaniment. Earlier in this decade the results would’ve been overly simple or subject to embarrassing defects. Now the results easily pass the “Turing Test” for casual listeners. If the Mellotron flute is 80% there, and my best VI violin might be 90% there, these entirely machine-generated songs are about 95% there in verisimilitude. Sure, human musicians, real composers, even avid music listeners, are forever aiming for that extra 5% of skill, originality, and listener appeal; but when I listen to these productions, which can be produced endlessly in minutes of hands-off computation time, the “tells” are the thoroughly-AI songs’ meh obsequiousness to genre musical tropes and the slight artificiality of the machine-made vocalists. And that’s a problem. Centuries of musical theorists, from the days of music theory treatises written with a quill onward to the accretion of hardened commercial songwriting craft, have supplied in ink-stained longhand all the steps to create a coherent musical structure with predictable effects. The computer coders only have to apply a light dressing of adaptation to transfer this consensus for robotic mass-duplication. The singers would still have remained a challenge – except for a fateful choice: popular music has increasingly prized machine-aided polishing of human voices to remove the inexactness they are prone to. Ironically, what could have been the last rampart to be surmounted by AI was dismantled by meticulous vocal production and ubiquitous auto-tune before the tech-bro Visigoths arrived.

I said to Glenn over breakfast in the café “Here we are talking about a popular song released 60 years ago, one we both still remember. ‘A Taste of Honey’  didn’t have any vocals, and now AI could easily produce an entire album of other instrumental songs to surround it – and even listening carefully, I’m not sure we could tell AI from human-written and realized musical pieces.”

This is not a theoretical exercise. Streaming platforms and playlists care even less than casual music listeners about AI content standing in for human work. In some genres, the algorithm that supplies your next song playing may already be a robot suggesting robots playing robot-composed imitations of human music. The only thing holding off an overwhelming onslaught of AI slop is that we, the audience, are still invested in the erotic worship of flesh-and-blood young performers and some residual romantic veneration of the human artist. Those things may be illusionary, but even if so, those things may be our defense. Do I have any other hope to offer? Yes, there’s something else, that comes next post.

This is the author of the play “A Taste of Honey,” for which the tune was composed. Her play frankly portrays a whole range of working-class situations in ’50s Britain. A teenager when she wrote her play, she was 21 years old when this cheeky interviewer interrogated her. What admirable self-confidence!

.

*As vividly as I remembered the song, I knew nothing about its origin – and while I could distinctly recall the musical sound of Alpert’s recording in my head (trumpet, trombone, and that beating drum), I also heard in my mind vocals and a crooner singing. I tried to find the version with the sung lyrics I was remembering. I likely had heard the (somewhat unlikely) version of “A Taste of Honey” done on the Beatles’ earliest LP, but I don’t think it was that one I was hearing in my head.

**If you still own that ancient Apple artifact, the external Super Drive CD/DVD drive, you should know that it won’t work unless connected directly to one of your Mac’s USB ports. Even deluxe powered USB hubs or docks won’t work – the drive will seem completely dead when connected through them.

Prompt: write that AI post you’ve put off for a year

The responses evoked by so-called Artificial Intelligence are a complex mix. Expressed feelings recently would include any of the following, in any combination: disgust, fear, ridicule, outrage at theft of Intellectual Property, and charges of tech-bro over-valuation. Let me say at the outset that I have caught myself feeling all those feels too.

I’ve planned for some time to write a post about AI here, and this summer period, when I feel free to take short holidays from our usual music/literary focus, would be a good time for it. Then this morning I read this post by blogger/teacher/musician Ethan Hein,* and I’ve been driven to start this long-delayed, provisional, and likely incomplete post on the subject.

What Hein wrote isn’t extraordinarily provocative. “I understand the impulse to decorate your newsletter with AI slop images but when I see that, it makes me assume that you don’t know what you’re talking about.”

If I’m not a proponent of AI, why would that motivate me?

Well, for one, I could be found guilty of the failing he uses as a marker of knowledge. And if my energy holds out, there’s more than that to say.

THE MATTER OF IMAGE

As the Parlando Project moves into its 10th year, how I work and present things has been a learning experience for me. Some years back I noted that images in blog posts increase new visitors to this blog. Now, the Parlando Project is a poetry/varied music thing, and a great many of the casual visitors don’t become regular readers or listeners – but some  might.

Given that I’m an abysmal visual artist, I began using this way of finding images: public domain pictures or (I hoped) benign reuse of images found on the Internet. This is a more complex subject than I’ll go into today, and I know enough to know that as a courtesy or strict matter of rights, I’ve likely sinned in regards to crediting images. The Parlando Project isn’t even a non-profit organization at this point – my plan from the start was deliberately to be a non-revenue thing. I want to spread knowledge and outlooks and to promote other people’s art. I certainly don’t want to remove value from others’ art.

The original attempts at figurative AI illustrations that I saw were ludicrous. I knew there was this thing called DALL-E, and the warped and poorly detailed images others shared seemed to have come straight from the Island of Misfit Toys. But in 2022, I was made aware of a new option. I’m a long-time user of the Adobe Audition audio editing program, and Adobe had a new product offered for beta-testing called Firefly. Firefly claimed to produce better AI illustrations, and it also claimed a Unique Selling Proposition that, AFAIK, has remained unique: they said it was trained only on art whose creators had been compensated.**

The very first image I used from Firefly actually pleased me. I did modify it, but it worked for illustrating the musical setting of the poem I was presenting. Hey, I could use something like this, I thought.

April 2023, I want to show William Carlos Williams dancing alone. My first use of Adobe Firefly to generate an image.

This acceptance of the tool was reinforced by my decision to present videos sometimes. While a blog post needed only a single illustration, having something germane to put up against the linear flow of a video asked for multiple images to fit different points in the song.***

I think this was the final Parlando Project use of AI-generated images to illustrate this very short Emily Dickinson poem’s “lyric video.”

.

Over about two years, I continued to use Firefly. My experience was mixed. No matter how much care and detail I tried to put into the prompts I often couldn’t get anything like what I wanted. I’d resort to 20 or 30 tries to get one I could charitably use. Afterwards, I’d sometimes wince at what I accepted and included with Parlando work, but I have a policy here of leaving work up “warts and all.” But I did write “mixed.” Just like that initial image that I used of a purported dancing William Carlos Williams, some of the ones I got from Firefly pleased me, and I hope pleased audiences. Maybe someone now sees a poem in a different light, or checked out some music they otherwise wouldn’t have heard.

A combination of things turned me away from AI-generated illustrations. The amount of time to go through all those bad results to pick the sometimes barely acceptable one bugged me. I could use that time to read or research more on poets and poetry, or to make somewhat better recordings! And partway through my use I started to read the charges of extraordinary energy use by datacenters generating AI.****  While I didn’t make some hard and fast decision, my Firefly use just tailed off. Now in the past year, the outrage against AI has grown, particularly from artists in various fields. If my personal energy holds out and I continue to write on this, I’ll get into more detail on those concerns and theorizing around AI, but those concerns are genuine feelings about genuine threats.*****

This is not an AI-Generated Image

.

Which leads me to my personal concern, one I had reading Hein’s honest and informal opinion. I’m nearly willing to join the pitchfork and lit torch brigade marching on the AI castle, and I share their concerns. But for around two years I was up in my energy-dense lightning-powered lab twiddling the dials to generate this – well, yes it is, isn’t it – monster. Look, villagers, I didn’t intend to drown the little flower-picking girl – I was just trying to juice up my low-budget poetry/music blog. I actually had moments of pleasure when the monster grunted semi-intelligibly!

I made a short reply to Hein this morning; he clarified that his statement was more of a vibe thing. I understand – I make those suppositions too. This post is, in so many words, asking for mercy for my use of AI image generation. If posts on AI here continue, what I’ll write will get more complicated yet, but that’s enough for today.

.

*Hein has a wonderful way of writing about the theory and practice of musical composition. I’m grateful for the things I, an untrained and largely naïve composer, have learned from what he’s written. His particular specialty is examining (with practical examples) the disconnect between the venerable Western/European musical tradition and the way music is realized here in modern America. Currently he seems to be pivoting to podcasting his information, but links to his work are here.

**Presumably, Firefly’s source material was Adobe’s stock art library.

***I sometimes ask myself why I don’t just do a single still image and leave it at that in my videos. After all, there are many YouTube videos that do only that for music-centric content. Despite my love for spare, concise poetry, I speculate I’m just a maximalist with the arts that I’m not knowledgeable about.

****My first thought reading those energy estimates was: what is the methodology to determine how much energy draw was due to AI? I’m an old IT guy. If one had full access to all the systems, and wished to log the amount of CPU and access time for each sub-process running on them that was known to pertain to AI, then one could make a reasonable estimate from that mass of information as a proportion of the total energy drain of the entire facility. I can’t imagine that anyone writing about the astronomical AI heat and energy drain had such access. They might have some sense of the total for a particular facility, but I’m unaware of any facility that only does AI processing. Facility A may use a whole lot of cooling and electricity, but how much of it is for transcoding cat videos, searches for what actor played whom in that movie, and order processing for Labubu purchases? Did someone use estimates from proposals? It would be easy to imagine that any engineer asked to spec the energy and heat needs for establishing AI at a site would be encouraged to spec high.
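
Here is the kind of back-of-the-envelope arithmetic I imagine an insider with that logging access could do – a hypothetical sketch with made-up numbers, not anyone’s actual methodology: apportion a facility’s metered energy by the share of compute time its AI-related processes consumed. Note the sketch’s own weakness, flagged in the comments: it treats every compute-second as equally power-hungry, which AI accelerators are not.

```python
# Hypothetical per-process usage logs for one facility (all numbers invented).
# Each entry: (process name, compute-seconds consumed over the billing period)
usage_log = [
    ("llm_inference",    9_000_000),
    ("model_training",   6_000_000),
    ("video_transcode",  8_000_000),
    ("search_indexing",  4_000_000),
    ("order_processing", 3_000_000),
]
AI_PROCESSES = {"llm_inference", "model_training"}

facility_energy_kwh = 2_500_000   # metered total for the same period

ai_seconds = sum(sec for name, sec in usage_log if name in AI_PROCESSES)
total_seconds = sum(sec for _, sec in usage_log)
ai_share = ai_seconds / total_seconds
ai_energy_kwh = ai_share * facility_energy_kwh

# Caveat: this naively assumes every compute-second draws the same power,
# which GPU-heavy AI workloads do not. It is an estimate, not a measurement.
print(f"AI share of compute time: {ai_share:.0%}")           # 50%
print(f"Estimated AI energy use: {ai_energy_kwh:,.0f} kWh")  # 1,250,000 kWh
```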

That said, total energy costs for our modern computerized world do seem to be increasing, and AI does seem, at this time, to be remarkably energy-demanding.

*****What did I do instead? I think I’ve had less weird or imaginative blog illustrations recently – that’s a loss, if a survivable one – and per Hein, the cheesiness of some of them might not have helped. For videos I’m subscribing to a product that offers a portion of a leading stock image library. My report: there are plenty of times when I hate a not-quite-right stock image as much as any AI fresh-off-the-slab monstrosity. And I worry that those stock image libraries may soon enough include AI-generated images.

If you are reading this post and think, “But he didn’t say this! That’s the key point.” I may yet get to that.