
As well as being a huge technology enthusiast, I have always been a big music fan and record collector. I was therefore intrigued earlier this year when over one thousand musicians announced the release of a rather unusual new collaborative album.
The list included some of my favourite artists, such as New Order, Tori Amos, Pet Shop Boys, Kate Bush, and Public Service Broadcasting. The album was called “Is This What We Want?”, and one aspect of it makes it very unusual indeed.
The album is made up entirely of the sounds you would hear in an empty concert hall, music venue, or recording studio: there is no singing and no instrumentation at all, just the sound of silence and the occasional cough or the sound of a door opening or closing. It can be found on most streaming platforms, although be warned: it’s not a great listen (unless, of course, you really like experimental ambient music and/or John Cage!).
The intention behind the album becomes clear when you read its tracklisting. Read in order, the track titles spell out the phrase “The British Government Must Not Legalise Music Theft To Benefit AI Companies”. What does this mean?
Unlike many jurisdictions, the UK does not allow text and data mining for commercial purposes. Text and data mining is, however, one of the ways in which artificial intelligence platforms are trained: AI platforms such as ChatGPT and DeepSeek, regardless of what they are being asked to generate, produce their outputs by utilising all of the data they have been fed and trained upon. The higher the volume of data an AI platform is trained upon, and the better the quality of that data, the more robust and sophisticated a generative AI’s output will be.
That output is often very impressive indeed. As most of us now know, generative AI is able to quickly and cheaply generate or recreate written materials, pictures (including photographs and movies), voice, and music, often indistinguishable from work created by humans. Plenty of AI platforms exist to allow you to draft an academic paper, a novel, artwork, legal documents, videos, and of course music.
By way of example, whilst writing this piece I was able to use an AI song generator to produce the chorus for a 70s soft ballad about cheese (I asked for a “70s cheesy ballad” and it took it literally), which, whilst unlikely to bother the pop charts, was still remarkably impressive given that it took only seconds to generate.
It was, however, very derivative, and sounded very similar to several well-known 1970s artists. This raises the question of whether one would rather use an AI tool to generate music or hire a professional. Given the somewhat unpredictable nature of the global economy at present, it’s not hard to conclude that a cheap or free AI tool may be a more appealing choice than paying thousands to a known artist, in much the same way as it may be more appealing to have an AI take a first run at a legal document before instructing a lawyer. The truth is that an AI-generated output might not be as good as engaging a musician, or a copywriter, or an artist, or a lawyer, but it will usually be cheaper and quicker, often vastly so. In other words, it might be good enough.

The problem is that it might be “good enough” only because it is essentially repurposing the intellectual property of someone who is an expert. I asked ChatGPT to create a song in the style of Kate Bush and, sure enough, received the lyrics and music, including a piano arrangement, for a song which, yes, looks like something Kate Bush might have written. It was also better than my cheese-inspired song. Most of all, it proved that the artists behind “Is This What We Want?” have a point: they have made their livings out of creating intellectual property, only to see that intellectual property used to train an AI (for free) to create work derivative of their output (again for free).
Over the last two months, social media has been filled with AI-generated images created in the style of Studio Ghibli, the Japanese animation studio responsible for critically acclaimed movies such as My Neighbour Totoro and Spirited Away. The studio’s founder, Hayao Miyazaki, has previously criticised AI-generated artwork as “an insult to life itself” and is unlikely to have been impressed by the internet becoming flooded with images derivative of his life’s work, created just by typing something into ChatGPT.
The problem is that one person, one group, or even one jurisdiction refusing to allow intellectual property to be used to train AI will not change anything. Intellectual property laws are not global, whereas generative AI is. Given the AI arms race we appear to be witnessing, governments are very keen to ensure that they are at the forefront of embracing AI. The risk that AI is used as a sophisticated derivative-works machine is not foremost in the minds of lawmakers; ensuring competitiveness on the world stage is. It would likely take a legal challenge by a corporation with very deep pockets, one that relies upon the exploitation of its intellectual property for its success, to bring about any meaningful change.
I don’t see this happening soon. Competitiveness is foremost in the minds of us all at present: we are all expected to produce positive outcomes as quickly and as cheaply as possible, and, needless to say, this goes for our competitors as well. When ChatGPT was first released, the line I most often heard was that workers “will not be replaced by AI but will be replaced by workers who are using AI”. However, if you are using AI to generate content and that content infringes someone’s intellectual property, then it is not impossible that the owner of that intellectual property could seek to protect their rights and their livelihood.
But what would this look like? I’m old enough to remember being warned not to photocopy sections of books when carrying out legal research, as doing so would be a breach of copyright in the jurisdictions in which I worked; perhaps a similar view will be adopted towards the use of AI, but in the current climate I doubt it. Whatever happens, in whichever jurisdiction you are based, there’s a clear need to take a look at the impact the widespread use of AI is having not just on how quickly we can produce outputs, or even on the job market, but on art, media, and our own work product as well.
Paul Haswell
Paul Haswell is a senior lawyer based in Hong Kong, specialising in Technology Transactions and Sourcing. With over 20 years of experience, he focuses on TMT matters, including data and cybersecurity, telecommunications, and emerging technologies such as AI and blockchain. A tech enthusiast since childhood, Paul has handled major technology disputes and offers a blend of legal expertise and passion for innovation.
Outside of his legal work, Paul is a tech and law podcaster and a DJ. He co-hosts the “Sunday Escape” radio show on RTHK and the podcast “Crimes Against Pop.” A music lover with an extensive vinyl collection, Paul enjoys discovering and sharing new music. He’s also a sci-fi fan, particularly of “Doctor Who.”

