Human and AI Authored Works: How to Tell the Difference.

Since the launch of ChatGPT, the Microsoft-backed generative AI language model developed by OpenAI, people have been quick to leverage its stunning output in a multitude of areas, from writing emails, poetry, and news articles to doing research and even writing books.

With books in particular, human authors and up-and-coming novelists are worried. Thousands of AI-generated books have appeared on Amazon alone, many of them credited neither to ChatGPT as the principal author nor to the countless unnamed human works that provide the training data from which ChatGPT draws and assembles its remarkable output.

Without even discussing whether the AI output is interesting and worth reading, the ethics are clear. Unattributed AI-generated works are unfair to the creators of the original and unique data from which AI draws its knowledge. That does not need defending.

The question of interest is how creators of original content such as literature and art can be protected and also receive their due, whether gold or glory.

First, humans using AI to produce content intended for public and/or wide-scale commercial use – articles, research, advertisements, film, speeches, books, paintings, etc. – should be obliged to declare having used it.

Second, a system should be developed – if it has not been already – whereby the data used to produce AI content is properly credited to the original creators. That is, an attribution system that owners of AI bots would be obliged to put in place. Those using AI to synthesize works would be presented with a list of references for the content used, which these "synthesizers" would then need to include in any work published publicly or commercially. Failure to do so would then be deemed a breach of the AI bot's terms of use.

Third, to ensure synthesizers abide by these ethical standards, owners and operators of generative AI should provide a system whereby content claimed to be purely human-authored can be reliably checked for AI-generated content. That is, an anti-plagiarism safeguard: works suspected of incorporating generative AI output could be fed through the mill, so to speak. There are already some tools that do this, such as DetectGPT and the more recent AI Text Classifier from OpenAI itself, but they are not yet sufficiently reliable. It might be prudent to require that all publicly and commercially available content be subject to such scrutiny as a matter of routine.

The important point is that people have a right to know when they are consulting works by other humans and when they are consulting works by an intelligence that has already obtained significant advantages over them. The existence of an advantage is not an issue, and whether the content is qualitatively better or worse is irrelevant. What is important is that humans have a right to information establishing the authenticity of the work, by which they can then judge its worthiness.

If that were the case, then it would be clear that until AI can demonstrate creativity and uniqueness indistinguishable from a human's – that is, work that is authentically its own – the worthiness of AI-generated works rightly cannot be classed as on a par with that of human works.

However, if such indistinguishability ever arrives, then we humans will surely find a way to live with it. And who knows, perhaps AIs will love and revere us for our limited yet valiant striving.
