Why ChatGPT isn't a threat to human jobs: A deeper look into AI limitations
Creative professionals are asking: are we in danger of losing our jobs to ChatGPT? Are we plagiarising when we outsource parts of our work to ChatGPT? Are we obliged to reveal to the world that we are using AI to generate copy, ideas or images? Will clients start choosing AI over creative agencies?
The truth is that nowadays, AI has infiltrated every aspect of our lives – it is all around us. With or without our endorsement, it is here to stay.
It is remarkable that the artificial intelligence company OpenAI was able to train the ChatGPT model to understand what people mean when they ask questions, and to respond conversationally.
I confess I have spent copious amounts of time feeding questions into ChatGPT as an experiment, to gauge the kinds of responses I would receive.
During my dalliance with this new tool, I asked it to write a press release announcing the appointment of Flow Communications’ head of social media, Miliswa Sitshwele, using information from the original press release (published on 17 January 2023).
Unsurprisingly, the response it generated was generic and boring. It became evident that the AI tool had sifted through millions of appointment press releases online and used its probability model to predict what the press release should look like, and what the CEO of Flow Communications, Tara Turkington, would say about a former employee rejoining the company. And bam, out came a wordy press release riddled with clichés and repetition.
I prompted it further with more instructions, and after endless follow-up questions I was able to help the system refine its responses. But still, the copy was not up to the Flow standard.
This makes sense, considering the model was trained on massive amounts of data from the internet. So the best it could do was regurgitate that information back to me, without much creative flair or consideration for the readers who would consume the piece.
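To make the idea of “predicting what a press release should look like” concrete, here is a toy sketch of next-word prediction. It is not how ChatGPT is actually built (the training text, function names and output here are my own invented example), but it shows the basic instinct at work: pick the most statistically plausible next word, not the most original one.

```python
from collections import Counter, defaultdict

# A toy next-word predictor: count which word tends to follow which in a tiny
# "training text", then always pick the most common continuation. ChatGPT's
# model is vastly more sophisticated, but the underlying idea is similar:
# predict the most plausible next word, which tends to produce the most
# generic phrasing.

training_text = (
    "we are pleased to announce the appointment of our new head of social media "
    "we are pleased to welcome our new head of communications "
    "we are delighted to announce the appointment of our new chief executive"
)

follow_counts = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follow_counts[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the training text."""
    if word not in follow_counts:
        return None
    return follow_counts[word].most_common(1)[0][0]

# Generate a short, very predictable "press release" opening.
word = "we"
generated = [word]
for _ in range(8):
    word = predict_next(word)
    if word is None:
        break
    generated.append(word)

print(" ".join(generated))  # "we are pleased to announce the appointment of our"
```

The output is grammatical and plausible, and that is exactly the problem: a probability model rewards the phrasing it has seen most often, which is why the generated copy reads like every other appointment announcement.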
The shortcomings of the latest AI “threat” don’t end there. OpenAI has openly admitted that:
- ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers
- When a user’s query is ambiguous, ChatGPT guesses what the user intended instead of asking clarifying questions
- ChatGPT will sometimes respond to harmful instructions or exhibit biased behaviour
- ChatGPT is sensitive to changes in input phrasing and to repeated attempts at the same prompt. For instance, the model may claim not to know the answer when a query is phrased one way, but provide the right response when it is phrased slightly differently
- The model often repeats that it is a language model developed by OpenAI and overuses certain phrases; biases in the training data are the cause of these problems
As I went further and further down a rabbit hole, I concluded the following:
- ChatGPT is an interesting tool to use, but it simply cannot compare with the creativity of human beings
- If ChatGPT can generate your idea, then it’s probably not good enough. Use it as a reality check to gauge how “generic” your ideas are
- If you pair a good idea with a series of facts, ChatGPT tends to produce sentences written in boring business-speak
- ChatGPT is very good at producing logic, but it won’t produce an entire program. And be careful: it can introduce bugs to your code (see the sketch below)
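On that last point, here is a hypothetical illustration of the kind of subtle bug AI-generated code can hide. The function names and data are invented for this example; the point is that the code looks plausible and runs without errors, yet quietly does the wrong thing.

```python
# Hypothetical example: a helper that is supposed to join a list of headlines
# into one summary line. It looks reasonable, but the loop stops one item
# short of the end, so the last headline is silently dropped.

def summarise_headlines(headlines):
    """Return the headlines joined into a single summary line."""
    summary = []
    for i in range(len(headlines) - 1):  # Bug: skips the final headline
        summary.append(headlines[i].strip())
    return "; ".join(summary)

# The corrected version iterates over every headline.
def summarise_headlines_fixed(headlines):
    return "; ".join(h.strip() for h in headlines)

if __name__ == "__main__":
    items = ["Flow appoints new head of social media ", "ChatGPT writes a press release"]
    print(summarise_headlines(items))        # Prints only the first headline
    print(summarise_headlines_fixed(items))  # Prints both headlines
```

Bugs like this are easy to miss precisely because the code compiles and produces output, which is why anything ChatGPT writes still needs a careful human review.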
To echo the words of a colleague, Lizette Sutherland: “AI tools have the power to take over the menial tasks – the slog – that consume copious amounts of time and can give us the time, space, energy and mental capacity to focus on real creative work. That’s a plus to me.”
I don’t think we need to feel threatened by the existence of such AI tools. They can supplement our daily work, but they don’t stand a chance of replacing our jobs.