Yet another blog post about ChatGPT
Artificial intelligence (AI) chatbot ChatGPT became an overnight superstar when it launched in November 2022. Does the world need another ChatGPT blog post? Maybe not. But we reckon Content Design London (CDL) ought to have something to say about a technology that really could turn the content world upside down.
We're not experts in AI. We do not have all the answers. But we hope this piece might explain a few things about ChatGPT for those who are new to it, and maybe kick off some important discussions.
Our position on it will doubtless change as we get more familiar with it, and as you, the content design community, challenge us and offer fresh perspectives. Please do.
What ChatGPT is
ChatGPT is a piece of artificial intelligence (AI) software that has been trained by its creators OpenAI to answer questions in a conversational way that seems natural to human readers.
In this blog post I'm not going to worry too much about how AIs work. In simple terms, AI just means software that can learn, and has learned to do something we normally associate with human intelligence. You might already be using AIs like Google Translate and Amazon Alexa every day. ChatGPT's underlying software has been around for a while. In a way it's just an extension of auto-complete, and other writing tools and language AIs can also generate content from scratch.
But ChatGPT has caused a particular stir because of the conversational way you ask it questions. One of its most interesting features is that it 'remembers' what you are discussing throughout a conversation, so you can refer to previous answers and ask it to clarify, elaborate or answer in a different way.
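Under the hood, this 'memory' is usually just the full conversation being sent back to the model with every question. Here's a minimal sketch of that idea, using our own illustrative names (the `ask` function and `fake_model` stand-in are hypothetical, not OpenAI's actual code):

```python
# A sketch of how chat history is commonly handled: every turn is appended
# to a list of messages, and the whole list is sent to the model each time,
# which is how ChatGPT appears to 'remember' the conversation.

def ask(history, question, fake_model=lambda msgs: "..."):
    """Append the user's question, get an answer, and record it.

    `fake_model` stands in for the real API call, which would receive
    the full message list on every request."""
    history.append({"role": "user", "content": question})
    answer = fake_model(history)
    history.append({"role": "assistant", "content": answer})
    return answer

conversation = [
    {"role": "system", "content": "You are a helpful writing assistant."}
]
ask(conversation, "Explain content design in one sentence.")
# 'again' works below because the first turn is still in the list:
ask(conversation, "Now say that again more simply.")
print(len(conversation))  # 5 messages: 1 system + 2 user + 2 assistant
```

Because the history grows with every turn, real chat systems eventually have to trim or summarise older messages, which is one reason long conversations can 'forget' their beginnings.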
The latest iteration is GPT-4. It costs $20 a month (about £16 at current exchange rates) to sign up to ChatGPT Plus, which gives you access to GPT-4. OpenAI say it is better at creative tasks, and more likely to get its facts right. GPT-3.5 is still free.
Using ChatGPT in content design
ChatGPT is good for
- basic research — it can give you an answer that's more tailored to your question than a Google search or a browse through Wikipedia. Though see the important warning below about fact-checking,
- first drafts — if you are stuck staring at a blank page, ChatGPT can write something to kickstart you,
- improving your draft — ChatGPT can write in whatever style you ask it to, so it can help you spot things that could be clearer if you ask it to rewrite your copy in simpler, plainer language.
ChatGPT is bad at
- staying up to date — it does not have access to any information from after it finished its training,
- fact-checking — an AI is only as good as the examples it is trained on, and as we all know, the world is full of misconceptions, half-truths and lies. You cannot trust it to always get its facts right. You must always fact-check anything you intend to publish,
- knowing what to write, for who, and why — its output is only as good as the question you ask it. It does not know who your users are and what their needs are, unless you tell it.
Writing good questions
Here are some tips for writing good questions (prompts) that we've found useful so far:
- give ChatGPT a detailed brief, not just a single sentence,
- you can tell it about tone of voice, to use bullet points, to break things up with subheadings, and what your users' needs are,
- using the word 'simple' seems to work better than 'plain English',
- if it doesn't get it right first time, you can tell it what's wrong and keep iterating.
Crafting good prompts is likely to become an important content design skill. Experiment and find out what works for you.
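One way to put those tips into practice is to treat a prompt as a reusable brief rather than a one-off question. This little helper is purely our own illustration (the function name, parameters and wording are hypothetical, not part of any tool):

```python
# A hypothetical helper that turns the tips above into a detailed brief:
# task, audience, tone of voice and format, instead of a single sentence.

def build_prompt(task, audience, tone, fmt):
    return (
        f"{task}\n"
        f"The audience is {audience}.\n"
        # 'simple' seems to work better than 'plain English'
        f"Use a {tone} tone and simple words.\n"
        f"Format the answer as {fmt}."
    )

prompt = build_prompt(
    task="Write a short guide to renewing a passport.",
    audience="people renewing a UK passport for the first time",
    tone="friendly",
    fmt="bullet points with subheadings",
)
print(prompt)
```

Keeping the brief in one place like this also makes it easy to iterate: change one line, re-run, and compare the answers.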
Ethical issues
As with any piece of technology, it is important to consider the ethical issues when using an AI like ChatGPT.
Plagiarism and copyright
ChatGPT learned from published work. Which means other people's work. Even if it’s generating new content based on that training, is that still plagiarism in some sense? And particularly when it’s writing about niche topics where there might not be a wide variety of sources to have trained on, how sure can you be that it is not just using big chunks of text unaltered?
Bias
The idea that AI is neutral because it's a machine is nonsense. If there is bias in human society, then there is bias in the writing ChatGPT learned from, and there can be bias in its answers. Even the creators admit that it 'may occasionally produce harmful instructions or biased content'.
ChatGPT can be racist, sexist, homophobic, and ableist, just like humans can. We will be interested to hear about any bad experiences you have.
Accessibility
ChatGPT's interface itself does not seem very accessible to screen readers.
But on the flip side, it could become an accessibility tool itself. Could low-literacy writers, or people who are not writing in their first language, use it to write more fluently?
Do you trust OpenAI?
Creators OpenAI have a neat, social-purpose-y mission and charter on their website. Only you can decide whether you trust that, now and in the long term, when ownership and mission can change.
Time Magazine recently published a story about the human work needed to help ChatGPT avoid the worst of the worst content on the internet.
This may be no different to other tech companies. But OpenAI have a charter that’s all about their mission to benefit all humanity.
No, it won't take our jobs
Since understanding users and their needs is what makes content design different, we don't think the content designer's job is under threat just yet.
In fact, if we can speed up the writing part of our jobs using AIs like ChatGPT, that should leave us more time to spend understanding our users, writing user needs or job stories, and testing our draft content with real users.
Crafting prompts for AI can be done well or badly. It's a skill we can learn, and it will become part of our content design toolkit, to be pulled out when it's the right tool for the job and left in the box when it's not.
[This article was updated on 23 March with information about GPT-4 and writing good prompts.]