Is it ok to use ChatGPT?
This two-part blog outlines some of the harms and benefits of applications built on ChatGPT. By identifying a set of harms, we can offer a simple methodology for deciding whether it’s ok to use ChatGPT in an application, along with some rules for implementing the applications that are developed. And I can confirm these blogs are 100% human created!
Technologies can break loose from their creators’ control. Mobile phones are a great example. Bell Labs invented them, and for thirty years the telecoms industry saw them as extensions of landlines. In their view, a mobile phone simply let you make and receive calls when you were away from a fixed-line phone. In the 1990s, telecoms discovered mobile messaging (SMS), and the resulting revenue bonanza turned companies like Orange, Verizon, RIM and Vodafone into technology giants.
Then the iPhone was introduced, and suddenly phones became not phones, but portals to an internet of applications and services. Mobile social media bloomed and mutated into something that no one at Bell Labs would have recognised or countenanced, and the rest is (ongoing) history. The telecoms industry was out of the picture and Apple and Google now define the product and use cases for mobile phones.
This kind of escape can have unexpected and unwanted consequences. Many have argued that some of the new applications of mobile phones are hugely damaging. For example, applications such as Instagram and Snapchat (consumed via ‘phones’) are sometimes blamed for destroying young women’s self-image, leading to a rise in suicides (1).
A new and powerful technology
ChatGPT is a new and powerful technology that is also escaping the control of its developers. Until a few months ago, chatbots were narrow and brittle interfaces onto information sources. Applications employing them were expensive to develop and often failed in practice. For example, Facebook launched a service called ‘M’ in August 2015. It was slated to act as an automated personal assistant, but in practice, by 2018, Facebook had only achieved a 30% automation rate for the service.
In contrast, ChatGPT is a chatbot that has broad abilities in terms of handling different types of conversation and conversing on different questions. It appears to be very flexible and can be adapted to a variety of tasks cheaply and quickly. Because ChatGPT has become available and the methods used to build it are so well-known, there is now a rush from other technology providers to launch similar technology, and a legion of independent developers is creating applications using it. We have to get used to the idea that generative conversational assistants now exist and all sorts of people are using them.
My new GFT thought leadership paper, ‘Using ChatGPT safely’, tries to answer the following questions: So what? What applications are off limits? What should be done to make things safer?
Impressive feats of natural language processing
These tools are underpinned by a generation of machine learning models collectively known as Large Language Models (LLMs). The first of these was the BERT model (2) in 2018, and since then LLMs such as ChatGPT have grown to be 1,000× larger. As LLMs have grown, they have become capable of ever more impressive feats of natural language processing. They can now generate long-form text, poetry, computer code and interactive conversations.
This is a bit of a shock. There are several reasons why it has happened. Firstly, huge amounts of money have been poured into the effort, allowing a ‘brute force attack’ with vast amounts of cloud compute used to power LLMs. Secondly, the training methods developed for the first generations of LLMs turned out to be very inefficient, and very clever people have since created much more efficient ones. Finally, and possibly most troubling of all, the lifeblood of machine learning models is data, and the creators of LLMs have (with some honourable exceptions) demonstrated rapacity and ruthlessness in getting it.
Despite what it says when you ask it, ChatGPT appeared at the end of November 2022. Already, just a few months later, many unexpected applications for ChatGPT are emerging, ranging from the apparently ethically unproblematic to the obviously malicious. In my paper, I review these emerging application areas and explain why they might (or might not) be considered problematic. I then outline the steps business people can take to decide whether an application is appropriate and ethical, and the mitigations that can be put in place to make fielding it possible.
Discover what’s off limits and how you can make ChatGPT safer in part two of this blog, coming soon. In the meantime, you can download my ‘Using ChatGPT safely’ thought leadership paper here.
- Luby J, Kertz S. ‘Increasing Suicide Rates in Early Adolescent Girls in the United States and the Equalization of Sex Disparity in Suicide: The Need to Investigate the Role of Social Media.’ JAMA Netw Open. 2019;2(5). doi:10.1001/jamanetworkopen.2019.3916
- Devlin, Jacob, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. ‘BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding.’ arXiv preprint arXiv:1810.04805 (2018).