ChatGPT: Making things safer

In part one of this blog I examined the new and powerful technology that is ChatGPT. In this second and final part, I explore the best practices required to make its use as safe and secure as possible.

It is pretty clear that we are not going to put ChatGPT back in the bottle. The techniques used to create it are well known, and although the amount of compute required seems heroic now, it will become much more widely accessible in the relatively near future. Even if compute prices do not fall radically, the kind of compute required to create GPT-3.5 is already available to many state actors and a wide range of non-state actors.

Google has announced ‘Bard’, based on its LaMDA technology, which is so compelling that one internal engineer became convinced it had a soul. DeepMind has developed a chatbot called ‘Sparrow’, which some claim is technically superior to ChatGPT.

The biggest dangers are unlikely to come from sophisticated companies like Alphabet. Smaller companies with a ‘move fast and break things’ attitude are likely to be creative and adventurous with their application ideas. But very real harms to very real people are possible with this kind of system, and such systems can be built quickly and easily by small, non-expert teams.

Five top tips to make ChatGPT safer

Even though there are many paths to ‘no’ and only one to ‘yes’, there will still be a lot of applications that qualify as reasonable. But this will not make them safe. To have confidence in a ChatGPT-powered application, I suggest implementing the following steps.

  1. There should be no deception about what users are interacting with. You cannot give informed consent if you are not informed. Saleema Amershi et al. [1] have published excellent guidelines for human-AI interaction. Importantly, these provide structure for considering the whole lifecycle of a user’s interaction. The guidelines cover how to make clear to users what they are interacting with and how to instruct them about what is expected of them. Amershi’s guidance extends throughout the interaction, from managing failure to the point, over time, when the system becomes ‘business as usual’.
  2. Users should have the option not to interact with the system. A real option – for example, an alternative contact channel.
  3. There should be an impact assessment attached to every application. Publish it on your website as you would a robots.txt file, or as you would add a licence to your source code. The Canadian Algorithmic Impact Assessment (AIA) process offers a model for this sort of thing, but some fundamental questions are a good start: Who will it hurt if it works as intended? Who will be hurt if the chatbot goes wrong? Can anyone tell if the chatbot is going wrong, and can they stop it and repair the situation if it is? (A minimal machine-readable sketch follows this list.)
  4. If your system could have an adverse effect on others, there should be monitoring and logging of what the system is doing and how it is behaving. These records should be maintained in a way that allows forensic investigation of the system’s behaviour, if required (see the logging sketch after this list).
  5. If you are not personally and directly responsible for the system, a clearly documented governance process should be developed and maintained. Part of this should describe how users can call for help and how they can complain about the system. It should also describe the processes for addressing user distress and complaints.
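
To make point 3 concrete, here is a minimal sketch of what a published, machine-readable impact assessment might look like. The file name, field names and answers are all hypothetical illustrations, loosely inspired by the questions above and by the Canadian AIA; the point is simply that the assessment can live alongside the application, just as a robots.txt file or a licence does.

```python
import json

# Hypothetical impact assessment for an imaginary ChatGPT-powered support bot.
# All field names and answers are illustrative; adapt them to your own process.
impact_assessment = {
    "application": "customer-support-chatbot",
    "model": "gpt-3.5-turbo",  # assumed model identifier
    "owner": "support-team@example.com",
    "harm_if_working_as_intended": "May deflect vulnerable users away from human help.",
    "harm_if_malfunctioning": "Could give incorrect account or product guidance.",
    "malfunction_detection": "Weekly transcript sampling plus complaint triage.",
    "kill_switch": "Feature flag 'chatbot_enabled' disables the bot within minutes.",
    "alternative_channel": "Phone and email support remain available at all times.",
    "last_reviewed": "2023-03-01",
}

if __name__ == "__main__":
    # Write the assessment to a file so it can be published alongside the
    # application, for example at a well-known URL on the website.
    with open("impact-assessment.json", "w") as f:
        json.dump(impact_assessment, f, indent=2)
```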
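
For point 4, the sketch below shows one way to keep a forensic record of every exchange. It assumes a hypothetical ask_chatbot() function standing in for whatever model API you actually call; the log format and field names are illustrative, not a prescribed standard.

```python
import json
import logging
import uuid
from datetime import datetime, timezone

# Append-only, structured audit log that can be queried later if a forensic
# investigation of the chatbot's behaviour is required.
logger = logging.getLogger("chatbot_audit")
logger.setLevel(logging.INFO)
handler = logging.FileHandler("chatbot_audit.log")
handler.setFormatter(logging.Formatter("%(message)s"))
logger.addHandler(handler)


def ask_chatbot(prompt: str) -> str:
    """Placeholder for the real model call (e.g. a request to the model provider's API)."""
    return "stub response"


def handle_user_message(session_id: str, user_message: str) -> str:
    """Answer a user and record exactly what was asked and what was answered."""
    response = ask_chatbot(user_message)
    logger.info(json.dumps({
        "event_id": str(uuid.uuid4()),  # unique id for cross-referencing complaints
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "session_id": session_id,
        "user_message": user_message,
        "bot_response": response,
        "model": "gpt-3.5-turbo",  # record which model version produced the answer
    }))
    return response


if __name__ == "__main__":
    print(handle_user_message("demo-session", "Can I cancel my order?"))
```

Logging the full prompt and response alongside the model version is what makes later questions – what did the bot actually say, and why? – answerable after the fact.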

Potential for great value in many use-cases

In my new GFT thought leadership paper ‘Using ChatGPT safely’, I have laid out the potential problems with ChatGPT-based applications and some tactics for avoiding and mitigating them in practice. I hope that our community deepens and develops these approaches in the near future.

With the correct controls and processes in place, new large language models such as ChatGPT can provide great value in many use cases, while ensuring that users and end users are protected from misunderstanding.

You can download my ‘Using ChatGPT safely’ thought leadership paper here.

 

  1. Amershi, Saleema, et al. ‘Guidelines for Human-AI Interaction.’ Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. ACM, 2019. 1–13.
