
Mitigating Risks When Using ChatGPT

ChatGPT, and emerging technology in general, is a major point of interest in the nonprofit community, and it can be a fantastic tool. While ChatGPT offers exciting functionality that can help nonprofits save time and money, it is still very new and has real limitations. In this post, we'll explore some of the security, data privacy, and legal risks you take on when using ChatGPT, and discuss how you can mitigate them.


Data Privacy

When using ChatGPT, be aware that the model is not designed to handle sensitive data. Conversations are not stored with end-to-end encryption, and the OpenAI team has access to your chat logs, including any personal or sensitive information you enter. Your chat logs are used to improve the language model, and OpenAI's data usage policies state that data will only be used to monitor for misuse and will be deleted after 30 days, unless you opt in to sharing it for training purposes. Even so, the large repository of user conversations makes ChatGPT a likely target for cyberattacks. You'll also need to provide your full name and email address to use the tool, so bear in mind that this information becomes more vulnerable to cyberattack simply by being stored in OpenAI's database.

To mitigate data privacy concerns when using ChatGPT, refrain from including any personally identifiable data in your conversations with the model. Keep your prompts generic where possible, and avoid sharing any location or contact information relating to yourself, your organization, or any beneficiaries, partners, staff, or volunteers.
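If your organization sends text to ChatGPT programmatically rather than typing it into the chat interface, a simple pre-processing step can help enforce this rule automatically. The sketch below is purely illustrative (the patterns and function name are our own, not part of any OpenAI tooling), and the two regular expressions catch only the most obvious identifiers, emails and phone numbers, not every kind of personal data:

```python
import re

# Illustrative patterns for two common identifiers. A real deployment
# would need a broader PII filter (names, addresses, ID numbers, etc.).
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with generic placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} removed]", text)
    return text

print(redact("Contact Jane at jane.doe@example.org or 555-123-4567."))
# Contact Jane at [email removed] or [phone removed].
```

Running prompts through a filter like this before they leave your systems keeps the redaction step under your control, rather than relying on staff to remember it each time.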

Legal and Regulatory Compliance

Since ChatGPT draws its information from the Internet, there have been concerns about plagiarism when using it to create social media captions, blog posts, and other published content. Everything ChatGPT provides is drawn from the Internet, without regard for copyright, and it won't cite any sources. OpenAI has made an effort to improve the model so that it doesn't generate verbatim copies of text found online, and it encourages users to attribute information properly when using it in published contexts. However, there is still a meaningful risk that language taken directly from ChatGPT will closely resemble someone else's work.

If you want to be sure that you won't inadvertently plagiarize, restrict your use of ChatGPT to idea generation rather than creating finished copy. This way, you ensure that everything you publish is your own work, while using ChatGPT to save you time in the process.

If you do choose to generate copy using ChatGPT — even if you edit it afterwards — consider using a plagiarism checker such as Grammarly to ensure that you aren't inadvertently stealing work from others.

Accuracy and Bias

ChatGPT should never be treated as a source of truth or facts, and users always need to check the accuracy of any information taken from the model. ChatGPT is trained to give humanlike responses, not necessarily accurate ones. Its training draws on publicly accessible data as well as reinforcement learning from human feedback (RLHF), both of which are vulnerable to bias. Its training data also largely ends in September 2021, but the model may still attempt to answer questions about later events even when the answer is not in its dataset.

Additionally, because it is trained on a wide variety of publicly accessible datasets, ChatGPT's answers can reflect the biases in that data. Therefore, if you use the model as a search tool or source of truth, you may get answers that are not only false, but upsetting or offensive to some readers.

To mitigate these risks, fact-check everything you share against multiple reliable sources, and favor prompts that do not require fact-based answers. Apply your own critical thinking to the answers the model gives. As always, it's best to use the model as a source of ideas, not as a proxy for a copywriter, researcher, or editor.

Use ChatGPT Safely

There are many great advantages to using ChatGPT at your nonprofit. You can find new ideas, save your team time, and discover new avenues to explore. However, in order to use it safely and protect your organization and its beneficiaries, it's important to learn about the risks associated with using the tool. By refraining from including personal information in your prompts, checking any copy for plagiarism, and checking all facts before publishing them, you can make the most of this great AI tool without putting your nonprofit at risk.

Additional Resources

Top photo: Shutterstock