Nonprofit Accounting Basics

Don’t Wait for ChatGPT to Tell You: You Need an AI Usage Policy!


The promise of artificial intelligence (AI) was magnified on a global scale with the introduction of ChatGPT. AI tools are exciting because they offer a level of efficiency never before possible for content creation, process automation, and productivity. Unfortunately, these benefits come with a series of risks. It is important not only to understand the shortcomings of this technology but also to protect your organization from potential data privacy, security, and reputational risks.

Your organization should implement policies and procedures for safe and appropriate AI usage that apply to team members as well as volunteers, contractors, vendors, and other stakeholders. Without formal guidelines, organizations run the risk of unintentional data privacy violations, plagiarism, and/or the dissemination of misleading or inappropriate information. Below is a policy template your organization can use to develop its own AI usage policy.

                                 AI Usage Security Policy [TEMPLATE]
1.0 Overview

Artificial Intelligence (AI) has the potential to automate tasks, assist with decision-making, and provide insights into our operations. It also presents new challenges for information security and data protection. This policy serves as a guide for protecting organizational assets and for using AI effectively and safely.

2.0 Purpose

The purpose of this policy is to ensure that all employees and contractors use AI tools in a secure and responsible manner while minimizing the risk of loss or exposure of sensitive information maintained by [ENTITY]. This policy outlines the requirements that all personnel (contractors, employees, partners, vendors, etc.) with access to the [ENTITY] network must follow when using AI tools.

3.0 Scope

This policy applies to all personnel who have, or are responsible for, an account on any system that resides at any [ENTITY] facility, who have access to the [ENTITY] network, or who store any non-public [ENTITY] information.

4.0 Policy

[ENTITY] understands that the use of AI tools may pose risks to our organization. In response, this policy has been developed to protect the confidentiality, integrity, and availability (CIA) of client and [ENTITY] information.

All employees and other personnel within the scope of this policy must follow these security best practices when using AI tools:

1. Evaluation of AI tools: Employees must evaluate the security of any AI tool before using it. This includes reviewing the tool’s privacy policy, terms of service, security features, and news/reputation. Any AI tool used must comply with internal [ENTITY] security policies and standards, and IT shall review the tool’s security policy before the tool is used. Questions about AI tools shall be directed to [CONTACT/CONTACT INFORMATION].
2. Protection of [ENTITY] and client data: Employees, contractors, and vendors shall not upload or share any data that is confidential, proprietary, protected by regulation, internal to [ENTITY], or belonging to clients. This includes information related to clients, employees, partners, or [ENTITY].
3. Access Control: Employees shall not share network passwords with any AI tool. Any password used for an AI tool must follow the [ENTITY] password policy and shall not be the same as the [ENTITY] network password.
4. Use of AI tools: [ENTITY] personnel shall only use reputable AI tools. Any AI tool used must meet our security and data protection standards.
5. Compliance with [ENTITY] policies: Personnel shall follow internal [ENTITY] security guidelines for securing information about the organization and its clients. This includes using strong passwords, keeping software up to date, and following [ENTITY] data retention requirements.
6. Practices: Personnel shall not disclose any non-public information about [ENTITY] or clients to AI tools; any information that is not public shall be handled in accordance with [ENTITY] security practices. Recommended uses of AI tools include research, best-practice recommendations, and language development and support. AI tool output should not be used verbatim; personnel shall apply their skills, knowledge, experience, and additional research to validate the accuracy of AI tool responses.

5.0 Policy Compliance

Compliance with this policy is mandatory for all employees and contractors. The [ENTITY] IT department will monitor compliance with this policy and report violations to [SELECTED LEADERSHIP].

5.1 Compliance Measurement

[ENTITY] reserves the right to audit, at random, any AI tool used by [ENTITY] employees and contractors. This includes reviewing data and documentation shared with the tool.

5.2 Non-Compliance Actions

Certain actions or omissions by [ENTITY] users may result in a non-compliance event.

Examples of non-compliance events include, but are not limited to:

1. Sharing of [ENTITY] confidential data with public AI tools.
2. Disclosure of sensitive, non-public client data to public AI tools.
3. Violations of intellectual property rights.
4. Use of AI tools that do not meet [ENTITY] security guidelines.

Any employee found to have violated this policy may be subject to disciplinary action, up to and including termination of employment.

6.0 Revision History

Date of Change | Responsible Party | Summary of Change