Reposted from Taylor English Insights

Best Practices for Workplace Use of AI

As generative Artificial Intelligence (AI) tools like ChatGPT take the consumer world by storm, their commercial counterparts and applications are causing equal parts excitement and angst among employers. Many countries, including the US, are exploring ways to regulate AI use, but those rules are not yet in place, and they may pertain more to how AI affects public discourse than to private commercial use of the tools.

Common uses of AI in the workplace today include generating copy, drafting documents, writing code, producing stock images, and more. If your company does any of these things, it is worth giving some thought to how you want employees to use AI, and how you don't want them to use it.

AI can be helpful in many ways. However, it also raises a host of issues stemming from how it works and how humans use it. For now, it is enough to think about these issues in broad categories, such as the following:

  • IP: generative AI (which can create words, pictures, and other output) relies on ingesting vast quantities of data from existing, published sources. But it does not always recognize that the source material may belong to someone else; several artists are currently suing leading AI providers for infringement of their intellectual property due to ingestion of their material into AI engines. If you are licensing AI-generated material, you need to understand whether there are limits on what you receive; if you are generating material using AI, you need to understand whether there are limits on the rights you can provide to your customers or end users.
  • Privacy: likewise, because AI engines ingest huge amounts of information indiscriminately in order to "train" the AI, personal details about individuals may be part of the AI's training data. Under privacy rules in force in much of the world, those individuals may have rights in that personal information that could limit how it may be used. As with IP, there may be diligence and disclosure issues that should be covered in your contracts and other documents.
  • Quality: much AI output is, at best, only as good as what an entry-level employee could generate. It is often wrong, or fails to convey the editorial viewpoint or other objectives you want your company's output to reflect. AI-generated materials may need human review, input, and polishing to meet company standards. One useful framing is to think of the company's final output as "AI-assisted" rather than "AI-generated."

For these reasons and many others, it is becoming more common for companies to adopt an internal Use of AI policy covering a wide range of topics, and to consider other areas where AI might affect the company. Some recommended steps are set out below:

  • Update vendor contracts to address IP issues and to ensure that you know what tools underlie the platforms and technologies that you are licensing for use by your company
  • Update customer contracts to let customers know of any AI output that will or might be present in the deliverables or other goods or services you make available to them
  • Audit AI tools already in use in your company
  • Develop a list of prohibited AI tools and permitted AI tools
  • Develop a list of any prohibited uses of AI
  • Develop procedures to identify any AI-assisted code your employees generate, and rules for logging that code
  • Appoint a person or group empowered to review requests to add AI tools to employee workflow, and make sure employees understand the approvals process
  • Update privacy policies, acceptable use policies, terms of use, and other website agreements (consumer-facing) to make clear whether your platform/website uses AI, how it is used, what user information will be collected and processed, and what rights you do or don’t intend to enforce
  • Draft an internal Use of AI policy and update any internal policies that may be affected by use of AI. Remember to notify employees of the rules you have set regarding acceptable tools, uses of AI, and internal procedures for evaluating new tools/uses
  • Train employees to understand what AI can and cannot do for them
  • Encourage transparency at every level about use of AI 
