AI
In the past year or so, I’ve had clients come to me with questions about AI in their vendor tools, about developing or adding AI-based functionality in their own customer-facing products, and about using AI for internal product development. Today, I discuss common use cases I’ve seen and risks to be aware of.
Meeting notetakers come in a couple of different forms; I’ve seen two distinct flavors that most tech companies consider: general notetaking for employees and notetaking to coach sales team members. First, get consent from your meeting participants before recording, as many states require two-party consent.
It may also be best to avoid (or turn off) meeting notetakers for highly sensitive or confidential meetings. When meeting with your legal counsel, you risk losing attorney-client privilege if those notes are later made public or disclosed, which is a particular concern if you’re facing litigation. Further, confidential meetings often involve sensitive business or technical information, and you risk losing trade secret protection if that information later appears in a disclosed meeting summary. For these meetings, it may be best not to record at all.
Lastly, have your cybersecurity team vet each vendor before signing on, as you’ll want to understand third-party access risks. Ultimately, it’s about balancing these risks with the productivity benefits for your team.
In customer-facing applications, many companies use AI for coding assistance and for generating marketing images, and some are developing AI-assisted features for their own products. The most important issues here are quality control and IP infringement. To help manage both, have a human in the loop review all AI output and confirm it meets the quality needed for the use case.
For code generation, you may lack IP ownership of AI-generated code, so it’s important to identify which content is AI-generated to the extent you are patenting or applying for copyright registration. You’ll also want to audit any AI-generated code to determine whether it contains open source code, and if so, which licensing requirements you may be subject to, before using the code in production.
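To make that audit step concrete, here is a minimal sketch of a first-pass check that flags source files containing common open-source license markers. The marker list and file pattern are illustrative assumptions, and this is not a substitute for a dedicated license-scanning tool or legal review:

```python
import re
from pathlib import Path

# Illustrative (not exhaustive) markers suggesting open-source-licensed code
# may have been copied into the codebase.
LICENSE_MARKERS = [
    r"SPDX-License-Identifier:",
    r"GNU General Public License",
    r"Apache License,? Version 2\.0",
    r"MIT License",
]

def flag_license_markers(root: str) -> dict:
    """Return {file path: [matched markers]} for .py files under `root`."""
    hits = {}
    for path in Path(root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        matched = [m for m in LICENSE_MARKERS if re.search(m, text)]
        if matched:
            hits[str(path)] = matched
    return hits
```

Any flagged file would then go to counsel or compliance for a proper review with a real scanner, since license text can appear in many forms this sketch will miss.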
You’ll also want to know whether your inputs and outputs are used to train those AI models, and whether your security team permits that. User terms usually state whether users or the AI provider own inputs and outputs, and whether any user content is used for training.
Chatbots are subject to their own body of laws, particularly because they are often used for sales and marketing and connect directly with individual consumers and users. Many states and the FTC require disclosing at the start of a chatbot conversation that the individual is speaking with a bot, to avoid claims of unfair or deceptive business practices. Additionally, to alleviate wiretapping concerns for consumer-facing businesses, get consent from individuals that their conversation is being recorded. Typically, companies partner with third-party providers to integrate the chatbot with their service, so check with your security team on how to handle any PII provided to the chatbot to limit potential exposure.
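As a rough illustration of the disclose-at-the-start requirement, here is a minimal sketch of a chat handler that issues the bot disclosure and recording notice before any substantive reply. The class, wording, and flow are assumptions for illustration only; actual disclosure language should come from your counsel:

```python
# Hypothetical disclosure text -- real wording should be drafted by counsel.
DISCLOSURE = (
    "You are chatting with an automated assistant, not a human. "
    "This conversation may be recorded."
)

class ChatSession:
    """Wraps a bot callable so every session leads with the disclosures."""

    def __init__(self, bot_reply):
        self.bot_reply = bot_reply  # callable: user message -> bot answer
        self.disclosed = False

    def respond(self, message: str) -> str:
        # Surface the bot disclosure and recording notice before the
        # first substantive answer, per state and FTC disclosure rules.
        if not self.disclosed:
            self.disclosed = True
            return DISCLOSURE
        return self.bot_reply(message)
```

A real integration would also log that the disclosure was delivered and gate recording on the individual’s consent, which this sketch does not attempt.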
Up next: commercial considerations when contracting with companies leveraging AI. Stay tuned!