danyork
Newcomer I

Examples of usage guidelines for AI assistant tools?

With all the various "AI assistant" tools becoming available, does anyone have any good examples of usage policies companies are using?

Zoom, for instance, is now strongly promoting its "AI Companion" tool, and there are MANY others. These tools can be incredibly useful when added to a Zoom call - providing transcripts, capturing action items, summarizing the call, and so on. For a public webinar or call where the recording will be shared publicly later, I personally don't have much of an issue with the tools.

But my concern is when these tools are used on internal calls where confidential or private information is shared. That information (transcripts, recordings, etc.) is then stored on someone else's servers - and, worse, could potentially be used as "training data" for the underlying AI large language model (LLM). If so, some of that confidential content could later come back out in a response to another user. There's a real risk there.

Beyond Zoom calls/webinars, people are definitely interested in using tools like ChatGPT, Bard, Claude, Bing Chat, etc. to rapidly summarize documents, draft new content, and more. Again, if the tool is summarizing a public document, I don't see an issue. But if it is summarizing an internal document, I see definite risks, since the full text of that document has to be sent to the provider's servers.
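
To make that risk concrete, here is roughly what that summarization workflow looks like in code - just a minimal sketch using the OpenAI Python client, with an illustrative file name and model:

    # Minimal sketch of the hosted-tool workflow; file name and model are illustrative.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    with open("internal-memo.txt", encoding="utf-8") as f:
        document = f.read()

    # The entire document body is transmitted to a third-party server here;
    # whether it is retained or used for training depends on the provider's terms.
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": f"Summarize this document:\n\n{document}"}],
    )
    print(response.choices[0].message.content)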

What are people doing in terms of usage policies for these kinds of tools? Blocking all of them is obviously one approach, but there are definite benefits and efficiencies that can come from using them.

(And yes, one approach I'm seeing some people take is to run one of the LLMs that can be installed on premises, so that it can be used with internal information - and all the data stays within the corporate network.)
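
For anyone exploring that route, here is a minimal sketch of what the on-premises pattern can look like, assuming a model served locally through Ollama's HTTP API - the endpoint, model name, and file name are illustrative:

    # Minimal sketch: summarize an internal document with an on-premises model.
    # Assumes Ollama is serving a model on localhost; names below are illustrative.
    import json
    import urllib.request

    OLLAMA_URL = "http://localhost:11434/api/generate"  # stays inside the network

    def summarize(text: str, model: str = "llama2") -> str:
        payload = json.dumps({
            "model": model,
            "prompt": f"Summarize the following internal document:\n\n{text}",
            "stream": False,
        }).encode("utf-8")
        req = urllib.request.Request(
            OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["response"]

    with open("internal-memo.txt", encoding="utf-8") as f:
        print(summarize(f.read()))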

--
Dan York, CISSP, Internet Society
york@isoc.org @danyork