Hi All
A very good question: how is your organisation going to cope with AI and generative tools?
What policy are your respective organisations going to apply?
What risks will you encounter?
Will you encounter data leakage of sensitive information?
https://www.lexology.com/library/detail.aspx?g=8b138f0b-e96b-437e-9351-715d21a56973
Or do you think Asimov got it right?
First Law
A robot [AI System] may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law
A robot [AI System] must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law
A robot [AI System] must protect its own existence as long as such protection does not conflict with the First or Second Law.
Regards
Caute_Cautim
ChatGPT will develop a perfectly reasonable sounding policy for you. Just ask it.
More seriously, our current stance is to embargo its use until the hype has worn down a bit, the experts have replaced their gut-level reactions with fact-based advice, and we all know better what questions to ask.
In a few months, I anticipate that any use of such tools will end up going through our standard "application" vetting process, with a particular emphasis on data protection and legal liability (e.g., who pays when its bad advice kills someone?).