RowanB
Viewer II

Using AI to aid GRC and regulatory updates

Staying ahead of regulatory changes can be a constant challenge for compliance and risk teams. That’s why I built an AI-powered tool designed to make monitoring effortless.

 

You simply choose the regulation you care about and how often you’d like updates—weekly, monthly, or quarterly. From there, our AI agent does the rest:

 

  • Tracks every change to your chosen regulation.

  • Delivers updates straight to your inbox (no portals or dashboards to log into).

  • Explains changes in clear, plain English, with risk implications and practical implementation guidance.

This way, your team spends less time manually tracking regulations and more time strengthening compliance processes.
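
For anyone curious, here is a minimal sketch of how such a subscription might be modelled (all names and fields are hypothetical, not our actual implementation):

```python
from dataclasses import dataclass
from enum import Enum


class Cadence(Enum):
    WEEKLY = "weekly"
    MONTHLY = "monthly"
    QUARTERLY = "quarterly"


@dataclass
class RegulationSubscription:
    """One monitored regulation and how often the subscriber wants updates."""
    regulation: str        # full official name works best
    version: str           # the edition/revision being tracked
    cadence: Cadence
    recipient_email: str   # updates go straight to this inbox


# Example: a team tracking GDPR with monthly digests
sub = RegulationSubscription(
    regulation="General Data Protection Regulation (EU) 2016/679",
    version="2016/679",
    cadence=Cadence.MONTHLY,
    recipient_email="compliance@example.com",
)
```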

 

I’d be curious to hear how you currently manage regulatory updates, and I’m happy to show you a quick demo if relevant. More info here: https://www.gemini-cybersecurity.com/service

 

9 Replies
vador
Newcomer I

This is definitely an interesting use of AI. Nevertheless, I would be extremely cautious about what the bot reports and ensure that there is always a reference to the new release of the standard so that customers can verify the documents, preventing hallucinations from misleading them.
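
As a sketch of how that could be enforced (a hypothetical schema, not the tool's actual data model), every generated update could be required to carry a citation to the specific release of the standard, with anything uncited rejected before it reaches a customer:

```python
from dataclasses import dataclass


@dataclass
class RegulatoryUpdate:
    """An AI-generated change summary plus the citation needed to verify it."""
    regulation: str
    summary: str
    source_title: str   # e.g. the official release or amendment name
    source_url: str     # link to the authoritative document


def validated(update: RegulatoryUpdate) -> RegulatoryUpdate:
    """Refuse to pass along any update that lacks a verifiable reference."""
    if not update.source_title or not update.source_url.startswith("https://"):
        raise ValueError(f"Update for {update.regulation!r} has no verifiable source")
    return update
```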

akkem
Contributor III

Definitely interesting!
Hope you have strong guardrails in place to ensure the bot operates securely.
RowanB
Viewer II

You are right to be cautious, and we are. AI hallucination is something we are all familiar with and something we do account for, but eradicating it completely may prove tricky. The information the AI gathers is publicly available, and the errors we have seen tend to take the form of vague, obscure instructions in a response or report. Requesting an update with the full regulation name and version has produced noticeably better responses.
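
To illustrate the point about specificity, here is a rough sketch of the kind of tightly scoped query that tends to work better (hypothetical and simplified, not our production code):

```python
def build_update_query(regulation: str, version: str, since: str) -> str:
    """Build a tightly scoped prompt; vague queries invite vague answers."""
    return (
        f"List amendments to '{regulation}', version {version}, "
        f"published since {since}. Cite the official source for each change; "
        f"if no authoritative source exists, say so rather than guessing."
    )


# "Payment Card Industry Data Security Standard, version 4.0.1" will fare
# better than just asking about "PCI".
print(build_update_query(
    "Payment Card Industry Data Security Standard", "4.0.1", "2024-01-01"))
```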
mrsimon0007
Newcomer I

It’s an AI-powered compliance tool that automatically tracks regulatory changes, explains updates in plain language, and delivers tailored summaries with risk insights directly to your inbox.

Caute_cautim
Community Champion

@mrsimon0007   You are correct to validate and check any inputs and outputs from AI-assisted tools.

 

If you work for an organisation that has an agreed AI strategy and a policy on its use, that is good. Better still is an organisation that has committed to an "Enterprise"-licensed version of the chosen AI tool, e.g. Microsoft Copilot or even ChatGPT.

 

The organisation should also conduct courses on how to use the licensed AI tool, along with training on the ethics and guardrails for its use.

 

However, ensure the tool provides you the sources from which it obtains its information, and test and verify all output before actively using it. The reason for an Enterprise version is that the tool will learn from the organisation's records and data as well as any external resources, but it is not permitted to share results with the outside world, thus protecting the organisation.
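
As a concrete illustration of the "test and verify" point (a hypothetical sketch, not any particular tool's feature), a pre-publication check could at least confirm that every cited source actually resolves before a report is used:

```python
import urllib.error
import urllib.request


def source_resolves(url: str, timeout: float = 10.0) -> bool:
    """Cheap sanity check: does the cited source even exist?

    This catches fabricated links, not fabricated content; a human
    still has to read the source and verify the claims against it.
    """
    try:
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except (urllib.error.URLError, ValueError):
        return False
```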

 

Test, verify at all times.

 

Do not publish unless you are absolutely sure that the sources are true and accurate and that you have verified the content yourself. Always get any reports peer reviewed.

 

Regards

 

Caute_Cautim

 

 

mrsimon0007
Newcomer I

@Caute_cautim Absolutely agree with you! Verifying AI outputs and using enterprise-licensed tools under a clear policy is the right approach. It keeps data secure, ensures accuracy, and builds trust in how AI is used. Peer reviews and proper testing should always be part of the process.

JoePete
Advocate I


@Caute_cautim wrote:

 

However, ensure the tool provides you the sources from which it obtains its information, and test and verify all output before actively using it.


Wise advice. The challenge with AI is that it's like letting your 10-year-old son build his own bike: when he gets done, it might look like a bike and roll like one, but you still need to check it out, and you can end up doing as much work as if you had built it yourself.

 

While that challenge comes with every new tool or process, AI has a more costly buy-in than anything we have seen. When you look at the impact on the power grid, data centers, billions in capex being diverted, etc., it is hard to imagine every AI investment (whether as a provider or client) will pay off.

 

Certainly, a real-time updating of compliance factors would be helpful, but the target may be mostly small operations. In my experience with some large enterprises, these issues reside within the realm of corporate counsel, which tends to move at the pace of the courts (which is to say, not very fast in the US).

mrsimon0007
Newcomer I

@Caute_cautim 

That’s a really thoughtful perspective — and I completely agree. AI definitely brings huge potential but also serious overhead in cost, energy, and validation. It’s true, automation doesn’t mean “hands off” — it still needs human review and accountability. Real progress will come when AI tools become both efficient and trustworthy enough to truly reduce that extra workload you mentioned.

Caute_cautim
Community Champion

@mrsimon0007 

 

I was writing a paper recently, and these same profound questions came up:

 

Where did you obtain the sources?

Can they be verified?

Did you use a public AI model or an authorised Enterprise AI version?

Can you verify the sources and prove they exist?

Can you stand by the results with integrity?

 

Just passing on my own experiences.

 

Regards

 

Caute_Cautim