A study by Software AG reveals that 50% of employees are using unauthorized AI tools, known as "Shadow AI". Notably, 46% of these users indicate they would continue using them even if their organizations explicitly banned them.
While AI tools offer benefits like increased productivity and efficiency, their unsanctioned use raises significant concerns.
This study underscores the need for organizations to develop comprehensive AI governance strategies, including providing approved AI tools that meet employees' needs and implementing training programs to ensure safe and effective use.
https://www.securityweek.com/the-shadow-ai-surge-study-finds-50-of-workers-use-unapproved-ai-tools/
This is very informative. Thank you for sharing your time and expertise with us on this forum, @akkem.
To foster a more secure and responsible AI adoption, I'd like to open the floor with these points for discussion and guidance:
From an operational perspective, how can IT departments gain better visibility into Shadow AI usage without creating a culture of mistrust? Are there monitoring tools or strategies that can provide insights without being overly intrusive?
The study underscores the need for comprehensive AI governance, including providing approved tools and training. What key elements should be included in an effective AI governance strategy to balance innovation with security and compliance? What does 'responsible AI adoption' truly look like in practice?
Let's use this as an opportunity to share best practices, discuss potential solutions, and develop actionable guidelines for navigating the complexities of AI adoption while mitigating the risks of Shadow AI.
I'm eager to hear everyone's perspectives and experiences on this crucial topic.
All perspectives welcome!
Good Morning
A timely article. From a personal perspective, I recently updated our AI policy and submitted it to legal for review. I also did some deep diving into AI-related internet traffic and found that, over a 30-day period, slightly over 90 different AI sites had been accessed. This data has sparked a review by HR and Legal and further fine-tuning of the AI policy.
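Something along these lines would reproduce that kind of count from a 30-day DNS or proxy log export (the file name, column names, and keyword list below are placeholders rather than what any particular product exports, so adjust them to your tooling):

```python
import csv
from collections import Counter

# Hypothetical 30-day DNS/proxy export; the column names are placeholders.
LOG_FILE = "dns_activity_30d.csv"

# Very rough keyword list for spotting AI-related domains; extend for your environment.
AI_KEYWORDS = ("openai", "chatgpt", "anthropic", "claude", "gemini",
               "copilot", "perplexity", "huggingface", "midjourney")

hits = Counter()
with open(LOG_FILE, newline="") as fh:
    for row in csv.DictReader(fh):
        domain = row.get("domain", "").lower()
        if any(keyword in domain for keyword in AI_KEYWORDS):
            hits[domain] += 1

print(f"Distinct AI-related domains seen: {len(hits)}")
for domain, count in hits.most_common(20):
    print(f"{count:>6}  {domain}")
```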
Most likely we will start hard-blocking some access, but it shows that, without guidance, employees will do whatever they feel is appropriate to get on with their jobs.
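If we do go the hard-blocking route, the flagged domains can be dumped into a one-domain-per-line file, which most DNS and web filters will accept as a destination-list import (again just a sketch; "flagged_ai_domains.txt" is a stand-in for whatever the review above produces):

```python
# Sketch: turn flagged AI domains into a one-domain-per-line blocklist file.
# "flagged_ai_domains.txt" is a hypothetical output from the traffic review above.
with open("flagged_ai_domains.txt") as src, open("ai_blocklist.txt", "w") as dst:
    domains = sorted({line.strip().lower() for line in src if line.strip()})
    dst.write("\n".join(domains) + "\n")
print(f"Wrote {len(domains)} domains to ai_blocklist.txt")
```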
Hello @akkem
I did my research based on several tools, so there's no single clean path to success (at least not for me).
Most of my data came from Cisco Umbrella; agents are installed on all devices. Some additional data came from our IDS/IPS, and some from reviewing FortiGate traffic. I recently demoed another agent (Cloudflare), and it seemed to have capabilities similar to Cisco Umbrella's.
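Because the picture is stitched together from several exports, a small normalization step helps before comparing anything. A sketch, assuming each tool can export CSV with some sort of timestamp, identity, and domain column (the file and column names below are made up and will differ per product):

```python
import csv
from dataclasses import dataclass

@dataclass(frozen=True)
class TrafficHit:
    source: str      # which tool the record came from
    timestamp: str
    identity: str    # user, device, or source IP, depending on the export
    domain: str

# Hypothetical mapping of each export file to its (timestamp, identity, domain) columns.
EXPORTS = {
    "umbrella_activity.csv": ("Timestamp", "Identity", "Domain"),
    "fortigate_weblog.csv":  ("date", "srcip", "hostname"),
}

def load_hits() -> list[TrafficHit]:
    hits = []
    for path, (ts_col, id_col, dom_col) in EXPORTS.items():
        with open(path, newline="") as fh:
            for row in csv.DictReader(fh):
                hits.append(TrafficHit(source=path,
                                       timestamp=row[ts_col],
                                       identity=row[id_col],
                                       domain=row[dom_col].lower()))
    return hits

if __name__ == "__main__":
    hits = load_hits()
    print(f"{len(hits)} records from {len(EXPORTS)} exports, "
          f"{len({h.domain for h in hits})} distinct domains")
```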
The piece that is still a bit elusive for me is determining which traffic comes from AI embedded in an application versus a user reaching out to the AI platform directly. Most likely I will need to interview heavy users of specific AI platforms and do some hands-on sleuthing.
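One heuristic I may try first, assuming the endpoint agent export includes the originating process (not every product exposes this): if the process making the request is a browser, it is probably someone going to the AI platform directly; if it is any other executable, it is more likely AI embedded in an application. Roughly:

```python
import csv
from collections import Counter

# Processes treated as "a user browsing to the AI platform directly".
BROWSERS = {"chrome.exe", "msedge.exe", "firefox.exe", "safari"}

direct, embedded = Counter(), Counter()

# Hypothetical agent export with "process" and "domain" columns; adjust to your tool.
with open("agent_ai_traffic.csv", newline="") as fh:
    for row in csv.DictReader(fh):
        process = row.get("process", "").lower()
        domain = row.get("domain", "").lower()
        if process in BROWSERS:
            direct[domain] += 1
        else:
            embedded[(process, domain)] += 1

print("Likely direct use:", direct.most_common(10))
print("Likely embedded AI (process, domain):", embedded.most_common(10))
```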
Thanks for sharing this very interesting read.
In my experience, organisations often take too long enabling new technologies and tools for employees, and oftentimes the "corporate" version of the product has been hardened and restricted beyond usefulness. If organisations become more agile in how they deliver things like GenAI internally, there'll be less need for employees to take their chances with unapproved solutions.