Hi All
No surprises really to those of us up to date with developments, but a warning to those organisations that are not keeping a careful watch on the situation.
The researchers identified four key areas of security risk that the adoption of generative AI in the open source community presents, with some overlap between the groups:
Regards
Caute_Cautim
@Caute_cautim wrote:The researchers identified four key areas of security risk that the adoption of generative AI in the open source community presents, with some overlap between the groups:
- Trust boundary risk;
- Data management risk;
- Inherent model risk;
- Basic security best practices.
Makes you wonder if there is an area where the researchers don't see a risk? Depending on how you slice up the historical pie, I see AI as technology 4.0. It's an evolution, like commercializing the Internet, like cloud, etc. It doesn't change the engineering and economics that should be at work, but as we saw in the past, businesses will cast aside sound reasoning in the hope of "efficiency" or what can be leveraged from a marketing perspective.
While AI can break cost-benefit models -- you'll end up hiring more people to train and monitor the AI than you will replace -- some (maybe many) business leaders, too fearful of acknowledging their own limits, will cut corners to prove their assertion of "efficiency." We've seen this in the past with technology, and we've already seen it with AI. You really have to understand the technology and you really have to understand your business processes.
I have some questions that require a little setup.
My own experience with AI is very pedestrian. It's chatbots on support sites, predictive text in cloud productivity suites, and a ChatGPT account which I've signed into only once.
Now I realize lots of companies have been creating their own AI, but I'm largely unaware if they're building it for their own internal purposes or if they desire to resell it as a service offering.
When this article says things like, "generative AI continues its rapid adoption throughout the industry", who is providing this technology to the industry? How many AI players are there? If there are more than the few I'm aware of (a good possibility), are they selling it as AIaaS, or are they selling AI infrastructure that goes on-prem?
Again, my org and my work are not driven by any particular AI needs, so the rampant uptick in this space is intriguing to observe. It's also kind of bewildering because of all the articles from security people who rightly see this stuff as firecrackers in the hands of a four-year-old.
I'd like to discuss this for a bit, if y'all don't mind!
@ericgeater I'll give you an example from the healthcare industry, where I work. These are not my current systems; I'm just using them as an illustration.
Say my org uses a SaaS EHR (electronic health records) system called Epic EHR. Epic EHR partners with Microsoft's Azure OpenAI Service to develop generative AI tools within the EHR platform. Eventually (when the model is more mature), you can input the patient's demographics and health issues to help draft a treatment plan. The doctor checks it over and then gives the final approval.
So, SaaS providers will partner with Microsoft, Amazon, Google, etc. to help build AI into their systems. Eventually they might have their own AI software teams as the industry becomes more mature.
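To make that integration pattern concrete, here is a minimal sketch of how an EHR feature might call Azure OpenAI to produce a *draft* plan that a clinician then approves. Everything here is hypothetical: the `build_plan_prompt` and `draft_treatment_plan` helpers, the deployment name, and the prompt wording are all my own illustration (not Epic's actual API); the only real dependency assumed is the official `openai` Python package's Azure client.

```python
# Hypothetical sketch of an EHR "draft treatment plan" feature built on
# Azure OpenAI. The helper names, deployment name, and prompts are
# illustrative only; the clinician, not the model, gives final approval.

def build_plan_prompt(patient: dict) -> str:
    """Turn patient demographics and health issues into a drafting prompt."""
    issues = "; ".join(patient["health_issues"])
    return (
        f"Draft a treatment plan for a {patient['age']}-year-old "
        f"{patient['sex']} patient presenting with: {issues}. "
        "Label the output as a DRAFT requiring physician review."
    )

def draft_treatment_plan(client, patient: dict, deployment: str = "gpt-4o") -> str:
    """Ask the model for a draft plan; returns text for human review."""
    response = client.chat.completions.create(
        model=deployment,  # in Azure this is the deployment name you chose
        messages=[
            {"role": "system",
             "content": "You are a clinical documentation assistant. "
                        "Produce drafts only; never final orders."},
            {"role": "user", "content": build_plan_prompt(patient)},
        ],
    )
    return response.choices[0].message.content
```

In real use, `client` would be an `AzureOpenAI` instance from the `openai` package, constructed with your Azure endpoint, API version, and a key pulled from a secrets store; the point of the design is that the model's output is staged as a draft in the EHR workflow rather than written directly into the record.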
Hi @ericgeater In answer to your question about how many organisations and companies are involved in generative AI: a lot. See https://www.cbinsights.com/research/generative-ai-funding-top-startups-investors/
The field is growing rapidly across a great many landscapes; a new entrant appears every day.
Regards
Caute_Cautim
@ericgeater wrote:
Now I realize lots of companies have been creating their own AI, but I'm largely unaware if they're building it for their own internal purposes or if they desire to resell it as a service offering.
If we take AI to be anything intended to mimic human behavior, there's a wide spectrum of what passes for AI. This is part of the problem: companies, service providers, etc. are selling "Now with AI!" when it's really hard to know just what you are getting. Any Silicon Valley fans recall "Not Hotdog"?
So from the other side of the fence, the folks who have been developing machine learning, training models, etc., trying to do it right, are at risk of being overrun by garbage that is simply better marketed, priced, or used. In turn, this garbage is going to find its way into all sorts of services. If the history of technology has proven anything, it is that popularity, not quality, drives the marketplace.