cancel
Showing results for 
Show  only  | Search instead for 
Did you mean: 
cancel
Showing results for 
Show  only  | Search instead for 
Did you mean: 
Caute_cautim
Community Champion

Do we need a Red team for AI projects?

Hi All

 

I came across this piece, which I thought interesting, but not sure whether it will take off in concept?  What are your thoughts? 

 

AI and machine learning (ML) capabilities present a huge opportunity for digital transformation but open yet another threat surface that CISOs and risk professionals will have to keep tabs on.  Accordingly, CISOs will need to direct their teams to conduct red team exercises against AI models and AI-enabled applications — just as security teams do with any traditional application, platform, or IT system.

 

https://www.csoonline.com/article/3690416/why-red-team-for-ai-should-be-on-cisos-radars.html

 

Regards

 

Caute_Cautim

 

 

4 Replies
dcontesti
Community Champion

My thoughts only and MHOO.

 

I am seeing more and more articles being written where the author(s) are applying technology/processes that a CISO has in their tool kit to specific topics.

 

In my practise, I have always had red teams for all things security/business continuity/business resumption related.  The team is typically charged with testing scenarios, understanding the risks and documenting them, etc.

 

So while the author is correct, we should apply red team processes to AI.  

 

I personally prefer to use purple teams.(Purple teaming is a collaborative approach to cybersecurity that brings together red and blue teams to test and improve an organization's security posture. Your purple team changes the team dynamic and culture, maximizing the contribution of each set of skills).

 

In smaller organisations, management might need to hired these teams.

 

Again, MHOO

 

d

 

JoePete
Advocate I


@Caute_cautim wrote:

Accordingly, CISOs will need to direct their teams to conduct red team exercises against AI models and AI-enabled applications — just as security teams do with any traditional application, platform, or IT system.


I think the mentality is correct. These tools need to be tested, and we already have an ample supply of case studies where they weren't (or at least not sufficiently) before being sent into a production role.

 

In an earlier career in higher ed, a colleague and I started working on an academic paper about measuring technology return on investment. We never finished the project, in part, because the data was hard to get to. Organizations do a lot of analysis before a purchase or project, but very rarely do they confirm their original premise - Boards and senior management don't want to look in the mirror. Especially with AI, there is a need to look in the mirror before you send these things live: You think you are building a car, but it turns out you have actually built a four-wheeled bicycle.

 

The problem is there is a prevalent business misconception that technology of any kind equals efficiency. However, that is rarely the case - tech tends to allow incremental more with exponential more. To do AI well takes a large investment - including testing - but that bumps up against business drivers. In plainer terms, when Dr. Frankenstein finished all his surgery, it was hard for him to resist bringing his creation to life even though he wasn't sure what it would do.

CISOScott
Community Champion

Another thought, do we need oversight for AI projects to ensure they are not biased? AI is great but controlling AI outputs is greater.

Caute_cautim
Community Champion

@CISOScottA great thought - yes Governance counsel is absolutely required:

 

Yes, trustworthy AI principles and ethics need to be applied from the outset and constantly reviewed for all output, before it is released.

 

https://www.ibm.com/artificial-intelligence/ethics

 

Regards

 

Caute_Cautim