The annual Bugcrowd report indicates how quickly hackers are adapting to and adopting generative AI tools such as ChatGPT.
It also provides valuable insight into how malicious hackers will employ AI. For now, this is centred on large language models such as ChatGPT. Numerous ‘specialist’ GPTs are appearing, but for the most part they are wrappers around the GPT-4 engine, and ChatGPT remains the primary tool of hackers.
Sitting and thinking about this, maybe we need to rejig how we teach information security classes.
Several years ago, a professor at a very well-known university in Canada was actually teaching hacking classes. I haven't checked lately to see whether he is still teaching them, but if hackers are gaining that type of knowledge in schools without using AI, maybe we should be teaching our security folks the same way, rather than having them rely on third-party tools.
After all, when third parties develop those tools, they are using many of the same tricks of the trade that hackers use.