@yahhana6 I agree there is a fair bit of FUD, but at the end of the day, it comes down to three factors:
1) Explainability and trust
2) AI ethics and governance
3) Purpose, transparency, and skills
If you cannot find answers to those, perhaps you should not be using that particular foundation model or provider. Also, do you know what happens with your corporate or personal data — is it sold off or passed on by the provider as a means of generating further revenue?
One of the main tenets of AI within security is automation: AI continuously learns, improving its understanding of cybersecurity threats and cyber risk by consuming billions of data artifacts. AI reasoning finds threats more easily, analysing relationships between indicators such as malicious files, suspicious IP addresses, or insiders in seconds or minutes. AI also eliminates time-consuming tasks through curated risk analysis, reducing the time security analysts need to make critical decisions and remediate threats.
Conversely, AI will be used by the dark forces as well — to create better means of compromising human beings, or to find weaknesses far more quickly than humans can. FraudGPT and WormGPT are examples of such services being offered by subscription on the dark web today.
@esin That does not bode well then, because if we can only detect 73% of the fakes, how will we determine whether something is real or not? Social engineering has just taken on a new avenue — and a race, too, to see how many people can be compromised every day.