Well, we know steganography has been used in pictures and documents; now, as this study discovered, it is being encoded within Large Language Models (generative AI models), and that is dangerous for several reasons:
1). Hiding their reasoning
2). Lack of transparency
3). Undermining monitoring of AI systems
@Early_Adopter The key issue is AI ethics and governance: if models are intent on hiding techniques or information, they cannot be transparent and therefore cannot be trusted. All sorts of nefarious methods could be applied, ready to strike or to carry out activities without the owner realising what is going on.
I would certainly add it to the security issues surrounding LLMs and generative AI models.
Given the lawsuit raised recently, OpenAI has a lot to answer for.