Hi All
Languages like English have introduced new vulnerabilities in prompt engineering, akin to the issues arising from SQL injection attacks. Telling an LLM to "summarize an article" is as vulnerable as using string concatenation to create an SQL statement. Consider the example below illustrating their similarity:
See attached document.
Regards
Caute_Cautim