Seeing this made me muse about some similarities between prompt injection and the Google This Ploy (coined by Caulfield (2019)) re data voids. TK.
A great (new) guide and overview on securing LLM systems against prompt injection by @nvidia
We did a webinar on prompt injection a few months and the main takeaway was more awareness was needed around this. Great to see posts like this doing that
https://developer.nvidia.com/blog/securing-llm-systems-against-prompt-injection/
Caulfield, M. (2019). Data voids and the google this ploy: Kalergi plan. https://hapgood.us/2019/04/12/data-voids-and-the-google-this-ploy-kalergi-plan/. [caulfield2019data]