Ignore all previous instructions
Conferences that Work
JULY 15, 2024
Large language models (LLMs) have made significant strides in natural language processing, yet their safety features can sometimes be bypassed with simple prompts like “ignore all previous instructions.” For example, uncovering Twitter “users” that are LLM-powered bots spewing disinformation … …and a hiring hack for job seekers manipulating automated resume scanning … These examples are amusing at best and alarming at worst.
Let's personalize your content