Why so much prompt injection in AI? 1. We don't follow the security engineering design principle "economy of mechanism," and 2, input to LLMs mixes control and data with impunity. We know better. #MLsec #infosec #security
https://www.darkreading.com/vulnerabilities-threats/llms-on-rails-design-engineering-challenges
@cigitalgem BUT ALIGNMENT AND GUARDRA1LZ and Firewulls will fix it, right?
I heard they’re releasing it next week, or month, or something. Just around d the corner.