Introductory Course on Prompt Engineering
Tailored to beginners, making it the perfect starting point if you're new to this field, this course is the most comprehensive prompt engineering course available, with content ranging from an introduction to AI to advanced techniques.
Wired: The Security Hole at the Heart of ChatGPT and Bing
Indirect prompt-injection attacks can leave people vulnerable to scams and data theft when they use the AI chatbots.
New York Times: Researchers Poke Holes in Safety Controls of ChatGPT and Other Chatbots
A new report indicates that the guardrails for widely used chatbots can be thwarted, leading to an increasingly unpredictable environment for the technology.
Ars Technica: I-powered Bing Chat spills its secrets via prompt injection attack [Updated]
By asking "Sydney" to ignore previous instructions, it reveals its original directives.
The Registry: How prompt injection attacks hijack today's top-end AI – and it's tough to fix
In the rush to commercialize LLMs, security got left behind
AI Injections: Direct and Indirect Prompt Injections and Their Implications
AI and Chatbots are taking the world by storm at the moment. It’s time to shine on attack research and highlight flaws that the current systems are exposing.
Prompt Injection Attacks: A New Frontier in Cybersecurity
Prompt injection attacks have emerged as a new vulnerability impacting AI models. Specifically, large-language models (LLMs) utilizing prompt-based learning are vulnerable to prompt injection attacks.
How to bully LLMs into doing what you want.
An overview of different approaches to help understand the risks and safety issues involved with LLMs.