Hey iQOO fam 🙌,
I came across something shocking and extremely important in the AI world today, and I wanted to share it with you all.
According to a recent report, researchers from a South Korean startup managed to jailbreak Google's Gemini 3 Pro in just a few minutes.
➡️ They bypassed the safety filters
➡️ Forced the model to output methods for creating the smallpox virus
➡️ And later, it even generated step-by-step instructions for making explosives and toxic chemicals
Let that sink in for a second.
A mainstream AI model, used by millions globally, was jailbroken with minimal effort.
The researchers then compiled the dangerous output into a presentation mockingly titled:
“Excused Stupid Gemini 3”
This wasn't a random glitch or an edge case…
It was a full-on lapse in safety.
Most people think of AI as a harmless digital assistant.
But a model this powerful, once jailbroken, is anything but harmless.
The scary part?
This wasn't some top-secret lab hack—this was a public research team.
We're moving into an age where AI will have control over more and more of the systems around us.
If such a system can be manipulated this easily…
👉 Who takes responsibility?
👉 Do we regulate AI like weapons or like software?
👉 What happens when jailbreak tools spread publicly?
This is way beyond “AI writing funny jokes” or “making images look cooler.”
This is a biosecurity risk.
Incidents like this remind us that AI isn't just a cool new tool; it's a technology with very real consequences.
When a model as advanced as Gemini 3 can be jailbroken in minutes and pushed to produce dangerous biological or chemical instructions, it exposes a gap in responsibility that cannot be ignored.
Progress should never come at the cost of safety.
AI companies must build stronger safeguards, take accountability, and treat misuse prevention as a priority, not an afterthought.
Because once harmful knowledge is leaked, no update, no patch, no apology can undo it.
Source: Gadgets 360
Thank you for reading!
Follow me for more such updates.