Abu Dhabi News
Education

Inside the AI Heist – Hackers Probe Gemini in Massive “Distillation” Attack

By Annie Gerber | February 24, 2026 | 5 Mins Read

Google Says Hackers Used Gemini AI in 100,000+ Attack Prompts

Rarely do Google offices’ brick facades convey a sense of urgency. With a steady, well-practiced rhythm, workers pour through glass doors with laptops and reusable coffee mugs. Late last year, however, something strange began happening inside the security operations rooms, the quiet ones lined with dashboards and blinking alerts. A pattern emerged: Google’s flagship AI system, Gemini, was being bombarded with thousands upon thousands of strangely worded prompts that seemed to be looking for clues rather than answers.

In what the company calls a “distillation,” or model-extraction, attack, hackers sent over 100,000 specially designed prompts against Gemini, according to a 2026 report from the Google Threat Intelligence Group. User data was not the goal. The target was the model itself: its multilingual logic, its decision-making processes, its lines of reasoning. Google categorized the activity as intellectual property theft, indicating that the attackers intended to use Gemini’s own intelligence as scaffolding to build a cheaper, competing AI.

Company: Google
AI System: Gemini (Large Language Model & AI assistant)
Report Source: Google Threat Intelligence Group (GTIG)
Attack Type: Model Extraction / Distillation Attack
Scale: 100,000+ crafted prompts
Primary Goal: Reverse-engineer reasoning & build competing AI
Industry Impact: Intellectual property theft risk across AI sector
Official Site: https://blog.google/

The prompts initially appeared routine: translation tasks, logic puzzles, and requests for step-by-step reasoning. Combined, however, they amounted to a methodical interrogation. Many were submitted in languages other than English, most likely to probe the system’s multilingual logic.

Like a locksmith testing pins one by one, engineers examining the logs saw subtle variation and repetition. The attackers appeared to be mapping Gemini’s thought patterns, hunting for structural clues to how the system “thinks.”

The scale alone sparked curiosity. The query volume from this single campaign far exceeded any plausible research use. Google detected the activity quickly, disabled the related accounts, and hardened its defenses, but the company appears to be bracing for further attempts.

John Hultquist, chief analyst at GTIG, called the incident a “canary in the coal mine,” suggesting that the industry should anticipate similar attacks against both large AI labs and smaller businesses building custom models.
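The kind of volume check that would catch such a campaign can be sketched simply: flag any account whose query count dwarfs typical usage. This is a purely hypothetical illustration, not Google's actual detection pipeline; the baseline, multiplier, and account names are invented.

```python
from collections import Counter

TYPICAL_DAILY_QUERIES = 200   # assumed baseline for a normal account
FLAG_MULTIPLIER = 50          # flag anything 50x above that baseline

def flag_suspicious(query_log):
    """query_log: iterable of account ids, one entry per query made."""
    counts = Counter(query_log)
    limit = TYPICAL_DAILY_QUERIES * FLAG_MULTIPLIER
    # Keep only accounts whose daily volume exceeds the limit.
    return {acct: n for acct, n in counts.items() if n > limit}

# A normal user next to a campaign hammering the API with 100,000 prompts:
log = ["alice"] * 150 + ["probe-net-01"] * 100_000
print(flag_suspicious(log))  # only the high-volume account is flagged
```

Real defenses layer far more signal (prompt similarity, account age, IP reputation), but volume outliers of this magnitude are the easiest tell.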

The motivation seems more like competition than sabotage. According to Google, the actors were probably researchers, private businesses, and potentially state-sponsored organizations looking to gain technological advantage.

Distillation provides a shortcut in an era when developing AI costs billions of dollars: harvest the knowledge of a costly model and use it to train a smaller replica. The resulting system won’t be identical, but it can be competitively close and far cheaper to run.
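The mechanics can be shown with a toy example: a "teacher" model whose internals are hidden answers queries with output probabilities, and a smaller "student" is fitted to mimic those answers. Everything here (the secret weights, query counts, learning rate) is illustrative and has no relation to Gemini.

```python
import math
import random

def teacher(x):
    """Black-box model we can only query: returns P(class=1) for input x.
    Its weights (3.0 and -1.0) are hidden from the attacker."""
    return 1.0 / (1.0 + math.exp(-(3.0 * x - 1.0)))

# Step 1: query the teacher many times (the "100,000 prompts" analogue).
random.seed(0)
queries = [random.uniform(-2.0, 2.0) for _ in range(500)]
soft_labels = [teacher(x) for x in queries]

# Step 2: train a student of the same form on the teacher's soft outputs,
# using batch gradient descent on the cross-entropy loss.
w, b, lr = 0.0, 0.0, 2.0
for _ in range(1000):
    gw = gb = 0.0
    for x, p in zip(queries, soft_labels):
        q = 1.0 / (1.0 + math.exp(-(w * x + b)))  # student's prediction
        gw += (q - p) * x                          # cross-entropy gradient
        gb += (q - p)
    w -= lr * gw / len(queries)
    b -= lr * gb / len(queries)

# The student converges toward the hidden weights 3.0 and -1.0,
# having seen only the teacher's answers, never its parameters.
print(round(w, 2), round(b, 2))
```

The attacker never touches the teacher's parameters; the answers alone are enough to reconstruct them, which is exactly why Google treats the prompts themselves as the theft.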

Technology imitation is nothing new. Within months of the late-2000s smartphone boom, hardware features were being copied. Tesla worried early on about imitation of its drivetrains and batteries. Now the battlefield has shifted from physical engineering to computational cognition. As this develops, it’s difficult to ignore how AI’s openness—its accessibility through web interfaces and APIs—creates vulnerability as well as reach.

Google stressed that the model, not users, was the target of the attack. However, the overall trend is disturbing. AI tools for phishing, malware creation, reconnaissance, and social engineering have become more and more popular among threat actors.

According to some intelligence reports, AI systems may even be used to create dossiers on targets in South Asia and other regions. This broader arc—AI as both a tool and a target—is where the Gemini probing campaign fits in.

The technical detail matters. Model extraction exploits a system’s willingness to answer queries: by overloading a model with meticulously crafted prompts and examining its responses, attackers can deduce training patterns and reasoning structures. How much useful intelligence can be gleaned this way, and whether defensive measures can keep up, remain open questions. But the attempt itself signals that the stakes have risen.
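A stripped-down example makes the "locksmith testing pins" idea concrete: even when a black box returns only yes/no answers, adaptive queries can recover its hidden parameter with surprisingly few probes. The threshold and query budget below are invented for illustration.

```python
HIDDEN_THRESHOLD = 0.7312  # secret parameter the attacker wants to learn

def black_box(x):
    """The only interface the attacker has: a binary answer per query."""
    return x >= HIDDEN_THRESHOLD

# Binary search over the input space: each query halves the uncertainty
# about where the hidden decision boundary sits.
lo, hi = 0.0, 1.0
queries = 0
while hi - lo > 1e-9:
    mid = (lo + hi) / 2
    queries += 1
    if black_box(mid):
        hi = mid   # boundary is at or below mid
    else:
        lo = mid   # boundary is above mid

estimate = (lo + hi) / 2
print(queries, round(estimate, 6))  # 30 queries recover 0.7312
```

Thirty yes/no answers pin a continuous secret down to nine decimal places; scaled up to a model with billions of parameters, the same logic explains why a campaign needs prompts in the hundreds of thousands.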

Quiet tension exists within the AI industry. Businesses have spent a great deal of money on data collection, architecture improvement, and model training. Their subtleties, such as the reasoning frameworks, safety layers, and tuning, give them a competitive edge. The economics of artificial intelligence may change significantly if those can be roughly estimated through persistent probing.

Life goes on as usual outside the tech campuses. Chatbots are used by students to draft essays. Automated marketing text is being tested by retailers. On top of AI APIs, developers create tools. The technology seems commonplace, almost unremarkable. Beneath that ease, however, is a covert struggle for control of the intelligence influencing software of the future.

Instead of being an exception, there is a sense that this episode might be an early warning sign. Attempts to decipher the inner logic of AI models will probably increase in strength as they become more valuable and capable. It’s still unclear if defenses will advance fast enough and if the industry can agree on standards for safeguarding AI intellectual property.

The prompts have ceased for the time being. The dashboards are now silent. However, the logs are still there, a record of 100,000 inquiries made out of pursuit rather than curiosity.

Annie Gerber

Please email Annie@abudhabi-news.com

© 2026 Abu Dhabi News. All Rights Reserved.