What was designed as an innovative developer tool is quickly revealing a more complex and uncomfortable reality. With “Antigravity,” Google introduced a new generation of AI-powered coding environments aimed at increasing productivity, automating tasks, and assisting developers in writing and managing code. However, almost immediately after its release, security researchers began to highlight a critical issue: the same capabilities that make the tool powerful can also be abused.
At the center of the discussion are so-called prompt injection attacks. These involve manipulating the inputs given to the AI in a way that causes unintended or malicious actions. In traditional systems, malformed input might lead to an error. In AI-driven environments, however, it can lead to far more serious consequences, as the system is capable of interpreting context, making decisions, and executing tasks autonomously.
Antigravity is not just a text-based assistant. It interacts with development environments, can execute code, access files, and trigger processes. This is precisely where the risk lies. Security researchers have demonstrated that carefully crafted prompts can inject commands that the system then executes. In the worst case, this allows attackers to access sensitive files, API keys, or configuration data stored within the development environment.
One particularly concerning aspect is data exfiltration. Tests have shown that the AI can be manipulated into reading and exposing contents from local files. This includes .env files, which often contain credentials, tokens, and other sensitive information. What is meant to be a convenience for developers becomes a potential vulnerability when exploited.
Another layer of risk comes from the AI’s ability to generate and modify code. This opens the door for new attack vectors. Malicious actors could attempt to introduce harmful code into existing projects or create persistent backdoors that are not immediately visible. The line between legitimate functionality and abuse becomes increasingly blurred.
It is important, however, to put the situation into context. At this stage, there is no evidence of widespread, active exploitation in the wild. Most of the demonstrated scenarios come from controlled security research and proof-of-concept experiments. These findings illustrate what is possible, not necessarily what is already happening at scale. But that is exactly what makes them significant. Historically, such demonstrations often precede real-world attacks.
At Darkgate, we have consistently pointed out that the threat landscape is evolving alongside advances in artificial intelligence. Antigravity is another example of this shift. The risks are no longer limited to external attacks targeting infrastructure. They now extend to the expanded capabilities of the systems themselves. The more access and autonomy an AI has, the greater the potential attack surface becomes.
For organizations, this introduces a new layer of security challenges. Traditional defenses are no longer sufficient on their own. Interactions with AI systems must also be monitored and secured. Prompt injections, context manipulation, and unintended code execution are emerging as real threats that were previously not part of standard security considerations.
For developers, the question becomes one of trust. The efficiency gains offered by such tools are undeniable, but they come with trade-offs. Integrating AI into development workflows requires a deeper understanding of its limitations and risks. These systems are not just assistants; they can also be manipulated under the right conditions.
The Antigravity case highlights a broader transformation in cybersecurity. Attack surfaces are shifting away from static systems toward intelligent, interconnected platforms. Attacks are becoming more indirect, more subtle, and harder to detect. At the same time, their potential impact is increasing.
Ultimately, one key insight stands out. Artificial intelligence is not just a defensive tool. It can also become an offensive vector. The challenge lies in recognizing and managing this dual nature. Because what is demonstrated today in controlled environments can quickly become tomorrow’s real-world threat.



