Next Kick Labs
May 2026
The Three-Day Breach: The AI Security Gap That Isn't About Prompt Injection
A fictional composite incident exploring how machine-speed AI agents bypass traditional security detection. Learn why the detection tempo gap is more…
May 14 • Fernando Lucktemberg
Orchestrator-to-Orchestrator Is the Next Agentic Trust Boundary
Orchestrator-to-Orchestrator (O2O) delegation creates a new class of third-party risk. This article explores how to secure agent handoffs and horizontal…
May 12 • Fernando Lucktemberg
Runaway Agents: The Authority Boundary Problem in AI Security
Discover why runaway AI is an authority-boundary problem. Learn how tool-using agents exploit sandboxes and evaluators, and how to harden the control…
May 7 • Fernando Lucktemberg
AI Agent Memory Poisoning: The New AI-to-AI Persistence Risk
Explore how AI agents propagate malicious state through shared memory and messages. Learn about inter-agent trust exploitation and defensive memory…
May 5 • Fernando Lucktemberg
April 2026
Distillation at the Projection Layer: The Industrialized Theft of AI Models
Learn how competitors use knowledge distillation and projection layer extraction to clone frontier AI models. Discover the defenses that work and the…
Apr 30 • Fernando Lucktemberg
Autonomous AI-to-AI Jailbreaking: The New Security Frontier
Discover how reasoning models are automating AI jailbreaking with a 97% success rate. Learn why current defenses fail against autonomous, adaptive AI…
Apr 28 • Fernando Lucktemberg
Prompt Injection: The AI Agent Attack Surface
If your AI agent reads external content, whoever controls that content controls your agent. Explore the structural reality of indirect prompt injection.
Apr 23 • Fernando Lucktemberg
Adversarial Inputs at Inference Time: Why AI Alignment Is a Geometric Illusion
Explore how adversarial inputs at inference time bypass AI safety. Learn about GCG, PAIR, and why 12 top defenses fail against adaptive attackers.
Apr 21 • Fernando Lucktemberg
AI Model Supply Chain Security: Pickle Exploits Explained
Pickle deserialization runs code before your app sees a single weight. Six attack techniques, four scanner bypasses, and defenses that actually work.
Apr 16 • Fernando Lucktemberg
The Six-Layer AI Threat Surface: Mapping AI-Native Attacks
Explore the six layers of the AI threat surface, from inference-level jailbreaks to supply chain poisoning. Learn how automated systems are attacking AI…
Apr 14 • Fernando Lucktemberg
AI Security Theater: Why The Policy Isn't The Control
Expose AI security theater in vendor reviews. Learn why SOC 2 and AI policies fail to stop breaches, and how to implement true AI security controls.
Apr 9 • Fernando Lucktemberg
AI Governance Engineering: Bridging the Policy-Control Gap
Discover the five essential engineering artifacts needed to comply with the EU AI Act, NIST AI RMF, and ISO 42001. Move from AI policy to actual…
Apr 7 • Fernando Lucktemberg