Next Kick Labs
March 2026
LiteLLM Supply Chain Attack: Why Your AI Stack Is Vulnerable
An analysis of the March 2026 LiteLLM supply chain breach. Learn why model security fails when the infrastructure and supply chain remain unexamined.
Mar 31 • Fernando Lucktemberg
International AI Governance: OECD, G7 & Bletchley Standards
Explore the global AI governance landscape. Learn how the OECD, G7, and Bletchley Process shape the standards that eventually become binding law.
Mar 26 • Fernando Lucktemberg
EU AI Act High-Risk Classification & Liability Guide
Navigate the EU AI Act's high-risk tiers and avoid the liability trap. Learn the difference between AI providers and deployers under Article 25.
Mar 24 • Fernando Lucktemberg
NIST, ISO 42001 & IEEE: Technical Standards for AI Governance
Master the technical standards layer of AI governance. Learn how to implement NIST AI RMF, ISO 42001, and IEEE 7000 to bridge engineering and…
Mar 19 • Fernando Lucktemberg
AI Governance Framework - The Three-Layer Compliance Stack
Navigate global AI regulation with the Three-Layer Stack. Learn how the EU AI Act, NIST AI RMF, and ISO 42001 define the future of AI compliance and…
Mar 17 • Fernando Lucktemberg
Your AI Coding Assistant Is Quietly Undermining Security
AI-assisted coding produces 2.7x more vulnerabilities than human-only work. Explore the security risks, the confidence-competence gap, and how to build…
Mar 12 • Fernando Lucktemberg
MCP, the protocol that wired your AI agent to the world (and left the door unlocked)
Explore the security risks of Model Context Protocol (MCP). Learn how AI agents use MCP to access data and the critical vulnerabilities like tool…
Mar 10 • Fernando Lucktemberg
What Each AI Security Role Actually Expects From You
Break down the 10 emerging AI security roles. Learn the skills, mandates, and transition paths from AppSec, Data Science, and GRC into AI/ML security.
Mar 5 • Fernando Lucktemberg
Audit Logging for Autonomous Systems - What to Capture and Why
Learn why standard AI logging is failing. Discover the 7 critical audit events and cryptographic integrity needed to survive breaches and EU AI Act…
Mar 3 • Fernando Lucktemberg
February 2026
Agent-Native Identity - How AI Agents Authenticate, Authorize, and Trust Each Other
Stop using static keys for AI agents. Explore agent-native identity, SPIFFE, and short-lived credentials to close the gap between human and machine…
Feb 26 • Fernando Lucktemberg
Zero Trust for AI Agents - Principles That Actually Work
Traditional IAM is broken for AI agents. Learn how SPIFFE, least agency, and the Plan-then-Execute pattern enable Zero Trust in autonomous agentic…
Feb 24 • Fernando Lucktemberg
The two-brain security model - LLM security through tiered inference to stop prompt injection
Route cheap models for known threats, expensive ones for ambiguous calls. The two-tier security architecture cuts LLM inference costs 76% without…
Feb 19 • Fernando Lucktemberg