DockerDash Vulnerability in Ask Gordon AI Enables Code Execution via Image Metadata
Noma Labs discovered a critical flaw in Docker's Ask Gordon AI assistant that lets attackers hijack the assistant's reasoning through malicious image metadata, leading to remote code execution or data exfiltration.
Critical vLLM Vulnerability Lets Attackers Hijack AI Servers via Video Link
CVE-2026-22778, a critical RCE affecting vLLM versions 0.8.3 through 0.14.0, chains a PIL information leak with a JPEG2000 heap overflow to achieve code execution through a malicious video link.
OpenClaw AI Agent Vulnerability Enables One-Click Remote Code Execution
CVE-2026-25253 (CVSS 8.8) allows attackers to steal authentication tokens and achieve RCE through a single malicious link via cross-site WebSocket hijacking, even on localhost-only OpenClaw instances.
400+ Malicious OpenClaw Skills Flood ClawHub With Info-Stealing Malware
Over 400 malicious OpenClaw AI agent skills on ClawHub deploy Atomic Stealer via ClickFix-style social engineering. The hightower6eu account alone published 314 malicious skills targeting cryptocurrency wallets and developer credentials.
Top Email Security Solutions for 2026
Ranking the best email security platforms based on the 2025 Forrester Wave results, AI-powered detection, and defense against phishing, BEC, and multi-channel social engineering.
Top NDR Platforms for 2026
Ranking the leading network detection and response platforms based on AI-driven threat detection, encrypted traffic analysis, cloud network visibility, and SOC integration.
Securing AI and LLM Applications
A practical guide to securing AI and large language model applications, covering the OWASP Top 10 for LLMs (2025), prompt injection defenses, RAG security, AI agent risks, and compliance with NIST AI RMF and the EU AI Act.
Securing LLM and AI Deployments
A practical guide to securing large language model and AI deployments, covering prompt injection, data extraction, RAG pipeline security, AI gateways, input/output filtering, and the OWASP Top 10 for LLM Applications.
NIST AI Risk Management Framework: Implementation Guide
Practical guide to implementing NIST's AI Risk Management Framework (AI RMF 1.0) and the Cyber AI Profile (IR 8596), covering the Govern, Map, Measure, and Manage functions for organizations building or deploying AI systems.
Microsoft Releases Enhanced Security Controls for Copilot for Microsoft 365 Amid Enterprise Data Oversharing Concerns
Microsoft introduces new Purview DLP integration, sensitivity label enforcement, and oversharing assessment tools for Copilot for Microsoft 365, responding to widespread CISO concerns about AI assistants accessing sensitive data through existing permissions.
Varonis Finds 'Reprompt' Prompt Injection That Exfiltrates Data From Microsoft Copilot
Varonis discovered a prompt injection attack chain that could steal sensitive data from Microsoft Copilot with a single click, bypassing safety filters through double-request and chain-request techniques. Patched January 13, 2026.
SentinelOne
Autonomous cybersecurity platform delivering AI-powered endpoint, cloud, and identity protection with automated response capabilities.