⚠️ MEDIUMresearch

tl;dr sec #303: MCP Security Scanners, GitLab CI/CD Attacks, AI Benchmarks

Security tooling and research roundup: (1) New MCP (Model Context Protocol) security scanners released—cisco-ai-defense/mcp-scanner combines YARA rules, LLM-as-judge, and Cisco AI Defense API to scan MCP tools/prompts/resources for vulnerabilities. fr0gger/proximity discovers MCP server capabilities and uses NOVA rules to detect prompt injection and jailbreak attempts. (2) GitLab CI/CD attack chain demonstrated: identifying instance runners, executing commands through pipelines to gain reverse shell, accessing sensitive build data (.env files, SSH keys), then pivoting to AWS via EC2 metadata and IAM roles to execute commands on other instances via SSM. Recommendations: restrict network access, avoid instance runners, use project/group runners instead. (3) Backbone Breaker Benchmark (b3) released for testing AI agent security using 'threat snapshots'—testing individual states/components instead of end-to-end flows. Findings: reasoning models more secure, open-weight models closing gap faster than expected. (4) CrowdStrike's CyberSOCEval evaluates LLMs on CTI concepts, CVE-to-CWE mapping, CVSS scoring, and MITRE ATT&CK extraction—larger models perform better but reasoning models don't see the boost they get in coding/math.

🎯CORTEX Protocol Intelligence Assessment

This highlights the emergence of AI-specific security tooling—MCP scanners address new attack surfaces introduced by AI agent architectures. The GitLab attack chain shows how CI/CD remains a high-value target: access to secrets, ability to execute code in production-adjacent environments, and cloud credential exposure. The AI benchmarks reveal that security evaluation lags behind capability development—we're still figuring out how to measure AI security effectively.

Strategic Intelligence Guidance

  • MCP deployments: implement scanning for prompt injection, jailbreak attempts, and tool misuse—treat MCP servers as untrusted input surfaces.
  • GitLab hardening: eliminate instance runners, restrict network access from CI/CD environments, minimize AWS IAM role permissions for build instances.
  • Secret management: never store credentials in repository files (.env, config)—use secure parameter stores, rotate regularly, audit access.
  • AI security posture: benchmark models against CTI and security tasks before deployment, understand failure modes, maintain human validation for critical security decisions.
  • Continuous testing: AI threat landscape evolves rapidly—regularly reassess model security as new attack techniques emerge.

Vendors

GitLabCiscoCrowdStrike

Targets

CI/CDAI Systems