🔴 HIGH | Malware

LLM-Enabled Malware in the Wild - LABScon25 Insights

LLM-enabled malware is rapidly shifting from proof-of-concept demos to operational threats that embed large language models directly inside malicious payloads. SentinelOne researchers Alex Delamotte and Gabriel Bernadett-Shapiro analyzed families such as PromptLock ransomware and the APT28-linked LameHug/PROMPTSTEAL, focusing on samples that use LLMs as a core execution engine rather than as simple lures. These strains generate or transform malicious code at runtime, undermining traditional static-signature and sandbox approaches that expect the full payload to be present on disk.

Paradoxically, many LLM-enabled malware samples still hardcode high-value artifacts, including provider API keys and prompt templates. By mining VirusTotal for these patterns, the team uncovered over 7,000 samples containing more than 6,000 unique API keys, including early LLM-enabled malware like the MalTerminal family. They demonstrated two scalable hunting methods: YARA rules that match provider-specific key formats, such as Base64-encoded OpenAI identifiers, and prompt-hunting signatures that look for telltale instruction blocks embedded in binaries or scripts. Pairing these signatures with lightweight LLM classifiers made it possible to triage whether an embedded prompt encodes benign automation or clearly malicious intent such as credential theft or code execution.

For defenders, the research underscores that LLM-driven threats will increasingly evade conventional detections, but it also shows that prompts-as-code and exposed API keys leave a durable forensic trail. Enterprises integrating LLM APIs into internal tools or products now face a dual risk: their own keys being abused by attackers, and adversaries reusing similar techniques to deploy adaptive malware in enterprise environments.
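To make the key-format hunting concrete, here is a minimal sketch using yara-python: it compiles a rule that looks for plaintext and Base64-encoded OpenAI-style key fragments, then scans files passed on the command line. The regex, the "c2st" Base64 alignment, and the rule name are illustrative assumptions, not SentinelOne's published signatures.

```python
# Minimal sketch: hunt files for embedded OpenAI-style API keys with yara-python.
# Patterns are illustrative only; real provider key formats vary and change.
import sys

import yara  # pip install yara-python

RULE_SOURCE = r"""
rule Suspect_Embedded_OpenAI_Key
{
    strings:
        // Plaintext key with the historic "sk-" prefix (length is a guess).
        $plain = /sk-[A-Za-z0-9_-]{20,}/
        // "c2st" is Base64("sk-") when the key starts at offset 0 mod 3;
        // a real rule would add strings for the other two alignments.
        $b64 = "c2st"
    condition:
        any of them
}
"""

def scan(path, rules):
    """Print the rule name for every match in a single file."""
    for match in rules.match(path):
        print(f"{path}: {match.rule}")

if __name__ == "__main__":
    compiled = yara.compile(source=RULE_SOURCE)
    for target in sys.argv[1:]:
        scan(target, compiled)
```

Because short Base64 fragments like "c2st" will also hit benign data, a production rule would need more context, such as all three Base64 alignments, key-length constraints, and allowlists, before alerting.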

🎯CORTEX Protocol Intelligence Assessment

Business Impact: LLM-enabled malware erodes the reliability of traditional static and sandbox detections, enabling adaptive payloads that change on each execution while abusing enterprise LLM APIs.

Defensive Priority: Build detection around API key misuse, prompt artifacts, and anomalous LLM traffic rather than relying solely on byte-pattern signatures.

Industry Implications: Any organization embedding LLMs into products or workflows must treat model access as a privileged interface that can be turned into a malware runtime.

⚡Strategic Intelligence Guidance

  • Inventory all LLM providers, API keys, and integrations in your environment and treat them as high-privilege credentials with rotation and strict access control.
  • Deploy YARA and content-inspection rules that hunt for provider-specific API key formats and suspicious prompt blocks inside binaries, scripts, and repositories.
  • Integrate LLM usage telemetry into SIEM, flagging unusual request volumes, geographies, or models that deviate from normal application behavior (a volume-flagging sketch follows this list).
  • Update threat-hunting playbooks to include prompts-as-code analysis, correlating API key exposures with potential malware samples or unauthorized automation (see the prompt-triage sketch below).
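A minimal sketch of the telemetry idea, assuming you can export per-key request counts from an LLM gateway or proxy log; the event schema and the 3-sigma threshold are assumptions for illustration, not a vendor's detection logic.

```python
# Sketch: flag API keys whose hourly LLM request volume deviates sharply
# from their own baseline. Log schema and threshold are assumptions.
from collections import defaultdict
from statistics import mean, stdev

def flag_anomalous_keys(events, sigma=3.0):
    """events: iterable of (api_key, hour_bucket) tuples from gateway logs.
    Returns keys whose busiest hour exceeds baseline mean + sigma * stdev."""
    counts = defaultdict(lambda: defaultdict(int))
    for key, hour in events:
        counts[key][hour] += 1

    flagged = []
    for key, hours in counts.items():
        volumes = list(hours.values())
        if len(volumes) < 3:  # not enough history to form a baseline
            continue
        mu, sd = mean(volumes), stdev(volumes)
        peak = max(volumes)
        if sd > 0 and peak > mu + sigma * sd:
            flagged.append((key, peak, mu))
    return flagged

if __name__ == "__main__":
    # One request per hour for a day, plus a 50-request burst at hour 3.
    sample = [("key-A", h) for h in range(24)] + [("key-A", 3)] * 50
    for key, peak, baseline in flag_anomalous_keys(sample):
        print(f"ALERT {key}: peak={peak} vs baseline~{baseline:.1f}/hour")
```

Note that the burst itself inflates the mean and standard deviation here; a production detector would baseline on historical windows that exclude the interval under test.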
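And a sketch of the prompt-triage idea from the research: extract printable strings from a suspect sample, keep the ones that look like LLM instructions, and ask a model to label intent. The extraction heuristic, the hint keywords, and the "gpt-4o-mini" model name are all assumptions; the client calls use the official openai Python SDK.

```python
# Sketch: triage prompt-like strings extracted from a suspect binary by
# asking an LLM whether they encode malicious tasking. Heuristics, model
# name, and verdict format are illustrative assumptions.
import re
import sys

from openai import OpenAI  # pip install openai

PROMPT_HINTS = re.compile(
    r"(?i)\b(you are|ignore previous|system prompt|respond only|as an ai)\b"
)

def extract_prompt_candidates(path, min_len=40):
    """Pull printable ASCII runs from a file and keep instruction-like ones."""
    data = open(path, "rb").read()
    runs = re.findall(rb"[\x20-\x7e]{%d,}" % min_len, data)
    return [r.decode() for r in runs if PROMPT_HINTS.search(r.decode())]

def triage(candidate, client):
    """Ask the model for a one-word verdict: BENIGN or MALICIOUS."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any capable model works
        messages=[
            {"role": "system",
             "content": "Classify the following prompt embedded in a program "
                        "as BENIGN automation or MALICIOUS tasking (credential "
                        "theft, code execution, evasion). Answer in one word."},
            {"role": "user", "content": candidate[:4000]},
        ],
    )
    return resp.choices[0].message.content.strip()

if __name__ == "__main__":
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    for path in sys.argv[1:]:
        for cand in extract_prompt_candidates(path):
            print(f"{path}: {triage(cand, client)} :: {cand[:80]!r}")
```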

Vendors

SentinelOne, OpenAI

Threats

LLM-enabled malware, PromptLock ransomware, APT28, LameHug, PROMPTSTEAL, MalTerminal

Targets

Enterprise endpoints, Organizations using LLM APIs

Intelligence Source: LLM-Enabled Malware in the Wild - LABScon25 Insights | Nov 4, 2025