The world of AI is still advancing rapidly, but so are the threats. Wherever you get your news, Clawdbot (or is it Moltbot, or is it now called OpenClaw?) is everywhere lately. You can’t avoid talk of this AI personal assistant. It’s actually now called OpenClaw after some naming drama, and at the time of writing has 166k stars on GitHub. The repository also has an alarming number of forks, issues, and pull requests. This sure is one vibrant project, at just under 9k commits since it first came into being in November 2025.


The project is so new that the OpenClaw domain name was only registered on 29th January. New is great and all, but when it comes to security risk, it’s quite the opposite. It’s no surprise we are hearing reports of security vulnerabilities (Remote Code Execution) being found in the tool, as well as stories of people setting it up insecurely and regretting it. When it comes to managing dependencies, I recommend a small delay before acquiring the latest package updates, because malicious packages typically have a shelf life of about a day before they are detected and removed. Here’s a screenshot from a slide deck I recently delivered that illustrates this, so holding off a little can be prudent from a risk perspective.
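To make that “small delay” concrete, here is a minimal sketch (Python, standard library only) that asks the public npm registry how old a package’s latest version is and flags anything younger than a cooldown period. The registry endpoint and its dist-tags/time fields are standard npm metadata; the two-day threshold and the script itself are illustrative assumptions rather than official tooling.

```python
#!/usr/bin/env python3
"""Refuse to adopt an npm package version until it has aged a little.

A sketch of the "small delay" idea: malicious versions tend to be pulled
quickly, so waiting a couple of days filters out much of the risk.
"""
import json
import sys
import urllib.request
from datetime import datetime, timezone

COOLDOWN_DAYS = 2  # hypothetical threshold; tune to your risk appetite

def latest_version_age_days(package: str) -> tuple[str, float]:
    # Public registry metadata: dist-tags gives the latest version,
    # and the time map gives its publish timestamp.
    url = f"https://registry.npmjs.org/{package}"
    with urllib.request.urlopen(url) as resp:
        meta = json.load(resp)
    latest = meta["dist-tags"]["latest"]
    published = datetime.fromisoformat(meta["time"][latest].replace("Z", "+00:00"))
    age = datetime.now(timezone.utc) - published
    return latest, age.total_seconds() / 86400

if __name__ == "__main__":
    pkg = sys.argv[1] if len(sys.argv) > 1 else "openclaw"
    version, age_days = latest_version_age_days(pkg)
    if age_days < COOLDOWN_DAYS:
        print(f"HOLD: {pkg}@{version} is only {age_days:.1f} days old")
        sys.exit(1)
    print(f"OK: {pkg}@{version} has been public for {age_days:.1f} days")
```

The same effect can be had from the cooldown or minimum-release-age settings that some dependency-update tools now offer, if you prefer not to script it yourself.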

Interestingly, an entire ecosystem of skills (ClawHub) has been stood up at breakneck speed and, as I scroll down the list, it’s a seemingly infinite collection of community-produced contributions that can be selected to extend the AI assistant’s capabilities.
Because of the popularity and freshness of this tool, we are seeing many jumping on the “claw” bandwagon: from companies offering to host it, to people contributing skills and improvements, to those with more nefarious intent, such as cloning it and capitalising on the confusion from its renaming and the resulting Search Engine Optimisation gaps. For example, which of these DuckDuckGo search results is the right one?

The above results were from a search for “clawdbot” performed on 5th February. Note that the now official site https://openclaw.ai was nowhere to be found on that first page of results.
ClawAtWork
Why are we talking about OpenClaw when it’s not exactly positioned for enterprise use (yet)?
We track malware threats all the time, and have recently seen “claw”-related packages published to the ecosystems used by enterprises, so the line is blurring between this personal tool and its potential use in business. Here is a non-exhaustive list of reasons why organisations should be concerned about the potential threats relating to OpenClaw:
- Staff may assume that corporate-managed endpoints are sufficiently secure and hardened, so “it will be fine”, while unintentionally or intentionally allowing the tool access to their work credentials or sensitive information.
- Perhaps a bit of “I’m not putting it on my own computer—I don’t want to break it,” or maybe the only computer they have is the one from work.
- Shadow IT: staff standing the tool up to automate parts of their job without going through approved channels.
- Staff may be evaluating it to see whether it could be useful in the enterprise.
MalClaw
Over Christmas the malware authors, by and large, also took a break and, as far as I recall, nothing significant occurred. I was expecting another worm, but that has yet to materialise. What I have noticed since the new year is the pressure from the ever-growing pile of NPM packages to look through:

I wonder how much of the new package volume has to do with OpenClaw, so if we filter to packages with “claw” in the name, this is what it looks like:

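If you want to reproduce this kind of filter yourself, a rough approximation is to ask the npm registry’s public search endpoint for packages and keep only those with “claw” in the name. The endpoint is real, but the result set is approximate (search indexing lags behind publishes) and the page-size value below is an assumption; treat it as a sketch rather than the exact query behind the chart above.

```python
#!/usr/bin/env python3
"""Rough count of npm packages with "claw" in the name.

Uses the public npm registry search endpoint; counts are approximate.
"""
import json
import urllib.parse
import urllib.request

def search_claw(size: int = 250) -> list[dict]:
    query = urllib.parse.urlencode({"text": "claw", "size": size})
    url = f"https://registry.npmjs.org/-/v1/search?{query}"
    with urllib.request.urlopen(url) as resp:
        results = json.load(resp)["objects"]
    # Keep only packages whose *name* contains "claw"; the search also
    # matches descriptions and keywords, which we ignore here.
    return [o["package"] for o in results if "claw" in o["package"]["name"]]

if __name__ == "__main__":
    packages = search_claw()
    for pkg in sorted(packages, key=lambda p: p.get("date", ""), reverse=True):
        print(f'{pkg.get("date", "?"):<25} {pkg["name"]}')
    print(f"{len(packages)} packages with 'claw' in the name (from one search page)")
```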
I look at all this newness—the tool and its ecosystem, the name confusion—and I see great opportunity for attack and abuse from malicious actors:
- Typosquats on package and domain names and their variants (a simple name-distance check is sketched after this list).
- Clones from competitors to the tool maker. These are not necessarily malicious but are potentially unwanted.
- Clones and typosquats containing malware.
- The actual security vulnerabilities present in the tool itself, given its rapid development and short time in the wild.
- How are skills in the marketplace vetted—are people just pulling down any skill and how many of those skills are malicious?
- Prompt injection abuse potential.
- Security concerns for how people are standing up infrastructure to support running the tool, leaving ports exposed etc.
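On the typosquatting point, a surprising amount of low-hanging fruit can be caught with nothing more than edit distance against the handful of legitimate names. The sketch below is a minimal illustration of that idea; the “known good” set and the two-edit threshold are assumptions for the example, not an authoritative allow-list.

```python
"""Flag package names that sit suspiciously close to the real thing."""

KNOWN_GOOD = {"openclaw", "clawdbot"}  # assumed legitimate names for this sketch

def edit_distance(a: str, b: str) -> int:
    # Classic dynamic-programming Levenshtein distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def looks_like_squat(candidate: str) -> bool:
    # Close to a known name (one or two edits) but not an exact match.
    return any(0 < edit_distance(candidate, good) <= 2 for good in KNOWN_GOOD)

if __name__ == "__main__":
    for name in ["openclaw", "openclow", "open-claw", "clawdbot-cli", "clavvdbot"]:
        verdict = "suspicious" if looks_like_squat(name) else "ok/unrelated"
        print(f"{name:<15} {verdict}")
```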
In the Python ecosystem we have 5 packages so far: claws, soupclaw, claw-cli, pinkyclawd and clawpack. In NPM there are at least 98:
@agentdog/clawdbot, @agiflowai/clawdbot-mcp-plugin, @ch4r10teer41/clawpass, @clawdbot/bluebubbles, @clawdbot/lobster, @clawdbot/matrix, @clawdbot/msteams, @clawdbot/nextcloud-talk, @clawdbot/voice-call, @clawdbot/zalo, @clawdcc/cvm-benchmark, @clawdlets/plugin-cattle, @clawguard/core, @clawject/language-service, @clawspace/cli, @csuclawebservices/icons, @dexterai/clawdexter, @douglasdong/openclaw, @gabe-clawd/feishu, @gabe-clawd/feishu-webhook, @getfoundry/foundry-openclaw, @hoverlover/clawdbot-supermemory, @joshualelon/clawdbot-skill-flow, @jungjaehoon/clawdbot-mama, @kookapp/clawdbot-kook, @kookapp/clawdbot-plugin, @lotreace/clawd, @marshulll/openclaw-wecom, @max1874/openclaw-wecom, @mcinteerj/clawdbot-gmail, @mcinteerj/openclaw-gmail, @openclaw-china/dingtalk, @openclaw-china/feishu, @openclaw-china/shared, @openclaw-china/wecom, @qingchencloud/openclaw-zh, @theaux/clawdbot, @tobotorui/openclaw-wecom-channel, @umimoney/clawdbot-skill, @xwang152/claw-lark, claw-chat, clawcity, clawcloud, clawctl-cli, clawd, clawd-bridge, clawd-face, clawdbot, clawdbot-cli,
clawdbot-cn, clawdbot-dingtalk, clawdbot-elyments, clawdbot-memory-mapper, clawdbot-model-switch, clawdbot-penfield, clawdbot-ringcentral, clawdbot-sendblue, clawdfun, clawdhub, clawdis, clawdlets, clawdocs, clawdpad, clawdr, clawguard, clawguard-openclaw, clawhub, clawmarks, claworld, create-claw, faf-clawdbot, gewe-openclaw, openclaw, openclaw-agent, openclaw-agents, openclaw-aihubmix, openclaw-apple-notes, openclaw-board-installer, openclaw-cn, openclaw-content-kit, openclaw-enforce, openclaw-langcache, openclaw-loom, openclaw-manager, openclaw-memory-mapper, openclaw-mesh, openclaw-orchestrator, openclaw-penfield, openclaw-plugin-wecom, openclaw-plugins, openclaw-promitheus, openclawai, openclawbot, openclawd, openclaws, openclaws-bot, paws-and-claws-landing, secureclaw
The challenge for malware hunters lies in the sheer size of these “claw” packages, which, by their nature, are packed with many capabilities in order to be a useful AI assistant. As an example, consider the @dillobot NPM package, which claims to be a “Security-hardened fork of OpenClaw” with “Enterprise-grade security without sacrificing power”:

There is just so much code to sift through when triaging these packages, and the attack surface is large. We try to hypothesise the myriad ways an LLM could be tricked into carrying out malicious actions, e.g. through prompt injection or by proxying API calls, which seems to be a popular way to exfiltrate data. At present I can’t say for sure whether this dillobot package is OK or not (it’s not obviously malware, though I am wary of anything that claims to be security-hardened), and it is only 11 hours old at the time of writing.
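One cheap first-pass signal when the code volume is overwhelming is to list every hard-coded URL a package reaches out to and eyeball anything that is not an expected AI provider or registry, since proxying and exfiltration code usually needs somewhere to send data. The sketch below illustrates the idea; the file extensions and the allow-list of hosts are assumptions for the example, and a clean report is no guarantee a package is safe.

```python
"""List hard-coded URLs in a package's source as a quick triage signal."""
import re
import sys
from pathlib import Path
from urllib.parse import urlparse

URL_RE = re.compile(r"""https?://[^\s'"`)<>]+""")
# Hosts we would expect an AI assistant package to talk to (assumption).
EXPECTED_HOSTS = {"api.anthropic.com", "api.openai.com", "registry.npmjs.org"}

def hardcoded_urls(package_dir: str):
    for path in Path(package_dir).rglob("*"):
        if path.suffix not in {".js", ".ts", ".mjs", ".cjs", ".json", ".py"}:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for url in URL_RE.findall(text):
            host = urlparse(url).hostname or ""
            if host and host not in EXPECTED_HOSTS:
                yield path, host, url

if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else "."
    for path, host, url in hardcoded_urls(target):
        print(f"{path}: {host} -> {url}")
```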
Jason Meller at 1Password put out a blog post on 2nd February in which he reported that the most downloaded skill from the ClawHub marketplace was malicious, and that it was a campaign and not a one-off incident. I’m going to echo their sage advice:
If you are experimenting with OpenClaw, do not do it on a company device. Full stop.
– Jason Meller, 1Password, 2nd Feb 2026
ClawAtHome
You may be considering experimenting with OpenClaw yourself. It is understood that the way OpenClaw interacts with some AI providers could violate their Terms of Service (ToS), so be sure to check the details before proceeding. If, after double-checking compatibility with your AI supplier, you still think the technology sounds interesting, then go for it at your own discretion, but perhaps follow these safety precautions:
- Never run it on any company-owned or managed device.
- Do not give it any keys, credentials or access to any company system.
- Try to source equipment that can be dedicated to your OpenClaw experiment. This should be hardware which is new or has been cleared/reset, with a fresh OS installation, so that if something bad happens the impact should be minimal.
- Practise the principle of less is more. Be careful not to give the assistant too much access too soon. Consider incrementally adding new features and checking you are happy before moving on to the next. Remember that every integration you wire up is another avenue for abuse should your AI assistant become compromised.
- Try to vet the skills: how new are they, and is the author reputable?
- Consider using throw-away accounts so you don’t need to give it access to, for example, your emails, as part of your experiment.
Veracode Continues To Defend Enterprise Ecosystems
Whilst we do not currently monitor the ClawHub ecosystem, we are inspecting “claw”-related packages entering the ecosystems we do support. Rest assured that we are continuously improving our malware detection capabilities to protect our customers from the emerging threats relating to this new tool.