A Guide to Building a Docker Image That Runs OpenClaw Stably in a K8s Pod (Source-Build Edition)

张开发
2026/4/17 8:23:20 · 15 min read


Recently, DingDao ZhiLian (鼎道智联) and Lenovo jointly launched the Yoga AI mini smart mini-PC with DingClaw built in. This design makes using OpenClaw remarkably painless: no manual deployment or configuration, it works straight out of the box, which greatly lowers the barrier to entry. As a developer who works with smart hardware and containerized deployment year-round, I have found in practice that the flexibility of containerized deployment is critical for later product iteration. To let future smart hardware that integrates DingClaw extend its features more stably and flexibly in a Docker environment, and to meet the requirements of running on a k8s cloud, I recently dug into OpenClaw's Docker deployment options. This article records the pitfalls I hit while deploying OpenClaw on our company's k8s cloud, the process that finally got Docker running stably from source, and my reasoning along the way. I hope it is useful to fellow developers.

The official openclaw_zh Docker build process, and the problems I hit

The build process for the Chinese-localized OpenClaw image is as follows.

Pull the image:

```shell
docker pull justlikemaki/openclaw-docker-cn-im:latest
```

Create the file `docker-compose.yml`:

```yaml
version: "3.8"
services:
  openclaw-gateway:
    container_name: openclaw-cn-1
    image: ${OPENCLAW_IMAGE}
    entrypoint: ["/bin/bash", "/usr/local/bin/init.sh"]
    cap_add:
      - CHOWN
      - SETUID
      - SETGID
      - DAC_OVERRIDE
    # Optional: run the container as UID:GID, e.g. 1000:1000.
    # Defaults to root so that init.sh can fix permissions on the mounted
    # volume, then drop privileges before starting the gateway.
    user: ${OPENCLAW_RUN_USER:-0:0}
    environment:
      TZ: Asia/Shanghai
      HOME: /home/node
      TERM: xterm-256color
      # Model configuration
      MODEL_ID: ${MODEL_ID}
      BASE_URL: ${BASE_URL}
      API_KEY: ${API_KEY}
      API_PROTOCOL: ${API_PROTOCOL}
      CONTEXT_WINDOW: ${CONTEXT_WINDOW}
      MAX_TOKENS: ${MAX_TOKENS}
      # Channel configuration
      TELEGRAM_BOT_TOKEN: ${TELEGRAM_BOT_TOKEN}
      FEISHU_APP_ID: ${FEISHU_APP_ID}
      FEISHU_APP_SECRET: ${FEISHU_APP_SECRET}
      DINGTALK_CLIENT_ID: ${DINGTALK_CLIENT_ID}
      DINGTALK_CLIENT_SECRET: ${DINGTALK_CLIENT_SECRET}
      DINGTALK_ROBOT_CODE: ${DINGTALK_ROBOT_CODE}
      DINGTALK_CORP_ID: ${DINGTALK_CORP_ID}
      DINGTALK_AGENT_ID: ${DINGTALK_AGENT_ID}
      QQBOT_APP_ID: ${QQBOT_APP_ID}
      QQBOT_CLIENT_SECRET: ${QQBOT_CLIENT_SECRET}
      # WeCom (WeChat Work) configuration
      WECOM_TOKEN: ${WECOM_TOKEN}
      WECOM_ENCODING_AES_KEY: ${WECOM_ENCODING_AES_KEY}
      # Workspace configuration
      WORKSPACE: ${WORKSPACE}
      # Gateway configuration
      OPENCLAW_GATEWAY_TOKEN: ${OPENCLAW_GATEWAY_TOKEN}
      OPENCLAW_GATEWAY_BIND: ${OPENCLAW_GATEWAY_BIND}
      OPENCLAW_GATEWAY_PORT: ${OPENCLAW_GATEWAY_PORT}
      OPENCLAW_BRIDGE_PORT: ${OPENCLAW_BRIDGE_PORT}
      OPENCLAW_GATEWAY_ALLOW_INSECURE: "true"
      NODE_TLS_REJECT_UNAUTHORIZED: "0"
    volumes:
      - ${OPENCLAW_DATA_DIR}:/home/node/.openclaw
      # An anonymous volume masks the extensions directory, so the plugins
      # preinstalled in the image are used.
      - /home/node/.openclaw/extensions
    ports:
      - ${OPENCLAW_GATEWAY_PORT}:18789
      - ${OPENCLAW_BRIDGE_PORT}:18790
    init: true
    #restart: unless-stopped
    restart: "no"
```

Create the file `.env`:

```shell
# Example OpenClaw Docker environment variables.
# Copy this file to .env and adjust the values.

# Docker image
#OPENCLAW_IMAGE=openclaw-gateway:1
OPENCLAW_IMAGE=openclaw-gateway:1

# Model configuration
MODEL_ID=qwen-plus-latest
BASE_URL=https://dashscope.aliyuncs.com/compatible-mode/v1
API_KEY=ak_xxxx
# API protocol: openai-completions or anthropic-messages
# openai-completions: OpenAI protocol (for OpenAI, Gemini, etc.)
# anthropic-messages: Claude protocol (for Claude models; supports Prompt Caching)
API_PROTOCOL=openai-completions
# Model context window size
CONTEXT_WINDOW=200000
# Maximum output tokens
MAX_TOKENS=8192

# Telegram (optional; leave empty to disable)
TELEGRAM_BOT_TOKEN=
# Feishu (optional; leave empty to disable)
FEISHU_APP_ID=xxxx
FEISHU_APP_SECRET=xxxx
# DingTalk (optional; leave empty to disable)
DINGTALK_CLIENT_ID=
DINGTALK_CLIENT_SECRET=
DINGTALK_ROBOT_CODE=
DINGTALK_CORP_ID=
DINGTALK_AGENT_ID=
# QQ bot (optional; leave empty to disable)
QQBOT_APP_ID=
QQBOT_CLIENT_SECRET=
# WeCom (optional; leave empty to disable)
WECOM_TOKEN=
WECOM_ENCODING_AES_KEY=

# Workspace (do not change)
WORKSPACE=/home/node/.openclaw/workspace

# Mounted directory (change to match your host)
# OpenClaw data directory: config files, workspace, and all other data
OPENCLAW_DATA_DIR=/home/liulj/.openclaw

# Optional: container user UID:GID.
# The default 0:0 (root) lets init.sh fix the permissions of the mounted
# directory, then drop to the node user before starting the service.
# To align with the host user, set 1000:1000, or $(id -u):$(id -g) on Linux.
OPENCLAW_RUN_USER=0:0

# Gateway configuration
## Gateway token used for authentication; change as needed
OPENCLAW_GATEWAY_TOKEN=123456
#OPENCLAW_GATEWAY_BIND=lan
#OPENCLAW_GATEWAY_BIND=loopback
#OPENCLAW_GATEWAY_BIND=custom
#OPENCLAW_GATEWAY_HOST=0.0.0.0
OPENCLAW_GATEWAY_PORT=18789
OPENCLAW_BRIDGE_PORT=18790
#OPENCLAW_GATEWAY_URL=ws://127.0.0.1:18789
#OPENCLAW_GATEWAY_PAIRING_REQUIRED=false
OPENCLAW_GATEWAY_BIND=lan
OPENCLAW_GATEWAY_URL=ws://127.0.0.1:18789
```

Create the file `nameenv`:

```shell
DOCKER_NAME=openclaw-gateway
```

Build the container:

```shell
docker-compose up -d
```

Start and enter the container:

```shell
source nameenv
docker start ${DOCKER_NAME}
docker exec -it ${DOCKER_NAME} /bin/bash -l
```

The problem I hit in k8s: when this container runs in a k8s pod, the command `openclaw devices list` fails to execute. Device pairing therefore cannot proceed, and OpenClaw cannot be reached from outside the pod. Since the source code of this localized OpenClaw build is not available, I was never able to find the root cause.

Building the image from source and running it successfully on a k8s cloud

To debug the pod problem above, I downloaded the OpenClaw source and debugged it directly. OpenClaw compiled from source ran normally in a k8s pod, and the problem could not be reproduced. I therefore finally adopted a source-built OpenClaw
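Since the goal is to run the same container inside a k8s pod, the compose service above can be translated into a bare Pod manifest. The following is only a sketch under my own naming assumptions: the `openclaw-env` Secret (holding the same keys as the `.env` file), the image tag, and the `openclaw-data` PVC name are not from the original setup and must be adapted.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: openclaw-gateway
spec:
  containers:
    - name: openclaw-gateway
      image: openclaw-gateway:1          # assumed image tag
      command: ["/bin/bash", "/usr/local/bin/init.sh"]
      # Mirrors the cap_add list from docker-compose.yml
      securityContext:
        capabilities:
          add: ["CHOWN", "SETUID", "SETGID", "DAC_OVERRIDE"]
      # Assumed Secret carrying the same keys as the .env file
      envFrom:
        - secretRef:
            name: openclaw-env
      env:
        - name: TZ
          value: Asia/Shanghai
        - name: HOME
          value: /home/node
      ports:
        - containerPort: 18789           # gateway port
        - containerPort: 18790           # bridge port
      volumeMounts:
        - name: openclaw-data
          mountPath: /home/node/.openclaw
  volumes:
    - name: openclaw-data
      persistentVolumeClaim:
        claimName: openclaw-data         # assumed PVC name
```

In a real deployment you would more likely wrap this in a Deployment plus a Service exposing ports 18789/18790, but a plain Pod is the smallest unit for reproducing the `openclaw devices list` behavior described above.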
running in Docker and in k8s pods. The source-based Docker build process is as follows.

Download the source:

```shell
mkdir -p /opt/openclaw
cd /opt/openclaw
git clone https://github.com/openclaw/openclaw.git
```

Note: the version used here is commit 98125e9982b712e129c4896891cc2e48ef2485a.

Set up the build environment:

```shell
apt install -y build-essential cmake git pkg-config wget unzip
apt install -y curl git ca-certificates build-essential jq wget python3 python3-pip python3-venv nodejs npm
```

Build OpenClaw:

```shell
pnpm build
```

Run OpenClaw. Create `/usr/local/bin/init.sh`:

```shell
#!/bin/bash
#rm -f /var/run/openclaw.pid
#/usr/local/bin/service_openclaw.sh --stop

export PNPM_HOME=/root/.local/share/pnpm
case ":$PATH:" in
  *":$PNPM_HOME:"*) ;;
  *) export PATH="$PNPM_HOME:$PATH" ;;
esac

openclaw_which=$(which openclaw)
env_var=$(env)
#echo "openclaw which [${openclaw_which}]" >> /var/log/openclaw.log
#echo "open env [${env_var}]" >> /var/log/openclaw.log

export OPENCLAW_STATE_DIR=/home/node/.openclaw
export OPENCLAW_WORKSPACE=/home/node/.openclaw/workspace

# Start the gateway in the background; keep PID 1 alive for the container.
openclaw gateway > /var/log/openclaw_running.log 2>&1 &

while true; do
  sleep 3600
done
```

Edit the configuration file `/home/node/.openclaw/openclaw.json`:

```json
{
  "meta": {
    "lastTouchedVersion": "2026.3.13",
    "lastTouchedAt": "2026-03-30T06:20:32.663Z"
  },
  "update": { "checkOnStart": false },
  "browser": {
    "executablePath": "/usr/bin/chromium",
    "headless": true,
    "noSandbox": true,
    "defaultProfile": "openclaw"
  },
  "models": {
    "mode": "merge",
    "providers": {
      "default": {
        "baseUrl": "https://dashscope.aliyuncs.com/compatible-mode/v1",
        "apiKey": "sk-xxxxxx",
        "api": "openai-completions",
        "models": [
          {
            "id": "qwen-plus-latest",
            "name": "qwen-plus-latest",
            "reasoning": false,
            "input": ["text", "image"],
            "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
            "contextWindow": 200000,
            "maxTokens": 8192
          }
        ]
      }
    }
  },
  "agents": {
    "defaults": {
      "model": { "primary": "default/qwen-plus-latest" },
      "imageModel": { "primary": "default/qwen-plus-latest" },
      "workspace": "/home/node/.openclaw/workspace",
      "compaction": { "mode": "safeguard" },
      "elevatedDefault": "full",
      "maxConcurrent": 4,
      "subagents": { "maxConcurrent": 8 },
      "sandbox": { "mode": "off" }
    }
  },
  "tools": {
    "profile": "full",
    "sessions": { "visibility": "all" },
    "fs": { "workspaceOnly": true }
  },
  "messages": {
    "ackReactionScope": "group-mentions",
    "tts": { "edge": { "voice": "zh-CN-XiaoxiaoNeural" } }
  },
  "commands": {
    "native": "auto",
    "nativeSkills": "auto",
    "restart": true,
    "ownerDisplay": "raw"
  },
  "channels": {},
  "gateway": {
    "port": 18789,
    "mode": "local",
    "bind": "lan",
    "controlUi": {
      "allowedOrigins": ["http://localhost:18789", "http://127.0.0.1:18789"],
      "allowInsecureAuth": true,
      "dangerouslyDisableDeviceAuth": false
    },
    "auth": { "mode": "token", "token": "123456" }
  },
  "memory": {
    "backend": "qmd",
    "citations": "auto",
    "qmd": {
      "command": "/usr/local/bin/qmd",
      "includeDefaultMemory": true,
      "paths": [
        { "path": "/home/node/.openclaw/workspace", "name": "workspace", "pattern": "**/*.md" }
      ],
      "sessions": { "enabled": true },
      "update": { "interval": "5m", "debounceMs": 15000, "onBoot": true },
      "limits": { "maxResults": 16, "timeoutMs": 8000 }
    }
  },
  "plugins": {
    "allow": [],
    "entries": {
      "feishu": { "enabled": false },
      "dingtalk": { "enabled": false },
      "qqbot": { "enabled": false },
      "wecom": { "enabled": false },
      "openclaw-lark": { "enabled": false }
    },
    "installs": {}
  }
}
```

Run OpenClaw:

```shell
/usr/local/bin/init.sh
```

Note: because `/usr/local/bin/init.sh` is loaded automatically every time the Docker container starts, OpenClaw now runs successfully in a k8s pod.

Summary

This source-based Docker adaptation of OpenClaw also lays the groundwork for products like the DingDao ZhiLian / Lenovo Yoga AI mini that integrate DingClaw: smart mini-PCs of this kind will very likely need feature extension and environment adaptation in containerized environments, and k8s compatibility is unavoidable. I hit plenty of pitfalls along the way, such as the official image failing to run `openclaw devices list` inside a k8s pod; without the localized build's source code I could not dig into the root cause, but switching to a source-compiled deployment not only made the problem disappear, it also made it easier to adjust the configuration and adapt to real product needs. As a developer, I find this kind of pitfall-and-retrospective work, driven by actual product requirements, especially valuable: it solved the immediate problem of running OpenClaw on k8s and accumulated hands-on experience for the containerized extension of DingClaw-based products. If you are doing similar containerized deployments of AI components on smart hardware, I hope these steps and ideas provide a useful reference, and I welcome discussion of other deployment and optimization approaches.
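As a final appendix: once init.sh has started the gateway inside the pod, a quick TCP reachability check confirms the gateway port is actually listening. `check_gateway` below is a hypothetical helper of my own, not part of OpenClaw; it uses bash's `/dev/tcp` pseudo-device and the port 18789 from the configuration above.

```shell
# Probe a TCP port; succeeds only if something is listening.
# Usage: check_gateway [host] [port]  (defaults: 127.0.0.1 18789)
check_gateway() {
  local host="${1:-127.0.0.1}" port="${2:-18789}"
  # /dev/tcp/<host>/<port> is a bash builtin pseudo-device; the redirect
  # attempts a TCP connect. timeout guards against a hanging connect.
  if timeout 3 bash -c ">/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "gateway reachable on ${host}:${port}"
  else
    echo "gateway NOT reachable on ${host}:${port}" >&2
    return 1
  fi
}
```

Run it inside the pod, e.g. `kubectl exec -it <pod> -- bash -lc 'check_gateway'` after sourcing the function, or bake it into a k8s readiness probe.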
