PraisonAI has an SSRF bypass
The URL validation logic in PraisonAI contains a flaw that attackers can bypass, enabling server-side request forgery (SSRF).
The fix for CVE-2026-40315 added input validation to SQLiteConversationStore only. Nine sibling backends — MySQL, PostgreSQL, async SQLite/MySQL/PostgreSQL, Turso, SingleStore, Supabase, SurrealDB — pass table_prefix straight into f-string SQL. Same root cause, same code pattern, same exploitation. 52 unvalidated injection points across the codebase. postgres.py additionally accepts an unvalidated schema parameter used directly in DDL.
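The missing validation can be sketched as a shared identifier check applied before any f-string interpolation. This is an illustrative helper modeled on the pattern the SQLite-only fix used, not PraisonAI's actual API; the names validate_identifier and create_table_sql are assumptions.

```python
import re

# Allow only [A-Za-z_][A-Za-z0-9_]* identifiers so f-string SQL stays injection-free.
_IDENT_RE = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

def validate_identifier(name: str, max_len: int = 63) -> str:
    """Reject anything that is not a plain SQL identifier (hypothetical helper)."""
    if not name or len(name) > max_len or not _IDENT_RE.match(name):
        raise ValueError(f"invalid SQL identifier: {name!r}")
    return name

def create_table_sql(table_prefix: str) -> str:
    # Safe to interpolate only AFTER validation; row values still use placeholders.
    prefix = validate_identifier(table_prefix)
    return (
        f"CREATE TABLE IF NOT EXISTS {prefix}_conversations "
        "(id TEXT PRIMARY KEY, data TEXT)"
    )
```

A helper like this, imported by all ten backends, would close the sibling injection points with the same one-line change the SQLite fix made.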
web_crawl's httpx fallback path passes user-supplied URLs directly to httpx.AsyncClient.get() with follow_redirects=True and no host validation. An LLM agent tricked into crawling an internal URL can reach cloud metadata endpoints (169.254.169.254), internal services, and localhost. The response content is returned to the agent and may appear in output visible to the attacker. This fallback is the default crawl path on a fresh PraisonAI installation (no Tavily key, no Crawl4AI installed).
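Because follow_redirects=True re-opens the hole after the first hop, a fix has to validate every hop, not just the initial URL. A minimal sketch, with fetch standing in for a single non-redirecting httpx.AsyncClient.get() call (all names here are illustrative, not PraisonAI code):

```python
from urllib.parse import urljoin, urlparse

# Minimal denylist for the sketch; a real deployment would resolve DNS and
# check address ranges rather than match literal hostnames.
PRIVATE_HOSTS = {"localhost", "127.0.0.1", "::1", "169.254.169.254"}

def check_hop(url: str) -> None:
    parts = urlparse(url)
    if parts.scheme not in ("http", "https") or parts.hostname in PRIVATE_HOSTS:
        raise ValueError(f"blocked hop: {url}")

def safe_fetch(url: str, fetch, max_redirects: int = 5):
    """fetch(url) -> (status, headers, body); redirects are followed by hand
    so each Location target is re-validated before it is requested."""
    for _ in range(max_redirects + 1):
        check_hop(url)
        status, headers, body = fetch(url)
        if status in (301, 302, 303, 307, 308) and "location" in headers:
            url = urljoin(url, headers["location"])  # next hop gets re-checked
            continue
        return body
    raise ValueError("too many redirects")
```

The key design point is disabling automatic redirect following and looping manually, so an attacker-controlled 302 to 169.254.169.254 is caught.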
The list_files() tool in FileTools validates the directory parameter against workspace boundaries via _validate_path(), but passes the pattern parameter directly to Path.glob() without any validation. Since Python's Path.glob() supports .. path segments, an attacker can use relative path traversal in the glob pattern to enumerate arbitrary files outside the workspace, obtaining file metadata (existence, name, size, timestamps) for any path on the filesystem.
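The missing guard amounts to validating the pattern the same way the directory is validated. A hedged sketch (validate_glob_pattern is a hypothetical name, not part of FileTools):

```python
from pathlib import PurePosixPath

def validate_glob_pattern(pattern: str) -> str:
    """Reject absolute patterns and any '..' segment before calling Path.glob()."""
    p = PurePosixPath(pattern)
    if p.is_absolute() or ".." in p.parts:
        raise ValueError(f"unsafe glob pattern: {pattern!r}")
    return pattern
```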
The execute_command function in shell_tools.py calls os.path.expandvars() on every command argument at line 64, manually re-implementing shell-level environment variable expansion despite using shell=False (line 88) for security. This allows exfiltration of secrets stored in environment variables (database credentials, API keys, cloud access keys). The approval system displays the unexpanded $VAR references to human reviewers, creating a deceptive approval where the displayed command differs from what actually executes.
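The display/execution mismatch is easy to reproduce in isolation. This is a standalone repro of the pattern, not the shell_tools.py code itself; with shell=False there is no shell to expand variables, so the only reason $VAR expands is the explicit expandvars() call:

```python
import os

os.environ["DB_PASSWORD"] = "s3cret"  # stand-in for a real secret

displayed = ["echo", "$DB_PASSWORD"]                    # what the approver sees
executed = [os.path.expandvars(a) for a in displayed]   # what actually runs

# The fix is simply to drop the expandvars() step: argv then runs exactly
# as approved, with "$DB_PASSWORD" as inert literal text.
safe = list(displayed)
```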
read_skill_file() in skill_tools.py allows reading arbitrary files from the filesystem by accepting an unrestricted skill_path parameter. Unlike file_tools.read_file which enforces workspace boundary confinement, and unlike run_skill_script which requires critical-level approval, read_skill_file has neither protection. An agent influenced by prompt injection can exfiltrate sensitive files without triggering any approval prompt.
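The workspace confinement that read_skill_file() lacks can be sketched as a resolve-then-compare check, modeled on the boundary validation the report attributes to file_tools.read_file (the helper name is an assumption):

```python
from pathlib import Path

def confine_to_workspace(workspace: str, requested: str) -> Path:
    """Resolve the requested path and refuse anything outside the workspace."""
    base = Path(workspace).resolve()
    target = (base / requested).resolve()   # collapses ../ segments
    if not target.is_relative_to(base):     # Python 3.9+
        raise PermissionError(f"path escapes workspace: {requested!r}")
    return target
```

Resolving before comparing is what defeats ../ sequences; a naive string-prefix check on the unresolved path would not.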
The web_crawl() function in praisonaiagents/tools/web_crawl_tools.py accepts arbitrary URLs from AI agents with zero validation. No scheme allowlisting, hostname/IP blocklisting, or private network checks are applied before fetching. This allows an attacker (or prompt injection in crawled content) to force the agent to fetch cloud metadata endpoints, internal services, or local files via file:// URLs.
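The three absent checks (scheme allowlist, IP blocklist, private-network detection) fit in one small validator. A sketch using only the standard library; is_public_url is an illustrative name, and a production version would also pin the resolved address to avoid DNS rebinding:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_public_url(url: str) -> bool:
    """Allow only http(s) URLs whose host resolves exclusively to public addresses."""
    parts = urlparse(url)
    if parts.scheme not in ("http", "https") or not parts.hostname:
        return False  # blocks file://, gopher://, etc.
    try:
        infos = socket.getaddrinfo(parts.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        # Covers 127.0.0.0/8, 10/8, 172.16/12, 192.168/16, and
        # 169.254/16 (the link-local range used by cloud metadata).
        if addr.is_private or addr.is_loopback or addr.is_link_local or addr.is_reserved:
            return False
    return True
```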
The memory hooks executor in praisonaiagents passes a user-controlled command string directly to subprocess.run() with shell=True at src/praisonai-agents/praisonaiagents/memory/hooks.py lines 303 to 305. No sanitization, no shlex.quote(), no character filter, and no allowlist check exists anywhere in this file. Shell metacharacters including semicolons, pipes, ampersands, backticks, dollar-sign substitutions, and newlines are interpreted by /bin/sh before the intended command executes. Two independent attack surfaces exist. The first is via pre_run_command and …
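The standard mitigation for this class of flaw is to tokenize the configured command with shlex and drop shell=True entirely. A sketch of that fix (run_hook is a hypothetical name, not the hooks module's API):

```python
import shlex
import subprocess

def run_hook(command: str) -> subprocess.CompletedProcess:
    """Run a configured hook without a shell: metacharacters like ; | $() and
    backticks become inert argv strings instead of shell syntax."""
    argv = shlex.split(command)
    return subprocess.run(argv, shell=False, capture_output=True, text=True, timeout=30)
```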
The AGUI endpoint (POST /agui) has no authentication and hardcodes Access-Control-Allow-Origin: * on all responses. Combined with Starlette/FastAPI's Content-Type-agnostic JSON parsing, any website a victim visits can silently trigger arbitrary agent execution against a locally-running AGUI server and read the full response, including tool execution results and potentially sensitive data from the victim's environment.
The approval system in PraisonAI Agents caches tool approval decisions by tool name only, not by invocation arguments. Once a user approves execute_command for any command (e.g., ls -la), all subsequent execute_command calls in that execution context bypass the approval prompt entirely. Combined with os.environ.copy() passing all process environment variables to subprocesses, this allows an LLM agent (potentially via prompt injection) to silently exfiltrate API keys and credentials without further …
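The fix for the caching half of this finding is to key approvals by tool name plus canonicalized arguments rather than tool name alone. An illustrative sketch (these function names are assumptions, not the actual PraisonAI approval API):

```python
import hashlib
import json

def approval_key(tool_name: str, kwargs: dict) -> str:
    """Stable cache key over the tool name AND its exact arguments."""
    canon = json.dumps(kwargs, sort_keys=True, default=str)
    return f"{tool_name}:{hashlib.sha256(canon.encode()).hexdigest()}"

_approved: set = set()

def record_approval(tool_name: str, kwargs: dict) -> None:
    _approved.add(approval_key(tool_name, kwargs))

def is_approved(tool_name: str, kwargs: dict) -> bool:
    return approval_key(tool_name, kwargs) in _approved
```

With this keying, approving `ls -la` no longer pre-approves an arbitrary exfiltration command issued under the same tool name.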
PraisonAI automatically imports ./tools.py from the current working directory when launching certain components. This includes call.py, tool_resolver.py, and CLI tool-loading paths. A malicious tools.py placed in the process working directory is executed immediately, allowing arbitrary Python code execution in the host environment.
praisonai workflow run <file.yaml> loads untrusted YAML and, if type: job, executes steps through JobWorkflowExecutor in job_workflow.py. This supports run: (shell command execution via subprocess.run()), script: (inline Python execution via exec()), and python: (arbitrary Python script execution). A malicious YAML file can execute arbitrary host commands.
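One plausible hardening is an audit pass over the parsed workflow that refuses code-execution step keys unless the caller explicitly opts in. A sketch over the already-parsed YAML dict (so no YAML dependency here); audit_job_steps and allow_exec are illustrative names, not PraisonAI's:

```python
# Step keys the report identifies as code-execution sinks.
EXEC_KEYS = {"run", "script", "python"}

def audit_job_steps(workflow: dict, allow_exec: bool = False) -> None:
    """Raise before execution if any step of a type: job workflow can run code."""
    if workflow.get("type") != "job":
        return
    for i, step in enumerate(workflow.get("steps", [])):
        dangerous = EXEC_KEYS & set(step)
        if dangerous and not allow_exec:
            raise PermissionError(
                f"step {i} uses {sorted(dangerous)}; pass allow_exec=True only for trusted files"
            )
```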
praisonai browser start exposes the browser bridge on 0.0.0.0 by default, and its /ws endpoint accepts websocket clients that omit the Origin header entirely. An unauthenticated network client can connect as a fake controller, send start_session, cause the server to forward start_automation to another connected browser-extension websocket, and receive the resulting action/status stream back over that hijacked session. This allows unauthorized remote use of a connected browser automation session without …
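The missing gate is small: an absent or unrecognized Origin header should be rejected, not waved through. A sketch of such a check (the allowlist values and function name are assumptions, not the bridge's real configuration):

```python
# Only these origins may open a controller websocket in this sketch.
ALLOWED_ORIGINS = {"http://127.0.0.1:8000", "http://localhost:8000"}

def origin_allowed(headers: dict) -> bool:
    """Reject websocket handshakes with a missing or unlisted Origin header."""
    origin = headers.get("origin")
    if origin is None:  # the flaw above: an absent Origin must not pass
        return False
    return origin in ALLOWED_ORIGINS
```

Origin checking only stops browser-based and spoofing-averse clients; binding to 127.0.0.1 instead of 0.0.0.0 and adding a session token would be needed against arbitrary network clients.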
execute_code() in praisonaiagents.tools.python_tools defaults to sandbox_mode="sandbox", which runs user code in a subprocess wrapped with a restricted builtins dict and an AST-based blocklist. The AST blocklist embedded inside the subprocess wrapper (blocked_attrs, line 143 of python_tools.py) contains only 11 attribute names — a strict subset of the 30+ names blocked in the direct-execution path. The four attributes that form a frame-traversal chain out of the sandbox are all absent from …
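An AST attribute blocklist of this kind can be reproduced in a few lines. The checker below is illustrative, not python_tools.py, and the attribute names are commonly cited frame-traversal attributes chosen as an assumption (the report's exact four-name list is truncated above):

```python
import ast

# Assumed examples of frame-traversal attribute names a blocklist should cover.
FRAME_TRAVERSAL = {"f_back", "f_globals", "__globals__", "gi_frame"}

def find_blocked_attrs(source: str, blocked: set) -> set:
    """Return every blocked attribute name referenced anywhere in `source`."""
    tree = ast.parse(source)
    return {
        node.attr
        for node in ast.walk(tree)
        if isinstance(node, ast.Attribute) and node.attr in blocked
    }
```

The finding's point follows directly: any name missing from the subprocess wrapper's 11-entry set is simply never returned by a check like this, so the corresponding escape goes undetected.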
The MultiAgentLedger and MultiAgentMonitor components in the provided code exhibit vulnerabilities that can lead to context leakage and arbitrary file operations. Specifically: (1) memory state leakage via agent ID collision — MultiAgentLedger uses a dictionary to store ledgers by agent ID without enforcing uniqueness, so agents with the same ID share ledger instances and can leak sensitive context data; (2) path traversal in MultiAgentMonitor — the MultiAgentMonitor constructs file …
run_python() in praisonai constructs a shell command string by interpolating user-controlled code into python3 -c "<code>" and passing it to subprocess.run(…, shell=True). The escaping logic only handles \ and ", leaving $() and backtick substitutions unescaped, allowing arbitrary OS command execution before Python is invoked.
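A standalone repro of this flaw class, with the subprocess call swapped to an argv list as the fix (naive_escape mirrors the described escaping, not the exact praisonai code):

```python
import subprocess
import sys

def naive_escape(code: str) -> str:
    """Escapes only backslash and double quote, as described above."""
    return code.replace("\\", "\\\\").replace('"', '\\"')

payload = 'print("$(id)")'
# With shell=True the command would be:  python3 -c "print(\"$(id)\")"
# and sh would run `id` via command substitution BEFORE Python starts.
# Passing an argv list with no shell removes that interpretation step:
result = subprocess.run([sys.executable, "-c", payload], capture_output=True, text=True)
```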
execute_code() in praisonai-agents runs attacker-controlled Python inside a three-layer sandbox that can be fully bypassed by passing a str subclass with an overridden startswith() method to the _safe_getattr wrapper, achieving arbitrary OS command execution on the host.
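The bypass class is reproducible without any PraisonAI code. The guard below is an illustrative stand-in for a _safe_getattr-style check, not the real wrapper; the point is that calling the method on the object trusts attacker-controlled dispatch, while the unbound str.startswith cannot be overridden:

```python
class SneakyName(str):
    """A str subclass that lies about its own prefix."""
    def startswith(self, *args, **kwargs):
        return False  # always claims "not a dunder"

def naive_guard(name) -> bool:
    # Dispatches to the subclass override -- attacker-controlled.
    return not name.startswith("__")

def robust_guard(name) -> bool:
    # Unbound call forces CPython's real str implementation.
    return not str.startswith(str(name), "__")
```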
FileTools.download_file() in praisonaiagents validates the destination path but performs no validation on the url parameter, passing it directly to httpx.stream() with follow_redirects=True. An attacker who controls the URL can reach any host accessible from the server including cloud metadata services and internal network services.