Advisories for the PyPI instructlab package

2026

InstructLab vulnerable to Path Traversal

A flaw was found in InstructLab. A local attacker could exploit a path traversal vulnerability in the chat session handler by manipulating the logs_dir parameter. This allows the attacker to create new directories and write files to arbitrary locations on the system, potentially leading to unauthorized data modification or disclosure.
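The standard mitigation for this class of flaw is to resolve the attacker-controlled value against a fixed base directory and reject any result that escapes it. The sketch below is hypothetical (the function name `safe_logs_path` and the base directory are illustrative, not InstructLab's actual code), but it shows the check a patched handler would typically apply to a parameter like logs_dir:

```python
import os

def safe_logs_path(logs_dir: str, base: str = "/var/lib/instructlab/logs") -> str:
    """Resolve logs_dir under base and refuse any path that escapes it.

    Hypothetical hardening sketch: realpath() collapses "..", symlinks,
    and redundant separators before the containment check.
    """
    resolved_base = os.path.realpath(base)
    candidate = os.path.realpath(os.path.join(resolved_base, logs_dir))
    # Require candidate to sit strictly inside the base directory.
    if not candidate.startswith(resolved_base + os.sep):
        raise ValueError(f"path traversal detected: {logs_dir!r}")
    return candidate
```

With this guard, a value such as `../../etc` resolves outside the base directory and is rejected instead of being used to create directories or write files.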

InstructLab Includes Functionality from Untrusted Control Sphere

A flaw was found in InstructLab. The linux_train.py script hardcodes trust_remote_code=True when loading models from Hugging Face. This allows a remote attacker to achieve arbitrary Python code execution by convincing a user to run ilab train/download/generate with a specially crafted malicious model from the Hugging Face Hub. This vulnerability can lead to complete system compromise.
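In the transformers library, trust_remote_code=True tells from_pretrained() to execute Python files shipped inside the model repository, which is why hardcoding it is dangerous. A common mitigation is to gate that flag behind an explicit operator allowlist instead of enabling it unconditionally. The helper below is an illustrative sketch (the function name `check_remote_code_policy` and the allowlist mechanism are assumptions, not InstructLab's fix):

```python
def check_remote_code_policy(repo_id: str,
                             trust_remote_code: bool,
                             allowlist: frozenset = frozenset()) -> bool:
    """Return the trust_remote_code value to pass to the model loader.

    Hypothetical policy gate: remote code execution is only permitted
    for repos the operator has explicitly vetted and allowlisted.
    """
    if trust_remote_code and repo_id not in allowlist:
        raise PermissionError(
            f"refusing trust_remote_code=True for unvetted repo {repo_id!r}"
        )
    return trust_remote_code

# The vetted flag would then be forwarded to the loader, e.g.:
#   AutoModelForCausalLM.from_pretrained(repo_id,
#       trust_remote_code=check_remote_code_policy(repo_id, flag, allowlist))
```

The design point is that the default is deny: code from the Hub runs only when both the caller requests it and the operator has pre-approved that specific repository.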