0.1.2 • Published 1 year ago
promptsafe v0.1.2
What is PromptSafe?
PromptSafe helps protect Node.js and TypeScript backend applications against prompt injection and related GPT prompt attacks.
PromptSafe is still at a very early stage. You can stay up to date with new releases by clicking Watch -> Custom -> Releases in the top right corner of the repository.
Project Goals
The goal of the PromptSafe project is to provide comprehensive sanitization and validation for potentially unsafe GPT prompts and inputs.
The following is a non-exhaustive list of proposed features:
- Deny lists
- Allow lists
- Content filtering
- Input length limiting
- Input normalization
- Input obfuscation
- Output encoding
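To make the proposed features above concrete, here is a minimal sketch of what deny-list checking, input length limiting, and input normalization could look like. The names used here (`sanitizePrompt`, `DENY_LIST`, `MAX_LENGTH`) are illustrative assumptions and are not PromptSafe's actual API.

```typescript
// Illustrative sketch only: these names are hypothetical, not PromptSafe's API.
const DENY_LIST: RegExp[] = [
  /ignore (all )?previous instructions/i,
  /reveal (the )?system prompt/i,
];

const MAX_LENGTH = 500; // input length limit, in characters

interface SanitizeResult {
  ok: boolean;
  value: string;
  reason?: string;
}

function sanitizePrompt(input: string): SanitizeResult {
  // Input normalization: collapse runs of whitespace and trim the ends.
  const normalized = input.replace(/\s+/g, " ").trim();

  // Input length limiting: reject overly long inputs outright.
  if (normalized.length > MAX_LENGTH) {
    return { ok: false, value: normalized, reason: "too long" };
  }

  // Deny list: reject inputs containing known injection phrases.
  for (const pattern of DENY_LIST) {
    if (pattern.test(normalized)) {
      return { ok: false, value: normalized, reason: "denied phrase" };
    }
  }

  return { ok: true, value: normalized };
}

console.log(sanitizePrompt("What is the capital of France?").ok);      // true
console.log(sanitizePrompt("Please IGNORE previous instructions").ok); // false
```

An allow list would work the same way with the logic inverted: reject anything that does not match an approved pattern. The right combination of checks depends on how much control you have over the expected shape of user input.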