OpenAI Moderations API

The Moderations API checks whether text or images are potentially harmful. It classifies content across several categories, including harassment, hate, sexual content, self-harm, violence, and illicit activity. The endpoint is free to use, and the omni-moderation-latest model accepts multi-modal (text and image) inputs.
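A minimal sketch of how a request to the endpoint can be assembled and a response interpreted. The request shape (a `model` field plus an `input` array of `text` and `image_url` parts) follows the API's documented format; the sample response below is illustrative, not real model output, and the helper names are my own.

```python
import json

# Endpoint documented by the API; requests require an Authorization header
# with a bearer API key (omitted here).
MODERATION_URL = "https://api.openai.com/v1/moderations"

def build_request(text, image_url=None):
    """Build the JSON body for a (possibly multi-modal) moderation request."""
    inputs = [{"type": "text", "text": text}]
    if image_url:
        inputs.append({"type": "image_url", "image_url": {"url": image_url}})
    return {"model": "omni-moderation-latest", "input": inputs}

def flagged_categories(response):
    """Return the names of categories the moderation response flagged."""
    result = response["results"][0]
    return [name for name, hit in result["categories"].items() if hit]

# An illustrative response shaped like the API's: per-category booleans
# plus per-category confidence scores.
sample_response = {
    "results": [{
        "flagged": True,
        "categories": {"violence": True, "harassment": False},
        "category_scores": {"violence": 0.91, "harassment": 0.03},
    }]
}

print(json.dumps(build_request("some user text", "https://example.com/img.png")))
print(flagged_categories(sample_response))  # -> ['violence']
```

In practice the body would be POSTed to the endpoint with an HTTP client or the official SDK; separating request construction from response parsing keeps both halves easy to test without network access.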