Overview
Klyra provides AI-powered moderation models that analyze and flag potentially harmful content across multiple modalities. Our models are continually trained on diverse datasets to ensure high accuracy and minimal bias.
Available Models
Text Moderation
Our text moderation model analyzes text in over 50 languages and flags harmful or inappropriate content.
Supported Categories:
- Toxic: Hateful, aggressive, or insulting content
- Harassment: Bullying, threats, or intimidation
- Self-harm: Content promoting self-harm or suicide
- Sexual: Sexually explicit or adult content
- Violence: Graphic or violent descriptions
- Hate speech: Content targeting protected groups
- Spam: Unsolicited promotional content
- Profanity: Swear words and offensive language
Sample Request:
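Below is a minimal sketch of a text moderation request in Python using the `requests` library. The endpoint URL, authentication header, and field names (`input`, `model`) are illustrative assumptions and may differ from the actual Klyra API.

```python
import os
import requests

# Assumed endpoint and auth scheme -- check the Klyra API reference for the real values.
KLYRA_API_URL = "https://api.klyra.example/v1/moderate/text"
API_KEY = os.environ["KLYRA_API_KEY"]

payload = {
    "input": "Example text to check for harmful content.",  # text to analyze (assumed field name)
    "model": "klyra-text-v2",                                # optional explicit model version (assumed field name)
}

response = requests.post(
    KLYRA_API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=10,
)
response.raise_for_status()

# Assumed response shape: per-category confidence scores between 0.0 and 1.0.
print(response.json())
```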
Model Selection
Klyra automatically selects the appropriate moderation model based on the content type you submit. You can also explicitly specify which model version to use, as illustrated in the sketch after the table below.
Available Model Versions
| Model Name | Content Type | Description |
|---|---|---|
| klyra-text-v2 | Text | Latest text moderation model with improved multilingual support |
| klyra-image-v3 | Image | High-precision image moderation with 98.7% accuracy |
| klyra-audio-v1 | Audio | Audio transcription and moderation |
| klyra-video-v1 | Video | Frame-by-frame video analysis |
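To make the selection behavior concrete, the sketch below mirrors the table above as a plain Python mapping. It is a local illustration of how auto-selection relates to an explicit model version, not part of any Klyra SDK.

```python
# Default model per content type, mirroring the table above. The service performs
# this selection automatically when no model version is specified in the request.
DEFAULT_MODELS = {
    "text": "klyra-text-v2",
    "image": "klyra-image-v3",
    "audio": "klyra-audio-v1",
    "video": "klyra-video-v1",
}

def choose_model(content_type: str, explicit_model: str | None = None) -> str:
    """Return an explicitly requested model version, or the default for the content type."""
    if explicit_model is not None:
        return explicit_model
    return DEFAULT_MODELS[content_type]

# Auto-selection picks the default for the content type; an explicit version overrides it.
assert choose_model("text") == "klyra-text-v2"
assert choose_model("image", explicit_model="klyra-image-v3") == "klyra-image-v3"
```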
Confidence Scores
All moderation results include confidence scores between 0.0 and 1.0 for each category:
- 0.0: No detection of the category
- 1.0: Highest confidence that the content belongs to the category
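One common way to act on these scores is to flag any category whose score meets or exceeds a threshold chosen for your use case. The sketch below uses made-up scores and illustrative category keys, not an actual API response.

```python
# Illustrative moderation result -- the scores and key names are examples only.
scores = {
    "toxic": 0.92,
    "harassment": 0.40,
    "self_harm": 0.01,
    "sexual": 0.03,
    "violence": 0.12,
    "hate_speech": 0.08,
    "spam": 0.02,
    "profanity": 0.75,
}

THRESHOLD = 0.8  # example cutoff; tune per category and per use case

# Flag every category whose confidence meets or exceeds the threshold.
flagged = [category for category, score in scores.items() if score >= THRESHOLD]
print(flagged)  # ['toxic']
```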
Model Ethics & Bias Prevention
Klyra is committed to providing fair and unbiased moderation systems. Our models are:
- Trained on diverse, representative datasets
- Regularly audited for demographic biases
- Continuously refined based on customer feedback
- Transparent in confidence scoring and decision making