Result of a profanity detection analysis.

ToxBlockResult

Example

```ts
const result: ToxBlockResult = {
  isProfane: false,
  confidence: 0.95,
  language: 'en',
  details: 'Clean content detected'
};
```

Declaration

```ts
interface ToxBlockResult {
  isProfane: boolean;
  confidence: number;
  language?: string;
  details?: string;
}
```

Properties

isProfane: boolean

Whether the text contains profanity, toxic content, or inappropriate language

confidence: number

Confidence score between 0 and 1; higher values indicate greater confidence in the result

language?: string

Detected language code (e.g., 'en', 'es', 'fr'), or 'unknown' if the language could not be determined

details?: string

Additional details about the detection or error information
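A consumer will typically act on both `isProfane` and `confidence` together. The sketch below is illustrative only: the `shouldBlock` helper and the 0.8 threshold are assumptions for this example, not part of the library's API.

```typescript
interface ToxBlockResult {
  isProfane: boolean;
  confidence: number;
  language?: string;
  details?: string;
}

// Hypothetical helper: block content only when the detector flags it
// AND the confidence meets a threshold (0.8 chosen for illustration).
function shouldBlock(result: ToxBlockResult, threshold: number = 0.8): boolean {
  return result.isProfane && result.confidence >= threshold;
}

const clean: ToxBlockResult = {
  isProfane: false,
  confidence: 0.95,
  language: 'en',
  details: 'Clean content detected'
};

const toxic: ToxBlockResult = {
  isProfane: true,
  confidence: 0.91,
  language: 'en'
};

console.log(shouldBlock(clean)); // false: not flagged, confidence is irrelevant
console.log(shouldBlock(toxic)); // true: flagged and above the threshold
```

Keeping the flag and the confidence score separate lets callers tune how aggressively they filter: a stricter moderation policy can lower the threshold, while a lenient one can raise it or route low-confidence hits to human review.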