ToxBlock - A professional profanity detection module using Gemini AI

Provides comprehensive text analysis for detecting profanity, toxic content, hate speech, and inappropriate language across multiple languages.

ToxBlock

Example

// Basic usage
const toxBlock = new ToxBlock({ apiKey: 'your-gemini-api-key' });
const result = await toxBlock.checkText('Hello world');
console.log(result.isProfane); // false
console.log(result.confidence); // 0.95

// Batch processing
const results = await toxBlock.checkTexts(['Hello', 'Bad word']);
results.forEach((result, index) => {
  console.log(`Text ${index}: ${result.isProfane ? 'Toxic' : 'Clean'}`);
});

// Custom configuration
const customToxBlock = new ToxBlock({
  apiKey: 'your-api-key',
  model: 'gemini-2.0-flash-001',
  timeout: 15000,
  customPrompt: 'Analyze this text: {TEXT}'
});
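
The configuration and result types referenced below are not defined on this page; the following is a minimal sketch inferred from the examples above (the interface names and the optional markers are assumptions):

interface ToxBlockConfig {
  apiKey: string;        // Gemini API key (required in every example above)
  model?: string;        // e.g. 'gemini-2.0-flash-001'
  timeout?: number;      // request timeout in milliseconds
  customPrompt?: string; // prompt template; '{TEXT}' presumably marks where the input is inserted
}

interface ToxBlockResult {
  isProfane: boolean;    // true when profanity or toxic content is detected
  confidence: number;    // detection confidence, e.g. 0.95
}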

Constructors

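  • new ToxBlock(config)

    Creates a ToxBlock instance. The constructor signature is not reproduced on this page; based on the usage examples above, config presumably requires apiKey and optionally accepts model, timeout, and customPrompt (see the ToxBlockConfig sketch above).
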
Methods

  • checkText - Checks whether the provided text contains profanity or toxic content

    Parameters

    • text: string

      The text to analyze

    Returns Promise<ToxBlockResult>

    A promise resolving to the detection result

    Throws

    When analysis fails

    Example

    const result = await toxBlock.checkText('This is a test');
    if (result.isProfane) {
      console.log('Profanity detected!');
    }
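
    Since checkText can throw when analysis fails (see Throws above), callers will typically wrap it; a minimal sketch, catching a plain Error because the thrown error type is not documented on this page:

    try {
      const result = await toxBlock.checkText('some user input');
      console.log(result.isProfane ? 'Blocked' : 'Allowed');
    } catch (error) {
      // Analysis failed; decide whether to fail open or closed for your use case
      console.error('ToxBlock analysis failed:', error);
    }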
  • checkTexts - Checks multiple texts in a batch

    Parameters

    • texts: string[]

      Array of texts to analyze

    Returns Promise<ToxBlockResult[]>

    A promise resolving to an array of detection results

    Throws

    When batch analysis fails
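
    Example

    A batch sketch pairing each result with its input; like the forEach example at the top of this page, it assumes results are returned in input order:

    const inputs = ['Hello there', 'Some questionable text'];
    const results = await toxBlock.checkTexts(inputs);
    results.forEach((result, i) => {
      console.log(`${inputs[i]}: ${result.isProfane ? 'Toxic' : 'Clean'} (confidence: ${result.confidence})`);
    });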

  • Gets the current configuration

    Returns {
        model: string;
        timeout: number;
    }

    Current model and timeout settings

    • model: string
    • timeout: number
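
    Example

    A short usage sketch; the accessor name getConfig is an assumption (only the description and return type appear above), and the values shown are the ones from the custom configuration example:

    const { model, timeout } = customToxBlock.getConfig();
    console.log(model);   // 'gemini-2.0-flash-001'
    console.log(timeout); // 15000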