Chatoyant - v0.2.1

    Interface GenStreamOptions

    Options for one-shot streaming.

    interface GenStreamOptions {
        provider?: ProviderId;
        timeout?: number;
        retries?: number;
        temperature?: number;
        creativity?: CreativityLevel;
        maxTokens?: number;
        topP?: number;
        stop?: string | string[];
        frequencyPenalty?: number;
        presencePenalty?: number;
        reasoning?: ReasoningLevel;
        webSearch?: boolean;
        cache?: boolean;
        extra?: Record<string, unknown>;
        onDelta?: (delta: string) => void;
        onComplete?: (fullContent: string) => void;
        onError?: (error: Error) => void;
        system?: string;
        model?: string;
    }
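To make the shape concrete, here is a sketch of a plausible options object. The field names and defaults come from the interface documented on this page; the model string and parameter values are placeholders, not recommendations:

```typescript
// Collected deltas, for the onDelta callback below.
const chunks: string[] = [];

// Illustrative GenStreamOptions-shaped object (plain literal; the real
// interface also accepts provider, reasoning, cache, etc.).
const options = {
  model: 'gpt-4o',          // placeholder model id
  temperature: 0.7,
  maxTokens: 1024,
  stop: ['\n\n'],
  system: 'You are a concise assistant.',
  timeout: 120_000,         // documented default
  retries: 3,               // documented default
  onDelta: (delta: string) => { chunks.push(delta); },
};
```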

    Properties

    provider?: ProviderId

    Override the provider (auto-detected from model by default).

    timeout?: number

    Request timeout in milliseconds.

    Default: 120000
    
    retries?: number

    Number of retries on failure.

    Default: 3
    
    temperature?: number

    Sampling temperature (0-2). Lower = more deterministic, higher = more creative. undefined = provider default.

    Semantic presets are also available via the creativity option.

    creativity?: CreativityLevel

    Semantic creativity level. Alternative to raw temperature values.

    • 'precise': Temperature 0 (deterministic)
    • 'balanced': Temperature 0.7 (default)
    • 'creative': Temperature 1.0
    • 'wild': Temperature 1.5

    If both temperature and creativity are set, temperature takes precedence.
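The preset-to-temperature mapping and the precedence rule above can be sketched as follows. The helper name is illustrative, not part of the library's public API:

```typescript
type CreativityLevel = 'precise' | 'balanced' | 'creative' | 'wild';

// Temperatures as documented for each semantic preset.
const CREATIVITY_TEMPERATURES: Record<CreativityLevel, number> = {
  precise: 0,
  balanced: 0.7,
  creative: 1.0,
  wild: 1.5,
};

// If both temperature and creativity are set, temperature wins;
// undefined means "use the provider default".
function resolveTemperature(opts: {
  temperature?: number;
  creativity?: CreativityLevel;
}): number | undefined {
  if (opts.temperature !== undefined) return opts.temperature;
  if (opts.creativity !== undefined) return CREATIVITY_TEMPERATURES[opts.creativity];
  return undefined;
}
```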

    maxTokens?: number

    Maximum tokens to generate. undefined = provider default.

    topP?: number

    Top-p (nucleus) sampling. undefined = provider default.

    stop?: string | string[]

    Stop sequences.

    frequencyPenalty?: number

    Frequency penalty (-2 to 2).

    presencePenalty?: number

    Presence penalty (-2 to 2).

    reasoning?: ReasoningLevel

    Unified reasoning level across providers. Maps automatically to provider-specific implementations:

    • 'off': No reasoning (OpenAI: none, Anthropic: no thinking, xAI: *-non-reasoning)
    • 'low': Light reasoning (OpenAI: low, Anthropic: 2048 tokens)
    • 'medium': Moderate reasoning (OpenAI: medium, Anthropic: 8192 tokens)
    • 'high': Deep reasoning (OpenAI: high, Anthropic: 32768 tokens, xAI: *-reasoning)

    Note: Not all models support reasoning. For unsupported models, this is ignored.
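The Anthropic side of the mapping above (levels to thinking-token budgets) can be expressed as a small lookup. This is an illustrative helper based on the documented numbers, not the library's internal implementation:

```typescript
type ReasoningLevel = 'off' | 'low' | 'medium' | 'high';

// Anthropic thinking-token budgets per documented reasoning level;
// null means no thinking block is requested.
const ANTHROPIC_THINKING_BUDGET: Record<ReasoningLevel, number | null> = {
  off: null,
  low: 2048,
  medium: 8192,
  high: 32768,
};

function anthropicBudget(level: ReasoningLevel): number | null {
  return ANTHROPIC_THINKING_BUDGET[level];
}
```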

    webSearch?: boolean

    Enable web search (xAI only). Ignored for other providers.

    cache?: boolean

    Enable prompt caching (Anthropic only). Marks the system prompt for caching.

    extra?: Record<string, unknown>

    Arbitrary additional options passed to the provider. Use for bleeding-edge features not yet in the typed interface.

    onDelta?: (delta: string) => void

    Callback for each content delta.

    onComplete?: (fullContent: string) => void

    Callback when streaming completes.

    onError?: (error: Error) => void

    Callback on error during streaming.
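A typical way to wire the three callbacks is to accumulate deltas and act on completion. The stream loop below is a simulation standing in for the library's actual streaming call, whose entry-point name is not shown on this page:

```typescript
// Minimal local type covering just the callback fields of GenStreamOptions.
interface StreamCallbacks {
  onDelta?: (delta: string) => void;
  onComplete?: (fullContent: string) => void;
  onError?: (error: Error) => void;
}

const chunks: string[] = [];
let finalContent = '';

const callbacks: StreamCallbacks = {
  onDelta: (delta) => chunks.push(delta),
  onComplete: (full) => { finalContent = full; },
  onError: (err) => console.error('stream failed:', err.message),
};

// Simulated stream: two deltas, then completion with the full content.
for (const delta of ['Hello, ', 'world']) callbacks.onDelta?.(delta);
callbacks.onComplete?.(chunks.join(''));
```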

    system?: string

    System prompt.

    model?: string

    Model to use. Determines the provider unless provider is set explicitly.