Chatoyant - v0.10.0

    Interface GenDataOptions

    Options for one-shot structured data generation.

    interface GenDataOptions {
        provider?: ProviderId;
        timeout?: number;
        retries?: number;
        localBaseUrl?: string;
        localApiKey?: string;
        localTimeout?: number;
        temperature?: number;
        creativity?: CreativityLevel;
        maxTokens?: number;
        topP?: number;
        stop?: string | string[];
        frequencyPenalty?: number;
        presencePenalty?: number;
        reasoning?: ReasoningLevel;
        webSearch?: boolean;
        cache?: boolean;
        thinkingBudget?: number;
        extra?: Record<string, unknown>;
        system?: string;
        model?: string;
    }
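    As a quick orientation, the sketch below constructs a GenDataOptions value. The type aliases are re-declared locally (and abridged) so the snippet is self-contained; in real code they would be imported from the library, and the model id and system prompt shown are purely illustrative.

```typescript
// Abridged local re-declarations of the types documented on this page,
// so the snippet runs stand-alone. ProviderId's exact shape is assumed.
type ProviderId = string;
type CreativityLevel = 'precise' | 'balanced' | 'creative' | 'wild';
type ReasoningLevel = 'off' | 'low' | 'medium' | 'high';

interface GenDataOptions {
  provider?: ProviderId;
  model?: string;
  timeout?: number;
  retries?: number;
  temperature?: number;
  creativity?: CreativityLevel;
  reasoning?: ReasoningLevel;
  maxTokens?: number;
  system?: string;
}

// A typical one-shot configuration (values are illustrative).
const options: GenDataOptions = {
  model: 'some-model-id',   // hypothetical; any supported model id
  creativity: 'precise',    // deterministic output suits structured data
  maxTokens: 1024,
  retries: 3,               // matches the documented default
  system: 'Return only valid JSON.',
};
```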

    Properties

    provider?: ProviderId

    Override the provider (auto-detected from model by default).

    timeout?: number

    Request timeout in milliseconds.

    Default: 120000
    
    retries?: number

    Number of retries on failure.

    Default: 3
    
    localBaseUrl?: string

    Base URL for the local provider. Overrides the LOCAL_BASE_URL environment variable. Required when using provider: 'local' without LOCAL_BASE_URL set.

    Examples:

    'http://127.0.0.1:11434/v1'  // Ollama

    'http://127.0.0.1:8765/v1'   // mlx-lm / omlx
    
    localApiKey?: string

    API key for the local provider. Overrides the LOCAL_API_KEY environment variable. Defaults to "local" for servers that don't validate keys.

    localTimeout?: number

    Request timeout in milliseconds for the local provider. Defaults to 60000 ms. Increase for slow or large local models.
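    Putting the three local-provider fields together, a minimal sketch (the server URL comes from the localBaseUrl examples above; the timeout value is illustrative):

```typescript
// Local-provider configuration targeting an Ollama server.
// Field names match GenDataOptions; the object is left untyped here
// so the snippet runs stand-alone.
const localOptions = {
  provider: 'local',
  localBaseUrl: 'http://127.0.0.1:11434/v1', // Ollama (see examples above)
  localApiKey: 'local',     // the default; most local servers ignore the key
  localTimeout: 300_000,    // 5 minutes, for a slow or large local model
};
```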

    temperature?: number

    Sampling temperature (0-2). Lower values produce more deterministic output; higher values are more creative. When undefined, the provider default is used.

    Semantic presets are also available via the creativity option.

    creativity?: CreativityLevel

    Semantic creativity level. Alternative to raw temperature values.

    • 'precise': Temperature 0 (deterministic)
    • 'balanced': Temperature 0.7 (default)
    • 'creative': Temperature 1.0
    • 'wild': Temperature 1.5

    If both temperature and creativity are set, temperature takes precedence.
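    The precedence rule above can be sketched as a small resolver. The mapping table mirrors the documented presets; the helper itself is illustrative, not library code.

```typescript
// Mapping of the documented creativity presets to temperatures.
const CREATIVITY_TEMPS = {
  precise: 0,
  balanced: 0.7,
  creative: 1.0,
  wild: 1.5,
} as const;
type CreativityLevel = keyof typeof CREATIVITY_TEMPS;

// Resolve the effective temperature per the documented precedence:
// an explicit temperature always wins over a creativity preset.
function resolveTemperature(opts: {
  temperature?: number;
  creativity?: CreativityLevel;
}): number | undefined {
  if (opts.temperature !== undefined) return opts.temperature;
  if (opts.creativity !== undefined) return CREATIVITY_TEMPS[opts.creativity];
  return undefined; // fall through to the provider default
}
```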

    maxTokens?: number

    Maximum tokens to generate. undefined = provider default.

    topP?: number

    Top-p (nucleus) sampling. undefined = provider default.

    stop?: string | string[]

    Stop sequences. Generation halts when any of the given sequences is produced.

    frequencyPenalty?: number

    Frequency penalty (-2 to 2). Positive values discourage verbatim repetition.

    presencePenalty?: number

    Presence penalty (-2 to 2). Positive values encourage the model to introduce new topics.

    reasoning?: ReasoningLevel

    Unified reasoning level across providers. Maps automatically to provider-specific implementations:

    • 'off': No reasoning (OpenAI: none, Anthropic: no thinking, xAI: *-non-reasoning)
    • 'low': Light reasoning (OpenAI: low, Anthropic: 2048 tokens)
    • 'medium': Moderate reasoning (OpenAI: medium, Anthropic: 8192 tokens)
    • 'high': Deep reasoning (OpenAI: high, Anthropic: 32768 tokens, xAI: *-reasoning)

    Note: Not all models support reasoning. For unsupported models, this is ignored.
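    Under the stated mapping, the Anthropic thinking-token budgets can be tabulated as follows. This is a sketch of the documented values, not the library's internal code.

```typescript
type ReasoningLevel = 'off' | 'low' | 'medium' | 'high';

// Anthropic thinking-token budgets per reasoning level, as documented above.
// 'off' disables thinking entirely, represented here as a zero budget.
const ANTHROPIC_THINKING_TOKENS: Record<ReasoningLevel, number> = {
  off: 0,
  low: 2048,
  medium: 8192,
  high: 32768,
};
```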

    webSearch?: boolean

    Enable web search (xAI only). Ignored for other providers.

    cache?: boolean

    Enable prompt caching (Anthropic only). Marks the system prompt for caching.

    thinkingBudget?: number

    Thinking budget in tokens for local models that support it (e.g. Qwen3.5 via oMLX). When set, the model will produce reasoning/thinking content before the final answer. Thinking content is streamed separately via reasoningContent and does not mix with the visible response.

    Only applies to the local provider. Ignored for cloud providers.
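    For example, a sketch of enabling a thinking budget against a local server (the budget value is illustrative; the URL is taken from the mlx-lm / omlx example under localBaseUrl above):

```typescript
// Enabling a thinking budget for a local oMLX-style server.
const thinkingOptions = {
  provider: 'local',
  localBaseUrl: 'http://127.0.0.1:8765/v1', // mlx-lm / omlx (see localBaseUrl)
  thinkingBudget: 4096, // reasoning tokens, streamed separately as reasoningContent
};
```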

    extra?: Record<string, unknown>

    Arbitrary additional options passed to the provider. Use for bleeding-edge features not yet in the typed interface.

    system?: string

    System prompt.

    model?: string

    Model to use. The provider is auto-detected from this value unless provider is set explicitly.