
useAiStream Streaming Request Engine 🌊

useAiStream is a low-level engine hook designed for AI streaming output, featuring:

  • 🏭 Multi-vendor Adapters (OpenAI / DeepSeek / Wenxin / Tongyi)
  • 🖋️ Typewriter Throttling (Based on requestAnimationFrame)
  • 🛑 AbortController Support
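The frame-based throttling can be sketched with a hypothetical helper (not the library's actual code): each animation frame reveals at most `charsPerFrame` more characters of whatever text has been buffered so far.

```typescript
// Hypothetical sketch of per-frame throttling: each tick reveals at most
// `charsPerFrame` additional characters of the buffered text.
function nextFrameSlice(buffer: string, shownLength: number, charsPerFrame: number): string {
  return buffer.slice(0, Math.min(buffer.length, shownLength + charsPerFrame))
}

// In the browser this would be driven by requestAnimationFrame:
// on every frame, shown = nextFrameSlice(buffer, shown.length, charsPerFrame)
```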

🖊️ Typewriter Effect Demo (Local AsyncGenerator)

Simulate the rhythm of real AI output with an AsyncGenerator, so you can try the typewriter experience without any network requests or API keys.

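A minimal local source is just a plain AsyncGenerator. The tokens and delay below are made up for illustration, and the commented-out wiring assumes `plainTextParser` appends yielded strings as-is:

```typescript
// A local AsyncGenerator that imitates token-by-token AI output —
// no network request and no API key involved.
async function* mockAnswer(_query: string): AsyncGenerator<string> {
  const tokens = [
    'The Composition API ',
    'improves logic reuse, ',
    'type inference, ',
    'and code organization.'
  ]
  for (const token of tokens) {
    await new Promise((r) => setTimeout(r, 10)) // fake model latency
    yield token
  }
}

// Hypothetical wiring into the hook:
// const { fetchStream, currentContent } = useAiStream({
//   parser: plainTextParser,
//   request: (query) => mockAnswer(query)
// })
// fetchStream('What are the core advantages of Vue3 Composition API?')
```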

🌐 Connecting Real SSE APIs

Replace the request adapter with a real API call and pass in your key to enable production-grade streaming conversations.

📋 Replace YOUR_API_KEY with your SiliconFlow / OpenAI / DeepSeek-compatible key to get real streaming output.

👉 Supports: OpenAI · DeepSeek · SiliconFlow · iFlytek · Moonshot · MiniMax, and any other OpenAI-compatible SSE endpoint.

API

Options

| Param | Type | Default | Description |
| --- | --- | --- | --- |
| `request` | `(query, signal, ...args) => AsyncGenerator \| Promise<Response>` | Required | Request adapter; receives an `AbortSignal` |
| `parser` | `StreamChunkParser` | `plainTextParser` | Stream chunk parser (multi-vendor) |
| `typewriter` | `boolean` | `true` | Enable the typewriter effect |
| `charsPerFrame` | `number` | `3` | Characters output per frame (speed control) |
| `onUpdate` | `(chunk, fullContent) => void` | - | Incremental update callback |
| `onFinish` | `(content) => void` | - | Completion callback with the final content |
| `onError` | `(err) => void` | - | Error callback |

Returns

| Field | Type | Description |
| --- | --- | --- |
| `isStreaming` | `Ref<boolean>` | Whether streaming is in progress |
| `currentContent` | `Ref<string>` | Full content received so far |
| `fetchStream` | `(query, ...args) => Promise<void>` | Trigger the request |
| `stop` | `() => void` | Abort the in-flight request |
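`stop()` works by aborting the in-flight request through the `AbortSignal` handed to your adapter. A minimal sketch of that pattern (an assumption about the mechanism, not the library source):

```typescript
// Sketch: tie a fetch-like task to an AbortController so it can be cancelled.
function createStreamController() {
  let controller: AbortController | null = null
  return {
    start(run: (signal: AbortSignal) => Promise<void>) {
      controller = new AbortController()
      return run(controller.signal)
    },
    stop() {
      controller?.abort() // fires 'abort' on the signal; a pending fetch() rejects with AbortError
      controller = null
    }
  }
}
```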

Built-in Parsers

| Parser | Vendor | Description |
| --- | --- | --- |
| `openaiParser` | OpenAI, DeepSeek, SiliconFlow, iFlytek | Compatible with the OpenAI SSE format |
| `ernieParser` | Baidu Wenxin | Compatible with the Wenxin SSE format |
| `qwenParser` | Alibaba Tongyi (direct API) | Compatible with the Tongyi SSE format |
| `plainTextParser` | Raw AsyncGenerator | Yielded strings are appended directly |
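Conceptually, an OpenAI-style parser maps each raw SSE `data:` line to the text delta it carries, skipping keep-alives and the `[DONE]` marker. The function below is a hypothetical illustration of that idea, not the library's `openaiParser` source (whose exact `StreamChunkParser` signature is not shown here):

```typescript
// Hypothetical illustration of OpenAI-style SSE parsing: extract the text
// delta from a `data: {...}` line; keep-alives and [DONE] yield nothing.
function parseOpenAiLine(line: string): string {
  const payload = line.replace(/^data:\s*/, '').trim()
  if (!payload || payload === '[DONE]') return ''
  try {
    const json = JSON.parse(payload)
    return json.choices?.[0]?.delta?.content ?? ''
  } catch {
    return '' // malformed or partial chunk — skip it
  }
}
```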

Connecting Other AI Platforms

```ts
import { useAiStream, openaiParser } from '@yh-ui/hooks'

// ── SiliconFlow ──
const { fetchStream } = useAiStream({
  parser: openaiParser,
  request: async (query, signal) =>
    fetch('https://api.siliconflow.cn/v1/chat/completions', {
      method: 'POST',
      signal,
      headers: {
        Authorization: 'Bearer YOUR_SF_KEY',
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({
        model: 'Qwen/Qwen2.5-7B-Instruct',
        stream: true,
        messages: [{ role: 'user', content: query }]
      })
    })
})

// ── DeepSeek ──
const { fetchStream: deepseekStream } = useAiStream({
  parser: openaiParser,
  request: async (query, signal) =>
    fetch('https://api.deepseek.com/chat/completions', {
      method: 'POST',
      signal,
      headers: {
        Authorization: 'Bearer YOUR_DEEPSEEK_KEY',
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({
        model: 'deepseek-chat',
        stream: true,
        messages: [{ role: 'user', content: query }]
      })
    })
})
```

Released under the MIT License.