Streaming support for Open Source Models

Hi everyone,

I’m working with ADK streaming agents using Gemini Live models successfully, but I need to enable streaming for open-source models beyond Gemini.

Issue: I can run open-source models for general inference, but can’t configure them to interface with ADK in streaming mode.

Looking for:

  • Documentation on streaming interfaces for open-source LLMs in ADK

  • Configuration examples

  • Any community experiences

Haven’t found any resources on this specific integration. Has anyone done this successfully?

Thanks!

@robinrob55 Good question

I tried with the NVIDIA-hosted model nvidia_nim/nvidia/llama-3.1-nemotron-nano-8b-v1

Voice streaming is not working due to a WebSocket connection error, but SSE streaming works for other models as well
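For anyone looking for a starting point, here is a minimal sketch of how an open-source model can be wired into ADK with SSE streaming via the LiteLlm wrapper. The agent name and instruction are illustrative, and I haven't verified this against the NVIDIA endpoint specifically, so treat it as an assumption-laden sketch rather than a tested setup:

```python
# Sketch: an ADK agent backed by an open-source model through LiteLLM,
# with SSE streaming requested via RunConfig. Names are illustrative.
from google.adk.agents import Agent
from google.adk.agents.run_config import RunConfig, StreamingMode
from google.adk.models.lite_llm import LiteLlm

agent = Agent(
    name="oss_streaming_agent",  # hypothetical name
    # Any LiteLLM-compatible provider/model string should work here
    model=LiteLlm(model="nvidia_nim/nvidia/llama-3.1-nemotron-nano-8b-v1"),
    instruction="You are a helpful assistant.",
)

# SSE streaming is configured per run; BIDI (voice/video) streaming is the
# mode that currently fails outside the Gemini Live API.
run_config = RunConfig(streaming_mode=StreamingMode.SSE)
```

The run_config is then passed to the Runner when executing the agent; provider credentials (e.g. an NVIDIA API key in the environment) are still required for actual inference.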


Thanks for sharing your experience. My focus is on multimodal support (including video and audio), but I think implementing that with ADK is complicated for now outside of the Live API.