Generate text (streaming)

Generates the next tokens for a given deployed model using the supplied parameters, streaming results back incrementally as they are produced rather than waiting for the full completion.
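Streaming endpoints of this kind commonly deliver results as Server-Sent Events, where each event line carries a JSON chunk with a token fragment and a sentinel marks the end of the stream. The sketch below shows one way a client might consume such a stream; the `data:` framing, the `[DONE]` sentinel, and the `{"choices": [{"text": ...}]}` chunk shape are illustrative assumptions, not the documented wire format of this API.

```python
import json

def iter_tokens(sse_lines):
    """Yield the text of each token chunk from an iterator of SSE lines.

    Assumes (hypothetically) that each event is a line of the form
    `data: <JSON>` and that the stream terminates with `data: [DONE]`.
    """
    for raw in sse_lines:
        line = raw.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines and SSE comments
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break  # assumed end-of-stream sentinel
        chunk = json.loads(payload)
        # Assumed chunk schema: {"choices": [{"text": "..."}]}
        yield chunk["choices"][0]["text"]

# Offline demonstration against a canned stream:
sample = [
    'data: {"choices": [{"text": "Hello"}]}',
    '',
    'data: {"choices": [{"text": ", world"}]}',
    'data: [DONE]',
]
print("".join(iter_tokens(sample)))  # → Hello, world
```

In a real client you would feed `iter_tokens` the lines of a live HTTP response, for example `requests.post(url, json=params, stream=True)` followed by `response.iter_lines(decode_unicode=True)`, printing each token as it arrives instead of joining them at the end.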