Invoke a model with response streaming

Invokes the specified Amazon Bedrock model to run inference using the prompt and inference parameters provided in the request body. The response is streamed back in real time as a series of chunks, allowing the client to begin processing results before the full response has been generated.
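A minimal sketch of calling this operation with boto3, the AWS SDK for Python. The `invoke_model_with_response_stream` call and the shape of the returned event stream (`chunk` events carrying a `bytes` payload) come from the boto3 API; the model ID and the request/response body format shown here assume an Anthropic Claude model on Bedrock and are illustrative, not prescribed by this operation — each model family defines its own body schema.

```python
import json


def parse_chunk(event):
    """Extract text from one streaming event.

    Assumes the Anthropic Claude Messages response format, where
    incremental text arrives in "content_block_delta" payloads.
    """
    payload = json.loads(event["chunk"]["bytes"])
    if payload.get("type") == "content_block_delta":
        return payload["delta"].get("text", "")
    return ""


def stream_completion(prompt, model_id="anthropic.claude-3-haiku-20240307-v1:0"):
    """Invoke the model and print tokens as they arrive."""
    import boto3  # requires AWS credentials and Bedrock model access

    client = boto3.client("bedrock-runtime")
    body = json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 256,
        "messages": [{"role": "user", "content": prompt}],
    })
    response = client.invoke_model_with_response_stream(
        modelId=model_id,
        body=body,
    )
    # response["body"] is an event stream; each event holds one chunk.
    for event in response["body"]:
        text = parse_chunk(event)
        if text:
            print(text, end="", flush=True)
    print()


if __name__ == "__main__":
    stream_completion("Explain response streaming in one sentence.")
```

Because chunks are printed as they arrive, a long generation shows output immediately instead of after the full response completes; the same loop could instead append chunks to a buffer or forward them over a websocket.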