Run inference on a model

Run inference on any model hosted on the Hugging Face Hub. The request and response formats depend on the model's pipeline task (for example, text classification, text generation, or translation).
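As a minimal sketch of one common pattern, the hosted Inference API can be called with a plain HTTP POST using `requests`. The model ID, payload shape, and the `HF_TOKEN` environment variable below are illustrative assumptions; the exact payload depends on the model's task.

```python
# Minimal sketch: POST a task-specific payload to a Hub-hosted model.
# The model ID, payload, and HF_TOKEN env var are placeholder assumptions.
import os
import requests

API_URL = "https://api-inference.huggingface.co/models/{model_id}"


def query(model_id: str, payload: dict, token: str) -> dict:
    """Send a JSON payload to the hosted model and return the JSON reply."""
    headers = {"Authorization": f"Bearer {token}"}
    response = requests.post(
        API_URL.format(model_id=model_id), headers=headers, json=payload
    )
    response.raise_for_status()
    return response.json()


# Example usage for a text-classification model (requires a valid token):
# result = query(
#     "distilbert-base-uncased-finetuned-sst-2-english",
#     {"inputs": "I love this!"},
#     os.environ["HF_TOKEN"],
# )
```

Task-specific clients such as `huggingface_hub.InferenceClient` wrap this same pattern with typed helpers per pipeline task.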