# FNSA-MLServer
This README explains how to deploy and query a financial-news sentiment analysis model using SeldonIO MLServer with a pre-trained Hugging Face model. It covers setting up the server with Docker Compose and sending inference requests with `curl`.
## Prerequisites

- Docker
- Docker Compose
## Setup

Run the server:

```bash
docker-compose up
```
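For reference, the Compose service will typically look something like the sketch below. The image tag, mounted paths, and port mappings are assumptions for illustration; check this repository's actual `docker-compose.yml`:

```yaml
# docker-compose.yml (illustrative sketch; adjust to the repo's real file)
services:
  mlserver:
    image: seldonio/mlserver:1.3.5-huggingface  # MLServer image bundling the Hugging Face runtime
    command: mlserver start /models             # serve every model found under /models
    ports:
      - "8080:8080"   # HTTP (V2 inference protocol)
      - "8081:8081"   # gRPC
    volumes:
      - ./models:/models
```

MLServer's Hugging Face runtime is configured through a `model-settings.json` file in the model's folder. A minimal sketch, assuming the repo uses the standard `mlserver-huggingface` runtime (the placeholder must be replaced with the actual pre-trained model ID):

```json
{
  "name": "financial-news-sentiment",
  "implementation": "mlserver_huggingface.HuggingFaceRuntime",
  "parameters": {
    "extra": {
      "task": "text-classification",
      "pretrained_model": "<huggingface-model-id>"
    }
  }
}
```

The `name` field must match the model name used in the inference URL (`/v2/models/financial-news-sentiment/infer`).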
## Querying the Model
You can send an inference request to the model with `curl`. For example:

```bash
curl -X POST http://localhost:8080/v2/models/financial-news-sentiment/infer \
  -H "Content-Type: application/json" \
  -d '{
    "inputs": [
      {
        "name": "input",
        "shape": [1],
        "datatype": "BYTES",
        "data": ["The company'\''s stock price surged after the positive earnings report."]
      }
    ]
  }'
```
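If the request fails, you can first confirm that the server and model are up using the V2 inference protocol's standard readiness endpoints:

```bash
# Server-level readiness
curl http://localhost:8080/v2/health/ready

# Model-level readiness; returns 200 once the model has loaded
curl http://localhost:8080/v2/models/financial-news-sentiment/ready
```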
## Expected Response
The server will respond with the sentiment analysis results. The expected response format is:
{ "model_name": "financial-news-sentiment", "id": "1e993fa8-10d8-4ce3-a012-5c79ab8b127b", "parameters": {}, "outputs": [ { "name": "output", "shape": [ 1 ], "datatype": "BYTES", "data": [ { "label": "positive", "score": 0.9996684789657593 } ] } ]}
This indicates that the input text was classified as "positive" with a confidence score of roughly 0.9997.
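Since the V2 protocol encodes the batch dimension in `shape`, you can score several texts in one request by sending multiple elements. A sketch, assuming the runtime accepts batched BYTES inputs (the Hugging Face runtime generally does):

```bash
curl -X POST http://localhost:8080/v2/models/financial-news-sentiment/infer \
  -H "Content-Type: application/json" \
  -d '{
    "inputs": [
      {
        "name": "input",
        "shape": [2],
        "datatype": "BYTES",
        "data": [
          "Quarterly revenue fell short of analyst expectations.",
          "The merger is expected to create significant shareholder value."
        ]
      }
    ]
  }'
```

The response then carries one `{label, score}` object per input text in `outputs[0].data`.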