API Documentation
General information related to our API and how it's used.
Prediction is the Goal
For most of our users and developers, prediction (inference) is the goal. To that end, we provide a single, straightforward API call for making any inference. The API key you generate within your application gives us everything we need to locate your model and return a result from its deployment.
For more information about how to use the endpoint, check out the FAQs and the Swagger link.
Inference Endpoint
The inference endpoint is where all automated predictions are generated. Its purpose is to route your prediction request to the machine learning model hosted for it.
You access the inference endpoint by POSTing your payload to https://api.stream.ml/api/inference. Once the payload is successfully received, a prediction is returned.
A typical POST contains a JSON payload with a token field where you place your API key and a data field where you put a base64-encoded payload containing your RGB image, multispectral image, or spectrometer scan.
{
  "token": "string",
  "data": "string"
}
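A minimal sketch of assembling that payload in Python, using only the standard library. The function name `build_payload` and the example file path are illustrative, not part of the API:

```python
import base64
import json

def build_payload(api_key: str, raw_bytes: bytes) -> str:
    """Assemble the JSON payload for the inference endpoint.

    `raw_bytes` is the raw content of an RGB image, multispectral
    image, or spectrometer scan; it is base64-encoded into `data`.
    """
    return json.dumps({
        "token": api_key,
        "data": base64.b64encode(raw_bytes).decode("ascii"),
    })

# Example: encode a local file (the path is hypothetical).
# with open("scan.png", "rb") as f:
#     payload = build_payload("YOUR_API_KEY", f.read())
```

Base64-encoding the data field keeps the payload valid JSON regardless of the binary format your sensor produces.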
A typical response is similar to the one below: a JSON payload containing each class label and its certainty.
{
  "predictions": [
    { "label": "Asphalt", "prediction": 0.0556697 },
    { "label": "Concrete", "prediction": 0.944316 },
    { "label": "Light Post", "prediction": 0.000011871 },
    { "label": "Post", "prediction": 0.00000270726 },
    { "label": "Spruce", "prediction": 3.1028e-15 }
  ]
}
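Putting the two halves together, here is a hedged sketch of sending the POST and picking the most likely class from the response. The endpoint URL comes from the documentation above; the function names and the use of Python's stdlib urllib are assumptions for illustration:

```python
import json
import urllib.request

API_URL = "https://api.stream.ml/api/inference"

def run_inference(payload: str) -> dict:
    """POST a JSON payload (token + base64 data) and return the parsed response."""
    req = urllib.request.Request(
        API_URL,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def top_prediction(response: dict) -> tuple:
    """Return the (label, certainty) pair with the highest certainty."""
    best = max(response["predictions"], key=lambda p: p["prediction"])
    return best["label"], best["prediction"]
```

Applied to the sample response above, `top_prediction` would select "Concrete", the class with the highest certainty.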
Direct Hardware Communications
Stream.ML began its journey as a platform for helping people analyze spectral reflectance, and to that end it supports direct hardware uploads from spectrometers and multispectral imagers. If you're analyzing data from these more complicated devices, check out our hardware interaction API.
Hardware Interaction Endpoint
The hardware interaction endpoint allows hardware to send its data directly to our services. That data can then be used for training or as prediction sets. Unlike the inference endpoint, all data sent to this endpoint is stored on the platform.
You can access the hardware interaction endpoint in many ways. Please check out our Swagger page to find out more.