Pricing - Building, Deploying, and Buying
Our pricing is designed to encourage everyone to use the Stream.ML platform and to ensure users get the best possible value. Every model in our marketplace can be used for free, so you can confirm it is right for you. Once you determine the model works for you and you’re ready to use it in production, you can immediately scale to meet your needs.
Training a Model
Each month, every account receives 2 hours of processing time at no cost. This time can be used to train models and is often enough to train multiple models without incurring any cost whatsoever.
Should more time be required to complete training, additional time can be purchased either by the hour, as an overage charge, or in preset blocks of 15, 45, or 135 hours.
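As a rough sketch of how the monthly allowance works: since hourly and block prices are not listed here, the helpers below only compute the billable overage and the smallest preset block that covers it (both helper names are illustrative, not part of the platform).

```python
FREE_HOURS = 2          # processing hours included with every account each month
BLOCKS = (15, 45, 135)  # preset block sizes available for purchase, in hours

def billable_hours(hours_used: float) -> float:
    """Training hours beyond the free monthly allowance."""
    return max(0.0, hours_used - FREE_HOURS)

def smallest_block(extra_hours: float):
    """Smallest preset block covering the overage, or None if none is large enough."""
    for block in BLOCKS:
        if block >= extra_hours:
            return block
    return None
```

For example, 10 hours of training leaves 8 billable hours, which the 15-hour block would cover.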
Storing Data
To build a model, the data files used as the training data set need to be stored in your account. For this purpose, 10 GB of storage is included with every account at no cost. Storage in excess of the included 10 GB costs $0.48/GB monthly.
Should more storage space be required for very large training data sets, additional storage plans are available. The amount of storage needed will depend on the amount and type of data in the training set. See the table below.

Storage Plan    Monthly Price    Overage Charge
100 GB          $36              $0.36/GB
1,000 GB        $240             $0.24/GB
10,000 GB       $1,200           $0.12/GB
Deploying a Model in the Cloud
The fastest and easiest way to use models in development and production is our cloud-based deployment. In seconds you can deploy a model to test, or deploy a model for production workloads.
DEVELOPMENT
- 1 core
- Up to 100,000 inferences
- $0.0013/inference
- No Up-time Guarantee
- No Throughput Guarantee
SMALL
- 2 cores
- > 2,000,000 inferences
- $0.0013/inference
- 99% Up-time Guarantee
- Throughput Guarantee
MEDIUM
- 3 cores
- > 3,000,000 inferences
- $0.0013/inference
- 99.9% Up-time Guarantee
- Throughput Guarantee
LARGE
- 4 cores
- > 4,000,000 inferences
- $0.0013/inference
- 99.99% Up-time Guarantee
- Throughput Guarantee
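Since every cloud tier lists the same $0.0013 per-inference rate (the tiers differ in cores, volume, and guarantees), estimating monthly inference spend is a single multiplication; the helper name below is illustrative.

```python
INFERENCE_RATE = 0.0013  # $/inference, the listed rate for every cloud tier

def monthly_inference_cost(inferences: int) -> float:
    """Estimated monthly cloud-deployment cost from inference volume alone."""
    return inferences * INFERENCE_RATE
```

For example, 100,000 inferences in a month would cost about $130.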
Deploying a Model On-Premise
Whether you need extra control, faster response times, or just don’t have a consistent internet connection, an on-premise installation may make more sense.
Deploying a Model on the Edge
Edge devices are devices that make inferences using only their own data, for example, a small low-power edge computer such as a Raspberry Pi or an NVIDIA Jetson.