# UnityPredict Python Package
To make it easier to invoke models on UnityPredict, you can use the unitypredict
pip package. The following guide provides instructions on how to use this package.
📚 Related: For detailed API documentation and advanced usage patterns, see the UnityPredict API documentation.
## Installation
```shell
pip install unitypredict
```
## Usage
### Initializing the UnityPredict Client
To initialize a UnityPredictClient instance, simply provide your API key.
```python
from unitypredict import UnityPredictClient

client = UnityPredictClient(apiKey="YourUnityPredictAPIKey")
```
### Running Inference
There are two ways to perform inference:

- **Synchronous**: a blocking call to the `Predict` function of the UnityPredict client. This is ideal for small models with fast response times.
- **Asynchronous**: a non-blocking call that invokes the model and immediately returns control to the caller with a `RequestId`, which can be used to check the status of the job.
#### Synchronous Inference
```python
response: UnityPredictResponse = client.Predict('TargetUnityPredictModelId', UnityPredictRequest(
    InputValues = {
        'Age': 20,
        'Chol': 207
    },
    DesiredOutcomes = ['Label'],
    OutputFolderPath = "path/to/desired/output/folder"))
```
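After a synchronous call, the predictions are read from the response's `Outcomes` map (the same layout the full example below iterates over). Here is a minimal sketch of that lookup using a plain dict in place of a real `UnityPredictResponse`; the sample label and probability are illustrative values, not real model output:

```python
# Hypothetical shape of response.Outcomes for the 'Label' outcome requested
# above: each outcome name maps to a list of entries carrying a value,
# a probability, and a dataType.
outcomes = {
    'Label': [
        {'value': 'Absence', 'probability': 0.87, 'dataType': 'String'},
    ]
}

# Iterate over every entry returned for the desired outcome.
for entry in outcomes.get('Label', []):
    print(f"Value: {entry['value']}, Probability: {entry['probability']}")
# prints: Value: Absence, Probability: 0.87
```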
#### Asynchronous Inference
```python
response: UnityPredictResponse = client.AsyncPredict('TargetUnityPredictModelId', UnityPredictRequest(
    InputValues = {
        'Input File': LocalFile("path/to/myFile.txt"),
        'Language': 'English'
    },
    DesiredOutcomes = ['OutputFile'],
    OutputFolderPath = "path/to/desired/output/folder"))

# Check the status of the job and download the results once it is Completed
response: UnityPredictResponse = client.GetRequestStatus(response.RequestId, outputFolderPath="path/to/desired/output/folder")
```
NOTE:
- To upload a local file for inference, wrap the file path in `LocalFile`, e.g. `LocalFile("path/to/myFile.txt")`.
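The `GetRequestStatus` check is typically run in a poll-with-backoff loop. Below is a minimal, package-agnostic sketch of that pattern: `get_status` is a hypothetical stand-in for the real `client.GetRequestStatus(...)` call (whose response's `Status` field you would inspect), and the `max_polls` cap is an added safeguard so a failed job cannot cause an infinite loop:

```python
import time

def poll_until_complete(get_status, max_polls=20, max_timeout=30.0):
    """Poll get_status() with exponential backoff until it returns 'Completed'.

    get_status is a stand-in for client.GetRequestStatus(...); max_polls
    bounds the loop so a failed or stuck job cannot poll forever.
    """
    timeout = 0.5
    for _ in range(max_polls):
        status = get_status()
        if status == 'Completed':
            return status
        time.sleep(timeout)
        # Double the wait each round, capped at max_timeout seconds.
        timeout = min(2 * timeout, max_timeout)
    raise TimeoutError("job did not complete within the polling budget")

# Stubbed usage: the job "completes" on the third poll.
statuses = iter(['Pending', 'Running', 'Completed'])
print(poll_until_complete(lambda: next(statuses)))  # prints Completed
```

The bounded loop and exponential backoff keep status checks cheap for short jobs while avoiding a tight request loop against the API for long-running ones.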
## Full Example
```python
import time, sys
from unitypredict import UnityPredictClient, UnityPredictRequest, UnityPredictResponse, UnityPredictFileTransmitDto, UnityPredictFileReceivedDto, LocalFile

client = UnityPredictClient(apiKey="YourUnityPredictAPIKey")

request = UnityPredictRequest(
    InputValues = {
        'Input File': LocalFile("path/to/myFile.txt"),
        'Language': 'English'
    },
    DesiredOutcomes = ['OutputFile'],
    OutputFolderPath = "path/to/desired/output/folder")

response: UnityPredictResponse = client.AsyncPredict('TargetUnityPredictModelId', request)

if response.Status is None:
    print(f"Error: {response.ErrorMessages}")
    sys.exit(1)

# NOTE: This is a sample only. DO NOT make loops like this (they can become infinite loops if the job fails)!
maxTimeout = 30  # seconds
timeout = 0.5  # seconds
while response.Status != 'Completed':
    # Check the status of the job and download the results once it is Completed
    response: UnityPredictResponse = client.GetRequestStatus(response.RequestId, outputFolderPath=request.OutputFolderPath)
    if response.Status == 'Completed':
        break
    print(f"Waiting for {timeout} seconds ...")
    time.sleep(timeout)
    timeout = min(2 * timeout, maxTimeout)

for key in request.DesiredOutcomes:
    for outcome in response.Outcomes.get(key, []):
        dataType = outcome.get("dataType", None)
        if dataType is None:
            continue
        if dataType == "File":
            fileDetails: UnityPredictFileReceivedDto = outcome.get("value", None)
            if fileDetails is None:
                continue
            print(f"File: {fileDetails.FileName}, Path: {fileDetails.LocalFilePath}")
        else:
            value = outcome.get("value", None)
            prob = outcome.get("probability", None)
            print(f"Value: {value}, Probability: {prob}")

print(f"Error: {response.ErrorMessages}")
print(f"Cost: ${response.ComputeCost}")
```