# UnityPredict — Chatbot Interface

Embeddable Conversational UI for Any Compatible Model
## Overview
The UnityPredict Chatbot Interface is an auto-generated conversational UI that can be attached to any model that exposes at least one string input and one string output.
It provides a simple, intuitive chat-style experience for testing models and interacting with LLM-like engines — without writing any UI code.
Just like UnityPredict’s auto-generated Form Interface, the Chatbot UI lets users quickly experiment with their models.
But what makes the Chatbot Interface unique is that it can be embedded directly into any external website via an <iframe>.
This enables you to:
- Build your own AI chatbots, LLM wrappers, and custom engines on UnityPredict.
- Deploy them instantly with a ready-made UI.
- Embed the chatbot anywhere on the web with a single HTML snippet.
This eliminates the need for frontend development and dramatically reduces the time it takes to turn an AI model into an end-user–ready product.
## Key Features
### 💬 Conversational Interface
Natural chat-style interactions for any model with string inputs/outputs.

### 🧠 Optional Context Retention
Session-aware conversations allow for multi-turn dialog.

### 🌐 Embeddable Widget (iframe)
Copy/paste a snippet to place your chatbot directly on your website, portal, or product page — no coding required.

### ⚙️ Automatic UI Generation
UnityPredict handles message rendering, input fields, file uploads, and dynamic forms.

### 🔌 Works With Any Compatible Model
As long as the model exposes:
- one input: `string`
- one output: `string`

…it can be used with the Chatbot UI.

### 🗂️ Hybrid Workflow Support
Models can return not only text but also structured outputs (forms, option lists, files).

### 🪄 Custom System Instructions
Set behavior, tone, policy instructions, or conversation rules for your chatbot.
## When to Use the Chatbot Interface
Use the Chatbot Interface when:
- You want a full chat-like experience for interacting with your model.
- You need a zero-code UI for demonstrations, prototyping, or user testing.
- You want to publish a chatbot on your own website without building a frontend.
- Your model behaves like a conversational agent or expects free-form text.
If your model has many structured parameters, the Form Interface may be a better fit — but both can be used depending on your workflow.
## How to Create a Chatbot Model in UnityPredict
1. **Navigate to Models → Create Model**
   Click “Create Model” in your UnityPredict dashboard.
2. **Select “Chatbot” as the Interface Type**
   This tells UnityPredict to generate a chat-style UI for your model.
3. **Define Inputs and Outputs**
   At minimum, ensure you have:
   - `InputMessage` (string)
   - `OutputMessage` (string)

   You may optionally add:
   - `InputFile`
   - `DynamicFormResults`
   - `EmbedInputs`
   - `OutputFile`
   - `DynamicFormDefinitions`
4. **Attach an Engine**
   Choose an existing engine or create a new one for your model. For a minimal example, see Simple Echo Engine.
5. **Save & Deploy**
   Once deployed, your chatbot becomes immediately usable through UnityPredict.
6. **Embed or Share**
   Access it directly via the UnityPredict UI, or embed it into any website using an iframe.
## How the Chatbot Interface Works
- The user sends a message (InputMessage) or uploads files.
- UnityPredict forwards the input and session context to your engine.
- The engine runs your model and returns:
  - `OutputMessage` (required)
  - Optional: dynamic form fields, options, files, or other structured outputs.
- The Chatbot UI renders the model’s response and updates session state.
This supports:
- LLM-style conversation
- Step-by-step workflows
- Multi-turn data collection
- File-based processing
- Form-driven interactions
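The turn cycle above can be sketched with plain dictionaries. Note that `toy_engine` and the dictionary shapes below are simplified stand-ins for illustration, not the real `unitypredict_engines` classes; the point is only how the UI threads stored context from one call into the next.

```python
# Simplified sketch of the Chatbot UI <-> engine turn loop.
# "stored_meta" plays the role of request.Context.StoredMeta: the UI
# passes the previous response's context back in on the next turn.

def toy_engine(input_values: dict, stored_meta: dict) -> dict:
    """A stand-in engine: echoes the message and counts turns."""
    turn = int(stored_meta.get("turn", 0)) + 1
    message = input_values.get("InputMessage", "")
    return {
        "Outcomes": {"OutputMessage": f"[turn {turn}] you said: {message}"},
        "StoredMeta": {"turn": turn},
    }

# Two chat turns: the UI carries StoredMeta from one call to the next.
ctx = {}
for text in ["hello", "how are you?"]:
    resp = toy_engine({"InputMessage": text}, ctx)
    ctx = resp["StoredMeta"]
    print(resp["Outcomes"]["OutputMessage"])
# -> [turn 1] you said: hello
# -> [turn 2] you said: how are you?
```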
## Input Specification
| Name | Type | Description |
|---|---|---|
| InputMessage | String | Primary text input from the user. |
| InputFile | File | Optional file upload for processing. |
| DynamicFormResults | Dictionary | Data returned from previously generated dynamic forms. |
| EmbedInputs | Dictionary | Optional metadata/context passed through each model call. |
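Only `InputMessage` arrives on every turn; the other fields may be absent, so engines should read them defensively. A minimal validation helper (hypothetical, not part of the UnityPredict SDK) might look like:

```python
def read_chat_inputs(input_values: dict) -> dict:
    """Extract and sanity-check the Chatbot Interface input fields.

    Only InputMessage is expected on every turn; the rest are optional.
    """
    message = input_values.get("InputMessage", "")
    if not isinstance(message, str):
        raise ValueError("InputMessage must be a string")

    form_results = input_values.get("DynamicFormResults") or {}
    embed_inputs = input_values.get("EmbedInputs") or {}
    if not isinstance(form_results, dict) or not isinstance(embed_inputs, dict):
        raise ValueError("DynamicFormResults and EmbedInputs must be dictionaries")

    return {
        "message": message.strip(),
        "file": input_values.get("InputFile"),  # may be None
        "form_results": form_results,
        "embed_inputs": embed_inputs,
    }
```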
## Output Specification
| Name | Type | Description |
|---|---|---|
| OutputMessage | String | The model's main response (required). |
| DynamicFormDefinitions | Dictionary | Form fields for collecting additional user input. |
| Options | String | Provides selectable options for user actions. |
| AutoInvokeIn | Integer | Delay before auto-triggering another model call. |
| OutputFile | File | File generated by the model in the response. |
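Assembling a response then amounts to always setting `OutputMessage` and attaching the optional fields only when needed. The sketch below uses a plain dictionary in place of the platform's response object, and the exact shapes of `Options` and `AutoInvokeIn` values are assumptions based on the table above:

```python
def build_chat_response(message: str, options=None, auto_invoke_in=None) -> dict:
    """Assemble a Chatbot-style output payload.

    OutputMessage is the only required field; optional fields are
    included only when provided.
    """
    payload = {"OutputMessage": message}          # required on every turn
    if options is not None:
        payload["Options"] = options              # selectable user actions
    if auto_invoke_in is not None:
        payload["AutoInvokeIn"] = int(auto_invoke_in)  # delay before auto re-invoke
    return payload
```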
## Sample Engine: Simple Echo Engine
The Simple Echo Engine is a minimal reference implementation designed to demonstrate how the Chatbot Interface interacts with an engine.
It simply echoes the user’s input while preserving conversation context.
This makes it ideal for:
- First-time setup validation
- Testing Chatbot UI wiring
- Understanding request, response, and context flow
### Engine Behavior
- Accepts `InputMessage` from the Chatbot UI
- Appends each message to a running transcript
- Returns the full transcript in `OutputMessage`
- Maintains request count and timestamps
- Calculates basic inference cost
### Required Chatbot Inputs & Outputs
This engine is compatible with the Chatbot Interface because it exposes:
**Inputs**

- `InputMessage` (string)

**Outputs**

- `OutputMessage` (string)
No additional UI configuration is required.
### Simple Echo Engine – Sample Code
```python
import datetime

from unitypredict_engines.Platform import (
    ChainedInferenceRequest, ChainedInferenceResponse, FileReceivedObj,
    FileTransmissionObj, IPlatform, InferenceRequest, InferenceResponse, OutcomeValue
)


def run_engine(request: InferenceRequest, platform: IPlatform) -> InferenceResponse:
    platform.logMsg("Starting Simple Echo Engine with File Support...")
    response = InferenceResponse()

    # Inputs: validate before use, since InputValues may contain anything
    current_input = request.InputValues.get("InputMessage", "")
    if not isinstance(current_input, str):
        raise ValueError("Invalid input: InputMessage must be a string.")
    current_input = current_input.strip()
    if current_input and len(current_input) < 2:
        raise ValueError("Invalid input: InputMessage is too short.")

    # Timestamp for this turn
    current_timestamp = datetime.datetime.now().isoformat()

    # Context handling: restore session state from the previous turn
    context = request.Context.StoredMeta or {}
    request_count = int(context.get("request_count", 0)) + 1
    transcript = context.get("transcript", "")

    try:
        if current_input:
            transcript += f"\n[TIME: {current_timestamp}] Message: {current_input}"

        response.Outcomes["OutputMessage"] = [
            OutcomeValue(
                f"[Request #{request_count}] I received your input. "
                f"Transcript so far: {transcript}"
            )
        ]

        # Persist the full session state so the next turn can continue the chat
        context["transcript"] = transcript
        context["request_count"] = request_count
        context["last_request_time"] = current_timestamp
        response.Context.StoredMeta = context

        # --------------------------
        # Cost calculation
        # --------------------------
        # Approximate tokens from words (~1000 tokens per 750 words), then
        # apply per-million-token rates for input (0.50) and output (1.50).
        numWordsIn = len(current_input.split())
        numWordsOut = len(transcript.split())
        response.AdditionalInferenceCosts = (
            ((1000 / 750) * numWordsIn * (0.50 / 1000000))
            + ((1000 / 750) * numWordsOut * (1.50 / 1000000)) * 2
        )
    except Exception as e:
        response.ErrorMessages = f"Error processing input: {str(e)}"
        platform.logMsg(response.ErrorMessages)

    return response
```

## Embedding the Chatbot (iframe)
Embedding takes only a few seconds:
```html
<iframe
  src="https://your-unitypredict-url/chatbot/{modelId}?hostUrl={window.location.origin}&endChatCallback={true}&hideChatCallback={true}"
  width="100%"
  height="600"
  frameborder="0"
  allow="microphone"
></iframe>
```