The AI action allows you to securely and privately run a large language model (LLM) at any point in your workflow.
AI action usage is based on a credits system: all tenants include an allowance of monthly credits, and each execution of the action deducts some credits. See more detail on credits and executions.
Learn more about how AI works in Tines at Tines Explained.
Features
Invoke an LLM on-demand in your workflow.
Choose from a variety of language models (Claude 3 Haiku, Claude 3 Sonnet, Llama 3).
AI models run inside Tines's infrastructure, with strong security and privacy guarantees.
Include image data to take advantage of Claude 3's vision capabilities.
Adjust AI response temperature.
Configuration options
prompt
: A request to make of the model. Pass input data, instructions, examples, etc.

model
: Pick which large language model to use. Optional; defaults to Claude 3 Haiku.

image
: Image content to include when invoking the model. Must be the Base64-encoded content of the image, or an array of contents when including multiple images. Supported on Claude 3 models only.

temperature
: Controls the creativity and randomness of the AI response. Lower values produce more predictable output; higher values produce more creative output. Range: 0 to 1. Default: 0.2.

json_mode
: If enabled, the LLM is prompted to return JSON-formatted data only. This is useful when the LLM response is not JSON-formatted by default. Note that valid JSON may not be returned if the response is truncated due to token limits or if the model encounters an error; you will need to handle this case in your workflows.
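Taken together, the options above can be combined in a single action configuration. A sketch (the alert_data input and the exact string accepted by the model field are illustrative; the model value may differ from the display name used here):

```json
{
  "prompt": "Classify the severity of this alert as low, medium, or high.\n\n Data: <<alert_data>>",
  "model": "Claude 3 Haiku",
  "temperature": 0,
  "json_mode": true
}
```

Setting a low temperature alongside json_mode favors consistent, machine-parseable output for downstream actions.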
Emitted event
Each time the action runs, a single event is emitted containing the output from the AI model. For example:
{
  "output": "Estimated severity: high"
}
If the model returns valid JSON, or if you have JSON mode enabled, Tines automatically parses it in the event data:
{
  "output": {
    "estimated_severity": "high"
  }
}
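Because the output is parsed into structured event data, downstream actions can reference individual fields with the usual Tines template syntax. A sketch, assuming the AI action is named ai_action (a hypothetical name):

```
<<ai_action.output.estimated_severity>>
```

This resolves to "high" for the event shown above, and can be used, for example, in a trigger action that routes high-severity alerts differently.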
Example configuration options
Proposing remediation steps for a security alert (alert_data):
{
  "prompt": "Summarize the potential remediation steps for the alert\n\n Data: <<alert_data>>"
}
Analyzing a support request from a user (support_request):
{
  "prompt": "Help the user with their product issue.\n\n Data: <<support_request.query>>",
  "image": "=support_request.screenshot.contents"
}