Technical Documentation

How the TIA Portal AI Copilot Works

A transparent look at the architecture behind T-IA Connect's AI assistant. Understand how your messages become TIA Portal actions through intelligent tool selection and LLM orchestration.

Overview

The T-IA Connect Copilot is an integrated AI assistant that controls TIA Portal via tools (function calling). You send a message in natural language, the LLM decides which tools to call, and T-IA Connect executes the corresponding actions in TIA Portal.

The entire process runs locally on your machine. Your API keys are encrypted via Windows DPAPI, never logged, and never transmitted to third parties. T-IA Connect contacts LLM providers directly with no proxy or relay server.

Data Flow

User

Sends a natural language message

T-IA Connect

Builds context, selects tools, calls LLM

LLM Provider

Analyzes and returns tool calls

TIA Portal

Executes actions via Openness API

User   > Create a FB Motor
LLM    > tool_call: create_block(FB)
Result > FB Motor_FB created successfully
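The exchange above maps onto an OpenAI-format function-calling request. Here is a minimal sketch with an illustrative tool schema; the real create_block definition used by T-IA Connect may differ:

```python
# Illustrative tool definition in OpenAI function-calling format.
# The schema below is an assumption for this sketch, not T-IA Connect's real one.
CREATE_BLOCK_TOOL = {
    "type": "function",
    "function": {
        "name": "create_block",
        "description": "Create a program block in the open TIA Portal project.",
        "parameters": {
            "type": "object",
            "properties": {
                "block_type": {"type": "string", "enum": ["FB", "FC", "OB", "DB"]},
                "name": {"type": "string"},
            },
            "required": ["block_type", "name"],
        },
    },
}

def build_request(user_message: str) -> dict:
    """Build a chat-completion payload carrying the user message and tools."""
    return {
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": user_message}],
        "tools": [CREATE_BLOCK_TOOL],
    }

request = build_request("Create a FB Motor")
```

The LLM answers such a request either with plain text or with a `tool_calls` entry naming `create_block` and its arguments, which T-IA Connect then executes against TIA Portal.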

Supported LLM Providers

T-IA Connect is provider-agnostic. Bring your own API key and choose your preferred provider.

| Provider | Default Model | Authentication |
|---|---|---|
| OpenAI | gpt-4o | Bearer token |
| Claude (Anthropic) | claude-sonnet-4-20250514 | x-api-key header |
| Gemini (Google) | gemini-1.5-flash | API key in query |
| Groq | llama-3.3-70b-versatile | Bearer token |
| Custom | Ollama, vLLM, etc. | Optional Bearer token |

API Key Security

  • Keys stored locally, encrypted via Windows DPAPI
  • Direct connection to providers, no proxy or relay
  • Keys are never logged or transmitted to third parties
  • Custom endpoints supported (Azure OpenAI, enterprise proxies)

Smart Tool Selection

With ~400 tools available, sending all of them to every request would be costly and counterproductive. T-IA Connect solves this with contextual category selection.

The Challenge

  • Each tool definition consumes ~50 input tokens
  • Some providers cap the number of tools per request at 128 (e.g. OpenAI)
  • Too many tools can confuse the LLM

The Solution: Contextual Categories

T-IA Connect analyzes your message and activates only the relevant tool categories.

Always Included

  • Core (~34 tools): project management, devices, export/import
  • Knowledge (~21 tools): documentation, tips, analysis, memory

| Category | Tools | Trigger Keywords |
|---|---|---|
| Blocks | ~31 | block, fb, fc, ob, db, scl, lad, compile, program, code... |
| Tags | ~16 | tag, watch, force, variable, address, diagnostic... |
| HMI | ~51 | hmi, screen, panel, wincc, display, visualization... |
| Hardware | ~24 | hardware, module, rack, cpu, slot, profinet, gsd... |
| Simulation | ~23 | plcsim, simul, runtime, power_on, instance... |
| Security | ~21 | security, password, protection, opcua, webserver... |
| UDT | ~23 | udt, type, struct, data_type... |
| Online | ~10 | online, offline, download, upload, go_online... |
| Advanced | ~66 | fds, graph, sfc, safety, blueprint, motion, alarm... |
| Infrastructure | ~54 | report, vcs, git, test, library, codesys... |

Concrete Examples

| Message | Active Categories | Tools Sent |
|---|---|---|
| "Create a FB Motor" | core + knowledge + blocks | ~86 tools |
| "Configure PLCSim" | core + knowledge + simulation | ~78 tools |
| "Add an HMI screen" | core + knowledge + hmi | ~106 tools |

Sticky Context

If your message contains no keywords (e.g. "yes", "continue", "do it"), T-IA Connect reuses the categories from the previous message. This enables natural conversations without losing context.

Execution Loop

The Copilot works in a loop: the LLM can call multiple tools successively before responding to the user.

1. Send the message + context to the LLM
2. The LLM returns a response
3. Tool calls detected?
   • Yes: execute the tools in TIA Portal, send the results back to the LLM, and return to step 2
   • No: deliver the final response to the user

Anti-Infinite Loop Protections

| Protection | Threshold | Behavior |
|---|---|---|
| Identical consecutive calls | 2 | Stops the loop |
| Consecutive failures (same tool) | 3 | Stops the loop |
| Max absolute iterations | 200 | Safety net |
| LLM error retries | 2 | Then failure |
| Empty response retries | 3 | Then failure |
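The first three thresholds can be combined into a single guard consulted after each tool call. The thresholds come from the table; the class structure and call-signature format are assumptions:

```python
class LoopGuard:
    """Sketch of the anti-loop protections; thresholds from the table above."""

    MAX_IDENTICAL = 2      # identical consecutive calls
    MAX_FAILURES = 3       # consecutive failures of the same tool
    MAX_ITERATIONS = 200   # absolute safety net

    def __init__(self):
        self.iterations = 0
        self.last_call = None
        self.identical_count = 0
        self.failure_counts = {}

    def should_stop(self, call_signature: str, failed: bool) -> bool:
        """Check one tool call, e.g. "create_block(FB)", against all limits."""
        self.iterations += 1
        if self.iterations > self.MAX_ITERATIONS:
            return True
        # Count identical consecutive calls.
        if call_signature == self.last_call:
            self.identical_count += 1
        else:
            self.identical_count = 1
            self.last_call = call_signature
        if self.identical_count >= self.MAX_IDENTICAL:
            return True
        # Count consecutive failures per tool; a success resets the counter.
        tool = call_signature.split("(")[0]
        if failed:
            self.failure_counts[tool] = self.failure_counts.get(tool, 0) + 1
            if self.failure_counts[tool] >= self.MAX_FAILURES:
                return True
        else:
            self.failure_counts.pop(tool, None)
        return False
```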

Token Consumption

Understand what consumes tokens and how T-IA Connect optimizes costs.

| Component | Estimated Tokens | Frequency |
|---|---|---|
| System prompt (instructions) | ~2,000-3,000 | Each message |
| Project context (devices, blocks) | ~500-2,000 | Each message |
| Tool definitions (128 max) | ~5,000-8,000 | Each message |
| Conversation history | ~1,000-10,000 | Growing |
| User message | ~50-500 | Each message |
| Typical total input | ~10,000-20,000 | Per message |
| LLM response | ~200-2,000 | Per message |

Cost Estimate

For a typical message with GPT-4o (OpenAI pricing, April 2026):

Input: ~15,000 tokens × $2.50/1M ≈ $0.037
Output: ~500 tokens × $10.00/1M ≈ $0.005
Total per message: ~$0.04

A full exchange with tool calling (2-3 LLM iterations) costs approximately $0.10-0.15.
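The arithmetic behind these estimates is easy to reproduce for your own token counts (prices as stated above):

```python
# GPT-4o list prices as stated in the estimate above (USD per 1M tokens).
INPUT_PRICE_PER_M = 2.50
OUTPUT_PRICE_PER_M = 10.00

def message_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD of a single LLM call."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

single = message_cost(15_000, 500)                            # ~$0.04
exchange = sum(message_cost(15_000, 500) for _ in range(3))   # 3 LLM iterations
```

With three LLM iterations per tool-calling exchange, the total lands at roughly $0.13, consistent with the $0.10-0.15 range above.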

Automatic Optimizations

  • Contextual selection: only relevant tools are sent (not all 400)
  • Auto-compaction: when history exceeds ~200,000 characters, old messages are automatically summarized
  • Limited history: max 20 messages and 32,000 characters kept in context
  • 128 tool cap: limits the fixed cost of tool definitions
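The history limits above can be sketched as a trim step applied before each request. The limits (20 messages, 32,000 characters) come from the list; dropping oldest-first is an assumption about the implementation:

```python
def trim_history(messages: list[str], max_messages: int = 20,
                 max_chars: int = 32_000) -> list[str]:
    """Keep the most recent messages within both limits."""
    kept = messages[-max_messages:]          # enforce the message-count cap
    while kept and sum(len(m) for m in kept) > max_chars:
        kept.pop(0)                          # drop the oldest message first
    return kept

history = [f"message {i}: " + "x" * 2_000 for i in range(30)]
trimmed = trim_history(history)
```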

Multimodal Support

The Copilot can process images and PDF documents alongside text.

Images

Images sent to the Copilot are transmitted to the LLM in base64 (vision format). Useful for analyzing program screenshots, identifying visual errors, or describing schematics.
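A minimal sketch of packaging an image for a vision-capable model, using the OpenAI Chat Completions content-part format. Whether T-IA Connect builds the payload exactly this way is an assumption:

```python
import base64

def image_message(image_bytes: bytes, prompt: str,
                  mime: str = "image/png") -> dict:
    """Wrap an image as an OpenAI-style vision message (base64 data URL)."""
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url",
             "image_url": {"url": f"data:{mime};base64,{encoded}"}},
        ],
    }

msg = image_message(b"\x89PNG...", "What error is shown in this screenshot?")
```

Other providers use slightly different shapes (Anthropic, for instance, takes a `source` object with `media_type` and `data` fields), which is part of what a provider-agnostic client has to normalize.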

PDFs (Design Specification)

PDFs are processed via the CDC (Custom Design Companion) system: text extraction, chunking (1,500 chars with 200 overlap), table of contents injected into the prompt, and on-demand chunk access via dedicated tools. This avoids sending the entire PDF into context.
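The chunking step (1,500 characters with 200 of overlap) can be sketched as a sliding window over the extracted text; the real CDC extractor is more involved:

```python
def chunk_text(text: str, size: int = 1_500, overlap: int = 200) -> list[str]:
    """Split extracted PDF text into overlapping fixed-size chunks.

    Sizes come from the CDC description above; the overlap keeps sentences
    that straddle a chunk boundary readable in both neighboring chunks.
    """
    if size <= overlap:
        raise ValueError("chunk size must exceed overlap")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap   # advance by size minus overlap
    return chunks

chunks = chunk_text("x" * 4_000)  # 4 chunks: starts at 0, 1300, 2600, 3900
```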

Autonomous Mode

The autonomous mode allows the Copilot to execute action sequences without user confirmation.

| Aspect | Interactive | Autonomous |
|---|---|---|
| Confirmations | Required before destructive actions | Skipped |
| System prompt | Full (rules, formatting, interactive) | Compact (rules, scope) |
| Early stop detection | No | Yes (max 3 retries) |

Rate Limiting

T-IA Connect applies separate quotas for each access channel.

  • api: direct REST API calls
  • mcp: MCP tools (Claude Desktop, etc.)
  • copilot: the integrated assistant

Free Tools (Not Counted)

Read-only tools do not consume quota: list_*, get_documentation, get_llm_tips, discovery and status tools.
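A sketch of the quota exemption check; the exact exempt list beyond the tools named above is an assumption:

```python
# Read-only tools that are exempt from rate-limit quota, per the text above.
# Any additional exempt tools beyond these are assumptions for this sketch.
FREE_PREFIXES = ("list_",)
FREE_TOOLS = {"get_documentation", "get_llm_tips"}

def counts_against_quota(tool_name: str) -> bool:
    """Return True if calling this tool should consume rate-limit quota."""
    if tool_name.startswith(FREE_PREFIXES):
        return False
    if tool_name in FREE_TOOLS:
        return False
    return True
```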

Real-Time Communication

The Copilot uses SignalR for live updates during execution.

  • onAssistantResponse: chat display
  • onToolExecution: "Thinking..." indicator
  • onTokenUsage: token counter
  • onStatusUpdated: status messages such as "Sending to Claude..."

Compatible Models

Detailed compatibility per provider.

OpenAI

Recommended: gpt-4o, gpt-4o-mini, gpt-4-turbo

Not compatible: o1, o3-mini (these models require the Responses API rather than Chat Completions)

Claude (Anthropic)

Recommended: claude-sonnet-4-20250514, claude-haiku-4-5-20251001, claude-opus-4-6

All Claude models are compatible

Gemini (Google)

Recommended: gemini-1.5-pro, gemini-1.5-flash, gemini-2.0-flash

Auto-filter excludes non-chat models (embedding, vision-only)

Groq

Recommended: llama-3.3-70b-versatile, mixtral-8x7b

Note: free tier may be insufficient for 128 tools

Custom (Ollama, vLLM)

Any model supporting OpenAI-format function calling

Automatic detection of "fake tool calls" (models returning JSON as text)

Dual Model Routing

For Custom providers, T-IA Connect supports two models: a reasoning model for analysis/planning and a code model for SCL/LAD generation. The switch is automatic based on the tool type being executed.

Key Figures

  • ~400 total tools
  • 128 tools per request (max)
  • 5 supported providers (+ custom)
  • 13 tool categories
  • 90-second HTTP timeout
  • 20 messages / 32,000 characters of history kept in context
  • ~$0.04 estimated cost per message (GPT-4o)
  • ~$0.10-0.15 estimated cost per full exchange

Frequently Asked Questions

Does T-IA Connect send my PLC code to the cloud?

Only when you use a cloud LLM provider (OpenAI, Claude, Gemini). Your messages and project context are sent to the provider you chose. For maximum privacy, use Ollama with a local model and nothing leaves your machine.

How much does the AI cost per message?

With GPT-4o, a typical message costs about $0.04 and a full exchange with tool calling costs $0.10-0.15. You pay the LLM provider directly using your own API key.

Can the Copilot damage my TIA Portal project?

In interactive mode, the Copilot asks for confirmation before destructive actions. Anti-loop protections prevent runaway tool execution. You can also stop execution at any time.

Which LLM model should I choose?

For best results, use GPT-4o or Claude Sonnet. For budget-conscious usage, GPT-4o-mini or Gemini Flash work well for simpler tasks. For full privacy, use Ollama with a local model like Llama 3.


Ready to Try the Copilot?

Download T-IA Connect and start generating PLC code with AI today.