Put AI to work where workflows work best

Written by Jason English, Director and Principal Analyst, Intellyx

Published on April 2, 2025

In this guest post, Jason English, Director and Principal Analyst at Intellyx, explores how GenAI is moving beyond chat to orchestrate real action for SOC teams.

As my colleague Eric Newcomer mentioned in the previous chapter of this series, GenAI changes the security automation game, with multi-system discovery, documentation, and task execution capabilities that can reduce cognitive load and toil for security analysts.

To get started, the analyst simply asks an AI-powered solution like Tines Workbench to pull in data and investigate their authorized systems through a natural language chat interface, with intuitive summaries that maintain awareness of an ever-changing application tech stack.

But conversational interfaces like chatbots are only the first step on the road toward the productivity that AI can help deliver for the SOC. 

To get sustainable improvements, we must go beyond simply chatting with an LLM.

We need to combine the learnings and patterns of a broader set of development, operations, and business stakeholders with automated workflows that include machine learning-driven skill sets and AI-driven tasks.

Learning automation lessons from RPA 

Starting about a decade ago, we saw the fast rise of RPA (robotic process automation), through companies like UiPath and Automation Anywhere alongside newer workflow automation tools fostered within industry giants such as Salesforce and Microsoft.

We expected RPA to recruit a bona fide workforce of semi-autonomous bots to help us out. Literally, a ‘bot for every employee’ to do our bidding, capturing and replaying our process of logging into different SaaS tools, clicking buttons, entering strings into fields, and automating the next logical step in a workflow.

The holy grail in the race to RPA? Modeling automation in enough detail that bots could handle unsupervised work, taking attention off the plate of employees so that 80% or more of their most repetitive tasks could be solved automatically, freeing them to focus on the higher-value problems that help customers most.

Bots started proliferating throughout organizations, even automating certain tasks within the SOC, but unfortunately the bots turned out to be very brittle and unable to adapt to process change. As soon as an unexpected data value, menu item, or option appeared on screen that deviated from what was captured within the RPA tool, the bots would break. Every minute spent manually fixing bots by adding more rules and error handling took away a minute of productive work from employees.

While we saw some bright spots of digital process automation getting better at handling tasks using heuristics and algorithms, the software industry cast about for a new kind of automation buzzword to sell to enterprises. There was just no ready-to-sell way forward until we jumped straight from that era into GenAI, and now, Agentic AI.

Reducing the overhead of GenAI with specialized models 

No, ChatGPT did not invent AI, not even close. It landed an LLM (large language model) in people’s hands at the exact right moment to generate excitement.

We’ve been using more specialized forms of applied AI for years—for instance, for fraud detection in the financial industry, or for network threat detection in cybersecurity. These applied AIs could be trained with much smaller data sets because they focused on a limited set of inputs, often in machine-native form, such as logs, metrics, transactions, code snippets, and session tokens.

Using huge training data sets scraped from all human content ever found on the web, an LLM like Anthropic’s Claude or OpenAI’s GPT will ‘learn’ the English language—to the extent that it can seem to understand English prompts and respond with natural-looking prose. This requires billions of dollars of investment in modeling work and computing resources, so that GenAI can try to understand the meaning behind just about anything the user asks and respond with a plausible answer that doesn’t present dangerous biases.

Perhaps we don’t always need the first letter of LLM, the “large” part of the language model. What if you had a specialized language model (SLM) with focused learning data sets in one field, say, cybersecurity?

An AI tuned just for security could achieve higher levels of relevance with lower cost and a smaller system and data footprint. The SLM only needs to speak the lingo of security professionals, as well as understand the API interfaces and data formats of the many other security solutions and data sources their enterprises already have in place for threat detection, scanning, and remediation work.

Orchestrating actions through composite AI 

If nothing else, the introduction of DeepSeek popped the GenAI bubble. There will not be one AI to rule them all, no matter how much money is invested in it. If anything, to successfully mitigate cyber risk, we will likely need multiple AIs working in conjunction, as well as leveraging existing workflow automation assets. 

Let’s say there is a possible active data breach that has been flagged by an automated XDR tool with its own inference algorithms. 

With a simple chat request, an investigator kicks off a resolution workflow within the Tines Workbench platform, notifying an ITSM platform in operations and a Slack group at the SOC. The Workbench orchestration AI can then call on other specialized AIs to do both network packet inspection and code-level investigation, as well as calling an existing SAST/DAST tool via API to scan artifacts before and after the last deployment.

From there, all of the metadata and assets required for finding root causes are collected within an incident workflow, where relevant stakeholders of that incident are notified to participate.
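The flow above can be sketched in a few lines. This is a minimal illustration, not real Tines, ITSM, or Slack API code: the alert payload, notifier channels, and analyzer functions are all hypothetical stand-ins, showing only how an orchestrator fans out to specialized analyzers and collects their findings into one incident record.

```python
# Sketch of the orchestration flow: alert -> notify -> fan out -> collect.
# All names and payloads here are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Incident:
    alert_id: str
    notifications: list = field(default_factory=list)
    findings: dict = field(default_factory=dict)

def notify(incident, channel):
    # Stand-in for creating an ITSM ticket or posting to a Slack channel.
    incident.notifications.append(channel)

def run_analyzer(incident, name, analyzer):
    # Each specialized AI or scanner contributes metadata to the incident record.
    incident.findings[name] = analyzer(incident.alert_id)

def resolve_breach(alert_id):
    incident = Incident(alert_id)
    # 1. Notify operations and the SOC.
    notify(incident, "itsm")
    notify(incident, "slack:#soc")
    # 2. Fan out to specialized analyzers (stubbed results for illustration).
    run_analyzer(incident, "packet_inspection", lambda a: {"suspicious_flows": 3})
    run_analyzer(incident, "code_review", lambda a: {"tainted_commits": []})
    run_analyzer(incident, "sast_dast_scan", lambda a: {"new_findings": 1})
    # 3. The incident record now holds everything stakeholders need for root cause.
    return incident
```

In a real deployment, each stub would be an API call to the relevant tool, but the shape of the workflow is the same: one orchestrator, many specialized workers, one shared incident record.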

We will lose the value of reducing MTTR (mean time to resolution) if we can only look at it as a cold stat, isolated from all the work it actually takes to achieve a resolution when it really matters.

AI-assisted workflow automation can take extended teams from the identification and triage of a significant problem to the moment the right people focus on the right information to resolve it.

Will the SOC operate itself? 

I would bet against the SOC reaching autonomy—or doing fully unattended work—thanks to AI. Instead, specialized AIs trained on security and operations training data will someday handle the most repetitive SOC work alongside policy-based automation, so human experts can focus on critical issues.

There are many aspects of resolving vulnerabilities and addressing issues that fall outside the traditional realm of SOC work, and AI can help us investigate them. For instance, looking within the CI/CD deployment pipeline for suspect packages or code snippets introduced by developers. Or, referring to performance metrics in an observability dashboard traditionally used by IT operations to find the exact moment a DDoS attack might have started.
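The pipeline check mentioned above can be as simple as diffing dependency lists between deployments against a threat-intelligence feed. This sketch is purely illustrative: the package names and the suspect list are invented for the example, not drawn from any real feed.

```python
# Hypothetical suspect-package list, e.g. from a threat-intel feed.
SUSPECT_PACKAGES = {"evil-logger", "typo-requests"}

def find_suspect_packages(previous_deps, current_deps):
    """Flag dependencies added since the last deployment that appear
    on the suspect list."""
    newly_added = set(current_deps) - set(previous_deps)
    return sorted(newly_added & SUSPECT_PACKAGES)
```

For example, if a developer's commit adds `typo-requests` to a manifest that previously listed only `requests`, the check returns `["typo-requests"]` for the SOC to investigate.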

SoterICS offers a particularly interesting case study of this improvement. This security consulting and reseller firm was able to translate its own hard-won expertise and best recommendations about zero trust security policies into an AI-augmented solution that helps them mitigate more cyber risk for clients with limited resources.

Instead of a linear analyst-based cost scaling, where we'd need at least a dozen hires and potentially millions in wages to tackle our expected sales funnel, we can now invest in a smaller, more expert automation-focused team that scales gracefully. This is our new growth story. Engineering-first.

Amine Besson, Industrial MDR Lead, SoterICS

Using Tines’s latest Workbench product, SoterICS reduced the time to demonstrate and create security workflows for their clients from months to weeks or days. Now their SOC can orchestrate the work of automation, analytics, GenAI, and Agentic AI in a mixed-mode fashion, starting from natural language prompts, and they can even pass this capability on to their end customers.

The Intellyx take 

Task automation and business process orchestration tools have been around for decades. We’ve also had various precursors of AI around for years: OCR text reading, voice recognition, and autocomplete in our writing and coding interactions with emails and IDEs.

Despite all the current hype around LLMs and image generators, GenAI-based automation still needs something else to make a sustainable impact on complex SOC workflows—process expertise—or it will become just as brittle as the RPA bots we are struggling to manage.

Tines Workbench, with its multi-AI agents that can query multiple data sources and execute multi-step actions across systems, allows even non-expert humans in the security loop to routinely dispatch common problems and take action on the most intractable incidents.

©2025 Intellyx B.V. Intellyx is editorially responsible for this document. At the time of writing, Tines is an Intellyx customer, and Microsoft is a former customer. None of the other organizations mentioned here are Intellyx customers. No AI bots were used to write this content.
