“AI is only useful when it solves real customer problems”: Tines on Risky Biz


Written by Eoin Hinchy, Co-founder & CEO, Tines

We’re all huge fans of the Risky Biz podcast here at Tines, so we were thrilled to be invited to appear on the show recently to talk about AI’s role in security automation.

I had a great conversation with host Patrick Gray about the security and privacy challenges that go along with deploying an LLM in your environment, and how our approach to AI in Tines is fundamentally different. I loved every minute of this chat, and I hope you’ll find it interesting, too. 

Patrick Gray: Tines does security automation and they do it extremely well. Just talk to any of their customers and you’ll typically get a stream of praise. They’re sort of doing more than just security now, and they have some big plans. As a company that’s already automating stuff, Tines is extremely well placed to make use of decision engines like large language models, so Eoin joined me for this absolutely terrific interview all about how they’re thinking about AI.

Eoin Hinchy: The way that we think about Tines is that we provide software to help companies build, run, and monitor their most important workflows. And for the longest time, we spent probably like 80, 90% of our engineering and product resources on the build section.

And that was the workflow builder, allowing our customers to build these incredibly intricate, powerful, flexible workflows that integrated with any tool in their stack. When we last spoke, we had probably spent about six months of experimentation around AI. 

Patrick Gray: Well, at that point, I think you said all it was good for was crapping out broken workflows, right? It just wasn't there.

Eoin Hinchy: Totally, we were quickly descending into a trough of disillusionment around AI. And we were seeing other companies quickly release these features that felt bolted on, that were mostly like demoware. And we were like, ‘Jeez, are we gonna have to descend and do something like that?’ But what we realized was, with these technologies, there's a huge difference between building something that's demoable and building something that's actually deployable.

The difference with something that's deployable is it solves real customer problems, it runs at scale, it's cost effective. And we've seen huge companies get this wrong.

Like Microsoft, with their Security Copilot. I don't know if you were reading recently, but it costs $100k minimum, and Microsoft themselves are saying, ‘hey, don't trust this thing.’ It's staggering. And so we said we can build something that demos really well, but can we build something that's deployable? 

And, eventually, what we realized was, yes, we absolutely can, but we were trying to solve two separate problems. And the first problem we were trying to solve is your point - how do we make these workflows a little bit easier to build? So both in terms of configuration, but also in terms of using natural language to describe the kind of workflow that I want, and also using natural language to iterate on it once it's built.

And that's the easy, obvious application of AI and LLMs to this problem. And I think there are very few technology moats, honestly, to providing that type of capability. What’s really interesting is that, over time, we'll solve this build problem with the help of AI. But now we're spending more and more of our time focused on the running of the workflows and the monitoring of the workflows. And so we suspect that, over time, building your workflows will become commoditized, either through AI or by writing something like Python, and you won't need to be an engineer anymore to describe the kind of scripts that you want.

What will become more and more important is, how do I run these things at scale? How do I make sure they can run both in the cloud and on-prem and in hybrid? And how do I make sure they're massively scalable and elastic? 

How do I monitor these things to make sure they're working in the way I expect them to work? And how do I get notified if a service I'm relying upon falls over?

How do I get notified if something breaks midway through a workflow? And I think that will become more and more of an important differentiator for us as we continue to grow as a company. 

Patrick Gray: I mean, when you think about the way that you're gonna deploy these models, you almost think about them like people doing a job, because they're a natural language interface. So you want it to ask you an important question, but not bother you with not-important questions. And you're gonna have models being the bosses of other models. And I wonder if they're gonna start firing their sub-models or complaining up the chain and saying, ‘You gotta do something about this underling, it’s no good.’ Anyway, I'm getting a little bit…

Eoin Hinchy: No, I think you're right. You touch on a really important point in that, up to this point, and that's not quite true, but we've really only considered a single model.

Like, everybody is using ChatGPT as the model. But now it's being fast-followed by a bunch of other models. And now we see open source models like Llama, and some of the Anthropic stuff, being as capable for the vast majority of the use cases.

I think what's also going to become really interesting is, how does a platform like Tines give our customers a choice of model for an appropriate task? 

So if you've got some really gnarly, complex decision that needs to be made related to security, what model is best suited for that? Because it's gonna be a little bit more expensive to run. And then for the basic stuff like, is this email iTunes gift card fraud? What model can we provide that's very cost effective and very simple, to understand whether this is bad or not?

And so when we're thinking about providing this technology to our customers, we're also thinking about, how can we give them access to a secure and private model, as well as giving them access to all these cutting-edge models like GPT-4 and, eventually, GPT-5. 

Patrick Gray: You raise a really interesting point there, and that was something that came up in conversations I had with various experts on this at RSA, where I was talking to them about the costs involved in all of the compute that you use to do this stuff. And I was saying, well, we're at the point now where we're using much more specific models, exactly like you were saying.

And they're not that expensive anymore. You know, you don't need to throw every single thing at a cutting-edge, whole-of-the-planet ChatGPT-style model. You just don't. 

Eoin Hinchy: Absolutely. And the really interesting thing to me is that the adoption of these technologies, it's not black and white. It's not a case of we're using AI in our workflows or we're not using AI in our workflows. 

There can be a spectrum. You can use the really cheap models to do the very basic, mundane work, like phishing email analysis. And then you can slowly and responsibly increase your usage as you build trust in the system until eventually, you reach this utopia, which is all the repetitive work being handled by AI and LLMs.

It's about matching the sophistication of the model to the level of sophistication required by the task.
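The tiered approach Eoin describes, cheap models for mundane work and expensive models for hard decisions, could be sketched roughly like this. This is purely illustrative: the model names, task types, and routing table are hypothetical, not Tines's actual implementation.

```python
# Hypothetical tiered model routing: send cheap, well-understood tasks to a
# small model and reserve the expensive frontier model for complex decisions.
ROUTES = {
    "phishing_triage": "small-local-model",       # high-volume, mundane
    "gift_card_fraud": "small-local-model",
    "incident_decision": "large-frontier-model",  # costly, used sparingly
}

def pick_model(task_type: str) -> str:
    """Return the model tier for a task, defaulting to the cheap tier."""
    return ROUTES.get(task_type, "small-local-model")
```

The useful property is the default: anything not explicitly flagged as hard falls back to the cheap tier, so costs only grow where the workflow genuinely needs the bigger model.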

Patrick Gray: I think it would be helpful if you'd explain to people, from a Tines perspective, the scale of the opportunity for you. Because you've got such a head start in understanding what needs to be automated, and doing it the old way, without the AI. So now, to sort of paste in the AI goodness, you're not just slapping a ‘Now with AI!’ sticker on something that doesn't really need it. The scale of the opportunity for you must be just extraordinary. You're talking about going from automating certain tasks to automating everything. I mean, I'm guessing that's how you would be thinking about this. 

Eoin Hinchy: Oh, 100%. And honestly, sometimes, Patrick, I have to temper how much I think about this because it can be a little bit overwhelming when you think about the size of the opportunity.

But when we think about the software space in general and the dawn of AI… and I should also say that I'm not an AI fan by nature. I was hugely skeptical about AI for the longest time. And it's only really in the last 12 to 18 months, as we've run these experiments ourselves and seen the results, that I've been like, okay, yeah, there's something actually real to this technology.

When we think about software in general, it feels as if every single product will become a workflow product. Like, everything, regardless of what that product is. If it's a point solution, it's going to become a workflow product. And Tines, and I'm not exaggerating when I say this, we fundamentally believe we have the best workflow product available.

Patrick Gray: Well, I mean, just for anyone who might think that's too bold a claim, I think you can make a legitimate case that that's true. I agree. 

Eoin Hinchy: Thank you. It’s not just CEO hubris! 

Patrick Gray: No. I mean, someone listening to this might not know who you are, might not know Tines. But it's true. You can make that claim.

Eoin Hinchy: Thank you. I think you're right. Tines now, we've got this super, super powerful workflow engine. We now power something like 40,000 of the world's most important workflows across all manner and sizes of companies. We do something like 40 million automated actions on behalf of our customers every single day, and we’ve got six years’ worth of data that has resulted from all those workflows. And as a result, we’ve got a little bit of a head start, both in terms of the technology that we’ve built for the workflows, but also the data that has come from those workflows. 

And we're now in a position where we can act as the plumbing between all these individual workflow products, but we also have all this data around workflows, and we can provide recommendations to customers. Like, here's the type of workflow that we've seen be really successful at companies with a technology stack that looks like X, Y, Z. And we can do it in a secure and private way, because one of the things that's unique about our technology, and how we've applied LLMs in an effort to sidestep some of the security and privacy concerns, is that when you use AI within Tines, you're using an LLM that's running in your environment. So we're not using Microsoft or Google or OpenAI.

We're running an AI in your environment. So it's secure. It's private. There's no logging. It's in region. There's no tuning or anything like that. So customers can immediately embrace this technology without having to worry about some of the downstream security concerns. 

Patrick Gray: I mean, I'm guessing that you can ask the Tines AI, this is my technology stack, I have this problem. Can you recommend anything that I can do about that? And it'll tell you. And it'll do it for you. 

Eoin Hinchy: Correct. Absolutely. And, again, I think that's a really interesting thing that we probably have the best answer for. But there's gonna be loads of companies who will be able to do that to, like, 60%, right? 

Patrick Gray: We're off to the races though. We are absolutely off to the races with this stuff. And it's gonna just change business quite a lot. It's fascinating. 

Eoin Hinchy: I know. And I think we'll continue to lead in that build space. But, again, I think what's going to become increasingly important is the overall picture. Like, okay, well, you've got your workflow. It's designed correctly and it's built correctly. Now how do you run it and monitor it in a fashion that's representative and aligned to how important those workflows are? And that's where we'll be investing a bunch of our time in the future.

Patrick Gray: So what's ready to ship now? 

Eoin Hinchy: So what we have in the product today that's already released is… think about this in 2 separate ways. One is AI that makes the product easier to use. So this is kind of what we've been talking about - help me configure this action. Or build me a workflow that does X, Y, Z.

Patrick Gray: This has been an immensely successful use of AI across all different types of vendors. It's just great having that little AI person sitting there. It's like Clippy, but one that doesn't suck, right?

Eoin Hinchy: I think as well, what we've seen be hugely impactful in this category of make the product easier to use is data transformation. So if you've ever done any automation, both in terms of no-code platforms like Tines, but even in terms of scripting, honestly, one of the hardest parts is manipulating the data from format A that came from a tool to format B that needs to go to another tool. 

That's tricky unless you know what you're doing and understand things like regular expressions and so on. Being able to apply an LLM for that problem is just magic. 

Patrick Gray: I know what you mean. You're trying to transform something and you mess it up a little bit and it drops a comma in the wrong place and it's all wrong. I've seen it. 

Eoin Hinchy: And remember as well, for things like the transformation, they are build-time AI. So you're only running that AI when you're asking it, ‘Help me transform this data to that data.’ And then the AI is giving you back a script, and that script is static. So the workflow may execute a million times, but you've only used one execution against AI during the build period. 

Patrick Gray: To get the script that turns the thing into the thing you need.
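The build-time pattern Eoin describes can be sketched like this: the LLM is called once, at build time, to produce a static transform script; at run time only that script executes, however many million times the workflow fires. Everything below is illustrative; the generated script and field names are invented for the example, and this is not how Tines is implemented internally.

```python
# Build time (runs once): an LLM would be asked to write a transform script.
# Here we hard-code what such a generated script might look like; in a real
# system this string would come back from the model.
GENERATED_SCRIPT = """
def transform(record):
    # Map tool A's field names onto tool B's expected schema.
    return {"address": record["src_ip"], "severity": record["sev"].upper()}
"""

namespace = {}
exec(GENERATED_SCRIPT, namespace)     # compile the static script once
transform = namespace["transform"]

# Run time (may execute millions of times): no LLM call, just the script.
events = [
    {"src_ip": "10.0.0.1", "sev": "high"},
    {"src_ip": "10.0.0.2", "sev": "low"},
]
results = [transform(e) for e in events]
```

The cost asymmetry is the point: one model invocation at build time, zero at run time, which is why Eoin calls it outrageously cost-effective.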

Eoin Hinchy: So that is outrageously cost-effective. So that's the category of making the product easier to use. The second category then is, how do we give security teams and people who are performing workflow automation access, in a secure and private way, to this outrageously impactful power of LLMs, without them having to go and open a third-party risk review with their procurement team?

What we’ve done is we’ve created an action type… and, without going too far into the weeds, the way Tines works is we provide this set of basic building blocks for automation. We had 7 for, like, the longest time. We added an 8th most recently, which is essentially primitive access to an LLM. 

So now our customers have raw access to this secure and private LLM that runs inside their tenant, that they can use in any workflow at any point and get it to do anything that they want. 

Patrick Gray: Let me guess. You get to build the prompt and then shove the data in, and then you can ask it to give you a yes or a no or a true or a false. So I'm guessing that's how it works.

Eoin Hinchy: Correct. Like, recommend some next steps based on this security alert. Or rank this security alert from 0 to 100. Let me know if this code has any security vulnerabilities and recommend fixes. Summarize this document. So all these prompts are available.
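A rough sketch of the pattern Patrick guesses at and Eoin confirms: wrap the event data in a prompt that constrains the model to a machine-readable answer, then validate that answer before the workflow acts on it. The `call_llm` parameter and `fake_llm` stand-in are hypothetical; they are not a real Tines API.

```python
# Hypothetical "AI action" step: build a constrained prompt around the alert
# and parse the model's reply into a validated score.
def rank_alert(alert_text: str, call_llm) -> int:
    prompt = (
        "Rank the severity of this security alert from 0 to 100. "
        "Respond with only the number.\n\n" + alert_text
    )
    reply = call_llm(prompt).strip()
    score = int(reply)  # fail loudly if the model strays from the format
    if not 0 <= score <= 100:
        raise ValueError(f"score out of range: {score}")
    return score

# A canned stand-in model so the sketch runs without a real LLM.
fake_llm = lambda prompt: " 87 "
```

Constraining the output to a number (or a yes/no) is what makes the step safe to wire into the rest of a workflow: downstream actions branch on a validated value rather than free-form prose.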

We're not telling our customers, ‘Here's how you should use LLMs for your program.’ You know better than us. But what we do provide, as always, is a big bunch of best practices. 

So here’s a great prompt if you want to act like a security analyst performing incident response. Or here’s a really good prompt that’s going to act like a vulnerability manager if you want to analyze a Qualys report or something like that. 

Patrick Gray: I mean, if we could get the machines to do that for us, it's a service to humanity. You do realize this is what will inspire the machines to rise against us, Eoin, is making them read Qualys reports until they've had enough of humanity!? 

Eoin Hinchy: That’s true. 

Patrick Gray: We're out of time. I could talk to you about this all day. Fascinating discussion. Eoin Hinchy, great to have you on the show as always. And I can't wait to see you dropping some of these advanced AI features in the product in the future. It's gonna be great.

Eoin Hinchy: Beautiful. Thanks, man. 

Patrick Gray: That was Eoin Hinchy there with a bit of a mind-bending interview. I do hope you enjoyed it.
