I was initially very skeptical about AI and its potential impact on cybersecurity. Having spent 15 years working on security teams at eBay, PayPal, and DocuSign, I've deployed and subsequently had to rip out plenty of overhyped technologies. Why would AI be any different?
For years, we as security practitioners were promised that AI and machine learning would change our lives for the better, but time and time again, the companies that touted this technology disappointed us. In the first decade or so of AI-branded security tools, we saw plenty of products that demoed well, but were completely undeployable.
Demoable vs deployable
AI features shine during demonstrations because you can tailor the demo to make it look and feel impressive, and the AI always provides the correct answer. It seems like magic because vendors can carefully control the information given to the AI during the demo, using clean data and specific scenarios that the AI can understand well.
Real-world deployments are a different story. Data is often messy, essential tools may not be available, and there's typically more noise than useful information in the data.
One memorable example of an undeployable tool was a real-time attack map released by a technology company. Because it was purely make-believe and not based on real data, it became known as the "pew pew map." Despite the skepticism of security practitioners, some less informed buyers purchased the technology. Of course, practitioners were then tasked with implementing it, knowing that it couldn't live up to the buyer's expectations.
The "pew pew map" was widely ridiculed by the security community. People began to refer to all these kinds of technologies that demoed well but were awful in reality as "pew pew" technologies.
When you factor in experiences like this, I think we can be forgiven for initially dismissing AI technology and LLMs.
I’ve been badly burned by technologies that overpromised and underdelivered too many times. And unfortunately, when deployed in actual situations, many AI tools hallucinate and generate numerous false positives and inaccuracies.
These technologies still hallucinate, even as they improve and become more accurate. They're not 100% correct 100% of the time, and they probably never will be. As a result, there needs to be a degree of trust and an ability to verify. Human oversight is always going to be important. That’s why we’re particularly proud of Tines' monitoring capability.
How are leading CISOs approaching AI? Find out in our latest report.
Hopeful skeptic
Even with my healthy levels of skepticism, I’ve always been hopeful that AI and machine learning would have an impact on the technologies that we were building at Tines. For the longest time, the innovation just wasn’t there. That all started to change around 2022 when more sophisticated large language models arrived on the scene.
Ultimately, what turned me from an AI skeptic to a champion was the experimentation of our product team at Tines.
I only became an advocate when I saw with my own two eyes, in production environments, how AI in Tines can solve real customer problems. I had never seen that with previous generations of AI-powered technology.
Our product team delivered a pivotal "eureka!" moment by streamlining the process of converting timestamps. This mundane yet intricate task, typically burdensome for software engineers, was seamlessly automated by AI in Tines, eliminating the need to consult format documentation.
That use case was all it took to open my eyes to AI’s potential to simplify complex, mundane tasks. And the use cases for AI in Tines got more exciting from there.
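To give a flavor of why timestamp conversion is "mundane yet intricate," here's a minimal, hypothetical Python sketch (not Tines' actual implementation) of normalizing a few common timestamp shapes to ISO 8601 UTC. The format list and the epoch-milliseconds heuristic are illustrative assumptions:

```python
# Hypothetical illustration only: different log sources emit epoch seconds,
# epoch milliseconds, or assorted string formats, and normalizing them by
# hand means juggling format codes and timezone rules.
from datetime import datetime, timezone

def to_iso_utc(value):
    """Normalize a few common timestamp shapes to an ISO 8601 UTC string."""
    if isinstance(value, (int, float)):
        # Heuristic: values this large are almost certainly epoch milliseconds.
        if value > 1e11:
            value = value / 1000
        return datetime.fromtimestamp(value, tz=timezone.utc).isoformat()
    # Try a handful of common string formats (illustrative, not exhaustive).
    for fmt in ("%Y-%m-%dT%H:%M:%S%z",      # ISO 8601 with offset
                "%d/%b/%Y:%H:%M:%S %z",     # Apache access-log style
                "%Y-%m-%d %H:%M:%S"):       # naive "database" style
        try:
            dt = datetime.strptime(value, fmt)
            if dt.tzinfo is None:
                dt = dt.replace(tzinfo=timezone.utc)  # assume UTC if naive
            return dt.astimezone(timezone.utc).isoformat()
        except ValueError:
            continue
    raise ValueError(f"Unrecognized timestamp: {value!r}")
```

Even this toy version has to make judgment calls (seconds vs. milliseconds, what to assume for naive timestamps), which is exactly the kind of fiddly detail that's tedious for a human and trivial for an LLM-assisted workflow step.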
Read Head of Product Stephen O’Brien’s blog post on how the product team developed AI in Tines.
Feeling the pressure
The journey to AI in Tines was a long one - we had something like 70 failed experiments in AI before we found an approach that met our strict security and privacy requirements.
We were quickly descending into a trough of disillusionment around AI. And it would have been easy to buckle to the pressure like other companies and release rushed AI features that weren’t up to the task.
Thankfully, this was a brief moment of weakness. We quickly reminded ourselves that we don't build anything, AI-related or not, just because our competitors are doing it.
We only build things if they add value to our customers. That’s what drove us to explore AI in Tines in the first place.
So we returned to our customers’ problems. We looked at the value Tines is currently adding in powering their most important workflows, and asked how we could build on this even further. And we identified two key areas where we could improve.
1. How do we make these workflows a little bit easier to build? In other words, can I use natural language to describe the workflow that I want and iterate on it once it’s built?
2. How do we make these workflows a little bit easier to run and monitor? This encompassed a bunch of other questions - how do I run workflows at scale? How do I make sure they can run in the cloud, on-prem, and in hybrid environments? How do I monitor these things to make sure they're working how I expect them to work? And how do I get notified if something breaks midway through a workflow?
We also considered the biggest problem that security, IT, and other technical teams face - too much work and not enough people.
Tines was already making a huge impact here. We’re grateful that so many of our customers have gone on the record about the efficiency that Tines helps them create and the impact this redistribution of resources has on their business objectives.
If AI in Tines was going to add additional value, it needed to do two things for our customers:
1. Help them work faster
2. Reduce barriers to entry even further
Our first two AI-powered features - automatic mode and the AI action - proved this even before launch day, with longtime Tines customers who participated in the early-access program reporting additional time-saving opportunities and greater accessibility for non-developers on the team.
But that’s just the beginning. I can see the number of workflows teams build within the first year skyrocketing. I can also see the number of Tines builders per team skyrocketing.
I really believe that AI in Tines is going to 10x the usability of our product.
In 18 months, I went from inherent skepticism to the opinion that AI is the single most important technology shift I've seen in my lifetime. But we haven’t reached an AI utopia just yet. We’ve yet to see AI have a meaningful impact on business outcomes, outside of a couple of fairly narrow use cases.
I can’t wait to see AI in Tines buck the trend created by other AI tools and become part of this promising next phase in AI’s evolution.
Keeping humans in the loop
Just because I’m feeling optimistic about the future of AI doesn’t mean I’m ready to hand over all of my workflows.
What we offer in Tines right now is AI-enhanced workflow automation that’s driven by people. Our customers are describing what they want, or confirming the next steps that the AI should take.
As AI, LLMs, and relevant guardrails continue to evolve and improve, user trust and use cases will also grow, enabling even more people to focus on more creative and engaging work.
But what’s great about AI in Tines is that you have full control over that progression. We run the language model within our own infrastructure. Your data never leaves your region, never travels over the open internet, and is never logged or used for training. You are in complete control - you decide when and how your workflows interact with AI. This is what we mean when we say that AI in Tines is secure and private by design.
Read Eoin Hinchy’s seven best practices for implementing AI.
Moving forward
Even after seeing people do really exciting things with AI in Tines, I’m still building my own trust in AI. And I’m guessing that you are too.
What should be your next step? How do the most security-conscious people in the world learn to trust AI and tap into its potentially game-changing benefits?
Despite what some vendors may tell you, it doesn’t have to be a sprint. My recommendation is always to introduce AI in areas where the risk of failure is minimal, and gradually increase its scope in your processes as it proves its capabilities and reliability.
It’s also important to grow your understanding of AI and your experience in using it - our always-free Community Edition is a nice place to start.
And it helps to check in regularly with your vendors, and ask how and why they’re introducing AI features to their products. They should be happy and excited to share this information with you.
If you use AI in Tines, I’d love to hear your feedback. These features are part of a much broader AI strategy at Tines - we’re just getting started!
Learn more about powering your workflows with AI.