I’m Peter Wrenn, but my friends call me Pete!
I have the pleasure of moderating the Tines Technical Advisory Board (TAB), which meets quarterly. In it, some of Tines’s power users engage in conversations around product innovations, industry trends, and ways we can push the Tines vision forward: automation for the whole team.
Well, that’s the benefit to our customers and to Tines. The biggest benefit for me is getting to be a sponge around thought leaders in the cybersecurity space: people who have been running teams and defending companies for a long time. Having been in cybersecurity for only about three years, and never as a practitioner myself, I find the vast landscape of concepts to learn intimidating, to say the least.
Quarter by quarter, I get to pick a handful of topics and then get a crash course from the advisory board’s thoughtful discussions. I walk away with a wealth of knowledge I didn’t have going in and perspectives I never would have known to consider. It may sound cheesy, but that is the greatest gift of all.
After our recent session, where the conversation pivoted into the world of LLMs, AI, and ChatGPT, I decided I needed to start sharing what I learn from these quarterly meetups with my network.
So, with the permission of our guests and my marketing team, I’m sharing a quarterly blog post on the key takeaways from each Technical Advisory Board session.
As I said, the May edition centered on the rapid rise of LLMs, AI, and ChatGPT. I teed up the conversation with: "As these continue to take the world by storm, how do you see them shifting the landscape of industries like technology, education, and more?"
Out of that conversation, here are the top three topics where I gained a whole new perspective:
Sensitive data leakage: can it be controlled?
This was the greatest risk I identified going into the conversation: sensitive company data leaking into ChatGPT’s training data. We’ve already seen articles about people who were fired over this practice, but if you don’t do anything to control access, it will be an inevitability in any enterprise due to human error.
What I learned: While policy-based and zero-trust controls dominated the conversation, the key takeaway for me was a customer pointing out the risk posed by other vendors who use OpenAI without strict controls in place. A vendor that doesn’t exercise extreme caution with their customers’ data in ChatGPT exposes those customers to a great deal of liability, especially as OpenAI and a small group of other AI companies become so ubiquitous that they are a ripe target for cybercrime.
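For illustration only (this is my own sketch, not a control the board discussed): one lightweight policy-style guardrail is to scrub obvious secrets and PII from prompts before they ever leave your network. The patterns and the redact helper below are hypothetical placeholders.

```python
import re

# Hypothetical redaction rules; real deployments would tune these patterns.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "<CARD_NUMBER>"),
    (re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"), "<API_KEY>"),
]

def redact(prompt: str) -> str:
    """Apply each redaction pattern before the prompt is sent to an LLM."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(redact("Summarize this ticket from alice@example.com, api_key=sk-123"))
# -> "Summarize this ticket from <EMAIL>, <API_KEY>"
```

Pattern matching is a blunt instrument, of course; it complements, rather than replaces, the policy and zero-trust controls the group discussed.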
Writing malicious code: not as easy as it seems?
I figured it would be pretty easy to get ChatGPT to write malware for you, but as it turns out, there are enough ethical guardrails baked into GPT-4 to make this a bit more challenging. One customer reported that one of their red teamers managed to write malware by breaking the task down into steps, but it still took about six hours.
As it turns out, the far greater risk, and one I hadn’t even considered given my software engineering background, is the number of code vulnerabilities that could be introduced by lazy developers asking ChatGPT to write methods for them that pull in vulnerable libraries or insecure patterns. With vulnerability exploitation becoming a key attack vector against companies, this is a risk that needs to be mitigated with strict access controls.
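To make that risk concrete, here’s a hypothetical sketch of my own, not an example raised in the session, of the kind of flaw an LLM can quietly hand back when a developer asks it to "write a user lookup method":

```python
import sqlite3

# The kind of method an LLM might plausibly generate on request.
# Building SQL via string interpolation is vulnerable to SQL injection.
def find_user_unsafe(conn: sqlite3.Connection, username: str):
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

# The safer equivalent: parameterized queries let the driver handle escaping.
def find_user_safe(conn: sqlite3.Connection, username: str):
    query = "SELECT id, username FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
    conn.executemany("INSERT INTO users (username) VALUES (?)",
                     [("alice",), ("bob",)])
    payload = "' OR '1'='1"  # a classic injection payload
    print("unsafe:", find_user_unsafe(conn, payload))  # leaks every row
    print("safe:  ", find_user_safe(conn, payload))    # returns nothing
```

The unsafe version works fine in a demo, which is exactly why it slips through review; the safe version is a one-line change, but someone has to know to ask for it.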
Potential benefits of our robot overlords taking over…
The only potential benefit I heard about came from our CEO, Eoin Hinchy. He suggested an AI security analyst that takes in raw alerts and makes a basic determination of recommended next steps for each one.
Disclaimer: this is not something that could replace humans today.
A quote from one customer that received universal agreement was: "If an AI can replace your security analysts, then you need to train your security analysts better."
A couple of Tines customers expanded on this concept, saying they could envision using ChatGPT for Tier 1 triage and questioning, so that analysts are presented with more information and can make decisions on those alerts faster.
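Here’s a rough sketch of what that could look like, assuming the official OpenAI Python client; the model name, prompt, alert shape, and triage_alert helper are all placeholders of mine, not a Tines feature:

```python
import json
from openai import OpenAI  # assumes the official OpenAI Python client

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def triage_alert(alert: dict) -> str:
    """Ask the model for a Tier 1 read on a raw alert, for analyst review."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": "You are a Tier 1 SOC analyst. Given a raw alert, "
                        "summarize it and recommend a next step. A human "
                        "analyst makes the final call."},
            {"role": "user", "content": json.dumps(alert)},
        ],
    )
    return response.choices[0].message.content

alert = {
    "source": "EDR",
    "rule": "powershell_encoded_command",
    "host": "finance-laptop-07",
    "details": "powershell.exe -enc SQBFAFgA...",
}
print(triage_alert(alert))  # a draft determination, not a final verdict
```

The key design choice, per the disclaimer above, is that the output is a draft presented to an analyst, never an automated verdict.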
Bonus takeaway: Tines cases
Shameless plug for our product here, but we also got a sneak preview of Tines cases before their official launch on 6/7/23. My personal favorite part is the timeline and collaboration component, which makes it feel very much like the IRBot stories our customers use in Slack, but with a super clean UI. Learn more here.
That’s it for last quarter's recap, and I look forward to sharing more from future sessions!