Seth Holland: As I’m scrolling through here, the first one is for you, David. What are you seeing as common hold-ups or nuances that will stall progression in the programmatic approach?
David Arturi: Yeah, I think we… So I guess two thoughts there. The first one is something we’ve touched on, which is not having alignment between the executives and the boots-on-the-ground lines of business.
There have been a number of times where we’ll go into an engagement where one group is expecting one outcome, and then the business or the executives are expecting another. Then we get to the end, and it’s like, well, now who’s pointing the finger at who? Right? So we did all this work, we got the investment, we invested the time, but now it’s tough to move forward because different people on the client side had different expectations. So really making sure that everybody is aligned and concrete on: What are we trying to get out of this? What are our KPIs? And how is this going to help us?
The other thing we’ve kind of come across, and again, Chris touched on this before, is the idea that… and let’s be real, a lot of this stuff is very new. Automation itself has been around for 30-some-odd years, but it picked up steam with AI and ChatGPT. People think Skynet half the time, right? So there’s this idea that, okay, there are non-human users and non-human identities within my environment. So where are they going? What credentials are they logging in with? What do they have access to? How am I setting these things up?
So really, it’s that security question: once we’ve gotten to the stage where these bots are in my environment, what do I need to do to protect myself?
And we’re not a security company, so that’s when we’ll lean on somebody like CyberSecOp to actually secure that environment, to make sure you have that outside voice, so everybody up the food chain is comfortable having these bots or having these agents—whatever it is—inside your environment. So those are the two key areas, and typically how we get around them.
Seth Holland: Excellent. Chris, the question for you: Now that CyberSecOp has been brought in as an objective third party to help with cybersecurity issues or potential conversations around avoiding issues from an automation perspective, what are most leaders concerned about with the implementation from a cyber standpoint? David touched on it from the business standpoint of getting that implemented. Any thoughts there?
Chris Yula: Yeah, I mean, I think the three big things that I’m seeing, or we’re seeing, would be around…
There is no hard regulation yet. It’s coming so fast that it’s not part of the frameworks or the standards. So we’re creating that and giving recommendations on how we think it should be. And that’s a collaborative conversation with each client, based on their particular culture and directives, but also on their vertical, which may dictate certain requirements.
There are still ethical concerns, especially for those that are consumer-based. Right? To sit down and go, well, what does this mean to me? What’s going to happen? What are you touching, all that?
And then I… The other one is, to David’s point earlier, there’s sometimes a concern about somebody unintentionally or intentionally messing with the data. Right? So some people are calling it data poisoning.
It’s a matter of actually making sure that doesn’t happen. Really keeping clean sandboxes, understanding what’s going on, sitting down, and making sure it’s secure. So that really is a blend of security and IT, not just security alone.
The thing that David also mentioned is we do a bunch of assessments going into it. Even non-human identity scanning wasn’t on the tip of our tongue a year ago. It just wasn’t a thing. It didn’t have the pop that it all of a sudden has now. And those were bots that maybe were put in place, like David said, 5, 10, or 15 years ago, that some little group in the corner put together, and no one really knew what was going on, but everybody loved whatever was happening on the back end.
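To make the idea of a non-human identity scan concrete, here is a minimal sketch in Python. The account fields, heuristics, and 90-day rotation threshold are illustrative assumptions, not any specific directory or identity provider’s schema:

```python
# A minimal sketch of a non-human identity (NHI) scan over an exported
# account inventory. All field names, heuristics, and thresholds here
# are illustrative assumptions.
from datetime import datetime, timedelta

ROTATION_LIMIT = timedelta(days=90)  # example policy: rotate creds every 90 days

accounts = [
    {"name": "svc-invoice-bot", "interactive_login": False,
     "last_rotation": "2019-03-01", "owner": None},
    {"name": "jsmith", "interactive_login": True,
     "last_rotation": "2025-06-01", "owner": "jsmith"},
]

def flag_non_human_identities(accounts, now=None):
    """Flag likely NHIs whose credentials are stale or that have no owner."""
    now = now or datetime.now()
    findings = []
    for acct in accounts:
        if acct["interactive_login"]:
            continue  # heuristic: humans log in interactively, bots do not
        stale = now - datetime.fromisoformat(acct["last_rotation"]) > ROTATION_LIMIT
        orphaned = acct["owner"] is None
        if stale or orphaned:
            findings.append((acct["name"], "stale" if stale else "orphaned"))
    return findings

print(flag_non_human_identities(accounts))
# [('svc-invoice-bot', 'stale')]
```

In practice such a scan surfaces exactly the bots Chris describes: service accounts set up years ago with no named owner and credentials that have never been rotated.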
But now you’re talking about putting that on steroids and doing that more. We’re seeing more and more of a move, even in the news with companies making commitments that they’re going to be driving AI. Yes, it’s going to have a positive impact, but there’s going to be some fallout, such as employee count or whatever.
But in reality, they’re going in that direction. They’re looking at it as operational efficiency. They’re looking at it as a financial benefit, and they want to move as quickly as possible. So it’s a little bit of just pumping the brakes to know where you’re going before you take massive leaps. And that’s really where we spend the bulk of our time. But once we get through that crawl phase, all of a sudden we can jog, and when we’re jogging, we can run. And that’s what we’re trying to make sure everyone’s aware of.
Seth Holland: Excellent. Thank you. David, is it possible to establish clear guidelines and oversight to ensure AI is used in a secure and compliant manner, while also using AI to accomplish the company’s business goals? I think we’ll have you answer first, and then, Chris, we’ll turn to you for the compliance and regulatory part of that response.
David Arturi: Yeah, so I think that’s exactly right. As we go to develop and deploy these bots, we’re not throwing anything disruptive into the environment. What we’re doing is training them with a business analyst. First, we’ll have that business analyst go through every single step with your subject matter experts—where they’re clicking, where they’re pointing. So we have that whole process mapped out. From there, we provision the bots accordingly, with the same levels of access that the subject matter experts have. We’re not going to give them any access beyond that.
We call it LPA (least privileged access). If there is something in question, take the more secure option. We should never be granting more access than needed. This is a broad security principle: these bots should never have more access than the absolute bare minimum. So that’s how we provide that oversight and guidance when we’re implementing and developing these bots. I’ll turn it over to Chris for the second part of that.
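A minimal sketch of that LPA provisioning logic, assuming made-up permission names: the bot receives only the intersection of what the mapped process requires and what the subject matter expert already holds.

```python
# A minimal sketch of least-privileged access (LPA) provisioning for a bot.
# All permission names are illustrative.
sme_permissions = {"erp:read", "erp:write", "crm:read", "hr:read"}

# Actions the business analyst observed while mapping the process.
mapped_process_actions = {"erp:read", "crm:read", "email:send"}

# The bot gets only actions that are both required by the process and
# already granted to the human doing the task today, never more.
bot_permissions = sme_permissions & mapped_process_actions
print(bot_permissions)  # {'erp:read', 'crm:read'}

# Anything the process needs that the SME lacks is escalated for review
# rather than silently granted ("if there is something in question, take
# the more secure option").
for action in mapped_process_actions - sme_permissions:
    print(f"needs review before granting: {action}")  # email:send
```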
Chris Yula: Yeah, I mean, I think the thing that we should be doing in any organization that’s in our place, including the CISOs that are on staff, is enabling it but doing it with caution—making sure that you’re doing it in a safe manner and not creating unintended consequences. So if anything, we’re part of the solution. And as part of the solution, we’re making sure that we stay within the boundaries, keeping it limited, and moving further as we get more comfortable and get more accomplished. That’s really what we should be doing. There will always be assumed risks—some of them small, some of them large—that existed 5 or 10 years ago and still exist today. It doesn’t matter if we’re talking about asset management, AI automation, disaster recovery, or compliance. All of that stuff is a matter of knowing you can’t have everything completely locked down and choke the business from functioning. You’ve got to work in harmony with them. And that’s where we’re seeing a big benefit from the AI and automation work that Lydonia brings: sitting down and really making that movement and flow happen in a much greater sense than it would otherwise.
Seth Holland: Looks like we have another two-part question. What types of controls do I need to have in place to use AI in a useful, mindful manner? And are there steps we can take to identify and mitigate security risks and threats that target AI? Let’s start with the types of controls from a business standpoint, David. I know you touched on the idea of having a process that is heavily focused on an excellent client experience, and it goes back to that defined process behind the programmatic approach. So, as it relates to this question, what types of controls do people need to have in place? Or do you have, in your process, the 101 of how to get started and what it would look like?
David Arturi: That’s a good question. Every customer’s environment is going to be different. We obviously have our recommended guidelines, the bare minimum of what we’ll need in order to deploy. But ultimately, we want our team, our solutions architects, and everybody working with the customer. All development work is going to be, and should be, done in the customer’s own environment. Everything should happen in a test environment and then move to a production environment on the customer’s side. We never want to have any of that stuff in-house, and we don’t want any access that we don’t need to have. So we really try to adhere to the customer’s security protocols as much as possible and make sure that all development work, anything we’re touching, is provisioned to their standards and done with the lightest touch possible. Everything should be done on their side.
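One way to picture that customer-side separation is the sketch below. The environment names, tenants, and guardrail are assumptions made for illustration; the point is that development and testing live in the customer’s own tenants and real data only ever appears in production.

```python
# A minimal sketch of customer-side environment separation for bot
# development. All names and fields are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class Environment:
    name: str
    tenant: str            # customer-owned tenant, never the vendor's
    allows_real_data: bool

PIPELINE = [
    Environment("dev",  "customer-dev",  allows_real_data=False),
    Environment("test", "customer-test", allows_real_data=False),
    Environment("prod", "customer-prod", allows_real_data=True),
]

def promote(bot_build: str) -> None:
    """Walk a bot build through the customer-side pipeline, in order."""
    for env in PIPELINE:
        # Guardrail: only production ever handles real data.
        assert env.allows_real_data == (env.name == "prod")
        print(f"deploying {bot_build} to {env.tenant} ({env.name})")

promote("invoice-bot-1.0")
```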
Seth Holland: Excellent. Chris, same question to you.
Chris Yula: Yeah, I mean, like I mentioned, from the standards side there’s still no trickle-down yet to actually change each of the frameworks, whether it’s NIST, ISO, or whatever. So we’re making those adaptations with the customers in real time.
I think the controls and sub-controls within it are going to be, in some cases, attached to things that already preexist, maybe with some minor elements added, just to make sure we’re taking the extra step and caution. Right now, it hasn’t slowed down any of the development or any of the companies that are putting stuff in place. I would say the process is more fluid right now, because they know they need the policies, the frameworks, and the cultural announcements in place to move forward. There is definitely great interest in adopting AI. You read about it, but organizations are making sure they’re doing it with whatever protective cautions they can. They also realize that whatever is written today might have to change in 3, 6, or 9 months based on a new use case, a new situation, or a new technology that wasn’t available before. Then they have to tweak again.
For us, I would say, on the compliance side, we’re actually reviewing things more frequently—almost on a monthly basis or wherever applicable—to see how things like AI are being assimilated and how they’re being affected by the marketplace overall.
Seth Holland: You’re staying in the hot seat because the next question is pointed directly to you. From a practice standpoint, meaning more for you, your experience, and providing some insight to this, the question is: Can AI in general be used to improve the effectiveness and efficiency of my organization’s security operations?
Chris Yula: I mean, we’re cautious about public Gen AI compared to, say, Copilot, which is a closed environment. I don’t think there is as much awareness that everything you’re putting in may also be shared with the next person who has a separate question. So realistically, we’re recommending limiting the use of Gen AI for business use cases, because you’re probably unintentionally sharing information you weren’t even aware of. It’s different if you’re sitting down trying to write a resume or put a job posting together—those are kind of innocuous use cases. But when you’re using it at the business level, it definitely could create some problems. One of the biggest things is policy: we’re spending a lot of time on general-use policies inside each organization. What is or is not acceptable, and what can be used? Or maybe by department: it might be fine for HR, but not for finance. Those are decisions each organization can make based on their particular business or the vertical they serve.
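A department-level general-use policy of the kind Chris describes can be as simple as an explicit allow/deny table with a default of “review.” The departments, tool names, and rules below are illustrative assumptions:

```python
# A minimal sketch of a department-level generative-AI use policy
# (fine for HR drafting a job posting, blocked for finance).
# Departments, tool names, and rules are illustrative.
POLICY = {
    ("hr", "public_genai"): "allow",         # innocuous drafting tasks
    ("finance", "public_genai"): "deny",     # risk of leaking sensitive data
    ("finance", "copilot_closed"): "allow",  # closed environment is acceptable
}

def check_use(department: str, tool: str) -> str:
    # Default to review: anything not explicitly decided gets escalated.
    return POLICY.get((department, tool), "review")

print(check_use("hr", "public_genai"))       # allow
print(check_use("finance", "public_genai"))  # deny
print(check_use("legal", "public_genai"))    # review
```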
Seth Holland: An interesting question: Is implementing an AI solution typically disruptive? I think this is a great question, but unfortunately not one with a direct black-and-white answer, as it depends on the organization, the use cases they’re trying to implement, and what they consider disruptive. So, David, do you mind adding some commentary on that?
David Arturi: Yeah, I think for the most part you’re pretty on point there. When we think about disruptive, it goes back to that pragmatic approach. Where we also see companies fail is when they want to start out of the gate with one massively complex use case that touches 17 different systems and has all these advanced technologies and this and that. That’s when it becomes disruptive: one, because the ROI is probably not there, and two, you have to try to tie all these things together. That’s when it gets a little messy. And that’s typically why I recommend starting with those simpler use cases with a fast time to value, where we know they’re going to work and they’re touching fewer systems in general. So again, with respect to that “crawl, walk, run” approach that Chris mentioned earlier, it’s not disruptive; it’s an investment of time. But like any investment, you have to make sure you’re picking the right one and you’re ready for the long run. This isn’t trading options; this is a long, pragmatic approach. If you do it the right way, it’s not disruptive; it’s an investment. If you do it the wrong way, then yes, it is absolutely disruptive. At times, the term “disruptive” is a bit unfair, because there’s a difference between investing in your business and disrupting your business. It goes back to that project-versus-program approach. If you’re going to swing for the fences, throw everything at it, and pull your team off their workload for two weeks because you’re taking a stab at something blindly, that’s going to be disruptive. And I think that applies to most things. But if you take a nice, simplified approach, with somebody who’s been there and done that showing you and guiding you, then it’s an investment.
Chris Yula: I think the ones that are disruptive are the ones where there’s a skill gap inside the organization, and there’s a push from the business to just get it done. So the planning is quick and inadequate, the skill set is short of what it should be, and someone’s being forced to go, “Hey, you’re the new AI guy, go figure it out.” Now you’ve got a problem that wouldn’t have happened if, in fact, the business was doing it at a controlled pace. So, to me, that’s the only thing we’ve seen so far that falls in that kind of disruptive category. Otherwise, everything else, like David said, with proper planning, proper awareness, proper safeguards, and starting small and building—and also building rapidly—is really the safest manner.
Seth Holland: Excellent. So, as we’re winding down on time, I have just a couple of quick points I wanted to go back to that were touched on a number of times throughout the last 45 minutes or so. David, you talked about that programmatic approach and getting to a place of manageable steps so that the desired outcome is achieved. And that’s where you get to, Chris, the idea from a cyber standpoint: there really aren’t any hard and fast rules at this point. However, managing expectations is key, and taking an approach that doesn’t add risk to your ecosystem is key. In a minute, I’m going to ask you both for some closing statements. We go back to that defined process, and to me, that really applies to both the cybersecurity standpoint and the Lydonia standpoint, and the idea of leveraging automation and AI to upskill the staff—moving skilled labor to where those skills are really needed. So, to me, those are some of the points that really stuck out. David, some closing thoughts?
David Arturi: Yeah, I think that automation, while it may seem new or complicated on the surface, really isn’t that challenging. But you have to approach it the right way. Work as a team, work as an organization, determine what you want to get out of it—whether it’s hard dollars or time saved—and then work with people who have been there and done that before. This isn’t something I necessarily recommend trying on your own. I’m sure you have a great team, but lean into partners who have experience, who can show you where to look, who can show you what works and what doesn’t. And then take those pragmatic steps, and you’ll absolutely be successful.
Seth Holland: Excellent, thanks, David. We really appreciate you joining us today. Chris, any closing thoughts?
Chris Yula: Yeah, I mean, I would echo everything that Dave is talking about. From a pure security or cybersecurity perspective, like I mentioned with those three swim lanes, I’m really excited about how AI and automation are going to start filtering their way into the toolsets on the defensive side of things. That’s going to improve things tremendously for that segment of the world. For what we’re doing with Lydonia, sitting down and actually making sure an organization is prepared, aware, and going into it smartly, we’re spending a lot of time on that data classification and mapping piece, making sure the data is segmented. In some instances, we’ve even been helping build out a DevOps environment, so the environment is completely clean and safe to actually build and write code in, and then take it out to production and run it through testing. So, for us, I think this is going to happen as rapidly as we keep seeing it in the news. We’re seeing small, medium, large, and extra-large companies diving in with both feet, and all we’re working on is trying to make sure that they’re going in with caution, knowing that this is still a new space. It’s not fully vetted yet. It’s not mature enough to go at some of the paces people want to go at. But to David’s point, some of these small to medium changes or projects or bots can have massive impacts, both on operations and cost savings. So, we’re high on automation. We’re big fans of Lydonia—they’re a great partner. They do amazing work for their customers. And we’re looking forward to the future.
Seth Holland: Chris, thank you again for your time today and for joining us on the panel. Thank you again so much. Take care, everybody.