Seth Holland: All right, good afternoon, everybody. My name is Seth Holland. I’ll be proctoring today’s webinar. We want to welcome you and thank you all for joining. Our topic today is one that’s affecting us all in one way or another: artificial intelligence. Today, we’re going to explore how helpful AI can be to organizations, but also the potential security pitfalls associated with it. Our goal is not to create fear but rather to help drive critical thinking and preparation for this exciting AI journey.
We’re at a bit of a crossroads. As we were preparing for this event, we noticed both the exponential advancement of AI and escalating cyber threats, which means the interplay between cybersecurity and AI has never been more critical for organizations of all sizes across all verticals. We have two exceptional panelists with us today: David, sitting next to me, and Chris Yula. David is from Lydonia Technologies, and Chris Yula is from CyberSecOp. They will share their insights and address some of your burning questions.
Without further ado, let’s get started. David, thank you very much for joining us. Chris, you as well. Why don’t we start with David? Tell us a little bit about yourself.
David Arturi: Sure, yeah, thanks. Hey everyone, thank you for joining. My name is David. I run the financial services practice at Lydonia Technologies. Lydonia is an intelligent automation consultancy. What that means is we sit at the intersection of automation, artificial intelligence, and data, and we do so through a secure partnership with CyberSecOp. A lot of times, when people think about AI or different technologies, it’s often marketed as a silver bullet. But the truth is there’s no one tool to rule them all. It always comes down to addressing the solution and backing into the right technologies. Our job is to work with businesses, identify use cases, prioritize them, design solutions, and ultimately implement them. I’ll turn it to you, Chris.
Chris Yula: Yeah, so my name is Chris Yula. I’m the VP of Sales and Strategy. Background-wise, I grew up on the IT side of the world as a consultant and system integrator and migrated about six or seven years ago full-time over to security and joined CyberSecOp. As you see up on the screen, CyberSecOp is a pure security consulting company. We help organizations with everything from compliance to more pragmatic security issues, which we’ll get into in a second. We’ve been at the forefront and a thought leader in this space, even recognized by Gartner Peer Review as the number one cybersecurity consulting company globally. We just got that award about a week ago, moving up from number two, so it’s been a journey for us and something we’re super proud of.
If we go to the next slide, one of the things I wanted to share to set some boundaries is a depiction of a layered approach from a security perspective. Everything denoted in green represents what you would or should expect from a compliance perspective, based on the framework important to your particular industry—whether it’s ISO 27001, SOC 2, NYDFS, or CMMC if you’re in the Department of Defense space. Everything in green, like advanced policies, data governance, and compliance, is the core foundation. The advent of AI has pushed us further. A few years ago, we focused more on the pragmatic side of security, because without the pragmatic side, governance doesn’t mean as much, and without governance, the practical side doesn’t carry as much weight.
Some of the things you see here that are coming up consistently now when we’re talking about automation in AI would be non-human identity scans, cloud governance monitoring, really understanding what you’re doing as far as data mapping and data classification—where the data is being stored and how secure it is. Those things are elements inside of compliance but are really factored in when you get into the security aspect of it. On the next slide, it shows where there is some fear inside of our clients right now. What we’re seeing is a point where, with the onset of AI coming in so rapidly, there’s definitely a gap. CIOs and CEOs are very concerned about the gap in talent for AI and readiness, making sure that they understand what’s going on. AI for us, and our clients, is both a tool to take advantage of to help move things more quickly in a defensive mode, but also to sit down and be prepared for how a threat actor might use it as a weapon. It allows them to move more swiftly into defensive mode from there. Through this conversation, we’ll go into this more deeply.
Seth Holland: Chris, thanks for that. You mentioned something, and I have a question for you. It has two parts. First, can you talk us through the difference between what you’d mentioned—security and compliance? And the second part, which you just mentioned, is how AI is affecting both pieces—security and compliance.
Chris Yula: Yeah, I kind of touched on it. It’s a super question and one we get asked a ton. What you see still up on the screen, the compliance side of it, is really getting into the controls and sub-controls that are important and are the guiding factor for any of those frameworks. It’s the standard that everybody can use to measure themselves, both against their peers and against the standard itself. Some are auditable, some are not. Where we get into the true security side is—and this is what you will hear from our organization, from our CISOs all the way through our strategists—how capable an organization is to protect itself and recover. That is really the practical side of security: Where are you on backups? Where are you from a DR perspective? What are you doing with data loss protection, your monitoring, and the management side of it? It really is an overlapping piece.

The side where AI comes in, if we’re on the defense, is how a threat actor could really leverage it to penetrate and escalate very quickly. That’s where we’re working with organizations and other technology companies to build in AI functions to more rapidly digest and process information, so we can be protective and defensive more rapidly. We want to containerize that and hopefully extinguish any kind of attack that might be coming on. What we’re trying to do in conjunction with Lydonia is to make sure that an organization is truly prepared to take a proactive stance and leverage automation and AI to make things better, move things quicker, minimize the errors that can creep into traditionally human-oriented work, and reduce the possibility of a negative outcome. We do know statistically that about 84% of all breaches or successful attacks are actually caused or allowed by human error. One of the ways we can minimize that is by reducing the amount of mundane human interaction in those aspects.
That’s one of the things David will get into on the automation side of it—the machine-to-machine piece.
Seth Holland: Chris, thanks—that’s a great segue. David, again, it’s a multi-part question for you. From our conversations and the interactions between our organizations, you’re absolutely at the crossroads. You talk to the heads of divisions and business leaders, but they’re turning to the CTO, CISO, or CIO on the technology side because they’re trying to figure out these use cases. What are the typical use cases you see organizations pursuing, and what would be an atypical use case for automation and AI? As you consider that, I see it as the creation of workflows—design from an engineering standpoint. Maybe it’s taking an old design and a new design and combining them. But tell us a little bit about those workflows and how you’re handling the crossroads between business leaders and IT leaders.
David Arturi: Yeah, that’s a great question. It’s relevant in every conversation we have, regardless of the organization. When people are approaching automation, you have to look at it as a program, not a project. Where organizations can fail and business leaders can get in trouble is when they try to go too quickly, don’t properly map out the use cases, or don’t have a clear North Star or KPIs. They inject money and hope it works, but it kind of falls to the wayside. You really have to look at it from an entire program approach. The most critical thing you have to do is determine what the outcome you are actually driving to is. There are so many different benefits with automation and AI—some hard, some soft—whether it’s revenue generation, reduction in operational costs, scaling without growth (which is the one we’re hearing the most frequently), or true FTE hour reduction. You have to make sure that everybody in your organization, from the top down, is perfectly aligned in what you’re driving toward or what the desired outcome is. When we’re thinking about those great places to start—the low-hanging fruit, the places where every organization wants to dip their toe in—we typically find the most success where we have the highest-paid employees who are doing that mundane work, the copy-paste, the manual back-and-forth, what we call swivel-chair work. So, oftentimes, we find the quickest time to value and the greatest ROI within middle to back-office operations, most notably within your finance department. We see a ton within AR, AP, things like reconciliation, journal entries, vendor onboarding, vendor management—anything where you have highly paid people performing low-skill labor and you want them to reinvest that time in other areas. That’s a pretty broad example, I’d say. 
Obviously, every organization will have its own unique workflows and use cases, but typically we want to look in those middle to back offices where we know employees are dealing with complicated processes, like Excel or similar tools.
Seth Holland: Excellent, thanks, David, we appreciate that. Chris, from a cybersecurity standpoint, any thoughts on these concerns based on what your organization is seeing? There’s a lot of data in flow—where it comes from, where it’s going, what your susceptibility is, the associated risks. Again, this isn’t about creating fear—it’s about creating awareness. Just some thoughts, if you wouldn’t mind.
Chris Yula: Yeah, I mean, in our recent history, this is probably the fourth real technology change everybody has had to react to. The push and the need are greater than the readiness. You can go back to the first iPhone, which displaced the security you had in a BlackBerry. You could talk about the CEO who got an iPad and walked in, thinking he could make it work, but it broke all the security aspects. Then you had cloud technology, which suddenly became the movement everyone had to adopt before they knew what it was. Now we’ve got AI, which really came upon us in the last six or nine months. So, when you sit down and look at the lack of regulation, the ethical concerns, and the privacy aspects, it’s about making sure that everybody’s prepared. Whether it’s CyberSecOp working on behalf of the CISO or working as the CISO if they don’t have one, we’re making sure there’s an understanding that there are potentially additional attack vectors. When you look at the proactive side, the things Lydonia and David are doing, it’s a matter of understanding the value AI and automation have, ensuring that you’re prepared ethically, with governance in place, and that data classification and mapping are done. This ensures that everything you’re doing is in as clean an environment as possible, not creating rapid issues if there were to be an incident. A lot of it is about walking before you run. I think that’s the same approach Lydonia takes with what a salesperson might call “low-hanging fruit.” We look at it from a risk perspective—what can we automate to improve security while minimizing risk? Getting comfortable with lessons learned, because every environment is different. Technologies are different, and change is happening in months, not years, moving more quickly with each step.