It’s equally important to explore the flip side: the things CISOs should say “no” to. What is the advantage of saying no? Remember, every time you say yes, there’s an opportunity cost, and it’s not always worth paying, especially when you have other priorities. Below are three scenarios that CISOs should avoid; doing so will free up time to focus on what matters most.
Adding Unnecessary Complexity

The latest technologies (zero trust, DevSecOps, anything artificial intelligence/machine learning related) continue to make waves, and despite the considerable benefits many of them deliver, it’s all too easy to end up purchasing tools and tweaking processes in ways that add unnecessary complexity, not to mention the mental load required to navigate it. A common scenario: a team purchases duplicative scanning tools in a noble attempt to identify more vulnerabilities. If that team hasn’t explored all of the functionality of its original tool, or optimized its vulnerability management process, the pile-up of scanning tools will simply put more strain on systems without solving the problem.
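To make the scanner pile-up concrete, overlapping tools tend to report the same issue on the same asset several times, inflating the backlog without adding signal. The sketch below is a minimal, hypothetical illustration of consolidating findings by asset and CVE; the field names (`scanner`, `asset`, `cve`, `severity`) are assumptions, not the export format of any real product.

```python
from collections import defaultdict

def consolidate(findings):
    """Merge findings from multiple scanners, keyed by (asset, CVE).

    Each finding is a plain dict. The field names here are hypothetical
    and would need mapping to your scanners' actual export formats.
    """
    merged = defaultdict(lambda: {"sources": set(), "severity": 0.0})
    for f in findings:
        entry = merged[(f["asset"], f["cve"])]
        entry["sources"].add(f["scanner"])
        # Keep the highest severity reported across tools for the same issue.
        entry["severity"] = max(entry["severity"], f["severity"])
    return dict(merged)

# Illustrative data: two tools both flag the same Log4j issue on web-01.
raw = [
    {"scanner": "tool_a", "asset": "web-01", "cve": "CVE-2021-44228", "severity": 10.0},
    {"scanner": "tool_b", "asset": "web-01", "cve": "CVE-2021-44228", "severity": 9.8},
    {"scanner": "tool_b", "asset": "db-01", "cve": "CVE-2022-22965", "severity": 9.8},
]

deduped = consolidate(raw)
print(len(raw), "raw findings ->", len(deduped), "unique issues")
```

The point is not this particular script but the habit it represents: extract more value from the tools you already own before adding another one.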
It’s imperative that security leaders actively fight complexity in their organizations. I recently read two books, Leidy Klotz’s “Subtract” and Martin Lindstrom’s “The Ministry of Common Sense,” that inspired me to seek out the places where unnecessary complexity has either crept in or persisted. Both books focus on reducing unnecessary complexity in systems, business processes, policies, and organizations.
It’s critical to identify where complexity exists and start simplifying. Complexity can manifest in many places; third-party risk management and vulnerability management are common examples. Reducing it requires teams to think critically about what to prune and optimize. In that process, you will run into organizational inertia and resistance to change. Fighting through that resistance brings its own challenges, but the results are worth it.
Jumping Into AI/ML Prematurely
There is enormous market pressure to jump into artificial intelligence (AI) solutions. Everywhere we see fast-track paths to delivering machine learning models that scale key security functions such as compliance, security operations, and vulnerability management. The benefits of a well-implemented AI solution are real: better decisions about alert triage, faster process execution, and more. But there are dangers in moving too quickly with any cutting-edge technology, AI included:
- You may be taking on something unsustainable because you lack the personnel to support it
- You may not have the platforms in place to effectively deploy, operate, and iterate on machine learning models
- You may not have a data management posture that sets your team up to train models properly, reinforcing the garbage-in/garbage-out problem
This list could go on, and it could just as easily apply to any other cutting-edge technology that promises massive market disruption. Don’t try to sprint before you can walk. There are plenty of high-impact basics that security teams can get better at first, such as automating tasks through SOAR (security orchestration, automation, and response) tools or selectively using managed services. Don’t let the shiny objects on the market distract your team from doing the basics well and then intentionally expanding and building.
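As a small illustration of the “automate the basics first” point, much of the early value of SOAR-style automation is simple, deterministic triage logic rather than machine learning. The sketch below is a toy playbook step under assumed inputs; the field names, severity threshold, and action labels are hypothetical and not drawn from any particular SOAR product.

```python
def triage(alert, known_benign_ips):
    """Toy SOAR-style triage step: auto-close alerts from known-benign
    sources, page on-call for high severity, queue the rest for an analyst.

    The alert schema and the severity cutoff (8) are illustrative
    assumptions, not any vendor's actual API or defaults.
    """
    if alert["source_ip"] in known_benign_ips:
        return {"action": "close", "reason": "known benign source"}
    if alert["severity"] >= 8:
        return {"action": "page_oncall", "reason": "high severity"}
    return {"action": "queue_for_analyst", "reason": "manual review"}

# Example: a scanner appliance on an allowlist triggers a noisy alert.
decision = triage(
    {"source_ip": "10.0.0.5", "severity": 6},
    known_benign_ips={"10.0.0.5"},
)
print(decision["action"], "-", decision["reason"])
```

Rules like these are boring by design: they are auditable, cheap to maintain, and deliver time savings immediately, which is exactly the kind of basic worth mastering before layering on ML.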
Outsourcing Your Thinking

There is no shortage of firms willing to come in and build out wholesale strategic plans, design metrics, and develop reports along the way. In many cases, these offerings are built from reusable boilerplate material. That may be appropriate, but every leader should be intentional about whether it’s actually what is needed. This sort of engagement comes packaged in many roles: consultant, agile coach, management consulting team, advisor, and more. None of these roles and services is inherently bad, but before you engage them, you must know what you want from the partnership.
Outsourcing the core function of thinking is detrimental to any team over the long term. It degrades the team’s ability to think critically, apply its organizational context, and deliver. This is especially true in cybersecurity innovation, an arena where people particularly need to be able, and empowered, to think for themselves. If that work goes to expensive consultants and employees feel disempowered, innovation will be stifled and dependent on keeping the consultants engaged.
Your team should remain in the driver’s seat, thinking critically and steering the security program. A team that can think and debate together is a team that will thrive, especially when it guards against dynamics like groupthink. Outsourced teams are there to support your vision, not to set it and think for you.
To keep your team as effective as it can be, I strongly recommend pausing before moving too aggressively into any of the areas above. Cybersecurity market hype can push you toward all of them, and in my experience each has the potential to be harmful or limiting over the long term. Steering clear will enable you and your team to get more of the right things done.