Governing Security at Large Enterprises
“Quantity has a quality all its own”—a quote apocryphally attributed to Joseph Stalin.
As part of the research that went into F5 Labs’ 2018 Application Protection Report, we surveyed information security professionals. We found that 37% of respondents were from organizations with more than 5,000 people. Here’s how the percentages broke down:
| What is the worldwide headcount of your organization? | Percentage |
| --- | --- |
| 100 to 500 people | 14% |
| 501 to 1,000 people | 23% |
| 1,001 to 5,000 people | 26% |
| 5,001 to 25,000 people | 22% |
| 25,001 to 75,000 people | 10% |
| More than 75,000 people | 5% |
The fact that more than a third of our respondents came from large organizations (over 5,000) stood out to us because we know that as organizations get bigger, everything changes. Information systems become more complex, their user base and use cases mushroom, and management priorities and processes change. Oftentimes the security practices and architecture that worked until now become increasingly strained as organizations try to adapt to scenarios far from their original context. In short, successfully leading a security program at large enterprises is a different beast altogether.
Ginormous Tech Ecosystems
On the face of it, this qualitative difference is a surprise. Look how big the budget is! Look how much experience the team has! Look at all that sweet hardware, blinking obediently, awaiting your command. How could anything possibly go wrong?
We, and many others, have pointed out that running a security program is less about moments of technical wizardry and more about the thorough, unrelenting, and well-documented grinding through the basics: inventory, patching, access control, and measurement. Well, it turns out that even doing the basics well at the scale needed by a big organization is surprisingly hard.
One of the issues is simply the inertia that comes with size. As the number of people and structures grows, it becomes harder and harder to get an informed quorum into the same room to make a decision. This holds for technical systems as well as for policy. It also becomes harder for actionable feedback to reach decision makers. This means that over successive generations of decisions, past mistakes (or controversial decisions, or compromises) might not get fixed, but instead become the foundation for more misguided decisions.
It is also hard just to know what you’re working with. An up-to-date inventory of assets is a prerequisite for proper security, yet discovering the applications and digital assets to protect is a nontrivial problem at large scale, and the difficulty climbs steeply as the organization grows. Think about an organization with hundreds of office LANs, dozens of major data centers, thousands of cloud deployments, tens of thousands of TLS/SSL certificates, and petabytes of data. Spread within that is sensitive information, in the form of intellectual property, employee PII, and customer PII, in both physical and digital forms, in offices around the world, with staff subject to different laws and speaking different languages.
Much of the time, big organizations grow from dozens and dozens of acquisitions, each with its own risk appetite, technological infrastructure, business needs, custom applications, ops teams, dev processes, and culture. Given the discoverability problem, it can take years to bring them all into line. Worse, it can be difficult to decide how much conformity of policy and implementation is feasible across disparate teams and business units.
Compliance Problems on a Galactic Scale
There are also, inevitably, issues with compliance. Global organizations with multiple lines of business need to comply with a wider set of international and local regulations, which in turn complicates the task of harmonizing existing policies and architectures. This can make innovation difficult, or at least more complicated, as it becomes trickier to implement new ideas and technologies for an organization grappling with a complex of overlapping compliance regimes. Take, for example, the problem of rolling out a new global HR self-service application. This is a surmountable task, but it means clearing not just technical security hurdles but also privacy and compliance ones.
In short, the complexity that comes with size and geographic dispersion means that your monitoring surface is more like a muddy pond than a single pane of glass. Changes of direction are more elephant-like than cheetah-like. And large organizations are more often in the cross-hairs of regulators, which slows things down further. The result is that the added burden often outweighs the advantages of bigger budgets and slick technology.
How Large Enterprises Deal
What’s a CISO at a large organization to do? We have identified five principles that apply across the board but matter even more for large organizations. Naturally, they support and amplify one another, so we recommend implementing each of them to the greatest degree possible, but, as always, tweak them to fit your organization and your expertise.
Principle 1: Simplify
The first remedy for the exponentially increased complexity at large organizations is just to simplify as much as possible. Of course, it is not realistic to expect, say, 50,000 users across three continents to use the same handful of apps, follow the same detailed security policy, or fall under the same compliance regimes. However, it is possible to embrace a principle of simplicity and pursue it to the greatest degree possible, letting it take different forms in different situations.
For policy, simplicity hinges on recognizing which parts of policy are strategic, and best controlled centrally, and which are tactical, and best controlled locally and with granularity. More and more organizations are moving toward an overarching information security policy that is short, simple, broad in scope, and high-level. This document lays out minimum baselines for settings, acceptable technologies, and responsibility matrices. The details that used to be included in policy can be turned into local procedures, standards, and guidelines that correspond precisely to their contexts. This allows disparate stakeholders to tailor concrete control objectives to granular needs while remaining true to the overall policy in principle.
It is still a good idea to maintain a list of approved tools and vendors to ensure a degree of centralized control, but the key with this simplification trend in policy is to avoid the bureaucracy that comes with deep centralization. This means that the CISO no longer needs to formally approve all exceptions to technical standards, for example, as these questions have been pushed out to the edges. In other words, as size and complexity grow, information security policy should increasingly concern itself with ends, not means.
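As a rough illustration of that split between central ends and local means, here is a minimal sketch in which a short central baseline states the minimums and each business unit layers its own, possibly stricter, local standard on top. The settings, values, and validation rules are hypothetical, not drawn from any particular policy framework.

```python
# Hypothetical sketch: a short central baseline states the "ends" (minimums),
# and each business unit supplies its own local standard (the "means").
# A local standard may tighten a baseline value but never relax it.

CENTRAL_BASELINE = {
    "mfa_required": True,        # every unit must enforce MFA
    "min_tls_version": 1.2,      # minimum TLS version allowed
    "max_patch_window_days": 30, # critical patches applied within 30 days
}

def merge_local_standard(local: dict) -> dict:
    """Overlay a unit's local standard onto the baseline, rejecting any relaxation."""
    effective = dict(CENTRAL_BASELINE)
    for key, value in local.items():
        if key == "mfa_required" and value is False:
            raise ValueError("Local standard may not disable MFA")
        if key == "min_tls_version" and value < CENTRAL_BASELINE[key]:
            raise ValueError("Local standard may not lower the TLS floor")
        if key == "max_patch_window_days" and value > CENTRAL_BASELINE[key]:
            raise ValueError("Local standard may not extend the patch window")
        effective[key] = value
    return effective

# Example: a unit that patches faster than the baseline requires.
print(merge_local_standard({"max_patch_window_days": 14}))
```

The point of the sketch is the direction of control: the center defines floors and ceilings, and local teams fill in the details that fit their context.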
There are other dimensions in which CISOs can simplify their approach. The principle of least privilege takes on greater significance as networks become more complex, fragmented, and dispersed. Reducing the number of root admins and restricting the sweep of elevated privileges to specific geographic locales, tool suites, or business units will improve visibility, reduce inertia, and diminish the likelihood of an adverse audit finding. Which leads to the second principle.
Principle 2: Segment
We know that all organizations should assume breach, especially large ones. Given the complexity of large enterprises today, there is almost certainly an ecosystem of malware and compromised hosts floating around somewhere. However, a breach or a disruption somewhere in the network should not compromise everything. Segmenting large information systems can reduce the impact of an incident from a catastrophic breach to a local nuisance.
There are many dimensions by which networks can be segmented. Geolocation, business function, and data classification are the most common. Irrespective of criteria, large enterprises should use firewalls, data loss prevention (DLP) tools, intrusion prevention systems (IPS), and SSL/TLS decryption to filter and monitor traffic moving between segments. If the segmentation is done with logging and monitoring, these tools can also aid in anomaly detection.
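To make the monitoring side of this concrete, the sketch below checks observed inter-segment flows against an allowlist of permitted segment pairs. The segment names, flow records, and allowlist are invented for illustration; real input would come from firewall or flow logs.

```python
# Hypothetical sketch: compare observed inter-segment flows against an
# allowlist of permitted segment-to-segment paths. Segment names, the
# allowlist, and the flow records are invented for illustration.

ALLOWED_FLOWS = {
    ("corp-users", "intranet"),
    ("corp-users", "hr-apps"),
    ("hr-apps", "hr-database"),
}

observed_flows = [
    {"src": "corp-users", "dst": "intranet",    "bytes": 120_000},
    {"src": "corp-users", "dst": "hr-database", "bytes": 4_500},   # bypasses the app tier
    {"src": "build-farm", "dst": "hr-database", "bytes": 900_000}, # unexpected segment pair
]

def unexpected_flows(flows):
    """Yield flows whose (source, destination) segment pair is not on the allowlist."""
    for flow in flows:
        if (flow["src"], flow["dst"]) not in ALLOWED_FLOWS:
            yield flow

for flow in unexpected_flows(observed_flows):
    print(f"Investigate: {flow['src']} -> {flow['dst']} ({flow['bytes']} bytes)")
```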
For universal assets that receive traffic from the entire organization, such as intranets, HR resources, and so on, it is best to place them directly on the web, anyway. As the concept of the perimeter changes, and internal controls start to resemble edge controls (see below for more on this), the user experience of web resources will be no different, and the risk will be lower than maintaining these resources inside of a gigantic global perimeter.
Segmentation also has implications for user privileges and authorization. As different parts and functions of the network are separated, a natural round of user privilege auditing follows, which is an opportunity to reassert the principle of least privilege.
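A minimal sketch of what that privilege review might look like, assuming you can export accounts with their roles and scopes; the account records, field names, and threshold below are invented for illustration.

```python
# Hypothetical sketch: flag accounts whose elevated privileges span too much
# of the organization. The records and threshold are invented; a real review
# would pull from your directory or IAM exports.

accounts = [
    {"user": "alice", "role": "admin", "scope": ["emea-network"]},
    {"user": "bob",   "role": "admin", "scope": ["global"]},
    {"user": "carol", "role": "admin", "scope": ["hr-apps", "finance-apps", "emea-network", "apac-network"]},
    {"user": "dave",  "role": "user",  "scope": ["emea-network"]},
]

MAX_ADMIN_SCOPES = 2  # arbitrary threshold for "too broad"

def overly_broad_admins(records):
    """Return admin accounts with global scope or more scopes than the threshold."""
    for rec in records:
        if rec["role"] != "admin":
            continue
        if "global" in rec["scope"] or len(rec["scope"]) > MAX_ADMIN_SCOPES:
            yield rec["user"], rec["scope"]

for user, scope in overly_broad_admins(accounts):
    print(f"Review privileges for {user}: {scope}")
```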
Principle 3: Harden the edges
One of Napoleon’s military innovations was to delegate tactical decision making to the officers in frontline units, a concession forced by the size and relative chaos of his conscript army. Modern information systems have embraced this principle to varying degrees since the microcomputer revolution, and it has largely borne great fruit in terms of productivity, innovation, and flexibility. However, giving users that much agency is also a big part of why we need to assume breach, and this is particularly true for large organizations. This tension is reflected in Google’s shift in 2016 to the BeyondCorp trust model (or really lack-of-trust model). In return for greater flexibility, scalability, and creativity, the edge has turned inwards and taken on the characteristics of a fractal, with trust boundaries on all sides. This means that authentication and verification happen with the combination of user and device, not merely at the level of a network boundary.
It is for this reason that Kip Boyle, CEO of Cyber Risk Opportunities, makes the following recommendations:
“In terms of technical cyber hygiene, most of the action is at the end points controlled by individual users, so I encourage (1) removing local admin and (2) application whitelisting as the first two actions from the ‘Essential Eight’ out of the Australian Cyber Security Centre.”
In short, the contemporary approach to large environments demands rigorous endpoint hardening precisely because the edge characterizes so much of the overall environment.
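To make the user-plus-device idea above more concrete, here is a rough sketch of a per-request trust decision in that spirit. It is not Google’s BeyondCorp implementation; the attributes, sensitivity levels, and rules are invented for illustration.

```python
# Hypothetical sketch of a per-request, user-plus-device access decision:
# no network location is trusted by itself. Attributes, sensitivity levels,
# and rules below are invented for illustration.

from dataclasses import dataclass

@dataclass
class Device:
    managed: bool          # enrolled in device management
    disk_encrypted: bool
    patched: bool

@dataclass
class User:
    authenticated: bool
    mfa_passed: bool

def allow_request(user: User, device: Device, resource_sensitivity: str) -> bool:
    """Decide access from user identity and device posture, ignoring network location."""
    if not (user.authenticated and user.mfa_passed):
        return False
    if not device.managed:
        return False
    if resource_sensitivity == "high":
        return device.disk_encrypted and device.patched
    return True

# Example: an MFA'd user on a managed but unpatched laptop reaching a sensitive app.
print(allow_request(User(True, True), Device(True, True, False), "high"))  # False
```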
Principle 4: Choose flexible and integrated controls
One of the challenges in large enterprises is that it can be surprisingly difficult to predict the future. While a large organization might not change fast, the rest of the world does, which means that processes of procurement and implementation are paradoxically chaotic and rushed. There is rarely enough time for a proper risk assessment or forward-thinking architecture. For this reason, rigid, single-purpose tools are particularly limiting for larger organizations. Inflexible tools have little resilience against evolving use cases and become prohibitively expensive to scale in the context of change.
To reprise an old maxim from systems theory, if a system is to accommodate the diversity of challenges that its environment produces, it needs a repertoire of responses that is (at least) as nuanced as the problems the environment throws at it. In the case of a large organization, this means that tools need to work at scale. They must be multipurpose, customizable, and resilient to changes, not just in use case but in infrastructure and environment as well. In some cases, enterprises have even been able to coerce or cajole vendors into creating customized firmware or operating systems to reflect specific needs.
Principle 5: Observe the herd
One of our primary tactics for overcoming the exponentially greater difficulties of situational awareness is to, uh, just try harder. That means that the enterprise CISO must remain in a constant state of discovery, verification, testing, and scanning. At a certain scale the task of observation approaches that of the proverbial bridge painter—as soon as you’ve finished, it’s time to start over again.
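A minimal sketch of that paint-the-bridge-then-start-again loop, assuming you can pull a list of hosts observed by scanning and a list of hosts the asset inventory already records; the hostnames and data sources here are placeholders.

```python
# Hypothetical sketch of a recurring discovery pass: diff what scans observe
# against what the asset inventory already records. The host lists are
# placeholders; real input would come from scanners, DNS, cloud APIs, etc.

inventory_hosts = {"app01.example.com", "db01.example.com", "vpn.example.com"}
scanned_hosts   = {"app01.example.com", "db01.example.com", "legacy-ftp.example.com"}

unknown_to_inventory = scanned_hosts - inventory_hosts   # found on the network, not on the books
missing_from_scans   = inventory_hosts - scanned_hosts   # on the books, not seen on the network

print("Add to inventory and assess:", sorted(unknown_to_inventory))
print("Verify decommissioned or unreachable:", sorted(missing_from_scans))
```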
On the plus side, the principles listed above will all support this task. A simplified and segmented complex of systems with hardened edges and flexible, resilient, scalable tools will be much easier to monitor than a monolithic, chaotic, rigid network. This work will also go a long way toward preparing for an audit, and it will reduce the likelihood that an external vulnerability assessment or penetration test turns up something you didn’t even know existed.
Wrap-up
The elephant in the room in any discussion about large enterprises and risk is the question of risk appetite. While CEOs have worked hard in the last decade to close the gap, real or perceived, in innovation and flexibility between enterprises and start-ups, large organizations still have much more to lose. Even if the organization is expected to grow and create like a start-up, it is still expected to be stable and predictable. This doesn’t make the CISO’s job any easier. As always, frank conversations with stakeholders in all directions about the conditions for success will contextualize decisions for all involved and make the above principles easier to implement.
What is not in doubt, however, is the magnitude of the task at hand. Difficulties in visibility, inertia, and control mean that even achieving the basics becomes a herculean task at large scale. The principles outlined above—simplification, segmentation, hardening, flexibility, and observation—should help. The fact that we are seeing and hearing about these principles more and more from enterprise CISOs just goes to show that when it comes to information security program management, quantity really does have a quality all its own.