What Cybersecurity Can Learn from UX Design
- by nlqip
If we think about vulnerabilities in this way, as a matter of action signaling, then malicious actors are, in their own malicious way, members of our audience. Applications are engineered to function, but they are designed to signal. The specific ways we design apps tell our audience how we expect them to act. When we release applications with vulnerabilities, we are also inadvertently telling this other subset of our audience how they can interact with our application. The problem is that we haven't yet recognized that shipping a vulnerability sends signals about potential actions, and that we must revise our expectations of how our audience will act accordingly.
This might all sound completely obvious to anyone who's worked in security for any length of time. But we consistently see applications built (or, increasingly, hastily stapled together) on the assumption that everyone who interacts with the app is going to (1) play nice and (2) stay at the user interface level. We consistently acknowledge only the affordances we want to believe in. This means that we are constantly surprised when attackers go beneath the user interface and inspect applications at deeper layers, such as the source code or the HTTP and TCP/IP protocols.
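To make that concrete, here is a minimal sketch, in plain Python, of what inspecting an application below the user interface can look like. The target URL is a placeholder, and the header names it checks are a few common examples rather than a complete survey:

```python
# A sketch of "reading below the user interface": the same request a browser
# makes, inspected for the signals the rendered page never shows.
import urllib.request

# Headers that commonly advertise the stack (server, framework, version).
# These are illustrative examples, not an exhaustive list.
REVEALING_HEADERS = ["Server", "X-Powered-By", "X-AspNet-Version"]

def inspect_below_the_ui(url: str) -> None:
    with urllib.request.urlopen(url) as response:
        # Response headers are affordances no end user ever sees.
        for name in REVEALING_HEADERS:
            value = response.headers.get(name)
            if value:
                print(f"{name}: {value}")
        # HTML comments are another layer: notes meant for developers
        # are signals to this other audience too.
        body = response.read().decode("utf-8", errors="replace")
        for line in body.splitlines():
            if "<!--" in line:
                print(line.strip())

inspect_below_the_ui("https://example.com/")  # placeholder target
```

Everything this script prints is part of the application's affordance space, yet none of it is visible in the page a legitimate user sees.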
Threat Landscape … or Audience?
The point is that attackers are also members of our audience, and the entire application stack, from bare metal to the most inconsequential plugin, is the affordance space. Since we design an application with a focus on signaling potential actions to users, shouldn’t we recognize that a certain type of audience member perceives, and acts on, a different set of signals? We don’t go to the meanest, filthiest honky-tonk bar around and get surprised when someone throws a bottle. We should not be surprised that a certain kind of audience member always shows up on our networks when those networks are connected to the Internet.
There is, of course, a population that specializes in this perceptual problem: penetration testers. If we think of vulnerabilities as affordances, then penetration testers are actually a sort of design focus group. This niche specializes in recognizing potentials for action against any surface, at every level of abstraction within an application, and communicating that affordance space back to application owners in an actionable way.
My design courses also emphasized the importance of consulting user experience and design specialists as early in the process as possible, just as security does. True penetration testing needs a finished, functioning, and fully integrated application to cover all of its bases. However, thinking about vulnerabilities as partly a signaling problem implies that we need to consider the potential for malicious human behavior from the earliest stages of business logic planning. This is the practice known as threat modeling, and thinking about it in terms of affordances would, I believe, help coordinate the process between the many different kinds of experts responsible for building complex applications.
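As an illustration of what threat modeling in terms of affordances might look like, here is a sketch in Python. The schema and the example entries are hypothetical, invented for this post rather than drawn from any standard methodology:

```python
# A sketch of recording threats as affordances: each entry pairs a signal the
# application emits with the action it invites and the design decision that
# withdraws the invitation. The fields and entries are illustrative only.
from dataclasses import dataclass

@dataclass
class Affordance:
    component: str       # where in the stack the signal lives
    signal: str          # what the application exposes
    invited_action: str  # what that signal lets an attacker attempt
    mitigation: str      # the design decision that removes the signal

model = [
    Affordance("login form", "different errors for bad user vs. bad password",
               "username enumeration", "return one generic failure message"),
    Affordance("HTTP layer", "Server header with exact version string",
               "targeted exploit selection", "strip or genericize the header"),
    Affordance("API", "sequential numeric object IDs",
               "ID guessing / object-reference probing", "use unguessable identifiers"),
]

for entry in model:
    print(f"[{entry.component}] {entry.signal} -> {entry.invited_action}")
```

The value of a structure like this is that designers, developers, and security specialists can all read it: each row names a signal, the action it affords, and the design choice that withdraws the invitation.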
Another ramification is that we should be thinking of vulnerabilities as action potentials (that is, impact) first, and coding flaws or misconfigurations second. Of course, we need to know the exact nature of a flaw—which line in which file—to fix the problem. However, fixating on the problem's location and how it got there can also obscure the actions it lets an attacker take, which is how three low-risk vulnerabilities (say, a verbose error page, predictable session tokens, and an unthrottled login form) combine into one high-risk attack vector. Similarly, while verbose error messages or code comments are useful for debugging during development, they inadvertently signal more than is necessary when they make it into production code.
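Here is a minimal sketch of that production-versus-development distinction, assuming a simple handler and a DEBUG flag; both are illustrative rather than tied to any particular framework:

```python
# A sketch of keeping debugging detail out of the signals production emits:
# the full traceback goes to the defenders' logs, while the client sees only
# a generic message. DEBUG and the handler shape are placeholders.
import logging
import traceback

DEBUG = False  # flipped on only in development
logger = logging.getLogger("app")

def handle_request(work) -> dict:
    try:
        return {"status": 200, "body": work()}
    except Exception as exc:
        # Full detail goes where defenders can read it...
        logger.error("request failed: %s", traceback.format_exc())
        if DEBUG:
            # ...and to the client only during development.
            return {"status": 500, "body": f"{type(exc).__name__}: {exc}"}
        # In production, the error signals nothing about the internals.
        return {"status": 500, "body": "Something went wrong."}

print(handle_request(lambda: 1 / 0))
```

The traceback still exists, but it flows to logs the defenders control rather than into the signals the application broadcasts to its whole audience.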
The perception aspect also raises the question of deception. Deception is a defense principle that cybersecurity practitioners have been experimenting with for decades, from honeypots and honeytokens to network tarpits. Most deception capabilities that I know of remain tactical in scope, but by considering deception in the design phase, we could sow confusion below the user interface level far deeper and more broadly than we currently can. App defenders have been dreaming of turning their applications into labyrinths for years—taking a designer's approach to everything, not just the user interface, would make apps into an attacker's nightmare.
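For a taste of the tactical end of that spectrum, here is a minimal honeytoken sketch; the key values and the alerting hook are placeholders for whatever credential store and monitoring a real deployment would use:

```python
# A minimal honeytoken sketch: a decoy API key that no legitimate code path
# ever uses, so any request presenting it is an attacker announcing
# themselves. All values here are placeholders.
import logging

logging.basicConfig(level=logging.WARNING)

HONEYTOKEN = "sk_live_deadbeef0000"   # planted where an attacker might find it
VALID_KEYS = {"sk_live_realkey1234"}  # stand-in for real key storage

def authenticate(api_key: str, source_ip: str) -> bool:
    if api_key == HONEYTOKEN:
        # The decoy key affords nothing except detection.
        logging.warning("honeytoken used from %s -- likely intrusion", source_ip)
        return False
    return api_key in VALID_KEYS

authenticate("sk_live_deadbeef0000", "203.0.113.7")  # triggers the alert
```

The decoy key affords the attacker nothing except self-identification, which is exactly the kind of affordance we should be designing on purpose.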
All of these ideas—deception, threat modeling, impact analysis—have been around for years. But by thinking in terms of affordances, specifically in terms of attacker perception, we can start to see malevolent and benevolent user experiences as two sides of the same coin. This lets us treat vulnerability management, availability, and application architecture as linked aspects of the same essential problem, which is managing risk while connected to the largest, filthiest, meanest dive bar of them all.