Network Defense and Behavioral Biases: An Experimental Study

Daniel Woods, Mustafa Abdallah, Saurabh Bagchi, Shreyas Sundaram, Timothy Cason

February 9, 2021

1 Abstract

How do people distribute defenses over a directed network attack graph, where they must defend a critical node? This question is of interest to computer scientists and information technology and security professionals. Decision-makers are often subject to behavioral biases that cause them to make sub-optimal defense decisions, which can prove especially costly if the critical node is an essential infrastructure. We posit that non-linear probability weighting is one bias that may lead to sub-optimal decision-making in this environment, and provide an experimental test. We find support for this conjecture, and also identify other empirically important forms of biases, such as naive diversification and preferences over the spatial timing of the revelation of an overall successful defense. The latter preference is related to the concept of anticipatory feelings induced by the timing of the resolution of uncertainty.

2 Introduction

Economic resources spent on securing critical infrastructure from malicious actors are substantial and increasing, with worldwide expenditure estimated to exceed $124 billion in 2019 (Gartner, 2018). Cybersecurity defense is becoming increasingly difficult, as systems are frequently connected to the outside world through the Internet, and attackers innovate many new methods of attack. The interaction of computers, networks, and physical processes (termed 'Cyber-Physical Systems', or CPS) has a wide variety of applications, such as manufacturing, transportation, medical care, power generation and water management (Lee, 2015), and has both practical and theoretical importance. Proposed CPS such as the 'Internet of Things' promise vast benefits and efficiencies, but at the cost of increased attack vectors and targets (see Alaba et al. (2017) and Humayed et al. (2017) for surveys).
To realize the potential gains that these new technologies can provide, we must understand and maximize their security.

To reduce interference with their systems, institutions allocate a security budget and hire managers responsible for minimizing the probability of successful attacks on important assets and other vital parts of the infrastructure. Such decision-makers, however, are subject to behavioral biases that can lead to sub-optimal security decisions (Abdallah et al. (2019b), Abdallah et al. (2019a), Acquisti and Grossklags (2007)). Human decision-makers can exhibit many possible biases. The security decisions they face broadly involve probabilistic assessments across multiple assets and attack vectors, many featuring low individual likelihood. We therefore focus ex ante on the possibility that people incorrectly weight the actual probability of attack and defense (Tversky and Kahneman, 1992). We find ex post that people also exhibit locational and spreading biases in their defense resource allocations, due to the directional and compartmentalized nature of these systems. Given the immense size of global expenditures on cybersecurity, as well as successful attacks being potentially very damaging, it is important to understand the nature and magnitude of any biases that can lead to sub-optimal security decisions. Such insights on biases can then be applied by security professionals to reduce their impact.

This research was supported by grant CNS-1718637 from the National Science Foundation. Daniel Woods and Timothy Cason are with the Economics Department in the Krannert School of Management at Purdue University. Email: {woods104, cason}. Mustafa Abdallah, Saurabh Bagchi, and Shreyas Sundaram are with the School of Electrical and Computer Engineering at Purdue University. Email: {abdalla0, sbagchi, sundara2}. We thank the editor, two anonymous referees, and participants at the Economic Science Association and Jordan-Wabash conferences for valuable comments.

We focus on human biases because infrastructure security decisions have not yet been given over to algorithmic tools; they are still mostly made by human security managers (Paté-Cornell et al., 2018). Adoption of automated tools is stymied by legacy components in these interconnected systems, so instead managers use threat assessment tools that return the likely probability that individual components of the infrastructure will be breached (Jauhar et al., 2015). These probabilities must be interpreted by the human manager, which motivates our initial emphasis on non-linear probability weighting. Evidence also exists that security experts ignore more accurate algorithmic advice when available and instead rely more on their own expertise (Logg et al., 2019).

We model a security manager's problem as allocating his budget over edges in a directed attack graph, with the nodes representing various subsystems or components of the overall CPS. An example of a directed attack graph is shown in Figure 1. The manager's goal is to prevent an attacker who starts at the red node on the left from reaching the critical green node on the far right. The inter-connectivity of different systems is represented by connections between nodes, and alternative paths to a given node represent different methods of attack. Allocating more of the security budget to a given edge increases the probability that an attack through that edge will be stopped. Such an 'interdependency attack graph' model is considered an appropriate abstraction of the decision environment a security professional faces in large-scale networked systems.1 The probability of successful defense along an edge is weighted according to the manager's probability weighting function. We use the common Prelec (1998) probability weighting function, but similar comparative statics can be obtained with any 'inverse S-shaped' weighting function.
We assume the attacker is sophisticated, observes the manager's allocation decision, and does not mis-weight probabilities. This reflects a 'worst-case' approach to security (discussed further in Section 3.1), and represents a necessary first step in investigating the impact of probability weighting and other biases on security expenditures.

The manager's mis-weighting of probabilities can cause investment decisions to substantially diverge from optimal decisions based on objectively correct true probabilities, depending on the network structure and the security production function. The security production function maps defense resources allocated to an edge to the probability that an attack along that edge will be stopped. Empirical evidence has shown probability weighting to be relatively non-linear at the aggregate subject level (Bleichrodt and Pinto, 2000), so the impact on security decisions could be substantial. Probability weighting is also heterogeneous across individuals (Tanaka et al. (2010), Bruhin et al. (2010)). Therefore, if probability weighting affects choices in this environment, individuals should exhibit heterogeneity in their sub-optimal security decisions.

Figure 1: Example Directed Network Attack Graph (edges x1, x2, x3, x4, x5)

We seek to address the following research questions:

Question 1: What is the effect of probability weighting on security investments over a directed network graph?

Question 2: Is probability weighting an empirically relevant factor in human security decision-making?

Question 3: What other behavioral biases significantly affect decision-making in this environment?

To address Question 1, we numerically solve the security manager's problem described above. In practical situations the relationship between investment spending and reductions in the probability of an attack is far from explicit to an outside observer. Moreover, investigations of successful breaches are often not revealed until months or years later.
Furthermore, information on security investments is highly confidential for obvious reasons, making it difficult or impossible to obtain directly from firms. We therefore conduct an incentivized laboratory experiment to address Questions 2 and 3. We employ networks that cleanly identify the impact of non-linear probability weighting on security investment decisions, and the generated data also reveal other behavioral biases that exist in this environment.

1 A non-exhaustive list of research considering the attack graph model from the Computer Security literature includes Sheyner and Wing (2003), Nguyen et al. (2010), Xie et al. (2010), Homer et al. (2013), and Hota et al. (2018). The length of this list and the ease with which it could be extended is indicative of the prominence that this literature places on the attack graph model.

Our experiment elicits separate measures of probability weighting outside the network defense problem to help address Question 2. One measure uses binary choices between lotteries, which is relatively standard, and elicits probability weighting while controlling for the confound of utility curvature. The other measure is novel, and uses a network path framing similar to the network defense environment. This new measure reduces procedural variance relative to the main network defense task. It also exploits the irrelevance of utility curvature when there are only two outcomes to focus solely on probability weighting.

We find that the network-framed measure of non-linear probability weighting is statistically significantly correlated with all the network defense allocation situations we consider. However, this correlation exists even in cases where probability weighting should have no impact. This suggests that subjects may exhibit limited sophistication beyond probability weighting alone. We therefore conduct a cluster analysis to identify heterogeneous patterns of behavior not predicted by probability weighting. This identifies additional behavioral biases. The first is a form of 'naive diversification' (Benartzi and Thaler, 2001), where subjects have a tendency toward allocating their security budget evenly across the edges. The second is a preference for stopping the attacker earlier or later along the attack path. Stopping an attack earlier can be seen as reducing the anticipatory emotion of 'dread' (Loewenstein, 1987), while stopping it later can be seen as delaying the revelation of potentially bad news (e.g., see Caplin and Leahy (2004) for a strategic environment). Accounting for these additional biases, we continue to find some evidence that non-linear probability weighting influences subject behavior, as well as strong evidence for the additional biases.
In our environment the additional biases seem especially naive, as edges are not different options with benefits beyond defending the critical node, and information on the attacker's progress is not presented to the subjects sequentially. These inconsistencies possibly reflect a subject's own mental model (e.g., of how an attack proceeds), but should be accounted for in future directed network decision environments.

This paper contributes to the theoretical literature on attack and defense games over networks of targets, most of which can be related to computer network security in some fashion.2 Our attack graph environment is rather flexible, and can represent some of the strategic tensions present in alternative network environments. Instead of focusing on attack graph representations of these other environments (which can be quite complex), we utilize more parsimonious networks in order to specifically parse out the effect of probability weighting. We have the 'security manager' play against a sophisticated computerized attacker who moves after observing the manager's allocation. Playing against a computer dampens socially related behavioral preferences.3 It also removes the need for defenders to form beliefs about the attacker's probability weighting. This allows us to more cleanly identify the empirical relevance of non-linear probability weighting in this spatial network defense environment. If probability weighting is important empirically, then future research should incorporate it into models to better understand the decisions of real-world decision-makers.

This paper also contributes to the experimental literature on attack and defense games in network environments.4 One set of related experimental studies tests 'Network Disruption' environments. McBride and Hewitt (2013) consider a problem where an attacker must select a node to remove from a partially obscured network, with the goal to remove as many edges as possible. Djawadi et al.
(2019) consider an environment where the defender must both design the network structure and allocate defenses to nodes, with the goal of maintaining a network where all nodes are linked after an attack. Hoyer and Rosenkranz (2018) consider a similar but decentralized problem where each node is represented by a different player. Our environment differs from these Network Disruption games in that we consider a directed attack graph network, i.e., the attacker must pass through the network to reach the critical node rather than remove a node to disrupt the network. Other related experimental papers include 'multi-battlefield' attack and defense games, such as Deck and Sheremeta (2012), Chowdhury et al. (2013) and Kovenock et al. (2019). The most closely related of these papers is Chowdhury et al. (2016), who find experimental evidence for the bias of salience in a multi-battlefield contest, which induces sub-optimal allocations across battlefields. We are the first to investigate empirically the bias of probability weighting in networks and attack and defense games.

2 A non-exhaustive list of related theory papers includes Clark and Konrad (2007), Acemoglu et al. (2016), Dziubiński and Goyal (2013), Goyal and Vigier (2014), Dziubiński and Goyal (2017), Kovenock and Roberson (2018), and Bloch et al. (2020).

3 Sheremeta (2019) posits that inequality aversion, spite, regret aversion, guilt aversion, loss aversion (see also Chowdhury (2019)), overconfidence and other emotional responses could all be important factors in (non-networked) attack and defense games. Preferences and biases have not received substantial attention in the experimental or theoretical literature on these games, although it should be noted that Chowdhury et al. (2013) and Kovenock et al. (2019) both find that utility curvature does not appear to be an important factor in multi-target attack and defense games.

4 See Kosfeld (2004) for a survey of network experiments more generally.

3 Theory and Hypotheses

3.1 Attacker Model

In order to describe the security manager's (henceforth defender) problem, it is necessary to describe and justify the assumptions we make about the nature of the attacker he faces. As our focus is on network defense by humans, in our main experimental task we automate the role of the attacker and describe their decision process to a human defender. We assume that the attacker observes the defender's decision, has some fixed capability of attack, and linearly weights probabilities. While these assumptions may seem strong, they are consistent with a 'worst-case' approach, the motivation of which we now describe.

Due to the increasing inter-connectivity of cyber-physical systems to the outside world (e.g., through the internet), a defender faces a wide variety of possible attackers who can differ substantially in their resources, abilities and methods. The defender could undertake the challenging exercise of considering the attributes of all possible attackers, but this would involve many assumptions that the defender might get wrong. Instead, we assume that the defender takes a worst-case approach and defends against a sophisticated attacker, so that he can achieve a certain level of defense regardless of what type of attacker eventuates. The sophisticated attacker can be interpreted as the aggregate of all attackers perfectly colluding. They may also have the ability to observe the defender's decision, either through a period of monitoring or by using informants. Taking a worst-case approach is common in the security resource allocation literature (e.g., Yang et al. (2011), Nikoofal and Zhuang (2012), and Fielder et al. (2014)), as is the assumption that the attacker observes the defender's allocation.5

3.2 Defender Model

The defender faces a network consisting of J total paths from the start node to the critical node, with each edge belonging to one or more of the J paths.
The defender's security decision is to allocate a security budget of B ∈ R>0 units across the edges; this is represented by a vector x with N elements, where N is the number of edges. The edge defense function p(xi) is a production technology that transforms the number of units allocated to edge i (denoted by xi) into the probability of stopping an attack (from the worst-case attacker) as it passes along edge i. We assume the defender has probability weighting from the one-parameter model described in Prelec (1998), i.e., w(p(xi); α) = exp[−(−log(p(xi)))^α] with α ∈ (0, 1], although our findings hold with other 'inverse-S' shaped weighting functions (e.g., Tversky and Kahneman (1992)). For ease of notation we will frequently shorten w(p(xi); α) to w(p) or w(p(xi)).

The defender gains a payoff of 1 if the critical node is not breached by the attacker, and a payoff of 0 if the attacker breaches the critical node. As the attacker observes the defender's allocation and chooses the objectively most vulnerable path (i.e., the attacker has α = 1), the attacker's action directly follows from a given allocation. However, the defender's non-linear weighting of probabilities (α < 1) may cause him to have a different perception about which paths are the most vulnerable. Thus, the defender thinks the attacker will choose the path with the lowest perceived probability of successful defense (from the defender's perspective, in accordance with his probability weighting parameter). The defender's goal is to maximize his perceived probability of successfully defending the critical node, which is determined by his weakest perceived path. The defender's optimization problem depends on the network structure, edge allocations, edge defense function p(xi), and his probability weighting parameter α.
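To make the weighting function concrete, the following minimal sketch (our own illustration; the function name and the sample values of α are ours, not from the paper) evaluates the one-parameter Prelec form and its inverse-S behavior:

```python
import math

def prelec(p, alpha):
    """One-parameter Prelec (1998) weighting: w(p) = exp(-(-ln p)^alpha)."""
    return math.exp(-(-math.log(p)) ** alpha)

# For alpha < 1 the function is inverse-S shaped: small probabilities are
# over-weighted and large probabilities are under-weighted, with a fixed
# point at p = 1/e where w(p) = p for every alpha.
print(prelec(0.05, 0.6))  # over-weighted: roughly 0.14 > 0.05
print(prelec(0.90, 0.6))  # under-weighted: roughly 0.77 < 0.90
```

At α = 1 the function collapses to w(p) = p, which is why the attacker's objective uses unweighted probabilities.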
We denote the defender's overall perceived probability of defense along path j as fj(x; α).

An attacker passing along an edge to reach a specific node is a separate and independent event from all other edges.6 We assume the defender applies his weighting function to each probability individually before calculating the probability of overall defense along a path. The defender ranks the event of stopping an attack along a given edge higher than the event of an attack proceeding. Therefore, in accordance with Rank Dependent Utility (RDU) (Quiggin, 1982) and Cumulative Prospect Theory (CPT)

5 For example, Bier et al. (2007), Modelo-Howard et al. (2008), Dighe et al. (2009), An et al. (2013), Hota et al. (2016), Nithyanand et al. (2016), Guan et al. (2017), Wu et al. (2018), and Leibowitz et al. (2019).

6 The events are independent as each edge represents a unique layer of security that is unaffected by the events in other edges/layers of security. Breaches of other layers of security can affect whether a specific layer is encountered, but they do not change the probability that that layer is compromised.

(Tversky and Kahneman, 1992), he applies his weighting function to the probability of stopping an attack along an edge (w(p)), and considers the other event (the attack proceeding) to have a probability of 1 − w(p). Therefore, a path j with three edges has an overall perceived probability of defense of fj(x; α) = w(p(x1)) + [1 − w(p(x1))] · [w(p(x2)) + (1 − w(p(x2)))w(p(x3))].7 The defender's constrained objective problem is presented in Equation 1.

argmax_x min{f1(x; α), f2(x; α), . . . , fJ(x; α)}
s.t. xi ≥ 0, i = 1, 2, . . . , N
Σ_{i=1}^{N} xi = B    (1)

We now consider the impact that non-linear probability weighting by a defender has on various network structures and defense production functions. We analyze the situation in a general setting, before considering the experimental design that we implement in the laboratory.

3.3 Common Edges

The described objective in Equation 1 is a straightforward constrained optimization problem. Unfortunately, the problem is analytically intractable and no closed-form solution exists. Consider our first type of network structure, presented in Figure 1. The key feature of this network is that one of the edges is common to both paths, while the other edges belong only to the top or bottom path. We denote x3 = y, and assume that v = x1 = x2 = x4 = x5, an edge defense function of p(xi) = 1 − e^(−xi/z) (where z is some normalization parameter), and that v ≥ 0, y ≥ 0. Even with these simplifications and assumptions, taking the first-order conditions of the associated Lagrangian yields a set of equations that is intractable to solve for a closed-form solution for either y or v.8 Fortunately, it is possible to numerically solve the defender's optimization problem. For example (and anticipating our experimental design), when z = 18.2, B = 24 and α = 0.6, the optimal allocation is v = x1 = x2 = x4 = x5 = 1.26 and y = x3 = 18.96.
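Under the symmetry v = x1 = x2 = x4 = x5 and y = x3, the fold-back expression for either (identical) path simplifies algebraically to 1 − (1 − w(p(v)))²(1 − w(p(y))), so the problem reduces to a one-dimensional search over v. The following sketch (our own code; the function names are illustrative, and a plain grid search stands in for the paper's numerical procedure) reproduces the example just given:

```python
import numpy as np

def prelec(p, alpha):
    """Prelec (1998) one-parameter weighting w(p) = exp(-(-ln p)^alpha)."""
    p = np.clip(p, 1e-300, 1.0)
    return np.exp(-(-np.log(p)) ** alpha)

def perceived_defense(v, y, alpha, z=18.2):
    """Perceived defense of a path with edges (v, v, y).

    Folding back the weighted edge probabilities gives
    f = w_v + (1 - w_v) * [w_v + (1 - w_v) * w_y]
      = 1 - (1 - w_v)^2 * (1 - w_y).
    """
    w_v = prelec(1.0 - np.exp(-v / z), alpha)
    w_y = prelec(1.0 - np.exp(-y / z), alpha)
    return 1.0 - (1.0 - w_v) ** 2 * (1.0 - w_y)

def solve(alpha, B=24.0, z=18.2, n=240001):
    """Grid search over v in [0, B/4]; the common edge gets y = B - 4v."""
    v = np.linspace(0.0, B / 4.0, n)
    f = perceived_defense(v, B - 4.0 * v, alpha, z)
    best = np.argmax(f)
    return v[best], B - 4.0 * v[best]
```

With α = 0.6 this returns v ≈ 1.26 and y ≈ 18.96, matching the allocation stated above, while α = 1 pushes the entire budget onto the common edge.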
Appendix 7.1 provides more analysis on how the numerical solution is calculated and whether the solution is unique.

The main trade-off in this type of network is between allocating to edges that are common to both paths and allocating to edges that are only on one path. Consider taking a small amount from the common edge x3 and placing it on a non-common edge. Placing it only on one edge is non-optimal for any α, as the sophisticated attacker will attack the weaker path, meaning it should be split across paths. This need to split over paths reduces the marginal impact of units allocated to the non-common edges on the overall probability of defense, making them relatively less attractive compared to the common edge. However, with non-linear probability weighting (α < 1), small probabilities are over-weighted, i.e., perceived to be higher than their actual probabilities. This increases the perceived impact of units placed on non-common edges, and can exceed the loss of having to split the allocation across more than one path. This makes expenditures on non-common edges more likely for those with non-linear probability weighting.

We can confirm this intuition numerically for a variety of edge defense functions. We mainly consider concave functions in our experiment, which have a natural interpretation of diminishing marginal returns of production.9 In particular, consider the edge defense function from before (p(xi) = 1 − e^(−xi/z)). Figure 2 plots the optimal amount to allocate to the common edge for different values of z and different levels of probability weighting α. At α = 1 the optimal allocation is to place all B = 24 units on the common edge. A defender with α = 1 will always place all of his units on the common edge for the exponential family

7 This approach is similar to the concept of 'folding back' sequential prospects, as described in Epper and Fehr-Duda (2018) with regards to 'process dependence'.
The alternative (i.e., fj(x; α) = w(p(x1) + [1 − p(x1)] · [p(x2) + (1 − p(x2))p(x3)])) does not yield interesting comparative statics in α due to the monotonicity of the probability weighting function, so we do not consider it further.

8 Weighting the probability of a successful attack along an edge instead is analytically tractable, as terms conveniently cancel, as shown in Abdallah et al. (2019b). However, this would be inconsistent with how events are ranked and weights are applied in RDU and CPT. Despite the lack of symmetry in the one-parameter Prelec weighting function, the qualitative comparative statics presented in Abdallah et al. (2019b) have been numerically confirmed to hold in the current environment.

9 Concavity and diminishing marginal returns is a common assumption in the computer security literature (e.g., Pal and Golubchik (2010), Boche et al. (2011), Sun et al. (2018), Feng et al. (2020)).

of edge defense functions (Abdallah et al., 2019b). As α decreases, i.e., as the defender exhibits increasing levels of non-linear probability weighting, he places fewer units on the common edge (and more units on the non-common edges).

Figure 2: Allocation to Common Edge for p(xi) = 1 − e^(−xi/z)

Consider next a non-exponential edge defense function p(xi) = (xi/z)^b, where z is again a normalization factor and b ∈ (0, ∞). If b < 1, this function is concave; if b = 1 it is linear; and if b > 1 it is convex. Figure 3 illustrates that regardless of the convexity of the edge defense function, the amount allocated to the common edge decreases as α decreases from 1. Note also that for concave functions of this form, it is no longer optimal for α = 1 defenders to place all of their allocation on the common edge. This is because the slope of the edge defense function for small values is sufficiently steeper than the slope of the function when all units are allocated to one edge. To see this, consider some p(xi) and denote the number of units allocated to the non-common edges as v, and the number of units allocated to the common edge as y. Denoting the overall probability of a successful defense as F(v, y), then: F(v, y) = p(v/4) + (1 − p(v/4))(p(v/4) + (1 − p(v/4))p(y)). Taking the first-order conditions: ∂F(v, y)/∂v = (1/2)p′(v/4)[1 − p(v/4) − p(y) + p(v/4)p(y)] and ∂F(v, y)/∂y = p′(y)[1 − 2p(v/4) + p(v/4)²]. At the boundary solution corresponding to v = 0 and y = B, if p(0) = 0 the above expressions show that allocating all units to the common edge is optimal if p′(0)(1 − p(B)) ≤ 2p′(B), i.e., the marginal return to placing another unit on y exceeds that of v at the boundary. It follows that if the slope is sufficiently steep for small v's (i.e., p′(0) > 2p′(B)/(1 − p(B))), then an α = 1 defender will allocate a strictly positive amount to non-common edges.10

These observations lead to our first testable hypotheses:

Hypothesis 1 The amount allocated to common edges (weakly) decreases as α decreases from 1.

Hypothesis 2 If p′(0) > 2p′(B)/(1 − p(B)) (such as for a concave power function), then a decision-maker with linear probability weighting (α = 1) will allocate a strictly positive amount to non-common edges.

10 Any α ∈ (0, 1] defender is making a similar trade-off of ∂F(v, y)/∂v against ∂F(v, y)/∂y, either equating them if the solution is interior, or allocating to whichever is greater at the boundary. We do not present these first-order conditions here as they are not as succinct due to the presence of w(p; α), although we do report the first-order condition in Appendix 7.1. Where exactly the trade-off is resolved depends on α as well as the specific functional form of p(xi). This is why the optimal allocation differs over α for a given p(xi), as well as over different p(xi) for a given α. Both patterns are displayed in Figures 2 and 3.
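The boundary condition behind Hypothesis 2 is easy to check numerically for the two families of edge defense functions considered above (a sketch with our own helper names; the derivative at zero is approximated at a small ε for the power function, whose slope is unbounded there):

```python
import math

def corner_optimal(p, dp, B, eps=1e-9):
    """Hypothesis 2 boundary check for an alpha = 1 defender.

    Placing all B units on the common edge is optimal iff
    p'(0) * (1 - p(B)) <= 2 * p'(B).
    """
    return dp(eps) * (1.0 - p(B)) <= 2.0 * dp(B)

B = 24.0

# Exponential family, p(x) = 1 - exp(-x/z): the corner is always optimal,
# since p'(0)(1 - p(B)) = p'(B) and the condition reduces to 1 <= 2.
z = 18.2
p_exp = lambda x: 1.0 - math.exp(-x / z)
dp_exp = lambda x: math.exp(-x / z) / z
print(corner_optimal(p_exp, dp_exp, B))   # True

# Concave power family, p(x) = (x/z)^b with b < 1: p'(0) is unbounded, so
# even a linear-weighting defender spreads some budget to non-common edges.
zp, b = 70.0, 0.4
p_pow = lambda x: (x / zp) ** b
dp_pow = lambda x: b * x ** (b - 1.0) / zp ** b
print(corner_optimal(p_pow, dp_pow, B))   # False
```

This mirrors the contrast drawn in the experimental design below: exponential edge defense functions predict a corner allocation at α = 1, while the concave power function predicts an interior one.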

Figure 3: Allocation to Common Edge for p(xi) = (xi/z)^b

We now present the three color-coded networks from our experiment that are designed to explore these two hypotheses.

3.3.1 Network Red

Network Red employs the network structure presented earlier in Figure 1, and has an edge defense function of p(xi) = 1 − e^(−xi/18.2).11 According to Hypothesis 1, a defender with α < 1 will place less than 24 units on the common edge, and the amount placed on the common edge is decreasing as α decreases from 1. For example, a defender with α = 0.5 will allocate x3 = 17.36 and x1 = x2 = x4 = x5 = 1.66, while other α's are displayed graphically in Figure 2 by the line associated with z = 18.2.12 According to Hypothesis 2, a defender with α = 1 would allocate x3 = 24 and x1 = x2 = x4 = x5 = 0.

3.3.2 Network Orange

Network Orange also takes place on the network shown in Figure 1, but differs in having an edge defense function of p(xi) = 1 − e^(−xi/31.1). The prediction for a defender with α = 1 remains unchanged from Network Red. Because p(xi) ≤ 0.46 for xi ∈ [0, 24], edge allocations in Network Orange mostly result in probabilities that a defender with α < 1 will overweight. Therefore, the predictions for a defender with a particular value of α < 1 will differ from Network Red. For example, a defender with α = 0.5 will now allocate x3 = 14.92 and x1 = x2 = x4 = x5 = 2.27. The prediction for other α's is displayed in Figure 2 on the line associated with z = 31.1. The change in the edge defense function increases the separation of behavior between moderate to high levels of non-linear probability weighting, increasing our ability to detect differences between α types.

3.3.3 Network Yellow

Network Yellow also takes place on the network shown in Figure 1. The edge defense function is now of a different concave functional form, p(xi) = (xi/70)^0.4. Unlike Networks Red and Orange, it is now optimal for a
Unlike Networks Red and Orange, it is now optimal for a11 The normalization factor z 18.2 was chosen such that 1 unit allocated to an edge would yield a commonly overweightedprobability (p 0.05), while 24 units allocated to an edge would yield a commonly underweighted probability (p 0.73).12 These numerical solutions are continuous, although subjects were restricted to discrete (integer-valued) allocations.7

non-behavioral defender to allocate units to the non-common edges, in accordance with Hypothesis 2. In particular, a defender with α = 1 will allocate x3 = 15.64 and x1 = x2 = x4 = x5 = 2.09, while a defender with α = 0.5 will allocate x3 = 12.68 and x1 = x2 = x4 = x5 = 2.83. Predictions for other α's are presented in Figure 3, on the line associated with z = 70, b = 0.4.

Networks Red, Orange, and Yellow are jointly designed to test Hypotheses 1 and 2. In all three of these networks, the amount allocated to the common edge should decrease as α decreases, according to Hypothesis 1. In Networks Red and Orange, Hypothesis 2 predicts that those with α = 1 should place all 24 units on the common edge, while in Network Yellow, Hypothesis 2 predicts those with α = 1 should place less than 24 units on the common edge.

3.4 Extraneous Edges
