Multidisciplinary University Research Initiative. University of Arizona, Tucson.


thrust areas


Connections between Mathematical and Behavioral Decision-Making Models


We will search for systematic and replicable patterns of behavior in an attempt to formulate descriptive models that are psychologically interpretable, have potential practical implications, and can better account for human decision behavior. These studies range from theoretical work to experimental tests of new decision-making theories with human subjects. In most cases, we collect data by presenting financially motivated subjects, whose payoffs are contingent on their performance, with decision scenarios that simulate or otherwise capture major ingredients of real decision-making situations. Go to Abstracts.
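One common instantiation of such a scenario, sketched here with hypothetical amounts and probabilities (not taken from the project's experiments), is a choice between a sure payment and a risky gamble, with the subject paid according to the option actually chosen:

```python
import random

# Hypothetical trial: the subject chooses between a sure amount and a risky
# gamble and is paid according to the option actually chosen, so the payoff
# is contingent on performance. All amounts and probabilities here are
# illustrative assumptions.
def play_trial(choice, sure_amount=3.0, gamble=(8.0, 0.35), rng=None):
    """Return the monetary payoff for one trial.

    choice: "sure" or "gamble"; gamble is (prize, win probability).
    """
    rng = rng or random.Random()
    if choice == "sure":
        return sure_amount
    prize, p = gamble
    return prize if rng.random() < p else 0.0

# Risk-neutral benchmark the experimenter can compare behavior against:
ev_gamble = 8.0 * 0.35   # 2.8 < 3.0, so a risk-neutral subject takes "sure"
```

Observed deviations from the risk-neutral benchmark (for example, systematic preference for the gamble) are the kind of replicable pattern a descriptive model would need to account for.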


Modeling and exploiting decision-making weaknesses in enemy behavior



One form of complexity in decision-making arises in game-theoretic situations.  We will analyze human decision-making traits that arise in both cooperative and non-cooperative games, including those situations that deal with minimizing the most damaging enemy attack.  While some theory on optimal behavior exists (e.g., Nash equilibrium models), there are several contemporary problems for which current methods are inadequate. 
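As an illustration of the equilibrium analysis mentioned above, the classic "Battle of the Sexes" coordination game (used here purely as a textbook example, with its standard payoffs) can be checked for pure-strategy Nash equilibria in a few lines:

```python
# A minimal sketch of pure-strategy Nash equilibrium checking in a 2x2
# bimatrix game. Payoffs are the textbook "Battle of the Sexes" values,
# used only as an illustration.
A = [[2, 0], [0, 1]]   # row player's payoffs
B = [[1, 0], [0, 2]]   # column player's payoffs

def pure_nash(A, B):
    """Return all (row, col) pairs where neither player gains by deviating."""
    eq = []
    for i in range(2):
        for j in range(2):
            row_best = all(A[i][j] >= A[k][j] for k in range(2))
            col_best = all(B[i][j] >= B[i][k] for k in range(2))
            if row_best and col_best:
                eq.append((i, j))
    return eq

# Battle of the Sexes has two pure equilibria, one per coordination outcome:
# pure_nash(A, B) -> [(0, 0), (1, 1)]
```

The existence of multiple equilibria is exactly the kind of situation where the theory of optimal behavior gives no unique prediction, and observed human play becomes the deciding evidence.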

While some of these applications arise naturally in economic scenarios, others take place in complex “Stackelberg game” scenarios (leader/follower games). We will examine a set of network interdiction problems in which an enemy first destroys a set of arcs, after which flow is routed optimally along the arcs that remain operational. The “enemy” action can be interpreted either literally or as a “worst-case scenario” agent. A more difficult problem asks how a network can be constructed to minimize the maximum damage an enemy can inflict. However, such studies traditionally assume the optimality of decision-making actions, when in fact humans must solve such problems by heuristic rules of thumb. The refined version of our problem thus becomes how to build a network that exploits the suboptimal decision-making tendencies of humans. We will also investigate the implications of our research on actual human subjects: Is it more effective in practice to design networks against a rational enemy, or to exploit models of suboptimal behavior? How do our planning models change in cooperative scenarios as opposed to non-cooperative (or outright hostile) ones? Go to Abstracts.
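A toy version of the interdiction problem described above can be sketched as follows. The network, capacities, and attack budget are illustrative assumptions, and the brute-force enumeration stands in for the exact or heuristic interdiction methods a real instance would require:

```python
from itertools import combinations
from collections import deque

def max_flow(n, arcs, s, t):
    """Edmonds-Karp max flow; arcs is a dict {(u, v): capacity}."""
    cap, adj = {}, [[] for _ in range(n)]
    for (u, v), c in arcs.items():
        cap[(u, v)] = cap.get((u, v), 0) + c
        cap.setdefault((v, u), 0)
        adj[u].append(v)
        adj[v].append(u)
    flow = 0
    while True:
        parent, q = {s: None}, deque([s])
        while q and t not in parent:       # BFS for a shortest augmenting path
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        path, v = [], t                    # reconstruct path, find bottleneck
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(cap[e] for e in path)
        for (u, v) in path:
            cap[(u, v)] -= push
            cap[(v, u)] += push
        flow += push

def best_interdiction(n, arcs, s, t, k):
    """Brute force: which k arcs should the enemy destroy to minimize flow?"""
    return min(
        (max_flow(n, {a: c for a, c in arcs.items() if a not in cut}, s, t), cut)
        for cut in combinations(arcs, k)
    )

# Illustrative 4-node network; with no attack the max 0->3 flow is 5.
arcs = {(0, 1): 3, (0, 2): 2, (1, 3): 2, (2, 3): 3, (1, 2): 1}
```

Replacing the inner `max_flow` oracle with a behavioral model of how a human actually routes flow, rather than an optimal solver, is one way to pose the "exploit suboptimal tendencies" question computationally.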


Application study on sequential search problems


[image: mob mentality]

We will develop optimal strategies for sequential decision-making problems. Questions that we will answer include the following: Do humans employ a near-optimal policy in making such decisions? What sort of policies do they follow, and what lessons can be generalized from any observed suboptimal or irrational behavior? This investigation captures a unique blend of behavioral studies and optimization research, and relies heavily upon the interdisciplinary nature of our team. Go to Abstracts.
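A canonical sequential search problem of this kind is the classic secretary (optimal stopping) problem: candidates are observed one at a time, each accepted or rejected irrevocably. The optimal policy skips roughly the first n/e candidates and then accepts the first one that beats everything seen so far. The sketch below simulates that policy; the task sizes are illustrative, not the project's experimental design:

```python
import math
import random

def secretary_policy(values, cutoff):
    """Skip `cutoff` candidates, then accept the first record-setter."""
    best_seen = max(values[:cutoff], default=float("-inf"))
    for v in values[cutoff:]:
        if v > best_seen:
            return v
    return values[-1]            # forced to accept the last candidate

def success_rate(n=20, trials=20000, seed=1):
    """Estimate how often the ~n/e cutoff rule picks the best candidate."""
    rng = random.Random(seed)
    cutoff = round(n / math.e)
    hits = 0
    for _ in range(trials):
        vals = [rng.random() for _ in range(n)]
        hits += secretary_policy(vals, cutoff) == max(vals)
    return hits / trials         # near the theoretical 1/e ~ 0.37 optimum
```

Comparing subjects' observed cutoffs against the n/e benchmark is one concrete way to ask whether human policies are near-optimal, and in which direction they deviate.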


Software models for human decision-making



The goal of this thrust is to develop a novel human decision-making software agent that takes over part of the decision-making role of a human whose job involves only decision-making, not physical, functions. The proposed agent is configured to operate autonomously until it faces a situation it cannot handle on its own, in which case it asks for a human’s input. Go to Abstracts.
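The agent's control loop can be sketched as follows, assuming (as an illustration, not the project's actual design) that the agent attaches a confidence score to each decision and escalates to the human below a threshold:

```python
# Minimal sketch of the adjustable-autonomy loop described above: the agent
# acts on its own while confident and escalates to a human otherwise. The
# confidence threshold and the handler signatures are illustrative
# assumptions, not the project's actual interface.
def run_agent(situations, decide, ask_human, threshold=0.8):
    """decide(s) -> (action, confidence); ask_human(s) -> action."""
    log = []
    for s in situations:
        action, confidence = decide(s)
        if confidence < threshold:
            action = ask_human(s)   # a situation the agent cannot handle
        log.append((s, action))
    return log

# Example: an agent that is unsure about novel situations defers to a human.
decide = lambda s: ("proceed", 0.95) if s != "novel" else ("proceed", 0.4)
ask_human = lambda s: "escalate"
# run_agent(["routine", "novel"], decide, ask_human)
#   -> [("routine", "proceed"), ("novel", "escalate")]
```

Keeping the human handler out of the common path is what lets the agent replace only the routine portion of the human's decision-making function.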



If you have any comments or suggestions, please email us at
muri AT sie DOT arizona DOT edu