A picture of me; below are papers, some course information, and my vita in PDF format.

Click to get back to the Economics Department at the University of Texas at Austin.

*************Decision theory papers***********************

Multiple Priors for the Open-Minded (with Martin Dumav) Abstract. A multiple prior decision maker is open-minded if they can describe, as subjective uncertainty, all convex sets of distributions over consequences. Open-mindedness is equivalent to the ability to subjectively describe both the uniform distribution on an interval and the set of all distributions on an interval. Parametrized sets of i.i.d. distributions from classical statistics satisfy these conditions. The use of open-minded sets of priors to model decision makers allows the objective and the subjective approaches to uncertainty to inform each other and changes the implications of previously used axioms for multiprior preferences. Models with sets of priors that are not open-minded yield preferences only over those subjective sets of distributions that are describable. This preference incompleteness always fails to rank a dense class of sets, and may rank so few sets that ambiguity attitudes do not affect choices between subjectively uncertain prospects.
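The multiple-prior evaluations discussed above can be sketched numerically. Below is a minimal illustration of the maxmin criterion over a finite set of priors; the utilities and priors are invented for illustration and nothing here is taken from the paper:

```python
def maxmin_eu(utilities, priors):
    """Worst-case expected utility over a set of priors (maxmin criterion).

    utilities : utility of the consequence in each state
    priors    : list of probability vectors, e.g. the extreme points of a
                convex set of priors
    """
    return min(sum(p * u for p, u in zip(prior, utilities))
               for prior in priors)

utilities = [1.0, 0.0]              # utility in each of two states
priors = [[0.3, 0.7], [0.7, 0.3]]   # extreme points of a convex set of priors
print(maxmin_eu(utilities, priors))  # -> 0.3
```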

Planning for the long run: Programming with patient, Pareto responsive preferences (with Urmee Khan, Journal of Economic Theory, 2018, 176, 444-478) Abstract. Respect for first order distributional overtaking guarantees that social welfare functions for intergenerational problems treat present and future people equally and respect the Pareto criterion, modulo null sets. For weakly ergodic optimization problems, this class of social welfare functions yields solutions that respect welfare concerns, sharply contrasting with extant patient criteria. For problems in which the evolution of future paths hinges on early events and decisions, the curvature of our social welfare functions determines the risks that society is willing to undertake and leads to a variant of the precautionary principle.

Unrestricted and controlled identification of loss functions:
Possibility and impossibility results (with Robert P. Lieli and Viola M. Grolmusz, International Journal of
Forecasting, 35(3), 878-890.)
Abstract. The property that the conditional mean is the unrestricted optimal forecast characterizes
the Bregman class of loss functions, while the property that the alpha-quantile is the unrestricted
optimal forecast characterizes the generalized alpha-piecewise linear (alpha-GPL) class.
However, in settings where the forecaster's choice of forecasts is limited to the support
of the predictive distribution, different Bregman losses lead to different forecasts. This is
not true for the alpha-GPL class: the failure of identification is more fundamental. Motivated
by these examples, we state simple conditions that can be used to ascertain whether loss
functions that are consistent for the same statistical functional become identifiable when
off-support forecasts are disallowed. We also study the identifying power of unrestricted
forecasts within the class of smooth, convex loss functions. For any such loss ell(.), the set
of losses that are consistent for the same statistical functional as ell(.) is a tiny subset of this
class in a precise mathematical sense. Finally, we illustrate the identification problem that
is posed by the non-uniqueness of consistent losses for the moment-based loss function
estimation methods proposed in the literature.
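The Bregman property the abstract starts from can be checked numerically: for any convex phi, the Bregman loss L(y, f) = phi(y) - phi(f) - phi'(f)(y - f) has the mean as its unrestricted optimal forecast, so different Bregman losses produce the same forecast. The distribution and the two phi's below are made-up illustrations, not the paper's examples:

```python
# A made-up three-point predictive distribution.
outcomes = [0.0, 1.0, 2.0]
probs = [0.2, 0.5, 0.3]
mean = sum(p * y for y, p in zip(outcomes, probs))   # 1.1

def bregman_risk(phi, dphi, f):
    # Expected Bregman loss L(y, f) = phi(y) - phi(f) - phi'(f)(y - f).
    return sum(p * (phi(y) - phi(f) - dphi(f) * (y - f))
               for y, p in zip(outcomes, probs))

grid = [i / 100 for i in range(201)]                 # forecasts in [0, 2]
f_sq = min(grid, key=lambda f: bregman_risk(lambda x: x * x,
                                            lambda x: 2 * x, f))
f_quartic = min(grid, key=lambda f: bregman_risk(lambda x: x ** 4,
                                                 lambda x: 4 * x ** 3, f))
# Both minimizers coincide with the mean, 1.1: the losses differ, the
# unrestricted optimal forecast does not.
```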

Objective and Subjective Foundations for Multiple Priors
(Journal of Economic Theory 2016, 165, 263-291)
Abstract. Foundations for priors can be grouped in two broad categories: objective, deriving probabilities from observations of
similar instances; and subjective, deriving probabilities from the internal consistency of choices. Partial observations of
similar instances and the Savage-de Finetti extensions of subjective priors yield objective and subjective sets of priors
suitable for modeling choice under ambiguity. These sets are best suited to such modeling when the distribution of the
observables, or the prior to be extended, is non-atomic. In this case, the sets can be used to model choices between elements
of the closed convex hull of the faces in the set of distributions over outcomes, equivalently, all sets bracketed by the
upper and lower probabilities induced by correspondences.

Skorohod's Representation Theorem for Sets of Probabilities
(with Martin Dumav, Proc. AMS 2016, 144(7), 3123-33)
Abstract. We characterize sets of probabilities, $\Pi$, on a measure space
$(\Omega,\mathcal{F})$, with the following representation property: for every measurable
set of Borel probabilities, $A$, on a complete separable metric space,
$(M,d)$, there exists a measurable $X:\Omega \rightarrow M$ with $A = \{X(P): P \in
\Pi\}$. If $\Pi$ has this representation property, then: if
$K_n \rightarrow K_0$ is a sequence of compact sets of probabilities on $M$, there
exists a sequence of measurable functions, $X_n:\Omega \rightarrow M$ such that
$X_n(\Pi) \equiv K_n$ and for all $P \in \Pi$, $P(\{\omega:
X_n(\omega) \rightarrow X_0(\omega)\}) = 1$; if the $K_n$ are convex as well as
compact, there exists a jointly measurable $(K,\omega) \mapsto H(K,\omega)$
such that $H(K_n,\Pi) \equiv K_n$ and for all $P \in \Pi$,
$P(\{\omega: H(K_n,\omega) \rightarrow H(K_0,\omega)\}) = 1$.
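For a single probability, the classical Skorokhod device is the quantile transform on $\Omega = [0,1]$ with Lebesgue measure: $X = F^{-1}$ has law $F$, and weak convergence of laws becomes pointwise (almost-sure) convergence of the transforms. The sketch below uses made-up uniform laws; the paper's contribution is the analogous device for sets of probabilities:

```python
# Quantile transforms on Omega = [0, 1]: X_n(omega) = F_n^{-1}(omega).
# Illustrative laws: F_n = uniform on [0, 1 + 1/n], F_0 = uniform on [0, 1].
def X_n(n):
    if n == 0:
        return lambda omega: omega
    return lambda omega: omega * (1.0 + 1.0 / n)

omega = 0.5
path = [X_n(n)(omega) for n in (1, 10, 100)]
# path -> [1.0, 0.55, 0.505], converging pointwise to X_0(0.5) = 0.5
```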

The Virtues of Hesitation: Optimal
Timing in a Non-Stationary World (with Urmee Khan, American Economic Review 2015, 105(3), 1147-1176)
Abstract. In many economic, political, and social situations, circumstances
change at random points in time, reacting is costly, and reactions
appropriate to present circumstances may become inappropriate
upon future changes, requiring further costly reaction. Waiting is
informative if arrival of the next change has non-constant hazard
rate. We identify two classes of situations: in the first, delayed
reaction is optimal only when the hazard rate of further changes is
decreasing; in the second, it is optimal only when the hazard rate
of further changes is increasing. These results in semi-Markovian
decision theory provide motivations for building delay into decision
systems.

The von Neumann-Morgenstern Approach to Ambiguity (with
Martin Dumav)
(Extended Abstract for the vNM Approach to Ambiguity)

Abstract. A choice problem is risky (respectively ambiguous) if the decision maker is choosing
between probability distributions (respectively sets of probability distributions) over utility
relevant consequences. We provide an axiomatic foundation for and a representation of
continuous linear preferences over sets of probabilities on consequences. The representation
theory delivers: first and second order dominance for ambiguous problems; a utility interval
based dominance relation that distinguishes between sources of uncertainty; a complete
theory of updating convex sets of priors; a Bayesian theory of the value of ambiguous information
structures; complete separations of attitudes toward risk and ambiguity; and new
classes of preferences that allow decreasing relative ambiguity aversion and thereby rationalize
recent challenges to many of the extant multiple prior models of ambiguity aversion. We
also characterize a property of sets of priors, descriptive completeness, that resolves several
open problems and allows multiple prior models to model as large a class of problems as the
continuous linear preferences presented here.

On the Recoverability of Forecaster
Preferences (with Robert Lieli, Econometric Theory, 2013)

Abstract. We study the problem of identifying a forecaster's loss function from observations
on forecasts, realizations, and the forecaster's information set. Essentially different
loss functions can lead to the same forecasts in all situations, though within the class
of all continuous loss functions, this is strongly nongeneric. With the small set of
exceptional cases ruled out, generic nonparametric preference recovery is theoretically possible,
but identification depends critically on the amount of variation in the
conditional distributions of the process being forecast. There exist processes with
sufficient variability to guarantee identification, and much of this variation is also
necessary for a process to have universal identifying power. We also briefly address
the case in which the econometrician does not fully observe the conditional distributions used
by the forecaster, and in this context we provide a practically useful set
identification result for loss functions used in forecasting binary variables.
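The non-identification result the abstract begins with shows up already in a toy case: for a symmetric predictive distribution, squared loss (optimal forecast = mean) and absolute loss (optimal forecast = median) dictate the same forecast, so observed forecasts cannot tell the two losses apart. The distribution below is a made-up illustration:

```python
# A symmetric, made-up predictive distribution.
outcomes = [-1.0, 0.0, 1.0]
probs = [0.25, 0.5, 0.25]

def risk(loss, f):
    # Expected loss of the forecast f.
    return sum(p * loss(y - f) for y, p in zip(outcomes, probs))

grid = [i / 100 - 1.0 for i in range(201)]          # forecasts in [-1, 1]
argmin_sq = min(grid, key=lambda f: risk(lambda e: e * e, f))
argmin_abs = min(grid, key=lambda f: risk(abs, f))
# Both minimizers are 0.0: the two losses are essentially different, yet
# they produce identical forecasts for this distribution.
```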

Countably Additive Subjective
Probabilities (Review of Economic Studies 1997)

Abstract. The subjective probabilities implied by Savage's (1954, 1972) Postulates are finitely but not
countably additive. The failure of countable additivity leads to two known classes of dominance
paradoxes, money pumps and indifference between an act and one that pointwise dominates it.
There is a common resolution to these classes of paradoxes and to any others that might arise
from failures of countable additivity. It consists of reinterpreting finitely additive probabilities as
the ``traces'' of countably additive probabilities on larger state spaces. The new and larger state
spaces preserve the essential decision-theoretic structures of the original spaces.

*************Theory theory***********************

The Empty Set Marks the Spot Abstract. A population game consists of a non-atomic, finitely additive probability space of agents, a set of actions, and, for each agent, a utility function that depends continuously on their action and the population distribution of actions. If the probability is not countably additive, then approximate equilibria may not exist. Existence failures are due to the positive mass of agents that can only be represented as elements of the empty set in the original model. These mislaid agents can be characterized using nonstandard analysis or compactification-based representations of the distribution of utility functions. Restoring the missing agents yields equilibrium existence and the finite approximability of equilibria.

The Empty Set Marks the Spot (slides) Abstract. Describing the population characteristics in a large game with a non-atomic, purely finitely additive probability $p$ means that $\epsilon$-equilibria may not exist. This happens because a mass of agents and their characteristics seem to belong to $\emptyset$. This paper uses $p$ to characterize the mislaid agents and their characteristics. Restoring them to the model yields equilibrium existence and a well-behaved equilibrium correspondence.

Well-behaved infinite normal form games (GEB 2005, with C. J. Harris and W. R. Zame) Abstract. Normal form games are nearly compact and continuous (NCC) if they can be understood as games played on strategy spaces that are dense subsets of the strategy spaces of larger compact games with jointly continuous payoffs. There are intrinsic algebraic, measure theoretic, functional analysis, and finite approximability characterizations of NCC games. NCC games have finitely additive equilibria, and all their finitely additive equilibria are equivalent to countably additive equilibria on metric compactifications. The equilibrium set of an NCC game depends upper hemicontinuously on the specification of the game and contains only the limits of approximate equilibria of approximate games.

General infinite
normal form games (GEB 2005)

Abstract. Infinite normal form games that are mathematically simple have been treated (see above). Under
study in this paper are the other infinite normal form games, a class that includes the normal forms of
most extensive form games with infinite choice sets.
Finitistic equilibria are the limits of approximate equilibria taken along generalized sequences
of finite subsets of the strategy spaces. Points must be added to the strategy spaces to represent
these limits. There are direct, nonstandard analysis, and indirect, compactification and selection,
representations of these points. The compactification and selection approach was introduced [Simon,
L.K., Zame, W.R., 1990. Discontinuous games and endogenous sharing rules. Econometrica 58,
861-872]. It allows for profitable deviations and introduces spurious correlation between players'
choices. Finitistic equilibria are selection equilibria without these drawbacks. Selection equilibria
have drawbacks, but contain a set-valued theory of integration for non-measurable functions tightly
linked to, and illuminated by, the integration of correspondences.

Correlated equilibrium existence
for games with type-dependent strategies (JET 2011)

Abstract. Under study are games in which players receive private signals and then simultaneously choose actions
from compact sets. Payoffs are measurable in signals and jointly continuous in actions. This paper gives
a counter-example to the main step in Cotter's [K. Cotter, Correlated equilibrium in games with
type-dependent strategies, J. Econ. Theory 54 (1991) 48-69] argument for correlated equilibrium existence for
this class of games, and supplies an alternative proof.

Balance and discontinuities in infinite
games with type-dependent strategies (JET 2011)

Abstract.
Under study are games in which players receive private signals and then simultaneously choose actions
from compact sets. Payoffs are measurable in signals and jointly continuous in actions. Stinchcombe
(see above) proves the existence of correlated equilibria for this class of games. This paper is a study of
the information structures for these games, the discontinuous expected utility functions they give rise to,
and the notion of a balanced approximation to an infinite game with discontinuous payoffs.

Proper scoring rules with arbitrary value functions
(JME October 2010)

Abstract. The value function associated with a (strictly) proper scoring rule is (strictly) convex. This paper
gives conditions on a set of probabilities under which Lipschitz (or smooth) functions have convex
extensions.
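A textbook instance of the objects in this paper, assuming nothing beyond standard definitions: the (positively oriented) Brier rule for a binary event is strictly proper, and its value function value(p) = p^2 - p is convex. The probabilities below are illustrative:

```python
def expected_brier(report, p):
    # Expected Brier score (positively oriented: larger is better) when
    # the event occurs with probability p and the forecaster reports `report`.
    return -(p * (report - 1.0) ** 2 + (1.0 - p) * report ** 2)

def value(p):
    # Value function: expected score under truthful reporting.
    # Here value(p) = p**2 - p, a convex function of p.
    return expected_brier(p, p)

# Strict propriety: truthful reporting maximizes the expected score.
p = 0.3
best_report = max(range(101), key=lambda q: expected_brier(q / 100, p)) / 100
# best_report -> 0.3
```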

General infinite extensive
form games

Abstract. The additions to, or fixes for, game structures necessitated by use of the usual models of
infinite sets can be unified in the concept of a game expansion. This paper identifies a class of expansions,
the finitistic ones, that sharpens the previous fixes, delivers a well-behaved theory for infinite games,
and clarifies the relation between classes of games and the requisite expansions. Finitistic equilibria are
the minimal closed set of expansion equilibria consistent with the idea that continuous sets are limits of finite
approximations.

Genericity does NOT have
a Bayesian flavor (2001 Proceedings of the AMS, v. 129, p. 451-7)

Abstract. The best available definition of a subset of an infinite dimensional,
complete, metric vector space, V, being ``small'' is Christensen's Haar zero sets,
equivalently, Hunt, Sauer, and Yorke's shy sets. The complement of a shy set
is a prevalent set. There is a gap between prevalence and likelihood. For any
probability on V, there is a shy set having mass 1. Further, when V is
locally convex, any i.i.d. sequence repeatedly visits neighborhoods
of only a shy set of points if the neighborhoods shrink to 0 at any rate.

*************Applied theory***********************

Buying Truth in a Competitive Market (with Hong Xu and Andrew
Whinston)

Abstract. Organizations face a competitive certification market for their statements, the
statements do not convince third parties unless certified, the organizations are sometimes
better served by a lie, and honest mistakes are possible. In our model of such a market: if
certifiers are liable for mistakes, certifier contracts must be contingent; when certification
is inelastically demanded, increases in certifier liability effectively reduce third party trust;
organizational liability for mis-statements has a strong deterrent effect on mis-statements
and increases third party trust; and after a strong negative shock to the financial system,
loosening certification standards can only make it harder to raise third party trust levels.

Torture in Counterterrorism:
Agency Incentives and Slippery Slopes (with Hugo M. Mialon and Sue H. Mialon, J. Pub. Econ. 2012)

Abstract. We develop a counterterrorism model to analyze the effects of allowing a government
agency to torture suspects when evidence of terrorist involvement is strong. We
find that legalizing torture in strong-evidence cases has offsetting effects on agency incentives
to counter terrorism by means other than torture. It increases these incentives
because other efforts may increase the probability of having strong enough evidence to
warrant the use of torture if other efforts fail --- a complementarity effect. However, it
also lowers these incentives because the agency might come to rely on torture to avert
attacks --- a decommitment effect. The decommitment effect is more likely to dominate
if the agency's non-torture efforts are good at stopping attacks. Moreover, legalizing
torture in strong-evidence cases is likely to reduce security if the effectiveness of torture
is low while that of non-torture efforts is high. Lastly, we find that legalizing torture in
strong-evidence cases can increase agency incentives to torture even in weak-evidence
cases --- a slippery slope effect.

The Virtues of Hesitation (with Urmee Khan)

Abstract. In many economic, political and social situations, circumstances
change at random points in time, reacting is costly, and changes appropriate
to present circumstances may be inappropriate to later changes, requiring further
costly change. Waiting is informative if the hazard rate for the arrival of
the next change is non-constant. We identify two broad classes of situations:
in the first, delayed reaction is optimal only when the hazard rate of further
changes is decreasing; in the second, it is optimal only when the hazard rate
of further changes is increasing. The first class of situations corresponds to
having waited long enough to know that future changes in circumstances are
comfortably in the future, and the associated non-optimality of action in the
face of an increasing hazard rate corresponds to the counsel of patience in
unsettled circumstances. The second class of situations corresponds to the
delay of costly precautionary steps until the danger is clear enough. These
results in non-stationary dynamic optimization provide a new set of motivations
for building delay into legislative and other decision systems, and arise
from extensions of semi-Markovian decision theory.
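The hazard-rate dichotomy above can be made concrete with a textbook family: a Weibull arrival time has decreasing hazard when its shape parameter is below 1 and increasing hazard when it is above 1. The parameters below are illustrative, not from the paper:

```python
def weibull_hazard(t, shape, scale=1.0):
    # Weibull hazard rate h(t) = f(t)/(1 - F(t)) = (k/s) * (t/s)**(k - 1).
    return (shape / scale) * (t / scale) ** (shape - 1.0)

# shape = 0.5: decreasing hazard, so having waited suggests the next
# change is comfortably in the future.
print(weibull_hazard(0.5, 0.5) > weibull_hazard(2.0, 0.5))  # True
# shape = 2.0: increasing hazard, so danger mounts with time.
print(weibull_hazard(0.5, 2.0) < weibull_hazard(2.0, 2.0))  # True
```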

Bundling Information Goods of Decreasing Value
(with X. Geng and A. B. Whinston, Management Science, April 2005)

Abstract.
Consumers' average value for information goods (websites, weather forecasts, music, news) declines with
the number consumed. This paper provides simple guidelines to optimal bundling marketing strategies in
this case. If consumers' values do not decrease too quickly, we show that bundling is approximately optimal. If
consumers' values to subsequent goods decrease quickly, we show by example that one should expect bundling
to be suboptimal.
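A toy version of the bundling comparison, with made-up values for two consumers and two goods (the paper studies many goods with declining values; this only shows the basic reason bundling can raise revenue):

```python
# Two consumers and two goods with negatively correlated values; all
# numbers are invented for illustration.
values = {"A": {"g1": 8.0, "g2": 2.0},
          "B": {"g1": 3.0, "g2": 7.0}}

def best_separate_revenue(good):
    # Best single posted price for one good; the optimum is attained at
    # some consumer's valuation, so it suffices to try those.
    prices = [v[good] for v in values.values()]
    return max(p * sum(1 for v in values.values() if v[good] >= p)
               for p in prices)

separate = best_separate_revenue("g1") + best_separate_revenue("g2")
bundle_values = [v["g1"] + v["g2"] for v in values.values()]
bundle = max(p * sum(1 for b in bundle_values if b >= p)
             for p in bundle_values)
# separate -> 15.0, bundle -> 20.0: bundling smooths heterogeneous values.
```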

*************Econometrics***********************

On the Recoverability of Forecaster
Preferences (with Robert Lieli, Econometric Theory, 2013)

Abstract. (See above)

Regression Efficacy and the Curse of
Dimensionality (in Recent Advances and Future Directions in Causation, Prediction, and Specification Analysis)

Abstract. This paper gives a geometric representation of a class of non-parametric regression estimators
that includes series expansions (Fourier, wavelet, Tchebyshev and others),
kernels and other locally weighted regressions, splines, and artificial neural networks. For
any estimator having this geometric representation, there is no curse of dimensionality ---
asymptotically, the error goes to 0 at the parametric rate. Regression efficacy measures
the amount of variation in the conditional mean of the dependent variable, Y, that can be
achieved by moving the explanatory variables across their whole range. The dismally slow,
dimension-dependent rates of convergence are calculated using a class of target functions
in which efficacy is infinite, and the analysis allows for the possibility that the dependent
variable, Y, may be an ever-receding target.
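One member of the estimator class the paper represents geometrically, in minimal form; the data points and bandwidth below are invented for illustration:

```python
import math

def nw_estimate(x0, xs, ys, h):
    # Nadaraya-Watson kernel regression: a Gaussian-kernel locally
    # weighted average of the observed ys.
    ws = [math.exp(-((x - x0) / h) ** 2 / 2.0) for x in xs]
    return sum(w * y for w, y in zip(ws, ys)) / sum(ws)

xs = [0.0, 0.5, 1.0]
ys = [0.0, 0.25, 1.0]   # noiseless samples of y = x**2
print(nw_estimate(0.5, xs, ys, h=0.1))  # close to 0.25
```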

Generic inconsistency of Bayesian updating

Genericity analyses in
nonparametric statistics

*************TEACHING**********************

**Grad Prob-Stats, Econ 385C, Fall 2018*******************

Sketchy Lecture Notes, Fall 2018

**Grad Math for Economists, Econ 385D, Fall 2018*******************

**Environmental and Resource Economics, Econ 359M, Spring 2018*******************

Assignment 1, due Wed. February 8

Sandmo, Early History Env. Econ. (2015)

Assignment 2, due Wed. February 28

Ostrom et al., Revisiting the Commons (1999)

Aizer et al., Do Low Levels of Blood Lead Reduce Test Scores? (2018)

Assignment 3, due Wed. March 21

Gayer and Hahn, Designing Env. Policy (2006)

Portney, Trouble in Happyville (1992)

**Managerial Economics, Econ 351M, Spring 2019*******************

Homework 1, plus lecture notes, Spring 2019

Amir (2005) survey of supermodularity

Hayek (1945) on knowledge and prices

Khan (1996) on changes in property rights and their effects

Older lecture notes, containing too much material

**Managerial Economics, Econ 351M, Fall 2016*******************

Notes on discounting/the opportunity cost of capital

Some examples for optimal timing

**Math for Economists, Fall 2017*******************

**Math for Economists, Fall 2016*******************

***************Older Stuff*******************

**Advanced Microeconomics, Fall Semester 2013*******************

**Advanced Microeconomics, Spring Semester 2014*******************

Notes for first half of course

Mathematics for Economists II, Spring 2010

Mathematics for Economists, Econ 362M, Spring 2010

Game Theory, Econ 354K, Spring 2009

Micro III, Spring Quarter 2007, Cal Tech. Information and Exchange

Homework 1,
Homework 2,
Homework 3,
Homework 4,
Homework 5.

Econometrics III, Spring 2007, Cal Tech. Time Series

Homework 1,
Homework 2,
Homework 3.

Organization for Fall 04 Quantitative Methods and Mathematical Economics:

Topics outline for Fall 04 Quantitative Methods and Mathematical Economics:

Homeworks for Fall 04 Quantitative Methods and Mathematical Economics (updated Sep 10, 04):

Old and sketchy notes for Fall '03 Quantitative Methods, ECO 492L (Nov. 23, 2003):

Final Exam for Quantitative Methods, ECO 492L, due, noon, Dec. 15, 2003:

Marinacci's JET paper on the math of patience

Advanced Micro Course Notes (updated Dec. 4, 2001, valid up to p. 87):

Micro I Notes:

Largish file, notes, homeworks,
etc.

Notes for Fall Semester, 2008, Graduate Game Theory:

Slowly increasing set of notes and homeworks, Fall 2008, Graduate Introductory Game Theory

Notes for Fall Semester, 2007, Honors Game Theory:

Slowly increasing set of notes and homeworks for Honors Game Theory, Fall 2007

A history:

Another history:

*************WHAT I'VE WRITTEN**********************

Vita: