Max Stinchcombe's Page

A picture of me; below are papers, some course information, and my vita in .pdf format.

Click to get back to the Economics Department at the University of Texas at Austin.

*************Decision theory papers***********************

     The von Neumann-Morgenstern Approach to Ambiguity (with Martin Dumav) (Extended Abstract for the vNM Approach to Ambiguity)
  Abstract. A choice problem is risky (respectively ambiguous) if the decision maker is choosing between probability distributions (respectively sets of probability distributions) over utility relevant consequences. We provide an axiomatic foundation for and a representation of continuous linear preferences over sets of probabilities on consequences. The representation theory delivers: first and second order dominance for ambiguous problems; a utility interval based dominance relation that distinguishes between sources of uncertainty; a complete theory of updating convex sets of priors; a Bayesian theory of the value of ambiguous information structures; complete separations of attitudes toward risk and ambiguity; and new classes of preferences that allow decreasing relative ambiguity aversion and thereby rationalize recent challenges to many of the extant multiple prior models of ambiguity aversion. We also characterize a property of sets of priors, descriptive completeness, that resolves several open problems and allows multiple prior models to model as large a class of problems as the continuous linear preferences presented here.

     Skorohod's Representation Theorem for Sets of Probabilities (with Martin Dumav)
  A tremendously useful result due to Skorohod tells us that every probability distribution (on a wide class of spaces) is the image of a nonatomic distribution on e.g. the unit interval under some measurable function. Further, if a sequence of distributions is Prohorov convergent, the measurable functions can be chosen to be converging almost everywhere. Blackwell and Dubins (1983) improved this result by giving a jointly measurable function on the space of probabilities cross the unit interval with the almost everywhere convergence built in. This paper extends these results from single probabilities to sets of probabilities satisfying a condition we call descriptive completeness: there is a jointly measurable function on the space of sets of probabilities cross the unit interval such that every closed set of probabilities (on a wide class of spaces) is the image of a descriptively complete set of probabilities, and if a sequence of sets is convergent, there is almost everywhere convergence of the set of functions with respect to every probability in the descriptively complete set.
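A minimal numerical sketch of the classical single-probability case (the quantile construction behind Skorohod's theorem, not the set-valued extension of the paper): composing the generalized inverse of a CDF with a uniform draw on the unit interval reproduces the target law. The discrete target below is hypothetical.

```python
import random

# If U is uniform on [0,1] and Finv is the generalized inverse of a
# CDF F, then Finv(U) has distribution F.  Target: a discrete law.
support = [0, 1, 2]
probs = [0.2, 0.5, 0.3]

def finv(u):
    """Generalized inverse CDF of the discrete law above."""
    cum = 0.0
    for x, p in zip(support, probs):
        cum += p
        if u <= cum:
            return x
    return support[-1]

random.seed(0)
n = 100_000
draws = [finv(random.random()) for _ in range(n)]
freq = [draws.count(x) / n for x in support]
print(freq)  # empirical frequencies close to [0.2, 0.5, 0.3]
```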

     On the Recoverability of Forecaster Preferences (with Robert Lieli, Econometric Theory, 2013)
  Abstract. We study the problem of identifying a forecaster's loss function from observations on forecasts, realizations, and the forecaster's information set. Essentially different loss functions can lead to the same forecasts in all situations, though within the class of all continuous loss functions, this is strongly nongeneric. With the small set of exceptional cases ruled out, generic nonparametric preference recovery is theoretically possible, but identification depends critically on the amount of variation in the conditional distributions of the process being forecast. There exist processes with sufficient variability to guarantee identification, and much of this variation is also necessary for a process to have universal identifying power. We also briefly address the case in which the econometrician does not fully observe the conditional distributions used by the forecaster, and in this context we provide a practically useful set identification result for loss functions used in forecasting binary variables.
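The non-identification problem in the binary case can be sketched concretely: a point forecaster who knows p = P(Y=1) issues whichever forecast minimizes expected loss, and two different loss matrices can share the same decision threshold and hence produce identical forecasts for every p. The loss matrices below are hypothetical, not taken from the paper.

```python
# loss[f][y] is the loss from forecasting f when the outcome is y.
def forecast(p, loss):
    """Expected-loss-minimizing point forecast in {0, 1} given p = P(Y=1)."""
    exp0 = (1 - p) * loss[0][0] + p * loss[0][1]
    exp1 = (1 - p) * loss[1][0] + p * loss[1][1]
    return 0 if exp0 <= exp1 else 1

loss_a = [[0, 1], [1, 0]]   # symmetric 0-1 loss: threshold at p = 1/2
loss_b = [[0, 2], [2, 0]]   # a different loss with the same threshold

ps = [i / 100 for i in range(101)]
same = all(forecast(p, loss_a) == forecast(p, loss_b) for p in ps)
print(same)  # True: the two loss functions are observationally equivalent
```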

     Weightless Learning and Decision Makers
  Sequences of observations drawn from a purely finitely additive, i.e., weightless, probability are always consistent with uncountably many probabilities having mutually disjoint supports. Such observations generally have no value for expected utility maximizers. Using the probabilities consistent with observations as a set of priors in ambiguous choice: we can model only a negligible set of problems when there are three or more outcomes; we may partially, or completely, convexify the set of problems that can be modeled using the core of the complete ignorance capacity; and we may model problems that cannot be modeled using the core of any convex capacity.

*************Theory theory***********************

     Well-behaved infinite normal form games (GEB 2005, with C. J. Harris and W. R. Zame) Abstract. Normal form games are nearly compact and continuous (NCC) if they can be understood as games played on strategy spaces that are dense subsets of the strategy spaces of larger compact games with jointly continuous payoffs. There are intrinsic algebraic, measure theoretic, functional analysis, and finite approximability characterizations of NCC games. NCC games have finitely additive equilibria, and all their finitely additive equilibria are equivalent to countably additive equilibria on metric compactifications. The equilibrium set of an NCC game depends upper hemicontinuously on the specification of the game and contains only the limits of approximate equilibria of approximate games.

     General infinite normal form games (GEB 2005)
  Abstract. Infinite normal form games that are mathematically simple have been treated (see above). Under study in this paper are the other infinite normal form games, a class that includes the normal forms of most extensive form games with infinite choice sets. Finitistic equilibria are the limits of approximate equilibria taken along generalized sequences of finite subsets of the strategy spaces. Points must be added to the strategy spaces to represent these limits. There are direct (nonstandard analysis) and indirect (compactification and selection) representations of these points. The compactification and selection approach was introduced in [Simon, L.K., Zame, W.R., 1990. Discontinuous games and endogenous sharing rules. Econometrica 58, 861-872]. It allows for profitable deviations and introduces spurious correlation between players' choices. Finitistic equilibria are selection equilibria without these drawbacks. Selection equilibria have drawbacks, but contain a set-valued theory of integration for non-measurable functions tightly linked to, and illuminated by, the integration of correspondences.

     Correlated equilibrium existence for games with type-dependent strategies (JET 2011)
  Abstract. Under study are games in which players receive private signals and then simultaneously choose actions from compact sets. Payoffs are measurable in signals and jointly continuous in actions. This paper gives a counter-example to the main step in Cotter's [K. Cotter, Correlated equilibrium in games with type-dependent strategies, J. Econ. Theory 54 (1991) 48-69] argument for correlated equilibrium existence for this class of games, and supplies an alternative proof.

     Balance and discontinuities in infinite games with type-dependent strategies (JET 2011)
  Abstract. Under study are games in which players receive private signals and then simultaneously choose actions from compact sets. Payoffs are measurable in signals and jointly continuous in actions. Stinchcombe (see above) proves the existence of correlated equilibria for this class of games. This paper is a study of the information structures for these games, the discontinuous expected utility functions they give rise to, and the notion of a balanced approximation to an infinite game with discontinuous payoffs.

     Proper scoring rules with arbitrary value functions (JME October 2010)
  The value function associated with a (strictly) proper scoring rule is (strictly) convex. This paper gives conditions on a set of probabilities under which Lipschitz (or smooth) functions have convex extensions.
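The (strict) propriety in the title can be illustrated with the Brier score, a standard strictly proper rule for binary events: truthful reporting uniquely minimizes expected loss, and the associated value function is convex in p. This is a textbook check, not the paper's construction.

```python
# Expected Brier loss when the truth is Bernoulli(p) and the report is q.
# Strict propriety: the unique minimizer over q is q = p.
def expected_brier_loss(p, q):
    return p * (1 - q) ** 2 + (1 - p) * q ** 2

p = 0.3
grid = [i / 1000 for i in range(1001)]
best_q = min(grid, key=lambda q: expected_brier_loss(p, q))
print(best_q)  # 0.3 -- the truthful report is optimal
```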

     General infinite extensive form games
  Abstract. The additions to, or fixes for, game structures necessitated by use of the usual models of infinite sets can be unified in the concept of a game expansion. This paper identifies a class of expansions, the finitistic ones, that sharpens the previous fixes, delivers a well-behaved theory for infinite games, and clarifies the relation between classes of games and the requisite expansions. Finitistic equilibria are the minimal closed set of expansion equilibria consistent with the idea that continuous sets are limits of finite approximations.

     Genericity does NOT have a Bayesian flavor (2001 Proceedings of the AMS, v. 129, p. 451-7)
  Abstract. The best available definition of a subset of an infinite dimensional, complete, metric vector space, V, being "small" is Christensen's Haar zero sets, equivalently, Hunt, Sauer, and Yorke's shy sets. The complement of a shy set is a prevalent set. There is a gap between prevalence and likelihood. For any probability on V, there is a shy set having mass 1. Further, when V is locally convex, any i.i.d. sequence with a given law repeatedly visits neighborhoods of only a shy set of points if the neighborhoods shrink to 0 at any rate.

*************Applied theory***********************

     Buying Truth in a Competitive Market (with Hong Xu and Andrew Whinston)
  Abstract. Organizations face a competitive certification market for their statements, the statements do not convince third parties unless certified, the organizations are sometimes better served by a lie, and honest mistakes are possible. In our model of such a market: if certifiers are liable for mistakes, certifier contracts must be contingent; when certification is inelastically demanded, increases in certifier liability effectively reduce third party trust; organizational liability for mis-statements has a strong deterrent effect on mis-statements and increases third party trust; and after a strong negative shock to the financial system, loosening certification standards can only make it harder to raise third party trust levels.

     Torture in Counterterrorism: Agency Incentives and Slippery Slopes (with Hugo M. Mialon and Sue H. Mialon, J. Pub. Econ. 2012)
  Abstract. We develop a counterterrorism model to analyze the effects of allowing a government agency to torture suspects when evidence of terrorist involvement is strong. We find that legalizing torture in strong-evidence cases has offsetting effects on agency incentives to counter terrorism by means other than torture. It increases these incentives because other efforts may increase the probability of having strong enough evidence to warrant the use of torture if other efforts fail --- a complementarity effect. However, it also lowers these incentives because the agency might come to rely on torture to avert attacks --- a decommitment effect. The decommitment effect is more likely to dominate if the agency's non-torture efforts are good at stopping attacks. Moreover, legalizing torture in strong-evidence cases is likely to reduce security if the effectiveness of torture is low while that of non-torture efforts is high. Lastly, we find that legalizing torture in strong-evidence cases can increase agency incentives to torture even in weak-evidence cases --- a slippery slope effect.

     The Virtues of Hesitation (with Urmee Khan)
  Abstract. In many economic, political and social situations, circumstances change at random points in time, reacting is costly, and changes appropriate to present circumstances may be inappropriate to later changes, requiring further costly change. Waiting is informative if the hazard rate for the arrival of the next change is non-constant. We identify two broad classes of situations: in the first, delayed reaction is optimal only when the hazard rate of further changes is decreasing; in the second, it is optimal only when the hazard rate of further changes is increasing. The first class of situations corresponds to having waited long enough to know that future changes in circumstances are comfortably in the future, and the associated non-optimality of action in the face of an increasing hazard rate corresponds to the counsel of patience in unsettled circumstances. The second class of situations corresponds to the delay of costly precautionary steps until the danger is clear enough. These results in non-stationary dynamic optimization provide a new set of motivations for building delay into legislative and other decision systems, and arise from extensions of semi-Markovian decision theory.
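The non-constant hazard rates the abstract turns on are easy to exhibit with the Weibull family, where the shape parameter alone decides whether the hazard is falling or rising over time. The parameter values below are hypothetical, chosen only to show the two regimes.

```python
# Weibull hazard rate h(t) = (k/s) * (t/s)**(k-1):
# decreasing in t when the shape k < 1, increasing when k > 1.
def weibull_hazard(t, shape, scale=1.0):
    return (shape / scale) * (t / scale) ** (shape - 1)

ts = [0.5, 1.0, 2.0]
decreasing = [weibull_hazard(t, shape=0.5) for t in ts]  # falls in t
increasing = [weibull_hazard(t, shape=2.0) for t in ts]  # rises in t
print(decreasing)
print(increasing)
```

With a decreasing hazard, having waited a while is evidence that the next change is comfortably far off; with an increasing hazard, waiting makes a change ever more imminent — the two regimes behind the paper's two classes of situations.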

     Bundling Information Goods of Decreasing Value (with X. Geng and A. B. Whinston, Management Science, April 2005)
  Abstract. Consumers' average value for information goods (websites, weather forecasts, music, news) declines with the number consumed. This paper provides simple guidelines for optimal bundling strategies in this case. If consumers' values do not decrease too quickly, we show that bundling is approximately optimal. If consumers' values for subsequent goods decrease quickly, we show by example that one should expect bundling to be suboptimal.


     On the Recoverability of Forecaster Preferences (with Robert Lieli, Econometric Theory, 2013)
  Abstract. (See above)

     Regression Efficacy and the Curse of Dimensionality (in Recent Advances and Future Directions in Causation, Prediction, and Specification Analysis)
  Abstract. This paper gives a geometric representation of a class of non-parametric regression estimators that includes series expansions (Fourier, wavelet, Tchebyshev and others), kernels and other locally weighted regressions, splines, and artificial neural networks. For any estimator having this geometric representation, there is no curse of dimensionality --- asymptotically, the error goes to 0 at the parametric rate. Regression efficacy measures the amount of variation in the conditional mean of the dependent variable, Y, that can be achieved by moving the explanatory variables across their whole range. The dismally slow, dimension-dependent rates of convergence are calculated using a class of target functions in which efficacy is infinite, and the analysis allows for the possibility that the dependent variable, Y, may be an ever-receding target.
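One member of the estimator class the abstract covers is the Nadaraya-Watson locally weighted regression; a minimal sketch with a Gaussian kernel is below. The data-generating process and bandwidth are hypothetical, chosen only to show the estimator recovering a conditional mean.

```python
import math
import random

# Nadaraya-Watson kernel regression estimate of E[Y | X = x0]:
# a weighted average of the ys, with Gaussian weights centered at x0.
def nw_estimate(x0, xs, ys, bandwidth):
    weights = [math.exp(-0.5 * ((x0 - x) / bandwidth) ** 2) for x in xs]
    return sum(w * y for w, y in zip(weights, ys)) / sum(weights)

random.seed(1)
xs = [random.uniform(0, 1) for _ in range(500)]
ys = [math.sin(2 * math.pi * x) + random.gauss(0, 0.1) for x in xs]

# The true conditional mean at x0 = 0.25 is sin(pi/2) = 1.
print(nw_estimate(0.25, xs, ys, bandwidth=0.05))
```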

     Generic inconsistency of Bayesian updating

     Genericity analyses in nonparametric statistics


**Mathematics for Economists, Summer Camp 2013*******************

Mathematics for Economists, Summer 2013, Week 1

     Wednesday, August 7

     Thursday, August 8

     Friday, August 9

Mathematics for Economists, Summer 2013, Week 2


     Tuesday and Wednesday

     Thursday through Monday

Mathematics for Economists, Summer 2013 Week 3

     Tuesday through Thursday

     Friday and Tuesday

**Mathematics for Economists, Fall Semester 2013*******************


Weeks 1 through 4

     Homework, Wednesday, August 28

     Homework, Monday, September 9

     Homework, Monday, September 16

     Homework, Monday, September 23

     Homework, Monday, September 30

     Homework, Monday, October 7

     Homework, Monday, October 14

     Mid-term solutions, October 28

     Homework, Monday, October 28

     Homework, Monday, November 4

     Homework, Monday, November 11

     Concavity Implies Continuity

     Homework, Monday, November 18

     Homework, Monday, November 25

**Advanced Microeconomics, Fall Semester 2013*******************


     Some notes for the course

**Advanced Microeconomics, Spring Semester 2014*******************


     Notes for first half of course

***************Older Stuff*******************

Mathematics for Economists II, Spring 2010


     Assignments for weeks 1 through 6

Mathematics for Economists, Econ 362M, Spring 2010


     Assignments 1 and 2

     Assignment 3

     Assignments 4 and 5

     Assignments 6 and 7

     Assignment 8

     Assignment 9

     Assignment 10


Game Theory, Econ 354K, Spring 2009

     Syllabus and assignments


Micro III, Spring Quarter 2007, Caltech. Information and Exchange

     Syllabus with links to papers

     Homework 1, Homework 2, Homework 3, Homework 4, Homework 5.

Econometrics III, Spring 2007, Caltech. Time Series


     Homework 1, Homework 2, Homework 3.

Organization for Fall 04 Quantitative Methods and Mathematical Economics:


Topics outline for Fall 04 Quantitative Methods and Mathematical Economics:


Homeworks for Fall 04 Quantitative Methods and Mathematical Economics (updated Sep 10, 04):


Old and sketchy notes for Fall '03 Quantitative Methods, ECO 492L (Nov. 23, 2003):

     Quant. Methods.

Final Exam for Quantitative Methods, ECO 492L, due, noon, Dec. 15, 2003:

     Final Exam

     Marinacci's JET paper on the math of patience


Advanced Micro Course Notes (updated Dec. 4, 2001, valid up to p. 87):


Micro I Notes:

     Largish file, notes, homeworks, etc.

Notes for Fall Semester, 2008, Graduate Game Theory:

     Slowly increasing set of notes and homeworks, Fall 2008, Graduate Introductory Game Theory

Notes for Fall Semester, 2007, Honors Game Theory:

     Slowly increasing set of notes and homeworks for Honors Game Theory, Fall 2007

A history:

     Middle ages

Another history:

     Middle ages 2

*************WHAT LITTLE I'VE WRITTEN**********************