Papers

Private Information, Asset Price Volatility, and Liquidity // Sept. 16, 2022

Abstract

How does public information indicating ominous economic conditions affect traders’ acquisition of private information? How does that private information affect asset price volatility and liquidity? To investigate such questions, we derive theoretical predictions from a stylized bond market model where investors receive public information on the default probability and can then purchase costly private information on the repayment recovery rate conditional on default. We implement such markets in the laboratory, allowing human investors to trade under three different market formats with controlled variation in information cost schedules and default probabilities. The laboratory data support most of the theoretical predictions, e.g., traders purchase more private information when it is less expensive and when the public information conveys more ominous conditions. In turn, greater private information acquisition increases the bond’s overall price volatility and liquidity. Additional results concern trader profitability and price efficiency. The paper highlights the role of public information in private information acquisition and the impact of their interaction on the trade-off between asset market volatility and illiquidity, and thus may provide useful policy implications.
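In outline, and with purely hypothetical numbers rather than the paper's parameters, the bond's expected value combines the publicly known default probability with the recovery rate that private information can reveal; the worse the public news, the larger the stake riding on the private recovery signal:

```python
face = 100.0
default_prob = 0.30     # public information (hypothetical value)
recovery_rate = 0.40    # learnable only through costly private information, conditional on default

expected_value = (1 - default_prob) * face + default_prob * recovery_rate * face
print(expected_value)   # 82.0; the recovery term (and hence the private signal) matters more as default_prob rises
```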

Coauthored with Grace Weishi Gu and Vivian Juehui Zheng

Current version can be downloaded from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4037757

    Keywords: public information; private information; asset price volatility; liquidity; laboratory experiment; BDM mechanism; call market; continuous double auction

On the empirical relevance of correlated equilibrium // Aug. 10, 2022

Abstract

Absent coordinating signals from an exogenous benevolent agent, can an efficient correlated equilibrium emerge? Theoretical work in adaptive dynamics suggests a positive answer, which we test in a laboratory experiment. In the well-known Chicken game, we observe time average play that is close to the asymmetric pure Nash equilibrium in some treatments, and in other treatments we observe collusive play. In a game resembling rock-paper-scissors or matching pennies, we observe time average play close to a correlated equilibrium that is more efficient than the unique Nash equilibrium. Estimates and simulations of adaptive dynamics capture much of the observed heterogeneity across player pairs as well as dynamic regularities.

The final version of this paper is forthcoming in a 2023 special issue of Journal of Economic Theory.

    Keywords: Correlated equilibrium, Laboratory experiment, Adaptive dynamics

Naturally Occurring Preferences and General Equilibrium // May 9, 2021

Abstract

Prior laboratory experiments have studied general equilibrium economies constructed from "induced preferences" for artificial goods. We introduce new methods that allow us to study economies constructed instead from subjects' actual, "homegrown" preferences. Our subjects reveal their preferences by choosing portfolios of Arrow securities from budget lines through fixed endowments for a series of prices. We then construct several different economies by sorting subjects according to their revealed preferences. The constructed economies exhibit a wide range of predicted outcomes, where predictions are competitive general equilibria given the revealed preferences. Perhaps surprisingly, in every one of our markets the predicted excess demand is well-behaved, and avoids the pathologies highlighted in the Sonnenschein-Mantel-Debreu theorem. (The main reason seems to be heterogeneity in revealed preferences.) Actual trade in the constructed economies using a tatonnement market institution closely tracks predictions in most markets. The exceptions occur in economies with severe wealth effects that generate excess demands that are flat relative to measured preference volatility.

Final version published in International Economic Review, 62:2, pp. 831-859 (2021)

    Keywords: Experimental Economics, General Equilibrium, Aggregation, Portfolio Choice, Heterogeneity, Risk Preferences, tatonnement.

Varieties of Risk Preference Elicitation // July 31, 2020

Abstract

We explore risk preference elicitation via direct choice over lotteries. Our choice tasks differ incrementally, e.g., from choosing between two lotteries to selecting a portfolio from a continuous set of bundled Arrow securities, and from text to spatial presentation. Each subject completes multiple instances of five different tasks, and responses for each task are summarized in parametric (CRRA) and non-parametric (normalized risk premium) measures of risk preference. Variation in task attributes explains much of the observed wide variation in elicited preferences and in correlations across task pairs.

Published in Games and Economic Behavior 2022; online link is https://doi.org/10.1016/j.geb.2022.02.002

    Keywords: Risk Aversion, Experiment, Elicitation, Multiple Price List

An Experimental Investigation of Price Dispersion and Cycles // July 9, 2020

Abstract

We report a continuous time laboratory experiment studying the classic Burdett and Judd (1983) model, which features a unique Nash equilibrium (NE) that has dispersed prices. Adaptive dynamics predict that the NE is stable for one parameter set we use, and unstable for another parameter set. We find that time average dispersions are close to the NE distribution for the stable parameter set, but skew towards prices higher than NE for the unstable parameter set. We offer an empirical definition of price cycles in terms of changes over time in robust measures of central tendency (median) and dispersion (interquartile range). By that definition, the data exhibit persistent cycles in both treatments, with larger cycle amplitudes for the unstable parameters.

A slightly updated version is forthcoming in Journal of Political Economy

    Keywords: price dispersion, laboratory experiment, cycles, stability of equilibrium

oTree Markets // March 16, 2020

Abstract

This paper presents oTree Markets, a flexible framework for the construction of market simulation experiments in oTree. oTree Markets provides three components: a Python implementation of a Continuous Double Auction exchange, a reference text-based interface and an oTree layer for communication and state management. These three components are designed with modularity in mind, with the intent that each of them could be replaced with modified versions to suit the needs of a wide variety of market simulation experiments.
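As a rough illustration of the price-time priority matching that a continuous double auction exchange performs, here is a minimal single-unit sketch. It is a generic toy, not the oTree Markets API or its exchange implementation:

```python
from dataclasses import dataclass

@dataclass
class Order:
    trader: str
    price: int
    is_bid: bool

class MiniCDA:
    """Toy continuous double auction book: single-unit orders, price-time priority."""
    def __init__(self):
        self.bids: list[Order] = []  # best (highest price) first
        self.asks: list[Order] = []  # best (lowest price) first

    def submit(self, order: Order):
        same_side, other_side = (self.bids, self.asks) if order.is_bid else (self.asks, self.bids)
        if other_side:
            best = other_side[0]
            crosses = order.price >= best.price if order.is_bid else order.price <= best.price
            if crosses:
                other_side.pop(0)
                return (best.trader, order.trader, best.price)  # trade at the resting order's price
        same_side.append(order)
        same_side.sort(key=lambda o: -o.price if order.is_bid else o.price)  # stable sort keeps time priority
        return None

book = MiniCDA()
book.submit(Order("seller1", 10, is_bid=False))        # ask at 10 rests in the book
print(book.submit(Order("buyer1", 12, is_bid=True)))   # crossing bid trades at 10: ('seller1', 'buyer1', 10)
```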

    Keywords: oTree, Markets, Market Design, Experimental Economics

ESA Priorities 2020 // Nov. 18, 2019

Abstract

Three top current priorities for the Economic Science Association.

    Keywords: ESA priorities, actions

Experimenting with Measurement Error: Comment // Oct. 5, 2019

Abstract

Gillen, Snowberg and Yariv (2019, henceforth GSY) present an estimation technique called ORIV intended to cope with measurement error, and argue that applying ORIV overturns conclusions obtained in some previous laboratory studies. We agree that not properly accounting for measurement error can invalidate inferences made from laboratory data, and that the ORIV technique can help cope with that problem. However, in this Comment, we (a) show that ORIV applied to GSY's data may actually reinforce previous conclusions regarding the elicitation of risk preferences, and (b) offer cautions for applying ORIV more generally.

When Are Mixed Equilibria Relevant? // Aug. 26, 2019

Abstract

Mixed Nash equilibria are a cornerstone of game theory, but their empirical relevance has always been controversial. We study in the laboratory two games whose unique NE is in completely mixed strategies; other treatment variables include the matching protocol (pairwise random vs. population mean-matching), whether time is discrete or continuous, and whether players can specify mixtures or only pure strategies. Comparing point predictions, NE always does better than maximin and often does no worse than Logit QRE. NE predicts better than Center (50-50 mixes) under mean-matching, but otherwise not as well. By contrast, in a dominance solvable game, NE predicts better than alternatives in all treatments. Qualitative and quantitative dynamic models capture regularities across all treatments.

Download from this SSRN link.

    Keywords: Nash equilibrium, minimax, mixed strategy, directional learning, laboratory experiment

Experiments in High-Frequency Trading: Comparing Two Market Institutions // July 13, 2019

Abstract

We implement a laboratory financial market where traders can access costly technology that reduces communication latency with a remote exchange. In this environment, we conduct a market design study on high-frequency trading: we contrast the performance of the newly proposed Frequent Batch Auction (FBA) against the Continuous Double Auction (CDA), which organizes trades in most exchanges worldwide. Our evidence suggests that, relative to the CDA, the FBA exhibits (1) less predatory trading behavior, (2) lower investments in low-latency communication technology, (3) lower transaction costs, and (4) lower volatility in market spreads and liquidity. We also find that transitory shocks in the environment have substantially greater impact on market dynamics in the CDA than in the FBA.

JEL Classification: C91, D44, D47, D53, G12, G14

Forthcoming in Experimental Economics. A preliminary version is downloadable from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3154070

    Keywords: Market Design, Auctions, High-Frequency Trading, Continuous Double Auction, Frequent Batch Auction

Dissecting the Monty Hall Anomaly // Oct. 22, 2017

Abstract

We assess competing explanations of irrational behavior in the Monty Hall problem by creating new variants of the problem. In some variants we pre-process the merging-of-probability-masses step so as to render transparent that switching has a posterior win probability of 2/3. This “Merge” feature also enables systematic variation in informational asymmetry, and in the ordering of actions. Data from 77 subjects, each of whom makes 30 switch/not-switch decisions, indicate that Merge raises the switch rate from under 40% to over 80%. Other features examined have much less impact, indicating that the main source of irrational behavior is failure of Bayesian updating.
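The 2/3 posterior for switching is easy to verify by simulation. A minimal Monte Carlo sketch (illustrative only, not the experimental software used in the paper):

```python
import random

def monty_hall_trial(switch: bool) -> bool:
    """Play one round of the classic Monty Hall game; return True if the player wins the prize."""
    doors = [0, 1, 2]
    prize = random.choice(doors)
    first_pick = random.choice(doors)
    # The host opens a door that holds no prize and is not the player's pick.
    opened = random.choice([d for d in doors if d != first_pick and d != prize])
    final_pick = next(d for d in doors if d not in (first_pick, opened)) if switch else first_pick
    return final_pick == prize

trials = 100_000
for switch in (False, True):
    wins = sum(monty_hall_trial(switch) for _ in range(trials))
    print(f"switch={switch}: win rate ~ {wins / trials:.3f}")  # roughly 1/3 without switching, 2/3 with
```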

This paper is forthcoming in Economic Inquiry.

    Keywords: Monty Hall

A Theoretical Model of the Investors Exchange // July 8, 2017

Abstract

Investors' Exchange LLC (IEX) is a newly approved public exchange that is designed to discourage aggressive high-frequency trading. We explain how IEX differs from traditional continuous double auction markets and present summary data on IEX transactions by trader class and order type. Our primary contribution is a simple analytic model of IEX as a constrained version of the continuous double auction. The model predicts that IEX will generally improve price efficiency and lower transaction costs while increasing delay costs. A subset of the model's predictions is testable in the field or in a laboratory environment.

The paper is downloadable from this SSRN link.

    Keywords: Market design, IEX, lab experiments, high-frequency trading, continuous double auction.

Payoff and Presentation Modulation of Elicited Risk Preferences in MPLs // June 1, 2017

Abstract

Since Holt & Laury (2002), the multiple price list (MPL) procedure has been widely used to elicit individual risk preferences. We assess the impact of varying list order and spacing, and of presentation via text or graphs. Relative to the original MPL baseline, some nonlinear transformations of lottery prices systematically increase elicited risk aversion, while some graphical displays tend to reduce it.

This paper appeared in Journal of the Economic Science Association 3, pp. 183-194 (2017).

    Keywords: Multiple Price List, Elicitation, Risk Aversion, Experiment

How Fundamentalism Takes Root: A Simulation Study // Dec. 21, 2016

Abstract

We report agent-based simulations of religiosity dynamics in a spatially dispersed population. Agents' religiosity responds to neighbors via pairwise interactions as well as via club goods effects. A simulation run is deemed fundamentalist if the final distribution contains a sizable minority of very high religiosity together with a majority of lesser religiosity. Such simulations are more prevalent when parameter values shift from values reflecting traditional societies towards values reflecting the modern world. The simulations suggest that the rise of fundamentalism in the modern world is boosted by greater real income, lower relative prices for secular goods, less substitutability between religious and secular goods, and less time spent with neighbors. Surprisingly, the simulations suggest little direct role for the rise of long distance communication and transportation.

    Keywords: fundamentalism, simulations, agent based models

Multi-product Utility Maximization for Portfolio Recommendation // Dec. 20, 2016

Abstract

Basic economic relations such as substitutability and complementarity between products are crucial for recommendation tasks, since the utility of one product may depend on whether or not other products are purchased. For example, the utility of a camera lens could be high if the user possesses the right camera (complementarity), while the utility of another camera could be low because the user has already purchased one (substitutability). We propose multi-product utility maximization (MPUM) as a general approach to recommendation driven by economic principles. MPUM integrates the economic theory of consumer choice with personalized recommendation, and focuses on the utility of product sets for individual users. MPUM considers what the users already have when recommending additional products. We evaluate MPUM against several popular recommendation algorithms on two real-world E-commerce datasets. Results confirm the underlying economic intuition, and show that MPUM significantly outperforms the comparison algorithms under top-K evaluation metrics.

To be presented at the tenth ACM International WSDM Conference, Cambridge UK (February 2017).

    Keywords: recommender systems, multiproduct utility

Supply Chain Dynamics With Assortative Matching // Nov. 17, 2016

Abstract

by Caichun Chai, Eilin Francis, and Tiaojun Xiao

This paper studies the evolutionarily stable strategies of one-manufacturer, one-retailer supply chains. Each manufacturer and retailer chooses between two pure management strategies: shareholder-oriented or stakeholder-oriented. Based on its management strategy, the firm decides its wholesale or retail price. In this paper, we consider supply chains formed by two matching processes: random matching and assortative matching. Our results indicate that random matching does not support interior Nash equilibria; the evolutionarily stable strategy is for both manufacturer and retailer to choose the shareholder strategy. We extend Bergstrom (2003) to a two-population game, and compare the dynamics of supply chains under random matching and assortative matching. An interior Nash equilibrium is observed with assortative matching. However, this interior equilibrium is unstable. The four pure strategy profiles obtained from combinations of the two strategy choices may be evolutionarily stable for certain values of the indices of assortativity.

    Keywords: evolutionary games, assortative matching, supply chains

Recommendation based on Total Surplus Maximization // May 20, 2016

Abstract

In this paper, we show how to adapt economists' traditional idea of maximizing total surplus (the sum of consumer net benefit and producer profit) to the heterogeneous world of online service allocation, in an effort to promote web intelligence for social good in online ecosystems. Modifications of traditional personalized recommendation algorithms enable us to apply Total Surplus Maximization (TSM) to three very different types of real-world tasks: e-commerce, P2P lending and freelancing. The results for all three tasks suggest that TSM compares very favorably to currently popular approaches, to the benefit of both producers and consumers.

Presented as a long paper at the 25th International World Wide Web Conference, Montreal (April 2016).

Online Ad Auctions: An Experiment // May 14, 2016

Abstract

A human subject laboratory experiment compares the real-time market performance of the two most popular auction formats for online ad space, Vickrey-Clarke-Groves (VCG) and Generalized Second Price (GSP). Theoretical predictions made in papers by Varian (2007) and Edelman et al. (2007) seem to organize the data well overall. Efficiency under VCG exceeds that under GSP in nearly all treatments. The difference is economically significant in the more competitive parameter configurations and is statistically significant in most treatments. Revenue capture tends to be similar across auction formats in most treatments.

    Keywords: Laboratory Experiments, Auction, Online Auctions, Advertising

Boiling Frogs Optimally // March 15, 2016

Abstract

The fraction of a user population willing to tolerate nuisances of size x is summarized in the survivor curve S(x); its shape is crucial in economic decisions such as pricing and advertising. We report a laboratory experiment that, for the first time, estimates the shape of survivor curves in several different settings. Laboratory subjects engage in a series of six desirable activities, e.g., playing a video game, viewing a chosen video clip, or earning money by answering questions. For each activity and each subject we introduce a particular nuisance at a chosen level x in [x_min, x_max], and the subject chooses whether to tolerate the nuisance or to switch to a bland activity for the remaining time. New non-parametric techniques provide bounds on the empirical survivor curves for each activity. Parametric fits of the classic Weibull distribution provide estimates of the survivor curves' shapes. The fitted shape parameter depends on the activity and nuisance, but overall the estimated survivor curves tend to be log-convex. An implication, given the model of Aperjis and Huberman (2011), is that introducing nuisances all at once will generally be more profitable than introducing them gradually.
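As a hedged sketch of the parametric step (generic code, not the authors' estimators, and ignoring the right-censoring of subjects who tolerate the maximum nuisance), a Weibull survivor curve S(x) = exp(-(x/scale)^shape) can be fit by maximum likelihood; a fitted shape parameter below 1 implies a log-convex survivor curve:

```python
import numpy as np
from scipy.stats import weibull_min

# Hypothetical nuisance levels at which subjects switched to the bland activity.
switch_points = np.array([0.4, 0.7, 1.1, 1.6, 2.3, 3.0, 4.8, 6.5, 9.0, 14.0])

# Two-parameter Weibull fit (location fixed at zero): S(x) = exp(-(x / scale) ** shape).
shape, loc, scale = weibull_min.fit(switch_points, floc=0)
print(f"shape = {shape:.2f}, scale = {scale:.2f}")

# log S(x) = -(x / scale) ** shape is convex in x exactly when shape <= 1.
print("log-convex survivor curve" if shape <= 1 else "log-concave survivor curve")
```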

Forthcoming as “Intolerable Nuisances: Some Laboratory Evidence on Survivor Curve Shapes,” in Experimental Economics.

    Keywords: Internet monetization, online advertising, pricing, reference points, adaptation, laboratory experiment

Emergence of Networks and Market Institutions in a Large Virtual Economy // Feb. 29, 2016

Abstract

Starting with a complete set of transactions from an on-line trading community, we construct trader and goods networks, and track them over time using metrics such as node strength, assortativity, betweenness and closeness. The trading platform was designed to make barter exchange as attractive as possible; money was not part of the design and all players were created equal. Yet, within weeks, several specific goods emerged as media of exchange, and various specialized traders appeared. Eventually trade was predominantly money-mediated and market-makers played a major role. Our results illustrate how network analysis can capture the spontaneous emergence of economic institutions.

    Keywords: money as medium of exchange, market makers, virtual economy, market efficiency, network analysis

An Experiment on the Core // Feb. 14, 2016

Abstract

Each of n identical buyers (and m identical sellers) wants to buy (sell) a single unit of an indivisible good. The core predicts a unique and extreme outcome: the entire surplus is split evenly among the buyers when m > n and among the sellers when m < n; the long side gets nothing. We test this core conjecture in the lab with n + m = 3 or 5 randomly rematched traders and minimal imbalances (m = n ± 1) in three market institutions. In the standard continuous double auction, the surplus indeed goes overwhelmingly to the short side. The DA-Chat institution allows traders to engage in cheap talk prior to the double auction, while the DA-Barg institution allows the long siders to negotiate enforceable profit sharing agreements while trading. Despite frequent attempts to collude and occasional large deviations from the core prediction, we find that successful collusion is infrequent in both new institutions. A disproportionate fraction of the successful collusions are accompanied by appeals to fairness.

Forthcoming, 2016, in Games and Economic Behavior

    Keywords: Core, Collusion, Laboratory experiment, Fairness, Market

E-commerce Recommendation with Personalized Promotion // Dec. 20, 2015

Abstract

Most existing e-commerce recommender systems aim to recommend the right products to a consumer, assuming the properties of each product are fixed. However, some properties, including price discount, can be personalized to respond to each consumer's preference. This paper studies how to automatically set the price discount when recommending a product, in light of the fact that the price will often alter a consumer's purchase decision. The key to optimizing the discount is to predict the consumer's willingness-to-pay (WTP), namely, the highest price a consumer is willing to pay for a product. Purchase data used by traditional e-commerce recommender systems provide only points below or above that decision boundary. In this paper we collected training data to better predict the decision boundary. We implement a new e-commerce mechanism adapted from laboratory lottery and auction experiments that elicits a rational customer's exact WTP for a small subset of products, and use a machine-learning algorithm to predict the customer's WTP for other products. The mechanism is implemented on our own e-commerce website that leverages Amazon's data and subjects recruited via Mechanical Turk. The experimental results suggest that this approach can help predict WTP, and boost consumer satisfaction as well as seller profit.
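The elicitation step adapts the classic Becker-DeGroot-Marschak (BDM) logic: the customer states a WTP, a transaction price is drawn at random, and the purchase occurs at the drawn price only if that price does not exceed the stated WTP, so truthful reporting maximizes expected surplus. A minimal sketch of that incentive property (illustrative numbers, not the paper's website implementation):

```python
import random

def bdm_round(stated_wtp: float, true_wtp: float, price_lo: float, price_hi: float) -> float:
    """Return the consumer's surplus from one BDM round with a uniformly drawn price."""
    price = random.uniform(price_lo, price_hi)
    return true_wtp - price if price <= stated_wtp else 0.0  # purchase only if the drawn price is acceptable

true_wtp, trials = 20.0, 200_000
for stated in (10.0, 20.0, 30.0):
    avg = sum(bdm_round(stated, true_wtp, 0.0, 40.0) for _ in range(trials)) / trials
    print(f"stated WTP {stated:5.1f}: average surplus ~ {avg:.2f}")  # highest when stated == true WTP
```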

This paper appeared in Proceedings of the ACM SIGRecSys 2015 Workshops (held in Vienna, September 2015)

    Keywords: e-commerce, WTP, preference elicitation

Morality as a Variable Constraint on Economic Behavior // Nov. 12, 2015

Abstract

In social creatures, evolutionary forces constrain self-interested behavior in various ways. For humans, group interests are served, and individual self interest is constrained, by the moral system -- the shared understanding of proper behavior. This chapter explores the coevolution of human moral systems and market-oriented institutions. It observes that morals constrain economic behavior in many ways that are seldom recognized in traditional economic models but that have considerable practical importance.

The first half of the paper sets the context. All social creatures require some way to resolve the tension between self-interest and group interest. Humans achieve unparalleled degrees of cooperation by means of two distinctive tension reducers. The first is our moral system, and the second is market exchange. The second is particularly important in the modern world, but it still relies on the first, and is constrained by it.

The second half of the paper explores some of those constraints and their economic impact on firms’ pricing decisions, on employee relations, on financial market institutions, and on the existence of markets more generally.

This paper is scheduled to be published as a chapter in Handbook of Behavioral Economics, edited by R. Franz et al (Routledge, 2016).

    Keywords: morals, constraints, markets

Stability in Competition? Hotelling in Continuous Time // Sept. 29, 2015

Abstract

We study Hotelling's classic duopoly location model in continuous time with flow payoffs accumulated over time and the price dimension made explicit. In an experimental setting, subjects chose price and location in treatments varying only by the speed of adjustment. We find that the principle of minimum differentiation generally holds, with little distance between subjects' location decisions. Price decisions, however, tend to be volatile, consistent with theory. Our data also support recent literature finding that the ability to respond quickly increases cooperation.

    Keywords: spatial competition, experiment, continuous time

How moral codes evolve in a trust game // June 3, 2015

Abstract

This paper analyzes the dynamic stability of moral codes in a two-population trust game. Guided by a moral code, members of one population, the Trustors, are willing to punish members of the other population, the Trustees, who defect. Under replicator dynamics, adherence to the moral code has unstable oscillations around an interior Nash Equilibrium (NE), but under smoothed best response dynamics we obtain convergence to Quantal Response Equilibrium (QRE).

Published in Games 2015, 6, 150-160; doi:10.3390/g6020150

    Keywords: Prisoner’s Dilemma; evolutionary stability; moral codes

BABEEW5 Program (2015) // May 2, 2015

Abstract

The program for the fifth Bay Area Behavioral and Experimental Economics Workshop (BABEEW5) hosted at UC Santa Cruz.

Continuous Differentiation: Hotelling Revisits the Lab // Oct. 11, 2014

Abstract

We investigate experimentally the impact of continuous time on a four-player Hotelling location game. The static pure strategy Nash equilibrium (NE) consists of firms paired-up at the first and third quartiles of the linear city. In repeated simultaneous move games (discrete time grid), we fail to obtain convergence to this distinctive NE, as have previous studies. However, with asynchronous moves in continuous time treatments, the NE clearly emerges.

This paper is forthcoming in the second issue of JESA.

Continuous Time and Communication in a Public-goods Experiment // Sept. 5, 2014

Abstract

We investigate the effects of continuous time and communication on contributions in a public-goods experiment with a set of parameters that make cooperation difficult. We vary whether communication amongst the four people in a group is feasible, as well as whether decisions are made in continuous time during a 10-minute interval or only at 10 discrete points of time during this interval. The data show that continuous time leads to an increase in cooperation relative to a standard protocol and to a very substantial increase when subjects are allowed to communicate in an unrestricted manner.

The paper is forthcoming in J Econ Behavior and Organization.

    Keywords: public goods, voluntary contribution mechanism, continuous time games

The Marginal Utility of Money // Aug. 14, 2014

Abstract

We present a rational model of consumer choice, which can also serve as a behavioral model. The central construct is λ, the marginal utility of money, derived from the consumer’s rest-of-life problem. It provides a simple criterion for choosing a consumption bundle in a separable consumption problem. We derive a robust approximation of λ, and show how to incorporate liquidity constraints, indivisibilities, and adaptation to a changing environment. We find connections with numerous historical and recent constructs, both behavioral and neoclassical, and draw contrasts with standard partial equilibrium analysis. The result is a better grounded, more flexible and more intuitive description of consumer choice.

This paper is forthcoming in Theory and Decision.

    Keywords: Budget constraint, Separability, Value for money

Risky Curves: On the Empirical Failures of Expected Utility // May 23, 2014

Abstract

Penultimate versions of two chapters from a book published February 2014 by Routledge. Coauthored with Duncan James, Mark Isaac and Shyam Sunder.

Seven decades ago, von Neumann and Morgenstern proposed curved utility functions for explaining choice under risk, generalizing a suggestion made two centuries earlier by Daniel Bernoulli. That proposal continues to dominate the field, as theorists continue to devise new parameterized curves (e.g., for value from gains and losses, and for cumulative probability) while experimenters devise new protocols to elicit data and report estimates of parameters. From the intense interest in and large volume of this literature, it is easy to get the impression of scientific progress.

In this book we show that the empirical harvest so far has, in fact, been quite meager. Estimated parameters (e.g., risk-aversion coefficients) exhibit remarkably little stability outside the context in which they are fitted. Their power to predict out-of-sample is in the poor-to-nonexistent range, and we have seen no convincing victories over naïve alternatives. Outside the laboratory, expected utility theory and its generalizations have provided surprisingly little insight into economic phenomena such as securities, real estate or labor markets, insurance, gambling, or business cycles. It is perhaps time to ask whether the failure to find stable replicable results IS the result.

Although our main purpose is to raise doubt about the current approach, we do offer some positive suggestions. We reconsider the meaning and measures of risk and of risk aversion; we recommend using the simple expected value criterion, while looking for explanatory power in the constraints and the real options that decision makers face; and we note recent work in evolution, learning, and physiology that someday might lead to a better understanding of, and ability to predict, decisions in an uncertain world.

    Keywords: risk, preferences

2013 Regional ESA Conference Program // Nov. 1, 2013

Abstract

2013 Regional ESA Conference, October 24-26, 2013, Hotel Paradox, Santa Cruz, California. Last revised 08:21:56 PDT, 2013.10.22.

Software for Continuous Game Experiments // Oct. 11, 2013

Abstract

ConG is software for conducting economic experiments in continuous and discrete time. It allows experimenters with limited programming experience to create a variety of strategic environments featuring rich visual feedback in continuous time and over continuous action spaces, as well as in discrete time or over discrete action spaces. Simple, easily edited input files give the experimenter considerable flexibility in specifying the strategic environment and visual feedback. Source code is modular and allows researchers with programming skills to create novel strategic environments and displays.

This paper is forthcoming in Experimental Economics.

    Keywords: Experimental economics, continuous time, software for laboratory experiments

Incomplete Information, Dynamic Stability and the Evolution of Preferences: Two Examples // Sept. 14, 2013

Abstract

We illustrate general techniques for assessing dynamic stability in games of incomplete information by re-analyzing two models of preference evolution, the Arce (2007) Principal-Agent game and the Friedman and Singh (2009) Noisy Trust game. The techniques include extensions of replicator and gradient dynamics, and for both models they confirm local stability of the key static equilibria. That is, we obtain convergence in time average for initial conditions sufficiently near equilibrium values.

The paper is forthcoming in a special issue of Dynamic Games and Applications, 2014.

    Keywords: Stability, Perfect Bayesian Equilibrium, Evolutionary dynamics

Efficient Investment via Assortative Matching in One-Shot Games: Theory and Evidence // Oct. 11, 2012

Abstract

This paper studies pre-commitment investment strategy in a one-shot game, where agents seek to form matches in the presence of an assortative matching rule. We show that a bimodal distribution of investment arises in equilibrium: most players select high levels of investment, achieving a Pareto superior outcome, while a few stay at the lower bound, the trivial NE. The experimental evidence supports our predictions and shows that median investment levels are remarkably high, close to 91 percent of the initial endowment. This result is novel, as one-shot games have generally been unable to produce such high levels of cooperation (efficiency).

    Keywords: Public goods, Assortative Matching, Cooperation, Efficiency

A Continuous Dilemma // Sept. 10, 2012

Abstract

We study prisoner's dilemmas played in continuous time with flow payoffs over 60 seconds. In most cases, the median rate of mutual cooperation rises to 90% or more. Control sessions with 8-time repeated matchings achieve less than half as much cooperation, and cooperation rates approach zero in one-shot control sessions. In follow-up sessions with a variable number of subperiods, cooperation rates increase nearly linearly as the grid size decreases and, with one-second subperiods, they approach the level seen in continuous sessions. Our data support a strand of theory that explains how the capacity to respond rapidly stabilizes cooperation and destabilizes defection in the prisoner's dilemma.

The attached version was published in the AER and won the 2012/13 Exeter Prize.

    Keywords: Prisoner’s dilemma, Game theory, Laboratory experiment, Continuous time game

Cycles and Instability in a Rock-Paper-Scissors Population Game: a Continuous Time Experiment // July 19, 2012

Abstract

We report laboratory experiments that use new, visually oriented software to explore the dynamics of 3 × 3 games with intransitive best responses. Each moment, each player is matched against the entire population, here 8 human subjects. A “heat map” offers instantaneous feedback on current profit opportunities. In the continuous slow adjustment treatment, we see distinct cycles in the population mix. The cycle amplitude, frequency and direction are consistent with standard learning models. Cycles are more erratic and higher frequency in the instantaneous adjustment treatment. Control treatments (using simultaneous matching in discrete time) replicate previous results that exhibit weak or no cycles. Average play is approximated fairly well by Nash equilibrium, and an alternative point prediction, “TASP” (Time Average of the Shapley Polygon), captures some regularities that NE misses.

This paper appeared in Review of Economic Studies 81:1 (2014)

    Keywords: experiments, learning, mixed equilibrium, continuous time

From Imitation to Collusion: Long-run Learning in a Low-Information Environment // July 18, 2012

Abstract

We study long-run learning in an experimental Cournot game with no explicit information about the payoff function. Subjects see only the quantities and payoffs of each oligopolist after every period. In line with theoretical predictions and previous experimental findings, duopolies and triopolies both reach highly competitive levels, with price approaching marginal cost within 50 periods. Using the new ConG software, we extend the horizon to 1,200 periods, far beyond that previously investigated. Already after 100 periods we observe a qualitative change in behavior, and quantity choices start to drop. Without pausing at the Cournot-Nash level quantities continue to drop, eventually reaching almost fully collusive levels in duopolies and often reaching deep into collusive territory for triopolies. Fitted models of individual adjustment suggest that subjects switch from imitation of the most profitable rival to other behavior that, intentionally or otherwise, facilitates collusion via effective punishment and forgiveness. Remarkably, subjects never learn the best-reply correspondence of the one-shot game. Our results suggest a new explanation for the emergence of cooperation.

A revised version was published in Journal of Economic Theory 155:1, pp. 185-205 (2015).

    Keywords: Cournot oligopoly, imitation, learning dynamics, cooperation.

Evolutionary Learning of Policies for MCTS Simulations // June 1, 2012

Abstract

Monte-Carlo Tree Search (MCTS) grows a partial game tree and uses a large number of random simulations to approximate the values of the nodes. It has proven effective in games such as Go and Hex, where the large search space and difficulty of evaluating positions cause difficulties for standard methods. The best MCTS players use carefully hand-crafted rules to bias the random simulations. Obtaining good hand-crafted rules is a very difficult process, as even rules promoting better simulation play can result in a weaker MCTS system [12]. Our Hivemind system uses evolution strategies to automatically learn effective rules for biasing the random simulations. We have built an MCTS player using Hivemind for the game Hex. The Hivemind-learned rules result in a 90% win rate against a baseline MCTS system, and a significant improvement against the computer Hex world champion, MoHex.

Evolutionary Dynamics for Playing the Field // Oct. 16, 2011

Abstract

Any piecewise-smooth, symmetric two-player game can be extended to define a population game in which each player interacts with a large representative subset of the entire population, i.e., is "playing the field." Assuming that players respond to the payoff gradient over a continuous action space, we obtain nonlinear integro-partial differential equations that are often numerically tractable and sometimes analytically tractable. Economic applications include oligopoly, growth theory, and financial bubbles and crashes.

This paper appeared in Journal of Economic Theory 148:2, pp. 743-777 (2013).

    Keywords: Population games, gradient dynamics, shock waves.

Rejection Pattern of Generous Offers in a Three Player Ultimatum Game: A Tale of Two Tails. // Sept. 23, 2011

Abstract

We present a three-player game in which a decision-maker, in the role of referee, accepts or rejects the offer made by a proposer to a passive receiver. The results show a high level of rejection of both selfish and generous offers by the referee. We show that, contrary to the best-known models of social preferences, the referees' decisions are independent of their own payoffs. In addition, we are able to show that concerns about proposers' intentions are secondary when compared to inequality concerns.

    Keywords: inequality, ultimatum game, fairness, experiment

Separating the Hawks from the Doves // July 1, 2011

Abstract

Human players in our laboratory experiment converge closely to the symmetric mixed Nash equilibrium when matched in a single population version of the standard Hawk-Dove game. When matched across two populations, the same players show clear movement towards an asymmetric (and very inequitable) pure Nash equilibrium of the same game. These findings support a distinctive prediction of evolutionary game theory.

This paper is forthcoming in Journal of Economic Theory.

    Keywords: Evolutionary dynamics, Hawk-Dove game, Game theory, Laboratory experiment, Continuous time game

Risky Curves: From Unobservable Utility to Observable Opportunity Sets // June 8, 2011

Abstract

Most theories of risky choice postulate that a decision maker maximizes the expectation of a Bernoulli (or utility or similar) function. We tour 60 years of empirical search and conclude that no such functions have yet been found that are useful for out-of-sample prediction. Nor do we find practical applications of Bernoulli functions in major risk-based industries such as finance, insurance and gambling. We sketch an alternative approach to modeling risky choice that focuses on potentially observable opportunities rather than on unobservable Bernoulli functions.

    Keywords: Expected utility, Risk aversion, St. Petersburg Paradox, Decisions under uncertainty, Option theory

The Liberal Tradition in America Reconsidered // March 2, 2011

Abstract

Utilizing a two-pronged empirical approach, I test the assumption that most Americans are tolerant of growing economic inequality in the United States. My experimental and survey results show that many groups of Americans - e.g., women, Latinos and African-Americans - are more egalitarian than we've been led to expect and are even willing to sacrifice personal gain for the well-being of others. Put another way, the assertion that most Americans are individualistic and attribute differences in socioeconomic outcomes to skill and effort is overstated, as is the notion that Americans largely tolerate inequality in the economic domain, a claim advanced by Hochschild (1981) and later reinforced by Bartels (2008). Scholars in economics and political science have been assuming for over 150 years that Americans tend to think alike on matters of distributive justice. The findings presented in this paper suggest that particular groups in the US are more intolerant of inequality - they often choose in experiments to reduce inequality rather than maximize their own expected income at the individual level.

Human and Artificial Agents in a Crash-Prone Financial Market // Jan. 1, 2010

Abstract

We introduce human traders into an agent based financial market simulation prone to bubbles and crashes. We find that human traders earn lower profits overall than do the simulated agents (“robots”) but earn higher profits in the most crash-intensive periods. Inexperienced human traders tend to destabilize the smaller (10 trader) markets, but have little impact on bubbles and crashes in larger (30 trader) markets and when they are more experienced. Humans’ buying and selling choices respond to the payoff gradient in a manner similar to the robot algorithm. Similarly, following losses, humans’ choices shift towards faster selling.

Published in Computational Economics 36:3, pp. 201-229 (December 2010)

    Keywords: Financial markets, Agent-based models, Experimental economics

Laboratory financial markets // Jan. 1, 2010

Abstract

Small scale financial markets have been studied in the laboratory for more than two decades. Typically 6-20 human subjects buy and sell units of a single asset whose dividends extend over several periods and/or are uncertain. Such markets permit direct observation of informational efficiency and allow sharp tests of theoretical predictions. They also provide test-beds for policy initiatives, new market formats and automated trading strategies.

A slightly improved version of this article appeared as a chapter in the New Palgrave Dictionary of Economics, circa 2010.

Gradient Dynamics in Population Games: Some Basic Results // Oct. 30, 2009

Abstract

When each player in a population game continuously adjusts her action to move up the payoff gradient, then the state variable (the action distribution) obeys a nonlinear partial differential equation. Extending techniques from fluid dynamics, we collect some results on the existence, uniqueness and properties of solutions to such equations, find conditions that render gradient adjustment optimal, find sufficient conditions for asymptotic convergence, and use a local form of Nash equilibrium to characterize the limiting distributions.
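In outline, and in generic notation rather than the paper's, the gradient adjustment rule and the resulting evolution of the action density take the form of a continuity (transport) equation; a sketch assuming a one-dimensional action x with payoff \pi(x, F) given the current action distribution F with density f:

```latex
\frac{dx}{dt} \;=\; \frac{\partial \pi\bigl(x, F(\cdot,t)\bigr)}{\partial x}
\qquad\Longrightarrow\qquad
\frac{\partial f(x,t)}{\partial t}
\;=\; -\,\frac{\partial}{\partial x}\!\left[\, f(x,t)\,
      \frac{\partial \pi\bigl(x, F(\cdot,t)\bigr)}{\partial x} \right],
```

where the nonlinearity arises because the payoff gradient itself depends on the evolving distribution.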

Published in Journal of Mathematical Economics 46.5 (2010): 691-707.

    Keywords: Population games, Gradient dynamics, Potential games

Testing the TASP: An Experimental Investigation of Learning in Games with Unstable Equilibria // Oct. 5, 2009

Abstract

We report experiments designed to test between Nash equilibria that are stable and unstable under learning. The TASP (Time Average of the Shapley Polygon) gives a precise prediction about what happens when there is divergence from equilibrium under fictitious-play-like learning processes. We use two 4×4 games, each with a unique mixed Nash equilibrium; one is stable and one is unstable under learning. Both games are versions of Rock-Paper-Scissors with the addition of a fourth strategy, Dumb. Nash equilibrium places a weight of 1/2 on Dumb in both games, but the TASP places no weight on Dumb when the equilibrium is unstable. We also vary the level of monetary payoffs, with higher payoffs predicted to increase instability. We find that the high payoff unstable treatment differs from the others. Frequency of Dumb is lower and play is further from Nash than in the other treatments. That is, we find support for the comparative statics prediction of learning theory, although the frequency of Dumb is substantially greater than zero in the unstable treatments.

    Keywords: Games, Experiments, TASP, Learning, Unstable, Mixed equilibrium, Fictitious play

Preemption Games: Theory and Experiment // May 31, 2009

Abstract

Several investors face an irreversible investment opportunity whose value V is governed by Brownian motion with upward drift and random expiration. The first investor i to seize the opportunity before expiration receives the current V less a privately known cost C_i; the other investors receive nothing. We characterize Bayesian Nash Equilibrium (BNE) for this game, extending previously known results. We also report a laboratory experiment with 72 subjects randomly matched into 600 triopolies. As predicted in BNE, subjects in triopolies invested at lower values than in monopolies; changes in Brownian parameters significantly altered investment values in monopoly but not in triopoly; and the lowest cost investor in a triopoly usually preempted the others. Evidence was mixed on other BNE predictions, e.g., whether higher cost brings smaller markups. Overall, subjects' earnings came rather close to the BNE prediction.

Forthcoming in American Economic Review

    Keywords: Preemption, Incomplete Information, Irreversible Investment, Laboratory Experiment

Humans, Robots and Market Crashes: A Laboratory Study // March 25, 2009

Abstract

We introduce human traders into an agent based financial market simulation prone to bubbles and crashes. We find that human traders earn lower profits overall than do the simulated agents ("robots") but earn higher profits in the most crash-intensive periods. Inexperienced human traders tend to destabilize the smaller (10 trader) markets, but otherwise they have little impact on bubbles and crashes in larger (30 trader) markets and when they are more experienced. Humans' buying and selling choices respond to the payoff gradient in a manner similar to the robot algorithm. Likewise, following losses, humans' choices shift towards faster selling.

    Keywords: Financial markets, Agent-based models, Experimental economics

Preferences, Beliefs and Equilibrium: What Have Experiments Taught Us? // Sept. 3, 2008

Abstract

The key primitives of microeconomic theory include preferences and beliefs at the individual level, and equilibrium at the aggregate level. In my response to Vernon Smith's target article in JEBO, I focus on what experiments can teach us about these primitives and about the theoretical models that we construct from them.

Equilibrium Vengeance // July 23, 2008

Abstract

This paper introduces two ideas, emotional state dependent utility components (ESDUCs), and evolutionary perfect Bayesian equilibrium (EPBE). Using a simple extensive form game, we illustrate the efficiency-enhancing role of a powerful ESDUC, the vengeance motive. Incorporating behavioral noise and observational noise leads to seven continuous families of (short run) Perfect Bayesian equilibria (PBE) that involve both vengeful and non-vengeful types. We then show that the evolutionary equilibrium concept shrinks the long-run equilibrium set to two points. In one EPBE, only the non-vengeful type survives and there are no mutual gains. In the other EPBE, both types survive and reap mutual gains.

Forthcoming in Games and Economic Behavior.

    Keywords: Reciprocity, Vengeance, Perfect Bayesian equilibrium, Social dilemmas

Bubbles and Crashes - Gradient Dynamics in Financial Markets // July 1, 2008

Abstract

We develop a financial market model focused on fund managers who continuously adjust their exposure to risk in response to the payoff gradient. The base model has a stable equilibrium with classic properties. However, bubbles and crashes occur in extended models incorporating an endogenous market risk premium based on investors' historical losses and constant gain learning. When losses have been small for a long time, asset prices inflate as fund managers adopt riskier portfolios. Then slight losses can trigger a crash, as a widening risk premium accelerates the decline in asset price.

Forthcoming in Journal of Economic Dynamics and Control.

    Keywords: Bubbles, Escape dynamics, Time varying risk premium, Constant-gain learning, Agent-based models

Bubbles and Crashes: a Cyborg Approach // June 17, 2008

Abstract

In joint work since 2004 we have created a family of agent-based models for financial markets in which bubbles and crashes occur in imitation of real markets. The evolution of behavioral rules in these models has shed light on some possible mechanisms used by human account managers or traders. Our programming environment, NetLogo, has proved ideal for this work, and also offers a feature, HubNet, capable of extending simulation to include human as well as robot traders. Recently we have used this feature to test a bubbles and crashes model in a controlled laboratory environment. The experiment uses agent-based modeling to create a virtual financial market where human subjects act as stock market traders alongside automated robots. We use the experimental data to test, first, whether humans adjust their exposure to risk in response to a payoff gradient and, second, whether humans perceive risk by responding to an exponential average of their losses. We find that humans do not follow the gradient exactly but come very close. We also find that humans respond strongly to losses, putting more weight on the most recent losses. However, how they respond to losses depends on the frequency and predictability of crashes.

Forthcoming in Journal of the Calcutta Mathematical Society.

    Keywords: Bubbles, Crashes, Agent-based models, NetLogo, Financial markets, Escape dynamics, Experimental economics

Laboratory financial markets // May 23, 2008

Abstract

An article for the Palgrave Dictionary of Economics, Second Edition.

Small scale financial markets have been studied in the laboratory for more than two decades. Typically 6-20 human subjects buy and sell units of a single asset whose dividends extend over several periods and/or are uncertain. Such markets permit direct observation of informational efficiency and allow sharp tests of theoretical predictions. They also provide test-beds for policy initiatives, new market formats and automated trading strategies.

Buy It Now: A Hybrid Internet Market Institution // May 7, 2008

Abstract

This paper analyzes seller choices and outcomes in approximately 700 Internet auctions of a relatively homogeneous good. The 'Buy it Now' option allows the seller to convert the auction into a posted price market. We use a structural model to control for the conduct of the auction as well as product and seller characteristics. In explaining seller choices, we find that the 'Buy it Now' option was used more often by sellers with higher ratings and offering fewer units; and posted prices were more prevalent for used items. In explaining auction outcomes, we find that auctions with a 'Buy it Now' price had higher winning bids, ceteris paribus, whether or not the auction ended with the 'Buy it Now' offer being accepted, possibly reflecting signaling or bounded rationality. We also find that posting prices, by combining 'Buy it Now' and an equal starting price, was an effective strategy for sellers in the sample.

This paper appeared in the Journal of Electronic Commerce Research, VOL 9, NO 2, 2008.

    Keywords: Market institutions, Posted prices, Auctions, E-commerce

Conspicuous Consumption Dynamics // Dec. 19, 2007

Abstract

We formalize Veblen's idea of conspicuous consumption as two alternative forms of rank-dependent preferences, dubbed envy and pride. Agents adjust consumption patterns gradually, in the direction of increasing utility. From an arbitrary initial state, the distribution of consumption among agents with identical preferences converges to a unique equilibrium distribution. When pride is stronger, the equilibrium distribution has a right-skewed density. When envy is stronger, the equilibrium is concentrated at a single point, and the adjustment dynamics involve a shock wave that can be interpreted as a growing, moving, homogeneous middle class.

This paper is forthcoming in Games and Economic Behavior.

    Keywords: Veblen effects, Gradient dynamics, Shock waves

Revealed Altruism // Dec. 16, 2007

Abstract

This paper develops a theory of revealed preferences over one's own and others' monetary payoffs. We introduce more altruistic than (MAT), a partial ordering over preferences, and interpret it with known parametric models. We also introduce and illustrate more generous than (MGT), a partial ordering over opportunity sets. Several recent discussions of altruism focus on two player extensive form games of complete information in which the first mover (FM) chooses a more or less generous opportunity set for the second mover (SM). Here reciprocity can be formalized as the assertion that an MGT choice by the FM will elicit MAT preferences in the SM and, furthermore, that the effect on preferences is stronger for acts of commission than acts of omission by FM. We state and prove propositions on the observable consequences of these assertions. Then we test those propositions using existing data from investment games with dictator controls and Stackelberg games and new data from Stackelberg mini-games. The test results provide support for the theory of revealed altruism.

The copyright to this Article is held by the Econometric Society. It may be downloaded, printed and reproduced only for educational or research purposes, including use in course packs. No downloading or copying may be done for any commercial purpose without the explicit permission of the Econometric Society. For such commercial purposes contact the Office of the Econometric Society (contact information may be found at the website http://www.econometricsociety.org or in the back cover of Econometrica). This statement must be included on all copies of this Article that are made available electronically or in any other format.

    Keywords: Neoclassical preferences, Social preferences, Convexity, Reciprocity, Experiments

eBay Sellers // Dec. 16, 2007

Abstract

We examine seller tactics in 1177 eBay auctions. The largest volume sellers make rather homogeneous choices; smaller sellers are more heterogeneous. Some tactics, such as starting the auction with a 'Buy it Now' offer, appear to increase revenue. Perhaps due to intense competition, however, the overall impact of most tactics appears to be quite small. The main exception is the use of a secret reserve price, which raises the winning bid conditional on a sale, but reduces the probability of a sale. This can be advantageous, depending on the seller's risk aversion and impatience.

This paper is forthcoming in International Journal of Electronic Business.

    Keywords: Internet auctions, Posted prices, Market institutions, Electronic business, eBay, Seller strategy

A Laboratory Investigation of Networked Markets // Oct. 2, 2007

Abstract

When contracts are not perfectly enforceable, can interpersonal networks improve market efficiency? We introduce exogenous networks into laboratory markets in which traders can cheat in 'distant' transactions but not in 'local' ones. Traders are anonymous outside their network, but inside it they can build a reputation. We examine network configurations that have the potential to completely overcome market failure and achieve competitive equilibrium (CE) efficiency. Our results fall short of that mark, but the networks do significantly reduce cheating and increase efficiency. Moreover, the theoretical upper bounds correctly predict the main qualitative trade patterns across our four network architectures. The networks support increased international trade volume and reduced domestic volume, and divert transactions of the highest value and lowest cost units from domestic markets to international networks.

This paper is forthcoming in Economic Journal

    Keywords: Networks, Laboratory Markets, Market Frictions, Social Capital, Missing Trade Puzzle

Cheating in Markets // July 1, 2007

Abstract

We develop a two-market model under three conditions: autarky, frictionless free trade, and free trade with cheating. Under cheating, in cross market trades the buyer can underpay by x% and the seller can deliver x% less than full value. We fully characterize competitive equilibrium with cheating and obtain novel testable predictions on price, volume and surplus. For example, agents with highest value and lowest cost are predicted to trade exclusively in the domestic market, while agents closer to the margin trade only in the cross market and always cheat; the overall volume is higher (!) than in frictionless free trade; and as x% decreases from 100 to 0, domestic prices move non-linearly from autarky levels to the frictionless free trade level. We test many of these predictions in a laboratory experiment with human traders using parameters intended to challenge the theory. The results are generally consistent with the competitive predictions, in the cheating treatments as well as in the autarky and frictionless free trade treatments. We find evidence of price unification, market segmentation, and an overall volume of trade higher (but a cross market volume lower) under cheating than in frictionless free trade.

This paper is forthcoming in Journal of Economic Behavior and Organization

    Keywords: Cheating, Missing Trade Puzzle, Frictions, Market Experiment

Market theories evolve, and so do markets // June 30, 2007

Abstract

Responding to Mirowski's target article, this paper discusses some intellectual currents of the 1970s-1990s and offers suggestions on measuring market performance, on including automated agents as market participants, on evolving new market formats, and on dealing with highly differentiated goods.

The paper was published in a special issue of the Journal of Economic Behavior & Organization, Vol. 63 (2007) pp. 247-255.

Litigation with symmetric bargaining and two-sided incomplete information // April 1, 2007

Abstract

We construct game theoretic foundations for bargaining in the shadow of a trial. Plaintiff and defendant both have noisy signals of a common-value trial judgment and make simultaneous offers to settle. If the offers cross, they settle on the average offer; otherwise, both litigants incur an additional cost and the judgment is imposed at trial. We obtain an essentially unique NE and characterize its conditional trial probabilities and judgments. Some of the results are intuitive, e.g., an increase in trial cost (or a decrease in the range of possible outcomes) reduces the probability of a trial. Other results reverse findings from previous literature. For example, trials are possible even when the defendant's signal indicates a higher potential judgment than the plaintiff's signal, and when trial costs are low, the middling cases (rather than the extreme cases) are more likely to settle.

Published in Journal of Law, Economics and Organization, 23:1, 98 – 126 (April 2007).

A Laboratory Investigation of Deferral Options // March 21, 2007

Abstract

An irreversible investment opportunity has value $V$ governed by Brownian motion with upward drift and random expiration. Human subjects choose in continuous time when to invest. If she invests before expiration, the subject receives $V - C$: the final value $V$ less a given avoidable cost $C$. The optimal policy is to invest when $V$ first crosses a threshold $V^* = (1+w^*)C$, where the option premium $w^*$ is a specific function of the Brownian parameters representing drift, volatility and discount (or expiration hazard) rate. We ran 80 periods each for 69 subjects. Subjects in the Low $w^*$ treatment on average invested at values quite close to optimum. Subjects in the two Medium treatments and the High $w^*$ treatment invested at values below optimum, but with the predicted ordering, and values approached the optimum by the last block of 20 periods. Behavior was most heterogeneous in the High treatment. Subjects under-respond to differences in both the volatility and expiration hazard parameters. A directional learning model suggests that subjects react reliably to ex-post losses due to early investment, and react much more heterogeneously (and on average more strongly) to missed investment opportunities. Simulations show that this learning process converges on a nearly optimal steady state.
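
As a concrete illustration of the threshold stated above, a minimal sketch follows that computes the option premium under the textbook geometric-Brownian-motion assumption (the standard Dixit-Pindyck formula); the parameter names and functional form here are illustrative assumptions and may differ from the paper's exact specification of $w^*$.

```python
import math

def option_premium(mu, sigma, rho):
    """Textbook real-options premium w* when the project value V follows
    geometric Brownian motion dV = mu*V dt + sigma*V dW and is discounted
    (or expires) at hazard rate rho > mu.  The optimal rule is to invest
    when V first reaches V* = (1 + w*) * C, with w* = 1 / (beta - 1),
    where beta > 1 is the positive root of
        0.5 * sigma^2 * b * (b - 1) + mu * b - rho = 0.
    (Standard Dixit-Pindyck result; an assumption, not the paper's own formula.)
    """
    a = 0.5 * sigma ** 2
    b = mu - a
    beta = (-b + math.sqrt(b ** 2 + 4 * a * rho)) / (2 * a)
    assert beta > 1, "requires rho > mu for a finite threshold"
    return 1.0 / (beta - 1.0)

# Example: modest drift, sizable volatility, 5% expiration hazard.
w_star = option_premium(mu=0.01, sigma=0.20, rho=0.05)
print(f"w* = {w_star:.3f}; invest once V >= {1 + w_star:.3f} * C")
```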

This paper is forthcoming in Review of Economic Studies.

    Keywords: Real options, Optimal stopping, Laboratory experiment, Fractional factorial design

ITEM Proposal // Sept. 1, 2006

Abstract

The project description for the ITEM grant.

Speculative Attacks: A Laboratory Study in Continuous Time // Aug. 1, 2006

Abstract

We test speculative attack models in a controlled laboratory environment featuring continuous time, size asymmetries, and varying amounts of public information. Attacks succeeded in 233 of 344 possible trading periods. When speculators have symmetric size and access to information: (a) weaker (or more rapidly deteriorating) fundamentals increase the likelihood of successful speculative attacks and hasten their onset, and (b) contrary to some theory, public access to information about either the net speculative position or the fundamentals also enhances success. The presence of a larger speculator further enhances success, and experience with large speculators increases small speculators' response to the public information. However, giving the large speculator increased size or better information does not significantly strengthen his impact.

forthcoming, Journal of International Money and Finance.

    Keywords: Currency Crisis, Speculative Attack, Laboratory Experiment, Coordination Game, Pre-emption, Large Player

Chapters from Handbook of Experimental Economics Results // Jan. 1, 2006

Abstract

Collected here are 4 chapters from the long-awaited Handbook:

  1. Equilibrium Convergence, by NB and DF, about learning in games.
  2. Learning to Forecast Price, by HK and DF, about forecasting given two continuous signals.
  3. A Comparison of Market Institutions, by TC and DF, about the CDA and Call market, and intermediate institutions.
  4. The Matching Market, by CR and DF, about a peculiar variant of the Call market.

These chapters were written around 1998 but were published only in 2008.

Markups in Double Auction Markets // Oct. 1, 2005

Abstract

We study the continuous double auction market with simulated traders using various markup rules. A higher markup trades off increased profitability against reduced probability of a transaction. The tradeoff in Nash equilibrium turns out to be remarkably close to the most efficient tradeoff. This may partially explain the mysterious efficiency of double auction markets.
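
To make the profitability-versus-trade-probability tradeoff concrete, here is a toy sketch; it uses a one-shot posted-ask encounter with uniform values and costs rather than the paper's continuous double auction simulation, so the setup and numbers are illustrative assumptions only.

```python
import random

def expected_profit(markup, trials=100_000, seed=0):
    """Toy illustration of the markup tradeoff (not the paper's CDA simulation):
    a seller with uniform(0,1) cost posts ask = cost * (1 + markup); a buyer
    with uniform(0,1) value accepts whenever value >= ask.  A higher markup
    earns more per trade but trades less often."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        cost = rng.uniform(0.0, 1.0)
        value = rng.uniform(0.0, 1.0)
        ask = cost * (1.0 + markup)
        if value >= ask:          # trade occurs at the posted ask
            total += ask - cost   # seller's profit on this trade
    return total / trials

for m in (0.0, 0.1, 0.25, 0.5, 1.0):
    print(f"markup {m:.2f}: expected profit per period {expected_profit(m):.4f}")
```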

This paper appeared in Journal of Economic Dynamics and Control 31:9, 2984-3005 (September 2007)

    Keywords: Markup, Simulations, Continuous double auction

Searching for the Sunk Cost Fallacy // Oct. 1, 2005

Abstract

We isolate in the laboratory factors that encourage and discourage the famous but elusive sunk cost fallacy. Subjects play a computer game in which they decide whether to keep digging for treasure on an island or to sink a cost (which will turn out to be either high or low) to leave for another island. The research hypothesis is that subjects will stay longer on islands that were more costly to find. Eleven treatment variables are considered, including alternative displays, whether the treasure value of an island is shown on arrival or discovered by trial and error, and cost and other parameters. We detect only a small sunk cost effect that seems insensitive to the hypothesized psychological drivers of self-justification and loss aversion.

This paper appeared in Experimental Economics 10:1, 79-104 (March 2007)

    Keywords: Sunk costs, Sunk cost fallacy, Search, Self-justification, Loss aversion

Financial Engineering and Rationality: Experimental Evidence Based on the Monty Hall Problem // July 1, 2005

Abstract

Financial engineering often involves redefining existing financial assets to create new financial products. This paper investigates whether financial engineering can alter the environment so that irrational agents can quickly learn to be rational. The specific environment we investigate is based on the Monty Hall problem, a well-studied choice anomaly. Our results show that, by the end of the experiment, the majority of subjects understand the Monty Hall anomaly. Average valuation of the experimental asset is very close to the expected value based on the true probabilities.

forthcoming, Journal of Behavioral Finance.

A tractable model of reciprocity and fairness // April 12, 2005

Abstract

We introduce a parametric model of other-regarding preferences in which my emotional state determines the marginal rate of substitution between my own and others' payoffs, and thus my subsequent choices. In turn, my emotional state responds to relative status and to the kindness or unkindness of others' choices. Structural estimations of this model with six existing data sets demonstrate that other-regarding preferences depend on status, reciprocity, and perceived property rights.

Published in Games and Economic Behavior 59:1, 17-45 (April 2007).

Cheating in Markets: A Methodological Exploration // Dec. 1, 2004

Abstract

In the 1970s, experimental economics split from social psychology by embracing rational choice and equilibrium methods. Behavioral economics has recently narrowed the divide, to the dismay of some. The present paper argues that evolutionary dynamics provides a framework which unifies the best features of social psychology with equilibrium and rational choice. Ongoing research on cheating in markets illustrates the main points. A new equilibrium model provides distinctive testable predictions under three regimes: autarky, frictionless free trade, and anonymous foreign trade with opportunities to cheat. The predictions organize the data collected so far quite well. Later phases of the project will allow trader networks to evolve, altering the market institution and perhaps affecting preferences. Thus the major forces recognized by social psychologists can be combined with rationality and equilibrium to study how markets respond to the risk of cheating.

This paper appeared as a chapter in S. Oda, editor, Developments on Experimental Economics, Springer Lecture Notes in Economics and Mathematical Sciences #590, July 2007.

Internet Congestion: A Lab Experiment // April 30, 2004

Abstract

Human players and automated players (bots) interact in real time in a congested network. A player's revenue is proportional to the number of successful ''downloads'' and his cost is proportional to his total waiting time. Congestion arises because waiting time is an increasing random function of the number of uncompleted download attempts by all players. Surprisingly, some human players earn considerably higher profits than bots. Bots are better able to exploit periods of excess capacity, but they create endogenous trends in congestion that human players are better able to exploit. Nash equilibrium does a good job of predicting the impact of network capacity and noise amplitude. Overall efficiency is quite low, however, and players overdissipate potential rents, i.e., earn lower profits than in Nash equilibrium.

Published in R. Zwick, ed., Experimental Business Research, Vol II, New York: Kluwer, October 2005. A shorter version of the same article, aimed at computer scientists, appeared in Proceedings of the ACM SIGCOMM 2004 Workshops, ACM Press, NY (ISBN: 1-58113-942-X), 177-182.

Negative Reciprocity: The Coevolution of Memes and Genes // April 1, 2004

Abstract

A preference for negative reciprocity is an important part of the human emotional repertoire. We model its role in sustaining cooperative behavior but highlight an intrinsic free-rider problem: the fitness benefits of negative reciprocity are dispersed throughout the entire group, while the fitness costs are borne personally. Evolutionary forces tend to unravel people's willingness to bear the personal cost of punishing culprits. In our model, the countervailing force that sustains negative reciprocity is a meme consisting of a group norm together with low-powered (and low-cost) group enforcement of the norm. The main result is that such memes coevolve with personal tastes and capacities so as to produce the optimal level of negative reciprocity.

A slightly revised version of this paper appeared in Evolution and Human Behavior, 25(3), 155-173 (May 2004).

    Keywords: Altruism, reciprocity, negative reciprocity, coevolution

Vengefulness Evolves in Small Groups // Jan. 1, 2004

Abstract

We discuss how small group interactions overcome evolutionary problems that might otherwise erode vengefulness as a preference trait. The basic viability problem is that the fitness benefits of vengeance often do not cover its personal cost. Even when a sufficiently high level of vengefulness brings increased fitness, at lower levels, vengefulness has a negative fitness gradient. This leads to the threshold problem: how can vengefulness become established in the first place? If it somehow becomes established at a high level, vengefulness creates an attractive niche for cheap imitators, those who look like highly vengeful types but do not bear the costs. This is the mimicry problem, and unchecked it could eliminate vengeful traits. We show how within-group social norms can solve these problems even when encounters with outsiders are also important.

This paper appeared as a chapter in the Werner Guth festschrift: S. Huck, ed., Advances in Understanding Strategic Behavior, NY: Palgrave Macmillan, 2004.

    Keywords: second order free riding, social norms, reciprocity, vengeance

Asset Market Experiments // May 23, 2003

Abstract

This 2003 article for the Encyclopedia of Cognitive Science discusses laboratory evidence connecting cognitive biases to asset market prices.

    Keywords: cognitive biases, asset markets

Dynamics of Price Dispersion // March 1, 2003

Abstract

Hypotheses on the dynamics of dispersed prices are extracted from computer simulations, as well as traditional and recent theory. The hypotheses are tested on existing laboratory data. As predicted in some variations of the Edgeworth hypothesis, the laboratory data exhibit a significant cycle. Relative to the unique stationary distribution, the empirical distribution of posted prices has excess mass in an interval that moves downward over time until it approaches the lower boundary of the stationary distribution. Then the excess mass jumps upward and the downward cycle resumes. The amplitude of the cycle seems fairly constant over the longer experimental sessions. Of the simulations we consider, the one closest to Edgeworth's 1925 account, a hybrid of gradient dynamics and logit dynamics, seems to best reproduce the observed dynamics.

Published in Journal of Economic Dynamics and Control, 29(4), 801-822 (April 2005).

Evolution and Negative Reciprocity // Dec. 1, 2001

Abstract

We offer a theoretical explanation of negative reciprocity or vengeance, the human desire to harm those who have harmed us. Our model shows how negative reciprocity can be sustained by the coevolution of genes that determine the capacity for vengeance and group memes (e.g., social norms) that regulate its expression. The model begins with a standard free rider game that captures, simply and directly, a personal cost incurred to reap social gains. The model shows that a taste for vengeance realigns incentives and supports a socially efficient equilibrium, but that by itself the taste for vengeance is not evolutionarily viable. We then show how groups of individuals can use low-power sanctions (or simply status changes) to enforce a particular norm on the proper degree of vengeance. The main result is that actual behavior typically will fall short of the norm, but selection across groups will adjust the norm so that actual behavior maximizes the fitness of group members.

A lightly edited version of this paper appeared in Y. Aruka, ed., Evolutionary Controversies in Economics, Springer, 2001.

    Keywords: Altruism, reciprocity, negative reciprocity, coevolution

Calendar Auction White Paper // March 25, 2001

Abstract

White paper written for defunct startup company One Day Free. The paper describes their proposed Dynamic Price Calendar Auction[TM], a new electronic format that combines features of traditional descending (Dutch) and ascending (English) auctions. It allows sellers to auction multiple units of a good or service, and offers buyers the immediacy of posted offer markets together with full price transparency.

    Keywords: auctions, e-commerce

Contagion of Financial Crises under Local and Global Networks // Jan. 1, 2001

Abstract

As the world economy becomes increasingly global, will the financial sector become more stable or more fragile? In this paper we study how the pattern of relations linking financial institutions - the network - affects the diffusion of a financial crisis. We analyze two such networks with a computational model: the local network, in which each bank is allowed to interact only with its most immediate neighbors, and the global network, in which each bank is allowed to interact with banks located anywhere in the system. We find that the network matters both for the amount of illiquidity in the system and for the spread of bankruptcy. When interactions are local, bankruptcy spreads slower but illiquidity hits harder. When interactions are global, bankruptcy spreads faster, but illiquidity presents fewer problems. We explain our results by applying tools from graph theory. We conclude that a global system, in which financial institutions are not restricted to interact only with close neighbors, is more efficient in collecting and allocating funds, but is more vulnerable to contagion of bankruptcy crises.

ZippedData for Customer Market Experiments // Jan. 1, 2001

Abstract

The data below are attached to many papers in the project "Customer Markets // 1997 - 2001 // SBR 9617917".

Bargaining versus Posted Price Competition in Customer Markets // Sept. 1, 2000

Abstract

We compare posted price and bilateral bargaining (or haggle) market institutions in 12 pairs of laboratory markets. Each market runs 50-75 periods in a customer market environment, where buyers incur a cost to switch sellers. Costs evolve following a random walk process. Coasian and New Institutionalist traditions provide competing conjectures on relative market performance. We find that efficiency is lower, sellers price higher, and prices are stickier under haggle than under posted offer.

Published in International Journal of Industrial Organization, 21(2), 223-251 (February 2003).

Towards Evolutionary Game Models of Financial Markets // Aug. 1, 2000

Abstract

Evolutionary game models analyze strategic interaction over time; equilibrium emerges (or fails to emerge) as players/traders adjust their actions in response to the payoffs they earn. This paper sketches some early and some recent evolutionary game models that contain ideas useful in modeling financial markets. It spotlights recent work on adaptive landscapes. In an extended example, the distribution of player/trader behavior obeys a variant of Burgers' partial differential equation, and solutions involve travelling shock waves. It is conjectured that financial market crashes might insightfully be modeled in a similar fashion.
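
For reference, the standard viscous Burgers' equation is written below; the model in the paper works with a variant of this form, so the exact equation there may differ.

```latex
% Standard (viscous) Burgers' equation for a field u(x,t); the paper uses
% a variant of this form.  As the viscosity nu -> 0, solutions can steepen
% into the travelling shock waves mentioned in the abstract.
\[
  \frac{\partial u}{\partial t} + u\,\frac{\partial u}{\partial x}
  = \nu\,\frac{\partial^{2} u}{\partial x^{2}} .
\]
```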

A slightly modified version appears in Quantitative Finance 1:1 (Jan 2001), pp. 177-185. Reprinted in Beyond Equilibrium and Efficiency, D. Farmer and J. Geanakoplos (eds), Oxford University Press, 2002.

Laboratory Study of Customer Markets // April 1, 2000

Abstract

In our laboratory customer markets, sellers post price and buyers incur cost (controlled at zero, low and high values) when they switch to a new seller. Sellers' production costs follow various random walks in 28 sessions, each with 50-100 trading periods. We find that prices are sticky, and sellers absorb almost half of their cost shocks. Sellers price about 10 percent higher when buyers face either high or low switch costs, and trading efficiency is slightly impaired. Experienced buyers switch about 10 percent of the time with either high or low switch costs. Buyers switch more often when they face a higher posted price, have a lower valuation for the good, face lower switch costs, have more time remaining, and have more favorable information on alternative prices. Sellers price higher when they have more attached buyers, when buyers have less information on rivals' prices, when rivals post higher prices, and when less time remains.

Abstract

Published in JET. Posted offer markets with costly buyer search are investigated in 18 laboratory sessions. Each period sellers simultaneously post prices. Then each buyer costlessly observes one or (with probability 1-q) two of the posted prices, and either accepts an observed price, drops out, or pays a cost to search again that period. The sessions vary q, the search cost, the number of sellers, and the number and kind of buyers. Observed transaction prices conform fairly closely to theory (specific unified prices for q=0 and 1 and specific distributions of dispersed prices for q=1/3 and 2/3) when there are more sellers and when there are more buyers (especially robot buyers). With smaller numbers of traders we often see conscious parallelism in pricing behavior.

On the Viability of Vengeance // May 1, 1999

Abstract

Using a simple symmetric game to illustrate, we point out shortcomings in previous theoretical accounts of how the human vengeance motive survives despite a free rider problem. We offer a new theoretical explanation involving the coevolution of genes that determine the capacity for vengeance and memes that regulate its expression. The main result, illustrated in a simple parametric example, is that coevolution in a fixed environment circumvents the free rider problem and that the prevailing vengeance level will efficiently serve groups of individuals.

Evolutionary Economics Goes Mainstream: A Review of the Theory of Learning in Games // March 1, 1999

Abstract

Evolutionary economics in recent decades has defined itself as a radical alternative to mainstream economics. The mainstream studies simple interactions of unboundedly clever agents, and assumes that the agents instantaneously achieve mutual consistency, as in competitive equilibrium or Nash equilibrium. The intent is to illuminate the fundamental role of tastes and technology in determining economic outcomes, and to reveal unsuspected consequences of policies that alter private opportunities and incentives (e.g., Mas-Colell, Whinston and Green, 1995).

Abstract

Posted offer markets with costly buyer search are investigated in 18 laboratory sessions. Each period sellers simultaneously post prices. Then each buyer costlessly observes one or two of the posted prices and either accepts an observed price, drops out, or pays a cost to search again that period. The sessions vary the number of observed prices (one or two), the search cost, and the number and kind of buyers. When there are more buyers (especially robot buyers), observed transaction prices conform remarkably closely to theory (competitive Bertrand prices when buyers observe two prices and monopoly Diamond prices when buyers observe only one price). With human subject buyers we observe less extreme prices, but outcomes are closer to theory than outcomes in previous laboratory experiments with similar environments.

Learning in a Laboratory Market with Random Supply and Demand // Jan. 1, 1999

Abstract

We propose a quantitative version of directional learning in a call market. Buyers (resp. sellers) adjust markdown (or markup) relative to true value (or cost) towards the ex-post optimum after each period. The model explains a considerable portion of behavior in a relevant laboratory experiment.
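
A minimal sketch of the kind of quantitative adjustment rule described above appears below; the partial-adjustment functional form and the speed parameter lam are illustrative assumptions, not the paper's estimated model.

```python
def update_markdown(markdown, ex_post_optimal, lam=0.3):
    """Directional learning sketched as partial adjustment: after each call
    market period, the buyer moves her markdown (bid shading relative to true
    value) a fraction lam of the way toward the markdown that would have been
    ex-post optimal that period.  The form and lam are assumptions, not the
    paper's estimates; sellers' markups would update symmetrically."""
    return markdown + lam * (ex_post_optimal - markdown)

# Example: a buyer who shades bids by 20% learns the ex-post optimum was 10%.
m = 0.20
for period in range(1, 6):
    m = update_markdown(m, ex_post_optimal=0.10)
    print(f"period {period}: markdown = {m:.3f}")
```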

    Keywords: Laboratory Experiment, Call market, Auction, Bidding

Learning to Forecast Price // Jan. 1, 1999

Abstract

We study human learning in an individual choice laboratory task called Orange Juice Futures price forecasting (OJF), in which subjects must implicitly learn the coefficients of two independent variables in a stationary linear stochastic process. The 99 subjects each forecast in 480 trials with feedback after each trial. Learning is tracked for each subject by fitting the forecasts to the independent variables in a rolling regression. Results include: (1) learning is fairly consistent in that coefficient estimates for most subjects converge closely to the objective values, but there is a mild general tendency toward over-response. (2) Typically learning is noticeably slower than the Marcet-Sargent ideal. Among the more striking treatment effects are a general tendency towards (3) over-response with high background noise and (4) under-response with asymmetric coefficients.

Matching Market Institution: A Laboratory Investigation // Dec. 12, 1998

Abstract

The matching market (MM) trading institution matches the buyer with the highest bid to the willing seller with the highest ask, the second highest bidder to the second highest remaining willing seller, etc., until no seller remains who is willing to transact at any remaining buyer's bid. Our laboratory study shows that, compared to uniform pricing, the MM institution results in lower efficiency, more variable trading volume, and less value revelation.

    Keywords: call market, matching market

A Comparison of Learning and Replicator Dynamics Using Laboratory Data // Nov. 22, 1998

Abstract

We compare the explanatory power of replicator dynamics with belief learning dynamics using laboratory data in a variety of games. Overall, and especially in two-population games, the belief learning model fits the data better.

    Keywords: Belief learning, Experimental data, Mixed Nash equilibrium, Replicator dynamics

On economic applications of evolutionary games // March 1, 1998

Abstract

Evolutionary games have considerable unrealized potential for modeling substantive economic issues. They promise richer predictions than orthodox game theoretic models but often require more extensive specification. This article exposits the specification of evolutionary game models and classifies their asymptotic behavior in one and two dimensions.

    Keywords: Evolutionary games, Adjustment dynamics, ESS, Evolutionary equilibrium

Broadening the Tests of Learning Models // Jan. 1, 1998

Abstract

For many years psychological studies of the learning process have used a simulated medical diagnosis task in which symptom configurations are probabilistically related to diseases. Participants are given a set of symptoms and asked to indicate which disease is present, and feedback is given on each trial. We enrich this standard laboratory task in four different ways. First, the symptoms have four possible values (low, medium low, medium high, and high) rather than just two. Second, symptom configurations are generated from an expanded factorial design rather than a simple factorial design. Third, subjects are asked to make a continuous judgment indicating their confidence in the diagnosis, rather than simply a binary judgment. Fourth, cumulated performance scores, payoffs, and the availability of a historical summary of the outcomes are varied in order to assess how these treatments modulate performance. These enrichments provide a broader data set and more challenging tests of the models. Using 123 subjects each in 480 trials, we compare five existing learning models plus several variants, including the well-known Bayesian, fuzzy logic, connectionist, exemplar, and ALCOVE models. We find that the subjects do learn to distinguish the symptom configurations, that subjects are quite heterogeneous in their response to the task, and that only a small part of the variation across subjects arises from the differences in treatments. The most striking finding is that the model that best predicts subjects' behavior is a simple Bayesian model with a single fitted parameter for prior precision to capture individual differences. We use rolling regression techniques to elucidate the behavior of this model over time and find some evidence of over-response to current stimuli.

Monty Hall's Three Doors: Construction and Deconstruction of a Choice Anomaly // Jan. 1, 1998

Abstract

Sometimes people make decisions that seem inconsistent with rational choice theory. We have a choice anomaly when such decisions are systematic and well documented. From a few isolated examples such as the Maurice Allais (1953) paradox and the probability matching puzzle of William K. Estes (1954), the set of anomalies expanded dramatically following the work of Daniel Kahneman and Amos Tversky (e.g., 1979).

Understanding Variability in Binary and Continuous Choice // Jan. 1, 1998

Abstract

Excessive variability in binary choice (categorical judgment) can take the form of probability matching rather than the normatively correct behavior of deterministically choosing the more likely alternative. Excessive variability in continuous choice (judgment rating) can take the form of underconfidence, understating the probability of highly likely events and overstating the probability of very unlikely events. We investigate the origins of choice variability in terms of noise prior to decision (at the evidence stage) and at the decision stage. A version of the well-known medical diagnosis task was conducted with binary and continuous choice on each trial. Noise at the evidence stage was reduced by allowing subjects to view historical summaries of prior relevant trials, and noise at the decision stage was reduced by giving subjects a numerical score based on their continuous choice and the actual outcome. Both treatments greatly reduced variability. Cash payments based on the numerical score had a less reliable incremental effect in our experiment. The overall results are more consistent with a logit model of decision than with a simple criterion (or maximization) rule or a simple probability-matching rule.

Motivation and Coordination Games; Experiencing Organizational Dynamics // Sept. 1, 1997

Abstract

The motivation and coordination games covered here are used in the Managerial Economics class taught at UC Santa Cruz. Both games provide students with a hands-on way to experience the differences between problems of motivation and coordination, a distinction which many undergraduates do not immediately understand.

Both games are conducted in class and they have a short follow-up assignment that is announced after the game is finished. This assignment is meant to help the students understand what they have been doing and why the two games are different.

In the coordination game, the students have a common interest (the equilibria are Pareto ranked, and one is efficient). The problem is aligning expectations (and actions). Generally, the students initially settle on an inefficient equilibrium. Direct communication then allows them to coordinate on the Pareto efficient equilibrium without the need for binding commitments.

In contrast, in the motivation game, the players have a personal interest diametrically opposed to the common interest (a sort of multilateral prisoner's dilemma). By playing the game, students come to realize how difficult it can be to achieve cooperation when the benefits of defection are great. Even in the classroom, it seems impossible to reach the Pareto optimal outcome without some kind of binding agreement.

Price Formation in Single Call Markets // March 1, 1997

Abstract

This paper reports a laboratory experiment designed to examine the price formation process in a simple market institution, the single call market. The experiment features random values and costs each period, so each period generates a new price formation observation. Other design features are intended to enhance the predictive power of the Bayesian Nash equilibrium (BNE) theory developed recently for this trading institution. We find that the data support several qualitative implications of the BNE, but that subjects' bid and ask behavior is not as responsive to changes in the pricing rule as the BNE predictions. Bids and asks tend to reveal more of the underlying values and costs than predicted, particularly when subjects are experienced. Nevertheless, observed trading efficiency falls below the BNE prediction. The results offer more support for the BNE when subjects compete against Nash "robot" opponents. A simple learning model accounts for several of the deviations from BNE.

Evolving Landscapes for Population Games // Feb. 1, 1997

Abstract

We consider population games where the possible actions of each player are labeled by a real number that ranges over a finite interval. The adjustment dynamics of such games can be visualized in terms of the ``landscape'' - the graph of the payoff (or fitness) function. A leading example is gradient dynamics, in which the speed with which a player changes action is proportional to the gradient (or slope) of the landscape at his current action. The time behavior of the action distribution in gradient dynamics is described by a class of nonlinear partial differential equations. Cases are exhibited in which the distribution of actions develops compressive and rarefaction shock waves. We discuss connections to the learning in games literature and to replicator and other monotone (or order compatible) dynamics. Applications are suggested in economics and population biology.
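
One generic way to write the kind of equation the abstract refers to: if the payoff landscape is $\varphi(x,t)$ and each player's action drifts at a speed proportional to its local slope, conservation of players gives a transport equation for the action density. This is a hedged illustration of gradient dynamics, not necessarily the paper's exact class of equations.

```latex
% Gradient dynamics on a payoff landscape phi(x,t): each player's action
% moves at speed proportional to the local slope, dx/dt = k * d(phi)/dx.
% Mass conservation then yields a transport (continuity) equation for the
% action density f(x,t).  (Generic form; the paper's equations may differ.)
\[
  \frac{\partial f}{\partial t}
  + \frac{\partial}{\partial x}\!\left( k\,\frac{\partial \varphi}{\partial x}\, f \right) = 0 .
\]
```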

An Empirical Analysis of Price Formation in Double Auction Markets // Jan. 1, 1997

Abstract

This chapter from Friedman and Rust (1993) compares the explanatory power of three models of price formation in DA markets. Bid-ask sequences in the laboratory data are best explained by the Bayesian Game against Nature model, and the negative autocorrelation in transaction price changes is best explained by the Zero Intelligence model.

    Keywords: price formation, BGAN, WGDA, ZI

Individual Learning in Normal Form Games: Some Laboratory Results // Jan. 1, 1997

Abstract

We propose and test a simple belief learning model. We find considerable heterogeneity across individual players; some players are well described by fictitious play (long memory) learning, other players by Cournot (short memory) learning, and some players are in between. Representative agent versions of the model fit significantly less well and sometimes point to incorrect inferences. The model tracks players' behavior well across a variety of payoff matrices and information conditions.

Learning Liability Rules // Jan. 1, 1997

Abstract

We conduct experiments regarding the equilibrium and convergence properties of three different liability rules: negligence with contributory negligence, comparative negligence, and no-fault. Our experimental results show that, in comparison to contributory negligence, comparative negligence promotes a faster and more reliable convergence to the efficient equilibrium. Furthermore, as predicted by theory, the no-fault equilibrium yields suboptimal amounts of effort. Along the way we also test various hypotheses regarding learning and other adjustment dynamics. Thus our article extends the traditional static notion of institutional choice—liability rules with efficient equilibria are chosen—to a more dynamic perspective—rules that rapidly achieve efficient equilibria are chosen.

International Trade and the Internal Organization of Firms: An evolutionary approach // June 21, 1996

Abstract

Masahiko Aoki and others have distinguished two alternative modes for a firm’s internal organization. We argue that the profitability of each mode depends on the distribution of firms across modes and on the general economic environment. We characterize the evolutionary equilibria in both a parametric and a general model, and argue that corner equilibria predominate. We analyze the effects of trade between the two countries in (a) outputs only and (b) inputs (factors) as well as outputs. Our most striking conclusion is that in case (b) a less efficient mode can displace a more efficient mode when trade barriers are sufficiently low.

    Keywords: Japanese corporation, Evolutionary games, Internal organization of firms, Factor movement

Price formation in double auction markets // June 6, 1996

Abstract

This paper reports 14 laboratory experiments that examine existing theories of the price formation process in the continuous double auction. The experiments feature random values and costs, and therefore a new price formation observation each period. We find that efficiency is high and rises with experience in this environment. We also find that trades with greater exchange surplus tend to occur earlier in the period, that increased trader experience reduces an anomalous intertemporal arbitrage opportunity observed previously, and that only when traders are very experienced does exchange surplus accrue disproportionately to the side of the market with a smaller number of traders.

The Factory System // May 1, 1996

Abstract

Rough notes for modelling production in extensive form

An Empirical Analysis of Price Formation in Double Auction Markets // Jan. 8, 1996

Abstract

Chapter 9 in Friedman and Rust 1993 volume on the Double Auction Market. This chapter compares the ability of three theoretical models -- Wilson's Waiting Game/ Dutch Auction model, Friedman's Bayesian Game Against Nature model, and Gode and Sunder's Zero Intelligence model -- to account for empirical regularities in lab data. The second model does the best job of predicting bid/ask sequences, and the third model correctly predicts the negative serial correlation in prices. All three models have the same predictions on efficiency and transactions order, and those predictions organize the data reasonably well.

    Keywords: CDA, price formation

Double Auction Markets book Ch 1 // Jan. 1, 1996

Abstract

Survey of theoretical and laboratory studies of the double auction market institution. Chapter 1 in the 1993 book by D. Friedman and J. Rust. The classification of market institutions in Table 1 and Figure 1 may be helpful.

    Keywords: DAsurvey

Equilibrium in Evolutionary Games: Some Experimental Results // Jan. 1, 1996

Abstract

Evolutionary game theory informs the design and analysis of 26 experimental sessions using normal form games with 6 - 24 players. The state typically converges to the subset of Nash equilibria called evolutionary equilibria, especially under conditions of mean matching and history. Mixed strategy equilibria are explained better by 'purification' strategies than by homogenous independent individual randomisation. The risk dominance criterion fares poorly in some coordination game environments. With small player populations and large gains to cooperative behaviour, some players apparently attempt to influence other players' actions, contrary to a key theoretical assumption.

ZippedData for Price Formation and Learning Experiments // Jan. 1, 1996

Abstract

Attached is data associated with the project "Price Formation and Learning".

Why voters vote for incumbents but against incumbency: A rational choice explanation // June 27, 1995

Abstract

In recent elections, voters supported initiatives to limit the number of terms that their representatives may serve, yet at the same time, overwhelmingly re-elected their incumbents. We provide a theoretical explanation for this and other puzzles associated with voting on term limitations. The pattern of voting on term limits can be explained by the desire to redistribute power from one party to another, from one branch of government to another, and from districts with long-term incumbents to districts whose representatives have served only for a short time span. We test these hypotheses by looking at voting patterns on California Proposition 140 and the vote on the 22nd Amendment, with generally positive results.

    Keywords: Term limits; Political redistribution; Rational voter

A Comparison of Learning Models // Jan. 1, 1995

Abstract

We investigate learning in a probabilistic task, called "medical diagnosis." On each trial, a subject is presented with a stimulus configuration indicating the value of four medical symptoms. The subject responds by guessing which of two diseases is present and is then given feedback about which disease was actually present. The feedback is determined according to fixed conditional probabilities unknown to the subject. We test a normative Bayesian model as well as simple variants of well-known psychological models including the Fuzzy Logical Model of Perception, an Exemplar model, a two-layer Connectionist model and an ALCOVE model. Both the asymptotic predictions of these models (i.e., predictions regarding behavior after it has stabilized and learning is complete) and predictions of trial-by-trial changes in behavior are tested. The models are tested against existing data from Estes et al. (1989, Journal of Experimental Psychology: Learning, Memory, & Cognition,15, 556-571) and new data from medical diagnosis tasks that include not only asymmetric but also symmetric base rates. Learning was observed in all cases in that subjects tended to match the objective probabilities of the symptom configurations more closely in later trials. All of the descriptive models give a more accurate account of performance than the normative Bayesian model. Relative to a benchmark measure, however, none of these models does an especially good job of characterizing asymptotic performance or the learning process. We suggest that future experiments should address individual performance, rather than group learning curves.

Competitivity in Auction Markets: An Experimental and Theoretical Investigation // Jan. 1, 1995

Abstract

We report successive rounds of theory and laboratory experiment investigating price-taking behaviour and market efficiency. We focus on the impact of structural parameters as well as the trading institution. The structural parameters involve non-competitive supply and demand, and a new odd-lot trading procedure for divisible goods. We present and justify an as-if complete information theory which explains the competitive outcomes and which correctly predicts highly non-competitive outcomes in CHQ markets.

Privileged Traders and Asset Market Efficiency: A Laboratory Study // Dec. 1, 1993

Abstract

The 39 experiments reported here examine the impact on trading profits and on market performance of awarding special trading privileges to some traders and not others. In call market experiments, the last-mover and orderflow access privileges are both modestly profitable and neither impairs market performance. In continuous market experiments, quicker access to orderflow information is quite profitable and more detailed access is possibly profitable; both privileges seem to enhance market performance slightly. By contrast, privileged marketmaking is extremely profitable and greatly impairs market performance.

How trading institutions affect financial market performance: Some laboratory evidence // Jan. 1, 1993

Abstract

The effects of trading institutions on market efficiency and trading volume are examined. The trading institutions are computerized versions of continuous double auction and "clearinghouse" markets. Traders are experienced, profit-motivated undergraduates. The traded good is a financial asset whose monetary value is state- and trader type-contingent. Traders possess asymmetric private information on asset value. The results show that clearinghouse markets are as informationally efficient as double auction markets and almost as allocationally efficient; the double auction encourages greater trading volume but the clearinghouse provides greater depth; public orderflow information enhances double auction performance but impairs clearinghouse performance.

On evolution and learning in games // Jan. 1, 1993

Abstract

ZippedData for Evolutionary Game Experiments // Jan. 1, 1993

Abstract

Attached is data associated with "Evolutionary Game Experiments" projects.

Inefficient Information Aggregation as a Source of Asset Price Bubbles // Oct. 8, 1992

Abstract

This paper presents a new theory of bubbles, or discrepancies between the market clearing price and the fundamental value of an asset. In our setting, Bayesian traders, oriented towards long-term gains, receive private information (‘news’) and also make inferences from noisy price signals. Price exhibits higher variance than fundamental value (the latter defined as fully-aggregated expected value) especially when news is informative but infrequent. The corresponding bubbles are self-limiting but may exhibit momentum and overshooting. A parametric example, involving the exponential/gamma conjugate families, is provided.

    Keywords: bubbles

The Market Value of Information: Some Experimental Results // April 1, 1992

    Authors: Thomas Copeland

Abstract

We examine the price and allocation of purchased information and of the underlying asset in eight double-auction asset market experiments. Observed outcomes support fully revealing rational expectations in simple environments in which uninformed traders can easily infer the private information of informed traders but support nonrevealing rational expectations in more complex environments. The private value of information is positive in the more complex (noisy) environments, but competition forces the information price to its Nash equilibrium value, and the net gain by purchasers is approximately zero.

Evolutionary Games in Economics // April 13, 1991

Abstract

Evolutionary games are introduced as models for repeated anonymous strategic interaction. The basic idea is that actions (or behaviors) which are more fit, given the current distribution of behaviors, tend over time to displace less fit behaviors. Simple numerical examples motivate the key concepts of fitness function and compatible dynamics, and illustrate the relation to previous biological models. Cone fields are introduced to characterize the continuous-time dynamical processes compatible with a given fitness function. The analysis focuses on dynamic steady state equilibria and their relation to the static equilibria known as NE (Nash equilibrium) and ESS (evolutionary stable state). For large classes of dynamics it is shown that all stable dynamic steady states are NE and that all NE are dynamic steady states. The biologists' ESS condition is less closely related to the dynamic equilibria. The paper concludes with a brief survey of economic applications.

Published in Econometrica (1991): 637-666.

Partial Revelation of Information in Experimental Asset Markets // March 1, 1991

Abstract

We develop a model of market efficiency assuming private information is partially revealed to uninformed traders via the behavior of those who are informed. This partial revelation of information (PRE) model is tested in fourteen computerized double auction laboratory markets. It explains the market value and allocation of purchased information, and asset allocations, better than either a fully revealing information model (FRE strong-form efficiency) or a nonrevealing expectations model; but it takes second place to FRE in explaining asset prices. We conjecture that refined versions of PRE may provide insight into "technical analysis" and minibubbles in securities markets.

A Simple Testable Model of Double Auction Markets // Jan. 1, 1991

Abstract

We propose a model of price formation in Double Auction markets which employs the strong simplifying assumption that agents neglect strategic feedback effects and regard themselves as playing a Game against Nature. Agents otherwise are strict expected utility maximizers employing Bayesian updating procedures. We prove the optimality of simple ('aggressive reservation price') strategies in our general model and propose a parametric form that yields very detailed and computable predictions of market behavior.

Common Value and Private Value Experimental Asset Markets with Inside Information // June 1, 1990

    Authors: Rod Merys

Abstract

Rod Merys's MS thesis. It examines information aggregation and dissemination in a variety of laboratory asset markets in which some traders have better private information than others.

Insider Information in Experimental Markets // June 1, 1990

    Authors: Bret Carthew

Abstract

Bret Carthew's MS thesis. It focuses on the number of insiders (traders with superior private information) and their impact on asset price.

Models of Integration Given Multiple Sources of Information // April 20, 1990

Abstract

Several models of information integration are developed and analyzed within the context of a prototypical pattern-recognition task. The central concerns are whether the models prescribe maximally efficient (optimal) integration and to what extent the models are psychologically valid. Evaluation, integration, and decision processes are specified for each model. Important features are whether evaluation is noisy, whether integration follows Bayes's theorem, and whether decision consists of a criterion rule or a relative goodness rule. Simulations of the models and predictions of the results by the same models are carried out to provide a measure of identifiability or the extent to which the models can be distinguished from one another. The models are also contrasted against empirical results from tasks with 2 and 4 response alternatives and with graded responses.

The S-Shaped Value Function as a Constrained Optimum // Dec. 1, 1989

Abstract

This paper derives an S-shaped value function as the optimal allocation of scarce sensitivity. The value function is similar to that used in Prospect Theory, except with no kink at the reference point. As the constraint relaxes, the value function converges to a classic Bernoulli function.

Producers' Markets // Jan. 1, 1989

Abstract

A model of oligopoly with sales costs. Producers set prices and incur most transactions costs. The paper shows that non-collusive behavior leads to price leadership by the lowest cost producer. The prevailing price will be somewhat sticky. Essentially, the model provides microfoundations for the kinked demand curve model of imperfect competition.

The Effect of Sequential Information Arrival on Asset Prices: An Experimental Study // July 1, 1987

Abstract

A complete understanding of security markets requires a simultaneous explanation of price behavior, trading volume, portfolio composition (i.e., asset allocation), and bid-ask spreads. In this paper, these variables are observed in a controlled setting--a computerized double auction market, similar to NASDAQ. Our laboratory allows experimental control of information arrival--whether simultaneously or sequentially received, and whether homogeneous or heterogeneous. We compare the price, volume, and share allocations of three market equilibrium models: telepathic rational expectations, which assumes that traders can read each other's minds (strong-form market efficiency); ordinary rational expectations, which assumes traders can use (some) market price information (a type of semi-strong form efficiency); and private information, where traders use no market information. We conclude 1) that stronger-form market models predict equilibrium prices better than weaker-form models, 2) that there were fewer misallocation forecasts in simultaneous information arrival (SIM) environments, 3) that trading volume was significantly higher in SIM environments, and 4) that bid-ask spreads widen significantly when traders are exposed to price uncertainty resulting from information heterogeneity.

ZippedData for Financial Market Mechanism Experiments // Jan. 1, 1987

Abstract

Attached is data associated with "Financial Market Mechanisms" papers.

Two Microdynamic Models of Exchange // June 2, 1986

Abstract

Two models are presented of real-time exchange and price adjustment for a system of interrelated markets mediated by trade specialists. The first (MBM) features partially decentralized barter exchange in which specialists continuously maintain inventories and periodically adjust prices. A 'correspondence principle' yields existence and efficiency of (steady-state) equilibrium. Local stability follows from a direct argument. The second model (MMM) features fully decentralized money-mediated exchange as well as specialists. Existence, approximate-efficiency, local-stability and asymptotic-neutrality results are derived.

    Keywords: money, exchange, dynamics

Term Structure of FX Rates // Oct. 4, 1985

Abstract

A two-currency partial equilibrium model, featuring speculators, hedgers, arbitrageurs and traders, is constructed to model the simultaneous determination of spot and a range of forward foreign exchange rates together with the term structure of Eurocurrency interest rates. The main result is that forward FX rates are NOT unbiased reflections of market expectations of future spot rates, even after adjustment for a risk premium. Rather, due to interest arbitrage, the forward rates largely reflect yield curve differences.

On the Efficiency of Experimental Double Auction Markets // Oct. 27, 1984

Abstract

I propose a theoretical explanation for the surprisingly efficient outcomes typically observed in laboratory experiments with only a handful of buyers and sellers operating in a continuous double auction market. Using the solution concept "no congestion equilibrium" -- essentially what later would be called renegotiation-proof equilibrium -- I show that Bertrand-like competition on both sides of the market forces transaction prices into a close neighborhood of competitive equilibrium.

The article was published in the March 1984 issue of the American Economic Review.

    Keywords: continuous double auction, price formation

Effective Scoring Rules for Probabilistic Forecasts // Jan. 1, 1983

Abstract

This paper studies the use of a scoring rule for the elicitation of forecasts in the form of probability distributions and for the subsequent evaluation of such forecasts. Given a metric (distance function) on a space of probability distributions, a scoring rule is said to be effective if the forecaster's expected score is a strictly decreasing function of the distance between the elicited and true distributions. Two simple, well-known rules (the spherical and the quadratic) are shown to be effective with respect to suitable metrics. Examples and a practical application (in Foreign Exchange rate forecasting) are also provided.
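
For concreteness, the two rules named above are usually written as follows for a forecast $p=(p_1,\dots,p_n)$ over $n$ mutually exclusive outcomes when outcome $i$ occurs; these are the textbook discrete forms, which may differ from the paper's notation by an affine rescaling.

```latex
% Quadratic (Brier-type) and spherical scoring rules for a probability
% forecast p over n outcomes, evaluated when outcome i is realized.
% (Textbook forms; the paper may state them for densities or rescale them.)
\[
  S_{\mathrm{quad}}(p, i) \;=\; 2p_i - \sum_{j=1}^{n} p_j^{2},
  \qquad
  S_{\mathrm{sph}}(p, i) \;=\; \frac{p_i}{\sqrt{\sum_{j=1}^{n} p_j^{2}}}\, .
\]
```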

    Keywords: Forecasting, Theoretical

Short-run Fluctuations in Foreign Exchange Rates: Evidence from the Data 1973-1979. // Jan. 1, 1982

Abstract

(published in J. Int. Econ). The paper examines statistical properties of daily changes in exchange rates in major currencies. It finds evidence that leptokurtosis (fat tails) is due mainly to changes in the parameters (e.g., variance) in the underlying data generating process.

Money-Mediated Disequilibrium Processes // Feb. 2, 1979

Abstract

Dan's dissertation. It studies cone fields of price adjustment and allocation adjustment in continuous time in a pure exchange economy and demonstrates convergence to competitive equilibrium under rather mild conditions.